The PRISM Framework

You cannot responsibly commit
to an outcome you haven't
measured.

PRISM is our proprietary framework for evaluating the true complexity of every implementation before we scope, price, or commit to anything. Every engagement begins with it.

Why Complexity Measurement Exists

"Pre-built" doesn't mean
"ready to run in your environment."

The Moveworks AI Agent Marketplace has hundreds of pre-built agents. Every agent in that marketplace requires real technical implementation work before it produces value — connector authentication, system permissions, schema mapping, exception handling, UAT.

And that complexity varies enormously between agents. A Workday PTO balance lookup has a fundamentally different implementation profile than a Salesforce account update with conditional routing and bidirectional sync. Treating them as equivalent — pricing them equally, scoping them identically — is how programs run over budget, over timeline, and under expectation.

Most partners ignore this. They quote by workflow count. Ten workflows, flat rate. That model assumes all workflows are equivalent. The assumption breaks — and when it breaks, it breaks on your timeline and your budget.

We built PRISM because the most reliable predictor of implementation failure isn't bad technology or bad intent. It's unexamined complexity.

Process Readiness and Integration Scoring Matrix

Five dimensions. One score. Complete transparency.

Every workflow in every engagement is PRISM-scored before we price it. Five dimensions, each one reflecting a specific category of implementation risk. The score drives the complexity tier. The tier drives the price.

PRISM: Process · Readiness · Integration · Scoring · Matrix
Each dimension scored independently · Applied across every workflow · Before any commitment
D1 · Agentic Level
How much reasoning does the agent need to perform?
An L1 agent executes a deterministic task — pull a field, update a record. An L2 agent makes decisions within defined parameters. An L3 agent handles ambiguity, interprets unstructured inputs, and exercises judgment. These are not equivalent. An L3 agent can cost 6× more to implement correctly than an L1 agent with the same apparent scope. Agentic level is the first and most impactful dimension because it determines the entire build architecture.
Low complexity (L1)
Password reset · PTO balance lookup · Pay stub retrieval. Deterministic. Defined inputs and outputs. No reasoning required.
High complexity (L3)
Expense policy interpretation · Onboarding orchestration with conditional routing. Requires judgment across ambiguous inputs.
D2 · Integration Surface
What systems does it touch — and how hard are those connections?
A certified native connector is not the same as a legacy ERP integration or a custom on-premise agent installation. We score integration quality, not just integration count. A single poorly documented system can carry more implementation risk than five well-connected ones. Each system in scope requires connector authentication, permission grants, and API configuration specific to your tenant — not a generic installation.
Low complexity
Single certified connector (Workday, Okta). Native integration, documented API, standard permission model.
High complexity
Legacy on-premise ERP with custom middleware. Limited documentation. Non-standard authentication model.
D3 · Data Entity Complexity
What data moves — and how much does it need to be shaped between systems?
A read-only field lookup carries fundamentally different risk than bidirectional sync across systems with divergent schemas and conflict resolution requirements. Transformation depth — not data volume — is the real cost driver. Custom fields, renamed fields, and deactivated standard fields in enterprise deployments (especially Workday in financial services and healthcare) frequently produce schema divergence that the standard agent template doesn't account for.
Low complexity
Read-only field retrieval. Standard schema. No transformation required between source and destination.
High complexity
Bidirectional sync with schema divergence. Custom fields. Conflict resolution logic required.
D4 · Exception Surface
How many ways can this workflow fail — and what needs to happen when it does?
The happy path is roughly 20% of the implementation work. The exception logic — timeouts, missing records, auth failures, partial writes, escalation triggers, multi-party approval chains — is the other 80%. We map every failure mode identified in scoring before we price it. The exception surface is the most common source of mid-program scope surprises with partners who don't measure it upfront.
Low exception surface
Single escalation path. Employee or agent. Clean failure modes with standard handling.
High exception surface
Multi-party approvals. Regulatory conditions. Timeout logic with partial-write recovery. Conditional escalation trees.
D5 · Process Readiness
Is the process documented, stable, and owned?
An undocumented process requires knowledge elicitation before build can begin — that cost belongs in the scope, not hidden in change orders. An unstable process requires Readiness work before Activation is appropriate. Process Readiness is applied as a multiplier to the sum of the other four dimensions — the only dimension that can dramatically change the total score of an otherwise simple workflow. A well-understood, documented, stable process reduces total build cost significantly.
High readiness (multiplier: 1.0×)
Fully documented. Stable. Single clear owner. Consistent execution across the organization.
Low readiness (multiplier: 2.0×)
Undocumented. Variable execution. No single owner. Requires knowledge elicitation before build can begin.
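Taken together, the five dimensions reduce to simple arithmetic: the sum of D1 through D4, multiplied by the D5 readiness factor (1.0× for a fully documented, stable process, up to 2.0× for an undocumented one). The sketch below is illustrative only — the per-dimension point values are hypothetical assumptions, not published PRISM scales:

```python
def prism_score(d1, d2, d3, d4, readiness_multiplier):
    """Combine the four additive dimensions, then apply the
    Process Readiness multiplier (1.0 for high readiness,
    up to 2.0 for low readiness).

    Dimension point values here are illustrative assumptions,
    not the official PRISM scales.
    """
    if not 1.0 <= readiness_multiplier <= 2.0:
        raise ValueError("readiness multiplier ranges from 1.0 to 2.0")
    return (d1 + d2 + d3 + d4) * readiness_multiplier

# A clean L1 lookup on a certified connector, vs. an undocumented
# bidirectional sync on a legacy system (values illustrative only):
print(prism_score(1, 1, 1, 1, 1.0))  # 4.0  -> Standard tier
print(prism_score(3, 3, 3, 2, 2.0))  # 22.0 -> Complex tier
```

Note how the multiplier works: the second workflow's additive score of 11 would land in the Elevated tier on its own, but low process readiness doubles it into the Complex tier — which is why D5 can dramatically change the total score of an otherwise simple workflow.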
Complexity Tiers

Every workflow gets a tier.
Every tier has a path.

Standard · Score 1–6 · WCU range 1–6
Low complexity, predictable build. Clean integration, stable process, manageable exception surface. Suitable for milestone-based delivery with a short timeline. The most common tier for first-phase deployments.
Elevated · Score 7–13 · WCU range 7–13
Moderate complexity. Integration quality or exception depth adds meaningful risk. Confirm the exception surface and integration specs before the SOW. Deliverable still achievable in standard timeline with additional validation.
Complex · Score 14–22 · WCU range 14–22
High build risk. Integration quality, exception surface, or process readiness requires validation before outcomes-based pricing can be committed. Often benefits from a focused discovery sprint before full scoping.
Strategic · Score 23+ · WCU range 23+
Scope risk is significant. May involve unstable processes, legacy systems, or undefined exceptions. Requires a Readiness phase before Activation is appropriate. Scoped as a custom engagement.

WCU — Workflow Complexity Unit. The unit of account in every PRISM score. Each engagement is priced per WCU based on the tier. You see the score, you see the WCU count, you see the math — before you see the contract. Standard Track engagements (10 workflows) are priced as a fixed-fee bundle based on the aggregate PRISM scorecard produced in the Readiness Assessment. Custom Track engagements are priced per WCU.
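The tier boundaries above can be read as a simple lookup from score to tier. A minimal sketch using the published ranges (the function name is hypothetical; fractional scores from the readiness multiplier are rounded down to the nearest tier boundary):

```python
def complexity_tier(score):
    """Map a PRISM score to its complexity tier, using the
    ranges from the tier table: Standard 1-6, Elevated 7-13,
    Complex 14-22, Strategic 23+."""
    if score >= 23:
        return "Strategic"
    if score >= 14:
        return "Complex"
    if score >= 7:
        return "Elevated"
    if score >= 1:
        return "Standard"
    raise ValueError("PRISM scores start at 1")

print(complexity_tier(4))   # Standard
print(complexity_tier(22))  # Complex
```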

PRISM in Practice

You see the score.
Before you see the price.

There are no surprises after the contract is signed. Not because we're optimistic. Because the complexity was measured.

PRISM is run live during the Readiness Assessment, with you in the room. We score every workflow in your inventory together — not on your behalf, not in a spreadsheet we send you after the fact. The scoring conversation is part of the value: it surfaces disagreements early, validates assumptions, and creates shared ownership of the scope.

The output is a scored inventory of every workflow in scope — complexity tier, pricing rationale, and routing recommendation for each one. You receive it before we send a SOW. If scope changes, the score changes, and so does the price. The math is always visible.

PRISM doesn't stop at Activation. Every new agent built under Managed AI is also PRISM-scored before it enters the build queue. The framework governs the entire lifecycle of your program.

1. Readiness Assessment — Issue Universe Mapping
All candidate workflows identified, with volume data and system landscape documented.
2. PRISM Scoring — Live with the client
Every candidate scored across D1–D5 in a working session. Disagreements surface here, not in the SOW review.
3. Scorecard delivered — before the SOW
Full scored inventory: tier, WCU count, integration requirements, and pricing rationale for every workflow.
4. Scope approved — then contract
You approve the scope before you sign anything. The scorecard is the engagement brief. Activation begins immediately.
5. Managed AI — every new workflow PRISM-scored first
No agent enters the Managed AI build queue without a PRISM score. The framework governs the full program lifecycle.

See PRISM applied to
your workflows.

AIRO runs your organization through a PRISM-informed readiness assessment in 4 minutes. See how your environment scores before you commit to anything.

Start the AIRO Assessment →

Free · 4 minutes · No email required

Or see the full journey →