· 45 min read · Playbook

Algorithmation Playbook: Complete Guide for AI Integrators

A step-by-step technical guide to converting business processes into executable UAPF packages. From process selection to deployment—everything an AI integrator needs to implement Process as Code.

0. Modes & Self-Assessment

The chapters that follow describe how to convert a process into a UAPF package. They do not describe how to decide whether to algorithmate, what mode you are in, or who should do the work. Those questions matter more than the BPMN syntax. Get them wrong and a technically correct package solves the wrong problem.

This section answers four things:

  1. The four starting states an algorithmation engagement can be in.
  2. Who can legitimately perform the work in each state.
  3. How UAPF functions as a safeguard for AI execution — specifically, what makes it safe when the rules themselves were authored by AI.
  4. A six-question self-assessment that produces a concrete recommendation: in-house, outsourced, or hybrid.

0.1 The four starting states

Every algorithmation engagement begins in exactly one of four states. The state determines the methodology, the tools, the team, and the cost. Misidentifying the state is the most common cause of failed engagements.

State 1 — Greenfield

No process exists. The organisation is designing how a new operation should work, often in response to new legislation, a new mandate, or a strategic pivot.

Indicators: "We need to start doing X." "This regulation comes into force in October." "We've never handled this kind of case before."

Methodology: The algorithmation is the process design. Policy, legal, and operational stakeholders co-design the flow. AI can scaffold templates from comparable processes in other jurisdictions or sectors, but the substantive choices are political and require named human authority.

Who can do this: A cross-functional team inside the institution, with facilitation. AI is a research and drafting aid; it cannot make the policy choices.

State 2 — Convert and improve

A documented process exists in Word, Visio, an SOP binder, a spreadsheet, or a regulation. The structure is implicit in the document and the work is to lift it into explicit, executable form.

Indicators: "We have an SOP for this." "There's a Word document somewhere." "The methodology is in a 40-page PDF the Ministry published in 2019."

Methodology: Extract the implicit structure (steps, decisions, roles, systems), encode as BPMN+DMN, package as UAPF, validate against the source document, then iterate on improvements identified during conversion.

Who can do this: An internal team of one analyst plus one domain owner, with Claude or another LLM as a co-pilot. Three to six weeks part-time per medium-complexity process. This is the case where this Playbook from Section 3 onward is directly applicable.

State 3 — AI-generated from description

A high-level prose description exists ("we receive applications, classify them, route them, respond within 30 days") and AI produces the first-draft UAPF package from the description.

Indicators: "I can describe what should happen, but I haven't written it down." "Generate me a process for handling X."

Methodology: Prompt the AI authoring layer to produce a candidate UAPF package. Domain experts review the package against reality, mark each rule as confirmed / corrected / removed, and iterate until the package is signed by an authorised reviewer.

Critical: No AI-generated package is ever promoted to executable without a human approval signature. This is the boundary that makes UAPF a safeguard. See section 0.3 for the authoring/execution separation principle.

Who can do this: The AI does the drafting; the domain expert does the validation. The expert does not need to write BPMN — they need to read it. The approval signature is the substantive work and must be done by someone with the institutional authority to be wrong.

State 4 — Legacy with no description

The process happens every day. No one has ever written it down end-to-end. The institutional knowledge is held by three to five people who learned it from their predecessors. One of them is retiring next year.

Indicators: "Ask Inese, she's done it for twenty years." "We've always done it this way." "There used to be a memo about it from 2007 but no one can find it."

Methodology: Structured discovery — interviews with practitioners, observation of in-flight cases, archaeology of past case files, reconciliation between what people say the process is and what the case files show. Then State-2 methodology applied to the discovered process.

Who can do this: This is the most labour-intensive state and the one where external facilitation usually pays for itself. AI can accelerate capture (transcribing interviews, comparing variants, drafting candidate flows from case-file evidence), but the diplomatic work of getting practitioners to surface and reconcile their tacit knowledge is human work, and the convening authority to do it across institutions is rarely available internally.

0.2 Labour model by state

The pattern: AI can do more of the work the more documented the starting state. AI cannot replace the human signature on the released package in any state.

| State | AI alone | AI + low-knowledge operators | AI + domain expert | Consultant-led |
|---|---|---|---|---|
| 1 — Greenfield | Not appropriate | Not appropriate | Possible with strong governance | Often the right choice |
| 2 — Convert and improve | Not appropriate | Viable for low-stakes processes | Recommended | Optional, accelerates |
| 3 — AI-generated from description | Not appropriate (no human signature) | Viable for drafting; expert must approve | Recommended | Optional |
| 4 — Legacy discovery | Not appropriate | Not appropriate | Possible if cross-institutional scope is small | Usually required |

0.3 The safeguard architecture: how UAPF makes AI execution safe

The most common misunderstanding about UAPF as an AI safeguard is this: people assume UAPF is safe because the rules are correct. This is wrong, and dangerous, because rules — even rules written by experts — are frequently wrong.

UAPF is safe because the rules are frozen, versioned, signed, reviewable, and testable before execution. The safety lives in the lifecycle, not in the rule content. In the same way, compiled software is not safe because the source code was perfect; it is safe because the source was reviewed, the compiler was deterministic, and the binary is what gets deployed.

The four operational rules
  • Authoring is unconstrained. An AI agent, a junior analyst, an external consultant, or a domain expert may all author UAPF packages at any level of sophistication, including fully autonomous AI drafting. There is no privileged author. There is no requirement that the author be human.
  • Execution is constrained. A UAPF package can only be executed if it is in released status, has a version hash, and carries a signature from an authorised reviewer. The executing AI sees the released package, has no freedom to invent rules outside it, and logs every decision against the version hash.
  • The signature is the safeguard. Between authoring and execution sits exactly one act: a named human reviews a specific version of the package and signs it as deployable. That signature is what differentiates algorithmation from "ask the LLM." Without it, you have an unsupervised chatbot with extra steps.
  • The reviewer is not the same as the author. Even when the author is a human consultant, the reviewer must be the process owner inside the institution — the person with the institutional authority to be wrong about how this process should run. The reviewer does not need to write XML. The reviewer needs to read BPMN diagrams and DMN tables well enough to spot rules that contradict reality.

This architecture answers the question "if AI writes the rules, how can UAPF be a safeguard for AI?" The answer: AI did not deploy the rules. A named human did. The package is the record of that decision.
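
The lifecycle described above can be sketched as a pre-execution check. This is a minimal sketch, assuming illustrative field names (`status`, `version_hash`, `signature`, `author`), not the actual UAPF manifest schema:

```python
import hashlib
import json

def verify_before_execution(package: dict, authorised_reviewers: set[str]) -> None:
    """Refuse to execute unless the package passed the release lifecycle.
    Field names are illustrative, not the real UAPF schema."""
    if package.get("status") != "released":
        raise PermissionError("package is not in released status")
    # The hash binds the signature to one exact version of the rules.
    body = json.dumps(package["models"], sort_keys=True).encode()
    if hashlib.sha256(body).hexdigest() != package.get("version_hash"):
        raise PermissionError("version hash does not match package contents")
    sig = package.get("signature", {})
    if sig.get("signed_by") not in authorised_reviewers:
        raise PermissionError("no signature from an authorised reviewer")
    if sig.get("signed_by") == package.get("author"):
        raise PermissionError("reviewer must not be the author")

# Example: an AI-authored package that a named human has signed
models = {"main": "<bpmn:definitions/>"}
pkg = {
    "author": "claude-draft-agent",
    "status": "released",
    "models": models,
    "version_hash": hashlib.sha256(
        json.dumps(models, sort_keys=True).encode()).hexdigest(),
    "signature": {"signed_by": "process.owner@example.gov"},
}
verify_before_execution(pkg, {"process.owner@example.gov"})  # passes silently
```

Note that the author is an AI agent and the check does not care; only the reviewer's identity and the frozen hash matter, which is the point of the four rules.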

0.4 Self-assessment: can we do this in-house, or do we need help?

Six questions. Score each 1-5.

| # | Question | Score 1 | Score 5 |
|---|---|---|---|
| Q1 | What is the starting state of the process? | State 4 (legacy, undocumented) | State 2 (well-documented SOP) |
| Q2 | Does the process cross institutional boundaries? | Yes, many institutions | No, fully internal |
| Q3 | Do we have an internal analyst who can read BPMN diagrams after a 4-hour training? | No | Yes, and a backup |
| Q4 | Do we have a named process owner with the authority to sign rules as deployable? | No | Yes, identified and engaged |
| Q5 | If the algorithmated process produces a wrong output, what is the worst-case harm? | Constitutional rights violation | Internal admin friction |
| Q6 | Is the deliverable expected to be defensible to a regulator, court, or oversight body? | Yes, high scrutiny | No, internal use only |

Scoring:
  • 25-30 points: Strong fit for in-house algorithmation. Use Algomation Studio plus an LLM co-pilot. Engage a consultant only for one-time methodology training (1-2 days).
  • 18-24 points: Hybrid approach. Internal team owns delivery; external consultant provides methodology coaching, reviews critical packages, and may lead discovery for the highest-risk sub-processes.
  • 12-17 points: Consultant-led with strong internal participation. Specify UAPF/ProcessGit as the contractual deliverable format so the work lands as a machine-executable, version-controlled asset rather than a static report.
  • Below 12: Reconsider scope. Either narrow the process (split into in-scope and out-of-scope segments) or invest first in foundational documentation before attempting algorithmation.
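
The scoring bands above are mechanical enough to encode. A minimal sketch (question keys and band labels are illustrative):

```python
def recommend(scores: dict[str, int]) -> str:
    """Map the six self-assessment scores (Q1-Q6, each 1-5) to a
    sourcing recommendation, using the bands from section 0.4."""
    assert len(scores) == 6 and all(1 <= s <= 5 for s in scores.values())
    total = sum(scores.values())
    if total >= 25:
        return "in-house"
    if total >= 18:
        return "hybrid"
    if total >= 12:
        return "consultant-led"
    return "reconsider scope"

# Candidate A from section 0.5: Q1=4, Q2=5, Q3=3, Q4=5, Q5=3, Q6=3 -> 23 points
print(recommend({"Q1": 4, "Q2": 5, "Q3": 3, "Q4": 5, "Q5": 3, "Q6": 3}))  # hybrid
```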

0.5 Worked example: a public oversight body, 2026

Consider an oversight institution that has just reported a record volume of citizen complaints (approximately 2,500/year, up 28% year-on-year), with the largest category being violations of the good-governance principle. The institution is also facing an unresolved structural problem with cross-municipal coordination of out-of-family child care, where children get bounced between municipalities that have no formal obligation to accept each other's cases. Two distinct algorithmation candidates are visible in that situation. Each scores differently and requires a different approach.

Candidate A: internal complaint-handling process

The institution receives complaints, classifies them by topic, routes them to specialists, and produces either an explanation (typical for most cases) or a recommendation to a public body (typical for the remainder). An internal SOP almost certainly exists.

Self-assessment: Q1=4 (State 2, SOP likely exists), Q2=5 (fully internal), Q3=3 (presumed available), Q4=5 (department heads can sign), Q5=3 (moderate harm; wrong routing delays response), Q6=3 (defensible to parliamentary oversight but not court). Total: 23 points → hybrid, leaning in-house.

Recommended approach: Two internal analysts plus Algomation Studio plus an LLM co-pilot, with a one-day methodology coaching session from an external partner. Three to six weeks part-time. Output: a versioned, auditable UAPF model of the complaint-handling process. Side benefit: the audit trail that the oversight body will eventually be asked to produce is now generated automatically.

Candidate B: cross-institutional handoff between municipalities

The failure mode: a child needs out-of-family care, the home municipality has no facility, other municipalities have no obligation to accept, and the child has no declared address while in limbo.

Self-assessment: Q1=1 (State 4, no end-to-end documented flow across all municipalities), Q2=1 (highly cross-institutional), Q3=2 (institution has analysts but not convening authority over municipalities), Q4=1 (no single owner exists today), Q5=1 (constitutional rights violation — children on the street), Q6=1 (defensible to court and government). Total: 7 points → reconsider scope.

Recommended approach: Narrow the scope before procurement. Split into (i) normative gap analysis — what would a correct cross-institutional protocol look like — which is consultant-led legal work; (ii) practice mapping in a representative sample of municipalities, which is consultant-led discovery; and (iii) future-state UAPF design, which is consultant-led but owned by the institution as the long-term steward. Specify UAPF/ProcessGit as the procurement deliverable so the work lands as an asset the institution can audit, version, and improve.

The general principle: the question is never "in-house or external?" The question is "which sub-processes belong in-house and which belong with whom externally, and what contractual form makes the deliverable durable?" UAPF is the answer to the second half — it ensures that even consultant-led work lands as an asset the institution can own, audit, version, and improve, rather than a PDF that ages in a drawer.

0.6 Reading guide for the rest of this Playbook

  • State 1 readers: Start at Section 4 (Process Discovery) but treat it as design rather than discovery. The decision-table work in Section 6 will feel premature; circle back after policy is settled.
  • State 2 readers: Read sequentially from Section 3. This is the case the Playbook is best calibrated for.
  • State 3 readers: Skim Sections 5-8 to know what the AI is generating. Spend most of your time on Section 10 (Validation & Testing) and on the human-signature procedure in Section 11 (Deployment).
  • State 4 readers: Use Section 4 as a starting frame but supplement with discovery techniques (interview protocols, document archaeology, variant reconciliation). External facilitation is usually warranted.

1. What is Algorithmation?

Algorithmation is the discipline of converting implicit business knowledge—the processes, decisions, and rules that exist in people's heads, documents, and tribal knowledge—into explicit, machine-readable, executable algorithms.

Think of it as "Process as Code": just as software developers version and test their code, algorithmation applies the same rigor to business processes. The result is a versioned, testable, auditable representation of how your organization actually operates.

Why Algorithmation Matters for AI

AI agents like Claude are powerful—but they hallucinate when rules are ambiguous. When you ask an AI to "process a loan application," it doesn't inherently know your organization's:

  • Credit score thresholds
  • Debt-to-income ratio limits
  • Exception handling procedures
  • Escalation paths
  • Compliance requirements

Algorithmation solves this by encoding these rules explicitly. The AI agent then executes the algorithm rather than inventing the logic.

Important scope note: Algorithmation does not make AI infallible. It constrains execution to explicit, versioned logic and removes ambiguity in algorithmic decisions. It does not eliminate errors caused by missing data, faulty integrations, incorrect rules, or human override outside the defined process boundary.

In practice, algorithmation shifts AI from reasoning about what to do to executing what is already defined. This is the same distinction that exists between free-form scripting and compiled software.

The Three Standards

Algorithmation builds on three OMG (Object Management Group) standards:

| Standard | Purpose | Use For |
|---|---|---|
| BPMN (Business Process Model and Notation) | Defines workflows—sequences of activities, decisions, and handoffs | The "flow" of work from start to end |
| DMN (Decision Model and Notation) | Defines decision logic as explicit tables | Business rules, eligibility checks, pricing logic |
| CMMN (Case Management Model and Notation) | Defines non-linear knowledge work | Investigations, appeals, complex reviews |

What is UAPF?

UAPF (Unified Algorithmic Process Format) is the packaging standard that bundles BPMN, DMN, and CMMN models together with metadata, documentation, and resource mappings into a single, portable, versionable artifact.

A .uapf file is simply a ZIP archive containing:

my-process.uapf
├── manifest.json          # Package metadata and entry points
├── models/
│   ├── bpmn/             # Workflow definitions
│   ├── dmn/              # Decision tables
│   └── cmmn/             # Case models (optional)
├── resources/
│   └── resource-mapping.yaml
├── tests/
│   └── scenarios.json
└── docs/
    └── README.md
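
Because a .uapf file is a plain ZIP, standard tooling can open and sanity-check it. The sketch below builds a toy package in memory and inspects it; the manifest fields shown are assumptions for illustration, not the actual UAPF schema:

```python
import io
import json
import zipfile

REQUIRED = ["manifest.json", "resources/resource-mapping.yaml"]

def inspect_uapf(data: bytes) -> dict:
    """Open a .uapf archive (a plain ZIP) and return its manifest,
    checking for the entries a package MUST contain."""
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        names = zf.namelist()
        for entry in REQUIRED:
            if entry not in names:
                raise ValueError(f"missing required entry: {entry}")
        if not any(n.startswith("models/bpmn/") and n.endswith(".bpmn")
                   for n in names):
            raise ValueError("no executable BPMN process found")
        return json.loads(zf.read("manifest.json"))

# Build a toy package in memory (manifest fields are illustrative)
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("manifest.json", json.dumps({"name": "demo", "version": "1.0.0"}))
    zf.writestr("models/bpmn/main-process.bpmn", "<bpmn:definitions/>")
    zf.writestr("resources/resource-mapping.yaml", "systems: []\n")

manifest = inspect_uapf(buf.getvalue())
print(manifest["version"])  # 1.0.0
```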

UAPF Package Requirements

| Level | Requirement |
|---|---|
| MUST | Contain a manifest.json, at least one executable BPMN process, a resource mapping file, and version metadata. |
| SHOULD | Externalize decision logic into DMN tables, include test scenarios, and document rationale for non-obvious rules. |
| MAY | Include CMMN models, additional documentation, or example datasets when the process involves non-linear case handling. |

2. Prerequisites & Tools

Required Tools

| Tool | Purpose | Link |
|---|---|---|
| Algomation Studio | Visual editor for BPMN/DMN models, UAPF packaging | studio.algomation.io |
| bpmn.io | Open-source alternative BPMN/DMN editor | bpmn.io |
| ProcessGit | Git-based repository for UAPF packages | processgit.org |
| UAPF Engine | Runtime for executing UAPF packages | GitHub |
| UAPF Validator | Validates UAPF packages against schemas | Online Validator |
| Node.js 20+ | Runtime for UAPF Engine | nodejs.org |
| Git | Version control for process packages | git-scm.com |

Recommended Knowledge

  • Basic understanding of business processes and workflows
  • Familiarity with XML and JSON
  • Git basics (commit, push, branch)
  • Basic understanding of REST APIs
Tip: You don't need to be a BPMN expert. This guide covers the essential elements you'll actually use—about 20% of the full specification handles 80% of real-world processes.

3. Selecting the Right Process

Not every process is a good candidate for algorithmation. Use this scoring framework to identify high-impact opportunities.

Process Selection Criteria

| Criterion | Score 1-5 | What to Look For |
|---|---|---|
| Volume | 1 (rare) → 5 (daily) | How often is this process executed? |
| Decision Complexity | 1 (trivial) → 5 (many rules) | How many decision points and business rules? |
| Consistency Need | 1 (flexible) → 5 (must be identical) | How important is consistent execution? |
| Audit Requirements | 1 (none) → 5 (regulated) | Are there compliance or audit requirements? |
| Documentation Quality | 1 (none) → 5 (detailed SOPs) | How well is the current process documented? |
| AI Suitability | 1 (pure human judgment) → 5 (rule-based) | Can decisions be expressed as explicit rules? |

Scoring guide:

  • 25-30 points: Excellent candidate—start here
  • 18-24 points: Good candidate—proceed with some discovery work
  • 12-17 points: Marginal—may require significant documentation effort
  • Below 12: Poor fit—consider other processes first
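
As with the section 0.4 self-assessment, this scoring guide is easy to encode so candidates can be ranked consistently. A sketch (criterion keys and band labels are illustrative):

```python
CRITERIA = ["volume", "decision_complexity", "consistency_need",
            "audit_requirements", "documentation_quality", "ai_suitability"]

def selection_band(scores: dict[str, int]) -> str:
    """Total the six selection criteria (each scored 1-5) into the bands above."""
    total = sum(scores[c] for c in CRITERIA)
    if total >= 25:
        return "excellent candidate"
    if total >= 18:
        return "good candidate"
    if total >= 12:
        return "marginal"
    return "poor fit"

# e.g. a daily, rule-heavy, regulated process with decent SOPs -> 26 points
print(selection_band({"volume": 5, "decision_complexity": 4,
                      "consistency_need": 5, "audit_requirements": 5,
                      "documentation_quality": 3, "ai_suitability": 4}))
# excellent candidate
```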

Industry Process Examples

High-impact processes by industry that are typically good algorithmation candidates:

| Industry | High-Impact Processes |
|---|---|
| Financial Services | Credit approval, KYC/AML verification, Claims adjudication, Loan servicing, Account opening |
| Healthcare | Prior authorization, Patient intake, Care pathway selection, Discharge planning, Referral management |
| Government | Permit issuance, Benefits determination, License renewal, Compliance inspection, Procurement approval |
| HR / Operations | Employee onboarding, Expense approval, Performance review, Offboarding, Leave request |
| Supply Chain | Order fulfillment, Vendor qualification, Returns processing, Inventory replenishment, Quality inspection |
| Insurance | Policy underwriting, Claims processing, Policy renewal, Fraud investigation, Quote generation |

Red Flags: When NOT to Algorithmate

  • Too simple: No decision logic needed (just a checklist)
  • Too chaotic: No repeatable pattern exists
  • Pure creativity: Requires genuine human judgment that can't be codified
  • No measurable outcome: You can't tell if it was done "correctly"
  • One-time process: Only happens once, not worth the investment

4. Process Discovery

Before you can algorithmate a process, you need to understand it completely. This section provides a structured approach to extracting process knowledge.

Step 4.1: Define the Process Boundary

Answer these questions first:

Process Boundary Definition
  • Trigger: What event starts this process? (e.g., "Application received")
  • End states: What outcomes indicate completion? (e.g., "Approved", "Rejected", "Withdrawn")
  • Scope IN: What activities are included?
  • Scope OUT: What activities are explicitly excluded?
  • Owner: Who is accountable for this process?

Step 4.2: Map the Happy Path

Start by documenting the ideal case—when everything goes smoothly. List each activity in sequence:

Happy Path Template
1. [Trigger Event] →
2. [Activity 1: verb + noun] →
3. [Activity 2: verb + noun] →
4. [Decision Point: question?] →
   - If yes: [Activity 3]
   - If no: [Activity 4]
5. [Activity 5] →
6. [End Event: outcome]

Step 4.3: Identify Decision Points

For each decision in the process, capture:

| Decision Name | Inputs Required | Rules/Criteria | Possible Outputs |
|---|---|---|---|
| e.g., "Eligibility Check" | Credit score, Income, DTI ratio | Score ≥ 680, DTI ≤ 43% | Eligible, Not Eligible, Manual Review |

Step 4.4: Document Exceptions

Real processes have exceptions. For each exception:

  • What triggers the exception?
  • How is it currently handled?
  • Who handles it?
  • What are the possible outcomes?

Step 4.5: List Systems and Integrations

Document every system the process touches:

| System | Purpose in Process | Integration Type |
|---|---|---|
| e.g., Credit Bureau API | Pull credit score | REST API |
| e.g., Core Banking | Create account | SOAP/Database |

Discovery Output Checklist

  • Process boundary definition (trigger, end states, scope)
  • Happy path sequence documented
  • All decision points identified with inputs/outputs/rules
  • Exception scenarios documented
  • All systems and integrations listed
  • Roles and responsibilities identified
  • Current pain points noted

Definition of Done: Process Discovery

  • Process boundary is explicitly defined and agreed with the process owner
  • At least one happy path is fully documented end-to-end
  • All decision points have named inputs and outputs
  • Exceptions are explicitly listed (not assumed)
  • All external systems and integrations are identified
  • Roles and responsibilities are documented

5. BPMN Workflow Design

BPMN (Business Process Model and Notation) is the standard for modeling workflows. This section covers the essential elements you'll actually use.

Essential BPMN Elements

| Element | Symbol | Purpose | Example |
|---|---|---|---|
| Start Event | ○ (thin circle) | Where the process begins | "Application Received" |
| End Event | ◉ (thick circle) | Where the process ends | "Application Approved" |
| User Task | ▭ with person icon | Work done by a human | "Review Documentation" |
| Service Task | ▭ with gear icon | Automated system action | "Pull Credit Score" |
| Business Rule Task | ▭ with table icon | Execute a decision table | "Evaluate Eligibility" |
| Exclusive Gateway | ◇ with X | One path based on condition | "Is applicant eligible?" |
| Parallel Gateway | ◇ with + | Multiple paths simultaneously | "Run all checks in parallel" |
| Sequence Flow | → (arrow) | Order of activities | Connects elements |

Common BPMN Patterns

Pattern 1: Simple Sequence

Start → Task A → Task B → Task C → End

Pattern 2: Decision Branch

Start → Check → Gateway(X) → [Yes: Path A] or [No: Path B] → End

Pattern 3: Parallel Execution

Start → Gateway(+) → [Task A || Task B || Task C] → Gateway(+) → End

Pattern 4: Loop / Rework

Review → Gateway → [Approve: End] or [Rework: Fix → back to Review]

Step-by-Step: Creating a BPMN Model

Step 5.1: Create the File Structure

models/bpmn/main-process.bpmn

<?xml version="1.0" encoding="UTF-8"?>
<bpmn:definitions 
    xmlns:bpmn="http://www.omg.org/spec/BPMN/20100524/MODEL"
    xmlns:bpmndi="http://www.omg.org/spec/BPMN/20100524/DI"
    xmlns:dc="http://www.omg.org/spec/DD/20100524/DC"
    xmlns:di="http://www.omg.org/spec/DD/20100524/DI"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    id="Definitions_1"
    targetNamespace="http://algomation.io/schema/bpmn">

    <bpmn:process id="Process_Main" name="Main Process" isExecutable="true">
        <!-- Process elements go here -->
    </bpmn:process>

    <bpmndi:BPMNDiagram id="BPMNDiagram_1">
        <bpmndi:BPMNPlane id="BPMNPlane_1" bpmnElement="Process_Main">
            <!-- Visual layout goes here -->
        </bpmndi:BPMNPlane>
    </bpmndi:BPMNDiagram>

</bpmn:definitions>

Step 5.2: Add Start Event

<bpmn:startEvent id="StartEvent_1" name="Application Received">
    <bpmn:outgoing>Flow_Start_To_Task1</bpmn:outgoing>
</bpmn:startEvent>

Step 5.3: Add Tasks

<!-- User Task: requires human action -->
<bpmn:userTask id="Task_ReviewDocs" name="Review Documentation">
    <bpmn:incoming>Flow_Start_To_Task1</bpmn:incoming>
    <bpmn:outgoing>Flow_Task1_To_Gateway</bpmn:outgoing>
</bpmn:userTask>

<!-- Service Task: automated system call -->
<bpmn:serviceTask id="Task_PullCredit" name="Pull Credit Score">
    <bpmn:incoming>Flow_Gateway_To_Credit</bpmn:incoming>
    <bpmn:outgoing>Flow_Credit_To_Rules</bpmn:outgoing>
</bpmn:serviceTask>

<!-- Business Rule Task: invokes DMN decision table -->
<bpmn:businessRuleTask id="Task_EvalEligibility" name="Evaluate Eligibility">
    <bpmn:incoming>Flow_Credit_To_Rules</bpmn:incoming>
    <bpmn:outgoing>Flow_Rules_To_Decision</bpmn:outgoing>
</bpmn:businessRuleTask>

Step 5.4: Add Gateways

<!-- Exclusive Gateway: XOR - one path only. The default attribute names
     the flow taken when no condition matches -->
<bpmn:exclusiveGateway id="Gateway_Eligible" name="Is Eligible?" default="Flow_Maybe_Review">
    <bpmn:incoming>Flow_Rules_To_Decision</bpmn:incoming>
    <bpmn:outgoing>Flow_Yes_Approve</bpmn:outgoing>
    <bpmn:outgoing>Flow_No_Reject</bpmn:outgoing>
    <bpmn:outgoing>Flow_Maybe_Review</bpmn:outgoing>
</bpmn:exclusiveGateway>

<!-- Parallel Gateway: AND - all paths simultaneously -->
<bpmn:parallelGateway id="Gateway_Split" name="Run All Checks">
    <bpmn:incoming>Flow_To_Parallel</bpmn:incoming>
    <bpmn:outgoing>Flow_Check1</bpmn:outgoing>
    <bpmn:outgoing>Flow_Check2</bpmn:outgoing>
    <bpmn:outgoing>Flow_Check3</bpmn:outgoing>
</bpmn:parallelGateway>

Step 5.5: Add Sequence Flows with Conditions

<!-- Unconditional flow -->
<bpmn:sequenceFlow id="Flow_Start_To_Task1" 
    sourceRef="StartEvent_1" 
    targetRef="Task_ReviewDocs" />

<!-- Conditional flow (for gateway outputs) -->
<bpmn:sequenceFlow id="Flow_Yes_Approve" name="Eligible"
    sourceRef="Gateway_Eligible" 
    targetRef="Task_Approve">
    <bpmn:conditionExpression xsi:type="bpmn:tFormalExpression">
        ${eligibilityResult == 'ELIGIBLE'}
    </bpmn:conditionExpression>
</bpmn:sequenceFlow>

<bpmn:sequenceFlow id="Flow_No_Reject" name="Not Eligible"
    sourceRef="Gateway_Eligible" 
    targetRef="Task_Reject">
    <bpmn:conditionExpression xsi:type="bpmn:tFormalExpression">
        ${eligibilityResult == 'NOT_ELIGIBLE'}
    </bpmn:conditionExpression>
</bpmn:sequenceFlow>

<!-- Default flow: taken when no other condition matches; declared via the
     gateway's default attribute, and carries no condition expression -->
<bpmn:sequenceFlow id="Flow_Maybe_Review" name="Manual Review"
    sourceRef="Gateway_Eligible" 
    targetRef="Task_ManualReview" />

Step 5.6: Add End Events

<bpmn:endEvent id="EndEvent_Approved" name="Application Approved">
    <bpmn:incoming>Flow_Approve_To_End</bpmn:incoming>
</bpmn:endEvent>

<bpmn:endEvent id="EndEvent_Rejected" name="Application Rejected">
    <bpmn:incoming>Flow_Reject_To_End</bpmn:incoming>
</bpmn:endEvent>

BPMN Quality Checklist

  • Every element has a unique id
  • Every element has a descriptive name
  • Every gateway has at least two outgoing flows
  • Every conditional flow has a condition expression
  • One flow from each exclusive gateway should be marked as the default (via the gateway's default attribute, with no condition)
  • Every path eventually reaches an end event
  • No orphan elements (everything is connected)
  • Task names follow "Verb + Noun" pattern
Tip: Use a visual editor like Algomation Studio or bpmn.io to design your process. These tools generate valid BPMN XML automatically. You can then inspect and modify the XML directly if needed.
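
Several checklist items are mechanically checkable. The sketch below uses Python's standard XML parser to flag missing names, duplicate ids, and orphan elements; it covers only a few of the checks and is not a substitute for full BPMN validation:

```python
import xml.etree.ElementTree as ET

BPMN_NS = "{http://www.omg.org/spec/BPMN/20100524/MODEL}"

def check_bpmn(xml_text: str) -> list[str]:
    """Run a few checklist items mechanically: unique ids, named
    elements, and no orphan (unconnected) flow nodes."""
    problems = []
    process = ET.fromstring(xml_text).find(f"{BPMN_NS}process")
    ids, connected = [], set()
    for el in process:
        tag = el.tag.removeprefix(BPMN_NS)
        if tag == "sequenceFlow":
            connected.add(el.get("sourceRef"))
            connected.add(el.get("targetRef"))
        else:
            ids.append(el.get("id"))
            if not el.get("name"):
                problems.append(f"{el.get('id')}: missing name")
    if len(ids) != len(set(ids)):
        problems.append("duplicate ids")
    for i in ids:
        if i not in connected:
            problems.append(f"{i}: orphan element (no sequence flow)")
    return problems

demo = """<bpmn:definitions xmlns:bpmn="http://www.omg.org/spec/BPMN/20100524/MODEL">
  <bpmn:process id="P1">
    <bpmn:startEvent id="Start" name="Application Received"/>
    <bpmn:userTask id="Task_Review" name="Review Documentation"/>
    <bpmn:userTask id="Task_Orphan" name="Unreachable Task"/>
    <bpmn:sequenceFlow id="F1" sourceRef="Start" targetRef="Task_Review"/>
  </bpmn:process>
</bpmn:definitions>"""

print(check_bpmn(demo))  # ['Task_Orphan: orphan element (no sequence flow)']
```

A check like this fits naturally into a pre-commit hook on a ProcessGit repository, so broken models never reach review.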

6. DMN Decision Tables

DMN (Decision Model and Notation) defines business rules as explicit, testable decision tables. When your BPMN process needs to make a decision based on multiple inputs and rules, you extract that logic into a DMN table.

When to Use DMN vs. Gateway Conditions

| Scenario | Use | Example |
|---|---|---|
| Simple binary decision (yes/no) | Gateway condition | ${amount > 1000} |
| Multiple inputs, multiple rules | DMN table | Credit score + DTI + Income → Eligibility |
| Rules that business users should manage | DMN table | Pricing tiers, discount rules |
| Complex rule with multiple outputs | DMN table | Risk grade + Interest rate + Max amount |

Anatomy of a Decision Table

A DMN decision table has:

  • Inputs (blue columns): Data used to make the decision
  • Outputs (green columns): Results of the decision
  • Rules (rows): Each row is a rule—if inputs match, outputs are returned
  • Hit Policy: What happens when multiple rules match

Example: Credit Eligibility Decision

| Credit Score | Debt-to-Income | Annual Income | Eligibility (output) |
|---|---|---|---|
| >= 720 | <= 35% | - | "ELIGIBLE" |
| [680..720) | <= 40% | >= 50000 | "ELIGIBLE" |
| [680..720) | <= 43% | - | "MANUAL_REVIEW" |
| < 680 | - | - | "NOT_ELIGIBLE" |
| - | > 43% | - | "NOT_ELIGIBLE" |

Note: "-" means "any value" (don't care). Debt-to-income is shown as a percentage here for readability; the DMN file below expresses it as a decimal (0.35 = 35%).
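
Replaying a decision table in code is a quick way to test it before encoding it as DMN. A sketch of the five rows above under a first-match (FIRST) hit policy, with DTI as a decimal:

```python
def eligibility(credit_score: float, dti: float, annual_income: float) -> str:
    """The five table rows above under a FIRST hit policy (top matching
    row wins). DTI is a decimal here, e.g. 0.35 = 35%."""
    rules = [
        (lambda s, d, i: s >= 720 and d <= 0.35, "ELIGIBLE"),
        (lambda s, d, i: 680 <= s < 720 and d <= 0.40 and i >= 50000, "ELIGIBLE"),
        (lambda s, d, i: 680 <= s < 720 and d <= 0.43, "MANUAL_REVIEW"),
        (lambda s, d, i: s < 680, "NOT_ELIGIBLE"),
        (lambda s, d, i: d > 0.43, "NOT_ELIGIBLE"),
    ]
    for condition, outcome in rules:
        if condition(credit_score, dti, annual_income):
            return outcome
    raise ValueError("no rule matched: the table has a gap")

print(eligibility(750, 0.30, 40000))  # ELIGIBLE
print(eligibility(700, 0.42, 60000))  # MANUAL_REVIEW
print(eligibility(650, 0.20, 90000))  # NOT_ELIGIBLE
```

Replaying the table this way also exposes a coverage gap: a score of 750 with a DTI of 40% matches no row (row 1 requires ≤ 35%, rows 2-3 require a score below 720, and row 5 requires > 43%). The quality checklist at the end of this section exists to catch exactly this.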

Hit Policies

| Policy | Symbol | Meaning | Use When |
|---|---|---|---|
| Unique | U | Exactly one rule must match | Most business rules (recommended default) |
| First | F | First matching rule wins | Priority-based decisions |
| Any | A | Any match (all return same output) | Validation checks |
| Collect | C | Collect all matching outputs | Risk factors to aggregate |
| Collect Sum | C+ | Sum all matching outputs | Calculating total scores |
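
Hit-policy semantics can be illustrated with plain predicates. A sketch, not engine code:

```python
def evaluate(rules, inputs, hit_policy):
    """Illustrative hit-policy semantics. Each rule is (predicate, output)."""
    matches = [out for pred, out in rules if pred(inputs)]
    if hit_policy == "UNIQUE":
        if len(matches) > 1:
            raise ValueError("UNIQUE violated: multiple rules matched")
        return matches[0] if matches else None
    if hit_policy == "FIRST":
        return matches[0] if matches else None
    if hit_policy == "COLLECT":
        return matches
    if hit_policy == "COLLECT_SUM":
        return sum(matches)
    raise ValueError(f"unknown hit policy: {hit_policy}")

# Risk factors that aggregate: two rules match, COLLECT_SUM adds the scores
risk_rules = [
    (lambda x: x["credit_score"] < 680, 30),
    (lambda x: x["dti"] > 0.40, 20),
    (lambda x: x["income"] < 30000, 25),
]
applicant = {"credit_score": 650, "dti": 0.45, "income": 55000}
print(evaluate(risk_rules, applicant, "COLLECT_SUM"))  # 50
```

The same rule set under UNIQUE would raise, because two rules match: the hit policy is part of the table's meaning, which is why the checklist insists it be set explicitly.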

Step-by-Step: Creating a DMN Model

Step 6.1: Create the DMN File Structure

models/dmn/eligibility-rules.dmn

<?xml version="1.0" encoding="UTF-8"?>
<definitions 
    xmlns="https://www.omg.org/spec/DMN/20191111/MODEL/"
    xmlns:dmndi="https://www.omg.org/spec/DMN/20191111/DMNDI/"
    xmlns:dc="http://www.omg.org/spec/DMN/20180521/DC/"
    id="Definitions_Eligibility"
    name="Eligibility Rules"
    namespace="http://algomation.io/dmn">

    <!-- Input definitions -->
    <!-- Decision definition -->

</definitions>

Step 6.2: Define Input Data

<inputData id="InputData_CreditScore" name="creditScore">
    <variable name="creditScore" typeRef="number" />
</inputData>

<inputData id="InputData_DTI" name="debtToIncomeRatio">
    <variable name="debtToIncomeRatio" typeRef="number" />
</inputData>

<inputData id="InputData_Income" name="annualIncome">
    <variable name="annualIncome" typeRef="number" />
</inputData>

Step 6.3: Define the Decision with Decision Table

<decision id="Decision_Eligibility" name="Determine Eligibility">
    <variable name="eligibilityResult" typeRef="string" />

    <informationRequirement id="IR_1">
        <requiredInput href="#InputData_CreditScore" />
    </informationRequirement>
    <informationRequirement id="IR_2">
        <requiredInput href="#InputData_DTI" />
    </informationRequirement>
    <informationRequirement id="IR_3">
        <requiredInput href="#InputData_Income" />
    </informationRequirement>

    <decisionTable id="DecisionTable_1" hitPolicy="FIRST">
        <!-- Input columns -->
        <input id="Input_1" label="Credit Score">
            <inputExpression typeRef="number">
                <text>creditScore</text>
            </inputExpression>
        </input>
        <input id="Input_2" label="Debt-to-Income Ratio">
            <inputExpression typeRef="number">
                <text>debtToIncomeRatio</text>
            </inputExpression>
        </input>
        <input id="Input_3" label="Annual Income">
            <inputExpression typeRef="number">
                <text>annualIncome</text>
            </inputExpression>
        </input>

        <!-- Output column -->
        <output id="Output_1" label="Eligibility" name="eligibilityResult" typeRef="string" />

        <!-- Rules -->
        <rule id="Rule_1">
            <inputEntry id="IE_1_1"><text>>= 720</text></inputEntry>
            <inputEntry id="IE_1_2"><text><= 0.35</text></inputEntry>
            <inputEntry id="IE_1_3"><text></text></inputEntry>
            <outputEntry id="OE_1"><text>"ELIGIBLE"</text></outputEntry>
        </rule>

        <rule id="Rule_2">
            <inputEntry id="IE_2_1"><text>[680..720)</text></inputEntry>
            <inputEntry id="IE_2_2"><text>&lt;= 0.40</text></inputEntry>
            <inputEntry id="IE_2_3"><text>>= 50000</text></inputEntry>
            <outputEntry id="OE_2"><text>"ELIGIBLE"</text></outputEntry>
        </rule>

        <rule id="Rule_3">
            <inputEntry id="IE_3_1"><text>>= 680</text></inputEntry>
            <inputEntry id="IE_3_2"><text>&lt;= 0.43</text></inputEntry>
            <inputEntry id="IE_3_3"><text></text></inputEntry>
            <outputEntry id="OE_3"><text>"MANUAL_REVIEW"</text></outputEntry>
        </rule>

        <rule id="Rule_4">
            <inputEntry id="IE_4_1"><text>&lt; 680</text></inputEntry>
            <inputEntry id="IE_4_2"><text></text></inputEntry>
            <inputEntry id="IE_4_3"><text></text></inputEntry>
            <outputEntry id="OE_4"><text>"NOT_ELIGIBLE"</text></outputEntry>
        </rule>

        <rule id="Rule_5">
            <inputEntry id="IE_5_1"><text></text></inputEntry>
            <inputEntry id="IE_5_2"><text>> 0.43</text></inputEntry>
            <inputEntry id="IE_5_3"><text></text></inputEntry>
            <outputEntry id="OE_5"><text>"NOT_ELIGIBLE"</text></outputEntry>
        </rule>
    </decisionTable>
</decision>

DMN Expression Syntax (FEEL)

DMN uses FEEL (Friendly Enough Expression Language) for conditions:

| Expression | Meaning | Example |
|------------|---------|---------|
| >= 720 | Greater than or equal to 720 | Credit score ≥ 720 |
| < 680 | Less than 680 | Credit score < 680 |
| [680..720) | Range: 680 ≤ x < 720 | Inclusive start, exclusive end |
| (680..720] | Range: 680 < x ≤ 720 | Exclusive start, inclusive end |
| "ELIGIBLE" | String literal | Output value |
| - or empty | Any value (don't care) | Matches everything |
| not("X") | Not equal to "X" | Exclude specific value |

Note: DMN models are XML, so the < character inside an expression must be escaped as &lt; (write &lt;= 0.35 for <= 0.35); > may appear literally.

DMN Quality Checklist

  • All input combinations are covered (no gaps)
  • No conflicting rules (or hit policy handles it)
  • Hit policy is explicitly set (UNIQUE, FIRST, ANY, or COLLECT; shown as U, F, A, C in table notation)
  • Input variable names match process data
  • Output values are actionable in the process
  • Each rule has been tested with sample data
Warning: If using hit policy "Unique" (U), ensure no two rules can match the same input combination. The engine will error if multiple rules match.
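
The coverage and conflict checks above can be prototyped before touching an engine: hand-translate each rule into a predicate and probe boundary combinations. A minimal Python sketch of that idea (the predicates paraphrase the eligibility rules; rule 3 is written to cover every score from 680 up so that high scores with a mid-range DTI still have an outcome):

```python
# Prototype of the "no gaps" check: each rule of the eligibility table
# becomes a predicate, then boundary combinations are probed.

rules = [
    ("ELIGIBLE",      lambda cs, dti, inc: cs >= 720 and dti <= 0.35),
    ("ELIGIBLE",      lambda cs, dti, inc: 680 <= cs < 720 and dti <= 0.40 and inc >= 50000),
    ("MANUAL_REVIEW", lambda cs, dti, inc: cs >= 680 and dti <= 0.43),
    ("NOT_ELIGIBLE",  lambda cs, dti, inc: cs < 680),
    ("NOT_ELIGIBLE",  lambda cs, dti, inc: dti > 0.43),
]

def decide(cs, dti, inc):
    """FIRST hit policy: return the output of the first matching rule."""
    for output, predicate in rules:
        if predicate(cs, dti, inc):
            return output
    return None  # no rule matched: a gap in the table

# Probe boundary values; any None result is an uncovered combination.
gaps = [(cs, dti, inc)
        for cs in (600, 679, 680, 719, 720, 800)
        for dti in (0.10, 0.35, 0.36, 0.40, 0.43, 0.50)
        for inc in (30000, 50000, 90000)
        if decide(cs, dti, inc) is None]
print(gaps)  # -> []  (every probed combination is covered)
```

Under hit policy FIRST, overlapping rules are legal and resolved by order; only gaps are fatal, which is why the probe checks for "no match" rather than "multiple matches".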

7. Resource Mapping

Resource mapping defines who or what executes each task in your process. This is essential for AI agent integration—it specifies which tasks the AI handles and which require human or system action.

Executor Types

| Type | Description | Examples |
|------|-------------|----------|
| human_role | Work done by a person | Underwriter, Supervisor, Clerk |
| system_api | External system integration | Credit Bureau API, Core Banking |
| ai_agent | AI assistant (via MCP or API) | Claude, GPT, internal AI |
| mcp_tool | Specific MCP tool invocation | Document analyzer, Calculator |
| engine | UAPF Engine itself (for rules) | DMN evaluation |

7.1 Canonical Resource Ontology (Recommended)

To ensure portability and governance, UAPF resource mappings SHOULD use a canonical executor ontology. This avoids ad-hoc role naming and makes packages transferable across organizations and runtimes. This is a logical ontology, not an identity system. Concrete identities are bound at deployment time.

| Canonical Type | Description | Examples |
|----------------|-------------|----------|
| human.role | Human responsibility | Clerk, Manager |
| system.api | External system | ERP, CRM |
| ai.agent | AI executor | LLM, classifier |
| engine.rule | Rule evaluation | DMN |
| engine.workflow | Process execution | BPMN |
| external.service | Non-owned service | Government registry |

Canonical naming rules:

  • IDs MUST be stable and semantic
  • IDs MUST NOT contain personal names
  • IDs SHOULD be namespaced

Examples:

role.finance.manager
agent.document.classifier
engine.dmn
system.erp.sap
Important: Resource ontology defines capability, not identity. Identity binding (users, agents, certificates) occurs outside the UAPF package.
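
These naming rules are easy to lint. A minimal Python sketch (the regex encodes "lowercase, dot-namespaced segments"; it is a structural check only and cannot detect personal names, which would need a directory lookup):

```python
# Lint canonical resource IDs against the naming rules above:
# at least two lowercase segments separated by dots.
import re

CANONICAL_ID = re.compile(r"^[a-z]+(\.[a-z0-9-]+)+$")

def lint_resource_id(resource_id: str) -> bool:
    """True if the ID is stable, lowercase, and namespaced."""
    return bool(CANONICAL_ID.fullmatch(resource_id))
```

For example, role.finance.manager and system.erp.sap pass, while an un-namespaced engine or a CamelCase personal name fails the structural check.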

Execution Modes

| Mode | Description | Use When |
|------|-------------|----------|
| autonomous | Fully automated, no human in loop | API calls, rule evaluation, data transformation |
| assisted | AI/system prepares, human decides | Document summaries, recommendations |
| manual | Human does all work | Complex judgment, customer calls |

Resource Mapping File

resources/resource-mapping.yaml

# Resource Mapping for Employee Onboarding Process
# Defines who/what executes each task
# Canonical IDs are late-bound to concrete identities at deployment time

targets:
  # Human roles
  - id: role.hr.coordinator
    type: human.role
    name: HR Coordinator
    description: Handles general HR tasks and coordination

  - id: role.it.administrator
    type: human.role
    name: IT Administrator
    description: Provisions accounts and equipment

  - id: role.hiring.manager
    type: human.role
    name: Hiring Manager
    description: Department manager who made the hire

  # System integrations
  - id: system.api.hris
    type: system.api
    name: HRIS System
    endpoint: https://hris.internal/api/v2
    auth: oauth2

  - id: system.api.active-directory
    type: system.api
    name: Active Directory
    endpoint: ldaps://ad.internal
    auth: service-account

  # AI agents
  - id: agent.onboarding.assistant
    type: ai.agent
    name: Onboarding Assistant
    protocol: mcp
    capabilities:
      - document-analysis
      - text-generation
      - summarization

  # Engine (for DMN evaluation)
  - id: engine.dmn
    type: engine.rule
    name: DMN Decision Engine

# Task bindings
bindings:
  # HR Coordinator tasks
  - source:
      type: bpmn.task
      ref: Task_SendWelcomeEmail
    targetId: role.hr.coordinator
    mode: manual

  - source:
      type: bpmn.task
      ref: Task_CollectDocuments
    targetId: role.hr.coordinator
    mode: manual

  # IT tasks (automated)
  - source:
      type: bpmn.task
      ref: Task_CreateADAccount
    targetId: system.api.active-directory
    mode: autonomous
    config:
      operation: createUser

  - source:
      type: bpmn.task
      ref: Task_OrderEquipment
    targetId: role.it.administrator
    mode: manual

  # AI-assisted tasks
  - source:
      type: bpmn.task
      ref: Task_GenerateOnboardingPlan
    targetId: agent.onboarding.assistant
    mode: assisted
    config:
      systemPrompt: |
        You are an HR assistant helping create personalized onboarding plans.
        Based on the employee's role and department, generate a 30-60-90 day plan.
      outputFormat: markdown

  - source:
      type: bpmn.task
      ref: Task_AnalyzeDocuments
    targetId: agent.onboarding.assistant
    mode: autonomous
    config:
      systemPrompt: |
        Extract key information from uploaded documents:
        - Full legal name
        - Address
        - Emergency contact
        - Tax withholding preferences
      outputFormat: json

  # Decision tasks (DMN)
  - source:
      type: bpmn.task
      ref: Task_DetermineEquipment
    targetId: engine.dmn
    mode: autonomous
    config:
      decisionId: Decision_Equipment

  - source:
      type: bpmn.task
      ref: Task_DetermineAccessLevel
    targetId: engine.dmn
    mode: autonomous
    config:
      decisionId: Decision_AccessLevel

  # HRIS integration
  - source:
      type: bpmn.task
      ref: Task_CreateEmployeeRecord
    targetId: system.api.hris
    mode: autonomous
    config:
      operation: employees.create

Resource Mapping Checklist

  • Every task in BPMN has a corresponding binding
  • Target IDs are unique and descriptive
  • Execution mode is appropriate for task type
  • API endpoints and auth methods are specified
  • AI agent prompts are clear and specific
  • DMN decision IDs match actual DMN files
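
The first checklist item can be automated with a small cross-check. A stdlib-only Python sketch (element names follow the BPMN namespace used in this guide; for brevity the YAML is scanned with a regex, where a real tool would use a YAML parser):

```python
# Cross-check: every BPMN task must appear as a binding ref in the
# resource-mapping YAML.
import re
import xml.etree.ElementTree as ET

BPMN_NS = "{http://www.omg.org/spec/BPMN/20100524/MODEL}"
TASK_TAGS = ("task", "userTask", "serviceTask", "businessRuleTask",
             "scriptTask", "sendTask", "receiveTask")

def bpmn_task_ids(bpmn_xml: str) -> set:
    """Collect ids of all task-type elements in a BPMN document."""
    root = ET.fromstring(bpmn_xml)
    return {el.get("id") for tag in TASK_TAGS for el in root.iter(BPMN_NS + tag)}

def binding_refs(mapping_yaml: str) -> set:
    """Pull every 'ref:' value out of the resource-mapping YAML."""
    return set(re.findall(r"^\s*ref:\s*(\S+)", mapping_yaml, re.MULTILINE))

def unbound_tasks(bpmn_xml: str, mapping_yaml: str) -> set:
    """Tasks present in the BPMN but missing from the bindings."""
    return bpmn_task_ids(bpmn_xml) - binding_refs(mapping_yaml)
```

Running this in CI against every package catches the classic drift where a task is renamed in the BPMN but not in the mapping.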

8. UAPF Package Assembly

Now you'll assemble all the components into a UAPF package—the deployable unit that contains your entire algorithmated process.

Package Structure

employee-onboarding.uapf/
├── manifest.json                    # Package metadata
├── models/
│   ├── bpmn/
│   │   └── onboarding-main.bpmn     # Main workflow
│   ├── dmn/
│   │   ├── equipment-allocation.dmn # Equipment rules
│   │   └── access-level.dmn         # Access rules
│   └── cmmn/                        # (empty for this process)
├── resources/
│   └── resource-mapping.yaml        # Task bindings
├── tests/
│   └── scenarios.json               # Test cases
└── docs/
    ├── README.md                    # Process documentation
    └── decision-rationale.md        # Why rules are what they are

Manifest.json

The manifest is the entry point that describes the package:

manifest.json

{
  "uapfVersion": "0.1.0",
  "id": "com.yourcompany.hr.employee-onboarding",
  "name": "Employee Onboarding",
  "version": "1.0.0",
  "description": "End-to-end employee onboarding from offer acceptance to Day 1 readiness",

  "models": {
    "bpmn": [
      "models/bpmn/onboarding-main.bpmn"
    ],
    "dmn": [
      "models/dmn/equipment-allocation.dmn",
      "models/dmn/access-level.dmn"
    ],
    "cmmn": []
  },

  "interfaces": {
    "processes": [
      {
        "id": "EmployeeOnboarding",
        "bpmnProcessId": "Process_Onboarding",
        "entryPoint": "StartEvent_OfferAccepted",
        "description": "Main onboarding workflow",
        "inputs": [
          {
            "name": "employeeData",
            "type": "object",
            "required": true,
            "schema": {
              "type": "object",
              "properties": {
                "name": { "type": "string" },
                "email": { "type": "string" },
                "department": { "type": "string" },
                "role": { "type": "string" },
                "startDate": { "type": "string", "format": "date" },
                "manager": { "type": "string" }
              },
              "required": ["name", "email", "department", "role", "startDate"]
            }
          }
        ],
        "outputs": [
          {
            "name": "onboardingComplete",
            "type": "boolean"
          },
          {
            "name": "employeeId",
            "type": "string"
          }
        ]
      }
    ],
    "decisions": [
      {
        "id": "EquipmentAllocation",
        "dmnDecisionId": "Decision_Equipment",
        "description": "Determines equipment package based on role and department",
        "inputs": [
          { "name": "department", "type": "string" },
          { "name": "role", "type": "string" },
          { "name": "isRemote", "type": "boolean" }
        ],
        "outputs": [
          { "name": "equipmentPackage", "type": "string" },
          { "name": "laptopSpec", "type": "string" }
        ]
      },
      {
        "id": "AccessLevelDetermination",
        "dmnDecisionId": "Decision_AccessLevel",
        "description": "Determines system access based on role",
        "inputs": [
          { "name": "department", "type": "string" },
          { "name": "role", "type": "string" },
          { "name": "employmentType", "type": "string" }
        ],
        "outputs": [
          { "name": "accessLevel", "type": "string" },
          { "name": "systemAccess", "type": "array" }
        ]
      }
    ]
  },

  "metadata": {
    "owner": "HR Operations",
    "maintainer": "[email protected]",
    "created": "2026-01-15",
    "lastModified": "2026-01-15",
    "tags": ["hr", "onboarding", "employee-lifecycle"],
    "complianceReferences": [
      "SOX Section 404 - Access Controls",
      "GDPR - Employee Data Processing"
    ]
  },

  "dependencies": {},

  "exposure": {
    "mcp": {
      "enabled": true,
      "exposedEntrypoints": ["EmployeeOnboarding", "EquipmentAllocation", "AccessLevelDetermination"]
    }
  }
}

Test Scenarios File

tests/scenarios.json

{
  "scenarios": [
    {
      "id": "scenario-1",
      "name": "Standard Engineer Onboarding",
      "description": "Typical software engineer hire",
      "processId": "EmployeeOnboarding",
      "input": {
        "employeeData": {
          "name": "Jane Smith",
          "email": "[email protected]",
          "department": "Engineering",
          "role": "Software Engineer",
          "startDate": "2026-02-01",
          "manager": "[email protected]",
          "isRemote": false,
          "employmentType": "full-time"
        }
      },
      "expectedDecisions": {
        "EquipmentAllocation": {
          "equipmentPackage": "ENGINEER_STANDARD",
          "laptopSpec": "MacBook Pro 16"
        },
        "AccessLevelDetermination": {
          "accessLevel": "STANDARD",
          "systemAccess": ["GitHub", "Jira", "Confluence", "Slack", "AWS-Dev"]
        }
      },
      "expectedOutcome": {
        "onboardingComplete": true
      }
    },
    {
      "id": "scenario-2",
      "name": "Executive Hire - Expedited",
      "description": "C-level executive with elevated access",
      "processId": "EmployeeOnboarding",
      "input": {
        "employeeData": {
          "name": "Robert Johnson",
          "email": "[email protected]",
          "department": "Executive",
          "role": "Chief Technology Officer",
          "startDate": "2026-01-20",
          "manager": "[email protected]",
          "isRemote": false,
          "employmentType": "full-time"
        }
      },
      "expectedDecisions": {
        "EquipmentAllocation": {
          "equipmentPackage": "EXECUTIVE",
          "laptopSpec": "MacBook Pro 16 Max"
        },
        "AccessLevelDetermination": {
          "accessLevel": "EXECUTIVE",
          "systemAccess": ["ALL_SYSTEMS", "BOARD_MATERIALS", "FINANCIAL_REPORTING"]
        }
      }
    },
    {
      "id": "scenario-3",
      "name": "Contractor - Limited Access",
      "description": "External contractor with restricted access",
      "processId": "EmployeeOnboarding",
      "input": {
        "employeeData": {
          "name": "Alex Contractor",
          "email": "[email protected]",
          "department": "Engineering",
          "role": "Contract Developer",
          "startDate": "2026-02-15",
          "manager": "[email protected]",
          "isRemote": true,
          "employmentType": "contractor"
        }
      },
      "expectedDecisions": {
        "AccessLevelDetermination": {
          "accessLevel": "RESTRICTED",
          "systemAccess": ["GitHub-Limited", "Jira-Project", "Slack-Channel"]
        }
      }
    }
  ]
}

Creating the .uapf Package

A .uapf file is simply a ZIP archive. To create it:

# From the package directory
cd employee-onboarding.uapf/

# Create the ZIP archive
zip -r ../employee-onboarding.uapf .

# Or using the UAPF CLI (if available)
uapf pack --output ../employee-onboarding.uapf
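
Because the package is plain ZIP, packing and sanity-checking can also be scripted. A minimal Python sketch using the stdlib zipfile module (function names and the round-trip check are illustrative, not part of the UAPF CLI):

```python
# Pack a UAPF directory into a .uapf (ZIP) archive, then read the
# manifest back to confirm the archive is sound.
import json
import zipfile
from pathlib import Path

def pack(package_dir: str, out_file: str) -> None:
    src = Path(package_dir)
    with zipfile.ZipFile(out_file, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(src.rglob("*")):
            if path.is_file():
                # archive paths are package-relative, so manifest.json
                # sits at the archive root as the engine expects
                zf.write(path, path.relative_to(src))

def read_manifest(uapf_file: str) -> dict:
    with zipfile.ZipFile(uapf_file) as zf:
        return json.loads(zf.read("manifest.json"))
```

The relative arcname matters: zipping the parent directory instead of its contents buries manifest.json one level deep and breaks package loading.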

Package Assembly Checklist

  • manifest.json is valid JSON
  • All files referenced in manifest exist
  • BPMN process IDs match manifest entries
  • DMN decision IDs match manifest entries
  • Resource mapping covers all tasks
  • Test scenarios cover main paths
  • Documentation is up to date

9. Complete Example: Employee Onboarding

Let's put everything together with a complete, working example. We'll algorithmate an employee onboarding process from scratch.

9.1 Process Overview

Process Boundary Definition
  • Trigger: Offer letter signed by candidate
  • End states: Employee ready for Day 1, Onboarding cancelled
  • Scope IN: Document collection, account provisioning, equipment allocation, orientation scheduling
  • Scope OUT: Recruiting, offer negotiation, performance management
  • Duration: 1-10 business days before start date

9.2 Happy Path

1. [Offer Accepted] →
2. Create employee record in HRIS →
3. Send welcome email →
4. Collect required documents →
5. [Parallel split]
   a. Determine equipment → Order equipment
   b. Determine access level → Create AD account → Provision systems
   c. Schedule orientation → Send calendar invites
6. [Parallel join]
7. Verify all tasks complete →
8. [Day 1 Ready]

9.3 Complete BPMN Model

models/bpmn/onboarding-main.bpmn

<?xml version="1.0" encoding="UTF-8"?>
<bpmn:definitions 
    xmlns:bpmn="http://www.omg.org/spec/BPMN/20100524/MODEL"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    id="Definitions_Onboarding"
    targetNamespace="http://algomation.io/onboarding">

  <bpmn:process id="Process_Onboarding" name="Employee Onboarding" isExecutable="true">

    <!-- START -->
    <bpmn:startEvent id="StartEvent_OfferAccepted" name="Offer Accepted">
      <bpmn:outgoing>Flow_Start_Create</bpmn:outgoing>
    </bpmn:startEvent>

    <!-- Initial Tasks -->
    <bpmn:serviceTask id="Task_CreateEmployeeRecord" name="Create Employee Record">
      <bpmn:incoming>Flow_Start_Create</bpmn:incoming>
      <bpmn:outgoing>Flow_Create_Welcome</bpmn:outgoing>
    </bpmn:serviceTask>

    <bpmn:userTask id="Task_SendWelcomeEmail" name="Send Welcome Email">
      <bpmn:incoming>Flow_Create_Welcome</bpmn:incoming>
      <bpmn:outgoing>Flow_Welcome_Docs</bpmn:outgoing>
    </bpmn:userTask>

    <bpmn:userTask id="Task_CollectDocuments" name="Collect Required Documents">
      <bpmn:incoming>Flow_Welcome_Docs</bpmn:incoming>
      <bpmn:outgoing>Flow_Docs_Split</bpmn:outgoing>
    </bpmn:userTask>

    <!-- PARALLEL SPLIT -->
    <bpmn:parallelGateway id="Gateway_Split" name="Start Parallel Tasks">
      <bpmn:incoming>Flow_Docs_Split</bpmn:incoming>
      <bpmn:outgoing>Flow_Split_Equipment</bpmn:outgoing>
      <bpmn:outgoing>Flow_Split_Access</bpmn:outgoing>
      <bpmn:outgoing>Flow_Split_Orientation</bpmn:outgoing>
    </bpmn:parallelGateway>

    <!-- EQUIPMENT TRACK -->
    <bpmn:businessRuleTask id="Task_DetermineEquipment" name="Determine Equipment">
      <bpmn:incoming>Flow_Split_Equipment</bpmn:incoming>
      <bpmn:outgoing>Flow_Equip_Order</bpmn:outgoing>
    </bpmn:businessRuleTask>

    <bpmn:userTask id="Task_OrderEquipment" name="Order Equipment">
      <bpmn:incoming>Flow_Equip_Order</bpmn:incoming>
      <bpmn:outgoing>Flow_Order_Join</bpmn:outgoing>
    </bpmn:userTask>

    <!-- ACCESS TRACK -->
    <bpmn:businessRuleTask id="Task_DetermineAccessLevel" name="Determine Access Level">
      <bpmn:incoming>Flow_Split_Access</bpmn:incoming>
      <bpmn:outgoing>Flow_Access_AD</bpmn:outgoing>
    </bpmn:businessRuleTask>

    <bpmn:serviceTask id="Task_CreateADAccount" name="Create AD Account">
      <bpmn:incoming>Flow_Access_AD</bpmn:incoming>
      <bpmn:outgoing>Flow_AD_Provision</bpmn:outgoing>
    </bpmn:serviceTask>

    <bpmn:serviceTask id="Task_ProvisionSystems" name="Provision System Access">
      <bpmn:incoming>Flow_AD_Provision</bpmn:incoming>
      <bpmn:outgoing>Flow_Provision_Join</bpmn:outgoing>
    </bpmn:serviceTask>

    <!-- ORIENTATION TRACK -->
    <bpmn:userTask id="Task_ScheduleOrientation" name="Schedule Orientation">
      <bpmn:incoming>Flow_Split_Orientation</bpmn:incoming>
      <bpmn:outgoing>Flow_Orient_Calendar</bpmn:outgoing>
    </bpmn:userTask>

    <bpmn:serviceTask id="Task_SendCalendarInvites" name="Send Calendar Invites">
      <bpmn:incoming>Flow_Orient_Calendar</bpmn:incoming>
      <bpmn:outgoing>Flow_Calendar_Join</bpmn:outgoing>
    </bpmn:serviceTask>

    <!-- PARALLEL JOIN -->
    <bpmn:parallelGateway id="Gateway_Join" name="All Tasks Complete">
      <bpmn:incoming>Flow_Order_Join</bpmn:incoming>
      <bpmn:incoming>Flow_Provision_Join</bpmn:incoming>
      <bpmn:incoming>Flow_Calendar_Join</bpmn:incoming>
      <bpmn:outgoing>Flow_Join_Verify</bpmn:outgoing>
    </bpmn:parallelGateway>

    <!-- Final verification -->
    <bpmn:userTask id="Task_VerifyComplete" name="Verify Onboarding Complete">
      <bpmn:incoming>Flow_Join_Verify</bpmn:incoming>
      <bpmn:outgoing>Flow_Verify_End</bpmn:outgoing>
    </bpmn:userTask>

    <!-- END -->
    <bpmn:endEvent id="EndEvent_Ready" name="Day 1 Ready">
      <bpmn:incoming>Flow_Verify_End</bpmn:incoming>
    </bpmn:endEvent>

    <!-- SEQUENCE FLOWS -->
    <bpmn:sequenceFlow id="Flow_Start_Create" sourceRef="StartEvent_OfferAccepted" targetRef="Task_CreateEmployeeRecord" />
    <bpmn:sequenceFlow id="Flow_Create_Welcome" sourceRef="Task_CreateEmployeeRecord" targetRef="Task_SendWelcomeEmail" />
    <bpmn:sequenceFlow id="Flow_Welcome_Docs" sourceRef="Task_SendWelcomeEmail" targetRef="Task_CollectDocuments" />
    <bpmn:sequenceFlow id="Flow_Docs_Split" sourceRef="Task_CollectDocuments" targetRef="Gateway_Split" />

    <bpmn:sequenceFlow id="Flow_Split_Equipment" sourceRef="Gateway_Split" targetRef="Task_DetermineEquipment" />
    <bpmn:sequenceFlow id="Flow_Equip_Order" sourceRef="Task_DetermineEquipment" targetRef="Task_OrderEquipment" />
    <bpmn:sequenceFlow id="Flow_Order_Join" sourceRef="Task_OrderEquipment" targetRef="Gateway_Join" />

    <bpmn:sequenceFlow id="Flow_Split_Access" sourceRef="Gateway_Split" targetRef="Task_DetermineAccessLevel" />
    <bpmn:sequenceFlow id="Flow_Access_AD" sourceRef="Task_DetermineAccessLevel" targetRef="Task_CreateADAccount" />
    <bpmn:sequenceFlow id="Flow_AD_Provision" sourceRef="Task_CreateADAccount" targetRef="Task_ProvisionSystems" />
    <bpmn:sequenceFlow id="Flow_Provision_Join" sourceRef="Task_ProvisionSystems" targetRef="Gateway_Join" />

    <bpmn:sequenceFlow id="Flow_Split_Orientation" sourceRef="Gateway_Split" targetRef="Task_ScheduleOrientation" />
    <bpmn:sequenceFlow id="Flow_Orient_Calendar" sourceRef="Task_ScheduleOrientation" targetRef="Task_SendCalendarInvites" />
    <bpmn:sequenceFlow id="Flow_Calendar_Join" sourceRef="Task_SendCalendarInvites" targetRef="Gateway_Join" />

    <bpmn:sequenceFlow id="Flow_Join_Verify" sourceRef="Gateway_Join" targetRef="Task_VerifyComplete" />
    <bpmn:sequenceFlow id="Flow_Verify_End" sourceRef="Task_VerifyComplete" targetRef="EndEvent_Ready" />

  </bpmn:process>
</bpmn:definitions>

9.4 Equipment Allocation DMN

models/dmn/equipment-allocation.dmn

<?xml version="1.0" encoding="UTF-8"?>
<definitions xmlns="https://www.omg.org/spec/DMN/20191111/MODEL/"
    id="Definitions_Equipment"
    name="Equipment Allocation Rules"
    namespace="http://algomation.io/dmn">

  <inputData id="Input_Department" name="department">
    <variable name="department" typeRef="string" />
  </inputData>

  <inputData id="Input_Role" name="role">
    <variable name="role" typeRef="string" />
  </inputData>

  <inputData id="Input_Remote" name="isRemote">
    <variable name="isRemote" typeRef="boolean" />
  </inputData>

  <decision id="Decision_Equipment" name="Determine Equipment Package">
    <variable name="equipment" typeRef="string" />

    <informationRequirement id="IR_1">
      <requiredInput href="#Input_Department" />
    </informationRequirement>
    <informationRequirement id="IR_2">
      <requiredInput href="#Input_Role" />
    </informationRequirement>
    <informationRequirement id="IR_3">
      <requiredInput href="#Input_Remote" />
    </informationRequirement>

    <decisionTable id="DT_Equipment" hitPolicy="FIRST">
      <input id="In_Dept" label="Department">
        <inputExpression typeRef="string"><text>department</text></inputExpression>
      </input>
      <input id="In_Role" label="Role">
        <inputExpression typeRef="string"><text>role</text></inputExpression>
      </input>
      <input id="In_Remote" label="Remote">
        <inputExpression typeRef="boolean"><text>isRemote</text></inputExpression>
      </input>

      <output id="Out_Package" label="Equipment Package" name="equipmentPackage" typeRef="string" />
      <output id="Out_Laptop" label="Laptop Spec" name="laptopSpec" typeRef="string" />

      <!-- Executive -->
      <rule id="Rule_Exec">
        <inputEntry><text>"Executive"</text></inputEntry>
        <inputEntry><text></text></inputEntry>
        <inputEntry><text></text></inputEntry>
        <outputEntry><text>"EXECUTIVE"</text></outputEntry>
        <outputEntry><text>"MacBook Pro 16 Max"</text></outputEntry>
      </rule>

      <!-- Engineering -->
      <rule id="Rule_Eng">
        <inputEntry><text>"Engineering"</text></inputEntry>
        <inputEntry><text></text></inputEntry>
        <inputEntry><text></text></inputEntry>
        <outputEntry><text>"ENGINEER_STANDARD"</text></outputEntry>
        <outputEntry><text>"MacBook Pro 16"</text></outputEntry>
      </rule>

      <!-- Design -->
      <rule id="Rule_Design">
        <inputEntry><text>"Design"</text></inputEntry>
        <inputEntry><text></text></inputEntry>
        <inputEntry><text></text></inputEntry>
        <outputEntry><text>"DESIGNER"</text></outputEntry>
        <outputEntry><text>"MacBook Pro 16 + Studio Display"</text></outputEntry>
      </rule>

      <!-- Remote workers get home office setup -->
      <rule id="Rule_Remote">
        <inputEntry><text></text></inputEntry>
        <inputEntry><text></text></inputEntry>
        <inputEntry><text>true</text></inputEntry>
        <outputEntry><text>"REMOTE_STANDARD"</text></outputEntry>
        <outputEntry><text>"MacBook Air M3 + Monitor + Accessories"</text></outputEntry>
      </rule>

      <!-- Default -->
      <rule id="Rule_Default">
        <inputEntry><text></text></inputEntry>
        <inputEntry><text></text></inputEntry>
        <inputEntry><text></text></inputEntry>
        <outputEntry><text>"OFFICE_STANDARD"</text></outputEntry>
        <outputEntry><text>"MacBook Air M3"</text></outputEntry>
      </rule>
    </decisionTable>
  </decision>
</definitions>

9.5 Access Level DMN

models/dmn/access-level.dmn

<?xml version="1.0" encoding="UTF-8"?>
<definitions xmlns="https://www.omg.org/spec/DMN/20191111/MODEL/"
    id="Definitions_Access"
    name="Access Level Rules"
    namespace="http://algomation.io/dmn">

  <inputData id="Input_Department" name="department">
    <variable name="department" typeRef="string" />
  </inputData>

  <inputData id="Input_Role" name="role">
    <variable name="role" typeRef="string" />
  </inputData>

  <inputData id="Input_EmpType" name="employmentType">
    <variable name="employmentType" typeRef="string" />
  </inputData>

  <decision id="Decision_AccessLevel" name="Determine Access Level">
    <variable name="accessResult" typeRef="Any" />

    <informationRequirement id="IR_1">
      <requiredInput href="#Input_Department" />
    </informationRequirement>
    <informationRequirement id="IR_2">
      <requiredInput href="#Input_Role" />
    </informationRequirement>
    <informationRequirement id="IR_3">
      <requiredInput href="#Input_EmpType" />
    </informationRequirement>

    <decisionTable id="DT_Access" hitPolicy="FIRST">
      <input id="In_Dept">
        <inputExpression typeRef="string"><text>department</text></inputExpression>
      </input>
      <input id="In_EmpType">
        <inputExpression typeRef="string"><text>employmentType</text></inputExpression>
      </input>

      <output id="Out_Level" label="Access Level" name="accessLevel" typeRef="string" />
      <output id="Out_Systems" label="System Access" name="systemAccess" typeRef="Any" />

      <!-- Contractors get restricted access -->
      <rule id="Rule_Contractor">
        <inputEntry><text></text></inputEntry>
        <inputEntry><text>"contractor"</text></inputEntry>
        <outputEntry><text>"RESTRICTED"</text></outputEntry>
        <outputEntry><text>["GitHub-Limited", "Jira-Project", "Slack-Channel"]</text></outputEntry>
      </rule>

      <!-- Executive access -->
      <rule id="Rule_Exec">
        <inputEntry><text>"Executive"</text></inputEntry>
        <inputEntry><text></text></inputEntry>
        <outputEntry><text>"EXECUTIVE"</text></outputEntry>
        <outputEntry><text>["ALL_SYSTEMS", "BOARD_MATERIALS", "FINANCIAL_REPORTING"]</text></outputEntry>
      </rule>

      <!-- Engineering access -->
      <rule id="Rule_Eng">
        <inputEntry><text>"Engineering"</text></inputEntry>
        <inputEntry><text></text></inputEntry>
        <outputEntry><text>"STANDARD"</text></outputEntry>
        <outputEntry><text>["GitHub", "Jira", "Confluence", "Slack", "AWS-Dev"]</text></outputEntry>
      </rule>

      <!-- Finance access -->
      <rule id="Rule_Finance">
        <inputEntry><text>"Finance"</text></inputEntry>
        <inputEntry><text></text></inputEntry>
        <outputEntry><text>"FINANCE"</text></outputEntry>
        <outputEntry><text>["Slack", "NetSuite", "Expensify", "Banking"]</text></outputEntry>
      </rule>

      <!-- Default -->
      <rule id="Rule_Default">
        <inputEntry><text></text></inputEntry>
        <inputEntry><text></text></inputEntry>
        <outputEntry><text>"STANDARD"</text></outputEntry>
        <outputEntry><text>["Slack", "Jira", "Confluence", "Google-Workspace"]</text></outputEntry>
      </rule>
    </decisionTable>
  </decision>
</definitions>
Complete package: You now have all the components needed for a working UAPF package. Combine these files with the manifest.json, resource-mapping.yaml, and test scenarios from Section 8 to create the complete employee-onboarding.uapf package.

10. Validation & Testing

10.1 Engine-Agnostic Test Harness (Required)

Validation is incomplete without a standardized way to assert expected behavior. A UAPF Test Harness defines how a process or decision can be tested independently of the runtime engine. This ensures that BPMN, DMN, and resource bindings behave consistently across engines, MCP servers, or orchestration layers.

  • Tests validate behavior, not implementation
  • Tests must be executable by any compliant engine
UAPF Test Harness Contract

| Element | Required | Description |
|---------|----------|-------------|
| scenarioId | MUST | Unique identifier |
| target | MUST | BPMN process ID or DMN decision ID |
| inputs | MUST | Structured input payload |
| expectedPath | SHOULD | Expected gateway path or decision output |
| expectedOutputs | MUST | Explicit output assertions |
| forbiddenStates | MAY | States that must not occur |

tests/scenarios.json

{
  "scenarioId": "invoice-auto-approval",
  "target": "InvoiceApproval",
  "inputs": {
    "invoiceAmount": 3200
  },
  "expectedPath": ["AUTO_APPROVAL"],
  "expectedOutputs": {
    "approvalStatus": "APPROVED"
  }
}
Definition of Done: Test Harness
  • Every executable process has at least one scenario
  • All DMN decisions have boundary-value tests
  • Tests do not reference engine-specific APIs
  • Expected behavior is explicit and machine-checkable
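
The contract above can be enforced with a tiny runner that treats the engine as an opaque callable, which keeps scenarios free of engine-specific APIs. A minimal Python sketch (the engine signature is an assumption for illustration, not part of the UAPF spec):

```python
# Engine-agnostic scenario runner: the engine is any callable
# (target, inputs) -> outputs, so the same scenarios run against a
# local engine, an MCP server, or a mock.
def run_scenario(scenario: dict, engine) -> list:
    """Return a list of assertion failures; empty means the scenario passed."""
    outputs = engine(scenario["target"], scenario["inputs"])
    failures = []
    for key, expected in scenario["expectedOutputs"].items():
        actual = outputs.get(key)
        if actual != expected:
            failures.append(f"{key}: expected {expected!r}, got {actual!r}")
    return failures
```

Because expected outputs are exact assertions, the runner's result is machine-checkable, satisfying the last item of the definition of done.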

10.2 Schema Validation

Validate your package against UAPF schemas:

# Using UAPF CLI
uapf validate ./employee-onboarding.uapf

# Or using the online validator
# Upload to: https://algomation.io/uapf-viewer

10.3 Reference Validation

Check that all references are valid:

  • All BPMN process IDs in manifest exist in BPMN files
  • All DMN decision IDs in manifest exist in DMN files
  • All task references in resource-mapping exist in BPMN
  • All target IDs in bindings are defined in targets
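
The file-level part of these checks can be scripted against an unpacked package directory. A Python sketch (ID-level checks would additionally parse the BPMN/DMN files themselves):

```python
# Reference validation: every model path listed in manifest.json
# must exist inside the package directory.
import json
from pathlib import Path

def missing_model_files(package_dir: str) -> list:
    """Return manifest model paths that do not exist on disk."""
    root = Path(package_dir)
    manifest = json.loads((root / "manifest.json").read_text())
    return [p
            for paths in manifest.get("models", {}).values()
            for p in paths
            if not (root / p).is_file()]
```

An empty return value clears the first reference-validation bullet; anything else names the dangling manifest entries directly.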

10.4 Test Scenario Execution

Run test scenarios against the UAPF Engine:

# Start the engine with your package
cd uapf-engine
npm run dev

# In another terminal, run tests
curl -X POST http://localhost:4000/uapf/execute-process \
  -H "Content-Type: application/json" \
  -d '{
    "packageId": "com.yourcompany.hr.employee-onboarding",
    "processId": "EmployeeOnboarding",
    "input": {
      "employeeData": {
        "name": "Jane Smith",
        "email": "[email protected]",
        "department": "Engineering",
        "role": "Software Engineer",
        "startDate": "2026-02-01",
        "manager": "[email protected]",
        "isRemote": false,
        "employmentType": "full-time"
      }
    }
  }'

10.5 Decision Table Testing

# Test individual decisions
curl -X POST http://localhost:4000/uapf/evaluate-decision \
  -H "Content-Type: application/json" \
  -d '{
    "packageId": "com.yourcompany.hr.employee-onboarding",
    "decisionId": "EquipmentAllocation",
    "input": {
      "department": "Engineering",
      "role": "Software Engineer",
      "isRemote": false
    }
  }'

# Expected output:
# {
#   "equipmentPackage": "ENGINEER_STANDARD",
#   "laptopSpec": "MacBook Pro 16"
# }

10.6 Validation Checklist

  • Package passes schema validation
  • All references are valid
  • Happy path scenario executes successfully
  • All decision tables return expected outputs
  • Exception scenarios are handled correctly
  • All end states are reachable

11. Deployment & Integration

11.1 Push to ProcessGit

# Initialize repository
cd employee-onboarding.uapf
git init
git add .
git commit -m "Initial algorithmation of employee onboarding"

# Push to ProcessGit
git remote add origin https://processgit.org/yourorg/employee-onboarding.git
git push -u origin main

11.2 Configure UAPF Engine

Set environment variables for the engine:

# .env file
PORT=4000
UAPF_MODE=workspace
WORKSPACE_DIR=/path/to/your/packages
UAPF_SCHEMAS_DIR=/path/to/schemas
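If you script against these settings, it helps to read them in one place with explicit fallbacks. A small illustrative loader in Python — the variable names come from the engine configuration above, while the fallback values are assumptions for illustration, not documented engine defaults:

```python
# Illustrative loader for the engine settings above. Fallback paths are
# assumptions for this sketch, not actual UAPF Engine defaults.
import os

def engine_config(env=os.environ):
    return {
        "port": int(env.get("PORT", "4000")),
        "mode": env.get("UAPF_MODE", "workspace"),
        "workspace_dir": env.get("WORKSPACE_DIR", "./packages"),
        "schemas_dir": env.get("UAPF_SCHEMAS_DIR", "./schemas"),
    }

cfg = engine_config({"PORT": "4000", "UAPF_MODE": "workspace"})
```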

11.3 MCP Integration for AI Agents

With MCP exposure enabled in manifest.json, the process's operations become available as tools to AI agents such as Claude:

// Claude can now invoke:
// - execute_process("EmployeeOnboarding", {...})
// - evaluate_decision("EquipmentAllocation", {...})
// - evaluate_decision("AccessLevelDetermination", {...})

11.4 Production Checklist

  • Package is in version control (ProcessGit)
  • UAPF Engine is deployed and running
  • Health checks are configured
  • Logging and monitoring are enabled
  • API authentication is configured
  • MCP server is registered with AI agents
  • Rollback procedure is documented

12. Measuring Success

Algorithmation should produce measurable improvements. Define baseline metrics before deployment and track changes over time.

Metric                Description
Decision consistency  Same inputs produce identical outputs
Manual touch rate     Percentage of cases requiring human intervention
Exception frequency   How often non-happy paths occur
Cycle time            End-to-end execution duration
Audit findings        Number of compliance issues per period
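A sketch of how these metrics might be computed from execution records — the record shape here is illustrative, not a UAPF Engine log format:

```python
# Hypothetical metric computation over per-case execution records.
# Field names ("path", "manual", "cycle_seconds") are illustrative.

records = [
    {"path": "happy",     "manual": False, "cycle_seconds": 120},
    {"path": "happy",     "manual": False, "cycle_seconds": 90},
    {"path": "exception", "manual": True,  "cycle_seconds": 600},
    {"path": "happy",     "manual": False, "cycle_seconds": 110},
]

total = len(records)
manual_touch_rate = sum(r["manual"] for r in records) / total
exception_frequency = sum(r["path"] != "happy" for r in records) / total
avg_cycle = sum(r["cycle_seconds"] for r in records) / total

print(f"manual touch rate:   {manual_touch_rate:.0%}")
print(f"exception frequency: {exception_frequency:.0%}")
print(f"avg cycle time:      {avg_cycle:.0f}s")
```

Run the same computation over a pre-deployment baseline sample to make the before/after comparison meaningful.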

13. Governance & Change Control

Algorithmated processes are operational assets. They must be governed with the same rigor as application code.

Versioning Rules

  • Every UAPF package MUST use semantic versioning (MAJOR.MINOR.PATCH)
  • Rule changes that affect outcomes increment MINOR or MAJOR
  • Pure documentation or formatting changes increment PATCH
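These rules are simple enough to encode as a helper, so the bump decision never depends on memory. A hypothetical sketch — the change-type labels are illustrative:

```python
# Encodes the versioning rules above: breaking rule changes bump MAJOR,
# other outcome-affecting rule changes bump MINOR, doc/format changes
# bump PATCH. Change-type labels are illustrative, not a UAPF vocabulary.

def bump(version, change_type):
    major, minor, patch = (int(p) for p in version.split("."))
    if change_type == "breaking":
        return f"{major + 1}.0.0"
    if change_type == "rule-change":
        return f"{major}.{minor + 1}.0"
    if change_type == "docs":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown change type: {change_type}")
```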

Change Approval

Change Type               Required Approval
DMN rule logic            Process Owner + Domain Expert
BPMN flow changes         Process Owner
Resource mapping changes  Operations / IT Owner
Exposure via MCP          Security / Architecture

Audit Expectations

  • Every released package must be traceable to an approved change
  • Executed decisions must be loggable with input and output
  • Historical versions must remain retrievable
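One way to satisfy all three expectations is a structured log record per executed decision; the field names below are assumptions for illustration, not a prescribed UAPF audit schema:

```python
# Illustrative audit record for one executed decision. Field names are
# assumptions chosen to cover the three expectations above.
import datetime
import json

def audit_record(package_id, version, decision_id, inputs, outputs):
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "packageId": package_id,
        "packageVersion": version,   # ties the execution to a released version
        "decisionId": decision_id,
        "input": inputs,             # logged with full input...
        "output": outputs,           # ...and full output
    }

line = json.dumps(audit_record(
    "com.yourcompany.hr.employee-onboarding", "1.2.0",
    "EquipmentAllocation",
    {"department": "Engineering", "isRemote": False},
    {"equipmentPackage": "ENGINEER_STANDARD"},
))
```

Appending one such JSON line per decision, keyed by package version, makes historical behaviour reconstructable without retrieving the engine state.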

With an explicit test harness and a canonical resource ontology, algorithmated processes become portable, testable, and auditable artifacts. This ensures that AI agents execute governed logic, not inferred intent.

Summary

You've now learned the complete algorithmation workflow:

  1. Select a high-impact process using the scoring framework
  2. Discover the process through structured analysis
  3. Model the workflow in BPMN
  4. Define business rules in DMN decision tables
  5. Map executors to tasks (humans, systems, AI agents)
  6. Package everything into a UAPF artifact
  7. Validate and test thoroughly
  8. Deploy to ProcessGit and UAPF Engine

The result is an explicit, versioned, testable representation of your business process—ready for AI agents to execute with precision rather than guesswork.

Next Steps

Tags: algorithmation, UAPF, BPMN, DMN, process-as-code, AI-integration