🧩 Concept Paper
From Single Processes to the Algorithmated Enterprise: A UAPF Operating Model
By Algomation · Concept Paper
Short version: most examples of algorithmation show one process, one .uapf file, one AI agent. Real enterprises have hundreds or thousands of interconnected processes, changing weekly, running across many agents and systems. This article proposes a UAPF-based operating model for that reality: how to structure, govern, deploy, and connect an entire estate of algorithmated processes to AI agents via MCP and function calling.
1. Why single-process demos aren’t enough
The early UAPF examples are intentionally simple:
- One loan approval process packaged as loan-approval.uapf
- One permit flow packaged as municipal-building-permit.uapf
- One AI agent (Claude, Grok, Gemini) calling that single package via MCP or function calling
That’s useful for understanding the pattern, but it’s not how enterprises actually operate. In reality you have:
- Hundreds or thousands of BPMN/DMN/CMMN models across domains
- Shared rules and sub-processes reused in many products and countries
- Frequent policy change: risk thresholds, eligibility criteria, workflows
- Many agents and channels: web, mobile, call centers, AI assistants, batch jobs
The challenge is not just algorithmating one workflow, but building a systematic, governable algorithm layer for the enterprise – and then exposing it cleanly to AI agents.
2. UAPF as an “algorithm layer”, not just a file
At small scale, a .uapf looks like a convenient ZIP you can hand to an engine or an AI agent. At enterprise scale, it’s better to think of UAPF as a unit of algorithmic capability, similar to:
- a container image in an internal registry, or
- a library in a private package manager.
In this view, an enterprise algorithm stack looks like:
- Modeling layer: BPMN / DMN / CMMN in tools and repos
- Algorithm packages: UAPF bundles (each with manifest, models, tests)
- UAPF Registry: catalog of all packages, versions, owners, dependencies
- Execution layer: UAPF engines running packages as services
- Adapter layer: MCP servers, function-calling tools, OpenAPI wrappers
- Agent layer: named AI agents ("Loan Assistant", "HR Agent", "Robert Cane") calling these algorithms under governance
The key shift: UAPF is not a “thing you attach to an LLM”; it is the canonical algorithm layer that both humans and AI agents call into.
3. Structuring UAPFs across the enterprise
3.1. Don’t build one giant enterprise.uapf
A single mega-package for the entire organization quickly becomes unmaintainable. Instead, structure UAPFs along product and domain boundaries, and group together processes that share ownership and lifecycle.
A pragmatic three-level pattern:
- Micro-UAPF (single capability)
  One main BPMN process + a few DMNs/CMMNs. Example: io.bank.retail.credit-check.uapf. Good for small, re-usable “skills”.
- Domain UAPF (product / service domain)
  Cohesive set of processes and decisions for one domain. Example: io.bank.loans.retail-core.uapf, including:
  - Loan application
  - Top-up / refinancing
  - Restructuring
  - Simple collections
- Shared Library UAPF
  Cross-domain rules and fragments. Examples: io.bank.shared.kyc-rules.uapf, io.bank.shared.risk-matrix.uapf
Rule of thumb: package together things that share a single owner and release cadence. If responsibility or change frequency differs, split into multiple UAPFs.
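As a sketch of how the rule of thumb above could be made operational, a registry entry might record the package's granularity, owner, and release cadence so tooling can flag mismatched groupings. All field names here are hypothetical, not part of any UAPF specification:

```python
# Hypothetical registry metadata for one package; the "kind" field mirrors the
# three-level pattern (micro / domain / shared-library) described above.
PACKAGE_ENTRY = {
    "id": "io.bank.loans.retail-core",
    "kind": "domain",                  # micro | domain | shared-library
    "owner": "retail-lending-team",    # single owner per the rule of thumb
    "release_cadence": "weekly",       # shared cadence for everything inside
    "version": "2.4.0",
    "depends_on": ["io.bank.shared.kyc-rules"],
}

def same_lifecycle(a: dict, b: dict) -> bool:
    """Two models belong in one package only if owner and cadence match."""
    return (a["owner"], a["release_cadence"]) == (b["owner"], b["release_cadence"])
```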
4. Lifecycle and CI/CD for UAPF packages
4.1. SemVer and release pipelines
Each UAPF should follow semantic versioning:
- MAJOR – breaking change in policy or interface
- MINOR – new paths or fields, backward compatible
- PATCH – bug fix, non-breaking corrections
A typical algorithmation pipeline:
- Edit BPMN/DMN/CMMN in modeling tools (or Git repo).
- Run syntax validation and unit tests on example cases.
- Run regression tests against historical scenarios.
- Build and sign a new .uapf package.
- Publish to the UAPF registry with metadata and release notes.
- Deploy to UAPF engines (DEV → UAT → PROD).
- Update AI adapters (MCP / tools schemas) to allow the new version.
- Decommission old versions after a defined transition window.
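The SemVer rules above can be enforced mechanically in the pipeline. The sketch below is an assumption about how an "Algorithm Factory" gate might work; the function names are illustrative, not part of any real UAPF tooling:

```python
# Minimal SemVer gate: MINOR/PATCH bumps may auto-deploy, MAJOR bumps should
# route to an approval workflow because they signal a breaking policy change.

def parse_semver(version: str) -> tuple[int, int, int]:
    """Split 'MAJOR.MINOR.PATCH' into integers."""
    major, minor, patch = (int(part) for part in version.split("."))
    return major, minor, patch

def is_backward_compatible(current: str, candidate: str) -> bool:
    """True for MINOR/PATCH bumps, False for a MAJOR (breaking) bump."""
    cur, cand = parse_semver(current), parse_semver(candidate)
    return cand[0] == cur[0] and cand >= cur

print(is_backward_compatible("2.3.1", "2.4.0"))  # True  -> auto-deploy path
print(is_backward_compatible("2.3.1", "3.0.0"))  # False -> approval workflow
```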
4.2. Keeping AI agents in sync
AI agents should not load files directly; they should call versioned algorithm services. In practice:
- MCP servers and function-calling layers know which UAPF version is current and which are deprecated.
- System prompts instruct agents to:
- always call tools instead of inferring policy from memory,
- always include package ID + version in explanations.
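One way to push agents toward this behavior is to encode it in the tool definition itself. The schema below is a hypothetical function-calling declaration; the tool name and fields are invented for illustration:

```python
# Illustrative function-calling schema: the description instructs the agent to
# call the tool rather than infer policy, and to cite package@version.
LOAN_TOOL_SCHEMA = {
    "name": "uapf_loan_run_application",
    "description": (
        "Run the loan application process from the current retail loans UAPF. "
        "Never answer from memory; always cite the package ID and version "
        "returned in the response when explaining the outcome."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "applicant_id": {"type": "string"},
            "amount": {"type": "number"},
        },
        "required": ["applicant_id", "amount"],
    },
}
```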
For gradual rollout, the engine can split traffic:
- 10% of calls to the new package version
- 90% of calls to the current version
Compare outcomes, then promote the new version. This is CI/CD for algorithms with AI as the front-end.
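A traffic split like this should be deterministic per case, so a long-running process never flips versions mid-flight. The sketch below hashes the case ID into a bucket; the version labels and the 10% weight are examples, not mandated by UAPF:

```python
# Deterministic canary routing between two UAPF versions: the same case ID
# always lands in the same bucket, so each case sticks to one version.
import hashlib

def pick_version(case_id: str, canary: str, stable: str,
                 canary_percent: int = 10) -> str:
    """Route roughly canary_percent% of cases to the canary version."""
    bucket = int(hashlib.sha256(case_id.encode()).hexdigest(), 16) % 100
    return canary if bucket < canary_percent else stable

# Same input, same routing decision, every time.
v = pick_version("application-42", "loans@canary", "loans@stable")
assert v == pick_version("application-42", "loans@canary", "loans@stable")
```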
5. Execution, adapter, and agent layers
At runtime, UAPF sits in the middle of a layered architecture:
- Agents & channels: Claude, Grok, Gemini, internal agents (e.g. "Robert Cane")
- Agent layer: tool use, system prompts
- Adapter layer: MCP servers (uapf-loan-approval, uapf-permits, uapf-hr, ...) and function-calling schemas (Gemini, others), calling the execution layer over HTTP/gRPC
- Execution layer: UAPF engines per domain, with /execute-process and /evaluate-decision endpoints, running the UAPF packages
- Modeling & governance: BPMN/DMN/CMMN models, tests, registry, approvals
Agents never embed the rules inside their prompts; they call into the adapter layer, which delegates to the UAPF engines – and those engines execute the authoritative algorithm packages.
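The adapter layer's job can be reduced to a small translation step: turn a tool call into a versioned HTTP request against a UAPF engine. The /uapf/{domain}/execute-process path follows this article's convention, but the payload fields below are assumptions for illustration:

```python
# Sketch of an adapter translating a tool call into an engine request.
import json

def build_engine_request(domain: str, package: str, version: str,
                         process_id: str, variables: dict) -> tuple[str, bytes]:
    """Return the (path, body) pair for the engine's execute-process endpoint."""
    path = f"/uapf/{domain}/execute-process"
    body = json.dumps({
        "package": f"{package}@{version}",  # pin the exact algorithm version
        "process": process_id,
        "variables": variables,
    }).encode()
    return path, body

path, body = build_engine_request(
    "loans", "io.bank.loans.retail-core", "2.4.0",
    "loan-application", {"amount": 12000},
)
```

Pinning package@version in every request is what makes agent behavior auditable: the log line already names the exact algorithm that produced the decision.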
6. Resources and agents in BPMN
6.1. Model abstract roles, not concrete bots
Real processes depend on who carries out tasks: humans, systems, or AI agents. The robust approach is:
- In BPMN, use lanes and resource roles that are abstract:
  ROLE_LOAN_OFFICER, ROLE_HR_AGENT, ROLE_RISK_SYSTEM, ROLE_AI_ASSISTANT
- Keep specific executors (human groups, AI agents) in a separate resource registry.
A Resource / Capability Registry might say:
- ROLE_HR_AGENT can be performed by:
  - Humans in group HR_BALTICS
  - AI agent hr-assistant-1 (Claude)
- ROLE_RISK_SYSTEM is implemented by:
  - UAPF io.bank.shared.risk-engine
  - MCP tool uapf_risk_evaluate
At runtime, an orchestrator binds abstract roles to concrete executors without changing the models.
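A toy version of that binding step might look like this. The role names come from the article; the registry layout and preference logic are illustrative assumptions:

```python
# Binding abstract BPMN roles to concrete executors at runtime. The BPMN model
# never changes; only the registry entries do.
ROLE_REGISTRY = {
    "ROLE_HR_AGENT": [
        {"kind": "ai_agent", "id": "hr-assistant-1"},
        {"kind": "human_group", "id": "HR_BALTICS"},
    ],
    "ROLE_RISK_SYSTEM": [
        {"kind": "uapf", "id": "io.bank.shared.risk-engine"},
    ],
}

def bind_role(role: str, prefer: str = "ai_agent") -> dict:
    """Pick an executor of the preferred kind, else fall back to the first."""
    executors = ROLE_REGISTRY[role]
    for executor in executors:
        if executor["kind"] == prefer:
            return executor
    return executors[0]

assert bind_role("ROLE_HR_AGENT")["id"] == "hr-assistant-1"
```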
6.2. Capacity and calendars
If needed, the resource registry can also include:
- Agent type (human / AI / system)
- Capacity (tasks per hour, concurrency limits)
- Availability (working hours, maintenance windows)
This allows runtime decisions like “prefer AI Agent X during business hours, fall back to human queue if load is high”, without modifying the UAPF itself.
7. Managing relations and dependencies between UAPFs
7.1. Composition instead of one big package
Rather than merging many small UAPFs into a single monolith, use:
- BPMN call activities – one process calls another, potentially from another package.
- DMN dependency graphs – decision models depending on other decisions, potentially from shared libraries.
The UAPF manifest and engine configuration can express that a Loan Approval process depends on a shared KYC rules package, without physically merging them.
7.2. Dependency graph and impact analysis
The UAPF registry should maintain a dependency graph:
- For each package:
- Uses: which other UAPFs it calls or imports
- Used by: which packages or agents depend on it
Before changing shared.kyc-rules.uapf, you can see:
- “This impacts 7 loan products, 3 credit card flows, and 2 public-facing agents.”
CI pipelines can then automatically run regression tests for all dependents, not just the changed package.
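Impact analysis over the "used by" edges is a plain graph traversal. The sketch below computes the transitive set of dependents that need regression tests; the package names are invented for illustration:

```python
# Registry-side impact analysis: everything transitively reachable from a
# changed package over "used by" edges must be regression-tested.
from collections import deque

USED_BY = {
    "io.bank.shared.kyc-rules": ["io.bank.loans.retail-core", "io.bank.cards.core"],
    "io.bank.loans.retail-core": ["io.bank.loans.topup"],
}

def impacted(package: str) -> set[str]:
    """Transitive closure of dependents of `package` (breadth-first)."""
    seen: set[str] = set()
    queue = deque([package])
    while queue:
        for dependent in USED_BY.get(queue.popleft(), ()):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

print(sorted(impacted("io.bank.shared.kyc-rules")))
# ['io.bank.cards.core', 'io.bank.loans.retail-core', 'io.bank.loans.topup']
```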
8. An enterprise-wide UAPF operating model
Putting it together, an “Algorithmated Enterprise” around UAPF would include at least seven building blocks:
- Algorithm Catalog (UAPF Registry)
  Central catalog of all UAPF packages: IDs, versions, domains, risk levels, owners, dependencies.
- Algorithm Factory (CI/CD)
  Pipelines to validate, test, sign, and publish UAPFs; semantic versioning; approval workflows.
- Execution Fabric
  UAPF engines per domain (or per cluster) exposing standardized APIs such as:
  POST /uapf/{domain}/execute-process
  POST /uapf/{domain}/evaluate-decision
- Agent Gateway
  MCP servers and function-calling adapters that project algorithm services into AI agents: uapf_loan_run_application, uapf_permit_evaluate_zoning, uapf_hr_check_eligibility, etc.
- Resource & Agent Registry
  Mapping from abstract roles in BPMN/DMN to actual executors: humans, legacy systems, or named AI agents (e.g. “Robert Cane”).
- Supervising Orchestrator
  A process orchestration layer (or supervising agent) that:
  - chooses which UAPF process/decision to call,
  - assigns work to appropriate executors,
  - handles SLAs, escalation, retries.
- Governance & Observability
  End-to-end logs and dashboards showing:
  - which agent called which UAPF version with which inputs,
  - what decision was made, why, and how long it took,
  - where policy changes affected behavior.
In other words: UAPF becomes the algorithm substrate of the enterprise; MCP, function calling, and tools are how AI agents tap into it safely.
9. Getting started at enterprise scale
A realistic adoption path might look like:
- Pick one or two domains (e.g. retail loans, permits, HR onboarding) and build domain UAPFs instead of ad-hoc models.
- Stand up a minimal UAPF registry (even as a database + simple UI) to track IDs, versions, owners, and dependencies.
- Define a simple CI/CD pipeline for UAPF packages with tests and approvals.
- Expose these UAPFs via a dedicated engine + adapter (MCP server, function calling tools).
- Assign one or two named AI agents to those domains and give them explicit system prompts to always use the tools.
- Measure usage, drift, and decision quality; refine governance.
- Extend the pattern to more domains and agents.
10. Conclusion: from demos to an Algorithmated Enterprise
UAPF started as a way to package one process into one file so a single agent could use it. In an enterprise context, the same idea scales into a full algorithm layer:
- Processes and rules are explicit, governed, and versioned.
- AI agents become front-ends to these algorithms, not replacements for them.
- Changes in policy are reflected in UAPF packages and instantly visible to all channels.
- Dependencies and risks are visible and testable, not buried in opaque code or prompts.
That is the core promise of Algorithmation at enterprise scale: not just smarter agents, but a disciplined, explainable, and governable foundation for how they act.