I’ve been reading a lot about the potential of adding AI Agent functionality to existing processes and software applications. Most of what I read is cautionary tales and warnings about the limitations of AI Agents. So I decided to take some of the most common limitations, combine them with the most common cautionary tale, and talk about how orchestration with BPMN does an awful lot to solve these problems.
Let’s start by explaining our cautionary tale: Healthcare. It’s very common for articles about Agentic AI to eventually evoke caution in their readers with the words “Would you trust AI with your health?” I, like you, would not. People mention very specific reasons for this, and I wondered: could I use BPMN to create patterns that alleviate these fears? The idea being that if it works for a healthcare scenario where the stakes are so high, surely it would work for any other kind of process?
So I started with this simple BPMN representation of a diagnosis process. A patient has some medical issue, and after getting all the information they need, the doctor then confirms a diagnosis and makes a reservation for some kind of treatment. Confirmation is then sent to the patient. This model as well as all of the others I’ll be referencing in this post can be found here. So where do I start with my journey towards optimizing this with AI?
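For readers who prefer the XML behind the diagram, here’s roughly what such a process looks like in BPMN 2.0 XML. This is a minimal sketch rather than the linked model itself: the element IDs and names are my own illustrative choices, and the namespace declarations and diagram (BPMNDI) section are omitted.

```xml
<!-- Minimal sketch of the starting diagnosis process (illustrative IDs/names;
     namespace declarations and diagram elements omitted). -->
<bpmn:process id="diagnosis-process" name="Diagnosis process" isExecutable="true">
  <bpmn:startEvent id="medical-issue-reported" name="Medical issue reported"/>
  <bpmn:userTask id="gather-patient-information" name="Gather patient information"/>
  <bpmn:userTask id="confirm-diagnosis" name="Confirm diagnosis"/>
  <bpmn:serviceTask id="make-treatment-reservation" name="Make treatment reservation"/>
  <bpmn:sendTask id="send-confirmation" name="Send confirmation to patient"/>
  <bpmn:endEvent id="diagnosis-complete" name="Diagnosis complete"/>

  <bpmn:sequenceFlow id="f1" sourceRef="medical-issue-reported" targetRef="gather-patient-information"/>
  <bpmn:sequenceFlow id="f2" sourceRef="gather-patient-information" targetRef="confirm-diagnosis"/>
  <bpmn:sequenceFlow id="f3" sourceRef="confirm-diagnosis" targetRef="make-treatment-reservation"/>
  <bpmn:sequenceFlow id="f4" sourceRef="make-treatment-reservation" targetRef="send-confirmation"/>
  <bpmn:sequenceFlow id="f5" sourceRef="send-confirmation" targetRef="diagnosis-complete"/>
</bpmn:process>
```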
Visualize critical information
Problem: When adding an Agent, how can I ensure its actions are auditable?
I’m going to jump right in by changing the model to both add AI Agent functionality while also addressing the issue of auditability.
By design, BPMN visualizes the execution of actions that will happen or have happened. This creates clear auditability, both as a log of events internally in the engine and when superimposed on the model itself. While the standard is mostly known for its structured way of implementing processes, it does have a great way of adding non-deterministic sections to a process. The symbol in question is the ad-hoc sub-process. This allows your process to break into an unstructured segment, which allows for the addition of AI Agent shenanigans. The Agent can look at the context of the request and see a list of actions that it can take. (Changes are highlighted below in green.)
Using this construct the Agent has the freedom to perform the actions it feels are required by the context and it is completely visible to the user how and why those choices are being made. Each task, service or event that is triggered by the AI is visualized in the very BPMN model that you create. Afterwards, once the AI has finished its work, the process can continue along a more predictable path.
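To make the construct concrete, here’s a minimal sketch of what an ad-hoc sub-process can look like in BPMN 2.0 XML. The task names are illustrative assumptions, not the tasks from the actual model; the point is that the activities inside have no prescribed order, so the Agent decides which of them to activate and when.

```xml
<!-- Sketch of the ad-hoc sub-process that hands control to the AI Agent
     (illustrative IDs/names). -->
<bpmn:adHocSubProcess id="decide-on-treatment" name="Decide on Treatment">
  <!-- No sequence flows are required inside: the activities below are a menu
       of actions the Agent may activate, in any order and as often as the
       context requires. -->
  <bpmn:task id="review-patient-history" name="Review patient history"/>
  <bpmn:serviceTask id="order-lab-tests" name="Order lab tests"/>
  <bpmn:serviceTask id="check-drug-interactions" name="Check drug interactions"/>
  <bpmn:userTask id="ask-follow-up-question" name="Ask patient a follow-up question"/>
</bpmn:adHocSubProcess>
```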
Increasing trust in results
Problem: AI gets things wrong. How can I ensure these are caught and any damage is undone?
We’ve changed the process so that the Agent is going to be making choices and acting on them. Clearly the first thing to think of is: do you trust its results? Well, obviously you shouldn’t. So, in the next iteration of the process, not only have I added a pattern to adjudicate whether the correct choice was made, but I’ve also ensured that if an action has been taken as a result of that decision, it can be undone.
I’ve written before about how this can be done by analyzing the chain of thought output, but this pattern goes a little further: first, by allowing the thought checking to happen in parallel with the actions being taken, and second, by being able to actually undo any actions once a bad decision has been discovered.
How it works is that after the “Decide on Treatment” sub-process finishes, there are two possibilities:
- Treatment is needed and a reservation is made.
- No treatment is needed and nothing is reserved.
In both cases a check is made (in parallel) to ensure the decision makes sense. If it’s all good, we end. If some flawed logic is discovered, a Compensation event is triggered. This is a really powerful feature of BPMN, because it will check what actions have been taken by the process (in this case the “Make Treatment Reservation” task may be complete) and undo them (in this model that means activating the “Cancel Reservation” task).
This solves two issues that you’d tend to worry about: it catches mistakes, and if those mistakes have led to bad actions it can undo them. And none of this will actually slow down the process, because it’s all happening in parallel!
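Here’s a rough sketch of how that compensation pattern can be expressed in BPMN 2.0 XML, with illustrative IDs and names: a compensation boundary event on the reservation task, an associated undo handler, and a compensation throw event on the checking branch.

```xml
<!-- Sketch of the undo pattern (illustrative IDs/names). The handler only runs
     if "Make treatment reservation" actually completed. -->
<bpmn:serviceTask id="make-treatment-reservation" name="Make Treatment Reservation"/>

<bpmn:boundaryEvent id="reservation-compensation" attachedToRef="make-treatment-reservation">
  <bpmn:compensateEventDefinition/>
</bpmn:boundaryEvent>

<bpmn:serviceTask id="cancel-reservation" name="Cancel Reservation" isForCompensation="true"/>
<bpmn:association id="undo-association"
                  sourceRef="reservation-compensation" targetRef="cancel-reservation"/>

<!-- On the parallel "check the chain of thought" branch: trigger compensation
     whenever flawed logic is discovered. -->
<bpmn:intermediateThrowEvent id="flawed-logic-found" name="Flawed logic found">
  <bpmn:compensateEventDefinition/>
</bpmn:intermediateThrowEvent>
```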
Adding humans in the loop
Problem: In some cases humans should be involved in decision making.
Core business processes, by their nature, have a substantial impact on people and business success. The community of users who implement their processes with Camunda don’t tend to use it for trivial processes, because those processes don’t have the level of complexity or require the flexibility that are core tenets of Camunda’s technology. With this in mind, it’s obvious that bringing AI Agents into the mix provokes concerns about oversight. Specifically, the kind of oversight that needs to be conducted by a person.
Continuing with our model, I’ve added some new functionality that does two things. The first is a pretty simple requirement: if it’s been decided that the Agent’s chain of thought has led to the wrong choice, an Escalation End event is triggered. This construction throws an event called “Doctor Oversight Needed”, which is caught by an event sub-process and creates a user task for a Doctor. A nice feature here is that the context remains intact, so the Doctor can look over the patient details, see what the AI suggested, even see why the chain of thought was determined to be wacky, and then they have the power to decide how to proceed.
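In BPMN 2.0 XML, that escalation hand-off can look roughly like the sketch below, with illustrative IDs and names: an escalation end event in the main flow, and an event sub-process that catches it and creates the doctor’s user task. I’ve assumed a non-interrupting start event so that any other in-flight work can continue while the doctor reviews.

```xml
<!-- Escalation definition (lives under <definitions>) -->
<bpmn:escalation id="doctor-oversight" name="Doctor Oversight Needed"
                 escalationCode="doctor-oversight-needed"/>

<!-- In the main flow: thrown when the chain of thought is judged to be wrong -->
<bpmn:endEvent id="wrong-choice-detected" name="Doctor oversight needed">
  <bpmn:escalationEventDefinition escalationRef="doctor-oversight"/>
</bpmn:endEvent>

<!-- Event sub-process: catches the escalation and hands the context to a doctor -->
<bpmn:subProcess id="doctor-review" name="Doctor review" triggeredByEvent="true">
  <bpmn:startEvent id="oversight-requested" isInterrupting="false">
    <bpmn:escalationEventDefinition escalationRef="doctor-oversight"/>
  </bpmn:startEvent>
  <bpmn:userTask id="review-agent-decision" name="Review Agent decision and patient context"/>
  <bpmn:endEvent id="review-complete"/>
  <bpmn:sequenceFlow id="r1" sourceRef="oversight-requested" targetRef="review-agent-decision"/>
  <bpmn:sequenceFlow id="r2" sourceRef="review-agent-decision" targetRef="review-complete"/>
</bpmn:subProcess>
```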
The second addition is a little more subtle but I think very important to maintaining the integrity of the process. It gives users the control of reversing a decision an Agent has made even long after the agent has made it.
This is done by adding an event-based gateway which can wait for an order sent in from a doctor who has decided that they want to work on a new treatment. Sending in this message does two things. First, it cancels the actions the Agent took (in this case, making a reservation for treatment); second, it triggers the same escalation event as the other branch, so the doctor once again gets full context and can make a new decision about the treatment.
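A sketch of that branch in BPMN 2.0 XML might look like the following. The IDs and names are illustrative, the escalation is the same “doctor-oversight” one from the previous sketch, and I’ve assumed a timer as the gateway’s “no override arrived” alternative, since an event-based gateway needs at least two events to wait on.

```xml
<!-- Message definition (lives under <definitions>) -->
<bpmn:message id="new-treatment-ordered" name="New treatment ordered"/>

<bpmn:eventBasedGateway id="await-outcome" name="Wait for override or review window to pass"/>

<bpmn:intermediateCatchEvent id="override-received" name="New treatment ordered">
  <bpmn:messageEventDefinition messageRef="new-treatment-ordered"/>
</bpmn:intermediateCatchEvent>
<bpmn:intermediateCatchEvent id="review-window-passed" name="Review window passed">
  <bpmn:timerEventDefinition>
    <bpmn:timeDuration>P7D</bpmn:timeDuration>
  </bpmn:timerEventDefinition>
</bpmn:intermediateCatchEvent>
<bpmn:sequenceFlow id="g1" sourceRef="await-outcome" targetRef="override-received"/>
<bpmn:sequenceFlow id="g2" sourceRef="await-outcome" targetRef="review-window-passed"/>

<!-- If a doctor overrides: undo the Agent's reservation, then escalate for a new decision -->
<bpmn:intermediateThrowEvent id="undo-agent-actions" name="Undo Agent's actions">
  <bpmn:compensateEventDefinition/>
</bpmn:intermediateThrowEvent>
<bpmn:endEvent id="hand-back-to-doctor" name="Doctor oversight needed">
  <bpmn:escalationEventDefinition escalationRef="doctor-oversight"/>
</bpmn:endEvent>
<bpmn:sequenceFlow id="g3" sourceRef="override-received" targetRef="undo-agent-actions"/>
<bpmn:sequenceFlow id="g4" sourceRef="undo-agent-actions" targetRef="hand-back-to-doctor"/>
```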
This shows that humans can be easily integrated at both the time of decision making by the Agent but also after the fact.
Guardrail important choices
Problem: AI could make choices that don’t align with fundamental rules.
While human validation is a nice way to keep things in check, humans are neither infallible nor scalable. So when your process has an important decision to be taken by an Agent, you don’t want to have to rely on a human to always check the result, or have to rely on Agents checking other Agents. You need guardrails that will not make mistakes. You need business rules.
BPMN’s sister standard DMN lets you visually define complex business rules that can be integrated into a process. If these rules are broken by a decision from an Agent, it’s caught early, before any further action is taken. Also, for the more financially conscientious users out there: it won’t cost you a call out to an AI Agent, so for high-throughput, predictable decisions it’s a great choice economically. But it gets even better, because in combination with BPMN’s Error event it can also ensure that any time the rules are broken, the violation is reported, understood and hopefully corrected. Using DMN also ensures auditable compliance. Because there’s no way for a process to break the rules, you can be absolutely sure that every instance of your process is both compliant and auditable. So if there are regulations guiding how your process should or should not perform, not only can the business rest assured that things aren’t going to go pear-shaped, it can also be proven to external auditors.
In this model I’ve added a DMN table that is triggered after the “Confirm Treatment Decision” task. The DMN table has a set of rules outlining treatments that should not be given based on existing conditions of the patient. These kinds of rules are designed to be easy to define and update, so as more treatments become available the rules can keep pace. If a decision made by the Agent breaks the rules, an Error event is triggered, and this registers the failure as an incident to be corrected so that the Agent can improve and violate fewer rules in the future.
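For illustration, here’s roughly what such a decision could look like in DMN XML. The conditions, treatments, hit policy and variable names are invented for the sketch (and are obviously not medical guidance); the real table would carry the clinically defined rules, and the business rule task that calls it would raise the Error event whenever the result comes back as “not allowed.”

```xml
<!-- Sketch of a guardrail decision table (illustrative values and variables). -->
<decision id="treatment-guardrails" name="Treatment guardrails">
  <decisionTable id="guardrail-rules" hitPolicy="FIRST">
    <input id="existing-condition" label="Existing condition">
      <inputExpression id="existing-condition-expr" typeRef="string">
        <text>existingCondition</text>
      </inputExpression>
    </input>
    <input id="proposed-treatment" label="Proposed treatment">
      <inputExpression id="proposed-treatment-expr" typeRef="string">
        <text>proposedTreatment</text>
      </inputExpression>
    </input>
    <output id="treatment-allowed" label="Treatment allowed" typeRef="boolean"/>

    <!-- Example prohibition: this treatment must not be given with this condition -->
    <rule id="rule-1">
      <inputEntry><text>"kidney disease"</text></inputEntry>
      <inputEntry><text>"contrast imaging"</text></inputEntry>
      <outputEntry><text>false</text></outputEntry>
    </rule>
    <!-- Default: everything not explicitly prohibited is allowed -->
    <rule id="rule-default">
      <inputEntry><text>-</text></inputEntry>
      <inputEntry><text>-</text></inputEntry>
      <outputEntry><text>true</text></outputEntry>
    </rule>
  </decisionTable>
</decision>
```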
Ad-Hoc human intervention
Problem: It should be possible to call on human intervention at any time.
Most AI Agents are built so that once assigned a task, they work on it within their little black box until they completely succeed or completely fail. Basically, AI Agents are transactions. The annoying side effect of this is that an AI Agent cannot just reach out mid-thought for human input, because the all-or-nothing design pattern means it can’t wait for a response. That’s not the case for AI Agents built with BPMN and Camunda.
As a process grows in complexity and more decision making is being left up to AI, it’s important to maintain human awareness of decisions and approvals when needed. BPMN events allow for users to be called on dynamically to check decisions or give input. These measures are incredibly important for further growth of an agentic process, because they reinforce trust and take minimal amounts of time from experts, who may only need to be called on for verification and validation of the most complex or consequential parts of the process.
Now, in the final iteration of the diagnostic process, I’ve added a couple of ways to be more dynamic about how human interaction is integrated, starting with the ad-hoc sub-process. There’s now an Escalation event called “Doctor’s Opinion Needed” that can be triggered at any time by the AI Agent if it feels it needs more context before continuing. Unlike previous events, this does not hand over decision making to the doctor, but instead informs the doctor that the Agent needs some advice in order to continue its diagnosis. The AI Agent then waits for a signal to return that indicates it has an answer to its query.
The agent can theoretically use this as often as it likes until it has all the information it needs for an informed decision.
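A sketch of this pattern in BPMN 2.0 XML could look like the following, again with illustrative IDs and names. I’ve assumed the “ask for an opinion” step is wrapped in an embedded sub-process that the Agent can activate from inside its ad-hoc sub-process: it throws the escalation, then parks on a signal catch event, while a non-interrupting event sub-process gives the doctor a user task and broadcasts the signal once they’ve answered.

```xml
<!-- Definitions-level elements -->
<bpmn:escalation id="doctors-opinion-needed" name="Doctor's Opinion Needed"
                 escalationCode="doctors-opinion-needed"/>
<bpmn:signal id="doctor-opinion-given" name="doctor-opinion-given"/>

<!-- One of the activities the Agent can activate inside its ad-hoc sub-process -->
<bpmn:subProcess id="ask-doctors-opinion" name="Ask for doctor's opinion">
  <bpmn:startEvent id="need-advice"/>
  <bpmn:intermediateThrowEvent id="request-opinion" name="Doctor's opinion needed">
    <bpmn:escalationEventDefinition escalationRef="doctors-opinion-needed"/>
  </bpmn:intermediateThrowEvent>
  <bpmn:intermediateCatchEvent id="await-answer" name="Wait for the doctor's answer">
    <bpmn:signalEventDefinition signalRef="doctor-opinion-given"/>
  </bpmn:intermediateCatchEvent>
  <bpmn:endEvent id="advice-received"/>
  <bpmn:sequenceFlow id="a1" sourceRef="need-advice" targetRef="request-opinion"/>
  <bpmn:sequenceFlow id="a2" sourceRef="request-opinion" targetRef="await-answer"/>
  <bpmn:sequenceFlow id="a3" sourceRef="await-answer" targetRef="advice-received"/>
</bpmn:subProcess>

<!-- Non-interrupting event sub-process: the doctor answers while the Agent keeps its context -->
<bpmn:subProcess id="provide-opinion" name="Provide doctor's opinion" triggeredByEvent="true">
  <bpmn:startEvent id="opinion-requested" isInterrupting="false">
    <bpmn:escalationEventDefinition escalationRef="doctors-opinion-needed"/>
  </bpmn:startEvent>
  <bpmn:userTask id="answer-agents-question" name="Answer the Agent's question"/>
  <bpmn:endEvent id="answer-broadcast" name="Answer provided">
    <bpmn:signalEventDefinition signalRef="doctor-opinion-given"/>
  </bpmn:endEvent>
  <bpmn:sequenceFlow id="b1" sourceRef="opinion-requested" targetRef="answer-agents-question"/>
  <bpmn:sequenceFlow id="b2" sourceRef="answer-agents-question" targetRef="answer-broadcast"/>
</bpmn:subProcess>
```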
The future of AI Agent design
AI agents are going to become ubiquitous for helping navigate a lot of the mundane parts of productivity very soon. For the most consequential parts of business, it’s going to take a little longer, because there’s a lot of risk inherent in giving decision-making power to components that can act without oversight. Moving from deterministic to non-deterministic processes is going to require businesses to rethink design principles. Once it starts to happen, though, it’s the place that’s going to benefit the most and have the biggest impact on the core business. While it’s still early days and I’m looking forward to seeing how new patterns beyond the ones I’ve talked about will change the way Agents impact business, I’m pretty confident that BPMN is going to be how we see AI Agent design and implementation where it matters most. As Jakob and Daniel have already suggested, those companies are going to be doing it with the best-placed technology, and simply put, that’s Camunda.

Read more about the future of AI, process orchestration and Camunda
Curious to learn more about AI and how we think it will impact business processes? Keep reading below.
- Agentic Process Orchestration
- Building Your First AI Agent in Camunda
- Operationalize AI by Blending Deterministic and Non-Deterministic Process Orchestration
- Why AI Agents Need Orchestration
- AI Tools and Process Orchestration, the Perfect Match for Developers
- How Artificial Intelligence can Enhance Your Business Process
- Understanding AI Prompt Engineering
- Revolutionizing Health Insurance Underwriting: Harnessing AI for Smarter, Faster, and Fairer Risk Assessment
- Composability for Best in Class Process Orchestration