Niall Deehan, Author at Camunda
https://camunda.com

The Benefits of BPMN AI Agents
https://camunda.com/blog/2025/05/benefits-bpmn-ai-agents/ (22 May 2025)

Why are BPMN AI agents better? Read on to learn about the many advantages to using BPMN with your AI agents, and how complete visibility and composability help you overcome key obstacles to operationalizing AI.

There are lots of tools for building AI agents, and at their core they all need three things. First, an agent needs to understand its overall purpose and the rules within which it should operate. For example, you might create an agent and tell it, “You’re here to help customers with generic requests about the existing services of the bank.” Second, it needs a prompt: a request that the agent can try to fulfill. Finally, it needs a set of tools: the actions and systems the agent has access to in order to fulfill the request.

Most agent builders will wrap up those three requirements into a single, static, synchronous system, but at Camunda we decided not to do this. We found that it creates too many use case limitations, it’s not scalable, and it’s hard to maintain. To overcome these limitations, we came up with a concept that lets us decouple these requirements and completely visualize an agent. This opens it up to far more use cases, not only on a technical level, but also in a way that alleviates a lot of the fears people have when adding AI agents to their core processes.

The value of a complete visualization

Getting insight into how an AI Agent has performed on a given task often requires someone to read through its chain of thought (this is like the AI’s private journal, where it details how it’s thinking about the problem). This will usually let you know what tools it decided to use and why. So in theory, if you wanted to check on how your AI Agent was performing, you could read through it. In practice, this isn’t feasible for two reasons:
1. It limits the visibility of what happened to a text file that needs to be interpreted.
2. AI agents can sometimes lie in their chain of thought—so it might not even be accurate.

Our solution to this is to completely visualize the agent, its tools and its execution all in one place.

Gain full visibility into AI agent performance with BPMN

An AI agent visualized in a BPMN process in Camunda

The diagram above shows a BPMN process that has implemented an AI agent. It has two distinct parts. The agent logic is contained within the AI Task Agent activity, and the tools it has access to are displayed within an ad-hoc sub-process. This is a BPMN construct that allows for completely dynamic execution of the tasks within it.

With this approach, the actions of an agent are completely visible to the user at design time and during execution, and can even be used to evaluate how well the process performs with the addition of an agent.

A heatmap showing AI agent tool performance in Camunda

The diagram above shows a heatmap of which tools take the longest to run. This is something that’s impossible to measure accurately with a more traditional approach to building AI agents.

Decoupling tools from agent logic

This design completely decouples the agent logic from the available tool set, meaning the agent finds out only at runtime which tools are at its disposal. The ramifications of this are quite profound. You can run multiple versions of the same process with the same agent but a completely different tool set. This makes context reversing far easier and also lets you evaluate the impact of adding or removing certain tools through A/B testing.

Improving maintainability for your AI agents

The biggest impact of this decoupling, in my opinion, is how it improves maintainability. Designers of the process can add or remove tools without ever needing to change or update the AI agent. This is a fantastic way of separating responsibilities when a new process is being built. While AI experts can focus on ensuring the AI Task Agent is properly configured, developers can build the tooling independently. And of course, you can also just add pre-built tools for the agent to use.

AI agent maintainability in Camunda

Composable design

Choosing, as we did, to marry AI agent design with BPMN means AI agent designers get access to all the BPMN patterns, best practices, and functionality that Camunda has been building over the last 10 years or so. While there’s a lot you gain because of that, I want to focus on just one thing here: composable architecture.

Composable orchestration is the key to operationalizing AI

Camunda is designed to be an end-to-end orchestrator of a diverse set of tools, rules, services, and people. This means we have designed our engine and the tools around it so that there is no limitation on what can be integrated. It also means we want users to be able to switch out services and systems over time, as they become legacy or a better alternative is found.

This should be of particular interest to a developer of AI agents, because it lets you not only switch out the tools the AI Agent has access to, but, more importantly, switch out the agent’s own LLM for the latest and greatest. Adding or even just testing out the behaviour of a new LLM no longer means building a new agent from scratch—just swap out the brain and keep the rest. This alone is going to lead to incredibly fast improvements and deployments to your agents, and help you make sure that a change is a meaningful and measurable one.

AI agent maintainability in Camunda (continued)

Conclusion

Building AI agents the default way other tools offer right now means adding a new black box to your system, one that is less maintainable and far more opaque in execution than anything else you’ve ever integrated. That makes it hard to properly evaluate and improve.

At Camunda we have managed to open up that black box in a way that integrates it directly into your processes as a first-class citizen. Your agent will immediately benefit from everything that BPMN does and become something that can grow with your process.

It’s important to understand that you’re still adding a completely dynamic aspect to your process, but this way you mitigate most concerns early on. For all these reasons, of the many, many AI agents that are going to be built this year, I’m sure the ones still in use by the end of next year will be the ones built in Camunda with BPMN.

Try it out

All of this is available for you to try out in Camunda today. Learn more about how Camunda approaches agentic orchestration and get started now with a free trial here.

Guide to Adding a Tool for an AI Agent
https://camunda.com/blog/2025/05/guide-to-adding-tool-ai-agent/ (21 May 2025)

In this quick guide, learn how you can add exactly the tools you want to your AI Agent’s toolbox so it can get the job done.

AI Agents and BPMN open up an exciting world of agentic orchestration, empowering AI to act with greater autonomy while also preserving auditability and control. With Camunda, a key way that works is by using an ad-hoc sub-process to clearly tell the AI agent which tools it has access to while it attempts to solve a problem. This guide will help you understand exactly how to equip your AI agents with a new tool.

How to build an AI Agent in BPMN with Camunda

There are two aspects to building an AI Agent in BPMN with Camunda.

  1. Defining the AI Task Agent
  2. Defining the available tools for the agent.

The AI Task Agent is the brain: it understands the context and the goal, and then uses the tools at its disposal to achieve that goal. But where are these tools?

Adding new tools to your AI agent

The tools for your AI agent are defined inside an ad-hoc sub-process that the agent is told about. This guide assumes you’ve already set up your Task Agent, which is easy to do because you just need the process model from this GitHub repo. The BPMN model without any tools should look like this:

The ad-hoc sub-process, emptied of tools

Basically I’ve removed all the elements from within the ad-hoc sub-process. The agent still has a goal—but now has no way of accomplishing that goal.

In this guide we’re going to add a task to the empty sub-process. By doing this, we’ll give the AI Task Agent access to it as a tool it can use if it needs to.

The sub-process has a multi-instance marker, so for each tool to be used there’s a local variable called toolCall that we can use to get and set variables.

I want to let the AI agent ask a human a technical question, so first I’m going to add a User Task to the sub-process.

A user task added to the ad-hoc sub-process as an AI agent tool

Defining the tool for the agent

The next thing we need to do is somehow tell the agent what this tool is for. This is done by entering a natural language description of the tool in the Element Documentation field of the task.

The tool’s description entered in the Element Documentation field

Defining variables

Most tools need specific variables in order to operate. Input variables are defined so that the agent is aware of what’s required to run the tool in question; they also help pass the current context of the process to the tool. Output variables define how we map the response from the tool back into the process instance, which means the Task Agent will be aware of the result of the tool’s execution.

In this case, to properly use this tool, the agent will need to come up with a question.

For a User Task like this we will need to create an input variable like the one you see below.

The local input variable defined on the tool task

In this case, we created a local variable, techQuestion, directly in the task. To both assign this variable and define it for the Task Agent, we need to call the fromAi function. To do that, we must provide:

  1. The location of the variable in question.
    • In this case that would be within the toolCall variable.
  2. A natural language description of what the variable is used for.
    • Here we describe it as the question that needs to be asked.
  3. The variable type.
    • This is a string, but it could be any other primitive variable type.

When all put together, it looks like this:

fromAi(toolCall.techQuestion, "This is a specific question that you’d like to ask", "string")

Next we need an output variable so that the AI agent can be given the context it needs to understand if running this tool produced the output it expected. In this case, we want it to read the answer from the human expert it’s going to consult.

The output process variable defined on the tool task

This time, create an output variable. You’ll have two fields to fill in.

  1. Process variable name
    • It’s important that this variable name matches the output expected by the sub-process. The expected name can be found in the output element of the sub-process, and as you can see above, we’ve named our output variable toolCallResult accordingly.
      The output element of the ad-hoc sub-process, showing the expected variable name
  2. Variable assignment value
    • This simply needs to take the expected variable from the tool task and add it to a new variable that can be put into the toolCallResult object.

So in the end the output variable assignment value should be something like this:

{ "humanAnswer" : humanAnswer }

And that’s it! Now the AI Task Agent knows about this tool, knows what it does and knows what variables are needed in order to get it running. You can repeat this process to give your AI agents access to exactly as many or as few tools as they need to get a job done. The agents will then have the context and access required to autonomously select from the tools you have provided, and you’ll be able to see exactly what choices the agent made in Operate when the task is complete.

All of this is available for you to try out in Camunda today. Learn more about how Camunda approaches agentic orchestration and get started now with a free trial here. For more on getting started with agentic AI, feel free to dig deeper into our approach to AI task agents.

Essential Agentic Patterns for AI Agents in BPMN
https://camunda.com/blog/2025/03/essential-agentic-patterns-ai-agents-bpmn/ (5 March 2025)

Learn how orchestration and BPMN can solve some of the most common limitations and concerns around implementing AI Agents today.



I’ve been reading a lot about the potential of adding AI Agent functionality to existing processes and software applications. It’s mostly cautionary tales and warnings about the limitations of AI Agents. So I decided to take some of the most common limitations, combine them with the most common cautionary tale, and talk about how orchestration with BPMN does an awful lot to solve these problems.

Let’s start by explaining our cautionary tale: healthcare. It’s very common for articles about agentic AI to eventually evoke caution in their readers with the words “Would you trust AI with your health?” I, like you, would not. People mention very specific reasons for this, and I wondered if I could use BPMN to create patterns that alleviate those fears. The idea being that if it works for a healthcare scenario, where the stakes are so high, surely it would work for any other kind of process.


So I started with this simple BPMN representation of a diagnosis process. A patient has some medical issue, and after getting all the information they need, the doctor confirms a diagnosis and makes a reservation for some kind of treatment. Confirmation is then sent to the patient. This model, as well as all of the others I’ll be referencing in this post, can be found here. So where do I start on my journey towards optimizing this with AI?

Visualize critical information

Problem: When adding an Agent how can I ensure its actions are auditable?

I’m going to jump right in by changing the model to both add AI Agent functionality while also addressing the issue of auditability.

By design, BPMN visualizes the execution of actions that will happen or have happened. This creates clear auditability, both as a log of events internally in the engine and when superimposed on the model itself. While the standard is mostly known for its structured way of implementing processes, it also has a great way of adding non-deterministic sections to a process. The symbol in question is the ad-hoc sub-process. It allows your process to break into an unstructured segment, which opens the door to AI Agent shenanigans: the agent can look at the context of the request and see a list of actions it can take. (Changes are highlighted below in green.)

Using this construct, the Agent has the freedom to perform the actions it feels are required by the context, and it is completely visible to the user how and why those choices are made. Each task, service, or event that is triggered by the AI is visualized in the very BPMN model that you create. Afterwards, once the AI has finished its work, the process can continue along a more predictable path.

Increasing trust in results

Problem: AI gets things wrong. How can I ensure these are caught and any damage is undone?

We’ve changed the process so that the Agent is going to be making choices and acting on them. Clearly the first thing to ask is: do you trust its results? Well, obviously you shouldn’t. So, in the next iteration of the process, not only have I added a pattern to adjudicate whether the correct choice was made, but I’ve also ensured that if an action has been taken as a result of that decision, it can be undone.

I’ve written before about how this can be done by analyzing the chain of thought output, but this pattern goes a little further. First by allowing the thought checking to happen in parallel to the actions that can be taken, and secondly by being able to actually undo any actions taken once a bad decision has been discovered.

How it works is that after the “Decide on Treatment” sub-process finishes, there are two possibilities:

  1. Treatment is needed and a reservation is made.
  2. No treatment is needed and nothing is reserved.

In both cases a check is made (in parallel) to ensure the decision makes sense. If it’s all good, we end. If some flawed logic is discovered, a Compensation event is triggered. This is a really powerful feature of BPMN, because it will check what actions have been taken by the process (in this case, the “Make Treatment Reservation” task may be complete) and undo that action (in this model, that means activating the “Cancel Reservation” task).

This solves two issues that you’d tend to worry about. It catches mistakes, and if those mistakes have led to bad actions it can undo them. And none of this actually slows down the process, because it’s all happening in parallel!

Adding humans in the loop

Problem: In some cases humans should be involved in decision making.

Core business processes, by their nature, have a substantial impact on people and business success. The community of users who implement their processes with Camunda don’t tend to use it for trivial processes, because those don’t demand the level of complexity and flexibility that is a core tenet of Camunda’s technology. With this in mind, it’s obvious that bringing AI Agents into the mix provokes concerns about oversight, specifically the kind of oversight that needs to be conducted by a person.

Continuing with our model, I’ve added some new functionality that does two things. The first is a pretty simple requirement: if it’s decided that the Agent’s chain of thought has led to the wrong choice, an Escalation End event is thrown. This construction throws an event called “Doctor Oversight Needed,” which is caught by the event sub-process and creates a user task for a doctor. A nice feature here is that the context remains intact, so the doctor can look over the patient details, see what the AI suggested, even see why the chain of thought was determined to be wacky, and then decide how to proceed.

The second addition is a little more subtle but I think very important to maintaining the integrity of the process. It gives users the control of reversing a decision an Agent has made even long after the agent has made it.

This is done by adding an event-based gateway which can wait for an order sent in from a doctor who has decided that they want to work on a new treatment. Sending in this message does two things. First, it cancels the actions the Agent took (in this case, making a reservation for treatment), and secondly it triggers the same escalation event as the other branch, and so now the doctor once again gets full context and can make a new decision about the treatment.

This shows that humans can be easily integrated at both the time of decision making by the Agent but also after the fact.

Guardrail important choices

Problem: AI could make choices that don’t align with fundamental rules.

While human validation is a nice way to keep things in check, humans are neither infallible nor scalable. So when your process has an important decision to be taken by an Agent, you don’t want to have to rely on a human always checking the result, or on Agents checking other Agents. You need solid guardrails that will not make mistakes. You need business rules.

BPMN’s sister standard, DMN, lets you visually define complex business rules that can be integrated into a process. If these rules are broken by a decision from an Agent, it’s caught early, before any further action is taken. And for the more financially conscientious users out there, it won’t cost you a call to an AI agent, so for high-throughput, predictable decisions it’s a great choice economically. It gets even better: in combination with BPMN’s Error event, the rules can also ensure that any time they are broken, it can be reported, understood, and hopefully improved upon. Using DMN also ensures auditable compliance. Because there’s no way for a process to break the rules, you can be absolutely sure that every instance of your process is both compliant and auditable. So if there are regulations guiding how your process should or should not perform, not only can the business rest assured that things aren’t going to go pear-shaped, but it can also be proven to external auditors.

In this model I’ve added a DMN table that is triggered after the “Confirm Treatment Decision” task. The DMN table has a set of rules outlining treatments that should not be given based on existing conditions of the patient. These kinds of rules are made to be easy to define and update, so as more treatments become available, the rules can keep pace. If a decision made by the Agent breaks the rules, an Error event is triggered; this registers the failure as an incident to be corrected so that the Agent can improve and violate fewer rules in the future.

Ad-Hoc human intervention

Problem: It should be possible to call on human intervention at any time

Most AI Agents are built so that once assigned a task, they work on it within their little black box until they completely succeed or completely fail. Basically, AI Agents are transactions. The annoying side effect of this is that an AI Agent cannot just reach out mid-thought for human input, because the all-or-nothing design pattern means it can’t wait for a response. That’s not the case for AI Agents built with BPMN and Camunda.

As a process grows in complexity and more decision making is being left up to AI, it’s important to maintain human awareness of decisions and approvals when needed. BPMN events allow for users to be called on dynamically to check decisions or give input. These measures are incredibly important for further growth of an agentic process, because they reinforce trust and take minimal amounts of time from experts, who may only need to be called on for verification and validation of the most complex or consequential parts of the process.

Now, in the final iteration of the diagnostic process, I’ve added a couple of ways to be more dynamic about how human interaction is integrated, starting with the ad-hoc sub-process. There’s now an Escalation event called “Doctor’s Opinion Needed” that can be triggered at any time by the AI Agent if it feels it needs more context before continuing. Unlike previous events, this does not hand over decision making to the doctor, but instead informs the doctor that the Agent needs some advice in order to continue its diagnosis. The AI Agent then waits for a returning signal that indicates it has an answer to its query.

The agent can theoretically use this as often as it likes until it has all the information it needs for an informed decision.

The future of AI Agent design

AI agents are going to become ubiquitous very soon for helping navigate a lot of the mundane parts of productivity. For the most consequential parts of business, it’s going to take a little longer, because there’s a lot of risk inherent in giving decision-making power to components that can act without oversight. Moving from deterministic to non-deterministic processes is going to require businesses to rethink design principles. Once it starts to happen, though, that’s where the biggest benefits will be felt and where the impact on the core business will be greatest. While it’s still early days, and I’m looking forward to seeing how new patterns beyond the ones I’ve talked about will change the way Agents impact business, I’m pretty confident that BPMN is going to be how we see AI Agent design and implementation where it matters most. As Jakob and Daniel have already suggested, those companies are going to be doing it with the best-placed technology, and simply put, that’s Camunda.

The AI-powered company of the future with Camunda

Read more about the future of AI, process orchestration and Camunda

Curious to learn more about AI and how we think it will impact business processes? Keep reading below.

Why AI Agents Need Orchestration
https://camunda.com/blog/2025/02/why-ai-agents-needs-orchestration/ (10 February 2025)

Help your AI make better choices and complete more complex tasks with agentic orchestration.

Currently, most organizations are asking themselves how they can effectively integrate artificial intelligence (AI) agents. These are bots that can take in some natural language query and perform an action.

I’m sure you’ve already come across various experiments that aim to crowbar these little chaps into a product. Results can be mixed. They can range from baffling additions that hinder more than help to ingenious, often subtle enhancements that you can’t believe you ever lived without.

It’s exciting to see the innovation that’s going on, and because of the kind chap I am, I’ve been wondering about how we can build our way towards improving actual end-to-end business processes with AI agents. Naturally, this requires us to get to a point where we trust agents to make consequential decisions for us, and even trust them to action those decisions.

So, how do you build an infrastructure that uses what we’ve learned about the capabilities of AI agents without giving them too much or too little responsibility? And would end users ever be able to trust an AI to make consequential decisions?

How AI agents will evolve

Most people I know have already integrated some AI into their work in some great ways. I build proof of concepts with Camunda reasonably often and use Gemini or ChatGPT to generate test data or JSON objects—it’s very handy. This could be expanded into an AI agent by suggesting that it generate the data and also start an instance of the process with the given data. 

This also tends to be the way organizations are using AI agents—a black box that takes in user input and responds back with some (hopefully) useful response after taking a minor action.

AI agent responses are usually opaque and offer little reasoning

Those actions are always minor of course and it’s for good reason—it’s easy to deploy an AI agent if the worst it’s going to do is feed junk data into a PoC. The AI itself isn’t required to take any action or make any decisions that might have real consequences; if a human makes the decision to use a court filing generated by ChatGPT… Well, that’s just user error. 

For now, it’s safer to keep a distance from consequential decision-making and the erratic and sometimes flawed output of AI agents. This more or less rules out utilizing the full potential of AI agents—because at best you want them in production systems making decisions and taking consequential actions that a human might do.

It’s unrealistic, however, to assume that this will last for long. The logical conclusion of what we’ve seen so far is that AI agents will be given more responsibility regarding the actions they can take. What’s holding that step back is that no one trusts them because they simply don’t produce predictable, repeatable results. In most cases, you’d need them to be able to do that in order to make impactful decisions.

So what do we need to do to make this next step? Three things:

  • Decentralize
  • Orchestrate
  • Control

Agentic AI orchestration

As I mentioned, I use several AI tools daily. Not because I want to, but because no single AI tool can accurately answer the diversity of my queries. For example, I mentioned how I use Gemini to create JSON objects. I was building a small coffee order process and needed an object containing many orders.

{"orders" : [
  {
        "order_id": "20240726-001",
        "customer_name": "Alice Johnson",
        "order_date": "2024-07-26",
        "items": [
          {
            "name": "Latte",
            "size": "Grande",
            "quantity": 1,
            "price": 4.50
          },
          {
            "name": "Croissant",
            "quantity": 2,
            "price": 3.00
          }
        ],
        "payment_method": "Card"
  },
  {
        "order_id": "20240726-002",
        "customer_name": "Bob Williams",
        "order_date": "2024-07-26",
        "items": [
          {
            "name": "Espresso",
            "quantity": 1,
            "price": 3.00
          },
          {
            "name": "Muffin",
            "quantity": 1,
            "price": 2.50
          },
                {
            "name": "Iced Tea",
            "size": "Medium",
            "quantity": 1,
            "price": 3.50
          }
        ],
        "payment_method": "Cash"
  }
]}

I then needed to use Friendly Enough Expression Language (FEEL) to parse this object to get some kind of specific information.

I didn’t use Gemini for this because it reliably gives me bad information when I need a FEEL expression. This is for a few reasons. FEEL is a new and relatively niche expression language, so there’s less data for models to be trained on. Also, I’m specifically using Camunda’s FEEL implementation, which contains some additional functions and little quirks that need to be considered. If I ask Gemini to both create the data object and then use FEEL to get the first order in the array, I get this:

Incorrect FEEL response from Gemini

This response is a pack of lies. So instead I ask an AI agent which I know has been trained specifically and exclusively on Camunda’s technical documentation. The response is quite different and also quite correct.

Answer from properly trained AI

I’m usually confident that Camunda’s own AI copilot and assistants will give me the correct information. It not only generates the expression, it also runs the expression with the given data to make sure it works. The consequences here aren’t so drastic anyway: I know FEEL pretty well, so I’ll be able to spot any likely issues before putting an expression into production.
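For reference, here’s a minimal sketch of what a correct expression looks like in Camunda’s FEEL, assuming the JSON object above has been stored in a process variable named orderData (that variable name is just for illustration):

orderData.orders[1]

FEEL lists are 1-indexed, so [1] returns the first order in the array, and an expression like orderData.orders[1].customer_name would drill down to a single field of that order.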

In this scenario, I’m essentially working as an orchestrator of AI agents. I’m making decisions to use a specific agent based on two main factors. 

  1. Trust: Which Agent do I trust to give me the correct answer?
  2. Consequences: How impactful are the consequences of trusting the result?

This is what’s blocking the effectiveness of true end-to-end agentic processes. I don’t know if I can trust a given agent enough to decide something and then take action that might have real consequences. This is why people are okay with asking AI to summarize a text but not to purchase flowers for a wedding.

Truth and consequences

So enough theory, let’s talk about the practical steps to increase trust and control consequences in order to utilize AI Agents fully. As I like doing things sequentially, let’s take them one at a time.

Trust

We’ve all experienced looking at the result from an AI model and asked ourselves, “Why?” The biggest reason to distrust AI agents is that, in most cases, you’ll never be able to get a good answer to why a result was given. In situations where you require some kind of audit of decision-making or strict guardrails in place, you really can’t rely on a black box like an AI agent.

There is a nice solution to this though—chain of thought. This is where the AI is clear about how the problem was broken down and subsequently lays out its thought process step by step. The clear hole in this solution is that someone is needed to look over the chain of thought, and here is where we can start seeing how orchestration can lend a hand.

Orchestration can link together services in a way that sends a query to multiple agents. When both have returned with their answer and chain of thought, a third agent can act as a judge to determine how accurate the result is.

Continuing with my example, it would be easier to tell a generic endpoint, “I’m using Camunda and need to use a FEEL expression for finding the first element in an array,” and have faith that this question will be routed to the agent best suited to answer it. In this case, that would be Camunda’s kapa.ai instance.

Building this with an orchestrator like Camunda that uses BPMN would be pretty easy.

In this process the query is sent into a process instance. Two different AI agents are triggered in parallel and asked who is best at handling this kind of request. The result is passed to a third agent that can review the chain of thought and make a determination from the results of both. In this case it’s probably clear that FEEL is something a Camunda AI would do a pretty good job of answering and you’d expect the process to be sent off in that direction.

In this case we’ve created a maintainable system where more trustworthy responses are passed back to the user along with a good indication of why a certain agent was chosen to be involved and why a certain response was given.

Consequences

Once trust is established, it’s not hard to imagine that you’d start to consider actions that should be taken. Let’s imagine that a Camunda customer has created a support ticket because they’re also having trouble getting the first element in an array. A Camunda support person could see that and think, That’s something I’m confident kapa.ai could answer—and in fact, I should just let the AI agent respond to this one.

In that case, we just need to make some adjustments to the model. 

In this model, we’ve introduced the action of accessing the ticketing system to find the relevant ticket and then updating the ticket with a trustworthy answer. Because of how we’ve designed the process, we would only do this in cases where we have a very high degree of trust. If we don’t, the information will be sent back to the support person, who can decide what to do next.

The future of AI orchestration

Providing independent, narrowly trained agents and then adding robust, auditable decision-making and orchestration around how and why they’re called upon will initially help users trust results and suggestions more. Beyond that, it will give architects and software designers the confidence to build in situations where direct action can be taken based on these trustworthy agents.

An orchestrator like Camunda is essential for achieving that step because it already specializes in integrating systems and lets designers tightly control how and why those systems are accessed. Another great advantage is far better auditability. Combining the data generated from stepping through the various paths of the process with the chain of thought output from each agent gives a complete picture of how and why certain actions were taken.

With these principles, it would be much easier to convince users that actions performed by AI without user supervision would be trustworthy and save a huge amount of time and money for people by removing remedial work like checking and verifying before taking additional steps.

Of course, it’s not true for everything, and I’m happy to say that I feel we should still leave court filing to humans. Eventually though I would expect we could offer AI agents not just the ability to action their suggestions, but to also choose the specific action.

BPMN has a construct called an ad-hoc subprocess in which a small part of the process decision-making can be handed over to a human or agent. This could be used to give an AI a limited amount of freedom about what action is best.

In the case above I’ve added a way for an AI agent to choose to ask for more information about the request if it needs to—it might do this multiple times before eventually deciding to post a response to the ticket, but the key thing is that if the agent knows it would benefit from more information it can perform an action that will help it make the final decision.

The future is trusting agents with what we believe they can achieve. If we give them access to actions that can help them make better choices and complete tasks, they can be fully integrated into end-to-end business processes.

Learn more about AI and Camunda

We’re excited to debut AI-enabled ad-hoc subprocesses and much more in our coming releases, so stay tuned. You can learn more about how you can already take advantage of AI-enabled process orchestration with Camunda here.

Beer Suggestions with DMN and AI
https://camunda.com/blog/2024/09/beer-suggestions-dmn-ai/ (25 September 2024)

DMN and AI can help solve an incredibly wide range of problems. Learn how it can help with one faced by many in October—picking the right beer to drink.

Introduction

The summer may be ending, but here in Germany we have learned to drown this disappointment in beer. So, as September gives way to October, the inevitable question is raised: “Which beer should I drink?” Most traditional celebrants of Oktoberfest are likely to insist that you drink something German that abides by the Reinheitsgebot, but I’m not so dogmatic and in fact want everyone to enjoy the perfect beer for them, whatever your tastes and wherever you’re enjoying the seasonal shift. This is why I took some time out of my very busy schedule to build a process to help do just that.

A long while ago, I created a project that used DMN to implement a Sorting Hat (as seen in Harry Potter). This time around, I wanted to expand on that idea a bit because we now happen to have some new tools at our disposal, specifically AI. I discussed in this post how you can combine DMN and AI to create a really useful way to categorize investment risk. Now, it’s time to combine DMN with AI in order to create something a lot less useful, but a lot more fun.

Building the DMN table

When asking people to describe the kind of beer they like, it’s good to start by understanding the beer style that best suits their palate. You can learn a lot from finding out how sweet, fruity, and hoppy they like their beer. It’s also good to know if they like malty beer or not. So, I created a DRD that represents these inputs for your table.

A DRD with options for deciding which beer to recommend

Each input is a rating between 1 and 10, except for malty, which is a boolean, entirely because the table would be more boring if they were all numbers. With those defined, I started to create some rules.

The beer style rules in a DMN decision table

The first important aspect of the table is the Hit Policy, which I decided should be “Collect”. This means that multiple rules could potentially match, because beer is a spectrum and there’s a good bit of overlap on taste.

The rules themselves come from my own personal opinions and some light Googling; that is to say, if you disagree with any of the rules here, feel free to create an issue (which I will likely ignore, because I’m right and you’re wrong).
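To give a rough sense of what the rows contain (the values below are purely illustrative and not the actual rules from the project), each input cell is a FEEL range test and the output cell names a beer style:

Sweet      Fruity     Hoppy     Malty    Beer Style
[6..10]    [6..10]    [1..4]    false    "Wheat Beer"
[4..9]     [7..10]    [1..5]    false    "Belgian Ale"

With the Collect hit policy, an input that satisfies both rows returns both styles, which is exactly what happens in the example run later in this post.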

If this table works the way it should, then you should have one or more beer styles that would suit your preferences. But there’s also the possibility that nothing matches at all! Or, you might know you should drink a stout, but have no idea which ones you can get your hands on! These issues and more are solved by integrating BPMN into the picture.

Building the BPMN model

Adding this DMN table to a BPMN model expands the scope of this invaluable process: we can integrate other systems and respond differently depending on the results. In this case, I wanted to add a bunch more features.

  1. A front end where people could enter the data for the table to use
  2. A way to suggest a local beer that suited the beer style we’re suggesting
  3. Email integration so that results can be sent back to the user
The beer recommendation process in BPMN with Camunda

The start event will host a front end, and the data gathered there is passed to the DMN table via the “Decide on Beer Style” task—and then the interesting stuff happens.

If the DMN table manages to find a beer style for you, I use the OpenAI Connector to ask ChatGPT to suggest places nearby where you can find beers in that style. If not, you can trust ChatGPT to suggest an alternative.

In both cases, an email with the suggested beverage is then sent using the SendGrid Connector.

Building a front end

At the start, we need to capture a bunch of data—not just the information used to decide on the beer style, but also where the user is located, so we can suggest beers available nearby, as well as an email address so the results can be sent back to them. I used the form builder to create this pretty simple form that gathers everything we need.

The beer recommendation start form

Deploy and Run

Now all that’s left is to follow the deployment instructions in the project’s readme file and give it a spin.

You can start the process from Tasklist by filling in the start form.

Starting the process from Tasklist

You can then go to Operate to view the progress of the process as well as the process variables.

Monitoring the process in Operate

But you can also drill down into the DMN table to see exactly how the rules performed. In this case, we can see that Reb Brown appears to have landed two styles, Wheat Beer and Belgian Ale—both of which are great, of course.

The evaluated DMN table showing the matched rules

Finally, Reb can check his email to find some suggestions on where to get beers in his chosen styles in Dublin where he definitely lives, because he is indeed a real person.

The beer suggestions email

Finally…

You can find the project in all its glory right here, where you can clone it and run it for yourself. You can also skip all that and get right to the beer by filling out the start form. Like magic, your beer options will appear in your mailbox.

Prost to Oktober!

Why BPMN Interchange Is So Important
https://camunda.com/blog/2024/08/why-bpmn-interchange-is-so-important/ (6 August 2024)

Explore the latest tools supporting the BPMN community, built by users, practitioners, and champions of the standard.

I wrote before about why I think BPMN is brilliant, and in that post I shared a little bit about the community. Recently, the main BPMN tool builders worked together to create a demo, the BPMN Interchange, that showcased the power of BPMN as an open standard. I wanted to take some time to explain why this collaboration is so important, both for the community of users and the BPMN standard itself. To do that, I’m going to discuss how and why 11 different BPMN vendors decided to get together and show off how well they all work together—and it all starts with BPMN’s open standards.

Open standards allow flexibility

When you see a BPMN model, you might not be aware that everything you see visually is also being represented by an XML metamodel. For example, behind this simple process:

Simple line drawing showing beginning and end of one task

…is this jumble of text:

<?xml version="1.0" encoding="UTF-8"?>
<bpmn:definitions xmlns:bpmn="http://www.omg.org/spec/BPMN/20100524/MODEL" xmlns:bpmndi="http://www.omg.org/spec/BPMN/20100524/DI" xmlns:dc="http://www.omg.org/spec/DD/20100524/DC" xmlns:di="http://www.omg.org/spec/DD/20100524/DI" xmlns:modeler="http://camunda.org/schema/modeler/1.0" id="Definitions_1kdt7t5" targetNamespace="http://bpmn.io/schema/bpmn" exporter="Camunda Modeler" exporterVersion="5.24.0" modeler:executionPlatform="Camunda Cloud" modeler:executionPlatformVersion="8.5.0">
  <bpmn:process id="Process_1yomcha" isExecutable="true">
	<bpmn:startEvent id="StartEvent_1">
  	<bpmn:outgoing>Flow_1of45it</bpmn:outgoing>
	</bpmn:startEvent>
	<bpmn:sequenceFlow id="Flow_1of45it" sourceRef="StartEvent_1" targetRef="Activity_00i8pry" />
	<bpmn:endEvent id="Event_0kpltiu">
  	<bpmn:incoming>Flow_1pmm9i9</bpmn:incoming>
	</bpmn:endEvent>
	<bpmn:sequenceFlow id="Flow_1pmm9i9" sourceRef="Activity_00i8pry" targetRef="Event_0kpltiu" />
	<bpmn:userTask id="Activity_00i8pry" name="Just a Task">
  	<bpmn:incoming>Flow_1of45it</bpmn:incoming>
  	<bpmn:outgoing>Flow_1pmm9i9</bpmn:outgoing>
	</bpmn:userTask>
  </bpmn:process>
  <bpmndi:BPMNDiagram id="BPMNDiagram_1">
	<bpmndi:BPMNPlane id="BPMNPlane_1" bpmnElement="Process_1yomcha">
  	<bpmndi:BPMNShape id="_BPMNShape_StartEvent_2" bpmnElement="StartEvent_1">
    	<dc:Bounds x="179" y="99" width="36" height="36" />
  	</bpmndi:BPMNShape>
  	<bpmndi:BPMNShape id="Event_0kpltiu_di" bpmnElement="Event_0kpltiu">
    	<dc:Bounds x="432" y="99" width="36" height="36" />
  	</bpmndi:BPMNShape>
  	<bpmndi:BPMNShape id="Activity_10ligid_di" bpmnElement="Activity_00i8pry">
    	<dc:Bounds x="270" y="77" width="100" height="80" />
  	</bpmndi:BPMNShape>
  	<bpmndi:BPMNEdge id="Flow_1of45it_di" bpmnElement="Flow_1of45it">
    	<di:waypoint x="215" y="117" />
    	<di:waypoint x="270" y="117" />
  	</bpmndi:BPMNEdge>
  	<bpmndi:BPMNEdge id="Flow_1pmm9i9_di" bpmnElement="Flow_1pmm9i9">
    	<di:waypoint x="370" y="117" />
    	<di:waypoint x="432" y="117" />
  	</bpmndi:BPMNEdge>
	</bpmndi:BPMNPlane>
  </bpmndi:BPMNDiagram>
</bpmn:definitions>

Reading this XML is not important for the vast majority of users, but it is important to know that the BPMN specification, in addition to describing how all of the symbols look and function, also describes how those symbols are represented in XML. As a result, any modeling tool that wants to properly implement BPMN can start by converting the XML to the visual representation (or vice versa).

The XML standard also means that all tools that properly implement the BPMN standard will be able to display a BPMN model, irrespective of where and how it was created. If you’ve built your model in Trisotech and want to continue modeling it in Camunda—no problem, just move the file from one tool to another.

Following the XML standard for BPMN means that there shouldn’t be any vendor lock-in for your BPMN tooling. If you like, you can even switch from one tool to another depending on the context in which you model. For example, back in 2013, before we had built our BPMN.io tooling, Camunda was simply a company with a BPMN engine that understood the XML and was able to execute those symbols. Our original Eclipse-based modeler was designed specifically for developers to add execution semantics. At that time, most process designers would pick a more design-focused modeler like Signavio to model their process, then transfer the file to Camunda to add the technical attributes.

These days, of course, we have a modeler that can do both, but it was the openness of the BPMN standard that created the space for persona-defined tooling and allowed for a huge variety of options for the community of users—and lots of potential for the BPMN tool makers.

The Model Interchange Working Group demo

Now that we understand how BPMN allows flexibility between different tools, we can talk about the collaboration between 11 different modeling tools. The group behind this annual demo is the Model Interchange Working Group (MIWG). This group is made up of representatives from 11 BPMN vendors, including our very own Camundi Maciej Barelkowski and Falko Menge. The goal of the group is to facilitate cooperation and support among BPMN vendors so that models built in any tool can be understood and properly rendered by any other. The annual demo is a way of showcasing that the group’s work is really accomplishing its goal.

This year’s demo for the BPMN Interchange is now available on YouTube and features three BPMN models being built piece-by-piece with 11 different modeling tools—live—over the course of about 30 minutes. I think it’s kind of amazing. This demo exists as a testament to how well and thoroughly these tools implement the BPMN standard, but also to showcase the strength of the community of vendors supporting BPMN.

Each year the demo is built around a specific theme—this year’s was subprocesses. That includes Embedded Subprocesses, Call Activities, and of course Event Subprocesses. To demonstrate these symbols, three interconnected models were built.

Three business process models using established symbols to indicate actions

Below, you’ll see the model that the Camunda Modeler was involved in creating.

Subprocess built with Camunda's web modeler

Camunda’s Web Modeler, which is built on BPMN.io, is used to build the timeout event subprocess. I was particularly happy about that because as I said during the recording, it truly is my favorite BPMN symbol. That isn’t where our involvement ends though—those open source BPMN.io libraries we started building back in 2014 are now well-established in the BPMN community and available to anyone who might be interested in using them. In this demo, three out of the 11 vendors chose BPMN.io as their preferred tool to display and edit BPMN models. Learning that was very exciting, and we hope to see more in future demos.

Conclusion

You’ll often hear the term “community” being used as a synonym for “users,” but this misrepresents how a real community dynamic can function. In the BPMN community, there are indeed users and practitioners who work with the standard and perhaps champion its usage. However, it’s important to note that this community also includes the people maintaining the standard itself and, of course, those building the tools so that the standard can be the most effective. These groups also overlap! These individuals don’t exist in isolation. They are connected by a shared, honest, and unironic love for the standard, and are working together to make it the success it is.

While BPMN is popular because it is, indeed, brilliant, the community around it is a big reason it shines the way it has. The BPMN community is brilliant in its own right—where else would you find would-be competitors working together to prove that each is as good as another because of the shared standard they implement?

Watch the demo

You can watch the BPMN Interchange demo in action on YouTube or just click Play below. The demo runs every year, and as I mentioned, we try out a different theme each year, so feel free to suggest in the comments any themes, symbols, or patterns you’d be interested in seeing at the next demo. The MIWG is also open to more vendors taking part—so if you use a modeling tool that you’d like to see as part of the group, it’s a great idea to put them forward.

How Decision Modeling and AI Help with Risk Assessment
https://camunda.com/blog/2024/07/how-decision-modeling-and-ai-help-with-risk-assessment/ (9 July 2024)

Easily orchestrate different departments and end-user requirements for risk assessment with AI and decision modeling.

The financial and insurance industries depend heavily on their ability to correctly determine the risk involved in any given investment. If there’s a low chance of risk, investing for a certain return is a great idea; of course, if something is determined to be very risky, it may not be worth putting up any investment. But this very rudimentary explanation skips over the incredibly complex question: “How do you determine the risk of a given investment?”

When risk assessment systems are built with Camunda, they tend to use DMN (Decision Model and Notation). This is an open standard from the folks who brought us BPMN, and it attempts to marry a business-friendly tool for describing rules with the directly executable metamodel that made BPMN 2.0 so popular.

But risk analysis is more than just a set of rules. In this post, I’m going to talk through a pattern I’ve come across based on working on projects like these over the years.

I’ll be using BPMN to orchestrate DMN tables, front-end applications, and AI integration, all of which will demonstrate a solid pattern for how you can deal with risk analysis.

Decision modeling for risk assessment workflow

Using DMN to make decisions

Let me quickly explain the following DMN table. The columns represent inputs and outputs, and the rows act as individual rules that are activated or not based on the value of the variables they’re testing.

Table with run risk rules and hit policy

The idea is that given a set of inputs, the rules are evaluated and an output is produced, which should give you an answer to some kind of question. In this case, the question is, “What are the risks of this investment?”

So, if you have an input like:

{
    "incomeLastYear" : 11000,
    "incomeThisYear" : 29000,
    "purchases" : ["Home"],
    "assets" : ["Home"]
}

It will trigger the first rule, with the output being a text description and a score—specifically, Purchased a second home on low salary and 40.

The most interesting thing about this table is actually the Hit Policy, which determines how the rules are activated. The most common hit policy is First, which simply means that the rules are evaluated in order and the first rule that matches is returned.

Here, though, I’m using the Collect hit policy, which means that all matching rules are returned. It’s ideal for this particular use case because you want to find all of the reasons why, for the given input, it could be a risky investment. So, if the given input is:

{
    "purchases": ["Horse","Art","Boat","Other"],
    "incomeLastYear" : 2000,
    "incomeThisYear" : 219872346,
    "assets" : ["Car","Home"]
}

The output will be a description and score for each flagged issue. 

[
    {
        "description": "Big increase in income",
        "score": 40
    },
    {
        "description": "A large unspecified purchase has been added to details",
        "score": 20
    },
    {
        "description": "Has sent in details on time.",
        "score": 0
    }
]

From an end-user point of view, it would show the highlighted rules as well as the input:

Output shows highlighted rules

What this gives you in the end is a complete list of all potential risks in both natural language and as a score. You can then use this to both route a process and help process participants handle the human tasks. 

Integrating DMN with BPMN

These two standards—DMN and BPMN—are designed to complement each other, and integrating them isn’t hard. So let’s talk about the value of integrating them.

The following image displays a business rules task called Run Risk Rules. It takes the data in the current context of the BPMN process and feeds it into a DMN table.

I’ve already described what happens inside the table, but what about after it’s executed?

Taking data from BPMN and feeding into DMN

As I explained earlier, you have multiple outputs, one of which is a score. Each rule has a specific score aimed at highlighting the riskiness of the rule triggered. When you get the output, simply add all of those scores together to get a single risk score.

riskScore = sum(riskResults.score)

You can then use that value to decide whether to automatically accept or reject the applicant. If it’s borderline, you’ll want to involve a human to investigate, and the investigator can then decide whether to reject the application.
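As a rough sketch, the conditions on the outgoing sequence flows of that decision could be plain FEEL boolean expressions like the ones below (the thresholds are invented for illustration; a real project would tune its own):

riskScore <= 20
riskScore >= 60
riskScore > 20 and riskScore < 60

The first path auto-accepts, the second auto-rejects, and the middle band routes the application to a human investigator.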

This is where you can make really good use of OpenAI.

Using OpenAI to summarize results

In a situation where someone has been tasked with investigating an application, you’ll have some requirements. The first is that you need to give the investigator all the necessary information, but you don’t want to overload them or give them information they shouldn’t have.

I’m thinking specifically of a requirement I came across on a project where the investigator was not supposed to know what the rules of the DMN were. Those rules were highly confidential, and only the team tasked with creating and maintaining the rules was supposed to know about them. The investigation team was much, much bigger, and there was a fear that if someone knew the rules, they would be able to find ways around them.

So there was a bit of a conundrum: how do you give the investigator enough information so that they can do their job without giving away confidential data?

One solution is to use AI to condense the long list of rule descriptions into a single short paragraph. This gives the investigator an indication of what was flagged without ever revealing the rules themselves.

In this example, sending the following prompt to OpenAI…

"Someone has submitted their financial details for a risk assessment write a summary of these Findings: " + riskDescriptions + " with detailed suggestions of potential actions for further investigations"

…will return an output to the end user that looks like this:

The financial details submitted for the risk assessment show that a large unspecified purchase has been added to the individual’s expenses. The income reported does not seem to support the required upkeep of assets. However, it is noted that the details were submitted on time. 

Based on these findings, further investigation is recommended. This may include requesting more specific details about the large purchase, verifying the accuracy of the reported income, and assessing the individual’s overall financial stability. It is important to closely monitor any discrepancies and take appropriate action to mitigate potential risks.

This gives a simple, easy-to-understand summary of where to start an investigation while also making it quite difficult to reverse-engineer these suggestions in a way that would reveal the rules.

Summary

There’s a lot to like about this example, and if you want to try it for yourself, the code is available with instructions on how to get it running. But it’s worth taking a step back and seeing how well this solution manages to easily orchestrate different departments and end-user requirements.

It manages to add some useful features for external systems while maintaining the context and flow throughout the process. It also gives a clear indication of what happens at runtime and makes redesign and innovation easier.

I hope this example of combining AI with decision modeling manages to inspire you to look at your own use cases and consider a more process-focused approach.

The post How Decision Modeling and AI Help with Risk Assessment appeared first on Camunda.

]]>
Thinking Outside the Microservice Box: End-to-End Solutions (Part 3) https://camunda.com/blog/2024/06/thinking-outside-the-microservice-box-end-to-end-solutions-part-3/ Mon, 24 Jun 2024 23:01:39 +0000 https://camunda.com/?p=110757 Free up services, give yourself clarity, and implement change fast with end-to-end process orchestration.

The post Thinking Outside the Microservice Box: End-to-End Solutions (Part 3) appeared first on Camunda.

]]>
You might already do some kind of process orchestration. It’s inevitable if you’re looking to improve efficiency or digitalization. This post will help you understand what it means to embrace not only process orchestration but specifically end-to-end process orchestration.

Both concepts share the need to make repeatable, predictable, and efficient processes. The difference is the scope of the project and the return on investment, which for true end-to-end processes can be quite remarkable.

End-to-end processes should directly orchestrate every system, service, and user involved in the process from the very beginning to the very end, regardless of the technical and departmental boundaries the process may cross. Creating these processes also requires communication between stakeholders from both business and technical teams. While this might seem like an insurmountable project, orchestration engines like Camunda are designed for this specific purpose. The benefits are very much worth the integration effort. 

In this post I’m going to discuss what a true end-to-end solution looks like and compare it a bit to a classic microservice architecture. Finally, I’ll highlight the benefits for both software architects looking for something sustainable and fast and decision-makers who want a clear understanding of their business. 

Building an end-to-end solution?

When thinking about an end-to-end solution, you need to think of the absolute beginning and the absolute end of your process.

There’s no better way to define that than opening a BPMN modeler and creating a start event and end event and naming them accordingly. 

Defining start and end events

Congratulations! You’ve just defined the scope of your process. It might seem like a trivial step, but this is fundamental to success. Defining the state you’re expecting at the beginning and end of the process helps you clarify both what’s in scope and what’s out of scope.

For instance, this example uses book has been sold as an end event. This should make it very clear that returning a book after it’s sold is out of scope for this process. 
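In BPMN 2.0 XML, that scope definition is about as small as it sounds. Here’s a minimal sketch, where the process ID and the start event’s name are my own placeholder assumptions rather than anything from the example model:

<bpmn:process id="Process_BookSale" isExecutable="true">
  <bpmn:startEvent id="StartEvent_OrderReceived" name="Book order received" />
  <!-- ...all the tasks, gateways, and events of the process get filled in here... -->
  <bpmn:endEvent id="EndEvent_BookSold" name="Book has been sold" />
</bpmn:process>

Everything that happens between those two events is in scope; everything else, like handling a return, is somebody else’s process.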

Defining all steps in a process

Once you understand all of the tasks, actions, events, and decisions that need to take place, you can take a look at the variety of systems that are involved. This will help you understand what needs to be built or accessed in the process to make it work. It also shows you the individual responsibilities of each service.

Take for example the Check Delivery Options task. It’s responsible for letting you know if delivery is possible, and if so how. But it’s not responsible for actively communicating to the user if there isn’t an option. It’s also not responsible for when and how to send the information to the logistics company. This becomes the responsibility of the orchestrator itself.

Making these kinds of decisions visible at design time is incredibly helpful, but it’s more important to have this visualized after production. 

Incorporating all departments, people, and technologies within a process

Simply put, an end-to-end process aims to incorporate all departments, people, and technologies without limitation on the natural business scope of the process. People follow this approach for both technical and business reasons.

Technical benefits

A typical microservice project has uniform communication and data structures. Because of this, their scope is limited to systems that can follow those requirements. Microservice projects also tend to embed the logic of process routing in the services themselves, because there isn’t really another way to do it (more about that here).

Adding a process orchestration component to the architecture immediately removes these two limitations (limited scope and logic embedded in the services) by:

  1. Giving you access to services and systems that would normally be a pain to integrate into a microservice project.
  2. Adding a component designed to keep ownership of all routing logic, so you no longer need that code in your services.

In fact, you now have a central location where updates can be made to the routing of your process independently of the processes being executed. This will also work as centralized error handling and give you the ability to manipulate running instances in flight in ways that would normally be impossible.

Process instance modification

Process instance modification—you can move or cancel the state of a running instance.

This is just the tip of the iceberg. Externalizing the routing logic also gives you version control over it, meaning changes are faster and easier to make and can be reverted if necessary, all with minimal impact on the systems, services, and people involved in the orchestration.

Business benefits 

Improving business outcomes has two stages—getting answers about your processes and implementing change.

Getting answers to questions about your business processes

“How long does it take from a customer order to a delivery of the product?” is a basic question I would hope most companies have some kind of answer to. However, it’s also a question that isn’t going to prompt any specific action except maybe further investigation.

You need questions that are going to help you understand the state of the process and give you ideas about where to find improvements, like: 

  • “What are the correlations between orders that end up being late?”
  • “Which elements of the process have the longest delay in terms of KPIs?” 
  • “Which parts of the process should we be scaling up in order to increase speed of delivery?”

If you work with processes distributed across unconnected functions and hidden in log files or stack traces, you’re in luck. The entire industry of process mining exists to try to help discover the answers to questions like these.

It’s quite obvious as well that if you build a process using an orchestration engine and implement it end to end, you won’t need to buy additional tooling to find out what’s already there. But that’s not even the main benefit of the One Model approach.

Report showing the most common path through the process

Report showing the average duration that elements take to complete. 

Implementing the changes that have been identified

The second stage of improving business outcomes is implementing the changes that you identified in stage one. This is truly where the business value cannot be matched. The speed at which new and improved processes can be built and deployed can be measured in days.

These changes can even be implemented in parallel with the older processes, where you can A/B test versions to ensure the desired outcome. It would be a monumental task to accomplish this with traditional software architecture.

Conclusion

When you’ve embedded decision-making inside your services and you’ve had to limit your architecture to exclude diverse endpoints and systems, you simply don’t have a clear overview of what your end-to-end process looks like.

You might know which services are executed, but if you were to pick out a single execution, it would be really hard to know how it got to where it is and where it’s going to go next. Impossible in fact, unless you dive into the code. That’s a problem because this kind of information is vital to process improvement, and process improvement is fundamental to achieving business goals.

Basic but essential questions like, “Do our processes run efficiently?” or “What causes some processes to take too long or even fail?” would require someone to really dig into the system to answer accurately. Even if successful, making suggested changes would require code changes across multiple services. Measuring whether those changes made a positive impact is another round of research. 

Starting with an end-to-end process orchestration approach frees up services from having to do the routing, gives you a visual clarity on what’s happening, and lets you implement changes far faster than any alternative. 

The post Thinking Outside the Microservice Box: End-to-End Solutions (Part 3) appeared first on Camunda.

]]>
BPMN Is Brilliant, Actually https://camunda.com/blog/2024/06/bpmn-is-brilliant/ https://camunda.com/blog/2024/06/bpmn-is-brilliant/#comments Mon, 17 Jun 2024 11:30:00 +0000 https://camunda.com/?p=110092 There are good reasons BPMN has been so popular for so long. Let's talk about what BPMN can really do and why it's brilliant.

The post BPMN Is Brilliant, Actually appeared first on Camunda.

]]>
I don’t understand why people still use writing as a means to communicate their thoughts. Not only is it over 3000 years old, but I can’t spell particularly well, and learning how would be hard. Luckily, I’ve developed an entirely new method of communicating thoughts! Best of all, this new method is proprietary and will only work on the pens and paper I sell you.

This is the thesis for anything you’ll read railing against Business Process Model & Notation (BPMN)’s long-running popularity. It’s an ISO standard designed for describing the steps, decisions and events that might occur over the course of a process. What’s interesting about arguments against the standard is the absence of a direct comparison to an alternative. I’ve been using BPMN to model and execute processes for almost ten years. I’ve also used a bunch of alternatives, and I’ve never seen any that can hold a candle to what’s possible with BPMN. I think it’s time I do what many businesses don’t do when flaunting their alternatives: I want to talk about what BPMN can really do and show you why it’s brilliant, actually.

Observability and clarity

As opposed to all current alternatives, BPMN was not initially designed to be executable. This has, strangely enough, proven to be a big part of why it’s so much better. The first version of BPMN was built to be a notation that would make it easy for people to design and share processes. The result was a notation that heavily valued visual clarity even when showing a lot of complexity. How quickly someone could understand a diagram presented to them was treated as a core design principle. The second version of BPMN added the execution semantics as well as a few missing symbols. The benefit of creating an executable model from an already well-tested and proven visual framework was key to its success. It inspired a large number of companies to start building process engines that could take BPMN models and directly execute them.

Making it possible for process analysts and software engineers to work together on the same notation at design time was a massive step forward; nothing was lost between design and development because everything had to be in the model—or it wouldn’t happen. These days, it’s gone even further; DevOps and Support Engineers now have tools that can integrate BPMN, making it much easier to understand the current state of a process and to make changes to it.

Bpmn-observability-clarity

This graphic demonstrates how, with BPMN and good tools, you can easily monitor and manipulate state, for example by moving the current state of the process from “Check Details” back to a gateway so that it can be re-evaluated.

Finally, beyond just maintaining a running system, BPMN is in a unique position to help facilitate iterative improvements. If a process designer is interested in whether what they built is running as efficiently as they assumed, they just need to look. The exact process that was designed and deployed is now a key factor in helping suggest and validate improvements.

Iterative-improvements-optimize

This graphic combines the real process data with the process model to give a clear and quick understanding of which part of the process is taking the longest. In this case, “Check Details” is bright red, which indicates there could be a problem.

Diversity of events

Fundamental to basic process modeling and automation are tasks. Running a microservice, involving a human worker, or even just running a script—all possible in BPMN and even in some alternatives. But, being able to build and execute “basic processes” is not a high bar. Businesses that want to see their core processes automated do not want to compromise on the tools provided for that automation. They want to use their preferred tools to automate document processing, customer relationship management, back office functions, and more. They also want the freedom to swap out automation tools whenever there’s a technical need or a business need to do so. 

One example worthy of a showcase is that of BPMN events. The diversity and flexibility of these events demonstrate a larger theme that runs through the whole standard.

An image showing the BPMN symbols for a Message Event, a Timer Event, a Conditional Event, a Signal Event, and an Error Event.

There are many types of events in BPMN, but I want to focus on some common ones and explain why they’re an important foundation of its success.

Utilizing Message events

This is the most common way to communicate between two processes. The message throw event is always sent to an external source (i.e., leaves the scope of the current process) and always has exactly one recipient. It’s a straightforward and reliable way of communicating within a process engine because the engine itself is responsible for delivering the message. You just need to define a message name and correlation keys.

E.g., An order process can send a message to a procurement process and continue again when the procurement is completed.
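As a rough sketch of how that looks in the XML (using Camunda 8’s Zeebe extensions, with the zeebe namespace declared on the definitions element, and the message name, correlation key, and IDs assumed for illustration), the order process would wait at an intermediate catch event subscribed to a named message:

<bpmn:intermediateCatchEvent id="Catch_ProcurementCompleted" name="Procurement completed">
  <bpmn:messageEventDefinition messageRef="Message_ProcurementCompleted" />
</bpmn:intermediateCatchEvent>

<bpmn:message id="Message_ProcurementCompleted" name="procurement-completed">
  <bpmn:extensionElements>
    <!-- The engine uses this key to correlate the incoming message to the right process instance -->
    <zeebe:subscription correlationKey="=orderId" />
  </bpmn:extensionElements>
</bpmn:message>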

Utilizing Timer events

Does your process need to wait for a future date or some duration of time? Most processes do, and the timer event just needs to be told how long. The process engine handles the waiting and the waking.  

E.g., Wait until 2 weeks before the end date before sending a renewal request.
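In the XML, the timer just needs an ISO 8601 duration or a date expression. For the renewal example, a sketch might look like the following, assuming the process has a renewalEndDate variable and that a FEEL date calculation is used for the timer, as Camunda 8 allows:

<bpmn:intermediateCatchEvent id="Timer_TwoWeeksBeforeEnd" name="2 weeks before end date">
  <bpmn:timerEventDefinition>
    <!-- Wake up 14 days before the contract's end date -->
    <bpmn:timeDate xsi:type="bpmn:tFormalExpression">=renewalEndDate - duration("P14D")</bpmn:timeDate>
  </bpmn:timerEventDefinition>
</bpmn:intermediateCatchEvent>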

Utilizing Signal events

Signal events are like message events but with a big difference—they’re a broadcast, potentially reaching 0 to any number of recipients. They’re incredibly useful because a signal can span all process instances and process definitions, ensuring that everyone listening picks it up.

E.g., Canceling all processes related to a specific client after they’ve been rejected.

Utilizing Error events

This misunderstood symbol covers a wonderfully common gray area in process design: a problem that means the process shouldn’t continue as is, but that isn’t a technical error. Instead, it’s an error that you can catch so you can continue the process through a different route.

E.g., the user has entered an incorrect email address; we need to contact them and ask for a different email before trying again.
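To ground it, here’s a rough sketch of that pattern in BPMN XML: an error boundary event on the task catches a named business error and routes the process down a “contact the user” path instead of failing technically. The error code and all of the IDs are assumptions on my part:

<bpmn:serviceTask id="Task_SendConfirmation" name="Send confirmation email" />

<!-- Catches the business error and sends the process down an alternative route -->
<bpmn:boundaryEvent id="Boundary_InvalidEmail" name="Invalid email" attachedToRef="Task_SendConfirmation">
  <bpmn:errorEventDefinition errorRef="Error_InvalidEmail" />
</bpmn:boundaryEvent>
<bpmn:sequenceFlow id="Flow_AskForNewEmail" sourceRef="Boundary_InvalidEmail" targetRef="UserTask_RequestNewEmail" />

<bpmn:error id="Error_InvalidEmail" name="Invalid email" errorCode="INVALID_EMAIL" />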

These are some of the first symbols you’ll learn in BPMN, and learning how to use them can give you a nice idea about solving a lot of the basic process challenges that you’ll come across. But, the beauty of BPMN becomes most apparent when you understand the synergies these symbols have across the rest of the standard.

Synergistic Symbols

When I teach BPMN, I tell attendees early on that knowing what all the BPMN symbols do is not the same as knowing BPMN. This probably sounds like an odd thing to say, but it’s the same with writing. Knowing how each letter sounds is not going to be much use when you’re trying to make a sentence. The secret to getting the most out of BPMN is knowing how the symbols combine and work together to create amazingly powerful patterns and structures. I’ve just spoken about some basic events, but what happens when you know how to combine them into patterns?

Message events

Sure, you can wait for a message to arrive at some point in the process. But you can also attach a message event to a task, so that the message is only triggered if the task is still active; the process waits for the message in parallel with the task work being done.

Messages-event-example

Timer event

What if there’s a specific timeframe of the process that requires some speedy execution? You can scope the tasks in a subprocess and use a non-interrupting timer event to send a warning if you’re coming close to violating an SLA. The complexity involved in building this relatively simple pattern is worth focusing on for a moment and can help underline why this is often missing from alternatives. Here are the requirements:

  1. We need to wait for a duration of time in parallel to the execution of other tasks
  2. Triggering the timer should not affect the work being carried out in parallel
  3. The timer needs to understand the scope so that it activates and deactivates itself in line with the task it’s scoped for

This is not a trivial task for a process engine to implement. But a BPMN engine only needs to understand the individual symbols and how they combine. The engine isn’t programmed to understand the pattern per se; it just needs to understand how BPMN works.

Timer-event-example
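For the curious, here’s roughly what that pattern looks like in the XML. The 12-hour duration and all the names are illustrative assumptions; the important detail is cancelActivity="false", which is what makes the timer non-interrupting:

<bpmn:subProcess id="SubProcess_Fulfillment" name="Fulfill order">
  <!-- ...the time-critical tasks live inside this scope... -->
</bpmn:subProcess>

<!-- Non-interrupting: the warning fires while the subprocess keeps running -->
<bpmn:boundaryEvent id="Boundary_SlaWarning" name="SLA warning" cancelActivity="false" attachedToRef="SubProcess_Fulfillment">
  <bpmn:timerEventDefinition>
    <bpmn:timeDuration xsi:type="bpmn:tFormalExpression">PT12H</bpmn:timeDuration>
  </bpmn:timerEventDefinition>
</bpmn:boundaryEvent>
<bpmn:sequenceFlow id="Flow_SendWarning" sourceRef="Boundary_SlaWarning" targetRef="Task_SendSlaWarning" />

The timer is scoped to the subprocess it’s attached to, so it activates and deactivates with it, which covers requirement three from the list above with no extra code.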

Error events

When combined with an event subprocess (my favorite symbol), you get a “global catch” for any errors that might occur in the process. This of course isn’t limited to error events; with BPMN’s “global catch” pattern, you’re given the scope of the entire process instance and can decide which events it should wait for.

Error-event-example
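As a sketch (with all names assumed), the event subprocess sits inside the process definition and is activated by an error start event, no matter where in the instance the error is thrown:

<bpmn:process id="Process_Order" isExecutable="true">
  <!-- ...the main flow of the process... -->

  <!-- triggeredByEvent="true" turns this into an event subprocess: a "global catch" for the whole instance -->
  <bpmn:subProcess id="EventSubProcess_HandleErrors" triggeredByEvent="true">
    <!-- An error definition without a specific errorRef acts as a catch-all, where the engine supports it -->
    <bpmn:startEvent id="Start_AnyError" name="Error caught" isInterrupting="true">
      <bpmn:errorEventDefinition />
    </bpmn:startEvent>
    <!-- ...notification, compensation, or clean-up tasks... -->
  </bpmn:subProcess>
</bpmn:process>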

These are still just the tip of the iceberg, but what I’ve shown is that on a fundamental level BPMN has created a wide variety of building blocks that are intended to fit together to create something greater. This concept is always missed by companies trying to build their proprietary alternatives.

The community and the standard

I’m always happy to talk about the features and the concepts that make BPMN so great. But, for me, a differentiator that has kept BPMN going and will continue to do so is the community. There are a huge number of people using BPMN to model and execute their processes, and it starts because BPMN is incredibly accessible. You can learn how to model for free (through books, blogs and tutorial videos), you can choose from an impressive variety of modeling tools and even pick which engine executes your process. If you need support with building a solution, there’s a vast community there to help (you can even find me helping out on Camunda’s forum).

But it even goes beyond the end-users. The vendors who build BPMN tools also have a great community; we frequently meet up as part of the OMG’s Model Interchange Working Group, which is actually more fun than the name suggests. Here, we agree on things that move the standard forward and ensure consistency and compatibility of the standard across tools. Once a year, we even do a live demonstration showcasing each of the modeling tools as they work together to build a single BPMN model.

Conclusion

Instead of asking why people are still using BPMN, maybe it is worthwhile asking why you’re being told not to use BPMN? No one has managed to make a better notation, and it’s not like there’s a community out there asking for someone to come up with an alternative. Some process orchestrators simply don’t have any visualization (which I think is a little short-sighted and limiting, but each to their own). Some clearly want to avoid the competition inherent in offering the potential of migrating from one BPMN tool to another. Some perhaps like the idea of being the only source of consulting and training for their own notation.

I could be wrong, since these reasons don’t seem to show up when vendors themselves bring up the subject of BPMN. They tend to say, “BPMN is old, you should use our thing instead,” and hope no one asks, “Sure, but how exactly is your thing better than BPMN?”

Ready to learn BPMN?

There are many resources that can help you get started with BPMN, including our own detailed BPMN tutorial and BPMN documentation. Feel free to take a look, and if you’d like to try modeling your processes, you can sign up for Camunda and model for free.

The post BPMN Is Brilliant, Actually appeared first on Camunda.

]]>
Thinking Outside The Microservice Box: Diversity of Endpoints (Part 2) https://camunda.com/blog/2024/03/thinking-outside-the-microservice-box-diverse-endpoints/ Thu, 14 Mar 2024 15:00:40 +0000 https://camunda.com/?p=102675 Event-based microservice architectures are popular but do have limitations. In this post, we'll talk about the effect of diverse endpoints and why it matters.

The post Thinking Outside The Microservice Box: Diversity of Endpoints (Part 2) appeared first on Camunda.

]]>
In this short series (see part 1 here) I’ll be doing a deep dive into some of the limitations of modern event-based microservices architectures. I’ll then show how looking at that architecture through the lens of process orchestration can solve these issues and even expand what you thought was possible for distributed systems. I’ve previously discussed the role of decision making, and in this post I want to talk about what diverse endpoints mean for an architecture.

In a perfect world, services would use the same communication protocols and the same data structures. Sadly, as the old saying goes, variety is the spice of life and the bane of software developers who just want an easy life.

Sure, you can decide that all your services will send and receive Kafka messages, but what happens when you’ve got to integrate a legacy system? Or an external service that only has a GraphQL endpoint? Well let’s discuss those decisions and how embracing a diversity of endpoints can provide a more true-to-life solution.

Diverse Endpoints

A business process that runs through your microservices is easier to manage if the services follow the twelve-factor principles laid out by Adam Wiggins. These rules give new developers a good baseline for building new features and ensure the scope of services remains relatively similar, avoiding the creation of a “god service” within the architecture. But an often unconsidered value in connecting lots of independent services is being able to streamline your systems in a way that makes it easier to scale throughput, expand complexity, and integrate new services. The vast majority of processes today require integration with technologies outside the scope of a typical microservice.

A recent study we commissioned revealed that 60% of new IT projects integrate 26 or more different systems. They need to connect to RPA bots, SaaS endpoints, and front end applications, many of which will not be able to abide by your rules. So what do you do? Most people consider something like a front end application simply out of scope for a microservice architecture, and it might be. It’s not out of scope for the process though. The process doesn’t care that in the middle of your beautifully compliant microservices architecture, there needs to sit a weird little front end that breaks all the rules. If you need to call out to a collection of strange, unique endpoints for your process to complete, you can’t declare them out of scope without also admitting that the solution is unable to integrate those services, meaning the architecture can no longer fulfill the requirements of the process.

Diverse-endpoints

Faced with this issue, most people attempt to build some kind of facade between the services, which creates technical debt without really solving the problem. To avoid this, designers simply redefine the microservice section of the architecture as being limited to completing “most of the process.” This seems like a perfectly fine compromise, but in time what’s happening outside of that collection of uniform endpoints and services will grow, making your microservices a smaller and smaller part of the process as a whole. The inability to integrate what’s needed, no matter how old, how new, or how strange, will either hamper what your business can achieve or limit the impact of your software solution.

Integrating diverse endpoints

There is of course a very good reason why we tend to avoid diverse endpoints. In a point-to-point microservices architecture knowing that each service uses the same data structures and protocols makes everything easier to manage and maintain. So the thinking is that introducing diverse endpoints will be harder to manage and maintain. That’s not strictly true.

Replacing point-to-point communication with a dedicated process orchestrator means there’s a space between two services. This space can be used to process or transform the data structures from one service so they can be used by the protocol another service requires. To be clear, this doesn’t replace the dumb pipes paradigm Martin Fowler has spoken about. In fact, an orchestrator component works by keeping the pipes dumb and ensuring any logic around connectivity and protocol stays uniform within those pipes.
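One concrete way this shows up in Camunda 8 is through input and output mappings on the task itself: small FEEL expressions that reshape the variables from one service into the structure the next endpoint expects, while the services themselves stay untouched. Here’s a sketch, with the job type, variable names, and data structure all assumed for illustration:

<bpmn:serviceTask id="Task_CheckDelivery" name="Check delivery options">
  <bpmn:extensionElements>
    <zeebe:taskDefinition type="legacy-logistics-adapter" />
    <zeebe:ioMapping>
      <!-- Reshape the order data into what the legacy endpoint expects -->
      <zeebe:input source="=order.shipping.address" target="deliveryAddress" />
      <zeebe:input source="=sum(order.items.weight)" target="totalWeight" />
      <!-- Keep only the fields the rest of the process cares about from the response -->
      <zeebe:output source="=response.options" target="deliveryOptions" />
    </zeebe:ioMapping>
  </bpmn:extensionElements>
</bpmn:serviceTask>

The pipes stay dumb; the mapping is just declarative data shaping that lives in the model and is version-controlled along with the rest of the process.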

Integrating-diverse-endpoints

The example above demonstrates how it might look when integrating some service calls, message events, and even front end interfaces.

There are two really big benefits to being able to easily integrate very different systems and technologies. The first (and you’ve likely guessed this) is that more systems are made available to you for your solution, which means fewer compromises overall.

The other big benefit is that you can now increase the scope of your project in order to implement the complete end-to-end process. Earlier I spoke about how limiting it is to compose your system within the framework of a microservice architecture, and this is how you break out of that limitation. It’s not that the microservice approach is now redundant or obsolete, but rather that it can be expanded to better reflect a full and complete solution. The scope of the project no longer ends at the boundary of a technical limitation. Your project ends when you can see a complete end-to-end solution, one that incorporates any and all necessary systems, services, people, processes, and stakeholders, as the diagram below demonstrates.

Integrating-diverse-endpoints-2

Conclusion

Earlier in this post I mentioned three goals of any good distributed system:

  • Easy to scale throughput
  • Easy to expand complexity
  • Easy to integrate new services

To accomplish these goals, limitations are imposed on most systems because the wisdom of time suggests that without them, the goals cannot be accomplished. So, the compromise set out by these limitations becomes accepted practice. But now orchestrators allow for these goals to be reached without the same limitations, and in fact give more freedom to better solve more complex problems. It’s clear to me that for someone who is trying to build a system for a critical business process an orchestrator is fundamental to avoiding unwanted compromises that will in the end limit the effectiveness of the overall solution.

The post Thinking Outside The Microservice Box: Diversity of Endpoints (Part 2) appeared first on Camunda.

]]>