Ensuring Responsible AI at Scale: Camunda’s Role in Governance and Control
https://camunda.com/blog/2025/06/responsible-ai-at-scale-camunda-governance-and-control/ | Tue, 24 Jun 2025

Camunda enables effective AI governance by acting as an operational backbone, so you can integrate, orchestrate, and monitor AI usage in your processes.

The post Ensuring Responsible AI at Scale: Camunda’s Role in Governance and Control appeared first on Camunda.

If your organization is adding generative AI into its processes, you’ve probably hit the same wall as everyone else: “How do we govern this responsibly?”

It’s one thing to get a large language model (LLM) to generate a summary, write an email, or classify a support ticket. It’s another entirely to make sure that use of AI fits your company’s legal, ethical, operational, and technical policies. That’s where governance comes in—and frankly, it’s where most organizations are struggling to find their footing.

The challenge isn’t just technical. Sure, you need to worry about prompt injection attacks, hallucinations, and model drift. But you also need to think about compliance audits, cost control, human oversight, and the dreaded question from your CEO: “Can you explain why the AI made that decision?” These aren’t abstract concerns anymore—they’re real business risks that can derail AI initiatives faster than you can say “responsible deployment.”

That’s where Camunda comes into the picture. We’re not an AI governance platform in the abstract sense. We don’t decide your policies for you, and we’re not going to tell you whether your use case is ethical or compliant. But what we do provide is something absolutely essential: a controlled environment to integrate, orchestrate, and monitor AI usage inside your processes, complete with the guardrails and visibility that support enterprise-grade governance.

Think of it this way: if AI governance is about making sure your organization uses AI responsibly, then Camunda is the operational backbone that makes those policies actually enforceable in production systems. We’re the difference between having a beautiful AI ethics document sitting in a SharePoint folder somewhere and actually implementing those principles in your day-to-day business operations.

This post will explore how Camunda fits into the broader picture of AI governance, diving into specific features—from agent orchestration to prompt tracking—that help you operationalize your policies and build trustworthy, compliant automations.

What is AI governance, and where does Camunda fit?

Before we dive into the technical details, it’s worth stepping back and talking about what AI governance actually means. The term gets thrown around a lot, but in practice, it covers everything from high-level ethical principles to nitty-gritty technical controls.

We’re framing this discussion around the “AI Governance Framework” provided by ai-governance.eu, which defines a comprehensive model for responsible AI oversight in enterprise and public-sector settings. The framework covers organizational structures, procedural requirements, legal compliance, and technical implementations.

[Image: the AI Governance Framework from ai-governance.eu]

Camunda plays a vital role in many areas of governance, but none more so than the “Technical Controls (TeC)” category. This is where the rubber meets the road—where your governance policies get translated into actual system behaviors. Technical controls include enforcing process-level constraints on AI use, ensuring explainability and traceability of AI decisions, supporting human oversight and fallback mechanisms, and monitoring inputs, outputs, and usage metrics across your entire AI ecosystem.

Here’s the crucial point: these technical controls don’t replace governance policies—they ensure that those policies are actually followed in production systems, rather than just existing as aspirational documents that nobody reads.

1. Fine-grained control over how AI is used

The first step to responsible AI isn’t choosing the right model or writing the perfect prompt—it’s being deliberate about when, where, and how AI is used in the first place. This sounds obvious, but many organizations end up with AI sprawl, where different teams spin up AI integrations without any coordinated approach to governance.

With Camunda, AI usage is modeled explicitly in BPMN (Business Process Model and Notation), which means every AI interaction is part of a documented, versioned, and auditable process flow.

[Image: agentic AI modeled in Camunda]

You can design processes that use Service Tasks to call out to LLMs or other AI services, but only under specific conditions and with explicit input validation. User Tasks can involve human reviewers before or after an AI step, ensuring critical decisions always have human oversight. Decision Tables (DMN) can evaluate whether AI is actually needed based on specific inputs or context. Error events and boundary events capture and handle failed or ambiguous AI responses, building governance directly into your process logic.

Because the tasks executed by Camunda’s AI agents are defined with BPMN, those tasks can be deterministic workflows themselves, ensuring that, on a granular level, execution is still predictable.

This level of orchestration lets you inject AI into your business processes on your own terms, rather than letting the AI system dictate behavior. You’re not just calling an API and hoping for the best—you’re designing a controlled environment where AI operates within explicit boundaries.

Here’s a concrete example: if you’re processing insurance claims and want to use AI to classify them as high, medium, or low priority, you can insert a user task to verify all “high priority” classifications before they get routed to your fraud investigation team. You can also add decision logic that automatically escalates claims above a certain dollar amount, regardless of what the AI thinks. This way, you keep humans in the loop for critical decisions without slowing down routine processing.
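The routing described above boils down to plain decision logic. Here is a hypothetical sketch of it, not Camunda API code; the function name, field names, and the $25,000 threshold are invented for illustration.

```python
# Hypothetical sketch of the claim-routing guardrail described above.
# The AI's classification is advisory; process rules have the final say.

HIGH_VALUE_THRESHOLD = 25_000  # invented dollar threshold for illustration

def route_claim(ai_priority: str, claim_amount: float) -> str:
    """Return the next step for a claim, given the AI's priority label."""
    # Rule 1: large claims always escalate, regardless of the AI's opinion.
    if claim_amount > HIGH_VALUE_THRESHOLD:
        return "human_review"
    # Rule 2: AI-flagged high-priority claims get verified by a person
    # before they reach the fraud investigation team.
    if ai_priority == "high":
        return "human_review"
    # Routine claims proceed automatically.
    return "automatic_processing"
```

In a real process, the first rule would live in a DMN decision table and the second would be a user task gated by the AI's output.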

2. Your models, your infrastructure, your rules

One of the most frequent concerns about enterprise AI adoption centers on data privacy and vendor risk. Many organizations have strict requirements that no customer data, internal business logic, or proprietary context can be sent to third-party APIs or cloud-hosted LLMs.

Camunda’s approach to agentic orchestration supports complete model flexibility without sacrificing governance capabilities. You can use OpenAI, Anthropic, Mistral, Hugging Face, or any provider you choose, and, starting with Camunda 8.8 (coming in October 2025), you can also route calls to self-hosted LLMs running on your own infrastructure. Whether you’re running LLaMA 3 on-premises, using Ollama for local development, or connecting to a private cloud deployment, Camunda treats all of these as different endpoints in your process orchestration.

There’s no “magic” behind our AI integration—we provide open, composable connectors and SDKs that integrate with standard AI frameworks like LangChain. You control the routing logic, prompt templates, authentication mechanisms, and access credentials. Most importantly, your data stays exactly where you want it.

For example, a financial services provider might route customer account inquiries to a cloud-hosted model, but keep transaction details and personal financial information on-premises. With Camunda, you can model this routing logic explicitly using decision tables to determine which endpoint to use based on content and context.
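The routing decision in that example can be sketched as follows. This is an illustration only: the field names and endpoint URLs are invented, and in Camunda this logic would typically be expressed in a decision table rather than application code.

```python
# Hypothetical sketch of content-based model routing: payloads containing
# sensitive fields stay on-premises, generic inquiries may go to the cloud.

SENSITIVE_FIELDS = {"account_number", "ssn", "transaction_history"}

def select_endpoint(payload: dict) -> str:
    """Pick an LLM endpoint based on what the payload contains."""
    if SENSITIVE_FIELDS & payload.keys():
        return "https://llm.internal.example.com/v1"   # self-hosted model
    return "https://api.cloud-llm.example.com/v1"      # cloud-hosted model
```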

3. Design AI tasks with guardrails: Preventing prompt injection and hallucinations

Prompt injection isn’t just a theoretical attack—it’s a real risk that can have serious business consequences. Any time an AI model processes user-generated input, there’s potential for malicious content to manipulate the model’s behavior in unintended ways.

Camunda helps mitigate these risks by providing structured approaches to AI integration. All data can be validated and sanitized before it is used in a prompt, preventing raw input from reaching the models. Prompts are designed using FEEL (Friendly Enough Expression Language), making them flexible and dynamic. This centralized prompt design means prompts become part of your process documentation rather than being buried in application code. Camunda’s powerful execution listeners can be used to analyze and sanitize the prompt before it is sent to the agent.

[Image: AI prompt guardrails in Camunda]

Decision tables provide another layer of protection by filtering or flagging suspicious content before it reaches the model. You can build rules that automatically escalate requests containing certain keywords or patterns to human review.
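A screening step of this kind might look like the following sketch. The sanitization and the suspicious patterns are invented examples and not a complete defense against prompt injection; real rules would live in your decision tables and listeners.

```python
import re

# Hypothetical pre-prompt screening: sanitize raw input and flag
# suspicious patterns for human review before anything reaches the model.

SUSPICIOUS_PATTERNS = [
    r"ignore (all|previous) instructions",  # classic injection phrasing
    r"system prompt",
]

def screen_input(raw: str) -> tuple:
    """Return (sanitized_text, needs_human_review)."""
    sanitized = raw.replace("\x00", "").strip()  # strip control junk
    flagged = any(re.search(p, sanitized, re.IGNORECASE)
                  for p in SUSPICIOUS_PATTERNS)
    return sanitized, flagged
```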

When you build AI tasks with Camunda’s orchestration engine, you create a clear separation between the “business logic” of your process and the “creative output” of the model. This separation makes it much easier to test different scenarios, trace unexpected behaviors, and implement corrective measures. Camunda’s AI Task Agent supports guardrails, such as limiting the number of iterations it can perform, or the maximum number of tokens per request to help control costs.
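The loop guardrails mentioned above amount to hard limits the engine enforces rather than trusting the model to stop itself. Here is a minimal sketch under that assumption; `call_model` is an invented stand-in for a real LLM call, not a Camunda API.

```python
# Minimal sketch of agent-loop guardrails: cap the number of iterations
# and the token budget per request, and fail loudly when the cap is hit.

def run_agent(task, call_model, max_iterations=5, max_tokens_per_request=1000):
    """Run an agent loop, stopping at hard limits instead of looping forever."""
    for _ in range(max_iterations):
        result = call_model(task, max_tokens=max_tokens_per_request)
        if result.get("done"):
            return result
    # Budget exhausted: surface the incident for handling (e.g. escalation).
    return {"done": False, "reason": "max_iterations_reached"}
```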

4. Monitoring and auditing AI activity

You can’t govern what you can’t see. This might sound obvious, but many organizations deploy AI systems with minimal visibility into how they’re actually being used in production.

Optimize gives you comprehensive visibility into AI usage across all your processes. You can track the number of AI calls made per process or task, token usage (and therefore associated costs), response times and failure rates, and confidence scores or output quality metrics when available from your models.

This monitoring data supports multiple governance objectives. For cost control, you can spot overuse patterns and identify inefficient prompt chains. For policy compliance, you can prove that AI steps were reviewed when required. For performance tuning, you can compare model outputs over time or across different vendors to optimize both cost and quality.

You can build custom dashboards that break down AI usage by business unit, region, or product line, making AI usage measurable, accountable, and auditable. When auditors ask about your AI governance, you can show them actual data rather than just policy documents.
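The kind of roll-up such a dashboard performs can be sketched like this; the record fields (`business_unit`, `tokens`) are invented for illustration, and in practice Optimize computes these aggregations for you.

```python
from collections import defaultdict

# Hypothetical sketch of an AI-usage roll-up: aggregate per-call records
# by business unit to make usage measurable and auditable.

def summarize_usage(calls: list) -> dict:
    """Aggregate call count and token usage per business unit."""
    summary = defaultdict(lambda: {"calls": 0, "tokens": 0})
    for c in calls:
        bucket = summary[c["business_unit"]]
        bucket["calls"] += 1
        bucket["tokens"] += c["tokens"]
    return dict(summary)
```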

5. Multi-agent systems, modeled with guardrails

The future of enterprise AI isn’t just about better individual models—it’s about creating systems where multiple AI agents work together to achieve complex business goals.

Camunda’s agentic orchestration lets you design and govern these complex systems with the same rigor you’d apply to any other business process. Each agent—whether AI, human expert, or traditional software—gets modeled as a task within a larger orchestration flow. The platform defines how agents collaborate, hand off work, escalate problems, and recover from failures.

[Image: multi-agent guardrails in Camunda]

You can design parallel agent workflows with explicit coordination logic, conditional execution paths based on agent outputs, and human involvement at any point where governance requires it. Composable confidence checks ensure work only proceeds when all agents meet minimum quality thresholds.

Here’s a concrete example: in a legal document review process, one AI agent extracts key clauses, another summarizes the document, and a human attorney provides final review. Camunda coordinates these steps, tracks outcomes, and escalates if confidence scores are low or agents disagree on their assessments.
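The coordination rule in that example can be sketched as a simple gate: results proceed to the attorney's standard review only if both agents are confident and agree, otherwise the document escalates. The threshold and field names below are invented for illustration.

```python
# Hypothetical sketch of a composable confidence check between two agents
# in the legal-review example above.

CONFIDENCE_THRESHOLD = 0.8  # invented minimum quality threshold

def next_step(extraction: dict, summary: dict) -> str:
    """Decide whether automated results proceed or escalate."""
    low_confidence = min(extraction["confidence"],
                         summary["confidence"]) < CONFIDENCE_THRESHOLD
    agents_disagree = extraction["risk_level"] != summary["risk_level"]
    if low_confidence or agents_disagree:
        return "escalate"
    return "attorney_final_review"
```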

6. Enabling explainability and traceability

One of the most challenging aspects of AI governance is explainability. When an AI system makes a decision that affects your business or customers, stakeholders want to understand how and why that decision was made—and this is often a legal requirement in regulated industries.

Modern AI models are probabilistic systems that don’t provide neat explanations for their outputs. But Camunda addresses this by creating comprehensive audit trails that capture the context and process around every AI interaction.

For every AI step, Camunda persists the inputs provided to the model, outputs generated, and all prompt metadata. Each interaction gets correlated with the exact process instance that triggered it, creating a clear chain of causation. Version control for models, prompts, and orchestration logic means you can trace any historical decision back to the exact system configuration that was in place when it was made.

Through REST APIs, event streams, and Optimize reports, you can answer complex questions about AI usage patterns and decision outcomes. When regulators ask about specific decisions, you can provide comprehensive answers about what data was used, what models were involved, what confidence levels were reported, and whether human review occurred.
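An audit record of the kind described above might be shaped as follows. The schema is invented for illustration, not Camunda's actual persistence format; the point is that inputs, outputs, and configuration versions are tied to the exact process instance.

```python
import json
import time
import uuid

# Hypothetical sketch of one auditable AI-interaction record.

def audit_record(process_instance_id: str, model: str, prompt_version: str,
                 inputs: dict, output: str) -> str:
    """Serialize one AI interaction into a traceable JSON record."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "process_instance_id": process_instance_id,  # chain of causation
        "model": model,                               # exact model used
        "prompt_version": prompt_version,             # versioned prompt
        "inputs": inputs,
        "output": output,
    }
    return json.dumps(record)
```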

Camunda as a cornerstone of process-level AI governance

AI governance is a team sport that requires coordination across multiple organizational functions. You need clear policies, compliance frameworks, technical implementation, and ongoing oversight. No single platform can address all requirements, nor should it try to.

What Camunda brings to this collaborative effort is operational enforcement of governance policies at the process level. We’re not here to define your ethics policies—we provide the technical infrastructure to ensure that whatever policies you establish actually get implemented and enforced in your production AI systems.

Camunda gives you fine-grained control over exactly how AI gets used in your business processes, complete flexibility in model and hosting choices, robust orchestration of human-in-the-loop processes, comprehensive monitoring and auditing capabilities, protection against AI-specific risks like prompt injection, and support for cost tracking and usage visibility.

You bring the policies, compliance frameworks, and business requirements—Camunda helps you enforce them at runtime, at scale, and with the visibility and control that enterprise governance demands.

If you’re looking for a way to govern AI at the process layer—to bridge the gap between governance policy and operational reality—Camunda offers the controls, insights, and flexibility you need to do it safely, confidently, and sustainably as your AI initiatives grow and evolve.

Learn more

Looking to get started today? Download our ultimate guide to AI-powered process orchestration and automation to discover how to start effectively implementing AI into your business processes quickly.

Camunda Alpha Release for June 2025
https://camunda.com/blog/2025/06/camunda-alpha-release-june-2025/ | Tue, 10 Jun 2025

We’re excited to announce the June 2025 alpha release of Camunda. Check out what’s new, including new capabilities like the FEEL Copilot, agentic orchestration connectors, and improved migration tooling.

The post Camunda Alpha Release for June 2025 appeared first on Camunda.

We’re excited to share that the latest alpha of Camunda will go live shortly and will then be available for download. Our SaaS customers who are up to date may have already noticed some of these features, as we make them available automatically.

Update: The alpha release is officially live for all who wish to download.

Below is a summary of everything new in Camunda for this June with the 8.8-alpha5 release.

This blog is organized using the following product house, with E2E Process Orchestration at the foundation and our product components represented by the building bricks. This structure highlights how we believe Camunda builds the best infrastructure for your processes: a strong foundation of orchestration, with AI thoughtfully infused throughout.

[Image: the Camunda product house]

E2E Process Orchestration

This section will update you on the components that make up Camunda’s foundation, including the underlying engine, platform operations, security, and API.

Zeebe

The Zeebe team focused on bug fixes for this release.

Operate

For this release, our Operate engineering team worked on bug fixes.

Tasklist

For this release, we have continued to work on bug fixes in Tasklist as well.

Web Modeler

With this alpha release of Web Modeler, we’re introducing powerful new features that streamline process modeling and enhance the developer experience.

Azure Repos Sync

Camunda now supports an integration with Azure DevOps, which allows for direct synchronization with Azure repositories.

[Image: Azure DevOps integration in Camunda]

FEEL Copilot

Pro- and low-code developers using Web Modeler SaaS can develop FEEL expressions with an integrated editor that pulls in process variables and process context, making it easy for anyone to implement business logic in Camunda.

For Web Modeler SaaS customers, it also features the ‘FEEL Copilot’ which takes advantage of integrated generative AI to write and debug executable FEEL (Friendly Enough Expression Language) expressions.

[Image: the FEEL Copilot in Web Modeler]

Desktop Modeler

This alpha, we have also provided more functionality for our Desktop Modeler.

Process application deployment

A process application is now deployed as a single bundle of files. This allows using deployment binding for called processes, decisions, and linked forms.

Deployed decision link to Operate

After a DMN file is deployed to Camunda, links to the deployed decisions in Operate are displayed in the success notification.

Enhanced FEEL suggestions

Literal values like true or false are now displayed in the autocompletion for fast and easy expression writing.

Check out the full release notes for the latest Desktop Modeler 5.36 release right here.

Optimize

Our Optimize engineering team has been working on bug fixes this release cycle.

Identity

Camunda’s new Identity service delivers enhanced authentication and fine-grained authorization capabilities across both Self-Managed and SaaS environments. Key updates include:

  • Self-Managed Identity Management: Administrators can natively manage users, groups, roles, and memberships via the Identity database—without relying on external systems.
  • OIDC Integration: Supports seamless integration with standards-compliant external Identity Providers (IdPs), including Keycloak and Microsoft Entra (formerly Azure AD), enabling single sign-on (SSO) and federated identity management.
  • Role-Based Access Control (RBAC): Provides resource-level access control with assignable roles and group-based permissions, enabling precise scoping of user capabilities across the platform.
  • Flexible Mapping: Users, groups, and roles can now be dynamically mapped to resource authorizations and multi-tenant contexts, supporting complex enterprise and multi-tenant deployment scenarios.
  • Migration Support: Simplified tooling facilitates migration from legacy Identity configurations to the new service, reducing operational overhead and enabling a phased rollout.
  • Organizational Identity for SaaS: In SaaS deployments, customers can integrate their own IdP, allowing centralized management of organizational identities while maintaining cluster-specific resource isolation.
  • Cluster-Specific Roles & Groups: SaaS environments now support tenant-isolated roles, groups, and authorizations per cluster, ensuring that customer-specific access policies are enforced at runtime.

Please see our release notes for more on the updates to Identity management.

Console

The Console engineering team has been working on bug fixes this release cycle.

Installation Options

This section gives updates on our installation options and various supported software components.

Self-Managed

For our self-managed customers, we have introduced a graceful shutdown for C8Run by rebuilding how we manage C8Run-started processes. This resolves an issue where stopping C8Run during startup could create zombie processes.

We have also added support for supplying image.digest in the values.yaml file instead of an image tag, as well as support for an Ingress external hostname.

Task Automation Components

In this section, you can find information related to the components that allow you to build and automate your processes including our modelers and connectors.

Connectors

We have introduced two connectors to support agentic AI with Camunda. You can find more on Camunda and agentic AI in the Agentic Orchestration section of this blog post.

  • The AI Agent connector, which was recently published on Camunda Marketplace, is now officially included as part of this alpha release and is directly available in Web Modeler. This connector is designed for use with an ad-hoc sub-process in a feedback loop, providing automated user interaction and tool discovery/selection.

    The connector supports providing a custom OpenAI endpoint to be used in combination with custom providers and locally hosted models (such as Ollama).
  • The Vector Database connector, also published to Camunda Marketplace, allows embedding, storing, and retrieving Large Language Model (LLM) embeddings. This enables building AI-based solutions for your organization, such as context document search and long-term LLM memory, and can be used in combination with the AI Agent connector for RAG (Retrieval-Augmented Generation) use cases.

Agentic Orchestration

With a continued focus on operationalizing AI, this section covers the continued support for agentic orchestration in our product components, and is a great starting point for exploring Camunda’s agentic orchestration approach.

[Image: Camunda agentic orchestration]

To support modern automation requirements, Camunda has adopted orchestration patterns that enable AI agents and processes to remain adaptive by combining deterministic with dynamic orchestration.

This architecture allows agents to incorporate dynamic knowledge into their planning loops and decision processes. The same mechanisms also support continuous learning, by updating and expanding the knowledge base based on runtime feedback.

To support this approach, Camunda has incorporated both our Vector Database connector and AI Agent Outbound connector directly into its orchestration layer.

Together, these capabilities allow Camunda to support agentic orchestration patterns such as:

  • Planning loops that select and sequence tasks dynamically
  • Use of short-term memory (process variables) and long-term memory (vector database retrievals)
  • Integration of event-driven orchestration and multi-agent behaviors through nested ad-hoc subprocesses.

As mentioned in the Connectors section, we have recently released two connectors to support our approach:

  • The AI Agent connector is designed for use with an ad-hoc sub-process in a feedback loop, providing automated user interaction and tool discovery/selection.

    This connector integrates with large language models (LLMs)—such as OpenAI or Anthropic—giving agents reasoning capabilities to select and execute ad-hoc sub-processes within a BPMN-modeled orchestration. Agents can evaluate the current process context, decide which tasks to run, and act autonomously—while maintaining full traceability and governance through the orchestration engine.
  • The Vector Database connector, which allows embedding, storing, and retrieving Large Language Model (LLM) embeddings. This enables building AI-based solutions for your organization, such as context document search and long-term LLM memory, and can be used in combination with the AI Agent connector for RAG (Retrieval-Augmented Generation) use cases.

If you would like to see these new connectors in action, we encourage you to review our website and see a video of how Camunda provides this functionality. We also have a step-by-step tutorial for using the AI Agent Connector in our blog.

Camunda 7

There are several updates in this release for Camunda 7.

Support for Spring Boot 3.5

This alpha release features support for Spring Boot 3.5.0.

New LegacyJobRetryBehaviorEnabled process engine flag

Starting with versions 7.22.5, 7.23.2 and 7.24.0, the process engine introduces a new configuration flag: legacyJobRetryBehaviorEnabled.

By default, when a job is created, its retry count is determined based on the camunda:failedJobRetryTimeCycle expression defined in the BPMN model.

However, setting legacyJobRetryBehaviorEnabled to true enables the legacy behavior, where the job is initially assigned a fixed number of retries (typically 3), regardless of the retry configuration.

In 7.22.5+ and 7.23.2+, the default value of legacyJobRetryBehaviorEnabled is true. In 7.24.0+, the default value is false.
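To make the difference concrete, here is an illustrative sketch (in Python, not engine code) of how the initial retry count is chosen under each setting. The parsing of the camunda:failedJobRetryTimeCycle expression (e.g. "R5/PT10M" meaning 5 retries) is deliberately simplified.

```python
# Illustrative sketch of the retry-count selection described above.

DEFAULT_RETRIES = 3  # the fixed legacy retry count

def initial_retries(time_cycle, legacy_job_retry_behavior_enabled: bool) -> int:
    """Pick a job's initial retry count based on the engine flag."""
    if legacy_job_retry_behavior_enabled or time_cycle is None:
        # Legacy behavior (or no retry config): fixed number of retries.
        return DEFAULT_RETRIES
    # New behavior: honor the retry count from the BPMN model,
    # e.g. "R5/PT10M" -> 5 retries, 10 minutes apart.
    repetitions = time_cycle.split("/")[0]   # "R5"
    return int(repetitions.lstrip("R"))      # 5
```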

External task REST API and OpenAPI extended

The External Task REST API has been extended with a createTime field, and the OpenAPI specification has been updated accordingly, along with the extensionProperties for the LockedExternalTaskDto.

You can find the latest OpenAPI documentation here. Thank you for this community contribution.

Camunda 7 to Camunda 8 Migration Tools

With our Camunda 7 to Camunda 8 Migration Tools 0.1.0-alpha2 release, the Camunda 7 to Camunda 8 Data Migrator brings many quality-of-life improvements for customers who are moving from Camunda 7 to Camunda 8.

Auto-deployment with Migrator Application

To help you migrate seamlessly, BPMN diagrams placed in the ./configuration/resources directory are auto-deployed to Camunda 8 when the migrator application starts.

Simplified Configuration

We’ve also made it easier to configure the Camunda 8 client, allowing you to define client settings such as the Zeebe URL directly in the application.yml file.

Logging Levels

In addition, logging has been enhanced with the introduction of logging levels, as well as more specific warnings and errors.

For example, if a Camunda 7 process instance is in a state that cannot be consistently translated to Camunda 8, a warning is logged and the process instance is skipped.

To proceed, these instances must be adjusted in Camunda 7. Once complete, with the recent updates, you can resume migration for previously skipped and adjusted instances.

While the Camunda 7 to Camunda 8 Migration Tools are still in alpha, you can already check out the project and give it a try! Visit https://github.com/camunda/c7-data-migrator.

Thank you

We hope you enjoy our latest minor release updates! For more details, be sure to review the latest release notes as well. If you have any feedback or thoughts, please feel free to contact us or let us know on our forum.

If you don’t have an account, you can try out the latest version today with a free trial.

The Benefits of BPMN AI Agents
https://camunda.com/blog/2025/05/benefits-bpmn-ai-agents/ | Thu, 22 May 2025

Why are BPMN AI agents better? Read on to learn about the many advantages of using BPMN with your AI agents, and how complete visibility and composability help you overcome key obstacles to operationalizing AI.

The post The Benefits of BPMN AI Agents appeared first on Camunda.

There are lots of tools for building AI agents, and at their core they all need three things. First, they need to understand their overall purpose and the rules within which they should operate. So you might create an agent and tell it, “You’re here to help customers with generic requests about the existing services of the bank.” Second, we need a prompt, which is a request that the agent can try to fulfill. Finally, you need a set of tools: the actions and systems an agent has access to in order to fulfill the request.
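The three ingredients can be sketched as a plain data structure. This is a conceptual illustration only, not Camunda's agent API; the class and field names are invented.

```python
from dataclasses import dataclass, field

# Conceptual sketch of an agent's three ingredients:
# purpose (system rules), prompt (the request), and tools (available actions).

@dataclass
class Agent:
    purpose: str                                # 1. overall purpose and rules
    tools: list = field(default_factory=list)   # 3. actions/systems it may use

    def handle(self, prompt: str) -> dict:
        # 2. the prompt: a request the agent tries to fulfill by combining
        # its purpose with the tools at its disposal.
        return {"system": self.purpose,
                "request": prompt,
                "available_tools": list(self.tools)}
```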

Most agent builders will wrap those three requirements into a single, static, synchronous system, but at Camunda we decided not to do this. We found that it creates too many use-case limitations, doesn’t scale, and is hard to maintain. To overcome these limitations, we came up with a concept that decouples these requirements and completely visualizes an agent, opening it up to far more use cases, not only on a technical level but also in a way that alleviates many of the fears people have when adding AI agents to their core processes.

The value of a complete visualization

Getting insight into how an AI agent has performed in a given task often requires someone to read through its chain of thought (this is like the AI’s private journal, where it details how it’s thinking about the problem). This will usually let you know what tools it decided to use and why. So in theory, if you wanted to check on how your AI agent was performing, you could read through it. In practice, this doesn’t work, for two reasons:
1. It limits the visibility of what happened to a text file that needs to be interpreted.
2. AI agents can sometimes lie in their chain of thought—so it might not even be accurate.

Our solution to this is to completely visualize the agent, its tools and its execution all in one place.

Gain full visibility into AI agent performance with BPMN

[Image: an AI agent and its tools modeled in BPMN]

The diagram above shows a BPMN process that has implemented an AI agent. It has two distinct parts: the agent logic is contained within the AI Task Agent activity, and the tools it has access to are displayed within an ad-hoc sub-process. This is a BPMN construct that allows for completely dynamic execution of the tasks within it.

With this approach, an agent’s actions are completely visible to the user at design time and during execution, and can even be used to evaluate how well the process performs with the addition of an agent.

[Image: heatmap of AI agent tool performance in Camunda]

The diagram above shows a heatmap of which tools take the longest to run. This is something that is impossible to measure accurately with a more traditional AI agent-building approach.

Decoupling tools from agent logic

This design completely decouples the agent logic from the available tool set, meaning the agent finds out only at runtime which tools are at its disposal. The ramifications of this are actually quite profound: you can run multiple versions of the same process with the same agent but a completely different tool set. This makes context reversing far easier and also lets us qualitatively evaluate the impact of adding or removing certain tools through A/B testing.
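The decoupling can be sketched like this: the agent logic stays fixed while the tool set is resolved only at runtime, which is exactly what makes A/B comparison of tool sets possible. The tool names and variant registry below are invented for illustration.

```python
# Conceptual sketch of runtime tool resolution for a decoupled agent.

TOOLSETS = {
    "A": ["search_docs", "summarize"],               # baseline tool set
    "B": ["search_docs", "summarize", "translate"],  # candidate tool set
}

def resolve_tools(process_version: str) -> list:
    """The agent discovers its tools only at runtime."""
    return TOOLSETS[process_version]

def run_process(process_version: str, agent_logic):
    # agent_logic is unchanged across versions; only the tool set differs.
    return agent_logic(resolve_tools(process_version))
```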

Improving maintainability for your AI agents

The biggest impact of this decoupling in my opinion though is how it improves maintainability. Designers of the process can add or remove new tools without ever needing to change or update the AI agent. This is a fantastic way of separating responsibilities when a new process is being built. While AI experts can focus on ensuring the AI Task Agent is properly configured, developers can build the tooling independently. And of course, you can also just add pre-built tools for the agent to use.

[Image: AI agent maintainability in Camunda]

Composable design

Choosing, as we did, to marry AI agent design with BPMN design means we’ve unlocked access for AI agent designers to all the BPMN patterns, best practices and functionality that Camunda has been building over the last 10 years or so. While there’s a lot you gain because of that, I want to focus on just one here: Composable architecture.

Composable orchestration is the key to operationalizing AI

Camunda is designed to be an end-to-end orchestrator to a diverse set of tools, rules, services and people. This means we have designed our engine and the tools around it so that there is no limitation on what can be integrated. It also means we want users to be able to switch out services and systems over time, as they become legacy or a better alternative is found.

This should be of particular interest to a developer of AI agents because it lets you not only switch out the tools the AI Agent has access to, but more importantly, it lets you switch out the agent’s own LLM for the latest and greatest. So to add or even just test out the behaviour of a new LLM no longer means building a new agent from scratch—just swap out the brain and keep the rest. This alone is going to lead to incredibly fast improvements and deployments to your agents, and help you make sure that a change is a meaningful and measurable one.

Ai-agent-maintanability-camunda-2

Conclusion

Building AI agents the default way other tools offer right now adds a new black box to your system: one that is less maintainable and far more opaque in execution than anything else you've ever integrated. That makes it hard to properly maintain and evaluate.

At Camunda we have managed to open up that black box in a way that integrates it directly into your processes as a first-class citizen. Your agent will immediately benefit from everything that BPMN does and become something that can grow with your process.

It’s important to understand that you’re still adding a completely dynamic aspect to your process, but this way you mitigate most concerns early on. For all these reasons, of the many, many AI agents that will be built this year, I’m confident the ones still in use by the end of next year will be the ones built in Camunda with BPMN.

Try it out

All of this is available for you to try out in Camunda today. Learn more about how Camunda approaches agentic orchestration and get started now with a free trial here.

The post The Benefits of BPMN AI Agents appeared first on Camunda.

Guide to Adding a Tool for an AI Agent https://camunda.com/blog/2025/05/guide-to-adding-tool-ai-agent/ Wed, 21 May 2025 19:31:39 +0000 https://camunda.com/?p=139473 In this quick guide, learn how you can add exactly the tools you want to your AI Agent's toolbox so it can get the job done.

The post Guide to Adding a Tool for an AI Agent appeared first on Camunda.

AI Agents and BPMN open up an exciting world of agentic orchestration, empowering AI to act with greater autonomy while also preserving auditability and control. With Camunda, a key way that works is by using an ad-hoc sub-process to clearly tell the AI agent which tools it has access to while it attempts to solve a problem. This guide will help you understand exactly how to equip your AI agents with a new tool.

How to build an AI Agent in BPMN with Camunda

There are two aspects to building an AI Agent in BPMN with Camunda.

  1. Defining the AI Task Agent
  2. Defining the available tools for the agent.

The AI Task Agent is the brain, able to understand the context and the goal and then to use the tools at its disposal to complete the goal. But where are these tools?

Adding new tools to your AI agent

The tools for your AI agent are defined inside an ad-hoc sub-process which the agent is told about. This guide assumes you’ve already set up your Task Agent; if you haven’t, you just need the process model from this GitHub repo. The BPMN model without any tools should look like this:

Ad-hoc-sub-process

Basically I’ve removed all the elements from within the ad-hoc sub-process. The agent still has a goal—but now has no way of accomplishing that goal.

In this guide we’re going to add a task to the empty sub-process. By doing this, we’ll give the AI Task Agent access to it as a tool it can use if it needs to.

The sub-process has a multi-instance marker, so for each tool to be used there’s a local variable called toolCall that we can use to get and set variables.
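For illustration, one instance of that local variable and its result might look like the following (the exact fields are produced by the agent at runtime; this shape is assumed for demonstration and mirrors the techQuestion/toolCallResult names used later in this guide):

```python
# Hypothetical shape of a single toolCall inside the multi-instance
# ad-hoc sub-process, plus the result variable mapped back to the agent.
tool_call = {
    "name": "Ask_an_Expert",  # which tool the agent chose to invoke
    "techQuestion": "Which channel should urgent notices go to?",
}
tool_call_result = {"humanAnswer": "Post them to the #good-news channel."}
```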

I want to let the AI agent ask a human a technical question, so first I’m going to add a User Task to the sub-process.

Ai-agent-tool

Defining the tool for the agent

The next thing we need to do is somehow tell the agent what this tool is for. This is done by entering a natural language description of the tool in the Element Documentation field of the task.

Element-documentation-ai-agent-tool

Defining variables

Most tools are going to request specific variables in order to operate. Input variables are defined so that the agent is aware of what’s required to run the tool in question. It also helps pass the given context of the current process to the tool. Output variables define how we map the response from the tool back into the process instance, which means that the Task Agent will be aware of the result of the tool’s execution.

In this case, to properly use this tool, the agent will need to come up with a question.

For a User Task like this we will need to create an input variable like the one you see below.

Local-variable-ai-agent-tool

In this case we created a local variable, techQuestion, directly in the task. To both assign this variable and define it for the Task Agent, we need to call the fromAi function. To do that we must provide:

  1. The location of the variable in question.
    • In this case that would be within the toolCall variable.
  2. A natural language description of what the variable is used for.
    • Here we describe it as the question that needs to be asked.
  3. The variable type.
    • This is a string, but it could be any other primitive variable type.

When all put together, it looks like this:

fromAi(toolCall.techQuestion, "This is a specific question that you’d like to ask", "string")

Next we need an output variable so that the AI agent can be given the context it needs to understand if running this tool produced the output it expected. In this case, we want it to read the answer from the human expert it’s going to consult.

Process-variable-ai-agent-tool

This time, create an output variable. You’ll have two fields to fill in.

  1. Process variable name
    • It’s important that this variable name matches the output expected by the sub-process. The expected name can be found in the output element of the sub-process, and as you can see above, we’ve named our output variable toolCallResult accordingly.
      Output-ai-agent-tool
  2. Variable assignment value
    • This simply takes the expected variable from the tool task and adds it to a new variable that can be placed into the toolCallResult object

So in the end the output variable assignment value should be something like this:

{ "humanAnswer" : humanAnswer}

And that’s it! Now the AI Task Agent knows about this tool, knows what it does and knows what variables are needed in order to get it running. You can repeat this process to give your AI agents access to exactly as many or as few tools as they need to get a job done. The agents will then have the context and access required to autonomously select from the tools you have provided, and you’ll be able to see exactly what choices the agent made in Operate when the task is complete.

All of this is available for you to try out in Camunda today. Learn more about how Camunda approaches agentic orchestration and get started now with a free trial here. For more on getting started with agentic AI, feel free to dig deeper into our approach to AI task agents.

MCP, ACP, and A2A, Oh My! The Growing World of Inter-agent Communication https://camunda.com/blog/2025/05/mcp-acp-a2a-growing-world-inter-agent-communication/ Tue, 20 May 2025 20:03:51 +0000 https://camunda.com/?p=139339 Making sense of the evolving agentic communication landscape: Model Context Protocol, Agent Communication Protocol and Agent2Agent Protocol.

The post MCP, ACP, and A2A, Oh My! The Growing World of Inter-agent Communication appeared first on Camunda.

The AI ecosystem is rapidly evolving from isolated AI models toward multi-agent systems: environments where AI agents must coordinate, communicate, and interoperate efficiently. At Camunda, we see this pattern quickly emerging as organizations are evolving their end-to-end business processes to take advantage of agent-driven automation.

As developers explore how to make multi-agent systems useful and reliable, new communication standards are emerging to address the need for interoperability, security, and shared understanding. Three notable efforts in this domain are:

  • Model Context Protocol (MCP) developed by Anthropic
  • Agent Communication Protocol (ACP) developed by IBM Research
  • Agent2Agent (A2A) Protocol developed by Google and Microsoft

Each targets a specific layer of the multi-agent interaction stack and reflects different philosophical and architectural priorities.

Model Context Protocol (MCP)

Anthropic developed the Model Context Protocol (MCP) to solve a narrow but critical problem: how to give large language models (LLMs) structured context about tools, APIs, and systems they can interact with. MCP focuses on standardizing the input context that LLMs receive before agents execute their tasks. As the Anthropic documentation says, “Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.”

With MCP, tools expose a structured schema such as an OpenAPI or JSON schema along with natural language descriptions. The LLM receives the schema via MCP when it is being prompted to act, ensuring a consistent understanding of available actions.
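As a sketch, an MCP-style tool definition pairs a JSON Schema for the tool's inputs with a natural-language description the LLM can read (field names follow MCP conventions but are illustrative here; consult the MCP specification for the exact wire format, and note the KYC scenario is a hypothetical example):

```python
# Illustrative MCP-style tool definition for a KYC check.
kyc_tool = {
    "name": "run_kyc_check",
    "description": "Perform a Know Your Customer check for an applicant.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "customer_id": {
                "type": "string",
                "description": "Internal identifier of the customer",
            },
        },
        "required": ["customer_id"],
    },
}
```

The model never sees the implementation behind the tool; it reasons only over this schema and description when deciding whether and how to call it.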

As an early example of agent-to-agent communication, MCP has several advantages. It has promise as a way to equip AI agents with structured context to call APIs, tools, and plugins intelligently. It’s an open specification that’s designed to work with any LLM or agentic framework, making it highly flexible. And it’s lightweight and easy for software developers to adopt because it aligns with common software development practices.

However, it’s important to note that MCP is limited to tool and model interactions; it’s not a general-purpose agent protocol. As of now, it doesn’t define inter-agent negotiation or dynamic delegation of tasks.

Example: Customer onboarding in financial services

Imagine a customer onboarding process at a retail bank, which requires validating identity documents, performing Know Your Customer (KYC) checks, interfacing with fraud detection services, and activating new customer accounts. Traditionally, each of these tasks is handled by a separate back-end service with its own API.

Using MCP, an AI-powered onboarding agent could be equipped with structured, real-time context about all of these APIs. MCP ensures the agent understands what each service does, how to call it, and what inputs and outputs to expect, without the need for a developer to hard-code specific logic into the model.

This allows the onboarding agent to dynamically compose API calls, intelligently route requests, and adapt its workflow if a service is temporarily unavailable—all while minimizing human intervention. The result is a faster, more consistent onboarding experience that reduces manual handoffs and operational delays.

Agent Communication Protocol (ACP)

The Agent Communication Protocol (ACP) developed by IBM Research is designed to define how autonomous AI agents communicate with one another, with an emphasis on structured dialogue and coordination across heterogeneous systems. It aims to provide a shared semantic foundation for multi-agent communication, including message types, intents, context markers, and response expectations.

With ACP, agents exchange structured messages that encapsulate intention, task parameters, and context. The protocol enables dynamic negotiation between agents—for example, for delegation or task refinement.

ACP’s strong focus on semantics and interoperability means it has the potential to become a very powerful protocol for high-level coordination of agents (beyond simple messaging). It can facilitate distributed task-solving by autonomous agents that have overlapping goals.

However, a potential hurdle to adoption is that ACP requires agent developers to agree on shared ontologies. This may position it as the protocol of choice for development teams that work for the same company or that work on the same software product. ACP is still in the early stages of development in terms of syntax, implementation, and tooling, so much remains to be seen as it grows.

Example: Supply chain coordination across departments

Consider a global manufacturer with autonomous agents representing procurement, inventory, and logistics. These agents must coordinate continuously to maintain optimal stock levels, anticipate shortages, and reroute shipments when needed. Using ACP, these agents could engage in structured, semantically rich dialogue to negotiate changes in supply orders, reallocate inventory based on real-time demand forecasts, or trigger alerts if a delay will cause cascading disruptions. For example:

  • The procurement agent might notify logistics: “Delay expected from supplier X. Can we reassign delivery Y?”
  • The logistics agent can respond: “Yes, rerouting via warehouse Z. Updating inventory accordingly.”

By using shared ontologies and structured message types, ACP aims to support adaptive, high-fidelity inter-agent collaboration across teams and systems. This is especially valuable in environments where distributed decision-making and resilience are key.
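The negotiation above could be carried by structured messages along these lines (all field names are assumed for illustration only; ACP's concrete syntax and tooling are still taking shape, as noted above):

```python
# Hypothetical ACP-style message: intention, task parameters, and context
# markers wrapped in a single structured envelope.
delay_notice = {
    "sender": "procurement-agent",
    "receiver": "logistics-agent",
    "intent": "request_reassignment",
    "parameters": {"delivery": "Y", "cause": "delay expected from supplier X"},
    "context": {"priority": "high"},
    "expects": "accept_or_counterproposal",
}
```

The point is that the receiving agent can act on the `intent` and `parameters` fields directly, rather than parsing free-form text.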

Agent2Agent Protocol (A2A)

The Agent2Agent Protocol (A2A) being developed by Google with support from Microsoft is another open standard. It’s designed to allow different AI agents from different companies or domains to exchange messages and perform coordinated tasks. Its development was prompted by the growing use of LLM-based agents in workflows that span multiple applications.

In A2A, agents advertise their capabilities using a structured metadata format called “agent cards.” Agents then communicate through signed, structured messages based on a shared schema. A2A includes provisions for trust, routing, and structured memory exchange. This design maximizes the options for composability and cross-platform collaboration between AI agents.

While A2A is a community-driven project, it is supported by two of the largest software companies in the world, both of which are cloud providers and LLM vendors, which may accelerate its development. However, the protocol is still an early-stage alpha with evolving security and governance capabilities, making it difficult for other vendors to start developing with it.

Example: Cross-platform customer support automation

Picture a scenario where a retail company uses Google Workspace, Zendesk, Salesforce, and Microsoft Teams. Different LLM-based agents exist in each environment and perform tasks such as summarizing conversations, logging support tickets, updating customer relationship management (CRM) records, and scheduling follow-ups.

With A2A, these agents can collaborate across platforms:

  • A Google-based agent summarizes a support call and shares the summary with a Salesforce agent
  • The Salesforce agent updates the customer record and flags a follow-up
  • A Microsoft-based assistant sees the flag and books a Teams meeting with the customer

Through agent cards and structured messaging, A2A aims to enable interoperability across agent ecosystems, so tasks can flow fluidly without brittle, point-to-point integrations. This supports consistent, personalized, and efficient customer service—at scale.
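A toy version of the capability discovery that agent cards enable might look like this (card fields are simplified stand-ins, not the actual A2A schema):

```python
def find_agent(capability, cards):
    """Return the name of the first agent whose card advertises the
    requested capability: a simplified stand-in for A2A discovery."""
    for card in cards:
        if capability in card["capabilities"]:
            return card["name"]
    return None

# Illustrative agent cards for the customer-support scenario above.
cards = [
    {"name": "summarizer", "capabilities": ["summarize_call"]},
    {"name": "crm-updater", "capabilities": ["update_record", "flag_follow_up"]},
]
```

Routing by advertised capability, rather than by hard-coded endpoint, is what removes the brittle point-to-point integrations.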

Comparing developing communication protocols

The following table summarizes the current state of MCP, ACP, and A2A:

                 MCP                            ACP                               A2A
Developer        Anthropic                      IBM Research                      Google and Microsoft
Scope            LLM tool context injection     Semantic multi-agent dialogue     Inter-agent message exchange
Openness         Open specification             Conceptual, not yet standardized  Open-source, WIP standard
Primary focus    Structured API/tool input      Intent and coordination           Capability discovery, secure messaging
Best for         Agents interfacing with tools  Complex, interdependent agents    Cross-platform agent workflows
Limitations      No inter-agent messaging       Undefined implementation          Still maturing, needs consensus

As you can see, these three protocols represent complementary approaches to the problem of inter-agent communication. MCP addresses the immediate need to contextualize LLMs effectively; ACP looks further ahead at semantic richness and intent modeling; and A2A targets broad interoperability across agents and platforms.

For developers and organizations building agentic architectures, understanding and experimenting with these protocols will be essential. While no single standard has emerged as dominant, the collective momentum suggests that interoperability and shared context will be key to unlocking the full potential of multi-agent AI.

Camunda can help you operationalize AI agents

Whether you’re new to agentic AI or you’ve already started building agents, Camunda process orchestration and automation can help you put AI into action. To learn about Camunda’s agentic orchestration capabilities, check out our guide, “Why agentic process orchestration belongs in your automation strategy.”

Agentic Orchestration: Automation’s Next Big Shift https://camunda.com/blog/2025/05/agentic-orchestration-automations-next-big-shift/ Wed, 14 May 2025 11:30:00 +0000 https://camunda.com/?p=138589 We've always believed in end-to-end process orchestration. Agentic orchestration lets us take it further, as we design the autonomous, AI-powered organization of the future.

The post Agentic Orchestration: Automation’s Next Big Shift appeared first on Camunda.

Since starting Camunda, we’ve believed in one thing above all: End-to-end process orchestration is the best way to make automation work—across people, systems, and devices.

We’ve seen time and time again that task-based automation might deliver quick wins, but it doesn’t scale. The moment processes get complex, those isolated tools start pulling in different directions. The result? Broken customer experiences. Inefficient teams. A lack of visibility and an inability to improve processes.

That’s the problem we set out to solve back in 2013. And it’s the same problem we continue to solve—only now, the stakes are higher.

AI is changing everything. Nearly every conversation we’re having with customers right now touches on it. According to the 2025 State of Process Orchestration and Automation Report, 84% of organizations want to add more AI capabilities over the next three years. But 85% struggle to make AI actually work at scale.

There are a few reasons why this is happening. First, simply adding AI into an automation strategy doesn’t magically create value. Done incorrectly, it just creates another silo—and yet another layer of technical debt. 

Second, traditional process automation focuses on automating around a set of predetermined rules (or deterministic orchestration). AI presents the opportunity to break those rules by executing processes dynamically.

That’s where agentic orchestration comes in.

Overcoming limitations in traditional process design

Process orchestration as we know it is deterministic, meaning you design processes and define their logic in advance. Sure, it can handle variants, but only if they’re a part of the original process model in BPMN or DMN. What we think of today as a fully automated process, or “straight through processing” (STP), usually relies on this structure.

AI agents make process automation much more dynamic. Dynamic orchestration uses AI to handle “unforeseen” tasks. It orchestrates based on defined goals and a given context, but doesn’t need specific instructions like a deterministic process.

But most business processes are somewhere in the middle. They have some STP in place, but are still using human case management to handle exceptions or tasks without a straightforward action. Agentic orchestration blends deterministic and dynamic orchestration seamlessly.

For example, most of the time, STP is done in seconds or minutes. But sometimes it fails. And when it does, people step in to investigate. It’s slow, messy, and manual. That’s where AI can help. Agentic orchestration takes over when the unexpected happens—analyzing unstructured data, spotting patterns, and suggesting actions.

Image1

Real world examples of agentic orchestration

And here’s where things get really exciting: This isn’t theoretical anymore. It’s real. It’s working. And it’s already creating serious value.

Our partner EY has built a tool for agentic trade reconciliation with Camunda. Reconciliation errors are usually handled manually; these reviews are labor intensive, time-consuming, and error-prone, resulting in a risk of fines. In fact, the world’s largest banks employ up to 25,000 people to review these exceptions. With agentic orchestration, they’re now using AI to suggest the next best action based on trade data and LLMs. That means faster resolution and T+1 compliance. But the most impressive value is in productivity: with agentic trade reconciliation, one employee can now handle far more cases per day on average, resulting in a 7x increase in productivity.

Here’s another example: Payter, a payment terminal business for vending machines, was drowning in case management when payments failed. They have now started using Camunda to blend deterministic process logic with AI agent-driven exception handling. The expected outcome? Resolution times will drop by 50%, from 24 to 12 minutes. Even better? Customer service will improve not just because of the shorter resolution time, but also because employees can now spend more time on complex issues.

Building the autonomous organization of the future

And the examples above are only the beginning. We’re seeing more and more companies wanting to bring more AI into their processes. In order to do that, they’re operationalizing AI in a way that’s composable, scalable, and flexible—not stuck in isolated systems. And Camunda is at the foundation of this shift. We’ve spent over a decade building a platform that does one thing exceptionally well: orchestrate complex, mission-critical processes from end to end.

Now, we’ve taken our powerful orchestration engine and infused it with embedded AI. The result? The ability to blend deterministic and dynamic orchestration in a unified agentic orchestration model—with guardrails, auditability, and control.

Camunda allows users to blend deterministic orchestration (via BPMN) with agentic orchestration (via agents) so you can implement as much or as little AI as you want within guardrails.

What does that mean in practice?

It means you can now:

  • Blend structured BPMN and DMN process modeling with flexible AI agents.
  • Automate what was once “un-automatable” (like complex case management).
  • Inject AI into your legacy systems without a big bang transformation.
  • Use low-code tools and connectors to move fast.
  • Implement AI safely and reliably, with “guardrails” for full auditability and control.

We’re giving you AI-native capabilities, like:

  • Ad-hoc sub-processes: Let agents decide what happens next.
  • Camunda Copilot: Go from a text prompt to a running process.
  • RPA and IDP: Integrated, out-of-the-box, and ready to go.
  • ERP Integration: Orchestrate AI across SAP, ServiceNow and beyond.

Here’s a look into the future: AI agents that get even smarter by working alongside humans—automating more and more over time. Think AI loan specialists that are trained directly from human input.

Our long-term vision hasn’t changed

We’ve always believed in end-to-end process orchestration. What’s different now is how far we can take it. Agentic orchestration brings us closer to a world where AI and humans truly collaborate across systems, teams, and time zones. We’re designing the autonomous, AI-powered organization of the future.

If you’re thinking about bringing agents into your business—this is the moment. With Camunda, you’ve got the foundational technology and the vision to do it right.

The next chapter of automation just started. And I couldn’t be more excited.

Let’s build the future together.

Learn more

You can learn more about our agentic orchestration capabilities here, and if you want to dive deeper, be sure to watch the recording of the keynote from CamundaCon 2025 Amsterdam (available soon).

Intelligent by Design: A Step-by-Step Guide to AI Task Agents in Camunda https://camunda.com/blog/2025/05/step-by-step-guide-ai-task-agents-camunda/ Wed, 14 May 2025 07:00:00 +0000 https://camunda.com/?p=138550 In this step-by-step guide (with video), you'll learn about the latest ways to use agentic ai and take advantage of agentic orchestration with Camunda today.

The post Intelligent by Design: A Step-by-Step Guide to AI Task Agents in Camunda appeared first on Camunda.

Camunda is pleased to announce new features and functionality related to how we offer agentic AI. With this post, we provide detailed step-by-step instructions to use Camunda’s AI Agent to take advantage of agentic orchestration with Camunda.

Note: Camunda also offers an agentic AI blueprint on our marketplace.

Camunda approach to AI agents

Camunda has taken a systemic, future-ready approach for agentic AI by building on the proven foundation of BPMN. At the core of this approach is our use of the BPMN ad-hoc sub-process construct, which allows for tasks to be executed in any order, skipped, or repeated—all determined dynamically at runtime based on the context of the process instance.

This pattern is instrumental in introducing dynamic (non-deterministic) behavior into otherwise deterministic process models. Within Camunda, the ad-hoc sub-process becomes the agent’s decision workspace—a flexible execution container where large language models (LLMs) can assess available actions and determine the most appropriate next steps in real time.

We’ve extended this capability with the introduction of the AI Agent Outbound connector (example blueprint of usage) and the Embeddings Vector Database connector (example blueprint of usage). Together, they enable full-spectrum agentic orchestration, where workflows seamlessly combine deterministic flow control with dynamic, AI-driven decision-making. This dual capability supports both high-volume straight-through processing (STP) and adaptive case management, empowering agents to plan, reason, and collaborate in complex environments. With Camunda’s approach, the AI agents can add additional context for handling exceptions from STP.

This represents our next phase of AI Agent support and we intend to continue adding richer features and capabilities.

Camunda support for agentic AI

To power next-generation automation, Camunda embraces structured orchestration patterns. Camunda’s approach ensures your AI orchestration remains adaptive, goal-oriented, and seamlessly interoperable across complex, distributed systems.

As part of this evolution, Camunda has integrated Retrieval-Augmented Generation (RAG) into its orchestration fabric. RAG enables agents to retrieve relevant external knowledge—such as historical case data or domain-specific content—and use that context to generate more informed and accurate decisions. This is operationalized through durable, event-driven workflows that coordinate retrieval, reasoning, and human collaboration at scale.

Camunda supports this with our new Embeddings Vector Database Outbound connector—a modular component that integrates RAG with long-term memory systems. This connector supports a variety of vector databases, including both Amazon Managed OpenSearch (used in this exercise) and Elasticsearch.

With this setup, agents can inject knowledge into their decision-making loops by retrieving semantically relevant data at runtime. This same mechanism can also be used to update and evolve the knowledge base, enabling self-learning behaviors through continuous feedback.

To complete the agentic stack, Camunda also offers the AI Agent Outbound connector. This connector interfaces with a broad ecosystem of large language models (LLMs) like OpenAI and Anthropic, equipping agents with reasoning capabilities that allow them to autonomously select and execute ad-hoc sub-processes. These agents evaluate the current process context, determine which tasks are most relevant, and act accordingly—all within the governed boundaries of a BPMN-modeled orchestration.

How this applies to our exercise

Before we step through an exercise, let’s review a quick explanation about how these new components and Camunda’s approach will be used in this example and in your agentic AI orchestration.

The first key component is the AI Task Agent. It is the brains behind the operations. You give this agent a goal, instructions, limits and its chain of thought so it can make decisions on how to accomplish the set goal.

The second component is the ad-hoc sub-process. This encompasses the various tools and tasks that can be performed to accomplish the goal.

A prompt is provided to the AI Agent, which decides which tools should be run to accomplish the goal. After the tools run, the agent reevaluates the goal against the information returned from the ad-hoc sub-process and determines which of these tools, if any, are needed again; once no further tools are needed, the process ends.
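The loop just described (prompt in, pick a tool, reevaluate, stop when done) can be sketched in a few lines of Python. This is illustrative pseudologic, not Camunda's engine; in Camunda the `decide` step is the LLM-backed AI Task Agent and the tools live in the ad-hoc sub-process:

```python
def agent_loop(goal, tools, decide):
    """Run tools until decide(goal, results) returns None.
    decide names the next tool to run, or None once the goal is met."""
    results = []
    while (tool := decide(goal, results)) is not None:
        results.append(tools[tool]())  # execute the chosen tool
    return results

# Toy stand-ins: one tool, and a decider that runs it exactly once.
toy_tools = {"send_message": lambda: "message delivered"}

def toy_decide(goal, results):
    return "send_message" if not results else None
```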

Now armed with this information, we can get into our example and what you are going to build today.

Example overview

This BPMN process defines a message delivery service for the Hawk Emporium where AI-powered task agents make real-time decisions to interpret customer requests and select the optimal communication channels for message delivery.

Our example model for this process is the Message Delivery Service as shown below.

Message-delivery-service-agentic-orchestration

The process begins with user input filling out a form including a message, the desired individual(s) to send it to, and the sender. Based on this input, a script task generates a prompt to send to the AI Task Agent. The AI Task Agent processes the generated prompt and determines the appropriate tasks to execute. Based on the AI Agent’s decision, the process either ends or continues to refine the result using various tools until the message is delivered.

The tasks that can be performed are located in the ad-hoc sub-process and are:

  1. Send a Slack message (Send Slack Message) to specific Slack channels,
  2. Send an email message (Send an Email) using SendGrid,
  3. Request additional information (Ask an Expert) with a User Task and corresponding form.

If the AI Task Agent has all the information it needs to generate, send and deliver the message, it will send the message via the correct tool for the request. If the AI Agent determines it needs additional information, such as a missing email address or the tone of the message, the agent will route the process instance to a human for that information.

The process completes when no further action is required.

Process breakdown

Let’s take a little deeper dive on the components of the BPMN process before jumping in to build and execute it.

AI Task Agent

The AI Task Agent for this exercise uses Amazon Bedrock’s Claude 3 Sonnet model for processing requests. The agent decides which tools to use based on the context. You can alternatively use Anthropic or OpenAI.

SendGrid

For the email message task, you will be sending email as community@camunda.com. Please note that if you use your own SendGrid account, the sender address will instead be the email address configured for that account.

Slack

For the Slack message task, you will need to create the following channels in your Slack organization:

  • #good-news
  • #bad-news
  • #other-news

Assumptions, prerequisites, and initial configuration

A few assumptions are made for those using this step-by-step guide to implement their first agentic AI process with Camunda’s new agentic AI features. These are outlined in this section.

The proper environment

In order to take advantage of the latest and greatest functionality provided by Camunda, you will need to have a Camunda 8.8-alpha4 cluster or higher available for use. You will be using Web Modeler and Forms to create your model and human task interface, and then Tasklist when executing the process.

Required skills

It is assumed that those using this guide have the following skills with Camunda:

  • Form Editor – the ability to create forms for use in a process.
  • Web Modeler – the ability to create elements in BPMN and connect elements together properly, link forms, and update properties for connectors.
  • Tasklist – the ability to open items and act upon them accordingly as well as starting processes.
  • Operate – the ability to monitor processes in flight and review variables, paths and loops taken by the process instance.

Video tutorial

Accompanying this guide, we have created a step-by-step video tutorial for you. The steps provided in this guide closely mirror the steps taken in the video tutorial. We have also provided a GitHub repository with the assets used in this exercise. 

Connector keys and secrets

If you do not have existing accounts for the connectors that will be used, you can create them.

You will need an AWS account with the proper credentials for AWS Bedrock. If you do not have this, you can follow the instructions on the AWS site to accomplish this and obtain the required keys:

  • AWS Region
  • AWS Access key
  • AWS Secret key

You will also need a SendGrid account and a Slack organization. You will need to obtain an API key for each service which will be used in the Camunda Console to create your secrets.

Secrets

The secrets will be referenced in your model using {{secrets.yourSecretHere}} where yourSecretHere represents the name of your connector secret.

For this example to work you’ll need to create secrets with the following names if you use our example and follow the screenshots provided:

  • SendGrid
  • Slack
  • AWS_SECRET_KEY
  • AWS_ACCESS_KEY
  • AWS_REGION

Separating sensitive information from the process model is a best practice. Since we will be using a few connectors in this model, you will need to create the appropriate connector secrets within your cluster. You can follow the instructions provided in our documentation to learn about how to create secrets within your cluster.

Now that you have all the background, let’s jump right in and build the process.

Note: Don’t forget you can download the model and assets from the GitHub repository.

Overview of the step-by-step guide

For this exercise, we will take the following steps:

  • Create the initial high-level process in design mode.
    • Create the ad-hoc sub-process of AI Task Agent elements.
  • Implement the process.
    • Configure the connectors.
      • Configure the AI Agent connector.
      • Configure the Slack connector.
    • Create the starting form.
    • Configure the AI Task Agent.
    • Update the gateways for routing.
    • Configure the ad-hoc sub-process.
    • Connect the ad-hoc sub-process and the AI Task Agent.
  • Deploy and run the process.
  • Enhance the process, then deploy and run again.

Build your initial process

Create your process application

The first step is to create a process application for your process model and any other associated assets. Create a new project using the blue button at the top right of your Modeler environment.

Build-process

Enter the name for your project. In this case we have used the name “AI Task Agent Tutorial” as shown below.

Process-name

Next, create your process application using the blue button provided.

Enter the name of your process application, in this example “AI Task Agent Tutorial,” select the Camunda 8.8-alpha4 (or greater) cluster that you will be using for your project, and select Create to create the application within this project.

Initial model

The next step is to build your process model in BPMN and the appropriate forms for any human tasks. We will be building the model represented below.

Message-delivery-service-agentic-orchestration

Click on the process “AI Agent Tutorial” to open the diagram. First, change the name of your process to “Message Delivery Service” and then switch to Design mode as shown below.

Design-mode

These steps will help you create your initial model.

  1. Name your start event. We have called it “Message needs to be sent” as shown below. This start event will have a form front-end that we will build a bit later.
    Start-event

  2. Add an end event and call it “Message delivered.”
    End-event

  3. The step following the start event will be a script task called “Create Prompt.” This task will be used to hold the prompt for the AI Task Agent.
    Script-task

  4. Now we want to create the AI Task Agent. We will build out this step later after building our process diagram.
    Ai-agent

Create the ad-hoc sub-process

Now we are at the point in our process where we want to create the ad-hoc sub-process that will hold our toolbox for the AI Task Agent to use to achieve the goal.

  1. Drag and drop the proper element from the palette for an expanded subprocess.
    Sub-process


    Your process will now look something like this.
    Sub-process-2

  2. Now this is a standard sub-process, which we can see because it has a start event. We need to remove the start event and then change the element to an “Ad-hoc sub-process.”
    Ad-hoc sub-process

    Once the type of sub-process is changed, you will see the BPMN symbol (~) in the subprocess denoting it is an ad-hoc sub-process.
  3. Now you want to change this to a “Parallel multi-instance” so the elements in the sub-process can be run more than once, if required.
    Parallel multi-instance


    This is the key to our process, as the ad-hoc sub-process will contain a set of tools that may or may not be activated to accomplish the goal. Although BPMN is usually very strict about what gets activated, this construct allows us to control what gets triggered by what is passed to the sub-process.
  4. After the AI Task Agent executes, we need a decision that properly routes the process instance back through the toolbox, if required. So, add a mutually exclusive gateway between the AI Task Agent and the end event, as shown below, and call it “Should I run more tools?”
    Run-tools

  5. Now connect that gateway to the right-hand side of your ad-hoc sub-process.
    Connect-to-ad-hoc-sub-process

  6. If no further tools are required, we want to end this process. If there are, we want to go back to the ad-hoc sub-process. Label the route to the end event as “No” and the route to the sub-process as “Yes” to route appropriately.
    Label-paths

  7. Take a little time to expand the physical size of the sub-process as we will be adding elements into it.
  8. We are going to start by just adding a single task for sending a Slack message.
    Slack-message

  9. Now we need to create the gateway to loop back to the AI Task Agent to evaluate if the goal has been accomplished. Add a mutually exclusive gateway after the “Create Prompt” task with an exit route from the ad-hoc sub-process to the gateway.
    Loop-gateway

Implement your initial process

We will now move into setting up the details for each construct to implement the model, so switch to the Implement tab in your Web Modeler.

Configure remaining tasks

The next thing you want to do in implementation mode is to use the correct task types for the constructs that are currently using a blank task type.

AI Agent connector

First we will update the AI Task Agent to use the proper connector.

  1. Confirm that you are using the proper cluster version. You can do this on the lower right-hand side of Web Modeler; be sure to select a cluster that is 8.8-alpha4 or higher.
    Zeebe-88-cluster

  2. Now select the AI Task Agent and choose to change the element to “Agentic AI Connector” as shown below.
    Agentic-ai-connector-camunda


    This will change the icon on your task agent to look like the one below.
    Agentic-ai-connector-camunda-2

Slack connector

  1. Select the “Send a Slack Message” task inside the ad-hoc sub-process and change the element to the Slack Outbound Connector.
    Slack-connector

Create the starting form

Let’s start by creating a form to kick off the process.

Note: If you do not want to create the form from scratch, simply download the forms from the GitHub repository provided. To build your own, follow these instructions.

The initial form is required to ask the user:

  • Which individuals at Hawk Emporium should receive the message
  • What the message will say
  • Who is sending the message

The completed form should look something like this.

Form

To enter the Form Builder, select the start event, click the chain link icon and select + Create new form.

Start by creating a Text View for the title and enter the text “# What do you want to Say?” in the Text field on the component properties.

You will need the following fields on this form:

| Field | Type | Description | Req? | Key |
| --- | --- | --- | --- | --- |
| To whom does this concern? | Text | | Y | person |
| What do you want to say? | Text | | Y | message |
| Who are you? | Text | | Y | sender |

Once you have completed your form, click Go to Diagram -> to return to your model.

Create the prompt

Now we want to generate the prompt that will be used in our script task to tell the AI Task Agent what needs to be done.

  1. Select the “Create Prompt” script task and update the properties starting with the “Implementation” type which will be set to “FEEL expression.”

    This action will open two additional required variables: Result variable and FEEL expression.
  2. For the “Result” variable, you will create the variable for the prompt, so enter prompt here.
  3. For the FEEL expression, you will want to create your prompt.
    "I have a message from " + sender + " they would like to convey the following message: " + message + " It is intended for " + person

    Feel-prompt-message
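To make this concrete, here is how the expression evaluates with some hypothetical form values (the variable names come from the form keys defined earlier):

```feel
// Hypothetical form input:
//   sender  = "Joyce"
//   message = "Please lock the cages."
//   person  = "Everyone at the Hawk Emporium"
"I have a message from " + sender + " they would like to convey the following message: " + message + " It is intended for " + person
// Result:
// "I have a message from Joyce they would like to convey the following message: Please lock the cages. It is intended for Everyone at the Hawk Emporium"
```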

Configure the AI Task Agent

Now we need to configure the brains of our operation, the AI Task Agent. This task takes care of accepting the prompt and sending the request to the LLM to determine next steps. In this section, we will configure this agent with specific variables and values based on our model and using some default values where appropriate.

  1. First, we need to pick the “Model Provider” that we will use for our exercise, so we are selecting “AWS Bedrock.”
    Agentic-ai-connector-properties-camunda


    Additional fields specific to this model will open in the properties panel for input.
  2. The next field is the ”Region” for AWS. In this case, a secret was created for the region (AWS_REGION) which will be used in this field.
    Agentic-ai-connector-properties-camunda-2

    Remember the secrets will be referenced in your model using {{secrets.yourSecretHere}} where yourSecretHere represents the name of your connector secret.

    Note: See the Connector and secrets section in this blog for more information on what is required, the importance of protecting these keys, and how to create the secrets.
  3. Now we want to update the authorization credentials with our AWS Access Key and our AWS Secret key from our connector secrets.
    Agentic-ai-connector-properties-camunda-3

  4. The next part is to set the Agent Context in the “Memory” section of your task. This variable is very important as you can see by the text underneath the variable box.

    The agent context variable contains all relevant data for the agent to support the feedback loop between user requests, tool calls and LLM responses. Make sure this variable points to the context variable which is returned from the agent response.

    In this case, we will be creating a variable called  agent and in that variable there is another variable called context, so for this field, we will use the variable agent.context. This variable will play an important part in this process.

    Agentic-ai-connector-properties-camunda-4

    We will leave the maximum messages at 20, which is a sensible limit.
  5. Now we will update the system prompt. For this, we have provided a detailed system prompt for you to use for this exercise. You are welcome to create your own. It will be entered in the “System Prompt” section for the “System Prompt” variable.

    Hint: If you are creating your own prompt, try taking advantage of tools like ChatGPT or other AI tools to help you build a strong prompt. For more on prompt engineering, you can also check out this blog series.

    Agentic-ai-connector-properties-camunda-system-prompt

    If you want to copy and paste in the prompt, you can use the code below:
You are **TaskAgent**, a helpful, generic chat agent that can handle a wide variety of customer requests using your own domain knowledge **and** any tools explicitly provided to you at runtime.

────────────────────────────────
# 0. CONTEXT — WHO IS “USER”?
────────────────────────────────
• **Every incoming user message is from the customer.**  
• Treat “user” and “customer” as the same person throughout the conversation.  
• Internal staff or experts communicate only through the expert-communication tool(s).

────────────────────────────────
# 1. MANDATORY TOOL-DRIVEN WORKFLOW
────────────────────────────────
For **every** customer request, follow this exact sequence:

1. **Inspect** the full list of available tools.  
2. **Evaluate** each tool’s relevance.  
3. **Invoke at least one relevant tool** *before* replying to the customer.  
   • Call the same tool multiple times with different inputs if useful.  
   • If no domain-specific tool fits, you **must**  
     a. call a generic search / knowledge-retrieval tool **or**  
     b. escalate via the expert-communication tool (e.g. `ask_expert`, `escalate_expert`).  
   • Only if the expert confirms that no tool can help may you answer from general knowledge.  
   • Any decision to skip a potentially helpful tool must be justified inside `<reflection>`.  
4. **Communication mandate**:  
   • To gather more information from the **customer**, call the *customer-communication tool* (e.g. `ask_customer`, `send_customer_msg`).  
   • To seek guidance from an **expert**, call the *expert-communication tool*.  
5. **Never** invent or call tools that are not in the supplied list.  
6. After exhausting every relevant tool—and expert escalation if required—if you still cannot help, reply exactly with  
   `ERROR: <brief explanation>`.

────────────────────────────────
# 2. DATA PRIVACY & LOOKUPS
────────────────────────────────
When real-person data or contact details are involved, do **not** fabricate information.  
Use the appropriate lookup tools; if data cannot be retrieved, reply with the standard error message above.

────────────────────────────────
# 3. CHAIN-OF-THOUGHT FORMAT  (MANDATORY BEFORE EVERY TOOL CALL)
────────────────────────────────
Wrap minimal, inspectable reasoning in *exactly* this XML template:

<thinking>
  <context>…briefly state the customer’s need and current state…</context>
  <reflection>…list candidate tools, justify which you will call next and why…</reflection>
</thinking>

Reveal **no** additional private reasoning outside these tags.

────────────────────────────────
# 4. SATISFACTION CONFIRMATION, FINAL EMAIL & TASK RESOLUTION
────────────────────────────────
A. When you believe the request is fulfilled, end your reply with a confirmation question such as  
   “Does this fully resolve your issue?”  
B. If the customer answers positively (e.g. “yes”, “that’s perfect”, “thanks”):  
   1. **Immediately call** the designated email-delivery tool (e.g. `send_email`, `send_customer_msg`) with an appropriate subject and body that contains the final solution.  
   2. After that tool call, your *next* chat message must contain **only** this word:  
      RESOLVED  
C. If the customer’s very next message already expresses satisfaction without the confirmation question, do step B immediately.  
D. Never append anything after “RESOLVED”.  
E. If no email-delivery tool exists, escalate to the expert-communication tool; if the expert confirms none exists, reply with an error as described in §1-6.
  1. Remember that in the Create Prompt task, we stored the prompt in a variable called prompt. We will use this variable in the “User Prompt” field.
    Image54

  2. The key to this step is the set of tools at the disposal of the AI Task Agent, so we need to link the agent to the ad-hoc sub-process. We do this by mapping the ID of the sub-process to the proper tools field in the AI Task Agent.
    1. Start by selecting your ad-hoc sub-process and giving it a name and an ID. In the example, we will use “Hawk Tools” for the name and hawkTools for the “ID.”
      Link-agent-to-ad-hoc-sub-process-camunda-1

    2. Go back to the AI Task Agent and update the “Ad-hoc subprocess ID” to hawkTools for the ID of the sub-process.
      Link-agent-to-ad-hoc-sub-process-camunda-2

    3. Now we need a variable to store the results from calling the toolbox to place in the “Tool Call Results” variable field. We will use toolCallResults.
      Link-agent-to-ad-hoc-sub-process-camunda-3

    4. There are several other parameters of importance, and we will use the defaults for most of them. We will leave “Maximum model calls” in the “Limits” section set at “10,” capping the number of model calls at 10. This is important for cost control.
      Link-agent-to-ad-hoc-sub-process-camunda-4

    5. There are additional parameters to help provide constraints around the results. Update these as shown below.
      Link-agent-to-ad-hoc-sub-process-camunda-5

    6. Now we need to update the “Output Mapping” section, first the “Result variable” which is where we are going to use our agent variable that will contain all the components of the result including the train of thought taken by the AI Task Agent.
      Link-agent-to-ad-hoc-sub-process-camunda-6

Congratulations, you have completed the configuration of your AI Task Agent. Now we just need to make some final connections and updates before we can see this running in action.

Gateway updates

We are going to use the variable values from the AI Task Agent to determine if we need to run more tools.

  1. Select the “Yes” path and add the following:
    not(agent.toolCalls = null) and count(agent.toolCalls) > 0
    Flow-condition

  2. For the “No” path, we will make this our default flow.
    Default-flow
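To see how the “Yes” condition behaves, here are a few hypothetical values of agent.toolCalls and the result of the expression:

```feel
// not(agent.toolCalls = null) and count(agent.toolCalls) > 0
//
// agent.toolCalls = null              → false (take the default "No" flow)
// agent.toolCalls = []                → false (no tools requested)
// agent.toolCalls = [{_meta: {...}}]  → true  (loop back into the toolbox)
```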

Ad-hoc sub-process final details

We first need to provide the input collection of tools for the sub-process to use, and we do that by updating the “Input collection” in the “Multi-instance” section.

  1. We will then provide each individual “Input element” with the single toolCall.
    Toolcall-toolcallresults
  2. We will then update the “Output Collection” to our result variable, toolCallResults.
    Toolcall-toolcallresults

  3. Finally, we want to create a FEEL expression for our “Output element” as shown below.
    {
      id: toolCall._meta.id,
      name: toolCall._meta.name,
      content: toolCallResult
    }
    Output-element


    This expression provides the id, name and content for each tool.
  4. Finally, we need to provide the variable for the “Active elements” for the “Active elements collection” showing which element is active in the sub-process.
    [toolCall._meta.name]
    Active-element

    To better explain this, the AI Task Agent determines a list of elements (tools) to run and this variable represents which element gets activated in this instance.
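Putting these multi-instance settings together, the configuration is roughly the sketch below. Note that the “Input collection” value shown here (agent.toolCalls) is an assumption, chosen to be consistent with the gateway condition used earlier in this guide:

```feel
// Multi-instance configuration (sketch):
//   Input collection:           agent.toolCalls
//   Input element:              toolCall
//   Output collection:          toolCallResults
//   Active elements collection: [toolCall._meta.name]
//
// Output element — one result entry per activated tool:
{
  id: toolCall._meta.id,
  name: toolCall._meta.name,
  content: toolCallResult
}
```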

Connect sub-process elements and the AI Task Agent

Now, how do we tell the agent that it has access to the tools in the ad-hoc subprocess?

  1. First of all, we are going to use the “Element Documentation” field to help us connect these together. We will add some descriptive text about the element’s job. In this case, we will be using:
    This can send a slack message to everyone at Niall's Hawk Emporium
    Element-documentation

Now we need to provide the Slack connector with the message to send and what channel to send that message on.

  1. We need to use a FEEL expression for our message that takes advantage of the fromAi keyword, adding some descriptive guidance within the expression. Something like this:
    fromAi(toolCall.slackMessage, "This is the message to be sent to slack, always good to include emojis")
    Message


    Notice that we have used our toolCall variable again and told the AI that it needs to provide a variable called slackMessage.
  2. We also need to explain to the AI which channel is appropriate for the type of message being sent. Remember that we provided three (3) different channels in our Slack organization. We will use another FEEL expression to provide guidance on the channel that should be used.
    fromAi(toolCall.slackChannel, "There are 3 channels to use they are called, '#good-news', '#bad-news' and '#other-news'. Their names are self explanatory and depending on the type of message you want to send, you should use one of the 3 options. Make sure you  use the exact name of the channel only.")
    Channels

  3. Finally, be sure to add your secret for “Authentication” for Slack in the “OAuth token” field. In our case this is:
    {{secrets.Slack}}
    Secrets
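The fromAi calls above follow a general pattern: the first argument names the variable the agent must supply, the second is a natural-language description for the LLM, and an optional third argument (used later in this guide for the expert question) is a type hint:

```feel
// fromAi(<toolCall.variable>, <description for the LLM>)
fromAi(toolCall.slackChannel, "Use '#good-news', '#bad-news' or '#other-news' depending on the message type.")

// With an optional type hint as a third argument:
fromAi(toolCall.aiquestion, "The question you want to ask our expert.", "string")
```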

Well, you did it! You should now have a working process model that uses an AI Task Agent to determine which elements in its toolbox can help it achieve its goal. Now you just need to deploy it and see it in action.

Deploy and run your model

Now we need to see if our model will deploy. If you haven’t already, you might want to give your process a better name and ID, something like what is shown below.

Name-process
  1. Click Deploy and your process should deploy to the selected cluster.
    Deploy-agentic-ai-process-camunda

  2. Go to Tasklist and Processes, find your process called “Message System,” and start it by clicking the blue Start Process button.
    Start-process
  3. You will be presented with the form you created so that you can enter who you are, the message content and who should receive the message. Enter the following for the fields:
    • To whom does this concern?
      Everyone at the Hawk Emporium
    • What do you want to say?
      We have a serious problem. Hawks are escaping. Please be sure to lock cages. Can you make sure this issue is taken more seriously?
    • Who are you?
      Joyce, assistant to Niall - Owner, Hawk Emporium
      Or enter anything you want for this.

Your completed form should look something like the one shown below.

Form

The process is now running and should post a Slack message to the appropriate channel, so open your Slack application.

  1. We can assume that this would likely be a “bad news” message, so let’s review our Slack channels and see if something comes to the #bad-news channel. You should see a message that might appear like this one.
    Ai-results-slack

  2. Open Camunda Operate and locate your process instance. It should look something like that seen below.
    Camunda-operate-check

  3. You can review the execution and see what took place and the variable values.
    Camunda-operate-check-details

You have successfully executed your first AI Task Agent and its associated tools, but let’s take this a step further and add a few additional options for the agent to use when trying to achieve its “send message” goal.

Add tasks to the toolbox

Let’s give our AI Task Agent a few more options to help it accomplish its goal of sending the proper message. To do that, we will now add a couple more tools to our ad-hoc sub-process.

Add a human task

The first thing we want to do is add a human task as an option.

  1. Drag another task into your ad-hoc sub-process and call it “Ask an Expert”.
  2. Change the element type to a “User Task.” The result should look something like this.
    Add-tasks


    Now we need to connect this to our sub-process and provide it as an option to the AI Task Agent.
  3. Update the “Element Documentation” field with the information about this particular element. Something like:
    If you need some additional information that would help you with your request, you can ask this expert.
    Element-documentation-user-task

  4. We will need to provide the expert with some inputs, so hover over the + and click Create+ to create a new input variable.
  5. For the “Local variable name” use aiquestion, and then use a FEEL expression for the “Variable assigned value,” following the same pattern we used before with the fromAi function.
    fromAi(toolCall.aiquestion, "Add here the question you want to ask our expert. Keep it short and be friendly", "string")
    User-task-inputs

  6. In this case, we need to see the response from the expert so that the AI Task Agent can use this information to determine how to achieve our goal. So add an output variable called toolCallResult; we will provide the answer using the following FEEL context in the “Variable assignment value.”
    {
      "Personal_info_response": humanAnswer
    }

    Your output variable section should now look like that shown below.
    User-task-output

  7. Now we need to create a form for this user task to display the question and give the user a place to enter their response to the question. Select the “Ask an Expert” task and choose the link icon and then click on the + Create new form from the dialog.
    Add-form
         
    New-form

  8. The form we need to build will look something like this:
    Question-from-ai


    Start by creating a Text View for the title and enter the text “# Question from AI” in the Text field on the component properties.

    You will need the following fields on this form:
| Field | Type | Description | Req? | Key |
| --- | --- | --- | --- | --- |
| {{aiquestion}} | Text view | | N | |
| Answer | Text area | | Y | humanAnswer |

The Text view field for the question displays the value of the aiquestion variable passed to this task, and the Answer text area gives the expert a place to respond.

Once you have completed your form, click Go to Diagram -> to return to your model.

Because we have already connected the AI Task Agent to the ad-hoc sub-process and the tools it can use, we do not have to provide more at this step.

Optional: Send an email

If you have a SendGrid account and key, you can complete the steps below, but if you do not, you can just keep two elements in your ad-hoc sub-process for this exercise.

  1. Create one more task in your ad-hoc sub-process and call it “Send an Email.”
  2. Change the task type to use the SendGrid Outbound Connector.
  3. Enter your secret for the SendGrid API key using the format previously discussed.

    Remember the secrets will be referenced in your model using {{secrets.yourSecretHere}} where yourSecretHere represents the name of your connector secret. In this case, we have used:
    {{secrets.SendGrid}}
  4. You will need to provide the reason the AI Task Agent might want to use this element in the Element documentation. The text below can be used.
    This is a service that lets you send an email to someone.
    Email

  5. For the Sender “Name,” you want to use the information provided to the AI Task Agent about the person who is requesting the message be sent. We do this with the following expression.
    fromAi(toolCall.emailSenderName, "This is the name of the person sending the email")

    In our case, the outgoing “Email address” is “community@camunda.com” which we also need to add to the “Sender” section of the connector properties. You will want to use the email address for your own SendGrid configuration.
    Sender-name-fromai


    Note: Don’t forget to click the fx icon before entering your expressions.
  6. For the “Receiver,” we also will use information provided to the AI Task Agent about who should receive the message. For the “Name”, we can use this expression:
    fromAi(toolCall.emailReceiveName, "This is the name of the person getting the email")

    For the Email address, we will need to make sure that the AI Task Agent knows the email address for the intended individual(s) for the message.
    fromAi(toolCall.emailReceiveAddress, "This is the email address of the person you want to send an email to, make very sure that if you use this that the email address is correctly formatted you also should be completely sure that the email is correct. Don't send an email unless you're sure it's going to the right person")

    Your properties should now look something like this.
    Receiver-name-fromai

  7. Select “Simple (no dynamic template)” for the “Mail contents” property in the “Compose email” section.
  8. In the “Compose email” section for the subject, we will let the AI Task Agent determine the best subject for the email, so this text will provide that to the process.
    fromAi(toolCall.emailSubject, "Subject of the email to be sent")
  9. The AI Task Agent will determine the email message body as well with the following:
    fromAi(toolCall.emailBody, "Body of the email to be sent")

    Your properties should look something like this.
    Properties-fromai
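Taken together, the SendGrid connector fields driven by the agent can be summarized in this sketch (the descriptions are abbreviated from the expressions above):

```feel
// SendGrid connector — AI-supplied fields (sketch):
//   Sender name:    fromAi(toolCall.emailSenderName, "Name of the person sending the email")
//   Receiver name:  fromAi(toolCall.emailReceiveName, "Name of the person getting the email")
//   Receiver email: fromAi(toolCall.emailReceiveAddress, "Verified email address of the recipient")
//   Subject:        fromAi(toolCall.emailSubject, "Subject of the email to be sent")
//   Body:           fromAi(toolCall.emailBody, "Body of the email to be sent")
```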

That should do it. You now have three (3) elements or tools for your AI Task Agent to use in order to achieve the goal of sending a message for you.

Deploy and run again

Now that you have more options for the AI Task Agent, let’s try running this again. However, we are going to make an attempt to have the AI Task Agent use the human task to show how this might work.

  1. Deploy your newly updated process as you did before.
  2. Go to Tasklist and Processes, find your process called “Message System,” and start it by clicking the blue button.
    Start-process
  3. You will be presented with the form you created so that you can enter who you are, the message content, and who should receive the message. Enter the following for the fields:
    • To whom does this concern?
      I want to send this to Reb Brown. But only if he is working today. So, find that out.
    • What do you want to say?
      Can you please stop feeding the hawks chocolate? It is not healthy.
    • Who are you?
      Joyce, assistant to Niall - Owner, Hawk Emporium
      Or enter anything you want for this.

Your completed form should look something like the one shown below.

New-form-to-user-task-from-ai

The process is now running.

  1. Open Camunda Operate and locate your process instance. It should look something like that seen below.
    Camunda-operate-check-again

  2. You can review the execution and see what took place and the variable values.
  3. If you then access Tasklist and select the Tasks tab, you should have an “Ask an Expert” task asking you if Reb Brown is working today. Respond as follows:
    He is working today, but it’s also his birthday, so it would be nice to let him know the important message with a happy birthday as well.

    What-ai-asked-and-user-answer

  4. In Operate, you will see that the process instance has looped around with this additional information.
    Camunda-operate-check-details-again


    You can also toggle the “Show Execution Count” to see how many times each element in the process was executed.
    Camunda-operate-execution-count

  5. Now open your Slack application and you should have a message now that the AI Task Agent knows that not only is Reb Brown working, but it is his birthday.
    Ai-message

Congratulations! You have successfully executed your first AI Task Agent along with the tasks and elements associated with it.

We encourage you to add more tools to the ad-hoc sub-process to continue to enhance your AI Task Agent process. Have fun!

Congratulations!

You did it! You completed building an AI Agent in Camunda from start to finish including running through the process to see the results. You can try different data in the initial form and see what happens with new variables. Don’t forget to watch the accompanying step-by-step video tutorial if you haven’t already done so.

The post Intelligent by Design: A Step-by-Step Guide to AI Task Agents in Camunda appeared first on Camunda.

Building Trustworthy AI Agents: How Camunda Aligns with Industry Best Practices https://camunda.com/blog/2025/05/ai-agent-design-patterns-in-camunda/ Fri, 09 May 2025 00:08:15 +0000 https://camunda.com/?p=137829 Build, deploy, and scale AI agents with an enterprise-ready framework that balances automation, control, speed, safety, complexity, and clarity.

The rapid evolution of AI agents has triggered an industry-wide focus on design patterns that ensure reliability, safety, and scalability. Two major players—OpenAI and Anthropic—have each published detailed guidance on building effective AI agents. Camunda’s own approach to agentic orchestration shows how an enterprise-ready solution can embody these best practices.

Let’s take a look at how Camunda’s AI agent implementation aligns with the recommendations from OpenAI and Anthropic, and why this matters for enterprise success.

Clear task boundaries and explicit handoffs

Both Anthropic and OpenAI stress the importance of defining clear task boundaries for agents. According to Anthropic’s recommendations, ambiguity in agent responsibilities often leads to unpredictable behavior and systemic errors. OpenAI similarly highlights that agents should have narrowly scoped responsibilities to ensure predictability and reliability.

At Camunda, we address this by orchestrating agents through BPMN workflows. Each agent’s task is represented as a discrete service task with well-defined inputs and expected outputs. For example, in our example agent implementation, an email is sent only after a Generate Email Inquiry task completes its work and delivers validated output. This sequencing ensures that each agent knows precisely when to act, what data it receives, and what deliverables it is accountable for, thereby minimizing risks of cascading failures.

By visualizing these handoffs in BPMN diagrams, stakeholders across technical and nontechnical domains can easily understand the agent responsibilities, audit workflows, and troubleshoot when necessary.

AI agent inserted into BPMN diagram for process visibility

Narrow scope with composable capabilities

OpenAI’s guide highlights the benefits of agents that are designed with specialized, narrow scopes, which can then be composed into larger systems for more complex tasks. Anthropic echoes this, suggesting that mega-agents often become unwieldy and hard to trust.

Camunda’s architecture embraces this philosophy through microservices-style orchestration. Each AI agent within Camunda focuses on mastering a single task—for instance, information retrieval, natural language generation, decision support, or classification. These specialized agents can then be strung together through BPMN models to create sophisticated end-to-end business processes.

Let’s look at a practical example.

In an insurance claims process, Camunda orchestrates a Document Extraction agent to pull key fields, a Fraud Detection agent to assess risk, and a Claims Decision agent to recommend next steps. Each agent operates independently yet collaboratively, enhancing system resilience and allowing incremental upgrades without overhauling the entire workflow.

AI agents working together with their separate tasks
Each agent has its own limited set of specialized tasks, with the ability to compose tasks together within agents.

Monitoring, error handling, and human-in-the-loop

Both OpenAI and Anthropic emphasize that no agent should operate without proper supervision mechanisms. Agents must report their states, signal when they encounter issues, and escalate gracefully to human overseers.

Camunda is particularly strong in this area thanks to our suite of tools like Operate, Optimize, and Tasklist. Here’s how we achieve enterprise-grade monitoring and human-in-the-loop design:

  • Full observability: Camunda Operate provides real-time visibility into every process instance, showing exactly which agent did what, when, and with what outcome.
  • Error boundaries and fallbacks: BPMN error events and boundary timers allow processes to anticipate common failures (like timeouts or bad data) and take corrective actions, such as retrying, skipping, or escalating to a human operator.
  • Seamless human escalation: When agents cannot confidently complete a task—for example, due to ambiguity or ethical concerns—Camunda can dynamically activate a human task, prompting a person to step in, review, and make decisions.

In a future release—the 8.8 release scheduled for October—Camunda is taking this one step further by connecting these features directly to the agent. Failed tasks will automatically trigger the agent to reevaluate the prompt, allowing the agent to respond dynamically as the environment changes. Operate will provide real-time visibility into the agent, allowing seamless human escalation and recovery.

These capabilities ensure that agents augment rather than replace human judgment, a key principle recommended by both OpenAI and Anthropic.

Composability and reusability

Anthropic strongly recommends composable agent architectures to allow rapid iteration and minimize technical debt. Composable systems are more adaptable, easier to troubleshoot, and more cost-effective to maintain.

Camunda’s approach to process design aligns perfectly with this recommendation. Our BPMN models are built around modularity, enabling teams to:

  • Swap out individual agents without rewriting the entire workflow
  • Reuse standard subprocesses across different projects
  • Version-control agent behaviors separately, making it easy to A/B test and roll back changes

Drawing from IBM’s insights on agent design, Camunda’s platform allows enterprises to build libraries of reusable agent modules. These can be assembled like building blocks to rapidly create new processes or modify existing ones, significantly accelerating innovation cycles.

Transparent orchestration and explainability

OpenAI’s guide makes it clear: trustworthy AI systems must provide explainable decision pathways. Stakeholders need to understand why an agent acted a certain way, especially when decisions have legal, ethical, or financial consequences.

Camunda’s BPMN-driven orchestration inherently provides this transparency. Every agent interaction, every decision point, and every data handoff is visually modeled and logged. Teams can:

  • Trace the complete lineage of a decision from input to output
  • Generate audit trails automatically for compliance needs
  • Explain system behavior to both technical audiences and nontechnical stakeholders

In highly regulated industries like banking, healthcare, or insurance, this kind of transparency isn’t just a nice-to-have—it’s a nonnegotiable requirement. With Camunda, organizations can meet these standards confidently.

Centralized orchestration provides guardrails

Today, AI agents do not yet exhibit the level of trustworthiness, transparency, or security required to make a fully autonomous swarm of agents safe for enterprise contexts. In decentralized models, agents independently delegate tasks to one another, which can lead to a lack of oversight, unpredictable behavior, and challenges in ensuring compliance.

At Camunda, we believe that the decentralized agent pattern represents an exciting vision for the future. However, we see it as a pattern that is still years away from being viable for enterprise-grade AI systems.

For now, Camunda strongly supports centralized or manager patterns. With this approach, a single orchestrator (in Camunda’s case, the BPMN engine) manages when, why, and how agents act. This centralized orchestration ensures:

  • Full visibility into agent activities
  • Clear accountability for decision points
  • Easier implementation of security, compliance, and auditing mechanisms

Our philosophy is simple: while the future may hold promise for decentralized agent ecosystems, today’s enterprises need reliability, explainability, and control. Centralized orchestration, powered by Camunda, offers the safest and most effective path forward that you can utilize immediately, without sacrificing your flexibility for improvements and innovations in AI that may come in the future.

Enterprise-grade agentic orchestration is here!

By closely adhering to the industry best practices, Camunda delivers an enterprise-ready framework for building, deploying, and scaling AI agents. Our approach balances automation with control, speed with safety, and complexity with clarity.

We believe that AI agents should operate transparently, predictably, and with human-centric governance. With Camunda, enterprises gain not just a platform but a reliable foundation to scale AI responsibly and sustainably.

Want to learn more? Dive into our latest release announcement or check out our guide on building your first AI agent.

Stay tuned—the future of responsible, scalable AI is being built right now, and Camunda is at the forefront.

The post Building Trustworthy AI Agents: How Camunda Aligns with Industry Best Practices appeared first on Camunda.

How to Grow Commercial Revenue with Open Banking https://camunda.com/blog/2025/05/how-to-grow-commercial-revenue-with-open-banking/ Fri, 02 May 2025 02:29:38 +0000 https://camunda.com/?p=137117 Transform open banking from a checkbox exercise into a growth opportunity with process orchestration and automation.

Commercial banking clients face a stark reality. The volatility across sectors is creating a need for greater connectivity and access to liquidity. The rise of real-time payments and treasury as a service underpins these pressures. Yet fraudsters are becoming equally savvy with emerging technology such as generative AI. 

They can execute complex scams in minutes using AI. And with real-time payments, they can wire money across different accounts using synthetic identities. Meanwhile, legitimate cross-border transfers can still take weeks at certain institutions. This gap represents both a challenge and an opportunity for banks ready to transform their approach to commercial services.

In a recent webinar, Enrico Camerinelli, strategic advisor at Datos Insights, and Sathya Sethuraman, field CTO for banking and financial services at Camunda, explored how process orchestration and automation enables banks to bridge this divide and generate more value for commercial banking clients.

What commercial clients really want

As Camerinelli explains in the webinar, Datos Insights research reveals that nearly 90% of corporate treasurers consider it essential to run banking operations directly from their enterprise systems. This seamless integration creates significant orchestration challenges spanning technology, processes, and people, not just for the bank itself but also for commercial clients.

“Corporate users want to control inbound and outbound transactions directly from their enterprise system,” explains Camerinelli. “But this integration creates potential break points that require thoughtful orchestration of applications on both the enterprise and banking sides.” Half of corporate treasurers cited issues with integration, multiple screens, and a high dependence on Excel or external systems.

Without effective orchestration, this integration challenge creates risk. When Datos Insights asked why corporate treasurers partner with fintech firms instead of traditional banks, they cited better functionality (48%), more payment options (46%), better integration with internal systems (43%), and access to real-time payments (41%).

These proof points show the growing risk posed to incumbent banks that are slow to respond to their clients’ needs. If commercial clients don’t get what they need, they’ll pursue other options without thinking about loyalty or long-standing relationships.

Why API catalogs alone fail to deliver value

Many banks have responded to integration demands by building extensive API catalogs, but this approach creates new problems rather than solutions.

“The more API catalogs banks create, the more they risk widening the divide from corporate users,” notes Camerinelli. “These APIs exist, but corporate clients struggle to use them efficiently or build ROI from them.”

According to Datos Insights data, the challenges are multifaceted. The biggest hurdle is that the underlying process and operational changes are difficult to manage, with over 50% of respondents citing this obstacle. Cost (45%) and IT dependence (41%) were cited next, which is expected given the complexity of a modern enterprise.

Without a strategic framework in place, banks can often find their APIs and technology stacks grow out of control. As teams can often work in silos, they risk building the same integration multiple times. The lack of reuse and increased duplication isn’t just bad for productivity. It increases maintenance costs and adds to the risk of complexity, which ultimately increases the total cost of ownership. 

Instead, banks need to think differently. They need to think beyond APIs and offer customers what they want.

Focusing on API calls is missing the point

When discussing open banking, conversations should focus on the end users. Research shows that corporate clients weigh a matrix of important capabilities: they prioritize liquidity, convenience, security, and of course, yield. But each of these elements carries a different weight.

“Corporate treasurers want yield, but not at the cost of missing other priorities like sufficient liquidity, safety, and ease of use,” explains Camerinelli. “No treasurer will lose their job for missing a few extra basis points, but they certainly will if they compromise liquidity or security.”

Sethuraman adds that properly orchestrated open banking actually enhances security while delivering on these other priorities: “By opening your APIs strategically, you can embed your services into corporate ERPs. This delivers the functionality and capabilities they demand.”

Orchestrating and scaling AI capabilities in banking

Process orchestration creates the critical framework for effectively applying artificial intelligence across banking operations. This represents a shift from isolated AI applications to a cohesive approach for embedding capabilities spanning deterministic and non-deterministic processes.

Yet in banking, there needs to be a balance of both. Mission-critical processes need to function as designed. Every time. Otherwise, there’s a risk of disruption, regulatory action, or brand impact.

Agentic AI opportunities are still plentiful in the industry that’s practically led the revolution. Having the ability to blend both gives banks the freedom to apply the right technology at the right time instead of being limited.

Orchestration allows banks to speed up deploying new models or capabilities. It prevents AI hallucination risks while creating a governance framework that helps banks accelerate innovation without compromising safety.

For example, one bank monitoring for synthetic identity fraud implemented an agentic approach that allowed their fraud team to identify repetitive patterns in certain document types without disrupting their existing processes. They could test these patterns with real data, refine their models, and gradually deploy improvements. 

By essentially A/B testing fraud models, the bank was able to reduce false positives while simultaneously improving the detection of bad actors, something impossible with traditional, static approaches.

Building incrementally while maintaining vision

One of the most powerful advantages of process orchestration is enabling incremental modernization within a coherent strategic framework. Rather than waiting years for comprehensive implementations, banks can deliver value continuously from the start.

Sethuraman described how one multinational bank evolved from a narrow payment system implementation in one country to a 70-plus country platform vision through orchestration. Without process orchestration, they would have faced an impossible choice: wait years for a complete solution or implement disconnected point solutions.

“Process orchestration provides the flexibility to start small but think big,” he explained. “The bank didn’t wait five years to deliver value. They incrementally built their platform while maintaining a consistent vision.”

This approach requires business and IT collaboration to map the true vision, identify customer requirements, and build, buy, and blend what’s needed to achieve strategic goals. Process landscapes allow business stakeholders to create standardized process hierarchies and catalogs that IT can implement incrementally, preventing both analysis paralysis and technology sprawl.

When another audience member asked if orchestration just adds more complexity to overlapping systems, Camerinelli clarified: “Orchestration isn’t just connecting fragmented pieces. It ensures processes are properly reviewed and revised first. You’re not just automating existing problems. You’re resolving them within a coherent framework.”

The path forward

As open banking transforms commercial relationships, process orchestration provides the critical link between everything and enables rapid innovation.

When implemented thoughtfully, you can:

  • Create secure connections to client systems with appropriate permissions
  • Deliver enhanced functionality that meets rising expectations
  • Apply AI intelligently to improve experiences while reducing costs
  • Build incrementally toward a comprehensive, bank-wide transformation that reduces the total cost of ownership and enables faster scaling

Process orchestration transforms open banking from a checkbox exercise into a growth opportunity. It balances innovation with practical security measures that protect you and your clients while delivering the capabilities commercial clients actually value.

Ready to learn how process orchestration helps banks grow revenue with open banking? Watch the complete conversation between industry experts Enrico Camerinelli and Sathya Sethuraman to discover practical strategies for balancing innovation with security in open banking.

The post How to Grow Commercial Revenue with Open Banking appeared first on Camunda.

An Advanced Ad-Hoc Sub-Process Tutorial https://camunda.com/blog/2025/04/an-advanced-ad-hoc-sub-process-tutorial/ Fri, 25 Apr 2025 02:09:15 +0000 https://camunda.com/?p=135934 Learn about the new ad-hoc sub-process capabilities and how you can take advantage of them to create dynamic process flows.

The post An Advanced Ad-Hoc Sub-Process Tutorial appeared first on Camunda.

Ad-hoc sub-processes are a new feature in Camunda 8.7 that let you decide at runtime which task or tasks are performed during the execution of a process instance. The decision about which tasks to perform can come from a person, a rule, a microservice, or artificial intelligence.

In this example, you’ll decide what those tasks are, and later on you’ll be able to add more tasks as you work through the process. We’ll use decision model and notation (DMN) rules along with Friendly Enough Expression Language (FEEL) expressions to carry out the logic. Let’s get started!

Table of contents

  • SaaS or C8Run?
  • Download and install Camunda 8 Run
  • Download and install Camunda Desktop Modeler
  • Create a process using an ad-hoc sub-process
  • Add logic for sequential or parallel tasks
  • Create a form to add more tasks and to include a breadcrumb trail for visibility
  • Run the process!
  • You’ve built your ad-hoc sub-process!

SaaS or C8Run?

You can choose either Camunda SaaS or Self-Managed. Camunda provides a free 30-day SaaS trial; if you go the Self-Managed route, I recommend using Camunda 8 Run to simplify standing up a local environment on your computer.

The next sections provide links to assist you in installing Camunda 8 Run and Desktop Modeler. If you’ve already installed Camunda or are using SaaS, you can skip to Create a process using an ad-hoc sub-process.

If using SaaS, be sure to create an 8.7 cluster first.

Download and install Camunda 8 Run

For detailed instructions on how to download and install Camunda 8.7 Run, refer to our documentation. Once you have it installed and running, continue on your journey right back here!

Download and install Camunda Desktop Modeler

Download and install Desktop Modeler. You may need to open the Alternative downloads dropdown to find your desired installation.

Select the appropriate operating system and follow the instructions to start Modeler up. We’ll use Desktop Modeler to create and deploy applications to Camunda 8 Run a little bit later.

Create a process using an ad-hoc sub-process

Start by creating a process that will let you select from a number of tasks to be executed in the ad-hoc sub-process.

Open Modeler and create a new process diagram. This post uses SaaS and Web Modeler, but the same principles apply to Desktop Modeler. If the version isn’t already set correctly, be sure to switch to 8.7, as ad-hoc sub-processes are available in Camunda 8.7 and later versions.

ad-hoc sub-process 1

Next, add an ad-hoc sub-process after the start event; add a task and click the Change element icon.

ad-hoc sub-process 2

Your screen should look something like this. Notice the tilde (~) denoting the ad-hoc sub-process:

ad-hoc sub-process 3

Now add four User Tasks to the subprocess. We’ll label them Task A, Task B, Task C, and Task D. Be sure to update the ID for each of the tasks to Task_A, Task_B, Task_C, and Task_D. We’ll use these IDs later to determine which of the tasks to execute.

You can ignore the warnings indicating forms should be associated with User Tasks.

Add an end event after the ad-hoc sub-process as well.

ad-hoc sub-process 4

Add a collection (otherwise known as an array) to the ad-hoc sub-process that determines what task or tasks should be completed within it.

Put focus on the ad-hoc sub-process and add the variable activeElements to the Active elements collection property in the Properties panel of the ad-hoc sub-process. You’ll need to pass in this collection from the start of the process.

ad-hoc sub-process 5

Now you need to update the start event by giving it a name and adding a form to it. Put focus on the start event and enter a name. The name can be anything, but it’s always a best practice to name events. This post uses the name Select tasks.

Click the link icon above the start event and click Create new form.

ad-hoc sub-process 6

The form should take the name of the start event: Select tasks.

Now drag and drop a Tag list form element onto the Form Definition panel.

ad-hoc sub-process 7

The Tag list form element allows users to select from an array of items and pass it to Camunda as an array.

Next, update the Field label in the Tag list element to Select tasks and the Key to activeElements.

ad-hoc sub-process 8

By default, a Tag list uses Static options with one default option, and we’ll use that in this example. Add three more static options and set the Label and Value of each as follows:

  • Label: Task A, Value: Task_A
  • Label: Task B, Value: Task_B
  • Label: Task C, Value: Task_C
  • Label: Task D, Value: Task_D

ad-hoc sub-process 9
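With the Tag list configured, here’s a rough sketch of the kind of variable payload the start form will submit. This is purely illustrative (it shows the shape of the data, not Camunda’s exact wire format): the Tag list’s Key becomes a process variable holding the selected option values as an array.

```python
import json

# Illustrative only: the Tag list's Key (activeElements) becomes a process
# variable that holds the selected option values as an array
form_variables = {"activeElements": ["Task_A", "Task_B", "Task_C"]}

# Serialized, the process receives a JSON structure along these lines
print(json.dumps(form_variables))
```

This array is exactly what the ad-hoc sub-process will read from its Active elements collection property.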

Let’s run the process! Click Deploy and Run. For SaaS, be sure to switch back to the ad-hoc sub-process diagram to deploy and run it.

You can also shortcut this by simply running the process, as running a process also deploys it.

ad-hoc sub-process 10

You’ll receive a prompt asking which Camunda cluster to deploy to, but there is only one choice. For details, see the documentation on deploying and running processes from Desktop Modeler to Camunda 8 Run.

Upon running a process instance, you should see the screen we created for the start event. Select one or more tasks and submit the form. This post selects Task A, Task B, and Task C. Click Run to start the process.

ad-hoc sub-process 11

A pop-up gives you a link to Camunda’s administrative console, Operate. If you happen to miss the pop-up, you can always click the grid icon in the upper left corner in Web Modeler. Select Operate in the menu.

ad-hoc sub-process 12
ad-hoc sub-process 13

Check out the documentation to see how to get to Operate in Camunda 8 Run.

Once in Operate, you should see your process definition. You can navigate to the process instance by clicking through the hyperlinks. If you caught the link in Web Modeler, you should be brought to the process instance directly. You should see something like this:

ad-hoc sub-process 14

As you can see, the process was started, the ad-hoc sub-process was invoked, and Task A, Task B, and Task C are all active. This was accomplished by passing in the activeElements variable set by the Tag list element in the Start form.

You can switch to Tasklist to complete the tasks. The ad-hoc sub-process will not complete until all three tasks are completed. Navigate to a task by clicking on it in the process diagram panel and clicking Open Tasklist in the dialog box.

ad-hoc sub-process 15

You should see all three tasks in Tasklist. Complete each one by selecting it, clicking Assign to me, and then clicking Complete Task.

ad-hoc sub-process 16

Once all three tasks are complete, you can return to Operate and confirm the process has completed.

ad-hoc sub-process 17

Now that you understand the basics of ad-hoc sub-processes, let’s add more advanced behavior:

  • What if you wanted to be able to decide whether those tasks are to be completed in parallel or in sequence?
  • What if you wanted to add more tasks to the process as you execute them?
  • What if you wanted a breadcrumb trail of the tasks that have been completed or will be completed?

In the next section, we’ll add rules and expressions to handle these scenarios. If you get turned around in the pursuit of building this example, we’ll provide solutions to help out.

Add logic for sequential or parallel tasks

Now we’ll add logic to allow the person starting the process to decide whether to run the selected tasks in sequence or in parallel. We’ll add a radio button group, an index variable, FEEL expressions, and rules to handle this.

Go back to the Select tasks form in Web Modeler. Add a Radio group element to the form.

ad-hoc sub-process 18

Update the Radio group element: set the Label to Sequential or Parallel, then add Static options of Sequential (value sequential) and Parallel (value parallel). Update the Key to routingChoice and set the Default value to Sequential. Your screen should look something like this:

ad-hoc sub-process 19

Now you need to add some outputs to the Select tasks start event. Go back to the ad-hoc sub-process diagram and put focus on the Select tasks start event. Add the following Outputs, as shown below:

  • activeElements
  • index
  • tasksToExecute
ad-hoc sub-process 20

Next, update each with a FEEL expression. For activeElements, add the following expression:

{ "initialList": [],
  "appendedList": if routingChoice = "sequential" then append(initialList, tasksToExecute[1]) else tasksToExecute
}.appendedList

If you recall, activeElements is the collection of the task or tasks that are to be executed in the ad-hoc sub-process. Before, you simply passed the entire list, but now that you can choose between sequential or parallel behavior, you need to update the logic to account for that choice. If the choice is sequential, add the next task and that task only to activeElements.

If you’re not familiar with FEEL, let’s walk through what you’re seeing here. The expression starts by creating a list called initialList. We then create another variable called appendedList, either by appending the first task to initialList (if routingChoice is sequential) or by taking the entire list (if routingChoice is parallel). We then pass back the contents of appendedList, as denoted by .appendedList on the last line, and use it to populate activeElements.

ad-hoc sub-process 21
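If FEEL is still new to you, the same computation can be sketched in Python. This is an illustration only, not how Camunda evaluates the expression; the names mirror the FEEL context, and note that FEEL lists are 1-indexed while Python’s are 0-indexed.

```python
def compute_active_elements(routing_choice, tasks_to_execute):
    """Mirror the FEEL context: start from an empty list, then either append
    only the first task (sequential) or take the whole list (parallel)."""
    initial_list = []
    if routing_choice == "sequential":
        # FEEL lists are 1-indexed, so tasksToExecute[1] is the first task
        appended_list = initial_list + [tasks_to_execute[0]]
    else:
        appended_list = tasks_to_execute
    return appended_list

print(compute_active_elements("sequential", ["Task_A", "Task_B", "Task_C"]))
# ['Task_A']
print(compute_active_elements("parallel", ["Task_A", "Task_B", "Task_C"]))
# ['Task_A', 'Task_B', 'Task_C']
```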

The index variable will be used to track where you are in the process. Set it to 1:

ad-hoc sub-process 22

In tasksToExecute, you’ll hold all of the tasks, whether in sequence or in parallel, in a list which you can use to display where you are in a breadcrumb trail. Use the following expression:

{ "initialList": [],
  "appendedList": if routingChoice = "parallel" then insert before(initialList, 1, tasksToExecute) else tasksToExecute
}.appendedList

In a similar fashion to activeElements, this expression creates a list called initialList. If routingChoice is parallel, it inserts the entire set of tasks as a single nested list; if routingChoice is sequential, it passes the whole list through as-is.

ad-hoc sub-process 23
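The logic of this expression can also be sketched in Python, again purely as an illustration of what the FEEL insert before call produces:

```python
def compute_tasks_to_execute(routing_choice, tasks_to_execute):
    """Mirror the FEEL 'insert before' context: nest the whole selection as a
    single element for parallel runs, or keep the flat list for sequential."""
    initial_list = []
    if routing_choice == "parallel":
        # FEEL: insert before(initialList, 1, tasksToExecute) yields [[...]]
        appended_list = [tasks_to_execute] + initial_list
    else:
        appended_list = tasks_to_execute
    return appended_list

print(compute_tasks_to_execute("parallel", ["Task_A", "Task_B"]))
# [['Task_A', 'Task_B']]
print(compute_tasks_to_execute("sequential", ["Task_A", "Task_B"]))
# ['Task_A', 'Task_B']
```

The nested list is what later lets the rules distinguish a parallel group from a single sequential task.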

Your screen should look something like this:

ad-hoc sub-process 24

Now you need to increase the index after completion of the ad-hoc sub-process and add some logic to determine if you’re done. In the process diagram, put focus on the ad-hoc sub-process and add an Output called index. Then add an expression of index + 1. Your screen should look something like this:

ad-hoc sub-process 25

Add two more Outputs to the ad-hoc sub-process: interjectYesNo with a value of no and interjectTasks with a value of null. We’ll use these values later in a form inside the subprocess; setting them here resets the variables to their default values at the conclusion of each sub-process iteration:

ad-hoc sub-process 26

Next, we’ll add a business rule task and a gateway to the process. Drag and drop a generic task from the palette on the left and change it to a Business rule task. Then drag and drop an Exclusive gateway from the palette after the Business rule task. You’ll probably need to move the End event to accommodate these items.

Your screen should look like this (you can see the palette on the left):

ad-hoc sub-process 27

Let’s create a rule set. Put focus on the Business rule task and click the link icon in the context pad that appears.

ad-hoc sub-process 28

In the dialog box that appears, click Create DMN diagram.

ad-hoc sub-process 29

In the decision requirements diagram (DRD) diagram that appears, set the Diagram and Decision names to Set next task.

ad-hoc sub-process 30

The names aren’t critical, but they should be descriptive.

Let’s write some rules! Click the blue list icon in the upper left corner of the Set next task decision table to open the DMN editor.

In the DMN editor, you’ll see a split-screen view. On the left is the DRD diagram with the Set next task decision table. On the right is the DMN editor where you can add and edit rules.

ad-hoc sub-process 31

First things first, update the Hit policy to First to keep things simple. With this policy, the decision table evaluates rules from top to bottom and returns the output of the first rule that matches. Check out the documentation for more information regarding hit policies.

ad-hoc sub-process 32
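Conceptually, the First hit policy works like this small Python sketch. It uses a made-up discount table for illustration; it is not Camunda’s DMN engine, just the top-to-bottom, first-match idea.

```python
def first_hit(rows, amount):
    """DMN 'First' hit policy: scan rows top to bottom and return the output
    of the first row whose condition matches; None if nothing matches."""
    for condition, output in rows:
        if condition(amount):
            return output
    return None

# Hypothetical discount table, evaluated top to bottom
rows = [
    (lambda amount: amount > 1000, "gold"),
    (lambda amount: amount > 100, "silver"),
    (lambda amount: True, "standard"),  # catch-all, like the "-" input
]

print(first_hit(rows, 500))  # 'silver': the first matching row wins
```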

Let’s add some rules. In the DMN editor, you can add rule rows by clicking on the blue plus icon in the lower left. Add two rows to the decision table.

ad-hoc sub-process 33

Next, double click Input to open the expression editor. Your screen should look something like this:

ad-hoc sub-process 34

In this screen, enter the following expression: tasksToExecute[index]. Select Any for the Type. Your screen should look like this:

ad-hoc sub-process 35

Just to recap, you’ve incremented the index by one. Here you retrieve the next task or tasks, and now you’ll write rules to determine what to do based on what is retrieved.

In the first row input, enter the following FEEL expression: count(tasksToExecute[index]) > 1.

This checks whether the count of the tasksToExecute list at the new index is greater than one, which indicates parallel tasks. For now it’s not important, but it will be later. Next, double-click Output to open the expression editor.

ad-hoc sub-process 36

For Output name, enter activeElements, and for the Type, enter Any.

ad-hoc sub-process 37

In the first rule row output, enter the expression tasksToExecute[index].

If the count is greater than one, this means that there are parallel tasks to be executed next. All that’s needed is to pass on these tasks. The expression above does just that. You may also want to put in an annotation to remind yourself of the logic.

For example, you can enter Next set of tasks are parallel for the annotation.

Your screen should look like this:

ad-hoc sub-process 38

Next, add logic to the second row. Leave the dash (-) in the input of the second row; it acts as the "otherwise" catch-all. Enter the following expression for the output of the second row:

{ "initialArray":[],  "appendedList": append (initialArray, tasksToExecute[index]) }.appendedList

What this does is create an empty list, add the next single task to be executed to the empty list, and then populate activeElements. You may want to add an annotation here as well: Next task is sequential.

Your screen should look like this:

ad-hoc sub-process 39
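Taken together, the two rule rows behave something like this illustrative Python sketch (an analogy only; FEEL lists are 1-based, so the index is adjusted for Python):

```python
# Illustrative Python sketch of the "Set next task" decision logic
# under the First hit policy. Not Camunda code.

def set_next_task(tasks_to_execute, index):
    entry = tasks_to_execute[index - 1]   # FEEL: tasksToExecute[index]
    # Rule 1: more than one task at this index means a parallel set;
    # pass it through unchanged.
    if isinstance(entry, list) and len(entry) > 1:
        return entry
    # Rule 2 (the "-" row): wrap the single task in a fresh list, like
    # append([], tasksToExecute[index]) in FEEL.
    return [entry]

print(set_next_task(["Task_A", ["Task_B", "Task_C"]], 2))  # ['Task_B', 'Task_C']
print(set_next_task(["Task_A", ["Task_B", "Task_C"]], 1))  # ['Task_A']
```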

Now you need to add logic to the gateway to either end the process or to loop back to the ad-hoc sub-process. Go back to the ad-hoc sub-process in your project.

You might notice this in your process:

ad-hoc sub-process 40

Add a Result variable of activeElements and add a name of Set next task. Your screen should look like this:

ad-hoc sub-process 41

Add a name to the Exclusive gateway. Let’s use All tasks completed? Also, add the name Yes on the sequence flow from the gateway to the end event. Your screen should look like this:

ad-hoc sub-process 42

Change that sequence flow to a Default flow. Put focus on the sequence flow, click the Change element icon, and select Default flow.

ad-hoc sub-process 43

Notice the difference in the sequence flow now?

ad-hoc sub-process 44

Next, add a sequence flow from the All tasks completed? gateway back to the ad-hoc sub-process. Put focus on the gateway and click the arrow icon in the context pad.

ad-hoc sub-process 45

Draw the sequence flow back to the ad-hoc sub-process. You may need to adjust the sequence path for better clarity in the diagram.

ad-hoc sub-process 46

Add the name No to the sequence flow. Add the following Condition expression: activeElements[1] != null.

Your screen should look like this:

ad-hoc sub-process 47
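To make the gateway logic concrete, here is a small Python analogy (an illustrative assumption, not process code): the process loops back while the decision table produced a real next task, and takes the default flow to the end event otherwise.

```python
# Python sketch of the "All tasks completed?" gateway decision.
# FEEL's activeElements[1] != null becomes a check on the first element.

def next_step(active_elements):
    if active_elements and active_elements[0] is not None:
        return "loop back"   # "No" flow: activeElements[1] != null
    return "end"             # default "Yes" flow to the end event

print(next_step(["Task_B"]))  # loop back
print(next_step([None]))      # end
```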

Before running this process again, you need to deploy the Set next task rule. Switch over to the Set next task DMN and click Deploy.

ad-hoc sub-process 48

One update is needed in the starting form. Open the Select tasks form and go to the Select tasks form element. Change the Key from activeElements to tasksToExecute.

ad-hoc sub-process 49

If you recall, the outputs you defined in the Start event will add activeElements.

Go back to the ad-hoc sub-process diagram and click Run. This time, select Task A and Task B and leave the routing choice set to Sequential. Click Run.

ad-hoc sub-process 50

In your Tasklist, you should only see Task A. Claim and complete the task. Wait for a moment, and you should then see Task B in Tasklist. Claim and complete the task.

Now, if you go to Operate and view the completed process instance, it should look something like this:

ad-hoc sub-process 51

Start another ad-hoc sub-process but this time select a number of tasks and choose Parallel. Did you see the tasks execute in parallel? You should have!

In the next section, you’ll add a form to the tasks in the ad-hoc sub-process to allow users to add more parallel and sequential tasks during process execution. You’ll also add a breadcrumb trail to the form to provide users visibility into the tasks that have been completed and tasks that are yet to be completed.

Create a form to add more tasks and to include a breadcrumb trail for visibility

Go back to Web Modeler and make a duplicate of the start form. To do this, click the three-dot icon to the right of the form entry and click Duplicate.

ad-hoc sub-process 52

While you could use the same form for both the start of the process and task completion, it’ll be easier to make changes without being concerned about breaking other things in the short term. Name this duplicate Task completion.

ad-hoc sub-process 53

Click the Select tasks form element and change the Key to interjectTasks.

ad-hoc sub-process 54

We’ll add logic later to add to the tasksToExecute variable.

Next, add a condition to the form elements to show or hide them based on a variable. You’ll add this variable, based on a radio button group, soon. In the Select tasks form element, open the Condition property and enter the expression interjectYesNo = "no".

Your screen should look something like this:

ad-hoc sub-process 55

Repeat the same for the Sequential or parallel form element:

ad-hoc sub-process 56

You could just as easily put these elements into a container form element and set the condition property in the container instead, rather than setting the condition in each of the elements.

Next, add a Radio group to the form, above the Select tasks element. Set Field label to Interject any tasks?, Key to interjectYesNo, Static options to Yes and No with values of yes and no. Set Default value to No. Your screen should look like this:

ad-hoc sub-process 57

If you’ve done everything correctly, you should notice that the fields Select tasks and Sequential or parallel do not appear in the Form Preview pane. Given that No is selected in Interject any tasks?, this is the correct behavior. You should see both the Select tasks and Sequential or parallel fields if you select Yes in the Interject any tasks? radio group in Form Preview.

Next, you’ll add HTML to show a breadcrumb trail of tasks at the top of the form. Drag and drop an HTML view form element to the top of the form.

ad-hoc sub-process 58

Copy and paste the following into the Content property of the HTML view:

<div>
<style>
  .breadcrumb li {
    display: inline; /* Inline for horizontal list */
    margin-right: 5px;
  }

  .breadcrumb li:not(:last-child)::after {
    content: " > "; /* Insert " > " after all items except the last */
    padding-left: 5px;
  }

  .breadcrumb li:nth-child({{currentTask}}){ 
    font-weight: bold; /* Bold the current task */
    color: green;
  }
</style>
<ul class="breadcrumb">
    {{#loop breadcrumbTrail}}
      <li>{{this}}</li>
    {{/loop}}
</ul>
</div>

Essentially this creates a breadcrumb trail using an HTML unordered list along with some CSS styling. You’ll need to provide two inputs, currentTask and breadcrumbTrail, which we’ll define next.

Your screen should look something like this:

ad-hoc sub-process 59

Let’s test the HTML view component. Copy and paste this into the Form Input pane:

{"breadcrumbTrail":["Task_A","Task_B","Task_C & Task_D"], "currentTask":2}

Your screen should look something like this (note that Task B is highlighted):

ad-hoc sub-process 60

Feel free to experiment with the CSS.

Go back to the ad-hoc sub-process diagram. You need to add inputs to the ad-hoc sub-process to feed this view. Be sure to put focus on the ad-hoc sub-process. Add an input called currentTask and set the value to index.

ad-hoc sub-process 61

Next, add an input called breadcrumbTrail and enter the following expression:

{  
  "build": [],
  parallelTasksFunction: function(tasks) string join(tasks, " & ") ,
  checkTaskFunction: function(task) if count(task) > 1 then parallelTasksFunction(task) else task,   
  "breadcrumbTrail": for task in tasksToExecute return concatenate (build, checkTaskFunction(task)),
  "breadcrumbTrail": flatten(breadcrumbTrail)
}.breadcrumbTrail

This expression takes the tasksToExecute variable and turns it into a flat list of labels for the breadcrumb trail. It creates an empty list, build, then defines two functions:

  • parallelTasksFunction, which joins a set of parallel tasks into a single string
  • checkTaskFunction, which checks whether a list item is itself an array

If the list item is an array, checkTaskFunction calls parallelTasksFunction. Otherwise it just returns the task. The loop in breadcrumbTrail adds each result to the build list, which is finally flattened and returned for use by the HTML view to show the breadcrumb trail.

Your screen should look something like this:

ad-hoc sub-process 62
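As a rough Python equivalent of the FEEL expression above (an analogy, not FEEL): parallel sets are joined with " & ", and single tasks pass through unchanged.

```python
# Illustrative Python version of the breadcrumbTrail logic. Not Camunda code.

def build_breadcrumb(tasks_to_execute):
    trail = []
    for task in tasks_to_execute:
        if isinstance(task, list) and len(task) > 1:
            trail.append(" & ".join(task))   # FEEL: string join(tasks, " & ")
        else:
            trail.append(task[0] if isinstance(task, list) else task)
    return trail

print(build_breadcrumb(["Task_A", "Task_B", ["Task_C", "Task_D"]]))
# ['Task_A', 'Task_B', 'Task_C & Task_D']
```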

Next, link the four tasks in the ad-hoc sub-process to the Task Completion form.

ad-hoc sub-process 63

One last thing you need to do is add a rule set in the ad-hoc sub-process. This will add tasks to the tasksToExecute variable if users opt to add tasks as they complete them.

Add a Business rule task to the ad-hoc sub-process, add an exclusive gateway join, then add sequence flows from the tasks to the exclusive gateway join. Finally, add a sequence flow from the exclusive gateway join to the business rule task.

It might be easier to just view the next screenshot:

ad-hoc sub-process 64

Every time a task completes, it will also invoke the rule that you’re about to author.

Click the Business rule task and give it the name Update list of tasks. Click the link icon in the context pad, then click Create DMN diagram.

ad-hoc sub-process 65

You should see the DRD screen pop up. Click the blue list icon in the upper left corner of the Update list of tasks decision table.

ad-hoc sub-process 66

In the DMN editor, update the Hit Policy to First. Double-click Input and enter the following expression: interjectYesNo.

Optionally you can enter a label for the input, but we’ll leave it blank for now.

ad-hoc sub-process 67

Add another input to the table by clicking the plus sign button next to interjectYesNo.

ad-hoc sub-process 68

Once again double-click the second Input to open the expression editor and enter the following expression: routingChoice.

Double click Output to open the expression editor and enter the following: tasksToExecute.

ad-hoc sub-process 69

Just to recap—you’ll use the variables interjectYesNo and routingChoice from the form to determine what to do with tasksToExecute.

Let’s add the rules. Here is the matrix of rules if you don’t want to enter them manually:

interjectYesNo | routingChoice | tasksToExecute (output)
"no" | - | tasksToExecute
"yes" | "sequential" | concatenate(tasksToExecute, interjectTasks)
"yes" | "parallel" | if count(interjectTasks) > 1 then append(tasksToExecute, interjectTasks) else concatenate(tasksToExecute, interjectTasks)

Your screen should look something like this:

ad-hoc sub-process 70

There are some differences between concatenate and append in FEEL in this context. Here, concatenate adds the tasks as individual elements to the tasksToExecute list. Since the second argument of append is of type Any, append adds the entire object; in this case, that’s a list that needs to be added in its entirety to tasksToExecute. It’s a subtle but important distinction.

You’ll need an additional check of the count of interjectTasks in row 3 of the Output, in the event the user selects Parallel but only selects one task. In that case, it’s treated like a sequential addition.
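The distinction is easy to see in a Python analogy (using example values, an assumption rather than real process data):

```python
# Python analogy for FEEL concatenate vs append in this rule.

tasks_to_execute = ["Task_A", "Task_B"]
interject_tasks = ["Task_C", "Task_D"]

# concatenate(tasksToExecute, interjectTasks): merges element by element,
# so the new tasks are executed sequentially.
sequential = tasks_to_execute + interject_tasks
print(sequential)  # ['Task_A', 'Task_B', 'Task_C', 'Task_D']

# append(tasksToExecute, interjectTasks): adds the whole list as a single
# element, so the new tasks form one parallel set.
parallel = tasks_to_execute + [interject_tasks]
print(parallel)    # ['Task_A', 'Task_B', ['Task_C', 'Task_D']]
```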

Don’t forget to click Deploy as the rule will not be automatically deployed to the server upon the execution of the process.

ad-hoc sub-process 71

Go back to ad-hoc sub-process and add the Result variable tasksToExecute.

ad-hoc sub-process 72

Run the process!

The moment of truth has arrived! Be sure to select the cluster you’ve created for running the process. Select Task A and Task B in the form. The form will default to a routing choice of sequential. Click Run. You should be presented with the start screen upon running the process.

ad-hoc sub-process 73

Check Operate, and your process instance should look something like this:

ad-hoc sub-process 74

Now check Tasklist and open Task A. It should look something like this:

ad-hoc sub-process 75

Click Assign to me to assign yourself the task. Select Yes to interject tasks. Next, select Task C and Task D and Parallel.

ad-hoc sub-process 76

Complete the task. You should see Task B appear in Tasklist. Select it and notice how Task C and Task D have been added in parallel to be executed after Task B.

Also note the current task highlighted in green. Assign yourself the task and complete it.

ad-hoc sub-process 77

You should now see Task C and Task D in Tasklist.

ad-hoc sub-process 78

Assign yourself Task C, interject Task A sequentially, and complete the task. You may need to clear out previous selections.

ad-hoc sub-process 79

Complete Task D without adding any more tasks. You’ll notice that Task A has not been picked up yet in the breadcrumb trail.

ad-hoc sub-process 80

Task A should appear in Tasklist:

ad-hoc sub-process 81

Notice the breadcrumb trail updates. Assign it to yourself and complete the task. Check Operate to ensure that the process has been completed.

ad-hoc sub-process 82

You can view the complete execution of the process in Instance History in the lower left pane.

You’ve built your ad-hoc sub-process!

Congratulations on completing the build and taking advantage of the power of ad-hoc sub-processes! Keep in mind that you can replace yourself in deciding which tasks to add, if any, by using rules, microservices, or even artificial intelligence.

Want to start working with AI agents in your ad-hoc sub-processes right now? Check out this guide for how to build an AI agent with Camunda.

Stay tuned for even more on how to make the most of this exciting new capability.

The post An Advanced Ad-Hoc Sub-Process Tutorial appeared first on Camunda.
