AI Archives | Camunda https://camunda.com/blog/tag/ai/ Workflow and Decision Automation Platform Thu, 26 Jun 2025 20:15:39 +0000 en-US hourly 1 https://camunda.com/wp-content/uploads/2022/02/Secondary-Logo_Rounded-Black-150x150.png AI Archives | Camunda https://camunda.com/blog/tag/ai/ 32 32 Ensuring Responsible AI at Scale: Camunda’s Role in Governance and Control https://camunda.com/blog/2025/06/responsible-ai-at-scale-camunda-governance-and-control/ Tue, 24 Jun 2025 19:51:13 +0000 https://camunda.com/?p=142479 Camunda enables effective AI governance by acting as an operational backbone, so you can integrate, orchestrate and monitor AI usage in your processes.

The post Ensuring Responsible AI at Scale: Camunda’s Role in Governance and Control appeared first on Camunda.

]]>
If your organization is adding generative AI into its processes, you’ve probably hit the same wall as everyone else: “How do we govern this responsibly?”

It’s one thing to get a large language model (LLM) to generate a summary, write an email, or classify a support ticket. It’s another entirely to make sure that use of AI fits your company’s legal, ethical, operational, and technical policies. That’s where governance comes in—and frankly, it’s where most organizations are struggling to find their footing.

The challenge isn’t just technical. Sure, you need to worry about prompt injection attacks, hallucinations, and model drift. But you also need to think about compliance audits, cost control, human oversight, and the dreaded question from your CEO: “Can you explain why the AI made that decision?” These aren’t abstract concerns anymore—they’re real business risks that can derail AI initiatives faster than you can say “responsible deployment.”

That’s where Camunda comes into the picture. We’re not an AI governance platform in the abstract sense. We don’t decide your policies for you, and we’re not going to tell you whether your use case is ethical or compliant. But what we do provide is something absolutely essential: a controlled environment to integrate, orchestrate, and monitor AI usage inside your processes, complete with the guardrails and visibility that support enterprise-grade governance.

Think of it this way: if AI governance is about making sure your organization uses AI responsibly, then Camunda is the operational backbone that makes those policies actually enforceable in production systems. We’re the difference between having a beautiful AI ethics document sitting in a SharePoint folder somewhere and actually implementing those principles in your day-to-day business operations.

This post will explore how Camunda fits into the broader picture of AI governance, diving into specific features—from agent orchestration to prompt tracking—that help you operationalize your policies and build trustworthy, compliant automations.

What is AI governance, and where does Camunda fit?

Before we dive into the technical details, it’s worth stepping back and talking about what AI governance actually means. The term gets thrown around a lot, but in practice, it covers everything from high-level ethical principles to nitty-gritty technical controls.

We’re framing this discussion around the “AI Governance Framework” provided by ai-governance.eu, which defines a comprehensive model for responsible AI oversight in enterprise and public-sector settings. The framework covers organizational structures, procedural requirements, legal compliance, and technical implementations.

Ai-governance-eu

Camunda plays a vital role in many areas of governance, but none more so than the “Technical Controls (TeC)” category.. This is where the rubber meets the road—where your governance policies get translated into actual system behaviors. Technical controls include enforcing process-level constraints on AI use, ensuring explainability and traceability of AI decisions, supporting human oversight and fallback mechanisms, and monitoring inputs, outputs, and usage metrics across your entire AI ecosystem.

Here’s the crucial point: these technical controls don’t replace governance policies—they ensure that those policies are actually followed in production systems, rather than just existing as aspirational documents that nobody reads.

1. Fine-grained control over how AI is used

The first step to responsible AI isn’t choosing the right model or writing the perfect prompt—it’s being deliberate about when, where, and how AI is used in the first place. This sounds obvious, but many organizations end up with AI sprawl, where different teams spin up AI integrations without any coordinated approach to governance.

With Camunda, AI usage is modeled explicitly in BPMN (Business Process Model and Notation), which means every AI interaction is part of a documented, versioned, and auditable process flow.

Agentic-ai-camunda

You can design processes that use Service Tasks to call out to LLMs or other AI services, but only under specific conditions and with explicit input validation. User Tasks can involve human reviewers before or after an AI step, ensuring critical decisions always have human oversight. Decision Tables (DMN) can evaluate whether AI is actually needed based on specific inputs or context. Error events and boundary events capture and handle failed or ambiguous AI responses, building governance directly into your process logic.

Because the tasks executed by Camunda’s AI agents are defined with BPMN, those tasks can be deterministic workflows themselves, ensuring that, on a granular level, execution is still predictable.

This level of orchestration lets you inject AI into your business processes on your own terms, rather than letting the AI system dictate behavior. You’re not just calling an API and hoping for the best—you’re designing a controlled environment where AI operates within explicit boundaries.

Here’s a concrete example: if you’re processing insurance claims and want to use AI to classify them as high, medium, or low priority, you can insert a user task to verify all “high priority” classifications before they get routed to your fraud investigation team. You can also add decision logic that automatically escalates claims above a certain dollar amount, regardless of what the AI thinks. This way, you keep humans in the loop for critical decisions without slowing down routine processing.

2. Your models, your infrastructure, your rules

One of the most frequent concerns about enterprise AI adoption centers on data privacy and vendor risk. Many organizations have strict requirements that no customer data, internal business logic, or proprietary context can be sent to third-party APIs or cloud-hosted LLMs.

Camunda’s approach to agentic orchestration supports complete model flexibility without sacrificing governance capabilities. You can use OpenAI, Anthropic, Mistral, Hugging Face, or any provider you choose, and, starting with Camunda 8.8 (coming in October 2025), you can also route calls to self-hosted LLMs running on your own infrastructure. Whether you’re running LLaMA 3 on-premises, using Ollama for local development, or connecting to a private cloud deployment, Camunda treats all of these as different endpoints in your process orchestration.

There’s no “magic” behind our AI integration—we provide open, composable connectors and SDKs that integrate with standard AI frameworks like LangChain. You control the routing logic, prompt templates, authentication mechanisms, and access credentials. Most importantly, your data stays exactly where you want it.

For example, a financial services provider might route customer account inquiries to a cloud-hosted model, but keep transaction details and personal financial information on-premises. With Camunda, you can model this routing logic explicitly using decision tables to determine which endpoint to use based on content and context.

3. Design AI tasks with guardrails: Preventing prompt injection and hallucinations

Prompt injection isn’t just a theoretical attack—it’s a real risk that can have serious business consequences. Any time an AI model processes user-generated input, there’s potential for malicious content to manipulate the model’s behavior in unintended ways.

Camunda helps mitigate these risks by providing structured approaches to AI integration. All data can be validated and sanitized before it is used in a prompt, preventing raw input from reaching the models. Prompts are designed using FEEL (Friendly Enough Expression Language), allowing prompts to be flexible and dynamic. This centralized prompt design means prompts become part of your process documentation rather than buried in application code. Camunda’s powerful execution listeners can be utilized to analyze and sanitize the prompt before it is sent to the agent.

Ai-prompt-guardrails-camunda

Decision tables provide another layer of protection by filtering or flagging suspicious content before it reaches the model. You can build rules that automatically escalate requests containing certain keywords or patterns to human review.

When you build AI tasks with Camunda’s orchestration engine, you create a clear separation between the “business logic” of your process and the “creative output” of the model. This separation makes it much easier to test different scenarios, trace unexpected behaviors, and implement corrective measures. Camunda’s AI Task Agent supports guardrails, such as limiting the number of iterations it can perform, or the maximum number of tokens per request to help control costs.

4. Monitoring and auditing AI activity

You can’t govern what you can’t see. This might sound obvious, but many organizations deploy AI systems with minimal visibility into how they’re actually being used in production.

Optimize gives you comprehensive visibility into AI usage across all your processes. You can track the number of AI calls made per process or task, token usage (and therefore associated costs), response times and failure rates, and confidence scores or output quality metrics when available from your models.

This monitoring data supports multiple governance objectives. For cost control, you can spot overuse patterns and identify inefficient prompt chains. For policy compliance, you can prove that AI steps were reviewed when required. For performance tuning, you can compare model outputs over time or across different vendors to optimize both cost and quality.

You can build custom dashboards that break down AI usage by business unit, region, or product line, making AI usage measurable, accountable, and auditable. When auditors ask about your AI governance, you can show them actual data rather than just policy documents.

5. Multi-agent systems, modeled with guardrails

The future of enterprise AI isn’t just about better individual models—it’s about creating systems where multiple AI agents work together to achieve complex business goals.

Camunda’s agentic orchestration lets you design and govern these complex systems with the same rigor you’d apply to any other business process. Each agent—whether AI, human expert, or traditional software—gets modeled as a task within a larger orchestration flow. The platform defines how agents collaborate, hand off work, escalate problems, and recover from failures.

Ai-multiagent-guardrails-camunda

You can design parallel agent workflows with explicit coordination logic, conditional execution paths based on agent outputs, and human involvement at any point where governance requires it. Composable confidence checks ensure work only proceeds when all agents meet minimum quality thresholds.

Here’s a concrete example: in a legal document review process, one AI agent extracts key clauses, another summarizes the document, and a human attorney provides final review. Camunda coordinates these steps, tracks outcomes, and escalates if confidence scores are low or agents disagree on their assessments.

6. Enabling explainability and traceability

One of the most challenging aspects of AI governance is explainability. When an AI system makes a decision that affects your business or customers, stakeholders want to understand how and why that decision was made—and this is often a legal requirement in regulated industries.

Modern AI models are probabilistic systems that don’t provide neat explanations for their outputs. But Camunda addresses this by creating comprehensive audit trails that capture the context and process around every AI interaction.

For every AI step, Camunda persists the inputs provided to the model, outputs generated, and all prompt metadata. Each interaction gets correlated with the exact process instance that triggered it, creating a clear chain of causation. Version control for models, prompts, and orchestration logic means you can trace any historical decision back to the exact system configuration that was in place when it was made.

Through REST APIs, event streams, and Optimize reports, you can answer complex questions about AI usage patterns and decision outcomes. When regulators ask about specific decisions, you can provide comprehensive answers about what data was used, what models were involved, what confidence levels were reported, and whether human review occurred.

Camunda as a cornerstone of process-level AI governance

AI governance is a team sport that requires coordination across multiple organizational functions. You need clear policies, compliance frameworks, technical implementation, and ongoing oversight. No single platform can address all requirements, nor should it try to.

What Camunda brings to this collaborative effort is operational enforcement of governance policies at the process level. We’re not here to define your ethics policies—we provide the technical infrastructure to ensure that whatever policies you establish actually get implemented and enforced in your production AI systems.

Camunda gives you fine-grained control over exactly how AI gets used in your business processes, complete flexibility in model and hosting choices, robust orchestration of human-in-the-loop processes, comprehensive monitoring and auditing capabilities, protection against AI-specific risks like prompt injection, and support for cost tracking and usage visibility.

You bring the policies, compliance frameworks, and business requirements—Camunda helps you enforce them at runtime, at scale, and with the visibility and control that enterprise governance demands.

If you’re looking for a way to govern AI at the process layer—to bridge the gap between governance policy and operational reality—Camunda offers the controls, insights, and flexibility you need to do it safely, confidently, and sustainably as your AI initiatives grow and evolve.

Learn more

Looking to get started today? Download our ultimate guide to AI-powered process orchestration and automation to discover how to start effectively implementing AI into your business processes quickly.

The post Ensuring Responsible AI at Scale: Camunda’s Role in Governance and Control appeared first on Camunda.

]]>
Reinventing Fraud Detection: NatWest’s Journey to Operationalize AI with Camunda https://camunda.com/blog/2025/06/reinventing-fraud-detection-natwests-journey-to-operationalize-ai-with-camunda/ Mon, 23 Jun 2025 19:40:01 +0000 https://camunda.com/?p=142332 Learn how NatWest used Camunda to put AI into action to help them improve customer experiences, increase employee productivity and detect fraud faster.

The post Reinventing Fraud Detection: NatWest’s Journey to Operationalize AI with Camunda appeared first on Camunda.

]]>
Fraud has long been a problem for financial institutions, and the growing threat of AI-supported fraud attempts is not making things any easier. On the other hand, AI can also be operationalized in support of fraud detection and to improve customer experiences, and done right this can be a major differentiator for global banks.

That’s what NatWest Bank, a leading retail and commercial bank in the UK, set out to do with the help of Camunda. Joanne Barry, Head of Technology Fraud Prevention COE at NatWest, led off her recent CamundaCon presentation by quoting the CEO of NatWest, Paul Thwaite, whose stated goal is to “build a simpler, more integrated and technology-driven bank that is capable of even greater impact.” One of the most powerful ways to do that in 2025 is by operationalizing AI.

A complex environment

Milesh Chudasama, Digital Transformation Director at NatWest, then took the stage to set the scene. The Fraud Center of Excellence group at NatWest includes over 800 fraud center agents using an average of 14 applications per call, with over 60 tools available to them across a range of platforms.

As you can imagine, this is a complex setup and it takes a long time to train people well to provide a good customer experience and to be in full compliance with regulations. And when they leave, all that painstaking training goes with them.

The lack of standardization

Inconsistency from a lack of standardization is a big problem. This not only means that customers are not served in the same way every call, but it also means that processes are often recreated across multiple platforms, duplicating effort.

Milesh explained that a lack of orchestration is holding them back, and that “what we want to do is create a good solid process orchestration engine.” This will enable a headless architecture that they can replicate across as many platforms as they need to, saving enormous amounts of time and effort.

Unlocking the value of AI for fraud detection

NatWest is looking for AI to drive specific business outcomes. Joanne explained, “We know AI will transform our ability to deliver” these key results:

  1. Increase customer engagement
  2. Maximize agility
  3. Drive operating leverage

To drive those outcomes, they have outlined a series of initiatives where AI can produce valuable results. As Milesh put it, the goal is to “use AI in the right way, and not let it run free… and do some stuff that can damage your reputation.”

Analyzing and digesting large amounts of data, along with writing and testing code, were powerful ways that the team is able to use AI to speed up their work and reduce their time to market. AI can also build on existing rules-based systems for something like anomaly detection, helping them detect patterns quickly and be more proactive, rather than reactive.

One exciting initiative included using AI within Camunda to move “the creation of initial BPMN diagrams to the left, so that the business generates it themselves,” which will result in much tighter business-IT alignment and accelerate the way they build processes to begin with. AI will also help the team reduce the number of applications they need to touch to solve a problem, improving the way a process is “governed and managed by a human and an AI agent” or by straight-through processing.

Disrupt and transform

AI clearly has the power to both disrupt and transform banking and financial services. The team at NatWest has identified a number of opportunities.

  • Customer experience: AI can respond faster and automate more cases, improving customer satisfaction.
  • Engineering productivity: AI tools can help teams write code or design BPMN diagrams much more rapidly than before, speeding up production and making collaboration easier.
  • Process simplification: Features like Optimize give them data points so they can understand what works and what doesn’t, and where AI can be most effective.
  • Fraud/Financial crime: Understanding patterns and which things to react to and which not to, which is critical with the reports of fraud growing daily.

One key Milesh highlighted was the ability to determine “how far you want the AI to service a customer before you break out from the AI” and hand off to a human. This is a critical step in ensuring a good customer experience and avoiding the problems of AI attempting to navigate an issue that is too complex for it.

What’s next?

Milesh noted that what he really wants his team to do is make sure the customer gets the same journey no matter how they interact with the company. “Not omnichannel,” he observed, “let’s call it optichannel—the right customer to the right journey to get the right output.” Using tools like Optimize to simplify and automate processes, and orchestrating end-to-end with Camunda to better understand processes and remove human touchpoints where it’s most effective, will help them get there faster.

See the full presentation and more from CamundaCon Amsterdam 2025

You can check out the full presentation from NatWest here or watch it below, and for more be sure to check out all the recordings from CamundaCon Amsterdam 2025.

If you liked this and wish you’d seen it live, don’t miss your chance to join us at the next CamundaCon! Register now for CamundaCon New York 2025.

The post Reinventing Fraud Detection: NatWest’s Journey to Operationalize AI with Camunda appeared first on Camunda.

]]>
Camunda Alpha Release for June 2025 https://camunda.com/blog/2025/06/camunda-alpha-release-june-2025/ Tue, 10 Jun 2025 10:00:00 +0000 https://camunda.com/?p=141378 We're excited to announce the June 2025 alpha release of Camunda. Check out what's new, including new capabilities like the FEEL Copilot, agentic orchestration connectors, and improved migration tooling.

The post Camunda Alpha Release for June 2025 appeared first on Camunda.

]]>
We’re excited to share that the latest alpha of Camunda will be live very soon and you will soon see it available for download. For our SaaS customers who are up to date, you may have already noticed some of these features as we make them available for you automatically.

Update: The alpha release is officially live for all who wish to download.

Below is a summary of everything new in Camunda for this June with the 8.8-alpha5 release.

This blog is organized using the following product house, with E2E Process Orchestration at the foundation and our product components represented by the building bricks. This organization allows us to organize the components to highlight how we believe Camunda builds the best infrastructure for your processes, with a strong foundation of orchestration and AI thoughtfully infused throughout.

Product-house

E2E Process Orchestration

This section will update you on the components that make up Camunda’s foundation, including the underlying engine, platform operations, security, and API.

Zeebe

The Zeebe team focused on big fixes for this release.

Operate

For this release, our Operate engineering team worked on bug fixes.

Tasklist

For this release, we have continued to work on bug fixes in Tasklist as well.

Web Modeler

With this alpha release of Web Modeler, we’re introducing powerful new features that streamline process modeling and enhance the developer experience.

Azure Repos Sync

Camunda now supports an integration with Azure DevOps, which allows for direct synchronization with Azure repositories.

Azure-devops-camunda

FEEL Copilot

Pro- and low-code developers using Web Modeler SaaS can develop FEEL expressions with an integrated editor that pulls process variables and process context, making it easy for anyone to perform business logic in Camunda.

For Web Modeler SaaS customers, it also features the ‘FEEL Copilot’ which takes advantage of integrated generative AI to write and debug executable FEEL (Friendly Enough Expression Language) expressions.

Camunda-feel-copilot

Desktop Modeler

This alpha, we have also provided more functionality for our Desktop Modeler.

Process application deployment

A process application is now deployed as a single bundle of files. This allows using deployment binding for called processes, decisions, and linked forms.

Deployed decision link to Operate

After a DMN file is deployed to Camunda, links to the deployed decisions in Operate are displayed in the success notification.

Enhanced FEEL suggestions

Literal values like true or false are now displayed in the autocompletion for fast and easy expression writing.

Check out the full release notes for the latest Desktop Modeler 5.36 release right here.

Optimize

Our Optimize engineering team has been working on bug fixes this release cycle.

Identity

Camunda’s new Identity service delivers enhanced authentication and fine-grained authorization capabilities across both Self-Managed and SaaS environments. Key updates include:

  • Self-Managed Identity Management: Administrators can natively manage users, groups, roles, and memberships via the Identity database—without relying on external systems.
  • OIDC Integration: Supports seamless integration with standards-compliant external Identity Providers (IdPs), including Keycloak and Microsoft Entra (formerly Azure AD), enabling single sign-on (SSO) and federated identity management.
  • Role-Based Access Control (RBAC): Provides resource-level access control with assignable roles and group-based permissions, enabling precise scoping of user capabilities across the platform.
  • Flexible Mapping: Users, groups, and roles can now be dynamically mapped to resource authorizations and multi-tenant contexts, supporting complex enterprise and multi-tenant deployment scenarios.
  • Migration Support: Simplified tooling facilitates migration from legacy Identity configurations to the new service, reducing operational overhead and enabling a phased rollout.
  • Organizational Identity for SaaS: In SaaS deployments, customers can integrate their own IdP, allowing centralized management of organizational identities while maintaining cluster-specific resource isolation.
  • Cluster-Specific Roles & Groups: SaaS environments now support tenant-isolated roles, groups, and authorizations per cluster, ensuring that customer-specific access policies are enforced at runtime.

Please see our release notes for more on the updates to Identity management.

Console

The Console engineering team has been working on bug fixes this release cycle.

Installation Options

This section gives updates on our installation options and various supported software components.

Self-Managed

For our self-managed customers, we have introduced a graceful shutdown for C8Run by rebuilding how we manage C8Run started processes. This resolves an issue where stopping C8Run during the startup process can create zombie processes.

We have also added features to support supplying image.digest in the values.yaml file instead of an image tag as well as the support for an Ingress external hostname.

Task Automation Components

In this section, you can find information related to the components that allow you to build and automate your processes including our modelers and connectors.

Connectors

We have introduced two connectors to support agentic AI with Camunda. You can find more on Camunda and Agentic in the Agentic Orchestration section in this blog post.

  • The AI Agent connector which was recently published on Camunda Marketplace is now officially included as part of this alpha release and directly available in Web Modeler. This connector is designed for use with an ad-hoc sub-process in a feedback loop, providing automated user interaction and tool discovery/selection.

    The connector supports providing a custom OpenAI endpoint to be used in combination with custom providers and locally hosted models (such as Ollama).
  • The Vector Database connector, also published to Camunda Marketplace, allows embedding, storing, and retrieving Large Language Model (LLM) embeddings. This enables building AI-based solutions for your organizations, such as context document search and long-term LLM memory and can be used in combination with the AI Agent connector for RAG (Retrieval-Augmented Generation) use cases.

Agentic Orchestration

With a continued focus on operationalizing AI, this section provides information about the continued support of agentic orchestration in our product components. This new Agentic Orchestration documentation section of our release blog is a great starting point to explore Camunda’s Agentic Orchestration approach. 

Camunda-agentic-orchestration

To support modern automation requirements, Camunda has adopted orchestration patterns that enable AI agents and processes to remain adaptive by combining deterministic with dynamic orchestration.

This architecture allows agents to incorporate dynamic knowledge into their planning loops and decision processes. The same mechanisms also support continuous learning, by updating and expanding the knowledge base based on runtime feedback.

To support this approach, Camunda has incorporated both our Vector Database connector and AI Agent Outbound connector directly into its orchestration layer.

Together, these capabilities allow Camunda to support agentic orchestration patterns such as:

  • Planning loops that select and sequence tasks dynamically
  • Use of short-term memory (process variables) and long-term memory (vector database retrievals)
  • Integration of event-driven orchestration and multi-agent behaviors through nested ad-hoc subprocesses.

As mentioned in the Connectors section, we have recently released two connectors to support our approach:

  • The AI Agent connector is designed for use with an ad-hoc sub-process in a feedback loop, providing automated user interaction and tool discovery/selection.

    This connector integrates with large language models (LLMs)—such as OpenAI or Anthropic—giving agents reasoning capabilities to select and execute ad-hoc sub-processes within a BPMN-modeled orchestration. Agents can evaluate the current process context, decide which tasks to run, and act autonomously—while maintaining full traceability and governance through the orchestration engine.
  • The Vector Database connector which allows embedding, storing, and retrieving Large Language Model (LLM) embeddings. This enables building AI-based solutions for your organizations, such as context document search and long-term LLM memory and can be used in combination with the AI Agent connector for RAG (Retrieval-Augmented Generation) use cases.

If you would like to see these new connectors in action, we encourage you to review our website and see a video of how Camunda provides this functionality. We also have a step-by-step tutorial for using the AI Agent Connector in our blog.

Camunda 7

There are several updates in this release for Camunda 7.

Support for Spring Boot 3.5

This alpha release features support for Spring Boot 3.5.0.

New LegacyJobRetryBehaviorEnabled process engine flag

Starting with versions 7.22.5, 7.23.2 and 7.24.0, the process engine introduces a new configuration flag: legacyJobRetryBehaviorEnabled.

By default, when a job is created, its retry count is determined based on the camunda:failedJobRetryTimeCycle expression defined in the BPMN model.

However, setting legacyJobRetryBehaviorEnabled to true enables the legacy behavior, where the job is initially assigned a fixed number of retries (typically 3), regardless of the retry configuration.

In 7.22.5+ and in 7.23.2+ the default value is true for legacyJobRetryBehaviorEnabled. For 7.24.0+ the default value is false for legacyJobRetryBehaviorEnabled .

External task REST API and OpenAPI extended

Now the External task REST API is extended with the createTime field. OpenAPI is updated as well along with the extensionProperties for the LockedExternalTaskDto.

You can find the latest OpenAPI documentation here. Thank you for this community contribution.

Camunda 7 to Camunda 8 Migration Tools

With our Camunda 7 to Camunda 8 Migration Tools 0.1.0-alpha2 release, the Camunda 7 to Camunda 8 Data Migrator brings many quality of life improvements for our customers that are moving from Camunda 7 to Camunda 8.

Auto-deployment with Migrator Application

To help you migrate seamlessly, the BPMN diagrams that are placed in ./configuration/resources directory are auto-deployed to Camunda 8 when starting the migrator application.

Simplified Configuration

We’ve also made it easier to configure the Camunda 8 client allowing you to define client settings such as the Zeebe URL directly in the application.yml file.

Logging Levels

In addition, logging has been enhanced with the introduction of logging levels, as well as more specific warnings and errors.

For example, if a Camunda 7 process instance is in a state that cannot be consistently translated to Camunda 8, a warning is logged and the process instance is skipped.

To proceed, these instances must be adjusted in Camunda 7. Once complete, with the recent updates, you can resume migration for previously skipped and adjusted instances.

While the Camunda 7 to Camunda 8 Migration Tools are still in alpha, you can already check out the project and give it a try! Visit https://github.com/camunda/c7-data-migrator.

Thank you

We hope you enjoy our latest minor release updates! For more details, be sure to review the latest release notes as well. If you have any feedback or thoughts, please feel free to contact us or let us know on our forum.

If you don’t have an account, you can try out the latest version today with a free trial.

The post Camunda Alpha Release for June 2025 appeared first on Camunda.

]]>
The Benefits of BPMN AI Agents https://camunda.com/blog/2025/05/benefits-bpmn-ai-agents/ Thu, 22 May 2025 21:14:35 +0000 https://camunda.com/?p=139555 Why are BPMN AI agents better? Read on to learn about the many advantages to using BPMN with your AI agents, and how complete visibility and composability help you overcome key obstacles to operationalizing AI.

The post The Benefits of BPMN AI Agents appeared first on Camunda.

]]>
There are lots of tools for building AI Agents and at the core they need three things. First, they need to understand their overall purpose and the rules in which they should operate. So you might create an agent and tell it, “You’re here to help customers with generic requests about the existing services of the bank.” Secondly, we need a prompt, which is a request to the agent that an agent can try to fulfil. Finally, you need a set of tools. These are the actions and systems that an agent has access to in order to fulfill the request.

Most agent builders will wrap up those three requirements into a single, static, synchronous system, but at Camunda we decided not to do this. We found that it creates too many use case limitations, it’s not scalable and it’s hard to maintain. To overcome these limitations, we came up with a concept that lets us decouple these requirements and completely visualize an agent in a way that opens it up to far more use cases, not only on a technical level, but also in a way that  alleviates a lot of the fears that people have when adding AI agents as part of their core processes.

The value of a complete visualization

Getting insight into how an AI Agent has performed in a given task often requires someone to read through its chain of thought (this is like the AI’s private journal, where it details how it’s thinking about the problem). This will usually let you know what tools it decided to use and why. So in theory if you wanted to check on how your AI Agent was performing, you could read through it. In practice, this is just not practical for two reasons:
1. It limits the visibility of what happened to a text file that needs to be interpreted.
2. AI agents can sometimes lie in their chain of thought—so it might not even be accurate.

Our solution to this is to completely visualize the agent, its tools and its execution all in one place.

Gain full visibility into AI agent performance with BPMN

Ai-agent-visibility-bpmn-camunda

The diagram above shows a BPMN process that has implemented an AI agent. It’s in two distinct parts. The agent logic is contained within the AI Task Agent activity and the tools it has access to is displayed with an ad-hoc sub-process. This is a BPMN construct that allows for completely dynamic execution of the tasks within it.

With this approach the action of an agent is completely visible to the user in design time, during execution, and can even be used to evaluate how well the process performs with the addition of an agent.

Ai-agent-performance-camunda

The diagram above shows a headmap which shows which tools take the longest to run. This is something impossible to accurately measure with a more traditional AI agent building approach.

Decoupling tools from agent logic

This design completely decouples the agent logic from the available tool set. Meaning that the agent will find out only in runtime what tools are at its disposal. The ramifications of this are actually quite profound. It means that you can run multiple versions of the same process with the same agent, but a completely different tool set. This makes context reversing far easier and also lets us qualitatively evaluate the impact of adding or removing certain tools through AB testing.

Improving maintainability for your AI agents

The biggest impact of this decoupling in my opinion though is how it improves maintainability. Designers of the process can add or remove new tools without ever needing to change or update the AI agent. This is a fantastic way of separating responsibilities when a new process is being built. While AI experts can focus on ensuring the AI Task Agent is properly configured, developers can build the tooling independently. And of course, you can also just add pre-built tools for the agent to use.

Ai-agent-maintanability-camunda

Composable design

Choosing, as we did, to marry AI agent design with BPMN design means we’ve unlocked access for AI agent designers to all the BPMN patterns, best practices and functionality that Camunda has been building over the last 10 years or so. While there’s a lot you gain because of that, I want to focus on just one here: Composable architecture.

Composable orchestration is the key to operationalizing AI

Camunda is designed to be an end-to-end orchestrator to a diverse set of tools, rules, services and people. This means we have designed our engine and the tools around it so that there is no limitation on what can be integrated. It also means we want users to be able to switch out services and systems over time, as they become legacy or a better alternative is found.

This should be of particular interest to a developer of AI agents because it lets you not only switch out the tools the AI Agent has access to, but more importantly, it lets you switch out the agent’s own LLM for the latest and greatest. So to add or even just test out the behaviour of a new LLM no longer means building a new agent from scratch—just swap out the brain and keep the rest. This alone is going to lead to incredibly fast improvements and deployments to your agents, and help you make sure that a change is a meaningful and measurable one.

Ai-agent-maintanability-camunda-2

Conclusion

Building AI agents the default way that other tools offer right now leads you to adding a new black box to your system. One that is less maintainable and and far more opaque in execution than anything else you’ve ever integrated. This is going to make it hard to properly maintain and evaluate.

At Camunda we have managed to open up that black box in a way that integrates it directly into your processes as a first-class citizen. Your agent will immediately benefit from everything that BPMN does and become something that can grow with your process.

It’s important to understand that you’re still adding a completely dynamic aspect to your process, but this way you mitigate most concerns early on. For all these reasons, I can imagine that of all the many, many AI agents that are going to be built this year, I’m sure the only ones that will still be used by the end of next year will be built in Camunda with BPMN.

Try it out

All of this is available for you to try out in Camunda today. Learn more about how Camunda approaches agentic orchestration and get started now with a free trial here.

The post The Benefits of BPMN AI Agents appeared first on Camunda.

]]>
Guide to Adding a Tool for an AI Agent https://camunda.com/blog/2025/05/guide-to-adding-tool-ai-agent/ Wed, 21 May 2025 19:31:39 +0000 https://camunda.com/?p=139473 In this quick guide, learn how you can add exactly the tools you want to your AI Agent's toolbox so it can get the job done.

The post Guide to Adding a Tool for an AI Agent appeared first on Camunda.

]]>
AI Agents and BPMN open up an exciting world of agentic orchestration, empowering AI to act with greater autonomy while also preserving auditability and control. With Camunda, a key way that works is by using an ad-hoc sub-process to clearly tell the AI agent which tools it has access to while it attempts to solve a problem. This guide will help you understand exactly how to equip your AI agents with a new tool.

How to build an AI Agent in BPMN with Camunda

There are two aspects to building an AI Agent in BPMN with Camunda.

  1. Defining the AI Task Agent
  2. Defining the available tools for the agent.

The AI Task Agent is the brain, able to understand the context and the goal and then to use the tools at its disposal to complete the goal. But where are these tools?

Adding new tools to your AI agent

The tools for your AI agent are defined inside an ad-hoc sub-process which the agent is told about. So assuming you’ve set up your Task Agent already—and you can! Because you just need the process model from this github repo. The BPMN model without any tools should look like this:

Ad-hoc-sub-process

Basically I’ve removed all the elements from within the ad-hoc sub-process. The agent still has a goal—but now has no way of accomplishing that goal.

In this guide we’re going to add a task to the empty sub-process. By doing this, we’ll give the AI Task Agent access to it as a tool it can use if it needs to.

The sub-process has a multi-instance marker, so for each tool to be used there’s a local variable called toolCall that we can use to get and set variables.

I want to let the AI agent ask a human a technical question, so first I’m going to add a User Task to the sub-process.

Ai-agent-tool

Defining the tool for the agent

The next thing we need to do is somehow tell the agent what this tool is for. This is done by entering a natural language description of the tool in the Element Documentation field of the task.

Element-documentation-ai-agent-tool

Defining variables

Most tools are going to request specific variables in order to operate. Input variables are defined so that the agent is aware of what’s required to run the tool in question. It also helps pass the given context of the current process to the tool. Output variables define how we map the response from the tool back into the process instance, which means that the Task Agent will be aware of the result of the tool’s execution.

In this case, to properly use this tool, the agent will need to come up with a question.

For a User Task like this we will need to create an input variable like the one you see below.

Local-variable-ai-agent-tool

In this case we created a local variable, techQuestion, directly in the task. We’ll then both assign this variable and define it for the task agent we need to call the fromAi function. To do that we must provide:

  1. The location of the variable in question.
    • In this case that would be within the toolCall variable.
  2. A natural language description of what the variable is used for.
    • Here we describe it as the question that needs to be asked.
  3. The variable type.
    • This is a string, but it could be any other primitive variable type.

When all put together, it looks like this:

fromAi(toolCall.techQuestion, "This is a specific question that you’d like to ask", "string")

Next we need an output variable so that the AI agent can be given the context it needs to understand if running this tool produced the output it expected. In this case, we want it to read the answer from the human expert it’s going to consult.

Process-variable-ai-agent-tool

This time, create an output variable. You’ll have two fields to fill in.

  1. Process variable name
    • It’s important that this variable name matches the output expected by the sub-process. The expected name can be found in the output element of the sub-process, and as you can see above, we’ve named our output variable toolCallResult accordingly.
      Output-ai-agent-tool
  2. Variable assignment value
    • This needs to simply take the expected variable from the tool task and add it to a new variable that can be put into the toolCallResult object

So in the end the output variable assignment value should be something like this:

{ “humanAnswer” : humanAnswer}

And that’s it! Now the AI Task Agent knows about this tool, knows what it does and knows what variables are needed in order to get it running. You can repeat this process to give your AI agents access to exactly as many or as few tools as they need to get a job done. The agents will then have the context and access required to autonomously select from the tools you have provided, and you’ll be able to see exactly what choices the agent made in Operate when the task is complete.

All of this is available for you to try out in Camunda today. Learn more about how Camunda approaches agentic orchestration and get started now with a free trial here. For more on getting started with agentic AI, feel free to dig deeper into our approach to AI task agents.

The post Guide to Adding a Tool for an AI Agent appeared first on Camunda.

]]>
MCP, ACP, and A2A, Oh My! The Growing World of Inter-agent Communication https://camunda.com/blog/2025/05/mcp-acp-a2a-growing-world-inter-agent-communication/ Tue, 20 May 2025 20:03:51 +0000 https://camunda.com/?p=139339 Making sense of the evolving agentic communication landscape: Model Context Protocol, Agent Communication Protocol and Agent2Agent Protocol.

The post MCP, ACP, and A2A, Oh My! The Growing World of Inter-agent Communication appeared first on Camunda.

]]>
The AI ecosystem is rapidly evolving from isolated AI models toward multi-agent systems: environments where AI agents must coordinate, communicate, and interoperate efficiently. At Camunda, we see this pattern quickly emerging as organizations are evolving their end-to-end business processes to take advantage of agent-driven automation.

As developers explore how to make multi-agent systems useful and reliable, new communication standards are emerging to address the need for interoperability, security, and shared understanding. Three notable efforts in this domain are:

  • Model Context Protocol (MCP) developed by Anthropic
  • Agent Communication Protocol (ACP) developed by IBM Research
  • Agent2Agent (A2A) Protocol developed by Google and Microsoft

Each targets a specific layer of the multi-agent interaction stack and reflects different philosophical and architectural priorities.

Model Context Protocol (MCP)

Anthropic developed the Model Context Protocol (MCP) to solve a narrow but critical problem: how to give large language models (LLMs) structured context about tools, APIs, and systems they can interact with. MCP focuses on standardizing the input context that LLMs receive before agents execute their tasks. As the Anthropic documentation says, “Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.”

With MCP, tools expose a structured schema such as an OpenAPI or JSON schema along with natural language descriptions. The LLM receives the schema via MCP when it is being prompted to act, ensuring a consistent understanding of available actions.

As an early example of agent-to-agent communication, MCP has several advantages. It has promise as a way to equip AI agents with structured context to call APIs, tools, and plugins intelligently. It’s an open specification that’s designed to work with any LLM or agentic framework, making it highly flexible. And it’s lightweight and easy for software developers to adopt because it aligns with common software development practices.

However, it’s important to note that MCP is limited to tool and model interactions; it’s not a general-purpose agent protocol. As of now, it doesn’t define inter-agent negotiation or dynamic delegation of tasks.

Example: Customer onboarding in financial services

Imagine a customer onboarding process at a retail bank, which requires validating identity documents, performing Know Your Customer (KYC) checks, interfacing with fraud detection services, and activating new customer accounts. Traditionally, each of these tasks is handled by a separate back-end service with its own API.

Using MCP, an AI-powered onboarding agent could be equipped with structured, real-time context about all of these APIs. MCP ensures the agent understands what each service does, how to call it, and what inputs and outputs to expect, without the need for a developer to hard-code specific logic into the model.

This allows the onboarding agent to dynamically compose API calls, intelligently route requests, and adapt its workflow if a service is temporarily unavailable—all while minimizing human intervention. The result is a faster, more consistent onboarding experience that reduces manual handoffs and operational delays.

Agent Communication Protocol (ACP)

The Agent Communication Protocol (ACP) developed by IBM Research is designed to define how autonomous AI agents communicate with one another, with an emphasis on structured dialogue and coordination across heterogeneous systems. It aims to provide a shared semantic foundation for multi-agent communication, including message types, intents, context markers, and response expectations.

With ACP, agents exchange structured messages that encapsulate intention, task parameters, and context. The protocol enables dynamic negotiation between agents—for example, for delegation or task refinement.

ACP’s strong focus on semantics and interoperability mean it has the potential to become a very powerful protocol for high-level coordination of agents (beyond simple messaging). It can facilitate distributed task-solving by autonomous agents that have overlapping goals.

However, a potential hurdle to adoption is that ACP requires agent developers to agree on shared ontologies. This may position it as the protocol of choice for development teams that work for the same company or that work on the same software product. ACP is still in the early stages of development in terms of syntax, implementation, and tooling, so much remains to be seen as it grows.

Example: Supply chain coordination across departments

Consider a global manufacturer with autonomous agents representing procurement, inventory, and logistics. These agents must coordinate continuously to maintain optimal stock levels, anticipate shortages, and reroute shipments when needed. Using ACP, these agents could engage in structured, semantically rich dialogue to negotiate changes in supply orders, reallocate inventory based on real-time demand forecasts, or trigger alerts if a delay will cause cascading disruptions. For example:

  • The procurement agent might notify logistics: “Delay expected from supplier X. Can we reassign delivery Y?”
  • The logistics agent can respond: “Yes, rerouting via warehouse Z. Updating inventory accordingly.”

By using shared ontologies and structured message types, ACP aims to support adaptive, high-fidelity inter-agent collaboration across teams and systems. This is especially valuable in environments where distributed decision-making and resilience are key.

Agent2Agent Protocol (A2A)

The Agent2Agent Protocol (A2A) being developed by Google with support from Microsoft is another open standard. It’s designed to allow different AI agents from different companies or domains to exchange messages and perform coordinated tasks. Its development was prompted by the growing use of LLM-based agents in workflows that span multiple applications.

In A2A, agents advertise their capabilities using a structured metadata format called “agent cards.” Agents then communicate through signed, structured messages based on a shared schema. A2A includes provisions for trust, routing, and structured memory exchange. This design maximizes the options for composability and cross-platform collaboration between AI agents.

While A2A is a community-driven project, it is supported by two of the largest software companies in the world, both of which are cloud providers and LLM vendors, which may accelerate its development. However, the protocol is still an early-stage alpha with evolving security and governance capabilities, making it difficult for other vendors to start developing with it.

Example: Cross-platform customer support automation

Picture a scenario where a retail company uses Google Workspace, Zendesk, Salesforce, and Microsoft Teams. Different LLM-based agents exist in each environment and perform tasks such as summarizing conversations, logging support tickets, updating customer relationship management (CRM) records, and scheduling follow-ups.

With A2A, these agents can collaborate across platforms:

  • A Google-based agent summarizes a support call and shares the summary with a Salesforce agent
  • The Salesforce agent updates the customer record and flags a follow-up
  • A Microsoft-based assistant sees the flag and books a Teams meeting with the customer

Through agent cards and structured messaging, A2A aims to enable interoperability across agent ecosystems, so tasks can flow fluidly without brittle, point-to-point integrations. This supports consistent, personalized, and efficient customer service—at scale.

Comparing developing communication protocols

The following table summarizes the current state of MCP, ACP, and A2A:

MCPACPA2A
DeveloperAnthropicIBM ResearchGoogle and Microsoft
ScopeLLM tool context injectionSemantic multi-agent dialogueInter-agent message exchange
OpennessOpen specificationConceptual, not yet standardizedOpen-source, WIP standard
Primary focusStructured API/tool inputIntent and coordinationCapability discovery, secure messaging
Best forAgents interfacing with toolsComplex, interdependent agentsCross-platform agent workflows
LimitationsNo inter-agent messagingUndefined implementationStill maturing, needs consensus

As you can see, these three protocols represent complementary approaches to the problem of inter-agent communication. MCP addresses the immediate need to contextualize LLMs effectively; ACP looks further ahead at semantic richness and intent modeling; and A2A targets broad interoperability across agents and platforms.

For developers and organizations building agentic architectures, understanding and experimenting with these protocols will be essential. While no single standard has emerged as dominant, the collective momentum suggests that interoperability and shared context will be key to unlock the full potential of multi-agent AI.

Camunda can help you operationalize AI agents

Whether you’re new to agentic AI or you’ve already started building agents, Camunda process orchestration and automation can help you put AI into action. To learn about Camunda’s agentic orchestration capabilities, check out our guide, “Why agentic process orchestration belongs in your automation strategy.”

The post MCP, ACP, and A2A, Oh My! The Growing World of Inter-agent Communication appeared first on Camunda.

]]>
Camunda 8.8 Preview: Introducing FEEL Capabilities in Copilot https://camunda.com/blog/2025/05/camunda-88-preview-feel-capabilities-copilot/ Mon, 19 May 2025 21:18:07 +0000 https://camunda.com/?p=139288 Easily turn plain language into accurate FEEL expressions with the latest capabilities coming to Camunda Copilot.

The post Camunda 8.8 Preview: Introducing FEEL Capabilities in Copilot appeared first on Camunda.

]]>
Despite FEEL being friendly enough, users, both professional and low-code developers alike, can still stumble at times when writing, debugging, and managing FEEL expressions.

According to the 2025 State of Process Orchestration & Automation Report, the complexity of processes and the diversity of systems are major hurdles for both business and IT teams. Nearly 78% of organizations report that complex logic and conditionality increase the difficulty of automation. That’s why we are excited to give you a preview of the FEEL capabilities that will be introduced in Camunda’s Copilot for Camunda Web Modeler SaaS in the Camunda 8.8 release. These new features will empower users to author FEEL logic much more easily and accurately.

With this added functionality, users will be able to type their intent in plain text, and Copilot has been trained and tested to generate valid FEEL expressions using the available process variables and context. It can also fix incorrect or incomplete FEEL expressions, guiding users toward valid logic. Additionally, Copilot can explain the purpose and syntax of FEEL functions, helping users understand how each expression works and translate code snippets like Java, JUEL or Python into FEEL. Lastly, seamless debugging is built in; users can test, refine and validate FEEL expressions directly within the Modeler without switching to Play mode. Copilot can even create missing context variables and generate mock values for testing, reducing time-consuming trial and error.

Generate, explain, debug, and convert FEEL expressions from plain text or code with Camunda Copilot’s new FEEL capabilities.

From natural language to executable processes

This new support for FEEL expressions adds a powerful dimension to Camunda Copilot’s existing BPMN capabilities. Users can now generate BPMN-compliant diagrams from natural language descriptions (or, for that matter, create documentation based on a BPMN diagram) and immediately enrich them with executable business logic. Imagine describing a process in plain language, watching it take shape as a BPMN diagram, and then seamlessly using Copilot to define conditions, calculations, and decision logic, all without writing complex code or switching out of Modeler.

Camunda-copilot
Quickly generate BPMN diagrams from any text input, whether it’s simple natural language or legacy code, or generate documentation or suggestions from existing diagrams in seconds using Camunda Copilot.

Whether migrating legacy models, translating documentation into BPMN, or iterating on end-to-end processes, Copilot can help you accelerate and improve your process modeling. It enables both technical and business users to rapidly prototype, validate, and enhance process models with high-quality, executable logic, reducing manual effort and increasing model quality. We’re excited that this feature is coming soon to Web Modeler and we can’t wait for you to try it out and let us know your feedback!

Update: As of the June Camunda alpha release, this feature is now available and ready for use! Check it out in the latest alpha and learn more here.

The post Camunda 8.8 Preview: Introducing FEEL Capabilities in Copilot appeared first on Camunda.

]]>
Agentic Orchestration: Automation’s Next Big Shift https://camunda.com/blog/2025/05/agentic-orchestration-automations-next-big-shift/ Wed, 14 May 2025 11:30:00 +0000 https://camunda.com/?p=138589 We've always believed in end-to-end process orchestration. Agentic orchestration lets us take it further, as we design the autonomous, AI-powered organization of the future.

The post Agentic Orchestration: Automation’s Next Big Shift appeared first on Camunda.

]]>
Since starting Camunda, we’ve believed in one thing above all: End-to-end process orchestration is the best way to make automation work—across people, systems, and devices.

We’ve seen time and time again that task-based automation might deliver quick wins, but it doesn’t scale. The moment processes get complex, those isolated tools start pulling in different directions. The result? Broken customer experiences. Inefficient teams. A lack of visibility and an inability to improve processes.

That’s the problem we set out to solve back in 2013. And it’s the same problem we continue to solve—only now, the stakes are higher.

AI is changing everything. Nearly every conversation we’re having with customers right now touches on it. According to the 2025 State of Process Orchestration and Automation Report, 84% of organizations want to add more AI capabilities over the next three years. But 85% struggle to make AI actually work at scale.

There are a few reasons why this is happening. First, simply adding AI into an automation strategy doesn’t magically create value. Done incorrectly, it just creates another silo—and yet another layer of technical debt. 

Second, traditional process automation focuses on automating around a set of predetermined rules (or deterministic orchestration). AI presents the opportunity to break those rules by executing processes dynamically.

That’s where agentic orchestration comes in.

Overcoming limitations in traditional process design

Process orchestration as we know it is deterministic, meaning you design processes and define their logic in advance. Sure, it can handle variants, but only if they’re a part of the original process model in BPMN or DMN. What we think of today as a fully automated process, or “straight through processing” (STP), usually relies on this structure.

AI agents make process automation much more dynamic. Dynamic orchestration uses AI to handle “unforeseen” tasks. It orchestrates based on defined goals and a given context, but doesn’t need specific instructions like a deterministic process.

But most business processes are somewhere in the middle. They have some STP in place, but are still using human case management to handle exceptions or tasks without a straightforward action. Agentic orchestration blends deterministic and dynamic orchestration seamlessly.

For example, most of the time, STP is done in seconds or minutes. But sometimes it fails. And when it does, people step in to investigate. It’s slow, messy, and manual. That’s where AI can help. Agentic orchestration takes over when the unexpected happens—analyzing unstructured data, spotting patterns, and suggesting actions.

Image1

Real world examples of agentic orchestration

And here’s where things get really exciting: This isn’t theoretical anymore. It’s real. It’s working. And it’s already creating serious value.

Our partner EY has built a tool for agentic trade reconciliation with Camunda. Reconciliation errors are usually handled manually. Because they are very labor intensive, they take a lot of time to review and are error-prone—resulting in a risk of fines. In fact, the world’s largest banks employ up to 25,000 people to review these exceptions. With agentic orchestration, they’re now using AI to suggest the next best action based on trade data and LLMs. That means faster resolution and T+1 compliance. But the most impressive value is in productivity: With agentic trade reconciliation, one employee can now handle far more cases per day on average, resulting in an increase in productivity of 7x.

Here’s another example: Payter, a payment terminal business for vending machines, is drowning in case management when payments fail. They have now started using Camunda to blend deterministic process logic with AI agent-driven exception handling. The expected outcome? Resolution times will drop by 50% from 24 to 12 minutes. Even better? Customer service will improve not just because of the shorter resolution time, but also because employees are now able to spend more time on complex issues.

Building the autonomous organization of the future

And the examples above are only the beginning. We’re seeing more and more companies wanting to bring more AI into their processes. In order to do that, they’re operationalizing AI in a way that’s composable, scalable, and flexible—not stuck in isolated systems. And Camunda is at the foundation of this shift. We’ve spent over a decade building a platform that does one thing exceptionally well: orchestrate complex, mission-critical processes from end to end.

Now, we’ve taken our powerful orchestration engine and infused it with embedded AI. The result? The ability to blend deterministic and dynamic orchestration in a unified agentic orchestration model—with guardrails, auditability, and control.

Camunda allows users to blend deterministic orchestration (via BPMN) with agentic orchestration (via agents) so you can implement as much or as little AI as you want within guardrails.

What does that mean in practice?

It means you can now:

  • Blend structured BPMN and DMN process modeling with flexible AI agents.
  • Automate what was once “un-automatable” (like complex case management).
  • Inject AI into your legacy systems without a big bang transformation.
  • Use low-code tools and connectors to move fast.
  • Implement AI safely and reliably, with “guardrails” for full auditability and control.

We’re giving you AI-native capabilities, like:

  • Ad-hoc sub-processes: Let agents decide what happens next.
  • Camunda Copilot: Go from a text prompt to a running process.
  • RPA and IDP: Integrated, out-of-the-box, and ready to go.
  • ERP Integration: Orchestrate AI across SAP, ServiceNow and beyond.

Here’s a look into the future: AI agents that get even smarter by working alongside humans—automating more and more over time. Think AI loan specialists that are trained directly from human input.

Our long-term vision hasn’t changed

We’ve always believed in end-to-end process orchestration. What’s different now is how far we can take it. Agentic orchestration brings us closer to a world where AI and humans truly collaborate across systems, teams, and time zones. We’re designing the autonomous, AI-powered organization of the future.

If you’re thinking about bringing agents into your business—this is the moment. With Camunda, you’ve got the foundational technology and the vision to do it right.

The next chapter of automation just started. And I couldn’t be more excited.

Let’s build the future together.

Learn more

You can learn more about our agentic orchestration capabilities here, and if you want to dive deeper, be sure to watch the recording of the keynote from CamundaCon 2025 Amsterdam (available soon).

The post Agentic Orchestration: Automation’s Next Big Shift appeared first on Camunda.

]]>
Intelligent by Design: A Step-by-Step Guide to AI Task Agents in Camunda https://camunda.com/blog/2025/05/step-by-step-guide-ai-task-agents-camunda/ Wed, 14 May 2025 07:00:00 +0000 https://camunda.com/?p=138550 In this step-by-step guide (with video), you'll learn about the latest ways to use agentic ai and take advantage of agentic orchestration with Camunda today.

The post Intelligent by Design: A Step-by-Step Guide to AI Task Agents in Camunda appeared first on Camunda.

]]>
Camunda is pleased to announce new features and functionality related to how we offer agentic AI. With this post, we provide detailed step-by-step instructions to use Camunda’s AI Agent to take advantage of agentic orchestration with Camunda.

Note: Camunda also offers an agentic AI blueprint on our marketplace.

Camunda approach to AI agents

Camunda has taken a systemic, future-ready approach for agentic AI by building on the proven foundation of BPMN. At the core of this approach is our use of the BPMN ad-hoc sub-process construct, which allows for tasks to be executed in any order, skipped, or repeated—all determined dynamically at runtime based on the context of the process instance.

This pattern is instrumental in introducing dynamic (non-deterministic) behavior into otherwise deterministic process models. Within Camunda, the ad-hoc sub-process becomes the agent’s decision workspace—a flexible execution container where large language models (LLMs) can assess available actions and determine the most appropriate next steps in real time.

We’ve extended this capability with the introduction of the AI Agent Outbound connector (example blueprint of usage) and the Embeddings Vector Database connector (example blueprint of usage). Together, they enable full-spectrum agentic orchestration, where workflows seamlessly combine deterministic flow control with dynamic, AI-driven decision-making. This dual capability supports both high-volume straight-through processing (STP) and adaptive case management, empowering agents to plan, reason, and collaborate in complex environments. With Camunda’s approach, the AI agents can add additional context for handling exceptions from STP.

This represents our next phase of AI Agent support and we intend to continue adding richer features and capabilities.

Camunda support for agentic AI

To power next-generation automation, Camunda embraces structured orchestration patterns. Camunda’s approach ensures your AI orchestration remains adaptive, goal-oriented, and seamlessly interoperable across complex, distributed systems.

As part of this evolution, Camunda has integrated Retrieval-Augmented Generation (RAG) into its orchestration fabric. RAG enables agents to retrieve relevant external knowledge—such as historical case data or domain-specific content—and use that context to generate more informed and accurate decisions. This is operationalized through durable, event-driven workflows that coordinate retrieval, reasoning, and human collaboration at scale.

Camunda supports this with our new Embeddings Vector Database Outbound connector—a modular component that integrates RAG with long-term memory systems. This connector supports a variety of vector databases, including both Amazon Managed OpenSearch (used in this exercise) and Elasticsearch.

With this setup, agents can inject knowledge into their decision-making loops by retrieving semantically relevant data at runtime. This same mechanism can also be used to update and evolve the knowledge base, enabling self-learning behaviors through continuous feedback.

To complete the agentic stack, Camunda also offers the AI Agent Outbound connector. This connector interfaces with a broad ecosystem of large language models (LLMs) like OpenAI and Anthropic, equipping agents with reasoning capabilities that allow them to autonomously select and execute ad-hoc sub-processes. These agents evaluate the current process context, determine which tasks are most relevant, and act accordingly—all within the governed boundaries of a BPMN-modeled orchestration.

How this applies to our exercise

Before we step through an exercise, let’s review a quick explanation about how these new components and Camunda’s approach will be used in this example and in your agentic AI orchestration.

The first key component is the AI Task Agent. It is the brains behind the operations. You give this agent a goal, instructions, limits and its chain of thought so it can make decisions on how to accomplish the set goal.

The second component is the ad-hoc sub-process. This encompasses the various tools and tasks that can be performed to accomplish the goal.

A prompt is provided to the AI Agent and it decides which tools should be run to accomplish this goal. The agent reevaluates the goal and the information from the ad-hoc sub-process and determines which of these tools, if any, are needed again to accomplish the goal; otherwise, the process ends.

Now armed with this information, we can get into our example and what you are going to build today.

Example overview

This BPMN process defines a message delivery service for the Hawk Emporium where AI-powered task agents make real-time decisions to interpret customer requests and select the optimal communication channels for message delivery.

Our example model for this process is the Message Delivery Service as shown below.

Message-delivery-service-agentic-orchestration

The process begins with user input filling out a form including a message, the desired  individual(s) to send it to, and the sender. Based on this input, a script task generates a prompt to send to the AI Task Agent. The AI Task processes the generated prompt and determines appropriate tasks to execute. Based on the AI Agent’s decision, the process either ends or continues to refine using various tools until the message is delivered.

The tasks that can be performed are located in the ah-hoc sub-process and are:

  1. Send a Slack message (Send Slack Message) to specific Slack channels,
  2. Send an email message (Send an Email) using SendGrid,
  3. Request additional information (Ask an Expert) with a User Task and corresponding form.

If the AI Task Agent has all the information it needs to generate, send and deliver the message, it will execute the appropriate message via the correct tool for the request. If the AI Agent determines it needs additional information; such as a missing email address or the tone of the message, the agent will send the process instance to a human for that information.

The process completes when no further action is required.

Process breakdown

Let’s take a little deeper dive on the components of the BPMN process before jumping in to build and execute it.

AI Task Agent

The AI Task Agent for this exercise uses AWS Bedrock’s Claude 3 Sonnet model for processing requests. The agent makes decisions on which tools to use based on the context. You can alternatively use Anthropic or OpenAI.

SendGrid

For the email message task, you will be sending email as community@camunda.com. Please note that if you use your own SendGrid account, this email source may change to the email address for that particular account.

Slack

For the Slack message task, you will need to create the following channels in your Slack organization:

  • #good-news
  • #bad-news
  • #other-news

Assumptions, prerequisites, and initial configuration

A few assumptions are made for those who will be using this step-by-step guide to implement your first an agentic AI process with Camunda’s new agentic AI features. These are outlined in this section.

The proper environment

In order to take advantage of the latest and greatest functionality provided by Camunda, you will need to have a Camunda 8.8-alpha4 cluster or higher available for use. You will be using Web Modeler and Forms to create your model and human task interface, and then Tasklist when executing the process.

Required skills

It is assumed that those using this guide have the following skills with Camunda:

  • Form Editor – the ability to create forms for use in a process.
  • Web Modeler – the ability to create elements in BPMN and connect elements together properly, link forms, and update properties for connectors.
  • Tasklist – the ability to open items and act upon them accordingly as well as starting processes.
  • Operate – the ability to monitor processes in flight and review variables, paths and loops taken by the process instance.

Video tutorial

Accompanying this guide, we have created a step-by-step video tutorial for you. The steps provided in this guide closely mirror the steps taken in the video tutorial. We have also provided a GitHub repository with the assets used in this exercise. 

Connector keys and secrets

If you do not have existing accounts for the connectors that will be used, you can create them.

You will need to have an AWS with the proper credentials for AWS Bedrock. If you do not have this, you can follow the instructions on the AWS site to accomplish this and obtain the required keys:

  • AWS Region
  • AWS Access key
  • AWS Secret key

You will also need a SendGrid account and a Slack organization. You will need to obtain an API key for each service which will be used in the Camunda Console to create your secrets.

Secrets

The secrets will be referenced in your model using {{secrets.yourSecretHere}} where yourSecretHere represents the name of your connector secret.

For this example to work you’ll need to create secrets with the following names if you use our example and follow the screenshots provided:

  • SendGrid
  • Slack
  • AWS_SECRET_KEY
  • AWS_ACCESS_KEY
  • AWS_REGION

Separating sensitive information from the process model is a best practice. Since we will be using a few connectors in this model, you will need to create the appropriate connector secrets within your cluster. You can follow the instructions provided in our documentation to learn about how to create secrets within your cluster.

Now that you have all the background, let’s jump right in and build the process.

Note: Don’t forget you can download the model and assets from the GitHub repository.

Overview of the step-by-step guide

For this exercise, we will take the following steps:

  • Create the initial high-level process in design mode.
    • Create  the ad-hoc sub-process of AI Task Agent elements.
  • Implement the process.
    • Configure the connectors.
      • Configure the AI Agent connector.
      • Configure the Slack connector.
        • Create the starting form.
        • Configure the AI Task Agent.
        • Update the gateways for routing.
        • Configure the ad-hoc sub-process.
        • Connect the ad-hoc sub-process and the AI Task agent
  • Deploy and run the process.
  • Enhance the process, deploy and run again.

Build your initial process

Create your process application

The first step is to create a process application for your process model and any other associated assets. Create a new project using the blue button at the top right of your Modeler environment.

Build-process

Enter the name for your project. In this case we have used the name “AI Task Agent Tutorial” as shown below.

Process-name

Next, create your process application using the blue button provided.

Enter the name of your process application, in this example “AI Task Agent Tutorial,” select the Camunda 8.8-alpha4 (or greater) cluster that you will be using for your project, and select Create to create the application within this project.

Initial model

The next step is to build your process model in BPMN and the appropriate forms for any human tasks. We will be building the model represented below.

Message-delivery-service-agentic-orchestration

Click on the process “AI Agent Tutorial” to open to diagram the process. First, change the name of your process to “Message Delivery Service” and then switch to Design mode as shown below.

Design-mode

These steps will help you create your initial model.

  1. Name your start event. We have called it “Message needs to be sent” as shown below. This start event will have a form front-end that we will build a bit later.
    Start-event

  2. Add an end event and call it “Message delivered”
    End-event

  3. The step following the start event will be a script task called “Create Prompt.” This task will be used to hold the prompt for the AI Task Agent.
    Script-task

  4. Now we want to create the AI Task Agent. We will build out this step later after building our process diagram.
    Ai-agent

Create the ad-hoc sub-process

Now we are at the point in our process where we want to create the ad-hoc sub-process that will hold our toolbox for the AI Task Agent to use to achieve the goal.

  1. Drag and drop the proper element from the palette for an expanded subprocess.
    Sub-process


    Your process will now look something like this.
    Sub-process-2

  2. Now this is a standard sub-process, which we can see because it has a start event. We need to remove the start event and then change the element to an “Ad-hoc sub-process.”
    Ad-hoc sub-process

    Once the type of sub-process is changed, you will see the BPMN symbol (~) in the subprocess denoting it is an ad-hoc sub-process.
  3. Now you want to change this to a “Parallel multi-instance” so the elements in the sub-process can be run more than once, if required.
    Parallel multi-instance


    This is the key to our process, as the ad-hoc sub-process will contain a set of tools that may or may not be activated to accomplish the goal. Although BPMN is usually very strict about what gets activated, this construct allows us to control what gets triggered by what is passed to the sub-process.
  4. We need to make a decision after the AI Task Agent executes which will properly route the process instance back through the toolbox, if required. So, add a mutually exclusive gateway between the AI Task Agent and the end event, as shown below, and call it “Should I run more tools?”.
    Run-tools

  5. Now connect that task to the right hand side of your ad-hoc sub-process.
    Connect-to-ad-hoc-sub-process

  6. If no further tools are required, we want to end this process. If there are, we want to go back to the ad-hoc sub-process. Label the route to the end event as “No” and the route to the sub-process as “Yes” to route appropriately.
    Label-paths

  7. Take a little time to expand the physical size of the sub-process as we will be adding elements into it.
  8. We are going to start by just adding a single task for sending a Slack message.
    Slack-message

  9. Now we need to create the gateway to loop back to the AI Task Agent to evaluate if the goal has been accomplished. Add a mutually exclusive gateway after the “Create Prompt” task with an exit route from the ad-hoc sub-process to the gateway.
    Loop-gateway

Implement your initial process

We will now move into setting up the details for each construct to implement the model, so switch to the Implement tag in your Web Modeler.

Configure remaining tasks

The next thing you want to do in implementation mode is to use the correct task types for the constructs that are currently using a blank task type.

AI Agent connector

First we will update the AI Task Agent to use the proper connector.

  1. Confirm that you are using the proper cluster version. You can do this on the lower right-hand side of Web Modeler and be sure to select a cluster that is at least 8.8 alpha4 or higher.
    Zeebe-88-cluster

  2. Now select the AI Task Agent and choose to change the element to “Agentic AI Connector” as shown below.
    Agentic-ai-connector-camunda


    This will change the icon on your task agent to look like the one below.
    Agentic-ai-connector-camunda-2

Slack connector

  1. Select the “Send a Slack Message” task inside the ad-hoc sub-process and change the element to the Slack Outbound Connector.
    Slack-connector

Create the starting form

Let’s start by creating a form to kick off the process.

Note: If you do not want to create the form from scratch, simply download the forms from the GitHub repository provided. To build your own, follow these instructions.

The initial form is required to ask the user:

  • Which individuals at Hawk Emporium should receive the message
  • What the message will say
  • Who is sending the message

The completed form should look something like this.

Form

To enter the Form Builder, select the start event, click the chain link icon and select + Create new form.

Start by creating a Text View for the title and enter the text “# What do you want to Say?” in the Text field on the component properties.

You will need the following fields on this form:

FieldTypeDescriptionReq?Key
To whom does this concern?TextYperson
What do you want to say?TextYmessage
Who are you?TextYsender

Once you have completed your form, click Go to Diagram -> to return to your model.

Create the prompt

Now we want to generate the prompt that will be used in our script task to tell the AI Task Agent what needs to be done.

  1. Select the “Create Prompt” script task and update the properties starting with the “Implementation” type which will be set to “FEEL expression.”

    This action will open two additional required variables: Result variable and FEEL expression.
  2. For the “Result” variable, you will create the variable for the prompt, so enter prompt here.
  3. For the FEEL expression, you will want to create your prompt.
    "I have a message from " + sender + " they would like to convey the following message: " + message + " It is intended for " + person

    Feel-prompt-message

Configure the AI Task Agent

Now we need to configure the brains of our operation, the AI Task Agent. This task takes care of accepting the prompt and sending the request to the LLM to determine next steps. In this section, we will configure this agent with specific variables and values based on our model and using some default values where appropriate.

  1. First, we need to pick the “Model Provider” that we will use for our exercise, so we are selecting “AWS Bedrock.”
    Agentic-ai-connector-properties-camunda


    Additional fields specific to this model will open in the properties panel for input.
  2. The next field is the ”Region” for AWS. In this case, a secret was created for the region (AWS_REGION) which will be used in this field.
    Agentic-ai-connector-properties-camunda-2

    Remember the secrets will be referenced in your model using {{secrets.yourSecretHere}} where yourSecretHere represents the name of your connector secret.

    Note: See the Connector and secrets section in this blog for more information on what is required, the importance of protecting these keys, and how to create the secrets.
  3. Now we want to update the authorization credentials with our AWS Access Key and our AWS Secret key from our connector secrets.
    Agentic-ai-connector-properties-camunda-3

  4. The next part is to set the Agent Context in the “Memory” section of your task. This variable is very important as you can see by the text underneath the variable box.

    The agent context variable contains all relevant data for the agent to support the feedback loop between user requests, tool calls and LLM responses. Make sure this variable points to the context variable which is returned from the agent response.

    In this case, we will be creating a variable called  agent and in that variable there is another variable called context, so for this field, we will use the variable agent.context. This variable will play an important part in this process.

    Agentic-ai-connector-properties-camunda-4

    We will leave the maximum messages at 20 which is a solid limit.
  5. Now we will update the system prompt. For this, we have provided a detailed system prompt for you to use for this exercise. You are welcome to create your own. It will be entered in the “System Prompt” section for the “System Prompt” variable.

    Hint: If you are creating your own prompt, try taking advantage of tools like ChatGPT or other AI tools to help you build a strong prompt. For more on prompt engineering, you can also check out this blog series.

    Agentic-ai-connector-properties-camunda-system-prompt

    If you want to copy and paste in the prompt, you can use the code below:
You are **TaskAgent**, a helpful, generic chat agent that can handle a wide variety of customer requests using your own domain knowledge **and** any tools explicitly provided to you at runtime.

────────────────────────────────
# 0. CONTEXT — WHO IS “USER”?
────────────────────────────────
• **Every incoming user message is from the customer.**  
• Treat “user” and “customer” as the same person throughout the conversation.  
• Internal staff or experts communicate only through the expert-communication tool(s).

────────────────────────────────
# 1. MANDATORY TOOL-DRIVEN WORKFLOW
────────────────────────────────
For **every** customer request, follow this exact sequence:

1. **Inspect** the full list of available tools.  
2. **Evaluate** each tool’s relevance.  
3. **Invoke at least one relevant tool** *before* replying to the customer.  
   • Call the same tool multiple times with different inputs if useful.  
   • If no domain-specific tool fits, you **must**  
     a. call a generic search / knowledge-retrieval tool **or**  
     b. escalate via the expert-communication tool (e.g. `ask_expert`, `escalate_expert`).  
   • Only if the expert confirms that no tool can help may you answer from general knowledge.  
   • Any decision to skip a potentially helpful tool must be justified inside `<reflection>`.  
4. **Communication mandate**:  
   • To gather more information from the **customer**, call the *customer-communication tool* (e.g. `ask_customer`, `send_customer_msg`).  
   • To seek guidance from an **expert**, call the *expert-communication tool*.  
5. **Never** invent or call tools that are not in the supplied list.  
6. After exhausting every relevant tool—and expert escalation if required—if you still cannot help, reply exactly with  
   `ERROR: <brief explanation>`.

────────────────────────────────
# 2. DATA PRIVACY & LOOKUPS
────────────────────────────────
When real-person data or contact details are involved, do **not** fabricate information.  
Use the appropriate lookup tools; if data cannot be retrieved, reply with the standard error message above.

────────────────────────────────
# 3. CHAIN-OF-THOUGHT FORMAT  (MANDATORY BEFORE EVERY TOOL CALL)
────────────────────────────────
Wrap minimal, inspectable reasoning in *exactly* this XML template:

<thinking>
  <context>…briefly state the customer’s need and current state…</context>
  <reflection>…list candidate tools, justify which you will call next and why…</reflection>
</thinking>

Reveal **no** additional private reasoning outside these tags.

────────────────────────────────
# 4. SATISFACTION CONFIRMATION, FINAL EMAIL & TASK RESOLUTION
────────────────────────────────
A. When you believe the request is fulfilled, end your reply with a confirmation question such as  
   “Does this fully resolve your issue?”  
B. If the customer answers positively (e.g. “yes”, “that’s perfect”, “thanks”):  
   1. **Immediately call** the designated email-delivery tool (e.g. `send_email`, `send_customer_msg`) with an appropriate subject and body that contains the final solution.  
   2. After that tool call, your *next* chat message must contain **only** this word:  
      RESOLVED  
C. If the customer’s very next message already expresses satisfaction without the confirmation question, do step B immediately.  
D. Never append anything after “RESOLVED”.  
E. If no email-delivery tool exists, escalate to the expert-communication tool; if the expert confirms none exists, reply with an error as described in §1-6.
  1. Remember that in the Create Prompt task, we stored the prompt in a variable called prompt. We will use this variable in the “User Prompt” section for the “User Prompt.”
    Image54

  2. The key to this step are the tools at the disposal of the AI Task Agent, so we need to link the agent to the ad-hoc sub-process. We do this by mapping the ID of the sub-process to the proper tools field in the AI Task Agent.
    1. Start by selecting your ad-hoc sub-process and giving it a name and an ID. In the example, we will use “Hawk Tools” for the name and hawkTools for the “ID.”
      Link-agent-to-ad-hoc-sub-process-camunda-1

    2. Go back to the AI Task Agent and update the “Ad-hoc subprocess ID” to hawkTools for the ID of the sub-process.
      Link-agent-to-ad-hoc-sub-process-camunda-2

    3. Now we need a variable to store the results from calling the toolbox to place in the “Tool Call Results” variable field. We will use toolCallResults.
      Link-agent-to-ad-hoc-sub-process-camunda-3

    4. There are several other parameters of importance. We will use the defaults for several of these variables. We will leave the “Maximum model calls” in the “Limits” section set at “10” which will limit the number of times the model is called to 10 times. This is important for cost control.
      Link-agent-to-ad-hoc-sub-process-camunda-4

    5. There are additional parameters to help provide constraints around the results. Update these as shown below.
      Link-agent-to-ad-hoc-sub-process-camunda-5

    6. Now we need to update the “Output Mapping” section, first the “Result variable” which is where we are going to use our agent variable that will contain all the components of the result including the train of thought taken by the AI Task Agent.
      Link-agent-to-ad-hoc-sub-process-camunda-6

Congratulations, you have completed the configuration of your AI Task Agent. Now we just need to make some final connections and updates before we can see this running in action.

Gateway updates

We are going to use the variable values from the AI Task Agent to determine if we need to run more tools.

  1. Select the “Yes” path and add the following:
    not(agent.toolCalls = null) and count(agent.toolCalls) > 0
    Flow-condition

  2. For the “No” path, we will make this our default flow.
    Default-flow

Ad-hoc sub-process final details

We first need to provide the input collection of tools for the sub-process to use, and we do that by updating the “Input collection” in the “Multi-instance” variable.

  1. We will then provide each individual “Input element” with the single toolCall.
    Toolcall-toolcallresults
  2. We will then update the “Output Collection” to our result variable, toolCallResults.
    Toolcall-toolcallresults

  3. Finally, we want to create a FEEL expression for our “Output element” as shown below.
    {<br>  id: toolCall._meta.id,<br>  name: toolCall._meta.name,<br>  content: toolCallResult<br>}
     
    Output-element


    This expression provides the id, name and content for each tool.
  4. Finally, we need to provide the variable for the “Active elements” for the “Active elements collection” showing which element is active in the sub-process.
    [toolCall._meta.name]
    Active-element

    To better explain this, the AI Task Agent determines a list of elements (tools) to run and this variable represents which element gets activated in this instance.

Connect sub-process elements and the AI Task Agent

Now, how do we tell the agent that it has access to the tools in the ad-hoc subprocess?

  1. First of all, we are going to use the” Element Documentation” field to help us connect these together. We will add some descriptive text about the element’s job. In this case, we will be using:
    This can send a slack message to everyone at Niall's Hawk Emporium
    Element-documentation

Now we need to provide the Slack connector with the message to send and what channel to send that message on.

  1. We need to use a FEEL expression for our message and take advantage of the keyword fromAi and we will enter some additional information in the expression. Something like this:
    fromAi(toolCall.slackMessage, "This is the message to be sent to slack, always good to include emojis")
    Message


    Notice that we have used our variable toolCall again and told AI that you need to provide us with a variable called slackMessage.
  2. We also need to explain to AI which channel is appropriate for the type of message being sent. Remember that we provided three (3) different channels in our Slack organization. We will use another FEEL expression to provide guidance on the channel that should be used.
    fromAi(toolCall.slackChannel, "There are 3 channels to use they are called, '#good-news', '#bad-news' and '#other-news'. Their names are self explanatory and depending on the type of message you want to send, you should use one of the 3 options. Make sure you  use the exact name of the channel only.")
    Channels

  3. Finally, be sure to add your secret for “Authentication” for Slack in the “OAuth token” field. In our case this is:
    {{secrets.Slack}}
    Secrets

Well, you did it! You now should have a working process model that accesses an AI Task Agent to determine which elements in its toolbox can help it achieve its goal. Now you just need to deploy it and see it in action.

Deploy and run your model

Now we need to see if our model will deploy. If you haven’t already, you might want to give your process a better name and ID something like what is shown below.

Name-process
  1. Click Deploy and your process should deploy to the selected cluster.
    Deploy-agentic-ai-process-camunda

  2. Go to Tasklist and Processes and find your process called “Message System” and start the process clicking the blue button Start Process ->.
    Start-process
  3. You will be presented with the form you created so that you can enter who you are, the message content and who should receive the message. Enter the following for the fields:
    • To whom does this concern?
      Everyone at the Hawk Emporium
    • What do you want to say?
      We have a serious problem. Hawks are escaping. Please be sure to lock cages. Can you make sure this issue is taken more seriously?
    • Who are you?
      Joyce, assistant to Niall - Owner, Hawk Emporium
      Or enter anything you want for this.

Your completed form should look something like the one shown below.

Form

The process is now running and should post a Slack message to the appropriate channel, so open your Slack application.

  1. We can assume that this would likely be a “bad news” message, so let’s review our Slack channels and see if something comes to the #bad-news channel. You should see a message that might appear like this one.
    Ai-results-slack

  2. Open Camunda Operate and locate your process instance. It should look something like that seen below.
    Camunda-operate-check

  3. You can review the execution and see what took place and the variable values.
    Camunda-operate-check-details

You have successfully executed your first AI Task Agent and associated possible tasks or elements associated with that agent, but let’s take this a step further and add a few additional options for our AI Task Agent to use when trying to achieve its “send message” goal.

Add tasks to the toolbox

Why don’t we give our AI Task Agent a few more options to help it accomplish its goal to send the proper message. To do that, we are going to add a couple other options for our AI Task Agent within our ad-hoc sub-process now.

Add a human task

The first thing we want to do is add a human task as an option.

  1. Drag another task into your ad-hoc sub-process and call it “Ask an Expert”.
  2. Change the element type to a “User Task.” The result should look something like this.
    Add-tasks


    Now we need to connect this to our sub-process and provide it as an option to the AI Task Agent.
  3. Update the “Element Documentation” field with the information about this particular element. Something like:
    If you need some additional information that would help you with your request, you can ask this expert.
    Element-documentation-user-task

  4. We will need to provide the expert with some inputs, so hover over the + and click Create+ to create a new input variable.
  5. For the “Local variable name” use  aiquestion and then we will use a FEEL expression for the “Variable assigned value” following the same pattern we used before with the fromAi tool.
    fromAi(toolCall.aiquestion, "Add here the question you want to ask our expert. Keep it short and be friendly", "string")
    User-task-inputs

  6. In this case, we need to see the response from the expert so that the AI Task Agent can use this information to determine how to achieve our goal. So add an “Output Variable” and call it toolCallResult and we will be providing the answer using the following JSON in the Variable assignment value.
    {<br>  “Personal_info_response”: humanAnswer<br>}

    Your output variable section should now look like that shown below.
    User-task-output

  7. Now we need to create a form for this user task to display the question and give the user a place to enter their response to the question. Select the “Ask an Expert” task and choose the link icon and then click on the + Create new form from the dialog.
    Add-form
         
    New-form

  8. The form we need to build will look something like this:
    Question-from-ai


    Start by creating a Text View for the title and enter the text “# Question from AI” in the Text field on the component properties.

    You will need the following fields on this form:
FieldTypeDescriptionReq?Key
{{aiquestion}}Text viewN
AnswerText areaYhumanAnswer

The Text view field for the question will display the value of the aiquestion variable that will be passed to this task. We also provided a place to add a document that will be of some assistance.

Once you have completed your form, click Go to Diagram -> to return to your model.

Because we have already connected the AI Task Agent to the ad-hoc sub-process and the tools it can use, we do not have to provide more at this step.

Optional: Send an email

If you have a SendGrid account and key, you can complete the steps below, but if you do not, you can just keep two elements in your ad-hoc sub-process for this exercise.

  1. Create one more task in your ad-hoc sub-process and call it “Send an Email.”
  2. Change the task type to use the SendGrid Outbound Connector.
  3. Enter your secret for the SendGrid API key using the format previously discussed.

    Remember the secrets will be referenced in your model using {{secrets.yourSecretHere}} where yourSecretHere represents the name of your connector secret. In this case, we have used:
    {{secrets.SendGrid}}
  4. You will need to provide the reason the AI Task Agent might want to use this element in the Element documentation. The text below can be used.
    This is a service that lets you send an email to someone.
    Email

  5. For the Sender “Name” you want to use the information provided to the AI Task Agent about the person that is requesting the message be sent. We do this using the following information.
    fromAi(toolCall.emailSenderName, "This is the name of the person sending the email")

    In our case, the outgoing “Email address” is “community@camunda.com” which we also need to add to the “Sender” section of the connector properties. You will want to use the email address for your own SendGrid configuration.
    Sender-name-fromai


    Note: Don’t forget to click the fx icon before entering your expressions.
  6. For the “Receiver,” we also will use information provided to the AI Task Agent about who should receive the message. For the “Name”, we can use this expression:
    fromAi(toolCall.emailReceiveName, "This is the name of the person getting the email")

    For the Email address, we will need to make sure that the AI Task Agent knows the email address for the intended individual(s) for the message.
    fromAi(toolCall.emailReceiveAddress, "This is the email address of the person you want to send an email to, make very sure that if you use this that the email address is correctly formatted you also should be completely sure that the email is correct. Don't send an email unless you're sure it's going to the right person")

    Your properties should now look something like this.
    Receiver-name-fromai

  7. Select “Simple (no dynamic template)” for the “Mail contents” property in the “Compose email” section.
  8. In the “Compose email” section for the subject, we will let the AI Task Agent determine the best subject for the email, so this text will provide that to the process.
    fromAi(toolCall.emailSubject, "Subject of the email to be sent")
  9. The AI Task Agent will determine the email message body as well with the following:
    fromAi(toolCall.emailBody, "Body of the email to be sent")

    Your properties should look something like this.
    Properties-fromai

That should do it. You now have three (3) elements or tools for your AI Task Agent to use in order to achieve the goal of sending a message for you.

Deploy and run again

Now that you have more options for the AI Task Agent, let’s try running this again. However, we are going to make an attempt to have the AI Task Agent use the human task to show how this might work.

  1. Deploy your newly updated process as you did before.
  2. Go to Tasklist and Processes and find your process called “Message System” and start the process clicking the blue button.
    Start-process
  3. You will be presented with the form you created so that you can enter who you are, the message content and who should receive the message. Enter the following for the fields
    • To whom does this concern?
      I want to send this to Reb Brown. But only if he is working today. So, find that out.
    • What do you want to say?
      Can you please stop feeding the hawks chocolate? It is not healthy.
    • Who are you?
      Joyce, assistant to Niall - Owner, Hawk Emporium
      Or enter anything you want for this.

Your completed form should look something like the one shown below.

New-form-to-user-task-from-ai

The process is now running.

  1. Open Camunda Operate and locate your process instance. It should look something like that seen below.
    Camunda-operate-check-again

  2. You can review the execution and see what took place and the variable values.
  3. If you then access Tasklist and select the Tasks tab, you should have an “Ask an Expert” task asking you if Reb Brown is working today. Respond as follows:
    He is working today, but it’s also his birthday, so it would be nice to let him know the important message with a happy birthday as well.

    What-ai-asked-and-user-answer

  4. In Operate, you will see that the process instance has looped around with this additional information.
    Camunda-operate-check-details-again


    You can also toggle the “Show Execution Count” to see how many times each element in the process was executed.
    Camunda-operate-execution-count

  5. Now open your Slack application and you should have a message now that the AI Task Agent knows that not only is Reb Brown working, but it is his birthday.
    Ai-message

Congratulations! You have successfully executed your first AI Task Agent and associated possible tasks or elements associated with that agent.

We encourage you to add more tools to the ad-hoc sub-process to continue to enhance your AI Task Agent process. Have fun!

Congratulations!

You did it! You completed building an AI Agent in Camunda from start to finish including running through the process to see the results. You can try different data in the initial form and see what happens with new variables. Don’t forget to watch the accompanying step-by-step video tutorial if you haven’t already done so.

The post Intelligent by Design: A Step-by-Step Guide to AI Task Agents in Camunda appeared first on Camunda.

]]>
Building Trustworthy AI Agents: How Camunda Aligns with Industry Best Practices https://camunda.com/blog/2025/05/ai-agent-design-patterns-in-camunda/ Fri, 09 May 2025 00:08:15 +0000 https://camunda.com/?p=137829 Build, deploy, and scale AI agents with an enterprise-ready framework that balances automation, control, speed, safety, complexity, and clarity.

The post Building Trustworthy AI Agents: How Camunda Aligns with Industry Best Practices appeared first on Camunda.

]]>
The rapid evolution of AI agents has triggered an industry-wide focus on design patterns that ensure reliability, safety, and scalability. Two major players—OpenAI and Anthropic—have each published detailed guidance on building effective AI agents. Camunda’s own approach to agentic orchestration shows how an enterprise-ready solution can embody these best practices.

Let’s take a look at how Camunda’s AI agent implementation aligns with the recommendations from OpenAI and Anthropic, and why this matters for enterprise success.

Clear task boundaries and explicit handoffs

Both Anthropic and OpenAI stress the importance of defining clear task boundaries for agents. According to Anthropic’s recommendations, ambiguity in agent responsibilities often leads to unpredictable behavior and systemic errors. OpenAI similarly highlights that agents should have narrowly scoped responsibilities to ensure predictability and reliability.

At Camunda, we address this by orchestrating agents through BPMN workflows. Each agent’s task is represented as a discrete service task with well-defined inputs and expected outputs. For example, in our example agent implementation, an email is sent only after a Generate Email Inquiry task completes its work and delivers validated output. This sequencing ensures that each agent knows precisely when to act, what data it receives, and what deliverables it is accountable for, thereby minimizing risks of cascading failures.

By visualizing these handoffs in BPMN diagrams, stakeholders across technical and nontechnical domains can easily understand the agent responsibilities, audit workflows, and troubleshoot when necessary.

AI agent inserted into BPMN diagram for process visibility

Narrow scope with composable capabilities

OpenAI’s guide highlights the benefits of agents that are designed with specialized, narrow scopes, which can then be composed into larger systems for more complex tasks. Anthropic echoes this, suggesting that mega-agents often become unwieldy and hard to trust.

Camunda’s architecture embraces this philosophy through microservices-style orchestration. Each AI agent within Camunda focuses on mastering a single task—for instance, information retrieval, natural language generation, decision support, or classification. These specialized agents can then be strung together through BPMN models to create sophisticated end-to-end business processes.

Let’s look at a practical example.

In an insurance claims process, Camunda orchestrates a Document Extraction agent to pull key fields, a Fraud Detection agent to assess risk, and a Claims Decision agent to recommend next steps. Each agent operates independently yet collaboratively, enhancing system resilience and allowing incremental upgrades without overhauling the entire workflow.

AI agents working together with their separate tasks
Each agent has its own limited set of specialized tasks, with the ability to compose tasks together within agents.

Monitoring, error handling, and human-in-the-loop

Both OpenAI and Anthropic emphasize that no agent should operate without proper supervision mechanisms. Agents must report their states, signal when they encounter issues, and escalate gracefully to human overseers.

Camunda is particularly strong in this area thanks to our suite of tools like Operate, Optimize, and Tasklist. Here’s how we achieve enterprise-grade monitoring and human-in-the-loop design:

  • Full observability: Camunda Operate provides real-time visibility into every process instance, showing exactly which agent did what, when, and with what outcome.
  • Error boundaries and fallbacks: BPMN error events and boundary timers allow processes to anticipate common failures (like timeouts or bad data) and take corrective actions, such as retrying, skipping, or escalating to a human operator.
  • Seamless human escalation: When agents cannot confidently complete a task—for example, due to ambiguity or ethical concerns—Camunda can dynamically activate a human task, prompting a person to step in, review, and make decisions.

In a future release—the 8.8 release scheduled for October—Camunda is taking this one step further by connecting these features directly to the agent. Failed tasks will automatically trigger the agent to reevaluate the prompt, allowing the agent to respond dynamically as the environment changes. Operate will provide real-time visibility into the agent, allowing seamless human escalation and recovery.

These capabilities ensure that agents augment rather than replace human judgment, a key principle recommended by both OpenAI and Anthropic.

Composability and reusability

Anthropic strongly recommends composable agent architectures to allow rapid iteration and minimize technical debt. Composable systems are more adaptable, easier to troubleshoot, and more cost-effective to maintain.

Camunda’s approach to process design aligns perfectly with this recommendation. Our BPMN models are built around modularity, enabling teams to:

  • Swap out individual agents without rewriting the entire workflow
  • Reuse standard subprocesses across different projects
  • Version-control agent behaviors separately, making it easy to A/B test and roll back changes

Drawing from IBM’s insights on agent design, Camunda’s platform allows enterprises to build libraries of reusable agent modules. These can be assembled like building blocks to rapidly create new processes or modify existing ones, significantly accelerating innovation cycles.

Transparent orchestration and explainability

OpenAI’s guide makes it clear: trustworthy AI systems must provide explainable decision pathways. Stakeholders need to understand why an agent acted a certain way, especially when decisions have legal, ethical, or financial consequences.

Camunda’s BPMN-driven orchestration inherently provides this transparency. Every agent interaction, every decision point, and every data handoff is visually modeled and logged. Teams can:

  • Trace the complete lineage of a decision from input to output
  • Generate audit trails automatically for compliance needs
  • Explain system behavior to both technical audiences and nontechnical stakeholders

In highly regulated industries like banking, healthcare, or insurance, this kind of transparency isn’t just a nice-to-have—it’s a nonnegotiable requirement. With Camunda, organizations can meet these standards confidently.

Centralized orchestration provides guardrails

Today, AI agents do not yet exhibit the level of trustworthiness, transparency, or security required to make a fully autonomous swarm of agents safe for enterprise contexts. In decentralized models, agents independently delegate tasks to one another, which can lead to a lack of oversight, unpredictable behavior, and challenges in ensuring compliance.

At Camunda, we believe that the decentralized agent pattern represents an exciting vision for the future. However, we see it as a pattern that is still years away from being viable for enterprise-grade AI systems.

For now, Camunda strongly supports centralized or manager patterns. With this approach, a single orchestrator (in Camunda’s case, the BPMN engine) manages when, why, and how agents act. This centralized orchestration ensures:

  • Full visibility into agent activities
  • Clear accountability for decision points
  • Easier implementation of security, compliance, and auditing mechanisms

Our philosophy is simple: while the future may hold promise for decentralized agent ecosystems, today’s enterprises need reliability, explainability, and control. Centralized orchestration, powered by Camunda, offers the safest and most effective path forward that you can utilize immediately, without sacrificing your flexibility for improvements and innovations in AI that may come in the future.

Enterprise-grade agentic orchestration is here!

By closely adhering to the industry best practices, Camunda delivers an enterprise-ready framework for building, deploying, and scaling AI agents. Our approach balances automation with control, speed with safety, and complexity with clarity.

We believe that AI agents should operate transparently, predictably, and with human-centric governance. With Camunda, enterprises gain not just a platform but a reliable foundation to scale AI responsibly and sustainably.

Want to learn more? Dive into our latest release announcement or check out our guide on building your first AI agent.

Stay tuned—the future of responsible, scalable AI is being built right now, and Camunda is at the forefront.

The post Building Trustworthy AI Agents: How Camunda Aligns with Industry Best Practices appeared first on Camunda.

]]>