Getting Started Archives | Camunda
https://camunda.com/blog/category/getting-started/

The Benefits of BPMN AI Agents
https://camunda.com/blog/2025/05/benefits-bpmn-ai-agents/
Thu, 22 May 2025 21:14:35 +0000
Why are BPMN AI agents better? Read on to learn about the many advantages of using BPMN with your AI agents, and how complete visibility and composability help you overcome key obstacles to operationalizing AI.

The post The Benefits of BPMN AI Agents appeared first on Camunda.

There are lots of tools for building AI agents, and at their core they all need three things. First, they need to understand their overall purpose and the rules within which they should operate. So you might create an agent and tell it, “You’re here to help customers with generic requests about the existing services of the bank.” Second, they need a prompt: a request that the agent can try to fulfill. Finally, they need a set of tools. These are the actions and systems that an agent has access to in order to fulfill the request.
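As a rough sketch, those three requirements can be pictured as a single configuration object. The field names below are purely illustrative and are not Camunda’s API:

```python
# Illustrative sketch only -- none of these field names are Camunda's API.
agent_definition = {
    # 1. Overall purpose and the rules of operation
    "system_prompt": "You're here to help customers with generic requests "
                     "about the existing services of the bank.",
    # 2. The request the agent should try to fulfill
    "user_prompt": "How do I open a savings account?",
    # 3. The actions and systems the agent may call on
    "tools": ["lookup_account", "search_knowledge_base", "ask_an_expert"],
}

print(sorted(agent_definition))  # ['system_prompt', 'tools', 'user_prompt']
```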

Most agent builders wrap those three requirements into a single, static, synchronous system, but at Camunda we decided not to do this. We found that it creates too many use case limitations, it’s not scalable, and it’s hard to maintain. To overcome these limitations, we came up with a concept that decouples these requirements and completely visualizes an agent, in a way that opens it up to far more use cases, not only on a technical level but also in a way that alleviates many of the fears people have about adding AI agents to their core processes.

The value of a complete visualization

Getting insight into how an AI agent has performed a given task often requires someone to read through its chain of thought (this is like the AI’s private journal, where it details how it’s thinking about the problem). This will usually tell you which tools it decided to use and why. So in theory, if you wanted to check on how your AI agent was performing, you could read through it. In practice, this is impractical for two reasons:
1. It limits the visibility of what happened to a text file that needs to be interpreted.
2. AI agents can sometimes lie in their chain of thought—so it might not even be accurate.

Our solution to this is to completely visualize the agent, its tools and its execution all in one place.

Gain full visibility into AI agent performance with BPMN

Ai-agent-visibility-bpmn-camunda

The diagram above shows a BPMN process that implements an AI agent. It has two distinct parts. The agent logic is contained within the AI Task Agent activity, and the tools it has access to are displayed within an ad-hoc sub-process. This is a BPMN construct that allows for completely dynamic execution of the tasks within it.

With this approach, the actions of an agent are completely visible to the user at design time and during execution, and can even be used to evaluate how well the process performs with the addition of an agent.

Ai-agent-performance-camunda

The diagram above shows a heatmap of which tools take the longest to run. This is something that is impossible to measure accurately with a more traditional AI agent building approach.

Decoupling tools from agent logic

This design completely decouples the agent logic from the available tool set, meaning the agent finds out only at runtime which tools are at its disposal. The ramifications of this are quite profound. It means you can run multiple versions of the same process with the same agent but a completely different tool set. This makes revising the agent’s context far easier and also lets us qualitatively evaluate the impact of adding or removing certain tools through A/B testing.
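A minimal sketch of that decoupling, with the LLM’s tool selection stubbed out as simple string matching (the function and tool names here are invented for illustration):

```python
# Sketch of the decoupling idea: the agent logic never names its tools;
# it only discovers, at runtime, whatever the process version provides.
def run_agent(prompt, available_tools):
    # A real agent would let an LLM pick from available_tools; here we
    # just pick any tool whose name appears in the prompt (illustrative).
    return [t for t in available_tools if t in prompt]

# Two versions of the same process, same agent, different tool sets.
tools_v1 = ["send_email", "ask_expert"]
tools_v2 = ["send_email", "ask_expert", "send_slack_message"]

prompt = "Please send_slack_message to the team"
print(run_agent(prompt, tools_v1))  # []
print(run_agent(prompt, tools_v2))  # ['send_slack_message']
```

Comparing how the two versions behave on the same prompts is exactly the kind of A/B evaluation the decoupling enables.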

Improving maintainability for your AI agents

The biggest impact of this decoupling, in my opinion, is how it improves maintainability. Process designers can add or remove tools without ever needing to change or update the AI agent itself. This is a fantastic way of separating responsibilities when a new process is being built: while AI experts focus on ensuring the AI Task Agent is properly configured, developers can build the tooling independently. And of course, you can also just add pre-built tools for the agent to use.

Ai-agent-maintanability-camunda

Composable design

Choosing, as we did, to marry AI agent design with BPMN design means we’ve unlocked access for AI agent designers to all the BPMN patterns, best practices and functionality that Camunda has been building over the last 10 years or so. While there’s a lot you gain because of that, I want to focus on just one here: Composable architecture.

Composable orchestration is the key to operationalizing AI

Camunda is designed to be an end-to-end orchestrator of a diverse set of tools, rules, services and people. This means we have designed our engine and the tools around it so that there is no limitation on what can be integrated. It also means we want users to be able to switch out services and systems over time, as they become legacy or as better alternatives are found.

This should be of particular interest to a developer of AI agents because it lets you not only switch out the tools the AI agent has access to, but more importantly, switch out the agent’s own LLM for the latest and greatest. Adding, or even just testing out the behavior of, a new LLM no longer means building a new agent from scratch: just swap out the brain and keep the rest. This alone will lead to incredibly fast improvements and deployments for your agents, and help you make sure that a change is meaningful and measurable.

Ai-agent-maintanability-camunda-2

Conclusion

Building AI agents the default way other tools offer right now means adding a new black box to your system: one that is less maintainable and far more opaque in execution than anything else you’ve ever integrated. This makes it hard to properly maintain and evaluate.

At Camunda we have managed to open up that black box in a way that integrates it directly into your processes as a first-class citizen. Your agent will immediately benefit from everything that BPMN does and become something that can grow with your process.

It’s important to understand that you’re still adding a completely dynamic aspect to your process, but this way you mitigate most concerns early on. For all these reasons, of the many, many AI agents that will be built this year, I’m confident that the ones still in use by the end of next year will be the ones built in Camunda with BPMN.

Try it out

All of this is available for you to try out in Camunda today. Learn more about how Camunda approaches agentic orchestration and get started now with a free trial here.

Guide to Adding a Tool for an AI Agent
https://camunda.com/blog/2025/05/guide-to-adding-tool-ai-agent/
Wed, 21 May 2025 19:31:39 +0000
In this quick guide, learn how you can add exactly the tools you want to your AI Agent’s toolbox so it can get the job done.

AI Agents and BPMN open up an exciting world of agentic orchestration, empowering AI to act with greater autonomy while also preserving auditability and control. With Camunda, a key way that works is by using an ad-hoc sub-process to clearly tell the AI agent which tools it has access to while it attempts to solve a problem. This guide will help you understand exactly how to equip your AI agents with a new tool.

How to build an AI Agent in BPMN with Camunda

There are two aspects to building an AI Agent in BPMN with Camunda.

  1. Defining the AI Task Agent
  2. Defining the available tools for the agent.

The AI Task Agent is the brain, able to understand the context and the goal and then to use the tools at its disposal to complete the goal. But where are these tools?

Adding new tools to your AI agent

The tools for your AI agent are defined inside an ad-hoc sub-process that the agent is told about. This guide assumes you’ve already set up your Task Agent; if you haven’t, you just need the process model from this GitHub repo. The BPMN model without any tools should look like this:

Ad-hoc-sub-process

Basically I’ve removed all the elements from within the ad-hoc sub-process. The agent still has a goal—but now has no way of accomplishing that goal.

In this guide we’re going to add a task to the empty sub-process. By doing this, we’ll give the AI Task Agent access to it as a tool it can use if it needs to.

The sub-process has a multi-instance marker, so for each tool to be used there’s a local variable called toolCall that we can use to get and set variables.
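As a hypothetical sketch of what each multi-instance iteration sees: only the variable name toolCall comes from the guide, and the inner fields below are invented for illustration.

```python
# Hypothetical shape of the per-iteration data; only the name "toolCall"
# comes from the guide -- the inner fields are illustrative.
tool_calls = [
    {"tool": "Ask an Expert", "techQuestion": "Which SSL cert do we use?"},
    {"tool": "Send an Email", "recipient": "expert@example.com"},
]

# The ad-hoc sub-process spawns one instance per entry; inside each
# instance, exactly one of these dictionaries is visible as toolCall.
for toolCall in tool_calls:
    print(toolCall["tool"])
```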

I want to let the AI agent ask a human a technical question, so first I’m going to add a User Task to the sub-process.

Ai-agent-tool

Defining the tool for the agent

The next thing we need to do is somehow tell the agent what this tool is for. This is done by entering a natural language description of the tool in the Element Documentation field of the task.

Element-documentation-ai-agent-tool

Defining variables

Most tools are going to request specific variables in order to operate. Input variables are defined so that the agent is aware of what’s required to run the tool in question. It also helps pass the given context of the current process to the tool. Output variables define how we map the response from the tool back into the process instance, which means that the Task Agent will be aware of the result of the tool’s execution.

In this case, to properly use this tool, the agent will need to come up with a question.

For a User Task like this we will need to create an input variable like the one you see below.

Local-variable-ai-agent-tool

In this case we created a local variable, techQuestion, directly in the task. To both assign this variable and describe it for the Task Agent, we need to call the fromAi function. To do that we must provide:

  1. The location of the variable in question.
    • In this case that would be within the toolCall variable.
  2. A natural language description of what the variable is used for.
    • Here we describe it as the question that needs to be asked.
  3. The variable type.
    • This is a string, but it could be any other primitive variable type.

When all put together, it looks like this:

fromAi(toolCall.techQuestion, "This is a specific question that you'd like to ask", "string")

Next we need an output variable so that the AI agent can be given the context it needs to understand if running this tool produced the output it expected. In this case, we want it to read the answer from the human expert it’s going to consult.

Process-variable-ai-agent-tool

This time, create an output variable. You’ll have two fields to fill in.

  1. Process variable name
    • It’s important that this variable name matches the output expected by the sub-process. The expected name can be found in the output element of the sub-process, and as you can see above, we’ve named our output variable toolCallResult accordingly.
      Output-ai-agent-tool
  2. Variable assignment value
    • This simply takes the expected variable from the tool task and adds it to a new variable that can be put into the toolCallResult object.

So in the end the output variable assignment value should be something like this:

{ "humanAnswer" : humanAnswer }

And that’s it! Now the AI Task Agent knows about this tool, knows what it does and knows what variables are needed in order to get it running. You can repeat this process to give your AI agents access to exactly as many or as few tools as they need to get a job done. The agents will then have the context and access required to autonomously select from the tools you have provided, and you’ll be able to see exactly what choices the agent made in Operate when the task is complete.

All of this is available for you to try out in Camunda today. Learn more about how Camunda approaches agentic orchestration and get started now with a free trial here. For more on getting started with agentic AI, feel free to dig deeper into our approach to AI task agents.

Intelligent by Design: A Step-by-Step Guide to AI Task Agents in Camunda
https://camunda.com/blog/2025/05/step-by-step-guide-ai-task-agents-camunda/
Wed, 14 May 2025 07:00:00 +0000
In this step-by-step guide (with video), you’ll learn about the latest ways to use agentic AI and take advantage of agentic orchestration with Camunda today.

Camunda is pleased to announce new features and functionality related to how we offer agentic AI. In this post, we provide detailed step-by-step instructions for using Camunda’s AI Task Agent to take advantage of agentic orchestration.

Note: Camunda also offers an agentic AI blueprint on our marketplace.

Camunda approach to AI agents

Camunda has taken a systemic, future-ready approach for agentic AI by building on the proven foundation of BPMN. At the core of this approach is our use of the BPMN ad-hoc sub-process construct, which allows for tasks to be executed in any order, skipped, or repeated—all determined dynamically at runtime based on the context of the process instance.

This pattern is instrumental in introducing dynamic (non-deterministic) behavior into otherwise deterministic process models. Within Camunda, the ad-hoc sub-process becomes the agent’s decision workspace—a flexible execution container where large language models (LLMs) can assess available actions and determine the most appropriate next steps in real time.

We’ve extended this capability with the introduction of the AI Agent Outbound connector (example blueprint of usage) and the Embeddings Vector Database connector (example blueprint of usage). Together, they enable full-spectrum agentic orchestration, where workflows seamlessly combine deterministic flow control with dynamic, AI-driven decision-making. This dual capability supports both high-volume straight-through processing (STP) and adaptive case management, empowering agents to plan, reason, and collaborate in complex environments. With Camunda’s approach, the AI agents can add additional context for handling exceptions from STP.

This represents our next phase of AI Agent support and we intend to continue adding richer features and capabilities.

Camunda support for agentic AI

To power next-generation automation, Camunda embraces structured orchestration patterns. Camunda’s approach ensures your AI orchestration remains adaptive, goal-oriented, and seamlessly interoperable across complex, distributed systems.

As part of this evolution, Camunda has integrated Retrieval-Augmented Generation (RAG) into its orchestration fabric. RAG enables agents to retrieve relevant external knowledge—such as historical case data or domain-specific content—and use that context to generate more informed and accurate decisions. This is operationalized through durable, event-driven workflows that coordinate retrieval, reasoning, and human collaboration at scale.

Camunda supports this with our new Embeddings Vector Database Outbound connector—a modular component that integrates RAG with long-term memory systems. This connector supports a variety of vector databases, including both Amazon Managed OpenSearch (used in this exercise) and Elasticsearch.

With this setup, agents can inject knowledge into their decision-making loops by retrieving semantically relevant data at runtime. This same mechanism can also be used to update and evolve the knowledge base, enabling self-learning behaviors through continuous feedback.
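A minimal sketch of the retrieve-then-generate loop behind RAG, with the embedding model and LLM stubbed out (a real setup would use the Embeddings Vector Database connector against OpenSearch or Elasticsearch; the keyword-overlap "retrieval" below is purely illustrative):

```python
# Toy knowledge base standing in for a vector store.
knowledge_base = {
    "refund policy": "Refunds are issued within 14 days.",
    "opening hours": "We are open 9-5 on weekdays.",
}

def retrieve(query):
    # Stand-in for vector similarity search: score by keyword overlap.
    scored = [(len(set(query.lower().split()) & set(k.split())), v)
              for k, v in knowledge_base.items()]
    return max(scored)[1]

def answer(query):
    context = retrieve(query)                       # retrieval step
    return f"Context: {context} Question: {query}"  # augmented prompt

print(answer("What is the refund policy?"))
```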

To complete the agentic stack, Camunda also offers the AI Agent Outbound connector. This connector interfaces with a broad ecosystem of large language models (LLMs) like OpenAI and Anthropic, equipping agents with reasoning capabilities that allow them to autonomously select and execute ad-hoc sub-processes. These agents evaluate the current process context, determine which tasks are most relevant, and act accordingly—all within the governed boundaries of a BPMN-modeled orchestration.

How this applies to our exercise

Before we step through an exercise, let’s review a quick explanation about how these new components and Camunda’s approach will be used in this example and in your agentic AI orchestration.

The first key component is the AI Task Agent. It is the brains behind the operations. You give this agent a goal, instructions, limits and its chain of thought so it can make decisions on how to accomplish the set goal.

The second component is the ad-hoc sub-process. This encompasses the various tools and tasks that can be performed to accomplish the goal.

A prompt is provided to the AI agent, which decides which tools should be run to accomplish the goal. After the tools run, the agent reevaluates the goal against the information returned from the ad-hoc sub-process and determines which of the tools, if any, are needed again; if none are, the process ends.
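That feedback loop can be sketched as follows. The "LLM" here is a stub that asks for an expert answer once and then declares it is done; in Camunda, the gateway checking whether more tools are needed plays the role of the loop condition.

```python
# Sketch of the loop: prompt -> agent picks tools -> run them ->
# agent reevaluates -> repeat until no tools are needed.
def llm_decide(prompt, history):
    if not history:
        return ["Ask an Expert"]   # first pass: needs more information
    return []                      # goal met: no further tools

def run_tool(name):
    return f"result of {name}"

history = []
while True:
    tools = llm_decide("Send Simon's message to #good-news", history)
    if not tools:
        break                      # no more tools: the process ends
    history += [run_tool(t) for t in tools]  # loop back through the toolbox

print(history)  # ['result of Ask an Expert']
```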

Now armed with this information, we can get into our example and what you are going to build today.

Example overview

This BPMN process defines a message delivery service for the Hawk Emporium where AI-powered task agents make real-time decisions to interpret customer requests and select the optimal communication channels for message delivery.

Our example model for this process is the Message Delivery Service as shown below.

Message-delivery-service-agentic-orchestration

The process begins with a user filling out a form that includes the message, the individual(s) to send it to, and the sender. Based on this input, a script task generates a prompt to send to the AI Task Agent. The AI Task Agent processes the generated prompt and determines the appropriate tasks to execute. Based on the agent’s decision, the process either ends or continues to refine the result using various tools until the message is delivered.

The tasks that can be performed are located in the ad-hoc sub-process and are:

  1. Send a Slack message (Send Slack Message) to specific Slack channels,
  2. Send an email message (Send an Email) using SendGrid,
  3. Request additional information (Ask an Expert) with a User Task and corresponding form.

If the AI Task Agent has all the information it needs to generate, send and deliver the message, it will execute the appropriate message via the correct tool for the request. If the AI agent determines it needs additional information, such as a missing email address or the tone of the message, it will send the process instance to a human for that information.

The process completes when no further action is required.

Process breakdown

Let’s take a deeper dive into the components of the BPMN process before jumping in to build and execute it.

AI Task Agent

The AI Task Agent for this exercise uses AWS Bedrock’s Claude 3 Sonnet model for processing requests. The agent makes decisions on which tools to use based on the context. You can alternatively use Anthropic or OpenAI.

SendGrid

For the email message task, you will be sending email as community@camunda.com. Please note that if you use your own SendGrid account, this email source may change to the email address for that particular account.

Slack

For the Slack message task, you will need to create the following channels in your Slack organization:

  • #good-news
  • #bad-news
  • #other-news

Assumptions, prerequisites, and initial configuration

A few assumptions are made for those using this step-by-step guide to implement a first agentic AI process with Camunda’s new agentic AI features. These are outlined in this section.

The proper environment

In order to take advantage of the latest and greatest functionality provided by Camunda, you will need to have a Camunda 8.8-alpha4 cluster or higher available for use. You will be using Web Modeler and Forms to create your model and human task interface, and then Tasklist when executing the process.

Required skills

It is assumed that those using this guide have the following skills with Camunda:

  • Form Editor – the ability to create forms for use in a process.
  • Web Modeler – the ability to create elements in BPMN and connect elements together properly, link forms, and update properties for connectors.
  • Tasklist – the ability to open items and act upon them accordingly as well as starting processes.
  • Operate – the ability to monitor processes in flight and review variables, paths and loops taken by the process instance.

Video tutorial

Accompanying this guide, we have created a step-by-step video tutorial for you. The steps provided in this guide closely mirror the steps taken in the video tutorial. We have also provided a GitHub repository with the assets used in this exercise. 

Connector keys and secrets

If you do not have existing accounts for the connectors that will be used, you can create them.

You will need an AWS account with the proper credentials for AWS Bedrock. If you do not have one, you can follow the instructions on the AWS site to set this up and obtain the required keys:

  • AWS Region
  • AWS Access key
  • AWS Secret key

You will also need a SendGrid account and a Slack organization. You will need to obtain an API key for each service which will be used in the Camunda Console to create your secrets.

Secrets

The secrets will be referenced in your model using {{secrets.yourSecretHere}} where yourSecretHere represents the name of your connector secret.
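As a sketch of how such placeholders behave (actual resolution happens inside the connector runtime, not in your process code; this regex stand-in is purely illustrative):

```python
import re

# Illustrative stand-in: real secret resolution happens in the
# connector runtime. Values here are made-up examples.
secrets = {"AWS_REGION": "us-east-1", "AWS_ACCESS_KEY": "example-key"}

def resolve(template):
    # Replace each {{secrets.NAME}} with the secret stored under NAME.
    return re.sub(r"\{\{secrets\.(\w+)\}\}",
                  lambda m: secrets[m.group(1)], template)

print(resolve("{{secrets.AWS_REGION}}"))  # us-east-1
```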

For this example to work you’ll need to create secrets with the following names if you use our example and follow the screenshots provided:

  • SendGrid
  • Slack
  • AWS_SECRET_KEY
  • AWS_ACCESS_KEY
  • AWS_REGION

Separating sensitive information from the process model is a best practice. Since we will be using a few connectors in this model, you will need to create the appropriate connector secrets within your cluster. You can follow the instructions provided in our documentation to learn about how to create secrets within your cluster.

Now that you have all the background, let’s jump right in and build the process.

Note: Don’t forget you can download the model and assets from the GitHub repository.

Overview of the step-by-step guide

For this exercise, we will take the following steps:

  • Create the initial high-level process in design mode.
    • Create the ad-hoc sub-process and AI Task Agent elements.
  • Implement the process.
    • Configure the connectors.
      • Configure the AI Agent connector.
      • Configure the Slack connector.
    • Create the starting form.
    • Configure the AI Task Agent.
    • Update the gateways for routing.
    • Configure the ad-hoc sub-process.
    • Connect the ad-hoc sub-process and the AI Task Agent.
  • Deploy and run the process.
  • Enhance the process, deploy and run again.

Build your initial process

Create your process application

The first step is to create a process application for your process model and any other associated assets. Create a new project using the blue button at the top right of your Modeler environment.

Build-process

Enter the name for your project. In this case we have used the name “AI Task Agent Tutorial” as shown below.

Process-name

Next, create your process application using the blue button provided.

Enter the name of your process application, in this example “AI Task Agent Tutorial,” select the Camunda 8.8-alpha4 (or greater) cluster that you will be using for your project, and select Create to create the application within this project.

Initial model

The next step is to build your process model in BPMN and the appropriate forms for any human tasks. We will be building the model represented below.

Message-delivery-service-agentic-orchestration

Click on the process “AI Task Agent Tutorial” to open it and diagram the process. First, change the name of your process to “Message Delivery Service” and then switch to Design mode as shown below.

Design-mode

These steps will help you create your initial model.

  1. Name your start event. We have called it “Message needs to be sent” as shown below. This start event will have a form front-end that we will build a bit later.
    Start-event

  2. Add an end event and call it “Message delivered”
    End-event

  3. The step following the start event will be a script task called “Create Prompt.” This task will be used to hold the prompt for the AI Task Agent.
    Script-task

  4. Now we want to create the AI Task Agent. We will build out this step later after building our process diagram.
    Ai-agent

Create the ad-hoc sub-process

Now we are at the point in our process where we want to create the ad-hoc sub-process that will hold our toolbox for the AI Task Agent to use to achieve the goal.

  1. Drag and drop the proper element from the palette for an expanded subprocess.
    Sub-process


    Your process will now look something like this.
    Sub-process-2

  2. Now this is a standard sub-process, which we can see because it has a start event. We need to remove the start event and then change the element to an “Ad-hoc sub-process.”
    Ad-hoc sub-process

    Once the type of sub-process is changed, you will see the BPMN symbol (~) in the subprocess denoting it is an ad-hoc sub-process.
  3. Now you want to change this to a “Parallel multi-instance” so the elements in the sub-process can be run more than once, if required.
    Parallel multi-instance


    This is the key to our process, as the ad-hoc sub-process will contain a set of tools that may or may not be activated to accomplish the goal. Although BPMN is usually very strict about what gets activated, this construct allows us to control what gets triggered by what is passed to the sub-process.
  4. We need to make a decision after the AI Task Agent executes which will properly route the process instance back through the toolbox, if required. So, add a mutually exclusive gateway between the AI Task Agent and the end event, as shown below, and call it “Should I run more tools?”.
    Run-tools

  5. Now connect that gateway to the right-hand side of your ad-hoc sub-process.
    Connect-to-ad-hoc-sub-process

  6. If no further tools are required, we want to end this process. If there are, we want to go back to the ad-hoc sub-process. Label the route to the end event as “No” and the route to the sub-process as “Yes” to route appropriately.
    Label-paths

  7. Take a little time to expand the physical size of the sub-process as we will be adding elements into it.
  8. We are going to start by just adding a single task for sending a Slack message.
    Slack-message

  9. Now we need to create the gateway to loop back to the AI Task Agent to evaluate if the goal has been accomplished. Add a mutually exclusive gateway after the “Create Prompt” task with an exit route from the ad-hoc sub-process to the gateway.
    Loop-gateway

Implement your initial process

We will now move into setting up the details for each construct to implement the model, so switch to the Implement tab in your Web Modeler.

Configure remaining tasks

The next thing you want to do in implementation mode is to use the correct task types for the constructs that are currently using a blank task type.

AI Agent connector

First we will update the AI Task Agent to use the proper connector.

  1. Confirm that you are using the proper cluster version. You can do this on the lower right-hand side of Web Modeler; be sure to select a cluster that is 8.8-alpha4 or higher.
    Zeebe-88-cluster

  2. Now select the AI Task Agent and choose to change the element to “Agentic AI Connector” as shown below.
    Agentic-ai-connector-camunda


    This will change the icon on your task agent to look like the one below.
    Agentic-ai-connector-camunda-2

Slack connector

  1. Select the “Send a Slack Message” task inside the ad-hoc sub-process and change the element to the Slack Outbound Connector.
    Slack-connector

Create the starting form

Let’s start by creating a form to kick off the process.

Note: If you do not want to create the form from scratch, simply download the forms from the GitHub repository provided. To build your own, follow these instructions.

The initial form is required to ask the user:

  • Which individuals at Hawk Emporium should receive the message
  • What the message will say
  • Who is sending the message

The completed form should look something like this.

Form

To enter the Form Builder, select the start event, click the chain link icon and select + Create new form.

Start by creating a Text View for the title and enter the text “# What do you want to Say?” in the Text field on the component properties.

You will need the following fields on this form:

Field                        Type  Required?  Key
To whom does this concern?   Text  Y          person
What do you want to say?     Text  Y          message
Who are you?                 Text  Y          sender
Once you have completed your form, click Go to Diagram -> to return to your model.

Create the prompt

Now we want to generate the prompt that will be used in our script task to tell the AI Task Agent what needs to be done.

  1. Select the “Create Prompt” script task and update the properties starting with the “Implementation” type which will be set to “FEEL expression.”

    This action will open two additional required variables: Result variable and FEEL expression.
  2. For the “Result” variable, you will create the variable for the prompt, so enter prompt here.
  3. For the FEEL expression, you will want to create your prompt.
    "I have a message from " + sender + " they would like to convey the following message: " + message + " It is intended for " + person

    Feel-prompt-message
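To make the concatenation concrete, here is the same expression mirrored in Python with made-up sample values for the three form fields (the FEEL expression is what actually runs in the process):

```python
# Sample values standing in for the form fields (person / message /
# sender keys from the start form); these values are invented.
sender, message, person = "Simon", "The demo went great", "the whole team"

prompt = ("I have a message from " + sender +
          " they would like to convey the following message: " + message +
          " It is intended for " + person)

print(prompt)
```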

Configure the AI Task Agent

Now we need to configure the brains of our operation, the AI Task Agent. This task takes care of accepting the prompt and sending the request to the LLM to determine next steps. In this section, we will configure this agent with specific variables and values based on our model and using some default values where appropriate.

  1. First, we need to pick the “Model Provider” that we will use for our exercise, so we are selecting “AWS Bedrock.”
    Agentic-ai-connector-properties-camunda


    Additional fields specific to this model will open in the properties panel for input.
  2. The next field is the “Region” for AWS. In this case, a secret was created for the region (AWS_REGION), which will be used in this field.
    Agentic-ai-connector-properties-camunda-2

    Remember the secrets will be referenced in your model using {{secrets.yourSecretHere}} where yourSecretHere represents the name of your connector secret.

    Note: See the Connector and secrets section in this blog for more information on what is required, the importance of protecting these keys, and how to create the secrets.
  3. Now we want to update the authorization credentials with our AWS Access Key and our AWS Secret key from our connector secrets.
    Agentic-ai-connector-properties-camunda-3

  4. The next part is to set the Agent Context in the “Memory” section of your task. This variable is very important as you can see by the text underneath the variable box.

    The agent context variable contains all relevant data for the agent to support the feedback loop between user requests, tool calls and LLM responses. Make sure this variable points to the context variable which is returned from the agent response.

    In this case, we will be creating a variable called agent, and within that variable there is another variable called context, so for this field, we will use agent.context. This variable will play an important part in this process.

    Agentic-ai-connector-properties-camunda-4

    We will leave the maximum messages at 20, which is a reasonable default limit.
  5. Now we will update the system prompt. We have provided a detailed system prompt for you to use in this exercise, though you are welcome to create your own. It will be entered in the “System Prompt” variable of the “System Prompt” section.

    Hint: If you are creating your own prompt, try taking advantage of tools like ChatGPT or other AI tools to help you build a strong prompt. For more on prompt engineering, you can also check out this blog series.

    Agentic-ai-connector-properties-camunda-system-prompt

    If you want to copy and paste in the prompt, you can use the code below:
You are **TaskAgent**, a helpful, generic chat agent that can handle a wide variety of customer requests using your own domain knowledge **and** any tools explicitly provided to you at runtime.

────────────────────────────────
# 0. CONTEXT — WHO IS “USER”?
────────────────────────────────
• **Every incoming user message is from the customer.**  
• Treat “user” and “customer” as the same person throughout the conversation.  
• Internal staff or experts communicate only through the expert-communication tool(s).

────────────────────────────────
# 1. MANDATORY TOOL-DRIVEN WORKFLOW
────────────────────────────────
For **every** customer request, follow this exact sequence:

1. **Inspect** the full list of available tools.  
2. **Evaluate** each tool’s relevance.  
3. **Invoke at least one relevant tool** *before* replying to the customer.  
   • Call the same tool multiple times with different inputs if useful.  
   • If no domain-specific tool fits, you **must**  
     a. call a generic search / knowledge-retrieval tool **or**  
     b. escalate via the expert-communication tool (e.g. `ask_expert`, `escalate_expert`).  
   • Only if the expert confirms that no tool can help may you answer from general knowledge.  
   • Any decision to skip a potentially helpful tool must be justified inside `<reflection>`.  
4. **Communication mandate**:  
   • To gather more information from the **customer**, call the *customer-communication tool* (e.g. `ask_customer`, `send_customer_msg`).  
   • To seek guidance from an **expert**, call the *expert-communication tool*.  
5. **Never** invent or call tools that are not in the supplied list.  
6. After exhausting every relevant tool—and expert escalation if required—if you still cannot help, reply exactly with  
   `ERROR: <brief explanation>`.

────────────────────────────────
# 2. DATA PRIVACY & LOOKUPS
────────────────────────────────
When real-person data or contact details are involved, do **not** fabricate information.  
Use the appropriate lookup tools; if data cannot be retrieved, reply with the standard error message above.

────────────────────────────────
# 3. CHAIN-OF-THOUGHT FORMAT  (MANDATORY BEFORE EVERY TOOL CALL)
────────────────────────────────
Wrap minimal, inspectable reasoning in *exactly* this XML template:

<thinking>
  <context>…briefly state the customer’s need and current state…</context>
  <reflection>…list candidate tools, justify which you will call next and why…</reflection>
</thinking>

Reveal **no** additional private reasoning outside these tags.

────────────────────────────────
# 4. SATISFACTION CONFIRMATION, FINAL EMAIL & TASK RESOLUTION
────────────────────────────────
A. When you believe the request is fulfilled, end your reply with a confirmation question such as  
   “Does this fully resolve your issue?”  
B. If the customer answers positively (e.g. “yes”, “that’s perfect”, “thanks”):  
   1. **Immediately call** the designated email-delivery tool (e.g. `send_email`, `send_customer_msg`) with an appropriate subject and body that contains the final solution.  
   2. After that tool call, your *next* chat message must contain **only** this word:  
      RESOLVED  
C. If the customer’s very next message already expresses satisfaction without the confirmation question, do step B immediately.  
D. Never append anything after “RESOLVED”.  
E. If no email-delivery tool exists, escalate to the expert-communication tool; if the expert confirms none exists, reply with an error as described in §1-6.
  1. Remember that in the Create Prompt task, we stored the prompt in a variable called prompt. We will use this variable in the “User Prompt” field of the “User Prompt” section.
    Image54

  2. The key to this step is the set of tools at the disposal of the AI Task Agent, so we need to link the agent to the ad-hoc sub-process. We do this by mapping the ID of the sub-process to the proper tools field in the AI Task Agent.
    1. Start by selecting your ad-hoc sub-process and giving it a name and an ID. In the example, we will use “Hawk Tools” for the name and hawkTools for the “ID.”
      Link-agent-to-ad-hoc-sub-process-camunda-1

    2. Go back to the AI Task Agent and update the “Ad-hoc subprocess ID” to hawkTools for the ID of the sub-process.
      Link-agent-to-ad-hoc-sub-process-camunda-2

    3. Now we need a variable to store the results from calling the toolbox to place in the “Tool Call Results” variable field. We will use toolCallResults.
      Link-agent-to-ad-hoc-sub-process-camunda-3

    4. There are several other parameters of importance, and we will use the defaults for most of them. We will leave the “Maximum model calls” in the “Limits” section set at “10,” which caps the number of model calls at 10. This is important for cost control.
      Link-agent-to-ad-hoc-sub-process-camunda-4

    5. There are additional parameters to help provide constraints around the results. Update these as shown below.
      Link-agent-to-ad-hoc-sub-process-camunda-5

    6. Now we need to update the “Output Mapping” section, starting with the “Result variable.” Here we will use our agent variable, which will contain all the components of the result, including the chain of thought taken by the AI Task Agent.
      Link-agent-to-ad-hoc-sub-process-camunda-6

Congratulations, you have completed the configuration of your AI Task Agent. Now we just need to make some final connections and updates before we can see this running in action.

Gateway updates

We are going to use the variable values from the AI Task Agent to determine if we need to run more tools.

  1. Select the “Yes” path and add the following:
    not(agent.toolCalls = null) and count(agent.toolCalls) > 0
    Flow-condition

  2. For the “No” path, we will make this our default flow.
    Default-flow
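The loop logic behind this gateway can be sketched in Python (a rough equivalent of the FEEL condition, not Camunda code): the process takes the “Yes” path only while the agent is still requesting tool calls.

```python
def needs_more_tools(agent: dict) -> bool:
    # Rough Python equivalent of the FEEL condition:
    # not(agent.toolCalls = null) and count(agent.toolCalls) > 0
    tool_calls = agent.get("toolCalls")
    return tool_calls is not None and len(tool_calls) > 0

print(needs_more_tools({"toolCalls": [{"name": "sendSlackMessage"}]}))  # True
print(needs_more_tools({"toolCalls": []}))                              # False
```

Once the agent stops requesting tools, the condition is false and the instance continues along the default “No” path.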

Ad-hoc sub-process final details

We first need to provide the input collection of tools for the sub-process to use, and we do that by updating the “Input collection” in the “Multi-instance” section.

  1. We will then provide each individual “Input element” with the single toolCall.
    Toolcall-toolcallresults
  2. We will then update the “Output Collection” to our result variable, toolCallResults.
    Toolcall-toolcallresults

  3. Finally, we want to create a FEEL expression for our “Output element” as shown below.
    {
      id: toolCall._meta.id,
      name: toolCall._meta.name,
      content: toolCallResult
    }
    Output-element


    This expression provides the id, name and content for each tool.
  4. Finally, we need to provide the variable for the “Active elements collection,” which determines which element is active in the sub-process.
    [toolCall._meta.name]
    Active-element

    To better explain this, the AI Task Agent determines a list of elements (tools) to run and this variable represents which element gets activated in this instance.
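Putting the multi-instance pieces together, here is a Python sketch (illustrative only; the real mapping is performed by Zeebe using the FEEL expressions above) of how each activated tool call becomes an output element and how those elements are collected into toolCallResults:

```python
def to_output_element(tool_call: dict, tool_call_result) -> dict:
    # Mirrors the FEEL output element:
    # { id: toolCall._meta.id, name: toolCall._meta.name, content: toolCallResult }
    return {
        "id": tool_call["_meta"]["id"],
        "name": tool_call["_meta"]["name"],
        "content": tool_call_result,
    }

# One entry per activated tool; results are gathered into the output collection
tool_calls = [{"_meta": {"id": "call-1", "name": "sendSlackMessage"}}]
tool_call_results = [to_output_element(tc, "message sent") for tc in tool_calls]
print(tool_call_results)
```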

Connect sub-process elements and the AI Task Agent

Now, how do we tell the agent that it has access to the tools in the ad-hoc subprocess?

  1. First of all, we are going to use the “Element Documentation” field to help us connect these together. We will add some descriptive text about the element’s job. In this case, we will be using:
    This can send a slack message to everyone at Niall's Hawk Emporium
    Element-documentation

Now we need to provide the Slack connector with the message to send and what channel to send that message on.

  1. We need to use a FEEL expression for our message and take advantage of the keyword fromAi and we will enter some additional information in the expression. Something like this:
    fromAi(toolCall.slackMessage, "This is the message to be sent to slack, always good to include emojis")
    Message


    Notice that we have used our variable toolCall again and told the AI that it needs to provide us with a variable called slackMessage.
  2. We also need to explain to the AI which channel is appropriate for the type of message being sent. Remember that we provided three (3) different channels in our Slack organization. We will use another FEEL expression to provide guidance on the channel that should be used.
    fromAi(toolCall.slackChannel, "There are 3 channels to use. They are called '#good-news', '#bad-news' and '#other-news'. Their names are self explanatory and depending on the type of message you want to send, you should use one of the 3 options. Make sure you use the exact name of the channel only.")
    Channels

  3. Finally, be sure to add your secret for “Authentication” for Slack in the “OAuth token” field. In our case this is:
    {{secrets.Slack}}
    Secrets
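Conceptually, the element documentation and the fromAi hints combine into a tool description the agent reasons over, roughly like the Python sketch below (the structure and tool name are illustrative, not Camunda's actual internal schema):

```python
# Illustrative only: a rough picture of the tool description the agent sees
slack_tool = {
    "name": "Send_Slack_Message",  # hypothetical tool name, for illustration
    "description": "This can send a slack message to everyone at Niall's Hawk Emporium",
    "parameters": {
        "slackMessage": "This is the message to be sent to slack, always good to include emojis",
        "slackChannel": "Use one of '#good-news', '#bad-news' or '#other-news'",
    },
}

def tool_parameters(tool: dict) -> list:
    # The agent must supply a value for exactly these parameter names
    return sorted(tool["parameters"])

print(tool_parameters(slack_tool))  # ['slackChannel', 'slackMessage']
```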

Well, you did it! You now should have a working process model that accesses an AI Task Agent to determine which elements in its toolbox can help it achieve its goal. Now you just need to deploy it and see it in action.

Deploy and run your model

Now we need to see if our model will deploy. If you haven’t already, you might want to give your process a better name and ID, something like what is shown below.

Name-process
  1. Click Deploy and your process should deploy to the selected cluster.
    Deploy-agentic-ai-process-camunda

  2. Go to Tasklist and Processes and find your process called “Message System,” then start the process by clicking the blue button Start Process ->.
    Start-process
  3. You will be presented with the form you created so that you can enter who you are, the message content and who should receive the message. Enter the following for the fields:
    • To whom does this concern?
      Everyone at the Hawk Emporium
    • What do you want to say?
      We have a serious problem. Hawks are escaping. Please be sure to lock cages. Can you make sure this issue is taken more seriously?
    • Who are you?
      Joyce, assistant to Niall - Owner, Hawk Emporium
      Or enter anything you want for this.

Your completed form should look something like the one shown below.

Form

The process is now running and should post a Slack message to the appropriate channel, so open your Slack application.

  1. We can assume that this would likely be a “bad news” message, so let’s review our Slack channels and see if something comes to the #bad-news channel. You should see a message that might appear like this one.
    Ai-results-slack

  2. Open Camunda Operate and locate your process instance. It should look something like that seen below.
    Camunda-operate-check

  3. You can review the execution and see what took place and the variable values.
    Camunda-operate-check-details

You have successfully executed your first AI Task Agent and the tasks or elements associated with it, but let’s take this a step further and add a few additional options for our AI Task Agent to use when trying to achieve its “send message” goal.

Add tasks to the toolbox

Let’s give our AI Task Agent a few more options to help it accomplish its goal of sending the proper message. To do that, we are going to add a couple of additional tools to our ad-hoc sub-process.

Add a human task

The first thing we want to do is add a human task as an option.

  1. Drag another task into your ad-hoc sub-process and call it “Ask an Expert”.
  2. Change the element type to a “User Task.” The result should look something like this.
    Add-tasks


    Now we need to connect this to our sub-process and provide it as an option to the AI Task Agent.
  3. Update the “Element Documentation” field with the information about this particular element. Something like:
    If you need some additional information that would help you with your request, you can ask this expert.
    Element-documentation-user-task

  4. We will need to provide the expert with some inputs, so hover over the + and click Create+ to create a new input variable.
  5. For the “Local variable name” use aiquestion, and then we will use a FEEL expression for the “Variable assigned value,” following the same pattern we used before with the fromAi keyword.
    fromAi(toolCall.aiquestion, "Add here the question you want to ask our expert. Keep it short and be friendly", "string")
    User-task-inputs

  6. In this case, we need to see the response from the expert so that the AI Task Agent can use this information to determine how to achieve our goal. Add an “Output Variable” called toolCallResult, and we will provide the answer using the following JSON in the “Variable assignment value”:
    {
      "Personal_info_response": humanAnswer
    }

    Your output variable section should now look like that shown below.
    User-task-output

  7. Now we need to create a form for this user task to display the question and give the user a place to enter their response to the question. Select the “Ask an Expert” task and choose the link icon and then click on the + Create new form from the dialog.
    Add-form
         
    New-form

  8. The form we need to build will look something like this:
    Question-from-ai


    Start by creating a Text View for the title and enter the text “# Question from AI” in the Text field on the component properties.

    You will need the following fields on this form:
| Field          | Type      | Description | Req? | Key         |
| -------------- | --------- | ----------- | ---- | ----------- |
| {{aiquestion}} | Text view |             | N    |             |
| Answer         | Text area |             | Y    | humanAnswer |

The Text view field for the question will display the value of the aiquestion variable that will be passed to this task. We also provided a text area where the expert can enter their response.

Once you have completed your form, click Go to Diagram -> to return to your model.

Because we have already connected the AI Task Agent to the ad-hoc sub-process and the tools it can use, we do not have to provide more at this step.

Optional: Send an email

If you have a SendGrid account and key, you can complete the steps below; if you do not, you can simply keep the existing two elements in your ad-hoc sub-process for this exercise.

  1. Create one more task in your ad-hoc sub-process and call it “Send an Email.”
  2. Change the task type to use the SendGrid Outbound Connector.
  3. Enter your secret for the SendGrid API key using the format previously discussed.

    Remember the secrets will be referenced in your model using {{secrets.yourSecretHere}} where yourSecretHere represents the name of your connector secret. In this case, we have used:
    {{secrets.SendGrid}}
  4. You will need to provide the reason the AI Task Agent might want to use this element in the Element documentation. The text below can be used.
    This is a service that lets you send an email to someone.
    Email

  5. For the Sender “Name,” you want to use the information provided to the AI Task Agent about the person requesting that the message be sent. We do this using the following expression.
    fromAi(toolCall.emailSenderName, "This is the name of the person sending the email")

    In our case, the outgoing “Email address” is “community@camunda.com” which we also need to add to the “Sender” section of the connector properties. You will want to use the email address for your own SendGrid configuration.
    Sender-name-fromai


    Note: Don’t forget to click the fx icon before entering your expressions.
  6. For the “Receiver,” we will also use information provided to the AI Task Agent about who should receive the message. For the “Name,” we can use this expression:
    fromAi(toolCall.emailReceiveName, "This is the name of the person getting the email")

    For the Email address, we will need to make sure that the AI Task Agent knows the email address for the intended individual(s) for the message.
    fromAi(toolCall.emailReceiveAddress, "This is the email address of the person you want to send an email to. Make very sure that if you use this, the email address is correctly formatted. You also should be completely sure that the email is correct. Don't send an email unless you're sure it's going to the right person")

    Your properties should now look something like this.
    Receiver-name-fromai

  7. Select “Simple (no dynamic template)” for the “Mail contents” property in the “Compose email” section.
  8. In the “Compose email” section for the subject, we will let the AI Task Agent determine the best subject for the email, so this text will provide that to the process.
    fromAi(toolCall.emailSubject, "Subject of the email to be sent")
  9. The AI Task Agent will determine the email message body as well with the following:
    fromAi(toolCall.emailBody, "Body of the email to be sent")

    Your properties should look something like this.
    Properties-fromai

That should do it. You now have three (3) elements or tools for your AI Task Agent to use in order to achieve the goal of sending a message for you.

Deploy and run again

Now that you have more options for the AI Task Agent, let’s try running this again. However, we are going to make an attempt to have the AI Task Agent use the human task to show how this might work.

  1. Deploy your newly updated process as you did before.
  2. Go to Tasklist and Processes and find your process called “Message System,” then start the process by clicking the blue button.
    Start-process
  3. You will be presented with the form you created so that you can enter who you are, the message content and who should receive the message. Enter the following for the fields:
    • To whom does this concern?
      I want to send this to Reb Brown. But only if he is working today. So, find that out.
    • What do you want to say?
      Can you please stop feeding the hawks chocolate? It is not healthy.
    • Who are you?
      Joyce, assistant to Niall - Owner, Hawk Emporium
      Or enter anything you want for this.

Your completed form should look something like the one shown below.

New-form-to-user-task-from-ai

The process is now running.

  1. Open Camunda Operate and locate your process instance. It should look something like that seen below.
    Camunda-operate-check-again

  2. You can review the execution and see what took place and the variable values.
  3. If you then access Tasklist and select the Tasks tab, you should have an “Ask an Expert” task asking you if Reb Brown is working today. Respond as follows:
    He is working today, but it’s also his birthday, so it would be nice to let him know the important message with a happy birthday as well.

    What-ai-asked-and-user-answer

  4. In Operate, you will see that the process instance has looped around with this additional information.
    Camunda-operate-check-details-again


    You can also toggle the “Show Execution Count” to see how many times each element in the process was executed.
    Camunda-operate-execution-count

  5. Now open your Slack application and you should have a message now that the AI Task Agent knows that not only is Reb Brown working, but it is his birthday.
    Ai-message

Congratulations! You have successfully executed your first AI Task Agent along with the tasks and elements associated with it.

We encourage you to add more tools to the ad-hoc sub-process to continue to enhance your AI Task Agent process. Have fun!

Congratulations!

You did it! You completed building an AI Agent in Camunda from start to finish including running through the process to see the results. You can try different data in the initial form and see what happens with new variables. Don’t forget to watch the accompanying step-by-step video tutorial if you haven’t already done so.

The post Intelligent by Design: A Step-by-Step Guide to AI Task Agents in Camunda appeared first on Camunda.

]]>
How to Succeed When Getting Started with Camunda 8 https://camunda.com/blog/2025/04/how-to-succeed-when-getting-started-with-camunda-8/ Tue, 29 Apr 2025 22:20:12 +0000 https://camunda.com/?p=136597 Avoid these four common pitfalls as you set up Camunda 8.

The post How to Succeed When Getting Started with Camunda 8 appeared first on Camunda.

]]>
After spending five years in Camunda support helping customers get up and running with the product, I’ve noticed a few recurring issues that are easy to avoid. Keep reading to learn about how you can ensure a good start with Camunda!

Don’t treat Camunda as a system of record

Using Camunda as a system of record can lead to several issues, such as a bloated data store that causes performance problems. There’s also the risk of storing personally identifiable information (PII) or other sensitive data in systems that don’t require it. To avoid these challenges, keep variable data to a minimum, only storing what’s necessary for the process.

For critical information, use a separate data store and reference it in the process using an ID, rather than storing it directly in Camunda. Ultimately, the data you store in Camunda should be strictly relevant to the process flow itself, helping to maintain both efficiency and security.
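A quick sketch of that pattern, with illustrative names: the process variable carries only an ID, and the full (possibly sensitive) record stays in your own data store.

```python
# Stand-in for an external system of record (illustrative only)
external_store = {
    "order-42": {"customer": "Ada", "items": ["hawk feed"], "card_number": "4111..."},
}

# The Camunda process variable carries only the reference, never the payload
process_variables = {"orderId": "order-42"}

def resolve_order(order_id: str) -> dict:
    # Look the record up on demand instead of storing it in the process
    return external_store[order_id]

print(resolve_order(process_variables["orderId"])["customer"])  # Ada
```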

One way to keep variable data small is to make use of the Result Expression when using a connector. Most connectors will offer an Output Mapping to store the result of the connector. I commonly see users storing the entire result instead of making use of the Result Expression to store only the data they need.

For example, if you use the REST connector, you may get a result containing the status code of the response, some headers, and then some additional data from the endpoint. By using the Result Expression, you can map just the data you need to variables.
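In Python terms (with a made-up response shape), the difference between storing the whole connector result and mapping only what you need looks like this:

```python
full_response = {
    "status": 200,
    "headers": {"content-type": "application/json"},
    "body": {"customerId": "c-123", "name": "Ada", "history": ["..."]},
}

def mapped_result(response: dict) -> dict:
    # Analogous to a FEEL Result Expression such as:
    # { customerId: response.body.customerId }
    # Only this small mapping ends up as process variables.
    return {"customerId": response["body"]["customerId"]}

print(mapped_result(full_response))  # {'customerId': 'c-123'}
```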

Prevent running into backpressure

Backpressure is a crucial mechanism in Zeebe that helps maintain system stability when processing slows down. It kicks in when the broker experiences high latency, preventing new events from being accepted until the system can catch up and the processing speeds return to normal. This ensures that the broker doesn’t become overwhelmed and continues to function efficiently.

To avoid hitting backpressure, there are several proactive steps you can take.

First, conduct thorough load testing to simulate real-world traffic and identify potential bottlenecks. The Camunda community provides two GitHub projects to help you perform load tests: the benchmarking toolset and the process automator.

Check out this in-depth blog post about benchmarking in Camunda.

Next, review your hardware specs to ensure they meet the performance requirements for your use case—insufficient resources can cause delays in processing. There are two common pitfalls when choosing your hardware, both relating to the hard drives attached to Zeebe brokers: be sure that your drives have a consistent minimum of 1000 IOPS and are not NFS. Slower drives, and the latency incurred by NFS, will cause your Zeebe brokers to perform inefficiently.

Make your system observable

Observability refers to the ability to track and analyze your system’s performance in real time. This gives you insight into how well your application is functioning and allows you to catch issues before they become significant problems.

Despite its importance, I often see people delay setting up observability until it’s too late. Being proactive about monitoring your application’s health can save a lot of time and effort in the long run by helping you identify and resolve issues early.

To help you get started monitoring your Camunda platform, Camunda comes out of the box with support for Prometheus and OpenTelemetry. You can review the metrics we provide in our documentation. We also provide dashboards for Grafana to make visualizing these metrics quick and easy.

In addition to monitoring the metrics provided by our application, be sure that you can collect and monitor application logs. These are critical for determining the root cause of issues that may arise. If accessing raw logs is an issue in your team, consider leveraging cloud vendor solutions like AWS CloudWatch or Azure Monitor.

Set up data retention early

Databases have limits, and as they grow, they can start to slow down. Storing unnecessary or outdated data causes your system to become bloated, which will negatively impact query performance and overall responsiveness. That’s where data retention policies come in.

By defining what data should be kept and what should be discarded, you ensure that your database stays lean and efficient, preventing performance issues as your system scales.

Don’t wait until your database becomes overwhelming to start cleaning it up. Make data retention a key part of your system planning from the beginning, so you can keep things running smoothly as your data grows.

Check out our documentation for more information on setting data retention policies for all the Camunda components:

While there is always more to learn as you get deeper into Camunda 8, avoiding these common pitfalls will help you get off to a strong start. And if you don’t want to deal with the hassle of hosting, sizing, and monitoring your platform, we also offer a SaaS option.

For anyone looking for a hand, of course, be sure to check out the docs, our forum, or contact your Camunda support representative directly, and we’ll be happy to help!

The post How to Succeed When Getting Started with Camunda 8 appeared first on Camunda.

]]>
Creating and Testing Custom Exporters Using Camunda 8 Run https://camunda.com/blog/2025/04/creating-testing-custom-exporters-camunda-8-run/ Fri, 18 Apr 2025 17:26:17 +0000 https://camunda.com/?p=134975 Learn how to create custom exporters and how to test them quickly using Camunda 8 Run.

The post Creating and Testing Custom Exporters Using Camunda 8 Run appeared first on Camunda.

]]>
If you’re familiar with Camunda 8, you’ll know that it includes exporters to Elasticsearch and Opensearch for user interfaces, reporting, and historical data storage. And many times folks want the ability to send data to other warehouses for their own purposes. While creating custom exporters has been available for some time, in this post we’ll explore how you can easily test them on your laptop using Camunda 8 Run (C8 Run).

C8 Run is specifically targeted for local development, making it faster and easier to build and test applications on your laptop before deploying it to a shared test environment. Thank you to our colleague Josh Wulf for this blog post detailing how to build an exporter.

Download and install Camunda 8 Run

For detailed instructions on how to download and install Camunda 8 Run, refer to our documentation here. Once you have it installed and running, continue on your journey right back here!

Download and install Camunda Desktop Modeler

You can download and install Desktop Modeler using instructions found here. You may need to open the dropdown menu for “Alternative downloads” to find your preferred installation. Select the appropriate operating system, follow the instructions, and be sure to start Modeler up. We’ll use Desktop Modeler to create and deploy sample applications to Camunda 8 Run a little bit later.

Create a sample exporter

First, we’ll create a very simple exporter, install it on your local C8 Run environment, and see the results. In this example, we’ll create a Maven project in IntelliJ, add the exporter dependency, and then create a Java class implementing the Exporter interface with straightforward logging to system out. Feel free to use your favorite integrated development environment and build automation tools.

Once you’ve created a sample Maven project, add the following dependency to the pom.xml file. Be sure to match the version of the dependency, at the very least the minor version, with your C8 Run installation.

<dependencies>
  <dependency>
      <groupId>io.camunda</groupId>
      <artifactId>zeebe-exporter-api</artifactId>
      <version>8.6.12</version>
  </dependency>
</dependencies>

After reloading the project with the updated dependency, go to the src/main/java folder and create a package called io.sample.exporter:

Sample-exporter

Next, create a class called SimpleExporter in the package:

Simple-exporter

In SimpleExporter add implements Exporter, and then you should be prompted to select an interface. Be sure to choose Exporter io.camunda.zeebe.exporter.api:

Exporter-interface

You’ll likely get a message saying you’ll need to implement the export method of the interface. You’ll also want to implement the open method as well. Either select the option to implement the methods or create them yourself. The code should look something like this:

package io.sample.exporter;


import io.camunda.zeebe.exporter.api.Exporter;
import io.camunda.zeebe.exporter.api.context.Controller;
import io.camunda.zeebe.protocol.record.Record;


public class SimpleExporter implements Exporter
{
   @Override
   public void open(Controller controller) {
       Exporter.super.open(controller);
   }


   @Override
   public void export(Record<?> record) {
      
   }
}

Let’s make some updates. First, we’ll store the Controller object, which includes a method to mark a record as exported and move the record position forward. Otherwise, the Zeebe broker will not truncate the event log, which will lead to full disks. Add a field Controller controller; to the class and update the open method, replacing the generated code with: this.controller = controller;

Your code should now look something like this:

public class SimpleExporter implements Exporter
{
   Controller controller;


   @Override
   public void open(Controller controller) {
       this.controller = controller;
   }


   @Override
   public void export(Record<?> record) {
   }
}

Let’s implement the export method. We’ll print something to the log and move the record position forward. Add the following code to the export method:

if(! record.getValue().toString().contains("worker")) {
    System.out.println("SIMPLE_EXPORTER " + record.getValue().toString());
}
// Mark the record as exported so the broker can truncate the event log
controller.updateLastExportedRecordPosition(record.getPosition());

The connectors will generate a number of records and the if statement above will cut down on the noise so we can focus on events generated from processes. Your class should now look something like this:

public class SimpleExporter implements Exporter
{
    Controller controller;

    @Override
    public void open(Controller controller) {
        this.controller = controller;
    }

    @Override
    public void export(Record<?> record) {
        if(! record.getValue().toString().contains("worker")) {
            System.out.println("SIMPLE_EXPORTER " + record.getValue().toString());
        }
        // Mark the record as exported so the broker can truncate the event log
        controller.updateLastExportedRecordPosition(record.getPosition());
    }
}

Next, we’ll package this up as a jar file, add it to the Camunda 8 Run libraries, update the configuration file to point to this exporter and see it in action.

Add custom exporter to Camunda 8 Run

Using either Maven terminal commands (i.e., mvn package) or your IDE’s Maven command interface, package the exporter. Depending on what you’ve defined for artifactId and version in your pom file, you should see a file named artifactId-version.jar in the target directory. Here is an example jar file with an artifactId of exporter and a version of 1.0-SNAPSHOT:

Example-jar-artifactid

While you don’t have to copy and paste this jar file into the Camunda 8 installation, it’s a good idea. As long as the Camunda 8 Run application can access the directory, you can place it anywhere. In this example we’re placing the jar into the lib directory of the Camunda 8 Run installation in <Camunda 8 Run root directory>/camunda-zeebe-8.x.x/lib.

Lib-directory

Next, update the application.yaml configuration file to reference the custom exporter jar file. It can be found in the <Camunda 8 Run root directory>/camunda-zeebe-8.x.x/config directory.

Example Configuration:

zeebe:
  broker:
    exporters:
      customExporter:
        className: io.sample.exporter.SimpleExporter
        jarPath: <C8 Run dir>/camunda-zeebe-8.x.x/lib/exporter-1.0-SNAPSHOT.jar

This ensures that Camunda 8 Run recognizes and loads your custom exporter during startup.
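If your exporter needs settings of its own (say, making the hard-coded "worker" filter configurable), the same exporter entry can also carry an args map. The sketch below assumes your exporter reads the arguments back in its configure method via Context.getConfiguration().getArguments(); the filterWord argument itself is invented for illustration:

```yaml
zeebe:
  broker:
    exporters:
      customExporter:
        className: io.sample.exporter.SimpleExporter
        jarPath: <C8 Run dir>/camunda-zeebe-8.x.x/lib/exporter-1.0-SNAPSHOT.jar
        # Hypothetical custom arguments, handed to the exporter at startup
        args:
          filterWord: worker
```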

Now let’s start up Camunda 8 Run.

Start Camunda 8 Run and observe the custom exporter in action

Open a terminal window and change directory to the Camunda 8 Run root directory. There you should find the start.sh or c8run.exe file, depending on your operating system. Start the appropriate one (./start.sh or .\c8run.exe).

Once Camunda 8 Run has started and you once again have a prompt, change directory to log, i.e. <Camunda 8 Run root directory>/log. In that directory there should be three logs: camunda.log, connectors.log, and elasticsearch.log.

Log-directory

Start tailing or viewing camunda.log with your favorite tool. Next, we’ll create a very simple process, deploy it, and run it to view sample records from a process instance.

Create and deploy a process flow in Desktop Modeler

Go to Modeler and create a new Camunda 8 BPMN diagram. Build a simple one-step process with a Start Event, a User Task, and an End Event, then deploy it to the Camunda 8 Run instance. Your Desktop Modeler should look something like this:

Process-camunda-desktop-modeler

You can then start a process instance from Desktop Modeler as shown here:

Start-instance-camunda-desktop-modeler

Go back to camunda.log and you should see entries that look something like this:

SIMPLE_EXPORTER {"resources":[],"processesMetadata":[{"bpmnProcessId":"Process_0nhopct","version":1,"processDefinitionKey":2251799813703443,"resourceName":"diagram_1.bpmn","checksum":"xbmiHFXd3lVQbwV1gq/UEQ==","isDuplicate":true,"tenantId":"<default>","deploymentKey":2251799813703442,"versionTag":""}],"decisionRequirementsMetadata":[],"decisionsMetadata":[],"formMetadata":[],"tenantId":"<default>","deploymentKey":2251799813704156}
SIMPLE_EXPORTER {"bpmnProcessId":"Process_0nhopct","processDefinitionKey":0,"processInstanceKey":-1,"version":-1,"variables":"gA==","fetchVariables":[],"startInstructions":[],"tenantId":"<default>"}
SIMPLE_EXPORTER {"bpmnElementType":"PROCESS","elementId":"Process_0nhopct","bpmnProcessId":"Process_0nhopct","version":1,"processDefinitionKey":2251799813703443,"processInstanceKey":2251799813704157,"flowScopeKey":-1,"bpmnEventType":"UNSPECIFIED","parentProcessInstanceKey":-1,"parentElementInstanceKey":-1,"tenantId":"<default>"}
SIMPLE_EXPORTER {"bpmnProcessId":"Process_0nhopct","processDefinitionKey":2251799813703443,"processInstanceKey":2251799813704157,"version":1,"variables":"gA==","fetchVariables":[],"startInstructions":[],"tenantId":"<default>"}
SIMPLE_EXPORTER {"bpmnElementType":"PROCESS","elementId":"Process_0nhopct","bpmnProcessId":"Process_0nhopct","version":1,"processDefinitionKey":2251799813703443,"processInstanceKey":2251799813704157,"flowScopeKey":-1,"bpmnEventType":"UNSPECIFIED","parentProcessInstanceKey":-1,"parentElementInstanceKey":-1,"tenantId":"<default>"}
SIMPLE_EXPORTER {"bpmnElementType":"PROCESS","elementId":"Process_0nhopct","bpmnProcessId":"Process_0nhopct","version":1,"processDefinitionKey":2251799813703443,"processInstanceKey":2251799813704157,"flowScopeKey":-1,"bpmnEventType":"UNSPECIFIED","parentProcessInstanceKey":-1,"parentElementInstanceKey":-1,"tenantId":"<default>"}
SIMPLE_EXPORTER {"bpmnElementType":"START_EVENT","elementId":"StartEvent_1","bpmnProcessId":"Process_0nhopct","version":1,"processDefinitionKey":2251799813703443,"processInstanceKey":2251799813704157,"flowScopeKey":2251799813704157,"bpmnEventType":"NONE","parentProcessInstanceKey":-1,"parentElementInstanceKey":-1,"tenantId":"<default>"}
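Each SIMPLE_EXPORTER line above is a plain JSON string, so you can pull individual fields out of it even before wiring up a real JSON library. Here’s a minimal, self-contained sketch of that idea (the sample payload is shortened from the output above; in practice you’d use a proper parser such as Jackson):

```java
public class RecordFieldExtractor {

    // Extracts the string value of a top-level JSON field like "elementId":"StartEvent_1".
    static String extractField(String json, String field) {
        String needle = "\"" + field + "\":\"";
        int start = json.indexOf(needle);
        if (start < 0) {
            return null; // field absent, or not a string value
        }
        start += needle.length();
        int end = json.indexOf('"', start);
        return json.substring(start, end);
    }

    public static void main(String[] args) {
        // Sample record value, shortened from the SIMPLE_EXPORTER output above
        String record = "{\"bpmnElementType\":\"START_EVENT\",\"elementId\":\"StartEvent_1\","
            + "\"bpmnProcessId\":\"Process_0nhopct\",\"tenantId\":\"<default>\"}";
        System.out.println(extractField(record, "elementId"));     // StartEvent_1
        System.out.println(extractField(record, "bpmnProcessId")); // Process_0nhopct
    }
}
```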

Now you can experiment with extracting data from the JSON objects for your own purposes and with sending that data to warehouses of your choice. Enjoy!

Looking for more?

Camunda 8 Run is free for local development, but our complete agentic orchestration platform lets you take full advantage of our leading platform for composable, AI-powered end-to-end process orchestration. Try it out today.

The post Creating and Testing Custom Exporters Using Camunda 8 Run appeared first on Camunda.

Essential Agentic Patterns for AI Agents in BPMN https://camunda.com/blog/2025/03/essential-agentic-patterns-ai-agents-bpmn/ Wed, 05 Mar 2025 16:42:17 +0000 https://camunda.com/?p=130529 Learn how orchestration and BPMN can solve some of the most common limitations and concerns around implementing AI Agents today.

The post Essential Agentic Patterns for AI Agents in BPMN appeared first on Camunda.



I’ve been reading a lot about the potential of adding AI Agent functionality to existing processes and software applications. It’s mostly cautionary tales and warnings about the limitations of AI Agents. So I decided to take some of the most common limitations and combine that with the most common cautionary tale and talk about how orchestration with BPMN does an awful lot to solve these problems.

Let’s start by explaining our cautionary tale: healthcare. It’s very common for articles about agentic AI to eventually evoke caution in their readers with the words “Would you trust AI with your health?” I, like you, would not. People mention very specific reasons for this, and I wondered if I could use BPMN to create patterns that alleviate those fears. The idea is that if it works for a healthcare scenario where the stakes are so high, surely it would work for any other kind of process.

Like this interactive embedded model? You can build one too: start a free trial today

So I started with this simple BPMN representation of a diagnosis process. A patient has some medical issue, and after gathering all the information they need, the doctor confirms a diagnosis and makes a reservation for some kind of treatment. Confirmation is then sent to the patient. This model, as well as all of the others I’ll be referencing in this post, can be found here. So where do I start on my journey towards optimizing this with AI?

Visualize critical information

Problem: When adding an Agent how can I ensure its actions are auditable?

I’m going to jump right in by changing the model to both add AI Agent functionality while also addressing the issue of auditability.

By design, BPMN visualizes the execution of actions that will happen or have happened. This creates clear auditability, both as a log of events internally in the engine and when superimposed on the model itself. While the standard is mostly known for its structured way of implementing processes, it does have a great way of adding non-deterministic sections to a process. The symbol in question is the ad-hoc sub-process. This lets your process break into an unstructured segment, opening the door to AI Agent shenanigans: the Agent can look at the context of the request and see a list of actions that it can take. (Changes are highlighted below in green.)

Using this construct, the Agent has the freedom to perform the actions it feels the context requires, and it is completely visible to the user how and why those choices are made. Each task, service, or event triggered by the AI is visualized in the very BPMN model that you create. Afterwards, once the AI has finished its work, the process can continue along a more predictable path.

Increasing trust in results

Problem: AI gets things wrong. How can I ensure these are caught and any damage is undone?

We’ve changed the process so that the Agent is going to be making choices and acting on them. Clearly the first thing to ask is: do you trust its results? Obviously, you shouldn’t. So in the next iteration of the process, not only have I added a pattern to adjudicate whether the correct choice was made, but I’ve also ensured that if an action has been taken as a result of that decision, it can be undone.

I’ve written before about how this can be done by analyzing the chain of thought output, but this pattern goes a little further: first by allowing the thought checking to happen in parallel with the actions being taken, and second by being able to actually undo any actions once a bad decision has been discovered.

How it works is that after the “Decide on Treatment” sub-process finishes, there are two possibilities:

  1. Treatment is needed and a reservation is made.
  2. No treatment is needed and nothing is reserved.

In both cases a check is made (in parallel) to ensure the decision makes sense. If it’s all good, we end. If some flawed logic is discovered, a Compensation event is triggered. This is a really powerful feature of BPMN: it checks which actions have been taken by the process (in this case the “Make Treatment Reservation” task may be complete) and undoes them (in this model, that means activating the “Cancel Reservation” task).

This solves two issues you’d tend to worry about: it catches mistakes, and if those mistakes have led to bad actions, it can undo them. And none of this actually slows down the process, because it’s all happening in parallel!

Adding humans in the loop

Problem: In some cases humans should be involved in decision making.

Core business processes, by their nature, have a substantial impact on people and business success. The community of users who implement their processes with Camunda don’t tend to use it for trivial processes, because those processes don’t have the complexity or require the flexibility that is a core tenet of Camunda’s technology. With this in mind, it’s obvious that bringing AI Agents into the mix provokes concerns about oversight, specifically the kind of oversight that needs to be conducted by a person.

Continuing with our model, I’ve added some new functionality that does two things. The first is a pretty simple requirement: if it’s been decided that the Agent’s chain of thought has led to the wrong choice, we’ve added an Escalation End event. This construction throws an event called “Doctor Oversight Needed,” which is caught by the event sub-process and creates a user task for a doctor. A nice feature here is that the context remains intact, so the doctor can look over the patient’s details, see what the AI suggested, and even see why the chain of thought was determined to be wacky. They then have the power to decide how to proceed.

The second addition is a little more subtle but I think very important to maintaining the integrity of the process. It gives users the control of reversing a decision an Agent has made even long after the agent has made it.

This is done by adding an event-based gateway which can wait for an order sent in from a doctor who has decided that they want to work on a new treatment. Sending in this message does two things. First, it cancels the actions the Agent took (in this case, making a reservation for treatment), and secondly it triggers the same escalation event as the other branch, and so now the doctor once again gets full context and can make a new decision about the treatment.

This shows that humans can be easily integrated at both the time of decision making by the Agent but also after the fact.

Guardrail important choices

Problem: AI could make choices that don’t align with fundamental rules.

While human validation is a nice way to keep things in check, humans are neither infallible nor scalable. So when your process has an important decision to be taken by an Agent, you don’t want to rely on a human to always check the result, or on Agents checking other Agents. You need guardrails that will not make mistakes. You need business rules.

BPMN’s sister standard DMN lets you visually define complex business rules that can be integrated into a process. If these rules are broken by a decision from an Agent, it’s caught early, before any further action is taken. For the more financially conscientious users out there, it won’t cost you a call out to an AI Agent, so for high-throughput, predictable decisions it’s a great choice economically. It gets even better: in combination with BPMN’s Error event, DMN can also ensure that any time the rules are broken, the violation is reported, understood, and hopefully improved upon. Using DMN also ensures auditable compliance. Because there’s no way for a process to break the rules, you can be absolutely sure that every instance of your process is both compliant and auditable. So if there are regulations guiding how your process should or should not perform, not only can the business rest assured that things aren’t going to go pear-shaped, but it can also be proven to external auditors.

In this model I’ve added a DMN table that is triggered after the “Confirm Treatment Decision” task. The DMN table has a set of rules outlining treatments that should not be given based on existing conditions of the patient. These rules are made to be easy to define and update, so as more treatments become available the rules can keep pace. If a decision made by the Agent breaks the rules, an Error event is triggered, registering the failure as an incident to be corrected so that the Agent can improve and violate fewer rules in the future.
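To make the guardrail idea concrete, here’s what such a rule table boils down to. This is a hypothetical sketch in plain Java (the condition and treatment names are invented for illustration); in Camunda the same logic would live in the DMN table, where business users can maintain it:

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

public class TreatmentGuardrail {

    // Hypothetical rule table: treatments that must not be given for a condition
    static final Map<String, Set<String>> FORBIDDEN = Map.of(
        "kidney-disease", Set.of("contrast-ct"),
        "penicillin-allergy", Set.of("penicillin", "amoxicillin")
    );

    // Returns true if the proposed treatment violates a rule for any existing condition
    static boolean violatesRules(List<String> conditions, String treatment) {
        return conditions.stream()
            .map(c -> FORBIDDEN.getOrDefault(c, Set.of()))
            .anyMatch(banned -> banned.contains(treatment));
    }

    public static void main(String[] args) {
        List<String> conditions = List.of("penicillin-allergy");
        // In the process, a violation would trigger a BPMN Error event; here we just print
        System.out.println(violatesRules(conditions, "amoxicillin")); // true
        System.out.println(violatesRules(conditions, "ibuprofen"));   // false
    }
}
```

Because the rules are a plain lookup rather than a model inference, the check is deterministic, cheap, and trivially auditable, which is exactly the property the DMN table gives you inside the process.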

Ad-Hoc human intervention

Problem: It should be possible to call on human intervention at any time

Most AI Agents are built so that once assigned a task, they work on it within their little black box until they completely succeed or completely fail. Basically, AI Agents are transactions. The annoying side effect is that an AI Agent cannot just reach out mid-thought for human input, because the all-or-nothing design pattern means it can’t wait for a response. That’s not the case for AI Agents built with BPMN and Camunda.

As a process grows in complexity and more decision making is being left up to AI, it’s important to maintain human awareness of decisions and approvals when needed. BPMN events allow for users to be called on dynamically to check decisions or give input. These measures are incredibly important for further growth of an agentic process, because they reinforce trust and take minimal amounts of time from experts, who may only need to be called on for verification and validation of the most complex or consequential parts of the process.

Now, in the final iteration of the diagnostic process, I’ve added a couple of ways to be more dynamic about how human interaction is integrated, starting with the ad-hoc sub-process. There’s now an Escalation event called “Doctor’s Opinion Needed” that can be triggered at any time by the AI Agent if they feel they need more context before continuing. Unlike previous events, this does not hand decision making over to the doctor, but instead informs the doctor that the Agent needs some advice in order to continue their diagnosis. The AI Agent then waits for a signal indicating that they’ve got an answer to their query.

The agent can theoretically use this as often as they like until they’ve got all the information they need for an informed decision.

The future of AI Agent design

AI agents are going to become ubiquitous for helping navigate a lot of the mundane parts of productivity very soon. For the most consequential parts of business—it’s going to take a little longer, because there’s a lot of risk inherent in giving decision making power to components that can act without oversight. Moving from deterministic to non-deterministic processes is going to require businesses to rethink design principles. Once it starts to happen though, it’s the place that’s going to benefit the most and have the biggest impact on the core business. While it’s still early days and I’m looking forward to seeing how new patterns beyond the ones I’ve talked about will change the way Agents impact business, I’m pretty confident that BPMN is going to be how we see AI Agent design and implementation where it matters most. As Jakob and Daniel have already suggested—those companies are going to be doing it with the best placed technology and simply put, that’s Camunda.

Ai-company-future-camunda

Read more about the future of AI, process orchestration and Camunda

Curious to learn more about AI and how we think it will impact business processes? Keep reading below.

Building Your First AI Agent in Camunda https://camunda.com/blog/2025/02/building-ai-agent-camunda/ Fri, 28 Feb 2025 16:44:18 +0000 https://camunda.com/?p=130273 Follow this step-by-step guide (with video) to use agentic ai and start developing agentic process orchestration with Camunda today.

The post Building Your First AI Agent in Camunda appeared first on Camunda.


Update for Camunda 8.8

Note: This step-by-step guide takes advantage of agentic AI features and capabilities in Camunda 8.7. Please see our latest step-by-step AI Agent Guide to implement a similar process using Camunda 8.8 alpha functionality.

Building your first agentic artificial intelligence (AI) process is easier than you think. Our intention in this post is to provide you with step-by-step instructions to create that first process using agentic process orchestration with Camunda. If you’re new to Camunda, you can get started for free here.

Within BPMN, there is a construct called an ad-hoc subprocess. An ad-hoc subprocess in BPMN is a type of subprocess where tasks do not follow a predefined sequence flow. Instead, tasks within the subprocess can be executed in any order, repeatedly, or skipped entirely, based on the needs of the process instance.

In Camunda, this workflow pattern enables the injection of non-deterministic behavior into otherwise deterministic processes as ad-hoc subprocesses: the ad-hoc subprocess serves as a container where the exact sequence and occurrence of tasks are not pre-defined but rather determined at runtime by leveraging LLMs.

This is the implementation path for Camunda’s support for AI agents, as it allows portions of the decision-making to be handed over to an agent for processing. That processing can include human tasks as well. This approach provides the AI agent some freedom, with constraints, about what actions should be processed.

For a better understanding of Camunda’s terminology, we have provided our definition of an AI agent below:

An AI agent is an automation within Camunda that leverages ad-hoc subprocesses to perform one or more tasks with non-deterministic behavior. AI agents can:

  • Make autonomous decisions about task execution
  • Adapt their behavior based on context and input
  • Handle complex scenarios that require dynamic response
  • Integrate with other process components through standard interfaces

AI agents represent the practical implementation of agentic process orchestration within the Camunda ecosystem, combining the flexibility of AI with the reliability of traditional process automation.

These subprocesses provide access to actions that can improve decisions and help to optimize the completion of tasks and choices. They can be easily integrated into your end-to-end business process.

Model overview

Our example model for this process is a fraud detection process when submitting tax forms.  

A BPMN model of an AI agent in an ad-hoc sub-process using Camunda.

The process begins when a form is filled in by a user who wants to submit information for their tax return. An OpenAI bot checks the data provided for any indication of fraud. The AI bot will determine which tasks, from a list of tasks, to perform for this set of criteria. The appropriate tasks within the ad-hoc subprocess will then be activated, running in parallel until all are completed.

The tasks that can be performed are located in the ad-hoc subprocess and are:

  1. Send an email asking for more information,
  2. Ask a human expert for their opinion,
  3. Declare that fraud has been detected.

Each of these options triggers a different type of action. For example:

  • Sending an email activates two tasks,
  • Asking an expert activates a front-end application,
  • Detecting fraud will activate an escalation event that will cancel the process. This could initiate another process to investigate the fraud, of course.

Let’s jump right in and build the process.

Assumptions and initial configuration

A few assumptions are made for those individuals who will be using this step-by-step guide to create their first agentic AI process. These are outlined in this section.

The proper environment

In order to take advantage of the latest and greatest functionality provided by Camunda, you will need to have a Camunda 8.7.x cluster or higher available for use. You will be using Web Modeler and Forms to create your model and human task interface, and then Play and Task List when executing the process.

Required skills

It is assumed that those using this guide have the following skills with Camunda.

  • Form Editor – the ability to create forms for use in a process.
  • Web Modeler – the ability to create elements in BPMN and connect elements together properly, link forms, and update properties for connectors.
  • Play – the ability to step through a model from Modeler.
  • Tasklist – the ability to open items and act on them, as well as start processes.

GitHub repository and video tutorial

If you do not want to build this process from scratch, you may access the GitHub repository and download the individual components. Accompanying this guide, we have created a step-by-step video tutorial for you. The steps provided in this guide closely mirror the steps taken in the video tutorial.

Connector secrets

Separating sensitive information from the process model is a best practice. Since we will be using a few connectors in this model, you will need to create the appropriate connector secrets within your cluster. You can follow the instructions provided in our documentation.

If you do not have existing accounts for the connectors that will be used, you can create a SendGrid account and an OpenAI account. You will then need to get an API key for each service which will be used in the Camunda Console to create your secrets.

Connector-secrets

The secrets will be referenced in your model using {{secrets.yourSecretHere}}, where yourSecretHere represents the name of your connector secret.

For this example to work you’ll need to create secrets with the following names:

  • OpenAI
  • SendGrid

Building your process

Creating your process application

The first step is to create a process application for your process model and any other associated assets. Create a new project using the blue button at the top right of your Modeler environment.

Create-new-project

Enter the name for your project. In this case we have used “Agentic Fraud Detection” as shown below.

Create-new-process-application

Next, create your process application using the blue button provided.

Create-new-process-application-2

Enter the name of your process application, select the Camunda 8.7.x cluster for your project, and select “Create” to create the application within this project.

Initial model

The next step is to build your process model in BPMN and the appropriate forms for any human tasks. We will be building the model represented below.

Ai-agent-bpmn-model-camunda

Click on the process “AI Fraud Detection Example” and we will begin building out the model in BPMN off the start step provided.

Ai-model-bpmn-building

We will start by building our model in Design mode as shown below.

Ai-model-bpmn-start

These steps will help you create your initial model.

  1. Name your start event. We have called it “Enter Financial Details” as shown below.
    Ai-model-bpmn-start-2

  2. Add an End Event and call it “No Fraud Found.”
    Ai-model-bpmn-end

  3. Create a task after the start task. Change it to an OpenAI Outbound Connector task and call it “Decide on likelihood of fraud” as shown below.
    Openai-connector-camunda-1
     
    Openai-connector-camunda-2
     
    Openai-connector-camunda-3

  2. Create a Script task after the OpenAI connector task called “Create list of tasks”; based on the OpenAI decision, it will provide the list of tasks to be run in the ad-hoc subprocess.
    Script-task
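The script task’s only job is to turn the AI’s verdict into a list of task identifiers for the ad-hoc subprocess. In Camunda this would typically be a short FEEL expression, but the mapping logic is easy to see as code. This is a hypothetical sketch in Java (the risk levels are invented; the task names come from the model in this guide):

```java
import java.util.List;

public class TaskListBuilder {

    // Maps a fraud-risk verdict to the ad-hoc subprocess tasks to activate.
    // The risk levels are hypothetical; task names match the model in this guide.
    static List<String> tasksFor(String riskLevel) {
        return switch (riskLevel) {
            case "low" -> List.of();                             // no follow-up needed
            case "medium" -> List.of("Generate Email Inquiry");  // ask for more information
            case "high" -> List.of("Generate Email Inquiry",
                                   "Call on Expert for Advice"); // add a human check
            default -> List.of("Fraud Detected");                // escalate
        };
    }

    public static void main(String[] args) {
        System.out.println(tasksFor("medium")); // [Generate Email Inquiry]
    }
}
```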

Creating the ad-hoc subprocess

Now we are at the point in our process where we want to create the ad-hoc subprocess that will be used to trigger the appropriate components based on the decisions made by the previous tasks. Just complete these steps to create the ad-hoc subprocess for your model.

  1. Drag and drop the proper element from the palette for an expanded subprocess.
    Ad-hoc-sub-process-camunda

    Your process will now look something like this.
    Ad-hoc-sub-process-camunda-2

  2. Now this is a standard subprocess, which we can see because it has a start event. We need to remove the start event and then change the element to an Ad-hoc subprocess.
    Ad-hoc-sub-process-camunda-3

    Once the type of subprocess is changed, you will see the BPMN symbol (~) in the subprocess denoting it is an ad-hoc subprocess. And don’t forget to connect your script task to the subprocess.

    This is the key to our process, as the ad-hoc subprocess will contain a set of tasks that may or may not be activated. Although BPMN is usually very strict about what gets activated, this construct allows us to control what gets triggered by what is passed to the subprocess.
  3. Take a little time to expand the physical size of the subprocess as we will be adding elements into it.
  4. As mentioned, one of the options is going to be sending an email to request additional information. Create two connected tasks inside the subprocess: first an OpenAI connector task called “Generate Email Inquiry,” followed by a SendGrid connector task called “Send Email,” as shown below.
    Ad-hoc-sub-process-email

    You will use the “Change Element” option to select the OpenAI Outbound Connector and the SendGrid Outbound Connector.

    So the act of sending the email has two elements: generating the email content and then sending the email.
  5. Now we want to add the option to call on an expert for advice, so add a task called “Call on Expert for Advice” and be sure to change this to a User Task.
    Ad-hoc-sub-process-expert

    This enables us to create a front end form to interact with a human as part of the ad-hoc process.
  6. Finally, we want to add the option to trigger fraud. Add an event to the subprocess called “Fraud Detected,” which will then throw an escalation event called “Throw Fraud.”

    This escalation event is going to throw the process out of the ad-hoc subprocess.
    Fraud-detected

  7. We are going to catch this fraud throw event with a boundary event on the subprocess and change it to an escalation boundary event. This will just end the process for our example.

    Click on the ad-hoc subprocess and create the catch event, as shown below, then connect an end event called “Fraud Found” so that your diagram looks something like what is shown below.
    Ad-hoc-sub-process-camunda-4

  8. Let’s give the option for the expert to also stop the process if fraud is indicated. So, add an exclusive gateway with the option for a “No Fraud” event or to Throw Fraud as indicated below. Be sure to label your gateway and the branches as indicated.
    Ad-hoc-sub-process-camunda-5


    This gives us two cases where fraud can be found: the AI bot can find fraud or the expert can find fraud, both triggering the end of the workflow.

Finalizing the process

Now that we have completed the ad-hoc subprocess, we have a few more things to add to finalize our overall process.

  1. We want to add another task to the overall process before the final end event. Create another OpenAI connector task, which will serve as an AI bot called “Check final Decision” to confirm that the decision made (fraud or not fraud) is accurate.
    Ad-hoc-sub-process-camunda-6

  2. If it is determined that the decision is not accurate, we need to return to the “Decide on likelihood of Fraud” element to rerun the ad-hoc subprocess. In order to do this, add an exclusive gateway after the “Check final Decision” OpenAI task called “Is everything OK?”.
    Everything-ok-gateway


    If we are happy with the decision, the process ends (the “yes” path) and if we are not happy, then this will return to the “Decide on likelihood of fraud” (the “no” path).

Now that you have completed the design of the model, move to the Implement tab to make sure all the proper parameters are configured.

Optional: Creating forms

Let’s start by creating the forms you will need for the human tasks in this process. You will need two (2) forms for this process. You are welcome to create your own forms or use the ones in the GitHub repository link provided. To build your own, follow these instructions.

Enter Details of Tax

The first is the initial form that will be used to initiate the process called the “Enter Details of Tax” form. The completed form should look something like this.

Tax-form

You are welcome to create a Text view field for the title, “Tax Return Submission Form,” and to separate the information on the form into three sections, although this is not necessary. The Text view fields shown in the image are:

  • Personal Information
  • Financial Information
  • Deductions

You will need the following fields on this form:

  • Full Name: Text; description “Enter your full name”; required; key fullname
  • Date of Birth: Date Time (subtype Date); description “Select your date of birth”; required; key dob
  • Email Address: Text; required; key emailAddress; validation pattern: Email
  • Total Income: Number; description “Enter your total income”; required; key totalIncome; prefix € or $; minimum 0; maximum 9999999
  • Total Expenses: Number; description “Enter your total expenses”; required; key totalExpenses; prefix € or $; minimum 0; maximum 9999999
  • Large Purchases: Tag list; description “Please add any large purchases you’ve made this year”; required; key largePurchases; options source Static, with static options Car (key car), House (key house), Stocks (key stocks), Holiday (key holiday), Boat (key boat)
  • Charitable Donations: Text; description “Enter any charitable donations”; required; key charitableDonations

View Tax Details

The final form is used to view the results for the likelihood of fraud, allowing a human to make that determination. It is called the “View Tax Details” form. The completed form should look something like this.

Tax-form-2

You are welcome to create the title Text view field “Tax Return Check Form” and the subtext “I don’t have time to build a front end, so you just need to guess . . . Fraud or no Fraud?”

You will need the following fields:

  • Fraud: Checkbox; description “Tick the fraud box”; not required; key fraudDetected; default value: not checked
  • Reason for Decision: Text; required; key expertAnalysis

Obtaining and linking your forms

If you created your own forms, they will already exist in your process application. If not, please download them from the GitHub repository provided before starting this set of steps.

To import these forms in your project, simply select “Create New->Upload Files” and select the two downloaded forms. Now your Camunda process application should look something like what is shown below.

Link-forms

Now that we have the form files, we need to link them to the appropriate elements. First, you will need to switch to the Implement tab so that you have access to the required features.

  1. Select the start event (Enter Financial Details) and select the chainlink from the menu as shown below:
    Link-forms-2
     
  2. Select the form named “Enter Details of Tax” and click “Link” to attach this form to your start event.
    Link-forms-3


    You can now view this form by selecting “Open in form editor” from the link icon to review the details of the information that will be entered to initiate the process.
    Link-forms-4


    When opened in “Validate Mode” within the form editor, you can see the Form Output in the lower right hand corner of the UI. This information will be important when we go through our process later.
    Form-output

  3. Go back to Modeler and select the “Call on Expert for Advice” human task and link the other form, “View Tax Details,” to this task.
    Link-other-form

Now that our forms are connected, we just have to make a few final configurations before stepping through this process.

Configure remaining elements

You should still be using the Implement tab so that you have access to the required features to update the properties for the required elements. At this time, you may also want to confirm that you are validating against Zeebe 8.7 or higher.

Implement-zeebe-87

Decide on likelihood of fraud

As mentioned, you must create connector secrets for the OpenAI and the SendGrid connector elements. Now we need to add those secrets to the proper places in the process.

Connector-secrets-keys
  1. Select the “Decide on likelihood of Fraud” OpenAI task and update the properties to include the secret and prompt.
    Add-secret-prompt

  2. You will want to check the name of your connector secret for OpenAI in the Camunda Console (in our case it is OPENAI_KEY) and enter that into the Authentication location as shown below.
    Check-key

    Remember the secrets will be referenced in your model using {{secrets.yourSecretHere}} where yourSecretHere represents the name of your connector secret.
  3. The next thing we need to enter is the Prompt that will be sent to OpenAI. This prompt is key to our example because it builds based on the values provided in the intake form and will then ask OpenAI to make a determination about fraud.
    Enter-prompt

    Click the fx icon to change the input method for this field to a FEEL expression, which will look like this:
    Enter-prompt-2

    Click the boxes to the right in the field to open the pop-up editor which will provide additional space to enter the expression.
    Enter-prompt-3

    Copy this text into the field:
    "I'd like your opinion on if this hypothetical situation could be fraud or not. " + fullName + " has submitted details of his economic status to a hypothetical government. They are as follows: Date of Birth " + string(dob) +
    " Total Income " + string(totalIncome) +
    " Total Expenses " + string(totalExpenses) +
    " Charitable Donations " + charitableDonations +
    " Large purchases " + string join(largePurchases, ", ") + " I'm going to need you to respond strictly in the following format: with nothing more than one, two or three of the following words separated by commas. 'email' if the person submitting should be asked to clarify anything. Also add 'human' if a human expert could be used in clarifying the submission. If neither option could justify the submission, add the word 'fraud'."

    Click the “X” in the upper right-hand corner to close the pop-up editor and your properties should now look like what is shown below:
    Enter-prompt-4

  4. Finally, we need to modify the result expression for this task so that the output will be easier to use in our model. Replace the existing line in the field with the following.
    {response:response.body.choices[1].message.content}

    Your result expression should now look like this:
    Result-expression


    Taking this action allows the process to parse the response appropriately to provide just the metadata that we need.
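As a sketch outside Camunda (the response body below is a hypothetical OpenAI chat-completion payload), the extraction performed by the result expression works like this; note that FEEL lists are 1-indexed, so choices[1] in FEEL corresponds to choices[0] in Python:

```python
# Hypothetical OpenAI chat-completion response body (shape only).
body = {
    "choices": [
        {"message": {"role": "assistant", "content": "email, human"}}
    ]
}

# FEEL: {response: response.body.choices[1].message.content}
# (FEEL indexes lists from 1, so choices[1] is the first element.)
response = body["choices"][0]["message"]["content"]
print(response)  # -> email, human
```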

Create list of tasks

Now we want to use the results provided by our OpenAI request to create the list of tasks that will trigger the optional elements in our ad-hoc subprocess. If you recall, the prompt had some key words:

  • email
  • human
  • fraud

If these are found in the results from the OpenAI request, this will guide us on which optional elements to trigger.

For this script task, we are going to generate the list of tasks, so we need to expand the Script section of the properties to modify the Script variables for this task. We can do this by adding a Result variable of tasks and then a simple FEEL expression which will generate the list from our OpenAI response.

Select FEEL expression for the Implementation.

Feel-implementation

Enter this FEEL expression in the properties for the script task.
split(response, ", ")

Your properties for the script task will now look like this:

Properties-script-task

The result of this script is a list of tasks stored in the variable tasks for use in the ad-hoc subprocess. Be sure to confirm that the Implementation property is set to “FEEL expression.”
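Outside of FEEL, the effect of split(response, ", ") can be sketched like this (the response value is hypothetical):

```python
# Hypothetical content extracted from the OpenAI response.
response = "email, human"

# Equivalent of the FEEL expression split(response, ", ").
tasks = response.split(", ")
print(tasks)  # -> ['email', 'human']
```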

Ad-hoc subprocess

Now we need to provide the ad-hoc subprocess with the list of tasks.

Select your subprocess and review the properties to find the Active elements. Expand this section and add the tasks variable as the Active elements collection for the subprocess.

Active-elements
Correlate the tasks to activate

Tasks are activated by their element ID in the process, which must match the keywords we used: email, human, fraud. To simplify this, we will set the ID of each optional task to its associated keyword.

Any task that has no preceding task (no incoming sequence flow) is a potentially activatable task.

  1. Select the “Generate Email Inquiry” OpenAI element and change the ID to email.
    Email-id

  2. For the “Call on Expert for Advice” human task, change the ID to human.
  3. For the “Fraud Detected” event, change the ID to fraud.
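Putting the pieces together, the activation rule can be sketched as follows (the tasks value is hypothetical; the element IDs are the ones set above):

```python
# IDs of the potentially activatable elements in the ad-hoc subprocess.
element_ids = {"email", "human", "fraud"}

# Hypothetical tasks list produced by the script task.
tasks = ["email", "human"]

# Only elements whose ID appears in the active elements collection
# are activated when the subprocess starts.
activated = [t for t in tasks if t in element_ids]
print(activated)  # -> ['email', 'human']
```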

Finalize all connector tasks

We need to update the remaining connector tasks to update the keys using our connector secrets.

Generate email inquiry
  1. Select the “Generate Email Inquiry” OpenAI connector task and update the OpenAI API Key with your secret.
    Openai-secret-key

  2. You will also need to update the prompt using a FEEL expression for this step. Copy and paste this prompt into the Prompt property for this task.

    "I'd like your opinion on if this hypothetical situation could be fraud or not. " + fullName + " has submitted details of his economic status to a hypothetical government. They are as follows: Date of Birth " + string(dob) +
    " Total Income " + string(totalIncome) +
    " Total Expenses " + string(totalExpenses) +
    " Charitable Donations " + charitableDonations +
    " Large purchases " + string join(largePurchases, ", ") + " can you generate an email to ask for clarification on anything you think is odd about this?"

    Your Prompt should look like that shown below:

    Prompt

    This prompt will be used to generate an email requesting clarification.
  3. Now you need to configure how we will parse the response. This will look very similar to the previous configuration at the OpenAI task, but we are changing the variable assignment to emailBody.
    {emailBody:response.body.choices[1].message.content}

    Your result expression should now look like this:
    Result-expression-emailbody
OPTIONAL: SendGrid task

If you have a SendGrid account and key, you can complete the steps below, but if you do not, you can just modify the “Send Email” task to be a User Task.

  1. Enter your secret for the SendGrid API key using the format previously discussed.
  2. You can enter “Tax Man” for the sender of the email since the question will be coming from our tax commission.
  3. For the sender email address, select an address that you know is properly configured in SendGrid.
  4. For Receiver, we will use variables provided from our initial form: fullName and emailAddress. Don’t forget to click the fx icon before entering your variable names and you can use autocomplete as shown below.

    Your properties should now look something like this.
    Sendgrid-secrets-key

  5. Select “Simple (no dynamic template)” for the Mail contents property in the Compose email section.
  6. Enter the Subject as “Tax Inquiry”.
  7. Select the emailBody variable that is the response from the OpenAI email step prior to this one.
    Emailbody-variable

Check final Decision

We also want to update the final OpenAI connector task “Check final Decision” with the proper secret and prompt.

  1. Enter your OpenAI API key using the format for the secret as previously discussed.
  2. Enter the prompt as follows:
    "I'd like your opinion on if this hypothetical situation could be fraud or not. " + fullName + " has submitted details of his economic status to a hypothetical government. They are as follows: Date of Birth " + string(dob) +
    " Total Income " + string(totalIncome) +
    " Total Expenses " + string(totalExpenses) +
    " Charitable Donations " + charitableDonations +
    " Large purchases " + string join(largePurchases, ", ") + " When asked if this was fraud, the answer came back as " + string(fraudDetected) + ". If you think this is accurate, reply only with 'yes', otherwise reply with 'no'."
  3. Update the Result expression to use finalCheck for the variable as shown below:
    {finalCheck:response.body.choices[1].message.content}

Update gateways and events

We already linked the proper form to our human task, but we want to make sure the output from that form is used to decide whether fraud was detected. In this case, we need to use the fraudDetected variable set by the checkbox on the form. Now we need to configure our exclusive gateway after the “Call on Expert for Advice” user task.

  1. For the “Yes” branch, you want to update the Condition expression property to use:
    fraudDetected = true
  2. For the “No” branch, you want to update the Condition expression property to use:
    fraudDetected = false

Now we will configure the “Throw Fraud” event.

  1. Select the “Throw Fraud” event and expand the Escalation section. Select “Create New” for the Global escalation reference and set the new name to Fraud!.
  2. The Code field is what correlates the throw to the catch boundary event in our model. Set this to Fraud! as well.
    Fraud-code

  3. Select the catch boundary event and select “Fraud!” for the Global escalation reference.
    Fraud-escalation

Finally, we need to configure the last exclusive gateway paths using the result from the “Check final Decision” OpenAI response.

  1. For the “Yes” branch, you want to update the Condition expression property to use:
    finalCheck = "yes"
  2. For the “No” branch, you want to update the Condition expression property to use:
    finalCheck = "no"

    Or you can set the “No” branch as the default flow.

Final step

We want to give the fraudDetected variable a default value and pass it into the ad-hoc subprocess as an input.

  1. Select the subprocess and create a new input variable of fraudDetected with a value of false.
    Nofraud

  2. Use that same variable with a value of itself to copy that variable back out of the subprocess as an Output variable.
    Yesfraud

  3. Now select the “Fraud Detected” event and set an output variable of fraudDetected for it; if this event is triggered, we have determined that fraud was detected.
    Yesfraud-2

That’s it. You have completed the model and we are now ready to test it using the Camunda Play feature.

Step through your model with Play

While still within Modeler, click the Play tab so we can test this model.

Play will display the cluster that will access the secrets and use that cluster’s engine to run through the process.

Camunda-play-ai-agent

Click Continue.

When ready, Play will provide the following box:

Startprocess

Click Start a process instance to run through the model.

If I access the Start Form, I can fill in data and save that for future instances.

Startform

In this case I have filled in some data.

Exampledata

Click Start Instance to begin the run-through. It will take a moment, but once the “Loading xxx details” message disappears, you can see which optional tasks were triggered by the information entered in the starting form and review the process variables.

Note: Your results may vary as AI can return different results at different times.

Info

We see here that the email task and the human expert task were triggered by the input information. I can see the values of the process variables at the bottom of the screen. If I check my email, I get a message that mentions my expenses being much higher than my income.

Email

If I select the form for the Expert task, I can fill in this form. In this case, I am going to determine that I do not think this is fraud and see what happens.

Humanform

By doing this, the decision step triggered another pass through the ad-hoc subprocess which generated another email and another expert review.

Model-loopback

This time I will select fraud for the expert human task. That action ends the process with a positive for fraud detection.

Model-fraudfound

Congratulations!

You did it! You completed building an AI Agent in Camunda from start to finish including running through the process to see the results. You can try different data in the initial form and see what happens with new variables. Don’t forget to watch the accompanying step-by-step video tutorial if you haven’t already done so.

The post Building Your First AI Agent in Camunda appeared first on Camunda.

]]>
Migrating Solutions from Camunda 7 to Camunda 8—A Strategy Update https://camunda.com/blog/2025/02/migrating-solutions-camunda-7-camunda-8-strategy-update/ Fri, 28 Feb 2025 11:00:00 +0000 https://camunda.com/?p=129811 We want to make migration to Camunda 8 as easy as we can for you. Read on to learn the latest journey and new strategies you can take to get there.

The post Migrating Solutions from Camunda 7 to Camunda 8—A Strategy Update appeared first on Camunda.

]]>
With the EOL (end of life) of the Camunda 7 CE (Community Edition) in October 2025, we get a lot of requests around migrating existing solutions based on Camunda 7 to Camunda 8.

Camunda 8 is not a direct drop-in replacement for Camunda 7, meaning a simple library swap is insufficient—your solution must be adapted. This post outlines the typical migration journey, available tooling to assist migration, and important timeline considerations.

We have recently adjusted our migration strategy based on learnings from the past year(s), so this information may differ from what you have seen before.

But let’s go step-by-step.

The migration journey

Most of our customers go through the following journey, which is also the basis of our just refreshed migration guide that walks you through that journey in detail.

A diagram of the migration journey from Camunda 7 to Camunda 8

Some solutions are easier to migrate and may not require a full transition process. If your solution adheres to Camunda best practices, does not require data migration, and does not involve complex parallel run scenarios, migration may be relatively straightforward.

A diagram of a simpler migration journey from Camunda 7 to Camunda 8

This blog post will not go into all the details of those journeys, which is what the migration guide itself does—but to give you an idea, the “orient yourself” phase describes how to:

When to migrate?

It goes without saying that any new projects should be started using Camunda 8.

For existing Camunda 7 solutions, consider the following support timeline:

  • Camunda 7 CE (Community Edition) will reach EOL in October 2025, with a final release (v7.24) on Oct 14, 2025. No further Camunda 7 CE releases will occur after this date. The GitHub repository will be archived, and issues/pull requests will be closed.
  • Camunda 7 EE (Enterprise Edition) customers will continue to receive security patches and bug fixes until at least 2030.

Although there is some urgency to start planning migration, you still have time to execute it effectively.

There is a second aspect to the timeline. Camunda 8 is a completely rearchitected platform, meaning some features are still being reintroduced. If your solution depends on features not yet available, you may need to wait for the appropriate Camunda 8 version. Prominent examples are task listeners (planned for 8.8) or the business key (planned for 8.9). We are further running an architecture streamlining initiative to improve the core architecture, which will be released with Camunda 8.8. This introduces a new, harmonized API. Hence, unless you have time pressure or momentum to lose, we generally recommend waiting for 8.8 to happen and consider 8.8 or 8.9 as ideal candidates to migrate to.

Check the public feature roadmap to see when your required features will be available.

Feature-timeline

That said, it is important to note that targeting, for example, the 8.9 release doesn’t mean you should wait for it and postpone migration planning. Many preparatory steps (analysis, budgeting, and project planning) should begin as soon as possible. Migration tasks can often be performed in advance or against early alpha versions of upcoming releases.

Preparedness-timeline

Migration tooling

To support migration, we have several tools available, most importantly:

These are the tools our consultants are using with great success with customers. The tools are open source (Apache 2.0 license) and can be easily adapted or extended to your environment.

However, we acknowledge that some tools are not as self-explanatory as they should be. We have also seen a growing need for additional migration tooling, which is why we are investing in the following tools, targeted for the Camunda 8.8 (October 2025) release:

  • Migration Analyzer: Enhancing user experience around the diagram converter and adding DMN support.
  • Data Migrator: Migrates runtime instances (running process instances) and copies audit data (history) from Camunda 7 to Camunda 8. (Limited to Camunda 8 running on RDBMS, planned for 8.9 release.)
  • Code Converter: A collection of code conversion patterns (e.g., refactoring a JavaDelegate to a JobWorker) with automation guidance (likely provided as OpenRewrite recipes).

These tools aim to simplify and streamline the migration process.

Migrating from Camunda 7 CE (Community Edition)

We regularly get asked whether migration from the Community Edition is possible as well. Of course it is. The process is exactly the same as for our EE edition.

If you are worried about the timeline because of the EOL of the community edition, you can switch to our Camunda 7 Enterprise Edition now and benefit from the extended support timelines right away.

Where do I get help?

With the updated migration guide, we aim to provide clearer guidance on migration tasks. We will continue improving this guide iteratively—please share your feedback via GitHub or our forum.

You can further leverage:

Next steps

As a Camunda 7 user your next steps towards migration are:

  1. Orient yourself and analyze your existing solution. This will help you understand necessary tasks and effort, so you can plan and budget your project. This can ideally be supported by a Camunda consultant or certified Camunda partner. This will also inform your timeline on migration, ideally targeting Camunda 8.8 or 8.9.
  2. Migrate your solution, adjusting your models and code.
  3. Plan data migration and roll out your migrated solution.

Let’s go!

We know that some of you felt a bit lost with migration in the last year and we are truly sorry for any confusion around the topic. Our priority has been to build the best process orchestration and automation platform in the world—but we fully recognize that supporting existing Camunda 7 users to get to this future is equally critical.

In 2025, migration support will be a top priority, led by a strategic task force headed by Mary Thengvall and myself (Bernd Ruecker). We are committed to making this transition as smooth as possible.

Looking forward to discussing migration with you! Join the conversation in our forum.

The post Migrating Solutions from Camunda 7 to Camunda 8—A Strategy Update appeared first on Camunda.

]]>
How to use Camunda’s SOAP Connector https://camunda.com/blog/2025/01/how-to-use-camunda-soap-connector/ Tue, 28 Jan 2025 10:30:00 +0000 https://camunda.com/?p=127509 Learn how to use Camunda’s SOAP Connector to enable you to interact with applications that are exposed via Simple Object Access Protocol (SOAP)

The post How to use Camunda’s SOAP Connector appeared first on Camunda.

]]>
SOAP has been around for quite some time and there are many business applications using the protocol. We’re asked on a regular basis about how Camunda can help orchestrate processes that include these mission critical applications. As a result of this feedback from our customers, we have provided a SOAP Connector to help speed the time to value in the creation of your processes.

We’ll be using Camunda 8 Run and Desktop Modeler in this example though other options, like Web Modeler and Camunda 8 Self-Managed, can be used as well. We’ll also be using the ubiquitous SOAP UI application as an endpoint though you can use your own endpoints and utilize this as a tutorial if you’d like.  

Download and install Camunda 8 Run

You can use an already functioning Camunda 8 environment as the SOAP Connector is bundled in the latest releases. If you’re new to Camunda or if you want to get an environment up and running on your computer quickly, we highly recommend using Camunda 8 Run. For detailed instructions on how to download and install it, refer to our documentation here. Once you have it installed and running, continue on your journey right back here!

Download and install Camunda Desktop Modeler

You can download and install Desktop Modeler using instructions found here. You may need to scroll down to the Open Source Desktop Modeler section. Select the appropriate operating system and follow the instructions and be sure to start Modeler up.

Download and install SoapUI

In this example, we’ll use SoapUI and one of its tutorials as the SOAP endpoint. You can download SoapUI here. When you install SoapUI, be sure to also install the tutorials:

Soapui-setup

We’re almost there. With the latest releases of Camunda Desktop Modeler you may have noticed this message pop up when starting it:

Camunda-connector-pop-up

Desktop Modeler creates connector template shortcuts for all of the out of the box connectors, with the exception of the SOAP Connector. To add the SOAP Connector template you’ll need to download and install it. The SOAP Connector template can be found here. Depending on which operating system you’re using, you’ll need to place the connector template in a particular folder and details about it can be found here. You’ll then need to either restart or reset Desktop Modeler (Ctrl-r or Cmd-r) for the connector template to be recognized.

Start SoapUI and start sample project

Start SoapUI locally and follow the instructions to import and start up the SOAP Sample Project as described here. Be sure to start ServiceSoapBinding MockService as shown here:

Start-mock-service

Once you have started the mock service you can explore the various web service requests. Expand ServiceSoapBinding in the navigator panel and expand the login node. Open login rq by double clicking on it. Your screen should look something like this:

Login-rq

You can run the request by clicking on the green play button to ensure that the service is running. You should get a response similar to the following:

Login-rq-response

Next, we’ll replicate this request using the SOAP Connector in Camunda.

Create a process flow in Modeler

Go to Modeler and create a new Camunda 8 BPMN diagram. Build a simple one step process with a Start Event, a Task, and an End Event. Put focus on the task by clicking on it. Look for the SOAP Connector by clicking on the wrench icon in the context pad just to the right of the task to change the element. In the dialog box that appears, you can search for the SOAP Connector by typing in ‘soap’ in the search field or you can scroll through the templates. Your screen should look something like this:

Bpmn-model-soap-connector

Select SOAP Connector. Give the task a name. In this example the name is Call SOAP service. Now we need to fill in some parameters to make it work. You’ll notice the Service URL is in red to indicate a required field.

Service-url

Go back to login rq in SoapUI and find the service URL. You should see it at the top:

Login-rq-response

Copy and paste the URL into the Service URL field:

http://127.0.0.1:8088/mockServiceSoapBinding

We’ll leave the fields of Authentication, SOAP version, and SOAP Header using default values:

Soap-service

For the SOAP body we will define a template with placeholders for the variables we’ll pass in. For SOAP body, select Template from the drop-down. For the XML Template itself, you can copy the contents of the

<soapenv:Body>

element from login rq in SoapUI:

Login-rq-2

Paste them into the XML Template field and replace the values of username and password with {{username}} and {{password}} placeholders. The XML should look something like this:

<sam:login>
   <username>{{username}}</username>
   <password>{{password}}</password>
</sam:login>

Next, enter a variable name for the XML template context. We will pass a JSON object with that name into the process. For this example we used usernamePassword. That section of the properties panel should look something like this:

Xml-template-soap-connector
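To illustrate the idea (this is only a sketch of mustache-style substitution, not the connector's actual implementation), filling the template from the usernamePassword context works roughly like this:

```python
# XML template with {{placeholders}}, as configured on the connector.
template = (
    "<sam:login>"
    "<username>{{username}}</username>"
    "<password>{{password}}</password>"
    "</sam:login>"
)

# The JSON object passed into the process under the context variable name.
context = {"username": "Login", "password": "Login123"}

# Naive placeholder substitution (sketch only).
body = template
for key, value in context.items():
    body = body.replace("{{" + key + "}}", str(value))
print(body)
```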

Now we need to enter the namespaces, which requires converting them from XML into JSON. Heading back to login rq in SoapUI, copy the namespaces defined in

<soapenv:Envelope...

paste them into the Namespaces field, and edit them into JSON:

{ 
    "soapenv":"http://schemas.xmlsoap.org/soap/envelope/",
    "sam":"http://www.soapui.org/sample/"
}

Finally, set an output variable. You can dump the entire contents into a single variable, or, if you know the structure of the response, use a FEEL expression to extract just what you need. For now, let’s dump the entire contents into a single variable; later we’ll create a suitable FEEL expression to retrieve the needed contents. For the Result variable field we’ll use result.

Output-variable-soap-connector

Save your work.

Run the process

In Desktop Modeler go to the lower left hand corner and click on the Deploy button which resembles a rocket ship:

Deploy-proces-camunda

A dialog box should appear. Provide a deployment name, select Camunda 8 Self-Managed, enter a Cluster endpoint of:

http://localhost:26500

And set Authentication to None. Your screen should look something like this:

Deployment-name

Click on Deploy. You should get a deployment successful message in Modeler:

Deployed-confirmation

Let’s run a process. In the lower left-hand corner of Modeler, click the Run icon next to the Deploy button.

Run-process-camunda

A dialog box should appear prompting you to enter in some JSON data. Copy and paste this into the JSON field:

{ "usernamePassword": 
   {
      "username": "Login", 
      "password": "Login123"
   } 
}

The dialog box should look something like this:

Json-data

Click on Start.

View process in Operate

Open a browser and navigate to:

http://localhost:8080/operate

Log into Operate using demo/demo for username and password. The process will likely be completed by the time you navigate to the process instance in Operate. You should see the result variable with its contents. We’ll use this as a guide to extract the sessionid using FEEL in a little bit.

What happens if you run it again? What happens if you change the username or password?

Run-variations

Congratulations, you’ve successfully integrated with a SOAP endpoint using Camunda’s SOAP Connector!

Extracting the sessionid from the response

Before updating the process, you’ll probably want to stop and restart ServiceSoapBinding MockService to avoid errors (e.g., “user already logged in” if you ran it multiple times) when running the process again (see the red box highlighting the service stop and start):

Stop-start-process

Back in Modeler, go to the process diagram and open the “Call SOAP service” task. In the Output Mapping section, in the Result expression field, use the following expression to create a variable called sessionid, which extracts the sessionid from the response. The response is held in a variable called, oddly enough, response:

{sessionid: response.Envelope.Body.loginResponse.sessionid}

Your screen should look something like this:

Result-expression-soap-connector

Run the process again and head over to Operate to view the results:

View-results-operate

You can see the new variable sessionid with the extracted data. While you can get to sessionid in the result variable using the following expression:

result.Envelope.Body.loginResponse.sessionid

It’s more elegant to extract the data you need. You can now use sessionid for the other calls in the SoapUI example – logout, search, buy. Have fun experimenting!
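To make the structure concrete, here is a sketch of the shape of the parsed login response (the sessionid value is invented):

```python
# Hypothetical parsed SOAP response, as stored in the result variable.
result = {
    "Envelope": {
        "Body": {
            "loginResponse": {"sessionid": "abc123"}
        }
    }
}

# FEEL: result.Envelope.Body.loginResponse.sessionid
sessionid = result["Envelope"]["Body"]["loginResponse"]["sessionid"]
print(sessionid)  # -> abc123
```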

Try Camunda today

If you don’t already have a Camunda account, you can learn more about Camunda and get started with everything it has to offer for free. Note that Camunda 8 Run is a newly released distribution to help developers run Camunda 8 locally, so it requires a self-managed installation.

The post How to use Camunda’s SOAP Connector appeared first on Camunda.

]]>
AI Tools and Process Orchestration, the Perfect Match for Developers https://camunda.com/blog/2025/01/ai-tools-process-orchestration-perfect-match-developers/ Fri, 24 Jan 2025 21:02:10 +0000 https://camunda.com/?p=127427 A few suggestions for adding development AI tools into the process orchestration mix.

The post AI Tools and Process Orchestration, the Perfect Match for Developers appeared first on Camunda.

]]>
Process orchestration and automation tools have become indispensable for developers looking to automate and streamline time-consuming human workflows, transform robotic process automation, and optimize process intelligence and analytics. Tools like Camunda enable the integration and coordination of various processes and microservices, making complex deployments more manageable. However, by leveraging Artificial Intelligence (AI) tools and applications in conjunction with process orchestration, developers can further enhance efficiency, reduce errors, and accelerate development cycles.

Let’s explore a few useful AI tools and functions that synergize well with process orchestration tools to simplify developers’ lives.

AI-powered monitoring and analytics

One key area where AI tools can aid developers is monitoring and analytics. AI-driven monitoring tools like Dynatrace, New Relic, and Datadog use machine learning algorithms to analyze application performance data as it happens. They can predict potential issues before they impact the system, allowing developers to address them proactively. When integrated with process orchestration tools like Camunda, these AI-powered monitors can trigger automated workflows to scale resources or initiate recovery processes, ensuring high availability and optimal performance without manual intervention.

NB: Camunda 8 will soon introduce a new feature where completed process instances are archived directly in Camunda Zeebe. This change simplifies installation, means less manual configuration of The Archiver, and enhances scalability and replication by utilizing Zeebe’s partitions. Zeebe is optimized for high performance and is well-suited for distributed systems and environments with heavy workloads.

Automated code reviews and quality checks

Tools like DeepCode and Codacy use AI to perform automated code reviews and enforce coding standards. These platforms scan code repositories for security vulnerabilities, bugs, and quality issues, providing real-time feedback to developers. When integrated with process orchestration platforms like Camunda, these automated quality checks can be incorporated into the deployment pipelines. This ensures that only high-quality code is moved through to production environments.

NB: Camunda offers CI/CD with Git Sync, as well as a blueprint that replaces an organization’s CI/CD pipeline with Camunda. A CI/CD pipeline is a series of automated processes that streamline the software development lifecycle:

  • Continuous integration (CI): Developers regularly merge code changes into a central repository.
  • Continuous delivery (CD) or continuous deployment: The application is automatically released to its intended environment.

AI tools for predictive resource management

AI can also significantly improve resource allocation and cost management within orchestrated environments. Tools like Google Cloud’s Recommender and AWS Compute Optimizer use machine learning to analyze usage patterns and recommend optimal resource configurations. By feeding these recommendations into orchestration tools like Camunda, developers can automate the provisioning of cloud resources, ensuring that applications always have the resources they need while minimizing costs.
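A drastically simplified version of what such recommenders do: pick the cheapest instance whose capacity covers the high-percentile observed usage plus headroom. The instance types, vCPU counts, and prices below are invented for illustration, not real cloud SKUs:

```python
def percentile(samples, pct):
    """Nearest-rank percentile of a non-empty sample list."""
    ordered = sorted(samples)
    rank = max(0, min(len(ordered) - 1, round(pct / 100 * len(ordered)) - 1))
    return ordered[rank]

def recommend_instance(cpu_usage, instances, headroom=1.2):
    """Pick the cheapest instance whose vCPU capacity covers the
    95th-percentile observed usage plus headroom."""
    needed = percentile(cpu_usage, 95) * headroom
    viable = [i for i in instances if i["vcpus"] >= needed]
    return min(viable, key=lambda i: i["hourly_cost"]) if viable else None

catalog = [
    {"name": "small",  "vcpus": 2, "hourly_cost": 0.05},
    {"name": "medium", "vcpus": 4, "hourly_cost": 0.10},
    {"name": "large",  "vcpus": 8, "hourly_cost": 0.20},
]
usage = [1.1, 1.4, 1.3, 1.2, 1.8, 1.5, 1.2, 1.6, 1.4, 3.1]
print(recommend_instance(usage, catalog)["name"])  # → medium
```

Real recommenders model far more than a single percentile, but the shape is the same: observed usage in, right-sized configuration out, which an orchestration process can then apply automatically.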

NB: A recent feature in Camunda Optimize lets you export organized, pre-processed data as a single dataset, making it easier to perform advanced analysis and enable machine learning predictions for process instances. The enhanced raw data reports now include additional columns, such as incident counts and user task counts, which can help predict process instances' completion times.
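Once the report is exported as a dataset, even a simple least-squares fit over one such column gives a first-cut completion-time predictor. The rows below are made-up sample data, not real export output:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b over paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Illustrative rows from an exported raw data report:
# (incident count, observed completion time in minutes)
incidents = [0, 0, 1, 1, 2, 3]
minutes   = [10, 12, 18, 20, 27, 35]

a, b = fit_line(incidents, minutes)
predict = lambda n_incidents: a * n_incidents + b
print(round(predict(2)))  # → 27 minutes expected for 2 incidents
```

In practice you would feed the full exported dataset into a proper ML library, but the principle is identical: historical process data in, completion-time estimate out.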

Natural language processing for documentation

Documentation is a critical yet often time-consuming part of the development process. Let's face it: anything that frees up dev time and lets teams get on with what's important is a plus. AI tools leveraging Natural Language Processing (NLP) can automatically generate and update documentation based on code changes and project updates. For instance, tools like Doxygen and Swagger can run automatically during the build process, ensuring that documentation stays in sync with the codebase without manual effort from developers.
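The "docs from code" idea can be sketched in a few lines with the standard library's `inspect` module. The `PaymentApi` class and its methods are invented purely to have something to document:

```python
import inspect

class PaymentApi:
    """Toy service used to demo doc generation."""

    def charge(self, amount: float, currency: str = "EUR") -> str:
        """Charge `amount` in `currency` and return a receipt id."""

    def refund(self, receipt_id: str) -> bool:
        """Refund a previous charge by receipt id."""

def document(cls):
    """Render markdown docs for a class's public methods by reading
    signatures and docstrings directly from the code, so the docs
    cannot drift from the implementation."""
    out = [f"## {cls.__name__}", inspect.getdoc(cls) or "", ""]
    for name, fn in inspect.getmembers(cls, inspect.isfunction):
        if name.startswith("_"):
            continue
        out.append(f"### `{name}{inspect.signature(fn)}`")
        out.append(inspect.getdoc(fn) or "*Undocumented.*")
        out.append("")
    return "\n".join(out)

print(document(PaymentApi))
```

Hooked into the build, this is the mechanism that keeps generated docs current without anyone editing them by hand.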

Chatbots for operational efficiency

AI-powered chatbots, such as Slack’s built-in bots or Hubot, can be integrated with orchestration tools to provide developers with an interactive interface for managing workflows. Developers can issue commands and receive notifications about their orchestrated processes directly within their communication platform of choice, streamlining operations and allowing for quicker responses to issues.
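Behind such chat integrations usually sits a small command dispatcher that maps messages to actions. The command names and the canned reply below are invented for illustration; a real bot would call the orchestration platform's API instead of returning a hardcoded string:

```python
def make_dispatcher():
    """Map chat commands like '/status <process>' to handler functions."""
    handlers = {}

    def command(name):
        def register(fn):
            handlers[name] = fn
            return fn
        return register

    def dispatch(message):
        parts = message.strip().split()
        if not parts or parts[0] not in handlers:
            return "Unknown command. Try: " + ", ".join(sorted(handlers))
        return handlers[parts[0]](*parts[1:])

    return command, dispatch

command, dispatch = make_dispatcher()

@command("/status")
def status(process_id):
    # Illustrative reply; a real handler would query the engine.
    return f"Process {process_id}: 3 instances active, 0 incidents"

print(dispatch("/status order-fulfilment"))
```

The same dispatcher shape works whether the front end is Slack, Hubot, or any other chat platform: the bot only needs to forward messages and post replies.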

NB: Camunda Modeler includes a chatbot that answers questions from the documentation, and BPMN Copilot provides even more AI assistance. Camunda subscribers can also implement Connectors to do this. Camunda itself uses an AI chatbot interface to quickly surface information from its own docs.

The fusion of AI tools and applications with process orchestration tools offers developers a powerful combination to enhance their productivity and the reliability of their systems. AI can significantly reduce the manual burden on development teams by automating routine tasks, predicting and addressing potential issues, and optimizing resources. As these technologies evolve, we can expect even more sophisticated integrations to further revolutionize how developers build and deploy software.

As development teams and organizations embrace these AI-enhanced orchestration capabilities, the future of software development will be more efficient, intelligent, and responsive to the rapidly changing market demands. The key to success is carefully selecting tools that integrate seamlessly with existing development workflows and enterprise orchestration platforms like Camunda.

How to use AI tools effectively

Here are a few steps developers can take to harness the full potential of AI tools in process orchestration:

  1. Assess your workflow
    Begin by evaluating your current development and deployment processes. Identify bottlenecks, repetitive tasks, and areas prone to errors. This will help you understand where AI tools can provide the most value.
  2. Choose complementary tools
    Select AI tools that complement and integrate well with your process orchestration and automation platform. Ensure these tools have robust APIs and support the programming languages or frameworks you use. Compatibility is crucial for a smooth integration.
  3. Start small and scale
    Implement AI tools incrementally. Start with one or two key areas, such as automated code reviews or predictive resource management. Once you’ve gauged their effectiveness and worked out any kinks, you can expand their use across other aspects of your processes.
  4. Train your team
    Ensure your development team is well-versed in the capabilities and usage of the selected AI tools. Proper training will enable them to effectively leverage these tools and integrate them into their daily work.
  5. Monitor and optimize
    Continuously monitor the performance and impact of AI tools on your workflow. Use the insights gained to fine-tune configurations and optimize processes. AI tools often improve with more data and use, so expect improvements over time.
  6. Stay updated
    The field of AI is rapidly advancing. Keep abreast of the latest developments in AI tools and process orchestration. By staying updated, you can take advantage of new features and capabilities as they emerge.
  7. Encourage feedback and innovation
    Encourage your team to provide feedback on the AI tools in use. Developers are the end-users and will have firsthand insights into the practical benefits and limitations. Foster a culture of innovation where team members can suggest or even develop new AI integrations that could benefit your workflows.

Incorporating AI into process orchestration is not a one-time event but an ongoing journey. As AI technologies evolve, seemingly moving the goalposts daily, so will the ways they can be applied to streamline development processes. However, developers and organizations can stay on the competitive edge by keeping an open mind and continually exploring new AI integrations.

Combining AI and process orchestration promises a future where developers can focus more on creative problem-solving and less on routine tasks. With the right tools and approach, developers can create a synergistic ecosystem that accelerates development and enhances the quality and reliability of software products.

For more information on Camunda’s AI-enabled process orchestration capabilities and how organizations can realize AI’s full potential across their enterprise, please contact us for a no-obligation demonstration.

The post AI Tools and Process Orchestration, the Perfect Match for Developers appeared first on Camunda.
