Intelligent by Design: A Step-by-Step Guide to AI Task Agents in Camunda
By Joyce Johnson | Camunda | Wed, 14 May 2025
https://camunda.com/blog/2025/05/step-by-step-guide-ai-task-agents-camunda/

In this step-by-step guide (with video), you'll learn about the latest ways to use agentic AI and take advantage of agentic orchestration with Camunda today.

Camunda is pleased to announce new features and functionality related to how we offer agentic AI. With this post, we provide detailed step-by-step instructions to use Camunda’s AI Agent to take advantage of agentic orchestration with Camunda.

Note: Camunda also offers an agentic AI blueprint on our marketplace.

Camunda’s approach to AI agents

Camunda has taken a systemic, future-ready approach for agentic AI by building on the proven foundation of BPMN. At the core of this approach is our use of the BPMN ad-hoc sub-process construct, which allows for tasks to be executed in any order, skipped, or repeated—all determined dynamically at runtime based on the context of the process instance.

This pattern is instrumental in introducing dynamic (non-deterministic) behavior into otherwise deterministic process models. Within Camunda, the ad-hoc sub-process becomes the agent’s decision workspace—a flexible execution container where large language models (LLMs) can assess available actions and determine the most appropriate next steps in real time.

We’ve extended this capability with the introduction of the AI Agent Outbound connector (example blueprint of usage) and the Embeddings Vector Database connector (example blueprint of usage). Together, they enable full-spectrum agentic orchestration, where workflows seamlessly combine deterministic flow control with dynamic, AI-driven decision-making. This dual capability supports both high-volume straight-through processing (STP) and adaptive case management, empowering agents to plan, reason, and collaborate in complex environments. With Camunda’s approach, AI agents can add the context needed to handle exceptions that fall out of STP.

This represents our next phase of AI Agent support and we intend to continue adding richer features and capabilities.

Camunda support for agentic AI

To power next-generation automation, Camunda embraces structured orchestration patterns. Camunda’s approach ensures your AI orchestration remains adaptive, goal-oriented, and seamlessly interoperable across complex, distributed systems.

As part of this evolution, Camunda has integrated Retrieval-Augmented Generation (RAG) into its orchestration fabric. RAG enables agents to retrieve relevant external knowledge—such as historical case data or domain-specific content—and use that context to generate more informed and accurate decisions. This is operationalized through durable, event-driven workflows that coordinate retrieval, reasoning, and human collaboration at scale.

Camunda supports this with our new Embeddings Vector Database Outbound connector—a modular component that integrates RAG with long-term memory systems. This connector supports a variety of vector databases, including both Amazon Managed OpenSearch (used in this exercise) and Elasticsearch.

With this setup, agents can inject knowledge into their decision-making loops by retrieving semantically relevant data at runtime. This same mechanism can also be used to update and evolve the knowledge base, enabling self-learning behaviors through continuous feedback.

To complete the agentic stack, Camunda also offers the AI Agent Outbound connector. This connector interfaces with a broad ecosystem of large language models (LLMs) like OpenAI and Anthropic, equipping agents with reasoning capabilities that allow them to autonomously select and execute ad-hoc sub-processes. These agents evaluate the current process context, determine which tasks are most relevant, and act accordingly—all within the governed boundaries of a BPMN-modeled orchestration.

How this applies to our exercise

Before we step through an exercise, let’s quickly review how these new components and Camunda’s approach will be used in this example and in your agentic AI orchestration.

The first key component is the AI Task Agent. It is the brains behind the operation. You give this agent a goal, instructions, limits, and a chain of thought so it can make decisions on how to accomplish the set goal.

The second component is the ad-hoc sub-process. This encompasses the various tools and tasks that can be performed to accomplish the goal.

A prompt is provided to the AI Agent and it decides which tools should be run to accomplish this goal. The agent reevaluates the goal and the information from the ad-hoc sub-process and determines which of these tools, if any, are needed again to accomplish the goal; otherwise, the process ends.
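The feedback loop described above (prompt in, tool selection, tool execution, reevaluation) can be sketched in Python. This is a hypothetical illustration of the control flow only, not Camunda's implementation; `call_llm` and `run_tool` stand in for the LLM call and connector execution that the engine handles for you:

```python
# Hypothetical sketch of the agent feedback loop; illustration only.
# call_llm and run_tool stand in for the LLM call and tool execution.

def agent_loop(prompt, tools, call_llm, run_tool, max_calls=10):
    """Ask the LLM which tools to run, run them, feed results back, repeat."""
    context = [{"role": "user", "content": prompt}]
    for _ in range(max_calls):                 # cost-control limit
        decision = call_llm(context, tools)    # LLM picks zero or more tools
        tool_calls = decision.get("toolCalls", [])
        if not tool_calls:                     # no tools needed: goal reached
            return decision.get("answer")
        results = [run_tool(tc) for tc in tool_calls]
        context.append({"role": "tool", "content": results})
    raise RuntimeError("Maximum model calls reached")
```

The loop ends either when the model requests no further tools or when the call limit is reached, which mirrors the cost-control limit you will configure later in this guide.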

Now armed with this information, we can get into our example and what you are going to build today.

Example overview

This BPMN process defines a message delivery service for the Hawk Emporium where AI-powered task agents make real-time decisions to interpret customer requests and select the optimal communication channels for message delivery.

Our example model for this process is the Message Delivery Service as shown below.

Message-delivery-service-agentic-orchestration

The process begins with a user filling out a form that includes a message, the individual(s) to send it to, and the sender. Based on this input, a script task generates a prompt to send to the AI Task Agent. The AI Task Agent processes the generated prompt and determines the appropriate tasks to execute. Based on the AI Agent’s decision, the process either ends or continues to refine the result using various tools until the message is delivered.

The tasks that can be performed are located in the ad-hoc sub-process and are:

  1. Send a Slack message (Send Slack Message) to specific Slack channels,
  2. Send an email message (Send an Email) using SendGrid,
  3. Request additional information (Ask an Expert) with a User Task and corresponding form.

If the AI Task Agent has all the information it needs to generate, send, and deliver the message, it will send the appropriate message via the correct tool for the request. If the AI Agent determines it needs additional information, such as a missing email address or the tone of the message, the agent will route the process instance to a human for that information.

The process completes when no further action is required.

Process breakdown

Let’s take a deeper dive into the components of the BPMN process before jumping in to build and execute it.

AI Task Agent

The AI Task Agent for this exercise uses AWS Bedrock’s Claude 3 Sonnet model for processing requests. The agent makes decisions on which tools to use based on the context. You can alternatively use Anthropic or OpenAI.

SendGrid

For the email message task, you will be sending email as community@camunda.com. Please note that if you use your own SendGrid account, the sender will change to the email address configured for that account.

Slack

For the Slack message task, you will need to create the following channels in your Slack organization:

  • #good-news
  • #bad-news
  • #other-news

Assumptions, prerequisites, and initial configuration

A few assumptions are made for those who will be using this step-by-step guide to implement your first agentic AI process with Camunda’s new agentic AI features. These are outlined in this section.

The proper environment

In order to take advantage of the latest and greatest functionality provided by Camunda, you will need to have a Camunda 8.8-alpha4 cluster or higher available for use. You will be using Web Modeler and Forms to create your model and human task interface, and then Tasklist when executing the process.

Required skills

It is assumed that those using this guide have the following skills with Camunda:

  • Form Editor – the ability to create forms for use in a process.
  • Web Modeler – the ability to create elements in BPMN and connect elements together properly, link forms, and update properties for connectors.
  • Tasklist – the ability to open items and act upon them accordingly as well as starting processes.
  • Operate – the ability to monitor processes in flight and review variables, paths and loops taken by the process instance.

Video tutorial

Accompanying this guide, we have created a step-by-step video tutorial for you. The steps provided in this guide closely mirror the steps taken in the video tutorial. We have also provided a GitHub repository with the assets used in this exercise. 

Connector keys and secrets

If you do not have existing accounts for the connectors that will be used, you can create them.

You will need to have an AWS account with the proper credentials for AWS Bedrock. If you do not have one, you can follow the instructions on the AWS site to obtain the required keys:

  • AWS Region
  • AWS Access key
  • AWS Secret key

You will also need a SendGrid account and a Slack organization. You will need to obtain an API key for each service which will be used in the Camunda Console to create your secrets.

Secrets

The secrets will be referenced in your model using {{secrets.yourSecretHere}} where yourSecretHere represents the name of your connector secret.

For this example to work you’ll need to create secrets with the following names if you use our example and follow the screenshots provided:

  • SendGrid
  • Slack
  • AWS_SECRET_KEY
  • AWS_ACCESS_KEY
  • AWS_REGION

Separating sensitive information from the process model is a best practice. Since we will be using a few connectors in this model, you will need to create the appropriate connector secrets within your cluster. You can follow the instructions provided in our documentation to learn about how to create secrets within your cluster.

Now that you have all the background, let’s jump right in and build the process.

Note: Don’t forget you can download the model and assets from the GitHub repository.

Overview of the step-by-step guide

For this exercise, we will take the following steps:

  • Create the initial high-level process in design mode.
    • Create the ad-hoc sub-process of AI Task Agent elements.
  • Implement the process.
    • Configure the connectors.
      • Configure the AI Agent connector.
      • Configure the Slack connector.
    • Create the starting form.
    • Configure the AI Task Agent.
    • Update the gateways for routing.
    • Configure the ad-hoc sub-process.
    • Connect the ad-hoc sub-process and the AI Task Agent.
  • Deploy and run the process.
  • Enhance the process, deploy, and run again.

Build your initial process

Create your process application

The first step is to create a process application for your process model and any other associated assets. Create a new project using the blue button at the top right of your Modeler environment.

Build-process

Enter the name for your project. In this case we have used the name “AI Task Agent Tutorial” as shown below.

Process-name

Next, create your process application using the blue button provided.

Enter the name of your process application, in this example “AI Task Agent Tutorial,” select the Camunda 8.8-alpha4 (or greater) cluster that you will be using for your project, and select Create to create the application within this project.

Initial model

The next step is to build your process model in BPMN and the appropriate forms for any human tasks. We will be building the model represented below.

Message-delivery-service-agentic-orchestration

Click on the process “AI Agent Tutorial” to open the diagram. First, change the name of your process to “Message Delivery Service” and then switch to Design mode as shown below.

Design-mode

These steps will help you create your initial model.

  1. Name your start event. We have called it “Message needs to be sent” as shown below. This start event will have a form front-end that we will build a bit later.
    Start-event

  2. Add an end event and call it “Message delivered”
    End-event

  3. The step following the start event will be a script task called “Create Prompt.” This task will be used to hold the prompt for the AI Task Agent.
    Script-task

  4. Now we want to create the AI Task Agent. We will build out this step later after building our process diagram.
    Ai-agent

Create the ad-hoc sub-process

Now we are at the point in our process where we want to create the ad-hoc sub-process that will hold our toolbox for the AI Task Agent to use to achieve the goal.

  1. Drag and drop the proper element from the palette for an expanded subprocess.
    Sub-process


    Your process will now look something like this.
    Sub-process-2

  2. Now this is a standard sub-process, which we can see because it has a start event. We need to remove the start event and then change the element to an “Ad-hoc sub-process.”
    Ad-hoc sub-process

    Once the type of sub-process is changed, you will see the BPMN symbol (~) in the subprocess denoting it is an ad-hoc sub-process.
  3. Now you want to change this to a “Parallel multi-instance” so the elements in the sub-process can be run more than once, if required.
    Parallel multi-instance


    This is the key to our process, as the ad-hoc sub-process will contain a set of tools that may or may not be activated to accomplish the goal. Although BPMN is usually very strict about what gets activated, this construct allows us to control what gets triggered by what is passed to the sub-process.
  4. We need to make a decision after the AI Task Agent executes which will properly route the process instance back through the toolbox, if required. So, add a mutually exclusive gateway between the AI Task Agent and the end event, as shown below, and call it “Should I run more tools?”.
    Run-tools

  5. Now connect that task to the right hand side of your ad-hoc sub-process.
    Connect-to-ad-hoc-sub-process

  6. If no further tools are required, we want to end this process. If there are, we want to go back to the ad-hoc sub-process. Label the route to the end event as “No” and the route to the sub-process as “Yes” to route appropriately.
    Label-paths

  7. Take a little time to expand the physical size of the sub-process as we will be adding elements into it.
  8. We are going to start by just adding a single task for sending a Slack message.
    Slack-message

  9. Now we need to create the gateway to loop back to the AI Task Agent to evaluate if the goal has been accomplished. Add a mutually exclusive gateway after the “Create Prompt” task with an exit route from the ad-hoc sub-process to the gateway.
    Loop-gateway

Implement your initial process

We will now move into setting up the details for each construct to implement the model, so switch to the Implement tab in your Web Modeler.

Configure remaining tasks

The next thing you want to do in implementation mode is to use the correct task types for the constructs that are currently using a blank task type.

AI Agent connector

First we will update the AI Task Agent to use the proper connector.

  1. Confirm that you are using the proper cluster version. You can do this on the lower right-hand side of Web Modeler; be sure to select a cluster that is 8.8-alpha4 or higher.
    Zeebe-88-cluster

  2. Now select the AI Task Agent and choose to change the element to “Agentic AI Connector” as shown below.
    Agentic-ai-connector-camunda


    This will change the icon on your task agent to look like the one below.
    Agentic-ai-connector-camunda-2

Slack connector

  1. Select the “Send a Slack Message” task inside the ad-hoc sub-process and change the element to the Slack Outbound Connector.
    Slack-connector

Create the starting form

Let’s start by creating a form to kick off the process.

Note: If you do not want to create the form from scratch, simply download the forms from the GitHub repository provided. To build your own, follow these instructions.

The initial form is required to ask the user:

  • Which individuals at Hawk Emporium should receive the message
  • What the message will say
  • Who is sending the message

The completed form should look something like this.

Form

To enter the Form Builder, select the start event, click the chain link icon and select + Create new form.

Start by creating a Text View for the title and enter the text “# What do you want to Say?” in the Text field on the component properties.

You will need the following fields on this form:

Field                        Type  Description  Req?  Key
To whom does this concern?   Text               Y     person
What do you want to say?     Text               Y     message
Who are you?                 Text               Y     sender

Once you have completed your form, click Go to Diagram -> to return to your model.

Create the prompt

Now we want to generate the prompt that will be used in our script task to tell the AI Task Agent what needs to be done.

  1. Select the “Create Prompt” script task and update the properties starting with the “Implementation” type which will be set to “FEEL expression.”

    This action will open two additional required variables: Result variable and FEEL expression.
  2. For the “Result” variable, you will create the variable for the prompt, so enter prompt here.
  3. For the FEEL expression, you will want to create your prompt.
    "I have a message from " + sender + " they would like to convey the following message: " + message + " It is intended for " + person

    Feel-prompt-message
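Since this FEEL expression is simple string concatenation, you can sanity-check the prompt the script task produces with a quick Python sketch (the function name here is illustrative, not part of the model):

```python
# Python equivalent of the FEEL concatenation in the "Create Prompt" script
# task; the function name is illustrative and not part of the model.

def create_prompt(sender, message, person):
    return ("I have a message from " + sender +
            " they would like to convey the following message: " + message +
            " It is intended for " + person)
```

With the example inputs used later in this guide, the agent receives one plain sentence combining the sender, the message, and the intended recipient.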

Configure the AI Task Agent

Now we need to configure the brains of our operation, the AI Task Agent. This task takes care of accepting the prompt and sending the request to the LLM to determine next steps. In this section, we will configure this agent with specific variables and values based on our model and using some default values where appropriate.

  1. First, we need to pick the “Model Provider” that we will use for our exercise, so we are selecting “AWS Bedrock.”
    Agentic-ai-connector-properties-camunda


    Additional fields specific to this model will open in the properties panel for input.
  2. The next field is the “Region” for AWS. In this case, a secret was created for the region (AWS_REGION), which will be used in this field.
    Agentic-ai-connector-properties-camunda-2

    Remember the secrets will be referenced in your model using {{secrets.yourSecretHere}} where yourSecretHere represents the name of your connector secret.

    Note: See the Connector and secrets section in this blog for more information on what is required, the importance of protecting these keys, and how to create the secrets.
  3. Now we want to update the authorization credentials with our AWS Access Key and our AWS Secret key from our connector secrets.
    Agentic-ai-connector-properties-camunda-3

  4. The next part is to set the Agent Context in the “Memory” section of your task. This variable is very important as you can see by the text underneath the variable box.

    The agent context variable contains all relevant data for the agent to support the feedback loop between user requests, tool calls and LLM responses. Make sure this variable points to the context variable which is returned from the agent response.

In this case, we will be creating a variable called agent, and within that variable there is another variable called context. So for this field, we will use agent.context. This variable will play an important part in this process.

    Agentic-ai-connector-properties-camunda-4

    We will leave the maximum messages at 20, which is a solid limit.
  5. Now we will update the system prompt. For this, we have provided a detailed system prompt for you to use for this exercise. You are welcome to create your own. It will be entered in the “System Prompt” section for the “System Prompt” variable.

    Hint: If you are creating your own prompt, try taking advantage of tools like ChatGPT or other AI tools to help you build a strong prompt. For more on prompt engineering, you can also check out this blog series.

    Agentic-ai-connector-properties-camunda-system-prompt

    If you want to copy and paste in the prompt, you can use the code below:
You are **TaskAgent**, a helpful, generic chat agent that can handle a wide variety of customer requests using your own domain knowledge **and** any tools explicitly provided to you at runtime.

────────────────────────────────
# 0. CONTEXT — WHO IS “USER”?
────────────────────────────────
• **Every incoming user message is from the customer.**  
• Treat “user” and “customer” as the same person throughout the conversation.  
• Internal staff or experts communicate only through the expert-communication tool(s).

────────────────────────────────
# 1. MANDATORY TOOL-DRIVEN WORKFLOW
────────────────────────────────
For **every** customer request, follow this exact sequence:

1. **Inspect** the full list of available tools.  
2. **Evaluate** each tool’s relevance.  
3. **Invoke at least one relevant tool** *before* replying to the customer.  
   • Call the same tool multiple times with different inputs if useful.  
   • If no domain-specific tool fits, you **must**  
     a. call a generic search / knowledge-retrieval tool **or**  
     b. escalate via the expert-communication tool (e.g. `ask_expert`, `escalate_expert`).  
   • Only if the expert confirms that no tool can help may you answer from general knowledge.  
   • Any decision to skip a potentially helpful tool must be justified inside `<reflection>`.  
4. **Communication mandate**:  
   • To gather more information from the **customer**, call the *customer-communication tool* (e.g. `ask_customer`, `send_customer_msg`).  
   • To seek guidance from an **expert**, call the *expert-communication tool*.  
5. **Never** invent or call tools that are not in the supplied list.  
6. After exhausting every relevant tool—and expert escalation if required—if you still cannot help, reply exactly with  
   `ERROR: <brief explanation>`.

────────────────────────────────
# 2. DATA PRIVACY & LOOKUPS
────────────────────────────────
When real-person data or contact details are involved, do **not** fabricate information.  
Use the appropriate lookup tools; if data cannot be retrieved, reply with the standard error message above.

────────────────────────────────
# 3. CHAIN-OF-THOUGHT FORMAT  (MANDATORY BEFORE EVERY TOOL CALL)
────────────────────────────────
Wrap minimal, inspectable reasoning in *exactly* this XML template:

<thinking>
  <context>…briefly state the customer’s need and current state…</context>
  <reflection>…list candidate tools, justify which you will call next and why…</reflection>
</thinking>

Reveal **no** additional private reasoning outside these tags.

────────────────────────────────
# 4. SATISFACTION CONFIRMATION, FINAL EMAIL & TASK RESOLUTION
────────────────────────────────
A. When you believe the request is fulfilled, end your reply with a confirmation question such as  
   “Does this fully resolve your issue?”  
B. If the customer answers positively (e.g. “yes”, “that’s perfect”, “thanks”):  
   1. **Immediately call** the designated email-delivery tool (e.g. `send_email`, `send_customer_msg`) with an appropriate subject and body that contains the final solution.  
   2. After that tool call, your *next* chat message must contain **only** this word:  
      RESOLVED  
C. If the customer’s very next message already expresses satisfaction without the confirmation question, do step B immediately.  
D. Never append anything after “RESOLVED”.  
E. If no email-delivery tool exists, escalate to the expert-communication tool; if the expert confirms none exists, reply with an error as described in §1-6.
  1. Remember that in the Create Prompt task, we stored the prompt in a variable called prompt. We will use this variable in the “User Prompt” section for the “User Prompt.”
    Image54

  2. The key to this step is the set of tools at the disposal of the AI Task Agent, so we need to link the agent to the ad-hoc sub-process. We do this by mapping the ID of the sub-process to the proper tools field in the AI Task Agent.
    1. Start by selecting your ad-hoc sub-process and giving it a name and an ID. In the example, we will use “Hawk Tools” for the name and hawkTools for the “ID.”
      Link-agent-to-ad-hoc-sub-process-camunda-1

    2. Go back to the AI Task Agent and update the “Ad-hoc subprocess ID” to hawkTools for the ID of the sub-process.
      Link-agent-to-ad-hoc-sub-process-camunda-2

    3. Now we need a variable to store the results from calling the toolbox to place in the “Tool Call Results” variable field. We will use toolCallResults.
      Link-agent-to-ad-hoc-sub-process-camunda-3

    4. There are several other parameters of importance, and we will use the defaults for most of them. We will leave “Maximum model calls” in the “Limits” section set at “10,” which limits the number of times the model can be called. This is important for cost control.
      Link-agent-to-ad-hoc-sub-process-camunda-4

    5. There are additional parameters to help provide constraints around the results. Update these as shown below.
      Link-agent-to-ad-hoc-sub-process-camunda-5

    6. Now we need to update the “Output Mapping” section, starting with the “Result variable,” where we will use our agent variable that will contain all the components of the result, including the chain of thought taken by the AI Task Agent.
      Link-agent-to-ad-hoc-sub-process-camunda-6

Congratulations, you have completed the configuration of your AI Task Agent. Now we just need to make some final connections and updates before we can see this running in action.

Gateway updates

We are going to use the variable values from the AI Task Agent to determine if we need to run more tools.

  1. Select the “Yes” path and add the following:
    not(agent.toolCalls = null) and count(agent.toolCalls) > 0
    Flow-condition

  2. For the “No” path, we will make this our default flow.
    Default-flow
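The “Yes” condition reads naturally as plain Python. Here is a sketch of the equivalent check, assuming for illustration that the agent result arrives as a dictionary (in the running process it is a FEEL context):

```python
# Python analogue of the FEEL condition on the "Yes" flow:
#   not(agent.toolCalls = null) and count(agent.toolCalls) > 0
# Assumes the agent result is a dictionary, for illustration only.

def should_run_more_tools(agent):
    tool_calls = agent.get("toolCalls")
    return tool_calls is not None and len(tool_calls) > 0
```

In other words, the instance loops back into the toolbox only when the agent returned a non-empty list of tool calls; otherwise the default “No” flow ends the process.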

Ad-hoc sub-process final details

We first need to provide the input collection of tools for the sub-process to use, and we do that by updating the “Input collection” in the “Multi-instance” section.

  1. We will then provide each individual “Input element” with the single toolCall.
    Toolcall-toolcallresults
  2. We will then update the “Output Collection” to our result variable, toolCallResults.
    Toolcall-toolcallresults

  3. Finally, we want to create a FEEL expression for our “Output element” as shown below.
    {
      id: toolCall._meta.id,
      name: toolCall._meta.name,
      content: toolCallResult
    }
    Output-element


    This expression provides the id, name and content for each tool.
  4. Finally, we need to provide the variable for the “Active elements collection,” which indicates which elements are activated in the sub-process.
    [toolCall._meta.name]
    Active-element

    To better explain this, the AI Task Agent determines a list of elements (tools) to run and this variable represents which element gets activated in this instance.
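Putting the multi-instance settings together, the mapping behaves roughly like the Python sketch below. This is an illustration of the data flow only; `run_tool` stands in for whatever connector or task each tool call activates:

```python
# Sketch of the multi-instance mapping: one instance per toolCall in the
# input collection, each producing an output element with id, name, and the
# tool's result as content. run_tool is illustrative, not part of the model.

def collect_tool_results(tool_calls, run_tool):
    tool_call_results = []                    # the "Output collection"
    for tool_call in tool_calls:              # the "Input collection"
        tool_call_result = run_tool(tool_call)
        tool_call_results.append({            # the FEEL "Output element"
            "id": tool_call["_meta"]["id"],
            "name": tool_call["_meta"]["name"],
            "content": tool_call_result,
        })
    return tool_call_results
```

The resulting list is what flows back to the AI Task Agent as toolCallResults so it can decide whether the goal has been met.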

Connect sub-process elements and the AI Task Agent

Now, how do we tell the agent that it has access to the tools in the ad-hoc subprocess?

  1. First of all, we are going to use the “Element Documentation” field to help us connect these together. We will add some descriptive text about the element’s job. In this case, we will be using:
    This can send a slack message to everyone at Niall's Hawk Emporium
    Element-documentation

Now we need to provide the Slack connector with the message to send and what channel to send that message on.

  1. We need to use a FEEL expression for our message, taking advantage of the keyword fromAi, and we will enter some additional guidance in the expression. Something like this:
    fromAi(toolCall.slackMessage, "This is the message to be sent to slack, always good to include emojis")
    Message


    Notice that we have used our toolCall variable again and told the AI that it needs to provide a variable called slackMessage.
  2. We also need to explain to the AI which channel is appropriate for the type of message being sent. Remember that we created three (3) different channels in our Slack organization. We will use another FEEL expression to provide guidance on the channel that should be used.
    fromAi(toolCall.slackChannel, "There are 3 channels to use they are called, '#good-news', '#bad-news' and '#other-news'. Their names are self explanatory and depending on the type of message you want to send, you should use one of the 3 options. Make sure you  use the exact name of the channel only.")
    Channels

  3. Finally, be sure to add your secret for “Authentication” for Slack in the “OAuth token” field. In our case this is:
    {{secrets.Slack}}
    Secrets
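Conceptually, the element documentation plus each fromAi description together form the tool definition the LLM sees when choosing its next step. The sketch below is a rough, hypothetical illustration of that idea; it is not Camunda's actual internal format:

```python
# Rough, hypothetical illustration of how a tool might be advertised to the
# LLM: the element documentation becomes the description, and each fromAi
# call contributes a named parameter. This is NOT Camunda's internal format.

def build_tool_schema(name, documentation, from_ai_params):
    """Combine element documentation and fromAi descriptions into one schema."""
    return {
        "name": name,
        "description": documentation,
        "parameters": dict(from_ai_params),  # param name -> guidance text
    }
```

This is why clear documentation text and descriptive fromAi hints matter so much: they are the only context the model has for deciding when and how to use each tool.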

Well, you did it! You now should have a working process model that accesses an AI Task Agent to determine which elements in its toolbox can help it achieve its goal. Now you just need to deploy it and see it in action.

Deploy and run your model

Now we need to see if our model will deploy. If you haven’t already, you might want to give your process a better name and ID, something like what is shown below.

Name-process
  1. Click Deploy and your process should deploy to the selected cluster.
    Deploy-agentic-ai-process-camunda

  2. Go to Tasklist, select Processes, find your process called “Message System,” and start it by clicking the blue Start Process -> button.
    Start-process
  3. You will be presented with the form you created so that you can enter who you are, the message content and who should receive the message. Enter the following for the fields:
    • To whom does this concern?
      Everyone at the Hawk Emporium
    • What do you want to say?
      We have a serious problem. Hawks are escaping. Please be sure to lock cages. Can you make sure this issue is taken more seriously?
    • Who are you?
      Joyce, assistant to Niall - Owner, Hawk Emporium
      Or enter anything you want for this.

Your completed form should look something like the one shown below.

Form

The process is now running and should post a Slack message to the appropriate channel, so open your Slack application.

  1. We can assume that this would likely be a “bad news” message, so let’s review our Slack channels and see if something comes to the #bad-news channel. You should see a message that might appear like this one.
    Ai-results-slack

  2. Open Camunda Operate and locate your process instance. It should look something like that seen below.
    Camunda-operate-check

  3. You can review the execution and see what took place and the variable values.
    Camunda-operate-check-details

You have successfully executed your first AI Task Agent and the tasks associated with that agent, but let’s take this a step further and add a few additional options for our AI Task Agent to use when trying to achieve its “send message” goal.

Add tasks to the toolbox

Let’s give our AI Task Agent a few more options to help it accomplish its goal of sending the proper message. To do that, we are going to add a couple more tools within our ad-hoc sub-process.

Add a human task

The first thing we want to do is add a human task as an option.

  1. Drag another task into your ad-hoc sub-process and call it “Ask an Expert”.
  2. Change the element type to a “User Task.” The result should look something like this.
    Add-tasks


    Now we need to connect this to our sub-process and provide it as an option to the AI Task Agent.
  3. Update the “Element Documentation” field with the information about this particular element. Something like:
    If you need some additional information that would help you with your request, you can ask this expert.
    Element-documentation-user-task

  4. We will need to provide the expert with some inputs, so hover over the + and click Create+ to create a new input variable.
  5. For the “Local variable name” use aiquestion, and then use a FEEL expression for the “Variable assignment value” following the same fromAi pattern we used before.
    fromAi(toolCall.aiquestion, "Add here the question you want to ask our expert. Keep it short and be friendly", "string")
    User-task-inputs
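Conceptually, each fromAi(...) call declares a parameter the LLM must fill in when it invokes the tool: a name, a description to guide the model, and an optional type. A rough Python sketch of that idea (the names here are illustrative, not Camunda internals):

```python
from dataclasses import dataclass

@dataclass
class ToolParam:
    """One parameter the LLM is asked to supply when calling a tool."""
    name: str          # e.g. "aiquestion"
    description: str   # guidance shown to the model
    type: str = "string"

def from_ai(name, description, type="string"):
    # Sketch: declare the parameter schema; the real engine substitutes
    # the model-provided value into the local variable at runtime.
    return ToolParam(name, description, type)

param = from_ai("aiquestion",
                "Add here the question you want to ask our expert. "
                "Keep it short and be friendly")
```

At runtime, the engine resolves the model's answer into the local variable you named, so the task sees an ordinary process variable.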

  6. In this case, the AI Task Agent needs to see the response from the expert so that it can use this information to determine how to achieve our goal. Add an “Output variable” called toolCallResult, and provide the answer using the following JSON in the “Variable assignment value.”
    {
      "Personal_info_response": humanAnswer
    }

    Your output variable section should now look like that shown below.
    User-task-output
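The output mapping simply wraps the expert's free-text answer in a one-key context so the AI Task Agent can read it back as a tool result. In plain terms (a sketch of the mapping's effect, not Camunda internals):

```python
def build_tool_call_result(human_answer: str) -> dict:
    # Mirrors the FEEL context {"Personal_info_response": humanAnswer}
    return {"Personal_info_response": human_answer}
```

Whatever the expert types into the form becomes the value the agent reasons over on its next iteration.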

  7. Now we need to create a form for this user task to display the question and give the user a place to enter their response to the question. Select the “Ask an Expert” task and choose the link icon and then click on the + Create new form from the dialog.
    Add-form
         
    New-form

  8. The form we need to build will look something like this:
    Question-from-ai


    Start by creating a Text View for the title and enter the text “# Question from AI” in the Text field on the component properties.

    You will need the following fields on this form:
Field           | Type      | Description | Req? | Key
{{aiquestion}}  | Text view |             | N    |
Answer          | Text area |             | Y    | humanAnswer

The Text view field will display the value of the aiquestion variable passed to this task, and the Text area gives the expert a place to enter the answer, which is stored in the humanAnswer variable.

Once you have completed your form, click Go to Diagram -> to return to your model.

Because we have already connected the AI Task Agent to the ad-hoc sub-process and the tools it can use, we do not have to provide more at this step.

Optional: Send an email

If you have a SendGrid account and key, you can complete the steps below; if you do not, you can simply keep the two existing elements in your ad-hoc sub-process for this exercise.

  1. Create one more task in your ad-hoc sub-process and call it “Send an Email.”
  2. Change the task type to use the SendGrid Outbound Connector.
  3. Enter your secret for the SendGrid API key using the format previously discussed.

    Remember the secrets will be referenced in your model using {{secrets.yourSecretHere}} where yourSecretHere represents the name of your connector secret. In this case, we have used:
    {{secrets.SendGrid}}
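Conceptually, the {{secrets.Name}} placeholder is replaced with the stored secret value when the connector runs — Camunda performs this substitution server-side, so the secret never appears in your model. A rough sketch of what that resolution looks like (illustrative code, not Camunda's implementation):

```python
import re

def resolve_secrets(value: str, secrets: dict) -> str:
    # Replace each {{secrets.Name}} token with its stored value.
    return re.sub(r"\{\{secrets\.([\w-]+)\}\}",
                  lambda m: secrets[m.group(1)], value)
```

This is why you commit only the placeholder to your model: the actual key lives in the cluster's secret store.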
  4. You will need to provide the reason the AI Task Agent might want to use this element in the Element documentation. The text below can be used.
    This is a service that lets you send an email to someone.
    Email

  5. For the Sender “Name” you want to use the information provided to the AI Task Agent about the person that is requesting the message be sent. We do this using the following information.
    fromAi(toolCall.emailSenderName, "This is the name of the person sending the email")

    In our case, the outgoing “Email address” is “community@camunda.com” which we also need to add to the “Sender” section of the connector properties. You will want to use the email address for your own SendGrid configuration.
    Sender-name-fromai


    Note: Don’t forget to click the fx icon before entering your expressions.
  6. For the “Receiver,” we also will use information provided to the AI Task Agent about who should receive the message. For the “Name”, we can use this expression:
    fromAi(toolCall.emailReceiveName, "This is the name of the person getting the email")

    For the Email address, we will need to make sure that the AI Task Agent knows the email address for the intended individual(s) for the message.
    fromAi(toolCall.emailReceiveAddress, "This is the email address of the person you want to send an email to. Make very sure that the email address is correctly formatted, and be completely sure that the email is correct. Don't send an email unless you're sure it's going to the right person")

    Your properties should now look something like this.
    Receiver-name-fromai
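Note that the connector performs no validation of its own — the prompt leans entirely on the model to double-check the address. The kind of sanity check the prompt is asking for amounts to something like this sketch (a deliberately minimal shape check, not full RFC-compliant validation):

```python
import re

# Minimal shape check: one "@", no whitespace, a dot in the domain part.
# Mirrors the spirit of the "make very sure the email address is
# correctly formatted" instruction given to the model.
EMAIL_SHAPE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def looks_like_email(address: str) -> bool:
    return EMAIL_SHAPE.match(address) is not None
```

If misdirected email is a real risk in your process, consider adding a deterministic validation step rather than relying on the prompt alone.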

  7. Select “Simple (no dynamic template)” for the “Mail contents” property in the “Compose email” section.
  8. In the “Compose email” section for the subject, we will let the AI Task Agent determine the best subject for the email, so this text will provide that to the process.
    fromAi(toolCall.emailSubject, "Subject of the email to be sent")
  9. The AI Task Agent will determine the email message body as well with the following:
    fromAi(toolCall.emailBody, "Body of the email to be sent")

    Your properties should look something like this.
    Properties-fromai

That should do it. You now have three tools for your AI Task Agent to use in order to achieve the goal of sending a message for you.

Deploy and run again

Now that you have more options for the AI Task Agent, let’s try running this again. However, we are going to make an attempt to have the AI Task Agent use the human task to show how this might work.

  1. Deploy your newly updated process as you did before.
  2. Go to Tasklist and Processes and find your process called “Message System,” then start the process by clicking the blue button.
    Start-process
  3. You will be presented with the form you created so that you can enter who you are, the message content and who should receive the message. Enter the following for the fields:
    • To whom does this concern?
      I want to send this to Reb Brown. But only if he is working today. So, find that out.
    • What do you want to say?
      Can you please stop feeding the hawks chocolate? It is not healthy.
    • Who are you?
      Joyce, assistant to Niall - Owner, Hawk Emporium
      Or enter anything you want for this.

Your completed form should look something like the one shown below.

New-form-to-user-task-from-ai

The process is now running.

  1. Open Camunda Operate and locate your process instance. It should look something like that seen below.
    Camunda-operate-check-again

  2. You can review the execution and see what took place and the variable values.
  3. If you then access Tasklist and select the Tasks tab, you should have an “Ask an Expert” task asking you if Reb Brown is working today. Respond as follows:
    He is working today, but it’s also his birthday, so it would be nice to let him know the important message with a happy birthday as well.

    What-ai-asked-and-user-answer

  4. In Operate, you will see that the process instance has looped around with this additional information.
    Camunda-operate-check-details-again


    You can also toggle the “Show Execution Count” to see how many times each element in the process was executed.
    Camunda-operate-execution-count

  5. Now open your Slack application and you should have a message now that the AI Task Agent knows that not only is Reb Brown working, but it is his birthday.
    Ai-message

Congratulations! You have successfully executed your first AI Task Agent and the tools associated with it.

We encourage you to add more tools to the ad-hoc sub-process to continue to enhance your AI Task Agent process. Have fun!

Congratulations!

You did it! You completed building an AI Agent in Camunda from start to finish including running through the process to see the results. You can try different data in the initial form and see what happens with new variables. Don’t forget to watch the accompanying step-by-step video tutorial if you haven’t already done so.

The post Intelligent by Design: A Step-by-Step Guide to AI Task Agents in Camunda appeared first on Camunda.

Build Your First Camunda RPA Task https://camunda.com/blog/2025/05/build-your-first-camunda-rpa-task/ Wed, 07 May 2025 13:14:16 +0000 https://camunda.com/?p=137622 Leverage robotic process automation (RPA) to automate tasks, enhance efficiency, and minimize human errors.

You may have heard that Camunda now provides Camunda Robotic Process Automation (RPA) for automating manual, repetitive tasks to streamline your orchestrated processes. But would you know where to begin?

This blog will provide step-by-step instructions so you can build your first Camunda RPA task and then run it.

RPA leverages software robots to automate tasks traditionally handled manually. By automating these processes, organizations can enhance efficiency and minimize human errors. These RPA tasks can be integrated into your end-to-end Camunda processes by connecting isolated bots.

Terminology

Before getting started, it is important to understand the terminology related to robotic process automation.

  • Bot/Robot: A software agent that executes tasks and automates processes by interacting with applications, systems, and data.
  • Robot script: The script that tells the robot what actions, such as keystrokes and mouse clicks, to execute.

Overview of the model you will build

For this example, you’ll build a model similar to the one below.

Camunda RPA Task 1

In this scenario, you’re providing the end user with the ability to generate a QR code for a website. The RPA bot will access a QR website (www.qrcode-monkey.com) to generate the QR code and bring that back to the end user.

The process starts with a form for the URL that requires a QR code. Once the URL is entered and the form is submitted, the process will use a `.robot` script that provides this URL to the website to generate the QR code. Once generated, the QR code is displayed in a form that the end user can download if desired.

What you’re going to do today is:

  • Create a model in Camunda SaaS.
  • Install the RPA runtime on your local machine.
  • Create an RPA script on that local machine.
  • Deploy the RPA script to the cloud.
  • Run your process, which will launch the RPA runtime on your local machine.

Assumptions and initial configuration

If you’re using this step-by-step guide to create your first Camunda RPA bot process, let’s make sure you have a few things in order before you begin.

  • In order to take advantage of the latest and greatest functionality provided by Camunda, you’ll need to have a Camunda 8.7.x cluster or higher available for use in Camunda SaaS.
  • You will be using Web Modeler and Forms to create your model and human task interface.
  • You’ll use Desktop Modeler to create your RPA script.
  • You’ll deploy and execute your process in Camunda SaaS.
  • You will also be running the Camunda RPA robot on your machine (RPA Worker), and you will be deploying your RPA script to the cloud so when your process runs, it will trigger the robot locally.

Required skills

It is assumed that those using this guide have the following skills with Camunda:

  • Form Editor—the ability to create forms for use in a process.
  • Web Modeler—the ability to create elements in BPMN and connect elements together properly, link forms, and update properties for connectors.
  • Desktop Modeler—understanding of installation and use of Desktop Modeler.
  • TaskList—the ability to open items and act upon them accordingly, as well as starting processes.

GitHub repository and video tutorial

If you don’t want to build this process from scratch, you can access the GitHub repository and download the individual components. We’ve also created a step-by-step video tutorial for you. The steps in this guide closely mirror the steps taken in the video tutorial.

Installations

Desktop Modeler

In order to build your RPA script, install Camunda Desktop Modeler if you don’t already have it installed.

If you do, make sure it’s the latest version to take advantage of Camunda RPA functionality within the application.

Download the appropriate version of Camunda Desktop Modeler for your machine. Make sure you’re using version 5.34.0 or higher of Desktop Modeler. Installation instructions are on the download page, but essentially you’ll unpack the archive (for Windows) or open the DMG (for macOS). Then you will start the Camunda Modeler (executable for Windows, application for macOS) when you want to run the application.

RPA worker

Download and install the RPA worker that will run your RPA script locally on your machine. You can find the RPA worker in GitHub. Select version 1.0.1 or later for this tutorial.

Scroll down to the assets and select the appropriate asset for your local machine:

  • For Windows: rpa-worker_1.0.1_win32_amd64.zip
  • For macOS: rpa-worker_1.0.1_darwin_aarch64.zip

Extract the contents of the .zip file on your local machine. This creates a directory of the same root name as the .zip file, which will contain your executable for the RPA-worker and an application.properties file for configuration purposes.

Updating the properties file

Update the application.properties file with the correct information for your configuration before starting the executable for the RPA worker. You can obtain many of these settings by creating an API Client and reviewing the settings and values.

We’ll cover how to do this and what to modify in that file later in this blog.

Notes for macOS installation

You may want to install Homebrew for running the RPA worker on your local macOS machine. To do so, run the following command:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Followed by:

echo >> ~/.zprofile
echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile
eval "$(/opt/homebrew/bin/brew shellenv)"

Confirm that you’re running Python 3.11. The command to start the rpa-worker executable is: ./rpa-worker_1.0.1_darwin_aarch64

Do not start the executable until requested later in this tutorial.

Building your process

Follow these steps to create your RPA bot process.

Create a new project for your process model and any other associated assets (this example uses RPA Tutorial).

Using the drop down menu, create a new BPMN diagram.

Camunda RPA Task 2

Name your new process Get a QR Code.

Initial model

Now it’s time to build your process model in BPMN and the appropriate forms for any human tasks. You’ll be building the model represented below:

Camunda RPA Task 3

Begin by building your model in Design mode as shown below.

Camunda RPA Task 4

To create your initial model, name your start event. This example calls it URL needs QR.

Camunda RPA Task 5

Add an End Event and call it QR Generated.

Camunda RPA Task 6

Between the start and end events, create a task called Create QR Code. This will be the RPA connector task that runs Camunda RPA.

Then create a human task called Display QR Code. This will display the code obtained through RPA in the prior step.

Camunda RPA Task 7

Next, create a form to serve as the frontend to the process providing the URL that needs a QR generated.

Click the Start task, select to Link Form, and select Create a New Form (the blue button).

All you really need on this form is a text field to enter the URL that needs a QR code. For the text field, enter the label. The key (or variable) should be url.

Camunda RPA Task 8

The form shown here includes a title field (Text view) in addition to the text field. This is not required.

Click Go to diagram -> to return to your diagram.

Select the first task, Create QR Code, and change the element type to RPA Connector.

Camunda RPA Task 9

If you do not see the template, search the Camunda Marketplace from Web Modeler and download the connector to your project. If you still do not see the connector, verify (in Implement mode) that you are checking problems against Zeebe 8.7 and not a prior version.

Switch to Implement mode in the Web Modeler and select your new RPA connector element.

In the properties for this element, enter the Script ID for the connector as RPA-Get-QR-Tutorial. This will be the script you’ll create using Desktop Modeler and deploy to the cloud later.

Camunda RPA Task 10

Select your human task to display the QR Code and link a form.

Camunda RPA Task 11

Select + Create new form to enter the form editor.

Camunda RPA Task 12

You’ll receive a QR code image back from the RPA bot, so create the fields to display it in the form. The first is a document preview for the document that will be coming back.

Enter a title (this example uses QR Code Results) and enter qrcode for the document reference—more on that coming up.

Camunda RPA Task 13

This form also creates a heading (Text view) field for the form. That is not necessary.

Click Go to diagram -> to return to your BPMN diagram.

Now that you have the initial model, let’s move on to setting up RPA and using Desktop Modeler to create the RPA script.

Create your RPA script

Now you need to create the RPA script that will tell the robot what tasks to complete, including launching the QR code website, entering the URL for the QR Code, and so on.

Launch Camunda Desktop Modeler.

Click RPA Script to generate a default script for getting started with your first RPA bot.

Camunda RPA Task 14

Connecting the RPA worker

The next step is to properly connect the RPA worker so you can execute RPA scripts properly.

When you select RPA script from the Camunda Modeler menu, you’ll see an example script that runs an RPA challenge.

Camunda RPA Task 15

At the bottom of the screen below the RPA initial script, you’ll see a note with a red icon stating that the RPA worker is not connected. You’ll take care of that next.

Camunda RPA Task 16

This warning message indicates that scripts can be created but not executed until the worker is properly connected.

Confirm that you’ve taken the proper steps to install the RPA worker covered at the beginning of this blog.

Now open a terminal window and run the appropriate executable from your RPA worker directory. For example, ./rpa-worker_1.0.1_darwin_aarch64.

When everything is working properly, the note at the bottom of the script page reveals that the RPA worker is connected and the icon turns green.

Camunda RPA Task 17

Testing the example script

To ensure that everything is working correctly, run the script to test it.

Click the Test Script icon to the immediate right of the RPA worker connected statement.

Camunda RPA Task 18

The script opens a browser, fills in fields quickly, and then completes. You can review the statistics from running the script in the testing output below your script.

You can see an example here:

Camunda RPA Task 19
Camunda RPA Task 20
Camunda RPA Task 21

The RPA script

Before you build the script, it’s important to understand what you want the script to accomplish. Let’s take a moment to review that. Essentially, you’ll be doing the following:

  • Open a browser.
  • Open the proper QR Code URL (https://www.qrcode-monkey.com).
  • Accept any required cookies by clicking Accept All Cookies.
  • Enter the URL provided in the Request a QR Code form in the proper form field on the website.
  • Click Create QR Code.
  • Copy the created QR code from the proper region to be presented in our final form in the process.

The browser screen should look something like this:

Camunda RPA Task 22

Create your script

Now that you’ve reviewed what the script needs to accomplish, go back to Camunda Modeler and remove everything under the last Library line so you can put together your script for this exercise.

You can also remove the Documentation section and remove the library for Camunda.Excel.Files, which you won’t be using.

So your starting script looks something like this:

*** Settings ***
Library             Camunda.Browser.Selenium
Library             Camunda.HTTP
Library             Camunda

Camunda RPA uses Robot Framework, an open source Python-based RPA framework that lets you describe the steps of the tasks you want to automate.

Let’s create your script.

Enter your Tasks section by adding the task name:

*** Tasks ***
Get QR Code

You’ll be creating two methods under this task:

Generate QR Code
Send QR Code

Now enter your Variables section:

*** Variables ***

You’ll want a single variable called url as shown here:

${url}	https://www.camunda.com

Your script should look something like this so far:

*** Settings ***
Library             Camunda.Browser.Selenium
Library             Camunda.HTTP
Library             Camunda


*** Tasks ***
Get QR Code
   Generate QR Code
   Send QR Code


*** Variables ***
${url}      https://www.camunda.com

Now create your Keywords section:

*** Keywords ***

This is where you will define the methods.

First define your Generate QR Code method with the following lines:

Generate QR Code
    Open Available Browser    https://www.qrcode-monkey.com
    Sleep    2s

You can elect to wait for a condition in lieu of the Sleep command if you like.

You want to be able to click a button to accept the cookies. Use the Click Button method to do this:

Click Button		locator

Find the location for the Accept All Cookies button in your browser.

Camunda RPA Task 23

To do this, find the locator to enter in this statement. Open your web browser and go to https://qrcode-monkey.com. While hovering over Accept All Cookies, right click, and choose Inspect.

Camunda RPA Task 24

This displays the locator information for this button (onetrust-accept-btn-handler) as shown below:

Camunda RPA Task 25

Replace locator with id:onetrust-accept-btn-handler in the RPA script:

Click Button		id:onetrust-accept-btn-handler

Continue to find the applicable locators for the places where you’ll enter text on the web form. The next location will be where you enter the URL for the QR code generation. That locator is qrcodeUrl, so your next line will look like this:

Input Text    id:qrcodeUrl    ${url}

This ensures that you’re entering the variable from the form in Camunda as the text for this statement.

Next you want to be able to click a button to actually create the QR code, which will look like this:

Click Button    id:button-create-qr-code

In this case, run another Sleep so that the script doesn’t run too fast—you want to be able to see it running.

Sleep    2s

Now that you have the first method, Generate QR Code, you need to create the Send QR Code section of the RPA script.

You need to capture a screenshot of your QR code, so use the Capture Element Screenshot keyword. However, in this example, you’re not going to use a locator ID. Instead, use an XPath expression to find the image, as shown below:

Capture Element Screenshot    xpath://img[contains(@class, 'card-img-top')]    qr-code.png
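A note on the contains(@class, 'card-img-top') predicate: in XPath, contains() is a plain substring test on the attribute value, not a CSS-style class-token match. A quick sketch of its semantics:

```python
# Sketch of XPath contains(@class, 'card-img-top') semantics:
# a substring test on the whole class attribute string.
def xpath_contains(attr_value: str, needle: str) -> bool:
    return needle in attr_value
```

This means a near-miss class name such as card-img-top-large would also match, so check the page markup if the screenshot ever captures the wrong element.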

Next, capture the URL that you used as well for the QR code:

Capture Element Screenshot    id:qrcodeUrl      qr-URL.png

Finally, upload the resulting screenshots for use in your Camunda process.

Upload Documents    **/*.png    qrCode
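The **/*.png pattern is a recursive glob: it picks up .png files in the working directory and in any subdirectory. A quick sketch of what it matches, using Python's glob module as a stand-in (an assumption — the worker's own matcher may differ in edge cases):

```python
import glob
import os
import tempfile

# Build a small throwaway tree and see which files **/*.png picks up.
with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "shots"))
    for name in ("qr-code.png", os.path.join("shots", "qr-URL.png"), "notes.txt"):
        open(os.path.join(root, name), "w").close()
    # recursive=True lets "**" match zero or more directory levels
    matches = sorted(
        os.path.relpath(p, root).replace(os.sep, "/")
        for p in glob.glob(os.path.join(root, "**", "*.png"), recursive=True)
    )
```

Both screenshots from the previous steps land in the working directory, so both are uploaded under the qrCode variable.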

You’ll close the browser using Close Browser.

Your final script should look like the one shown below:

*** Settings ***
Library             Camunda.Browser.Selenium
Library             Camunda.HTTP
Library             Camunda


*** Tasks ***
Get QR Code
   Generate QR Code
   Send QR Code


*** Variables ***
${url}      https://www.camunda.com


*** Keywords ***


Generate QR Code
   Open Available Browser      https://www.qrcode-monkey.com/
   Sleep    2s
   Click Button    id:onetrust-accept-btn-handler
   Input Text      id:qrcodeUrl    ${url}
   Click Button    id:button-create-qr-code
   Sleep    2s


Send QR Code
   Capture Element Screenshot    xpath://img[contains(@class, 'card-img-top')]    qr-code.png
   Capture Element Screenshot    id:qrcodeUrl      qr-URL.png
   Upload Documents    **/*.png    qrCode
   Close Browser

Be sure to save your script. This example saves the script as getQRCode.rpa.

Test your script locally

Now that you’ve generated your script, it’s time to test it locally before connecting it to your Camunda SaaS Zeebe engine.

To test your script, temporarily remove the Upload Documents statement, since it requires a connection to the Camunda engine.

Confirm that your RPA worker is connected (see this section to review how to do this).

When your RPA worker is connected, click the Test Script icon.

Camunda RPA Task 18

A browser should open to the QR Code website. You’ll see the various buttons clicked, the URL filled in, and the screenshots created. This will happen quickly. You should receive a PASS status as shown below:

Camunda RPA Task 26
Camunda RPA Task 27

You can expand the various tasks for more detailed information on what took place in the script.

Camunda RPA Task 28
Camunda RPA Task 29

Deploying the RPA script

Now that you have a working script (be sure to add back in the Upload Documents statement), it’s time to deploy this script to your SaaS environment to make it available for use in process models.

Let’s take a little mental inventory on what you’ve done so far:

  • You have a model in Camunda Web Modeler that requests a URL from a user and then calls an RPA script to obtain a QR code for that URL and pass it back to the user for view.
  • You have RPA running on your local machine.
  • You have an RPA Script that will obtain the QR code by completing specific tasks.

You need to take a few final steps in order to deploy your script, starting with making sure the script ID matches between your cloud model and your RPA script.

Open your cloud model and select the Create QR Code task. Review the properties to find the Script ID and copy it.

Camunda RPA Task 30

You can see here the ID for this example is RPA-Get-QR-Tutorial.

Go back to Desktop Modeler. If it’s not already viewable, expand the properties in your Desktop Modeler while displaying the script.

Paste the RPA-Get-QR-Tutorial into the ID location for your script in Camunda Desktop Modeler.

Camunda RPA Task 31

To deploy your script, locate the Deploy icon at the bottom of your Desktop Modeler screen (the rocket ship). You’ll be prompted to enter some information for this deployment.

Camunda RPA Task 32

For your deployment, create an API client for Desktop Modeler in your SaaS Console; you’ll obtain the remaining values required for this dialog at that time. First, open Console in your SaaS environment and select the appropriate cluster.

Click the API tab and then click Create new client.

Camunda RPA Task 33

Enter the name DesktopModeler for the API name and select Zeebe for the required credentials.

Camunda RPA Task 34

Click Create to create the credentials required. This displays the required items to fill into the dialog box in Desktop Modeler.

Camunda RPA Task 35

Before closing this dialog, be sure to click the Desktop Modeler tab to find the Cluster URL that you need.

Camunda RPA Task 36

Enter the Cluster URL, the Client ID, and the Client Secret, and then click Deploy.

Camunda RPA Task 37

You should receive verification that the script was properly deployed.

Connecting your RPA worker to the cloud

Just to clarify where you are again, the call to use the script is being orchestrated in the cloud, but it’s being run locally on your machine. Now you need to take the final steps to make sure everything is communicating properly in order to execute your RPA script.

Your next step is to connect your local RPA worker to the cloud engine.

Update your application.properties file in the RPA worker directory to the proper values for the following:

  • camunda.client.auth.client-id
  • camunda.client.auth.client-secret
  • camunda.client.cloud.cluster-id
  • camunda.client.cloud.region

To obtain these values, open your SaaS Console and create another API client. Select both Zeebe and Secrets scopes.

Camunda RPA Task 38

This example uses RPA-Tutorial-QR for this new client.

You can choose to download the credentials or copy/paste them into your application.properties file. Your file will look something like this:

## Camunda RPA Worker

## Full properties reference: https://github.com/camunda/rpa-worker?tab=readme-ov-file#configuration-reference

### General Configuration

#camunda.rpa.zeebe.worker-tags=default

#camunda.rpa.robot.default-timeout=PT5M

#camunda.rpa.robot.fail-fast=true

#camunda.rpa.python.extra-requirements=/path/to/extra-requirements.txt

### Zeebe Configuration

#### SaaS Production

camunda.client.mode=saas

camunda.client.auth.client-id=Sd6QUXtleuLio0Luk5wsPGM~RvyqDw~i

camunda.client.auth.client-secret=<SECRET HERE>

camunda.client.cloud.cluster-id=3c43f328-25bb-49c0-bd3f-7786af3b98c0

camunda.client.cloud.region=hel-1
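The worker reads this as a standard Java-style .properties file: one key=value pair per line, with # starting a comment. A minimal sketch of that parsing, handy for sanity-checking that your required keys are present before restarting the worker:

```python
def parse_properties(text: str) -> dict:
    """Parse simple key=value .properties content (comments start with #)."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

sample = """
camunda.client.mode=saas
camunda.client.auth.client-id=abc123
# a comment
camunda.client.cloud.region=hel-1
"""
props = parse_properties(sample)
```

A typo in a key name fails silently (the worker just ignores unknown keys), so checking the parsed keys can save a confusing debugging session.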

Once you’ve updated your information, restart your RPA worker. It should read the new application.properties file that points to the proper configuration in SaaS.

Run your script

You’re almost there! Now it’s time to test your original BPMN diagram to make sure that everything is working correctly.

Go back to your Web Modeler and deploy your diagram to the proper cluster that you used to create your API clients.

Camunda RPA Task 39

Go to Tasklist, click the Processes tab, and start the process Get a QR Code.

Camunda RPA Task 40

You will be prompted to enter the URL for the QR Code in the form you created. Enter any URL you want to use here.

Camunda RPA Task 41

Click Start process.

You should see the browser open on your local machine. The URL will be entered, the screenshots generated, and then you’ll see another task appear in Tasklist (be sure to click on the Tasks tab in Tasklist).

Camunda RPA Task 42

When you open this task, you’ll see the generated QR code and the URL that was used in the first form.

Camunda RPA Task 43

Congratulations

You’ve completed your first RPA script! You’ve also deployed it, created a process model, and executed your Camunda RPA script in the process. We hope you enjoyed this step-by-step tutorial.

Be sure to check out the video walkthrough of the process.

Camunda 8.7 Release is Here https://camunda.com/blog/2025/04/camunda-8-7-release/ Tue, 08 Apr 2025 16:23:02 +0000 https://camunda.com/?p=133181 We're excited to announce the 8.7 release of Camunda. Check out what's new, including AI, IDP, RPA, SAP Integration, Camunda Copilot and more.

We’re excited to share that the official software release of Camunda is now live and available for download. For our SaaS customers who are up to date, you may have already noticed some of these features as we make them available for you automatically.

Release 8.7 brings new features around artificial intelligence (AI) into the product to improve the user experience and provide a flexible, future-proof approach to process automation and AI adoption. Camunda enables you to overcome the limitations of fragmented automation and siloed AI so you can connect your automation efforts across people, systems and devices, unlocking lasting business value.

We accompany this investment in AI with the power of Intelligent Document Processing (IDP), Robotic Process Automation (RPA), SAP Integration, Camunda Copilot and more. This post will delve into the power of agentic process orchestration, ad-hoc sub-processes and our newest features to provide you with our best enterprise-grade process orchestration and automation platform.

Below is a summary of everything new in Camunda 8.7.

Introduction to the new release blog

We introduced a new format for our release blog posts several months ago. As a reminder, this format organizes the blog around our product house, with E2E Process Orchestration at the foundation and our product components represented by the building bricks. We have organized our components as per the image below to show how we believe Camunda builds the best infrastructure for your processes, with a strong foundation of orchestration and AI thoughtfully infused throughout.

Image13

E2E Process Orchestration

This section will update you on the components that make up Camunda’s foundation, including the underlying engine, platform operations, security, and API.

Zeebe

Support for ad-hoc sub-processes

The new Camunda version supports a new BPMN element: the ad-hoc sub-process. This new kind of sub-process allows more flexible process flows with a compact visual representation. It is the first step towards dynamic processes and execution of ad-hoc activities.

Image12

Support for deploying and linking Robotic Process Automation (RPA) scripts

In Camunda 8.7, Camunda proudly announces the 1.0 release of its integrated Robotic Process Automation (RPA) solution, now fully production-ready. This major update introduces a suite of powerful features designed to enhance the development, deployment, and management of RPA scripts.

Image10

Cancel banned instances

You can now cancel banned instances. A banned instance occurs when an unexpected, unhandled error happens in the Zeebe engine. When this happens, the process instance is frozen and will never terminate. To avoid confusion and reclaim unnecessary space, you can cancel it, effectively deleting it from the engine.

We hope you enjoy the latest Zeebe 8.7 release right here.

Operate

Support for ad-hoc sub-processes

Operate now supports the new ad-hoc sub-process BPMN symbol. With this support, you can check which elements of an ad-hoc sub-process have been executed and which are still in progress. This provides end-to-end visibility into process execution while enabling ad-hoc execution depending on the specific case.

Image6

We hope you enjoy everything in the latest Operate 8.7 release.

Tasklist

Camunda’s document handling makes use of new form-js components. These are:

File Picker

You can now choose to include a “file picker” to select a file or multiple files (as configured) to upload to your process instance.

Image7

When the file picker is included on your form, you configure the form element.

Image14

When the form has been assigned in Tasklist, you can select the Browse button to include the appropriate files.

Image21

When files have been successfully uploaded, the name of the file(s) will appear on the form.

Image11

Document Preview

Now, you can add a Document Preview to a form to preview documents associated with the process.

Image9

When a user is interacting with the form, they will see the document in preview, something like this.

Image3

In this release, we have also worked on bug fixes and minor improvements for Tasklist. We hope you enjoy all the latest updates!

Web Modeler

README support

Web Modeler now supports the README file type in common markdown format to formally document your process.

Image15

Users can create, edit, preview, version and diff README files within Web Modeler as with any other supported file type. In addition, your README files that are inside your process application can be synced with your Git repository.
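As a sketch, a process application README in common markdown might look like this (file and process names here are purely illustrative):

```markdown
# Order Fulfillment Process Application

Orchestrates order intake, payment, and shipping.

## Contents
- `order-fulfillment.bpmn` – main process
- `payment-check.dmn` – payment risk decision
- `order-intake.form` – start form

## Owners
Process owner: Operations team
```

Because this is ordinary markdown, the same file renders in Web Modeler and in your synced Git repository.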

Process landscape visualization

Web Modeler automatically generates an interactive visualization of all BPMN files within a project, folder, or process application, showing the connections between them. This allows users to quickly understand the structure of a project and its process dependencies.

Image17

Users can view the process landscape and interact with this visualization. For example:

  • You can click a node to view the details of the selected BPMN file including the latest version and the README.
  • You can also search for a specific file.
  • You can highlight the entire hierarchy of related connections.

Sharing projects for organization-wide collaboration

With the introduction of process landscapes and README support, users can leverage the existing capabilities within Web Modeler for organization-wide collaboration. You can create a shared project and invite collaborators—it’s now possible to invite all users in the organization at one time—to this shared project for organization-wide reuse.

Users can also view the landscape of the shared project to see details of specific versions including the README and then reuse them by copying these versions into their target project.

Bulk publish of connector templates to shared resources

It is now possible to use the public API to publish a connector template version to the organization.

Milestones are now versions

In previous releases, Camunda referred to versions of files as “milestones” for certain cases. We now refer to all of these as versions to avoid any possible confusion.

Process application versioning

With 8.7, we introduce the concept of a process application version and link the versions of the individual assets to this process application version. As a result, when selecting a process application version, you know which resources are present for that version and can view its contents. You can also perform actions such as restoring, deploying, downloading, deleting, renaming, and copying a version.

Process application review

Web Modeler now offers formal review support for process application versions. Users can request a review of a process application version and promote those changes to production in an approved manner. Reviewers can view the changes made in the version and approve them or request modifications. Organization administrators can enforce these reviews before manual deployment to production.

Mono-repository Git Sync

Web Modeler now offers a path option when using Git Sync, which gives enterprise organizations the flexibility to safely integrate Web Modeler without changing their repository structure. Administrators can synchronize process applications with a defined path so that they can sync multiple process applications to the same repository.

GitLab Synchronization

Web Modeler now supports native integration with both GitLab and GitHub. Previously, we only supported GitHub for synchronization. This ensures seamless synchronization between Web Modeler, Desktop Modeler, and official version control projects.

Simplified deployment experience

With 8.6, Camunda introduced cluster configuration in Web Modeler for an easier deployment experience from a list of pre-configured clusters. In 8.7, we have simplified the deployment experience even further. User tokens are now used to authorize deployment, so users no longer have to enter credentials for a specific cluster that requires authentication.

Connector template generator

Now it is possible to automatically generate a custom connector template directly from Web Modeler by importing an existing API definition, such as an OpenAPI specification, Swagger specification, or a Postman collection.

Image19

Support for non-public database schemas

Customers can now easily install Web Modeler using a non-public PostgreSQL database schema without any additional configuration steps.

Multi-tenancy support with Play

Play now supports multi-tenancy.

Appending tasks

You can now create and append tasks with available resources within the current project. You can find the available processes, decisions, and forms in the append menu to directly create a task linked to that resource.

Zeebe User Tasks

With this release, Zeebe user tasks have been renamed to “Camunda user tasks,” and this is the default type in Modeler. Job worker-based user tasks have been deprecated, with migration support provided to help you transition smoothly to the new implementation type.

BPMN Copilot (SaaS only)

Thanks to Camunda’s integrated BPMN Copilot for SaaS, anybody can go from 0 to 80% of a process diagram in minutes. Users can generate process diagrams from natural language descriptions. The simple interface means that even BPMN novices can make meaningful, accurate diagrams. And BPMN Copilot also generates a new version each time it creates a diagram, so you can see the progression of your process.

Image4

You can also feed documentation of a process, other vendor specifications and more to generate your BPMN diagram with Camunda Copilot.

BPMN to text with Camunda Copilot (SaaS only)

As we all know, documentation can be tedious to create and difficult to maintain with rapid iterations. With the 8.7 release of Camunda Copilot, you can not only generate BPMN diagrams, but you can also generate text from your BPMN diagram.

Image5

This offers a wide range of benefits including:

  • Rapid draft of process documentation
  • Faster enablement for how a process works
  • Simpler explanation of process behavior to stakeholders

Ad-hoc sub-processes

With 8.7, we have introduced support for the BPMN ad-hoc sub-process element. This element is a collection of tasks that can be executed independently, without predefined connections to other tasks in the process. This new construct sets the stage for our support for agentic AI and AI agents, providing a compact visual representation and more flexible process flows―both deterministic and non-deterministic. AI agents enable you to increase the level of automation in a process, while BPMN provides guardrails for the use of AI models.

Image2

Replay scenarios

Play supports manual testing; however, this approach often leads to limited test coverage, lacks protection against regressions, and involves repetitive, error-prone tasks. However, with 8.7, you can now use Play to quickly repeat manual test suites by recording and playing back process instances as scenarios. As you save completed instances as scenarios, Play calculates the percent of elements covered by the scenario suite. This is the first step towards bringing automated testing into the Web Modeler and enabling business and IT to collaborate on automated tests.

REST API support for custom JWKS location and JWT algorithms

Self-Managed customers can now configure JWKS (JSON Web Key Set) location and JWT (JSON Web Token) algorithms manually. This is especially useful when the information cannot be derived from the OpenID configuration.

We hope you enjoy the latest Web Modeler 8.7 release!

Desktop Modeler

With our Camunda 8.7 release, we have provided support for the following features.

Support for upcoming features

Camunda 8.7 and 7.23 are now fully supported with Desktop Modeler.

Configure completion attributes for sub-processes

With the support of ad-hoc sub-processes, additional functionality has been added to support completion attributes. An ad-hoc sub-process can define an optional `completionCondition`―a boolean expression―that is evaluated every time an inner task or element is completed.
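A minimal sketch of how such a completion condition might appear in BPMN XML (element IDs and the FEEL expression are illustrative, and the exact serialization should be verified against the Camunda documentation for your version):

```xml
<bpmn:adHocSubProcess id="AdHoc_FraudChecks" name="Fraud checks">
  <!-- Evaluated each time an inner task or element completes;
       when it evaluates to true, the ad-hoc sub-process completes. -->
  <bpmn:completionCondition xsi:type="bpmn:tFormalExpression">
    =reviewDone
  </bpmn:completionCondition>
  <bpmn:userTask id="Task_ReviewCase" name="Review case" />
</bpmn:adHocSubProcess>
```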

Support for process applications

We now support process applications and resource linking. You can use process applications to easily group and link processes, decisions, and forms in a project.

RPA editor

With the new RPA editor, users can edit, test, and deploy Robotic Process Automation (RPA) scripts.

Image22

Check out the full release notes for the latest Desktop Modeler 5.34 release right here.

Optimize

In this release, we have also worked on bug fixes and minor improvements for Optimize.

Console

Console self-managed: Tags and user-defined properties

We have added support for custom tags and properties in the self-managed Console to make it easier to manage orchestration clusters. Admins can now label clusters with tags like prod, dev, or test to identify them by environment quickly. These tags appear in the Console UI and can be accessed via the Administration API, helping with reporting and cost tracking.

Console self-managed: Inbound connectors monitoring

We’re introducing a new monitoring experience for inbound connectors to improve visibility and operational control. This release delivers a centralized view of all inbound connectors that are running for each Orchestration cluster managed within the Console.

We hope you enjoy the latest Console!

Installation options

This section gives updates on our installation options and various supported software components.

Self-Managed

Camunda 8 Run

Camunda 8 Run (C8Run) now supports additional configuration parameters, including the web application port, the location of keystore TLS certificates, and the log level. With this release, we also introduced a new --docker option that starts C8Run with the docker-compose up command, deploying Camunda 8 using Docker Compose instead of the Java engine.

Reference architecture—OpenShift dual region

We have published a Camunda dual-region deployment guide for OpenShift. This guide allows customers using OpenShift to build active-passive configurations with failover and regional replication. For more information, visit the documentation.

Kubernetes production guide

We’re excited to announce the Helm Chart Production Installation Guide. This comprehensive guide provides best practices and recommendations for running Camunda Self-Managed in production using Helm on Kubernetes. Whether you’re planning a new deployment or hardening an existing setup, this guide will help you optimize performance, reliability, and maintainability. Check it out here.

Support for user-defined manifests in Helm Charts

With this release, you can inject additional Kubernetes manifests directly through the values.yaml file. This feature is ideal for users who need to deploy custom resources—such as ConfigMaps, Deployments, or Services—alongside Camunda without modifying the Helm Charts themselves.
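As a sketch, such a values.yaml entry might look like the following. The key name global.extraManifests and the example resource are assumptions for illustration; verify the exact structure against the Helm chart documentation for your chart version.

```yaml
# values.yaml (sketch; key name assumed, ConfigMap is an example resource)
global:
  extraManifests:
    - apiVersion: v1
      kind: ConfigMap
      metadata:
        name: my-extra-config
      data:
        FEATURE_FLAG: "true"
```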

Task automation components

In this section, you can find information related to the components that allow you to build and automate your processes including our modelers and Connectors.

Connectors

With 8.7, document handling support has been added to over ten (10) connectors, allowing users to send documents via Microsoft Teams, Slack, or as email attachments. Documents can be uploaded to AWS or utilized with our AI connectors to gather additional insights into your process information. You can find more information in our documentation.

We’ve introduced intrinsic operations that enable users to work with documents easily and generate public links that are compatible with all connectors.

Inbound connectors now come with safer default settings to prevent multiple processes from starting in the event of duplicated messages.

We hope you enjoy the latest Connectors 8.7 release right here.

Document handling

Document handling has been updated for release 8.7 and now provides:

  • Production support for 8.7
  • Compatibility with both Amazon Web Services (AWS) S3 bucket storage and Google Cloud Platform (GCP) bucket storage
  • A REST API that is available to manage and work with document operations:
    • Upload
    • Download
    • Delete
    • Create link

Intelligent document processing (IDP)

With 8.7, Camunda now offers intelligent document processing (IDP) enabling organizations to streamline and automate the handling of complex documents, minimizing manual errors and lowering operational costs. By integrating IDP into your process orchestration, you can enhance compliance, increase efficiency, and gain a competitive advantage.

Powered by AWS Textract and LLM technologies, intelligent document processing (IDP) helps you integrate automated document processing by extracting desired data fields and using them later in your end-to-end processes. You can train your IDP applications to extract certain data from different document types using an LLM extraction model.

Image8

Once configured, you can load various documents and test them against your configured extraction.

Image20

A connector is then created that can be used in various Camunda processes to extract data from documents, providing deeper insight into the information they contain.

Image1

This latest release provides the following:

  • IDP is now supported in production with the release of Camunda 8.7.
  • Allows for the configuration of your AWS region, AWS bucket name, and Camunda cluster while testing extraction.
  • IDP provides support for JSON extracted fields.

Check out the getting started guide to get an early insight into the new feature.

Robotic Process Automation (RPA)

As mentioned earlier in this release blog, Camunda now provides Camunda RPA to create and execute integrations with other systems seamlessly from your Camunda process.

Camunda focuses on creating micro RPA bots that simulate APIs, allowing you to automate interactions with legacy systems seamlessly. These bots serve as the glue between the legacy world and new digital environments.

Image18

Check out the getting started guide to get an early insight into the latest features.

Ecosystem

In this section, we provide release details for our various business solutions and product integrations. 

Camunda SAP Integration

With Camunda’s support for SAP, you can simplify your SAP transformations and increase business agility. Camunda offers SAP integration to provide the ability to integrate both SAP and non-SAP systems.

Image16

Our SAP integration has several modules to support the following functionality:

  • Retrieve and write data to and from any SAP System (via OData and RFC)
  • Start a Camunda process from any SAP System via an API
  • Build one-user multi-page flow
  • Generic SAP Business Technology Platform (BTP) Process Launcher to start Camunda processes in an SAP Fiori application during development
  • Render Camunda forms in the SAP Fiori Design System as part of the one-user multi-page flow

This deep integration provides many benefits to our customers including:

  • It is an SAP Certified integration.
  • There are no additional licensing costs.
  • Our integration is compliant with SAP’s Clean Core strategy.
  • Camunda’s integration with SAP retains both SAP and BTP governance.

Camunda 7

With Camunda 7, we have added new features to improve the usability of Cockpit including:

  • Operate with subsets of process instances.
  • Filter processes with exceptions and retries left.
  • Configure the default value for the cascade flag.
  • Display business key for called process instances.
  • The set-variable batch operation is now idempotent.

In addition, we now provide support for the FEEL Scala Engine as an integrated script engine.

We have added support for new environments as well including:

  • PostgreSQL 17
  • AWS Aurora PostgreSQL 16
  • Spring Boot 3.4
  • WildFly 35
  • Quarkus 3.20 LTS

The engine now points to Spring 6 by default.

Thank you

We hope you enjoy our latest minor release updates! For more details, be sure to review the latest release notes as well. If you have any feedback or thoughts, please feel free to contact us or let us know on our forum.

If you don’t have an account, you can try out the latest version today with a free trial.

Join us live to learn more!

Check out our companion release blog for additional information. You can learn more about recently released features in our upcoming webinar scheduled for April 10th at 11:00 AM ET / 4:00 PM CET. Register for the webinar to hear all about this release.

Building Your First AI Agent in Camunda https://camunda.com/blog/2025/02/building-ai-agent-camunda/ Fri, 28 Feb 2025 16:44:18 +0000 https://camunda.com/?p=130273 Follow this step-by-step guide (with video) to use agentic ai and start developing agentic process orchestration with Camunda today.


Update for Camunda 8.8

Note: This step-by-step guide takes advantage of agentic AI features and capabilities in Camunda 8.7. Please see our latest step-by-step AI Agent Guide to implement a similar process using Camunda 8.8 alpha functionality.

Building your first agentic artificial intelligence (AI) process is easier than you think. Our intention in this post is to provide you with step-by-step instructions to create that first process using agentic process orchestration with Camunda. If you’re new to Camunda, you can get started for free here.

Within BPMN, there is a construct called an ad-hoc subprocess: a type of subprocess where tasks do not follow a predefined sequence flow. Instead, tasks within the subprocess can be executed in any order, repeatedly, or skipped entirely, based on the needs of the process instance.

In Camunda, this workflow pattern enables the injection of non-deterministic behavior into otherwise deterministic processes: the ad-hoc subprocess serves as a container where the exact sequence and occurrence of tasks are not pre-defined but rather determined at runtime by leveraging LLMs.
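As a rough sketch, an ad-hoc subprocess whose active tasks are chosen at runtime might be declared like this in BPMN XML. The IDs, variable names, and the zeebe:adHoc extension attribute are assumptions based on the Camunda 8.7 ad-hoc sub-process documentation; verify them for your version.

```xml
<bpmn:adHocSubProcess id="AdHoc_AgentWorkspace" name="Agent decision workspace">
  <bpmn:extensionElements>
    <!-- FEEL expression resolving to a list of element IDs to activate,
         e.g. produced by an upstream LLM or script task -->
    <zeebe:adHoc activeElementsCollection="=activateElements" />
  </bpmn:extensionElements>
  <bpmn:serviceTask id="Task_SendEmail" name="Send email" />
  <bpmn:userTask id="Task_AskExpert" name="Ask a human expert" />
</bpmn:adHocSubProcess>
```

At runtime, only the inner elements whose IDs appear in the resolved list are activated; the rest are skipped.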

This is the implementation path for Camunda’s support for AI agents, as it allows portions of the decision-making to be handed over to an agent for processing. That processing can include human tasks as well. This approach provides the AI agent some freedom, with constraints, about what actions should be processed.

For a better understanding of Camunda’s terminology, we have provided our definition of an AI agent below:

An AI agent is an automation within Camunda that leverages ad-hoc subprocesses to perform one or more tasks with non-deterministic behavior. AI agents can:

  • Make autonomous decisions about task execution
  • Adapt their behavior based on context and input
  • Handle complex scenarios that require dynamic response
  • Integrate with other process components through standard interfaces

AI agents represent the practical implementation of agentic process orchestration within the Camunda ecosystem, combining the flexibility of AI with the reliability of traditional process automation.

These subprocesses provide access to actions that can improve decisions and help to optimize the completion of tasks and choices. They can be easily integrated into your end-to-end business process.

Model overview

Our example model for this process is a fraud detection process for tax form submissions.

A BPMN model of an AI agent in an ad-hoc sub-process using Camunda.

The process begins when a form is filled in by a user who wants to submit information for their tax return. An OpenAI bot checks the data provided for any indication of fraud. The AI bot will determine which tasks, from a list of tasks, to perform for this set of criteria. The appropriate tasks within the ad-hoc subprocess will then be activated, running in parallel until all are completed.

The tasks that can be performed are located in the ad-hoc subprocess and are:

  1. Send an email asking for more information,
  2. Ask a human expert for their opinion,
  3. Declare that fraud has been detected.

Each of these options triggers a different type of action. For example:

  • Sending an email activates two tasks,
  • Asking an expert activates a front-end application,
  • Detecting fraud will activate an escalation event that will cancel the process. This could initiate another process to investigate the fraud, of course.

Let’s jump right in and build the process.

Assumptions and initial configuration

A few assumptions are made for those individuals who will be using this step-by-step guide to create their first agentic AI process. These are outlined in this section.

The proper environment

In order to take advantage of the latest and greatest functionality provided by Camunda, you will need to have a Camunda 8.7.x cluster or higher available for use. You will be using Web Modeler and Forms to create your model and human task interface, and then Play and Tasklist when executing the process.

Required skills

It is assumed that those using this guide have the following skills with Camunda.

  • Form Editor – the ability to create forms for use in a process.
  • Web Modeler – the ability to create elements in BPMN and connect elements together properly, link forms, and update properties for connectors.
  • Play – the ability to step through a model from Modeler.
  • Tasklist – the ability to open items and act upon them accordingly, as well as to start processes.

GitHub repository and video tutorial

If you do not want to build this process from scratch, you may access the GitHub repository and download the individual components. Accompanying this guide, we have created a step-by-step video tutorial for you. The steps provided in this guide closely mirror the steps taken in the video tutorial.

Connector secrets

Separating sensitive information from the process model is a best practice. Since we will be using a few connectors in this model, you will need to create the appropriate connector secrets within your cluster. You can follow the instructions provided in our documentation.

If you do not have existing accounts for the connectors that will be used, you can create a SendGrid account and an OpenAI account. You will then need to get an API key for each service which will be used in the Camunda Console to create your secrets.

Connector-secrets

The secrets will be referenced in your model using {{secrets.yourSecretHere}}, where yourSecretHere represents the name of your connector secret.

For this example to work you’ll need to create secrets with the following names:

  • OpenAI
  • SendGrid
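For example, in each connector’s API key or authentication field you would then reference the secret by name rather than pasting the raw key (the exact field name varies by connector):

```
OpenAI connector, API key field:   {{secrets.OpenAI}}
SendGrid connector, API key field: {{secrets.SendGrid}}
```

This keeps the keys out of the process model, so the BPMN file can be shared or synced to Git without leaking credentials.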

Building your process

Creating your process application

The first step is to create a project and, within it, a process application for your process model and any other associated assets. Create a new project using the blue button at the top right of your Modeler environment.

Create-new-project

Enter the name for your project. In this case we have used “Agentic Fraud Detection” as shown below.

Create-new-process-application

Next, create your process application using the blue button provided.

Create-new-process-application-2

Enter the name of your process application, select the Camunda 8.7.x cluster for your project, and select “Create” to create the application within this project.

Initial model

The next step is to build your process model in BPMN and the appropriate forms for any human tasks. We will be building the model represented below.

Ai-agent-bpmn-model-camunda

Click on the process “AI Fraud Detection Example” and we will begin building out the model in BPMN off the start step provided.

Ai-model-bpmn-building

We will start by building our model in Design mode as shown below.

Ai-model-bpmn-start

These steps will help you create your initial model.

  1. Name your start event. We have called it “Enter Financial Details” as shown below.
    Ai-model-bpmn-start-2

  2. Add an End Event and call it “No Fraud Found.”
    Ai-model-bpmn-end

  3. Create a task after the start task. Change it to an OpenAI Outbound Connector task and call it “Decide on likelihood of fraud” as shown below.
    Openai-connector-camunda-1
     
    Openai-connector-camunda-2
     
    Openai-connector-camunda-3

  4. Create a Script task after the OpenAI connector task called “Create list of tasks”; based on the OpenAI decision, it will provide the list of tasks to be run in the ad-hoc subprocess.
    Script-task
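As a sketch, the script task could be implemented as a FEEL expression with a result variable such as activateElements. The variable and element ID names below are illustrative, not taken from the repository:

```feel
// Map the LLM's decision to the list of element IDs to activate
// in the ad-hoc subprocess
if contains(lower case(llmDecision), "fraud") then ["Fraud_Detected"]
else if contains(lower case(llmDecision), "expert") then ["Call_on_Expert_for_Advice"]
else ["Generate_Email_Inquiry"]
```

The returned list is what the ad-hoc subprocess consumes to decide which inner tasks to activate.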

Creating the ad-hoc subprocess

Now we are at the point in our process where we want to create the ad-hoc subprocess that will be used to trigger the appropriate components based on the decisions made by the previous tasks. Just complete these steps to create the ad-hoc subprocess for your model.

  1. Drag and drop the proper element from the palette for an expanded subprocess.
    Ad-hoc-sub-process-camunda

    Your process will now look something like this.
    Ad-hoc-sub-process-camunda-2

  2. Now this is a standard subprocess, which we can see because it has a start event. We need to remove the start event and then change the element to an Ad-hoc subprocess.
    Ad-hoc-sub-process-camunda-3

    Once the type of subprocess is changed, you will see the BPMN symbol (~) in the subprocess denoting it is an ad-hoc subprocess. And don’t forget to connect your script task to the subprocess.

    This is the key to our process, as the ad-hoc subprocess will contain a set of tasks that may or may not be activated. Although BPMN is usually very strict about what gets activated, this construct allows us to control what gets triggered by what is passed to the subprocess.
  3. Take a little time to expand the physical size of the subprocess as we will be adding elements into it.
  4. As mentioned, one of the options is going to be sending an email to request additional information. Create two connected tasks inside the subprocess: the first is an OpenAI connector task called “Generate Email Inquiry”, followed by a SendGrid connector task called “Send Email” as shown below.
    Ad-hoc-sub-process-email

    You will use the “Change Element” option to select the OpenAI Outbound Connector and the SendGrid Outbound Connector.

    So the act of sending the email has two elements: generating the email content and then sending the email.
  5. Now we want to add the option to call on an expert for advice, so add a task called “Call on Expert for Advice” and be sure to change this to a User Task.
    Ad-hoc-sub-process-expert

    This enables us to create a front end form to interact with a human as part of the ad-hoc process.
  6. Finally, we want to add the option to flag fraud, so we will add an event to the subprocess called “Fraud Detected,” which will then throw an escalation event called “Throw Fraud,” as shown.


    This escalation event is going to throw the process out of the ad-hoc subprocess.
    Fraud-detected

  7. We are going to catch this fraud throw event with a boundary event on the subprocess and change it to an escalation boundary event. This will just end the process for our example.

    Click on the ad-hoc subprocess and create the catch event, as shown below, then connect an end event called “Fraud Found” so that your diagram looks something like what is shown below.
    Ad-hoc-sub-process-camunda-4

  8. Let’s give the option for the expert to also stop the process if fraud is indicated. So, add an exclusive gateway with the option for a “No Fraud” event or to Throw Fraud as indicated below. Be sure to label your gateway and the branches as indicated.
    Ad-hoc-sub-process-camunda-5


    This gives us two cases where fraud can be found: the AI bot can find fraud or the expert can find fraud, both triggering the end of the workflow.

Finalizing the process

Now that we have completed the ad-hoc subprocess, we have a few more things to add to finalize our overall process.

  1. We want to add another task to the overall process before the final end event. Create another OpenAI connector task, which will serve as an AI bot called “Check final Decision” to confirm that the decision made (fraud or not fraud) is accurate.
    Ad-hoc-sub-process-camunda-6

  2. If it is determined that the decision is not accurate, we need to return to the “Decide on likelihood of Fraud” element to rerun the ad-hoc subprocess. In order to do this, add an exclusive gateway after the “Check final Decision” OpenAI task called “Is everything OK?”.
    Everything-ok-gateway


    If we are happy with the decision, the process ends (the “yes” path) and if we are not happy, then this will return to the “Decide on likelihood of fraud” (the “no” path).

Now that you have completed the design of the model, move to the Implement tab to make sure all the proper parameters are configured.

Optional: Creating forms

Let’s start by creating the forms you will need for the human tasks in this process. You will need two (2) forms for this process. You are welcome to create your own forms or use the ones in the GitHub repository link provided. To build your own, follow these instructions.

Enter Details of Tax

The first is the initial form that will be used to initiate the process called the “Enter Details of Tax” form. The completed form should look something like this.

Tax-form

You are welcome to create a Text view field for the title, “Tax Return Submission Form,” and to separate the information on the form into three sections, although neither is necessary. The section Text view fields, as shown in the image, are:

  • Personal Information
  • Financial Information
  • Deductions

You will need the following fields on this form:

  • Full Name (Text): “Enter your full name.” Required. Key: fullname.
  • Date of Birth (Date time, subtype Date): “Select your date of birth.” Required. Key: dob.
  • Email Address (Text): no description. Required. Key: emailAddress. Validation pattern: Email.
  • Total Income (Number): “Enter your total income.” Required. Key: totalIncome. Prefix: € or $. Minimum: 0. Maximum: 9999999.
  • Total Expenses (Number): “Enter your total expenses.” Required. Key: totalExpenses. Prefix: € or $. Minimum: 0. Maximum: 9999999.
  • Large Purchases (Tag list): “Please add any large purchases you’ve made this year.” Required. Key: largePurchases. Options source: Static. Static options: Car (key car), House (key house), Stocks (key stocks), Holiday (key holiday), Boat (key boat).
  • Charitable Donations (Text): “Enter any charitable donations.” Required. Key: charitableDonations.
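For reference, when the start form is submitted, the resulting process variables will look roughly like the dictionary below. This is only a sketch: all values are invented, the keys follow the field table above, and note that the FEEL prompt used later in the tutorial refers to fullName, so match whichever key your form actually uses.

```python
# Illustrative process variables produced by the "Enter Details of Tax" form.
# Keys follow the field table above; all values are made up.
form_output = {
    "fullname": "Ada Lovelace",
    "dob": "1990-04-01",                   # Date field, ISO-8601 string
    "emailAddress": "ada@example.com",
    "totalIncome": 42000,
    "totalExpenses": 39500,
    "largePurchases": ["car", "holiday"],  # Tag list -> list of option keys
    "charitableDonations": "250 to a local shelter",
}

# All fields in the table are marked required, so each key should be present.
required = {"fullname", "dob", "emailAddress", "totalIncome",
            "totalExpenses", "largePurchases", "charitableDonations"}
assert required <= form_output.keys()
```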

View Tax Details

The final form is used to view the results for the likelihood of fraud, allowing a human to make that determination. It is called the “View Tax Details” form. The completed form should look something like this.

Tax-form-2

You are welcome to create the title Text view field “Tax Return Check Form” and the subtext “I don’t have time to build a front end, so you just need to guess . . . Fraud or no Fraud?”

You will need the following fields:

  • Fraud (Checkbox): “Tick the fraud box.” Not required. Key: fraudDetected. Default value: Not checked.
  • Reason for Decision (Text): no description. Required. Key: expertAnalysis.

Obtaining and linking your forms

If you created your own forms, they will already exist in your process application. If not, please download them from the provided GitHub repository into your project before starting this set of steps.

To import these forms into your project, select “Create New->Upload Files” and choose the two downloaded forms. Your Camunda process application should now look something like what is shown below.

Link-forms

Now that we have the form files, we need to link them to the appropriate elements. First, you will need to switch to the Implement tab so that you have access to the required features.

  1. Select the start event (Enter Financial Details) and select the link icon (the chainlink) from the menu as shown below:
    Link-forms-2
     
  2. Select the form named “Enter Details of Tax” and click “Link” to attach this form to your start event.
    Link-forms-3


    You can now view this form by selecting “Open in form editor” from the link icon and you can review the details of the information that will be entered to initiate the process.
    Link-forms-4


    When opened in “Validate Mode” within the form editor, you can see the Form Output in the lower right hand corner of the UI. This information will be important when we go through our process later.
    Form-output

  3. Go back to Modeler and select the “Call on Expert for Advice” human task and link the other form, “View Tax Details,” to this task.
    Link-other-form

Now that our forms are connected, we just have to make a few final configurations before stepping through this process.

Configure remaining elements

You should still be using the Implement tab so that you have access to the required features to update the properties for the required elements. At this time, you may also want to confirm that you are validating against Zeebe 8.7 or higher.

Implement-zeebe-87

Decide on likelihood of fraud

As mentioned, you must create connector secrets for the OpenAI and the SendGrid connector elements. Now we need to add those secrets to the proper places in the process.

Connector-secrets-keys
  1. Select the “Decide on likelihood of Fraud” OpenAI task and update the properties to include the secret and prompt.
    Add-secret-prompt

  2. You will want to check the name of your connector secret for OpenAI in the Camunda Console (in our case it is OPENAI_KEY) and enter that into the Authentication location as shown below.
    Check-key

    Remember, secrets are referenced in your model using {{secrets.yourSecretHere}}, where yourSecretHere is the name of your connector secret.
  3. The next thing we need to enter is the Prompt that will be sent to OpenAI. This prompt is key to our example because it builds based on the values provided in the intake form and will then ask OpenAI to make a determination about fraud.
    Enter-prompt

    Click the fx icon to change the input method for this field to a FEEL expression, which will look like this:
    Enter-prompt-2

    Click the icon to the right of the field to open the pop-up editor, which provides additional space to enter the expression.
    Enter-prompt-3

    Copy this text into the field:
    "I'd like your opinion on if this hypothetical situation could be fraud or not. " + fullName + " has submitted details of his economic status to a hypothetical government. They are as follow: Date of Birth " + string(dob) +
    " Total Income " + string(totalIncome) +
    " Total Expenses " + string(totalExpenses) +
    " Charitable Donations " + charitableDonations +
    " Large purchases " + string join(largePurchases, ", ") + " I'm going to need you to respond strictly in the following format: with nothing more than one, two or three of the following words separated by commas. 'email' if the person submitting should be asked to clarify anything. Also add 'human' if a human expert could be used in clarifying the submission. If neither option could justify the submission add the word 'fraud'"

    Click the “X” in the upper right-hand corner to close the pop-up editor and your properties should now look like what is shown below:
    Enter-prompt-4

  4. Finally, we need to modify the result expression for this task so that the output will be easier to use in our model. Replace the existing line in the field with the following.
    {response:response.body.choices[1].message.content}

    Your result expression should now look like this:
    Result-expression


    Taking this action allows the process to parse the response appropriately, providing just the data that we need.
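To make this concrete, here is a rough Python sketch (not Camunda code; the function names are ours, and the instruction portion of the prompt is abbreviated) of what the FEEL prompt expression and result expression do. Note that FEEL lists are 1-indexed, so choices[1] in the result expression selects the first choice, which is choices[0] in Python:

```python
# Sketch of the FEEL prompt concatenation and result expression in Python.
def build_prompt(fullName, dob, totalIncome, totalExpenses,
                 charitableDonations, largePurchases):
    # Mirrors the string concatenation in "Decide on likelihood of Fraud";
    # the trailing "..." stands in for the abbreviated instruction text.
    return (
        f"I'd like your opinion on if this hypothetical situation could be fraud or not. "
        f"{fullName} has submitted details of his economic status to a hypothetical government. "
        f"They are as follow: Date of Birth {dob} "
        f"Total Income {totalIncome} "
        f"Total Expenses {totalExpenses} "
        f"Charitable Donations {charitableDonations} "
        f"Large purchases {', '.join(largePurchases)} ..."
    )

def extract_response(body):
    # FEEL's choices[1] is the first list element -> choices[0] in Python.
    return body["choices"][0]["message"]["content"]

body = {"choices": [{"message": {"content": "email, human"}}]}
print(extract_response(body))  # -> email, human
```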

Create list of tasks

Now we want to use the results provided by our OpenAI request to create the list of tasks that will trigger the optional elements in our ad-hoc subprocess. If you recall, the prompt had some key words:

  • email
  • human
  • fraud

If these are found in the results from the OpenAI request, this will guide us on which optional elements to trigger.

For this script task, we are going to generate the list of tasks, so expand the Script section of the properties to modify the script variables. Add a Result variable named tasks, along with a simple FEEL expression that will generate the list from our OpenAI response.

Select FEEL expression for the Implementation.

Feel-implementation

Enter this FEEL expression in the properties for the script task.
split(response, ", ")

Your properties for the script task will now look like this:

Properties-script-task

The result of this script is a list of tasks stored in the variable tasks for use in the ad-hoc subprocess. Be sure to confirm that the Implementation property is set to “FEEL expression.”
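In Python terms (a sketch only), the script task does nothing more than:

```python
# Equivalent of the FEEL expression split(response, ", "):
response = "email, human"  # sample output from the "Decide on likelihood of Fraud" task
tasks = response.split(", ")
print(tasks)  # -> ['email', 'human']
```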

Ad-hoc subprocess

Now we need to provide the ad-hoc subprocess with the list of tasks.

Select your subprocess and review the properties to find the Active elements. Expand this section and add the tasks variable as the Active elements collection for the subprocess.

Active-elements
Correlate the tasks to activate

The way tasks are activated is defined by their ID in the process, which has to correlate with the keywords we used: email, human, fraud. To simplify this, we will just set the ID for each optional task to the associated keyword.

Any task that does not have a task before it is a potentially activatable task.

  1. Select the “Generate Email Inquiry” OpenAI element and change the ID to email.
    Email-id

  2. For the “Call on Expert for Advice” human task, change the ID to human.
  3. For the “Fraud Detected” event, change the ID to fraud.
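Conceptually, the activation step behaves like the filter below. This is only a sketch of the idea, not Camunda engine code: element IDs are matched against the tasks list, and anything else is ignored.

```python
# Sketch (not Camunda's API): activate only ad-hoc subprocess elements whose
# ID appears in the tasks list produced by the script task.
ELEMENT_IDS = {"email", "human", "fraud"}  # the IDs we just set

def elements_to_activate(tasks):
    # Unknown keywords returned by the LLM are simply ignored.
    return [t for t in tasks if t in ELEMENT_IDS]

print(elements_to_activate(["email", "human"]))    # -> ['email', 'human']
print(elements_to_activate(["fraud", "weather"]))  # -> ['fraud']
```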

Finalize all connector tasks

We need to update the remaining connector tasks to update the keys using our connector secrets.

Generate email inquiry
  1. Select the “Generate Email Inquiry” OpenAI connector task and update the OpenAI API Key with your secret.
    Openai-secret-key

  2. You will also need to update the prompt using a FEEL expression for this step. Copy and paste this prompt into the Prompt property for this task.

    "I'd like your opinion on if this hypothetical situation could be fraud or not. " + fullName + " has submitted details of his economic status to a hypothetical government. They are as follow: Date of Birth " + string(dob) +
    " Total Income " + string(totalIncome) +
    " Total Expenses " + string(totalExpenses) +
    " Charitable Donations " + charitableDonations +
    " Large purchases " + string join(largePurchases, ", ") + " can you generate an email to ask for clarification on anything you think is odd about this?"

    Your Prompt should look like that shown below:

    Prompt

    This prompt will be used to generate an email requesting clarification.
  3. Now configure how the response is parsed. This will look very similar to the previous OpenAI task configuration, but here we change the variable assignment to emailBody.
    {emailBody:response.body.choices[1].message.content}

    Your result expression should now look like this:
    Result-expression-emailbody
OPTIONAL: SendGrid task

If you have a SendGrid account and key, you can complete the steps below, but if you do not, you can just modify the “Send Email” task to be a User Task.

  1. Enter your secret for the SendGrid API key using the format previously discussed.
  2. You can enter “Tax Man” for the sender of the email since the question will be coming from our tax commission.
  3. For the sending email address, select an address that you know is properly configured in SendGrid.
  4. For Receiver, we will use variables provided from our initial form: fullName and emailAddress. Don’t forget to click the fx icon before entering your variable names and you can use autocomplete as shown below.

    Your properties should now look something like this.
    Sendgrid-secrets-key

  5. Select “Simple (no dynamic template)” for the Mail contents property in the Compose email section.
  6. Enter the Subject as “Tax Inquiry”.
  7. Select the emailBody variable that is the response from the OpenAI email step prior to this one.
    Emailbody-variable

Check final decision

We also want to update the final OpenAI connector task “Check final Decision” with the proper secret and prompt.

  1. Enter your OpenAI API key using the format for the secret as previously discussed.
  2. Enter the prompt as follows:
    "I'd like your opinion on if this hypothetical situation could be fraud or not. " + fullName + " has submitted details of his economic status to a hypothetical government. They are as follow: Date of Birth " + string(dob) +
    " Total Income " + string(totalIncome) +
    " Total Expenses " + string(totalExpenses) +
    " Charitable Donations " + charitableDonations +
    " Large purchases " + string join(largePurchases, ", ") + " When asked if this was fraud the answer came back as " + string(fraudDetected) + " If you think this is accurate reply only with 'yes' otherwise reply with 'no'"
  3. Update the Result expression to use finalCheck for the variable as shown below:
    {finalCheck:response.body.choices[1].message.content}

Update gateways and events

We already linked the proper form to our human task, but we want to make sure that the output from that form is used for triggering fraud or not. In this case, we need to use the fraudDetected variable set by the checkbox on the form. Now we need to configure our exclusive gateway after the “Call on Expert for Advice” user task.

  1. For the “Yes” branch, you want to update the Condition expression property to use:
    fraudDetected = true
  2. For the “No” branch, you want to update the Condition expression property to use:
    fraudDetected = false

Now we will configure the “Throw Fraud” event.

  1. Select the “Throw Fraud” event and expand the Escalation section. Select “Create New” for the Global escalation reference and set the new name to Fraud!.
  2. The Code field is what correlates the throw to the catch boundary event in our model. Set this to Fraud! as well.
    Fraud-code

  3. Select the catch boundary event and select “Fraud!” for the Global escalation reference.
    Fraud-escalation

Finally, we need to configure the last exclusive gateway paths using the result from the “Check final Decision” OpenAI response.

  1. For the “Yes” branch, you want to update the Condition expression property to use:
    finalCheck = "yes"
  2. For the “No” branch, you want to update the Condition expression property to use:
    finalCheck = "no"

    Or you can set the “No” branch as the default flow.
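The two branch conditions plus the default flow can be thought of as the small routing function below (a Python sketch of the gateway logic, not Camunda code):

```python
# Sketch of the final exclusive gateway: route on the finalCheck variable,
# with the "No" branch acting as the default flow.
def route(finalCheck):
    if finalCheck == "yes":
        return "end"    # happy with the decision: the process ends
    return "retry"      # default flow: back to "Decide on likelihood of fraud"

print(route("yes"))    # -> end
print(route("no"))     # -> retry
print(route("maybe"))  # -> retry (default flow catches unexpected replies)
```

Making the “No” branch the default flow is the safer choice here, since an LLM may not always reply with exactly “yes” or “no.”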

Final step

We want to set a default value for the fraudDetected variable and use it as the input for the ad-hoc subprocess.

  1. Select the subprocess and create a new input variable of fraudDetected with a value of false.
    Nofraud

  2. Add that same variable as an Output variable, with its value set to itself, so the variable is copied back out of the subprocess.
    Yesfraud

  3. Now select the “Fraud Detected” event and set fraudDetected as an output variable on it; if this event is triggered, we have determined that fraud was detected.
    Yesfraud-2
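Taken together, the input mapping, the expert’s checkbox, and the escalation event give fraudDetected a simple lifecycle. The sketch below (plain Python, with made-up function and parameter names) simulates how the variable flows through the subprocess:

```python
# Sketch of how fraudDetected moves through the ad-hoc subprocess.
def run_subprocess(expert_says_fraud=None, fraud_event_fired=False):
    fraudDetected = False                  # input mapping: default value
    if expert_says_fraud is not None:
        fraudDetected = expert_says_fraud  # expert's checkbox on the form
    if fraud_event_fired:
        fraudDetected = True               # "Fraud Detected" event output
    return fraudDetected                   # output mapping copies the value back out

print(run_subprocess())                        # -> False
print(run_subprocess(expert_says_fraud=True))  # -> True
print(run_subprocess(fraud_event_fired=True))  # -> True
```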

That’s it. You have completed the model and we are now ready to test it using the Camunda Play feature.

Step through your model with Play

While still within Modeler, click the Play tab so we can test this model.

Play will display the cluster that will access the secrets and use that cluster’s engine to run through the process.

Camunda-play-ai-agent

Click Continue.

When ready, Play will provide the following box:

Startprocess

Click Start a process instance to run through the model.

If I access the Start Form, I can fill in data and save that for future instances.

Startform

In this case I have filled in some data.

Exampledata

Click Start Instance to begin the run-through. It will take a moment, but when the “Loading xxx details” message disappears, you can see which tasks were initiated by the information entered in the start form, and review the variables and the optional tasks that were triggered.

Note: Your results may vary as AI can return different results at different times.

Info

We see here that the email task and the human expert task were triggered by the input information. I can see the values of the process variables at the bottom of the screen. If I check my email, I find an email that mentions my expenses being much higher than my income.

Email

If I select the form for the Expert task, I can fill in this form. In this case, I am going to determine that I do not think this is fraud and see what happens.

Humanform

By doing this, the decision step triggered another pass through the ad-hoc subprocess which generated another email and another expert review.

Model-loopback

This time I will select fraud for the expert human task. That action ends the process with a positive for fraud detection.

Model-fraudfound

Congratulations!

You did it! You completed building an AI Agent in Camunda from start to finish including running through the process to see the results. You can try different data in the initial form and see what happens with new variables. Don’t forget to watch the accompanying step-by-step video tutorial if you haven’t already done so.

The post Building Your First AI Agent in Camunda appeared first on Camunda.

Continuous Integration and Continuous Deployment with Git Sync from Camunda https://camunda.com/blog/2025/02/continuous-integration-and-continuous-deployment-with-git-sync/ Thu, 27 Feb 2025 21:48:45 +0000 https://camunda.com/?p=130006 Reduce errors and foster collaboration Camunda's Git integration and CI/CD pipeline blueprint.

The post Continuous Integration and Continuous Deployment with Git Sync from Camunda appeared first on Camunda.

Every project, including process orchestration initiatives, requires time and multiple iterations to achieve the desired outcome. Adopting continuous integration and continuous deployment (or continuous delivery), known as CI/CD, is the most effective way to automate code integration, testing, and application deployment. CI/CD enhances development efficiency, minimizes errors, and speeds up software delivery while ensuring high quality.

Camunda now enables organization owners and administrators to link their Web Modeler process applications to GitHub and GitLab. This ensures seamless synchronization between Web Modeler, Desktop Modeler, and official version control projects.

Why is this important?

CI/CD plays a crucial role in automating and optimizing the software development lifecycle, allowing teams to release updates more quickly, minimize errors, and uphold code quality. Continuous integration (CI) ensures frequent merging and testing of code changes to detect issues early, while continuous deployment/delivery (CD) streamlines the release process, reducing manual work and potential deployment risks. By adopting CI/CD, organizations can drive faster innovation, enhance collaboration, and achieve more reliable software delivery.

An analysis of over 12,000 open source repositories revealed that implementing CI/CD practices resulted in a 141.19% boost in commit velocity, highlighting significantly faster development cycles.

Many organizations have adopted GitLab and GitHub to manage the software development lifecycle; however, it can be challenging to integrate them into your development cycles and deployment processes. With Camunda’s integrated solution, developers can transfer files to the Git repository all the way through production deployment.

This integration links a process application to a Git repository branch, making it easy for both nontechnical users and developers to access the source of truth and collaborate seamlessly across desktop and Web Modeler.

Git Sync with Camunda

In order to take advantage of this integration, a bit of setup is required. However, once configured, you can take advantage of a button click to sync your Camunda process application and your Git repository. As mentioned, this integration works with GitLab and GitHub. The configuration is quite similar, and you can find detailed instructions in our documentation.

In the screenshots below, you can see the option to configure the Git integration by clicking the upper right button. Enter the fields for your GitHub configuration (for this example) after installing the Camunda Git Sync application in GitHub for our repository.

Space Mutiny
Configure repository connection

In this example case, a GitHub repository has files ready to be pulled to your Camunda process application.

GitHub repo with files to pull to application

You can now synchronize your process application and, in this case, pull down the contents of your GitHub repository (reflected below).

Sync with GitHub

This action will execute a pull from GitHub of all the latest commits to your Camunda process application as shown in Modeler here.

Modeler display of GitHub commits

Version control synchronization

As with any project, you are going to make changes, and you’ll want to make sure that these changes are captured in your Git repository with proper version information.

Proper version info in your repo

Camunda’s Git Sync allows you to synchronize any changes made to your project files to your repository with the proper associated version control information. For this version commit, the name of the main BPMN process was updated to be spelled correctly, as is shown in the Git repository.

Updating the name of the BPMN process

As expected, GitHub reflects that the original misspelled BPMN file was deleted from the Git repository and replaced with the file with the properly spelled name.

GitHub reflects changes

Let’s now look at some modifications to the elements of the process model itself and use Camunda’s tools to show how the versions can be compared.

By modifying the main model (Eligibility Check) and then synchronizing those changes with GitHub using a minor version change, you can see that GitHub shows your modifications to the committed files.

GitHub showing modification to committed files

Although you can see the changes between versions in GitHub, this might not be as easy to interpret as reviewing the changes in a more visual way by diffing the versions with Camunda Web Modeler.

Visual review of changes in Modeler

In Web Modeler, you can see the differences between versions with explanations in a graphical UI, which makes it easier to understand what changes were made.

Graphical UI makes change easier to see

Parallel feature development

Camunda’s Git sync also enables parallel feature development by allowing multiple process applications to connect to separate feature branches. This ensures teams can work on different features simultaneously without overlapping or disrupting each other’s progress.

Git Sync blueprint

In addition to Git sync functionality, Camunda also offers a CI/CD pipeline blueprint to help get you started. This blueprint showcases a flexible CI/CD pipeline for deploying Web Modeler folder content across various environments using GitLab and Camunda.

CI/CD pipeline blueprint

With this custom integration, you can fully orchestrate your release process and modify it to fit your specific requirements. While Web Modeler provides native Git Sync functionality, this blueprint lets you connect your process application to a remote repository and sync your application files with a single click, creating a new commit in that repository and then initiating your CI/CD pipeline.

This blueprint provides:

  • Version control. It enables the synchronization of a Web Modeler process application with a target GitLab (by default) repository by creating a merge request. Once merged, this request initiates the deployment pipeline.
  • Fully customizable. Although the blueprint is designed to be used with GitLab, you can adapt it to work with other CI/CD tools.
  • Multistate deployments. It simulates a deployment pipeline with three stages, incorporating a manual review and additional testing. A milestone is created after each successful deployment to track the deployment status.

With multistate deployments, you can use different projects within the same Web Modeler instance to represent different stages. You can give developers access to the development project, which may be synced to a feature branch, while granting only a few users access to the production project, which is synced to the production branch.

CI/CD with GitHub and GitLab

This offering from Camunda allows organizations to adhere to CI/CD procedures and guidelines, supporting the full pipeline from development to deployment. Developers can push process application changes to a Git repository and trigger the deployment process using the CI/CD pipeline blueprint.

By using CI/CD with Git and Camunda, teams can efficiently automate workflows, reduce manual intervention, and ensure reliable, continuous software delivery.

CI/CD is essential for automating and optimizing the software development lifecycle, enabling faster, more reliable, and high-quality software delivery for several reasons. For example:

  • CI ensures early issue detection by frequently merging and testing code changes, reducing integration challenges.
  • CD automates releases, minimizing manual effort, deployment risks, and time-to-market.

With Git integration with Camunda and our CI/CD pipeline blueprint, organizations can reduce errors and foster collaboration while empowering teams to innovate quickly while maintaining stability and consistency across development, testing, and production environments.

Try it yourself

If you want to dive in and try this out yourself, learn how to set up the integration.

You can also follow along with this step-by-step video tutorial that will walk you through how to set this up and take advantage of the Git sync feature.

Camunda’s One Model Approach to Process Orchestration https://camunda.com/blog/2025/02/camunda-one-model-approach-to-process-orchestration/ Mon, 17 Feb 2025 21:02:46 +0000 https://camunda.com/?p=128884 Get a holistic view into your process execution data with a one model approach to reporting.

The post Camunda’s One Model Approach to Process Orchestration appeared first on Camunda.

Camunda, well known for its spectacular process orchestration capabilities, has a unique approach to providing visibility into your end-to-end process—a one model approach. The same model can be used by your entire organization, including developers, business analysts, information technology leaders, and executive owners for all stages of your process orchestration.

Essentially, there is no need to use a different representation of the workflow for different parts of the deployment process, from design, to monitoring and improvement.

Business process model and notation

The foundation of Camunda’s one model approach is the use of Business Process Model and Notation (BPMN), a visual language designed to represent business processes clearly and effectively. BPMN depicts processes with graphical flowcharts using a standardized set of symbols and techniques. This graphical representation allows the same model to be easily shared across the organization.

BPMN diagrams are easily understandable by individuals of all technical backgrounds. Business owners, technical teams, project managers, and business analysts can all leverage BPMN to visualize process execution, identify key participants, and determine where integrations are needed.

These diagrams strike a balance between simplicity for visualization and technical depth for execution. By using BPMN, organizations eliminate ambiguity and provide clear context for process specifications.

We will be using the following example to show how one model is used throughout Camunda’s components.

A BPMN model determines response to customer survey
An example model for automating a sentiment analysis process

Benefits of a one model approach

There are many benefits to the organization associated with adopting this one model approach for process orchestration.

Collaboration

With a single model approach, business and IT can work collaboratively to build complex business processes using a common language. IT and developers can take that same process, and add required integration or other functionality to the same model used by the business teams. It serves as a single source of truth by using the same model for design, execution, monitoring, and analysis. This eliminates the need to toggle between different representations and ensures that all stakeholders are always working with the most up-to-date version.

Camunda Modeler supports this collaboration by allowing users to share models with others within the organization; collaborators can add elements and make changes to the models.

Camunda Modeler
Collaborators (on the right) can easily be added to share the model

Enhanced visibility

In addition to supporting advanced workflow patterns, a process orchestration solution like Camunda offers full visibility into the entire end-to-end process, extending beyond the tasks performed by a single tool. Visibility with the same model simplifies discussions and analysis because you do not need to learn new tools or models to gain insight into the process.

Process execution and monitoring

This same process will be executed by Camunda’s workflow engine, Zeebe. This simplifies monitoring and troubleshooting because process execution data aligns directly with the visual model providing clear visualization of the entire process.

When monitoring running processes, users see information about process status and incidents overlaid on the same model, so they don’t have to interpret technical performance data or decipher server logs.

Process insight with Zeebe
Monitoring an active process (see blue arrow) and tracking its history and variables


Process improvement

Reports show historical process execution data with the same visualization, including heatmaps and branch analysis, providing an intuitive way to understand process performance and possible bottlenecks. Having access to this data in the same representation used in every stage makes it easier to identify areas for optimization.

Report showing historical process execution data alongside the visualization
Viewing analytics and process performance of the same model

One model approach is the way to go

Camunda’s approach to the model representation ensures a consistent process model visualization across design, execution, monitoring, and optimization. With this single model, you can enhance overall collaboration throughout the organization.

This one model approach offers a holistic view of process execution data, simplifying analysis and interpretation. Process status and incident details are seamlessly integrated into the same model used during the design phase.

The complexity that comes from dealing with multiple models for multiple stakeholders, forcing frequent version confusion and reconciliation, will be increasingly untenable in a world where more and more is being automated every day (including by AI). Camunda’s “one model” approach is easier today, and it’s a powerful way to keep your automations more future-proof when things inevitably change.

Automation through Composability https://camunda.com/blog/2025/02/automation-through-composability/ Tue, 11 Feb 2025 21:04:04 +0000 https://camunda.com/?p=128425 Reduce risk and improve accuracy and efficiency with a truly composable architecture.

The post Automation through Composability appeared first on Camunda.

Hopefully, you caught our blog post about onboarding automation using artificial intelligence and machine learning. This follow-on blog shows an example of using machine learning to predict the onboarding risk for a particular applicant, and takes that a step further by replacing a previously manual (human) task with a completely automated one.

Camunda’s composable architecture allows you to streamline your processes by easily swapping out manual components for their automated counterparts.

Manual risk verification

In this example using a manual risk verification, let’s assume we have a Camunda process already in place. We’re going to review loan applicants by checking their credit scores and other financial information to help determine an initial risk value. This will help determine if the loan should be offered.

This process makes a call to a service to obtain certain relevant financial information about the applicant that is then used to help determine the loan risk. However, the prediction of risk is done using a program that does not have an API for making a call. Instead, the loan officer must manually enter information (copy/paste) into another program to obtain the risk.

Based on the results of that legacy system access, the loan decision will be given one of these statuses:

  • Rejected: For loans that have “high” or “do not lend” for the loan risk.
  • Approved: For loans that have a “low” risk.
  • Manual review: For loan applications that have “medium” risk; a loan supervisor will review all the information to make the final decision on the loan application.
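In a Camunda process this routing is typically implemented with a DMN decision table or gateway conditions. A minimal sketch of the equivalent logic in Python (the function name is illustrative; the risk labels mirror the statuses above):

```python
def loan_decision(risk: str) -> str:
    """Map a predicted loan risk to a decision status.

    Mirrors the routing rules above: "high" or "do not lend" -> rejected,
    "low" -> approved, "medium" -> manual review by a loan supervisor.
    """
    risk = risk.strip().lower()
    if risk in ("high", "do not lend"):
        return "rejected"
    if risk == "low":
        return "approved"
    if risk == "medium":
        return "manual review"
    raise ValueError(f"unknown risk level: {risk!r}")

print(loan_decision("Low"))  # -> approved
```
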

As you can imagine, this existing process relies on humans at both the risk determination and manual verification stages, which can slow down a process. But even more importantly, this human task can cause errors and limit your ability to audit your process effectively. With a “disconnect,” such as a loan officer running an application outside of your loan onboarding process, you lose visibility into the process, which can affect compliance.

Let’s take a look at this process when executed. First, fill out the Personal Application Form requesting the loan.

Personal loan application form

Once the application is submitted, the process runs a check to return the credit score and other pertinent financial information for the applicant.

Check Applicant for Risk

This information is presented to a loan officer, who must review it and then access the risk determination application to predict the risk for this applicant.

Note: The risk determination application is written in Python and uses past applicant data as the training data to enhance and refine our risk prediction accuracy. The code accepts the data shown in the next figure to make this determination.

This entails copying the data provided to another application.

loan risk prediction

The application returns a risk determination.

loan risk low

Then the loan officer updates the form with the risk to continue the process.

verified risk

In this situation, the process determines that the risk for lending to this applicant is low. The process generates the loan documents and makes them available to the applicant as per the associated email. The diagram below shows the branches taken in this case.

The applicant receives an email with a link to the loan documentation.

Email with link to loan documentation

The information in the documentation is calculated using the interest rates determined by the financial information for that applicant. In this case, the rate is 11.99% with an estimated monthly payment of $1,284.80 each month for 30 years.

Loan documents
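The quoted monthly payment follows the standard fixed-rate amortization formula. The loan principal is not shown in the example, but a principal of $125,000 reproduces the quoted figures almost exactly (that principal is our assumption, not stated in the example):

```python
def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard fixed-rate amortization: M = P * r / (1 - (1 + r)**-n)."""
    r = annual_rate / 12  # monthly interest rate
    n = years * 12        # total number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

# Assumed principal of $125,000 at the 11.99% rate over 30 years
payment = monthly_payment(125_000, 0.1199, 30)
print(f"${payment:,.2f}/month")  # ~ $1,284.80/month, matching the figure above
```
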

We can see the path taken by the applicant with Camunda Operate, as well as inspect the variables of the process.

Values for risk assessment

The value for the risk for this particular applicant is shown in the risk variable in the process. Although this process works well and takes advantage of connectors and Decision Model and Notation (DMN) for decision-making, it does have a human step that takes context out of the process. This leaves room for error and reduces auditability and visibility.

Possible issues

Let’s assume that the loan officer transposes the number of late payments with the time in the current position as indicated below.

loan risk prediction
loan risk medium

This will indicate that the applicant is at “medium” risk rather than the correct “low” risk. Take that a step further and assume the loan officer also incorrectly input the credit score for the applicant, in addition to the switch between credit card late payments and time in the current position.

loan risk prediction 520
loan risk high

This would automatically reject the applicant for the loan.

Unfortunately, since the data in the process is correct for each of these parameters, we have no visibility into what might have happened when the loan officer input the data into the loan risk prediction application and what results that caused. There is no auditability into what went wrong, which can be devastating to our potential and existing clients and customer loyalty.

This lack of governance and visibility into your process can lead to regulation and compliance issues. We will now take a look at the difference in a process when this is replaced with an automated task to provide end-to-end visibility.

Replace manual risk verification with automation

Let’s assume that our organization has developed a script that allows Camunda to run our prediction application programmatically. It should minimize human error, streamline the process, and provide visibility into all aspects of onboarding decision-making.

With Camunda’s composable architecture, it is quite simple to swap or replace a human step with a connector or other automated functionality—in this case a service task—in order to achieve these enhancements.

Let’s look at how that can be done.

Anatomy of the service task

In this case, our development team has created a JavaScript program that spawns our Python prediction algorithm for determining the risk for the applicant. Using this code, we can execute the Python application from the Camunda process using a service task instead of waiting on an available human who might input the data incorrectly.

In order to accomplish this task, we alter the process to replace the Verify Risk human task with a service task (in red) as shown below.

For the service task, we make sure that we have imported the required Zeebe client dependencies, and then create our job worker, which subscribes to the job type referenced in the process. This worker waits for jobs of that task type and then runs the code that calls our Python prediction model.

service task
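A minimal sketch of that worker pattern follows. The task type, variable names, thresholds, and the `run_prediction` stub are all illustrative; a real worker would register the handler with a Zeebe client library and invoke the actual Python model rather than this toy rule set:

```python
def run_prediction(credit_score: int, late_payments: int, years_in_position: float) -> str:
    """Stand-in for the Python risk model invoked by the real worker.

    years_in_position is unused in this toy rule set, but it is one of
    the inputs the real model accepts.
    """
    if credit_score < 580 or late_payments > 4:
        return "high"
    if credit_score < 680:
        return "medium"
    return "low"

def verify_risk_handler(variables: dict) -> dict:
    """Handler a worker would subscribe to the 'verify-risk' task type."""
    risk = run_prediction(
        variables["creditScore"],
        variables["latePayments"],
        variables["yearsInPosition"],
    )
    # Completing the job writes the result back into the process instance,
    # so the risk value is visible in Operate and auditable end to end.
    return {"risk": risk}

print(verify_risk_handler({"creditScore": 720, "latePayments": 0, "yearsInPosition": 5}))
# -> {'risk': 'low'}
```
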

Swapping out the manual task

Essentially, the process looks much the same after swapping the human task for the service task; we simply gain end-to-end visibility into the process. The service task code needs to be running and waiting so that it can execute when the task is reached in the process.

Let’s see how this would look at execution and what we gain from taking this approach. We start with the same form without any changes. This again reinforces the beauty of composability, as we do not need to change our form to support the element change to a service task.

Personal loan application form

Without the need to wait for an available loan officer, this process quickly obtains the credit score and other financial information and moves to verify the risk. For this example, we have the service task writing a log so that you can see what is happening in the Verify Risk task.

zeebe logs

Here you see that the function has been called (and some of the variables from the process) and the result of a “low” credit risk from the program.

risk assessment details

The same branch is taken to approve the loan without human intervention and create the appropriate loan documentation for the applicant to sign.

Camunda Financial Lending faux document

Streamlining your process

Although we are only using an example in this case, let’s take a deeper look at how much time it can save in the process when we swap out a manual task with an automated task.

For example, it is very important to be able to eliminate loan requests from unqualified candidates in a timely fashion. Using the manual task for risk evaluation, we might see something like the following process.

The applicant fills out the usual loan request form.

personal loan application form_manual

In our example, we were waiting for the risk verification task proactively, and it was picked up immediately.

Credit lookup_manual

We use the information gathered from our process with the financial information to obtain the proper loan risk for the applicant.

loan risk prediction 525
loan risk for applicant high

The following screenshot shows the path of this process instance to a rejection email sent because the individual had a high credit risk.

The entire process from start to finish took 1 minute and 8 seconds (as mentioned, we were proactively waiting for the manual task to verify the risk).

Note: This does not take into account the time required to enter the initial form information by the applicant.

elapsed time

Contrast that time with running the process without human intervention.

The same path is taken, but the process no longer waits on an individual, who might make errors, to verify the risk manually.

Note: This does not take into account the time required to enter the initial form information by the applicant.

elapsed time for automated process

This process took 3 seconds from start to finish.

As you can imagine, if your organization gets 2,000 requests daily for possible loans, then the total time with a manual process—in a best case scenario—is:

2,000 requests × 1.13 minutes = 2,260 minutes, or nearly 38 hours

Alternatively, if you replaced the manual verification process with the call to our machine learning prediction model service task, this number is:

2,000 requests × 0.13 minutes = 260 minutes, or just under 4½ hours
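The arithmetic behind those totals, for any daily volume (the per-request durations come from the timings measured above):

```python
requests_per_day = 2000
manual_minutes = 1.13      # ~1 min 8 s per request with the manual step
automated_minutes = 0.13   # per request with the automated service task

manual_total = requests_per_day * manual_minutes        # total minutes per day, manual
automated_total = requests_per_day * automated_minutes  # total minutes per day, automated
savings_pct = 100 * (1 - automated_total / manual_total)

print(f"manual: {manual_total / 60:.1f} h, "
      f"automated: {automated_total / 60:.1f} h, "
      f"saved: {savings_pct:.0f}%")
# -> manual: 37.7 h, automated: 4.3 h, saved: 88%
```
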

You can start to see the benefit of streamlining this process with an automated task. Moreover, this machine learning model can continue to fine-tune itself by feeding results from new applicants back into the training set for a more robust prediction model.

In addition to providing a more streamlined and accelerated process, you can achieve so many more benefits using Camunda’s composable architecture to replace or swap older, legacy components with automated or newer technology elements.

Additional benefits of composability

This looks much like the previous process with the manual step; however, there are many associated benefits when taking the automated approach.

End-to-end process visibility and efficiency

Achieving end-to-end visibility in your processes ensures alignment with business objectives, including KPIs and other performance metrics. Using our simple example in this blog, leveraging Camunda’s composable architecture and replacing manual tasks with technical solutions reduced the loan rejection processing time by over 85%.

Process auditability

End-to-end auditability is critical to ensuring processes are executed consistently and effectively. With the right tasks in place, you gain complete transparency from start to finish, enabling consistent execution across your organization.

Process governance and regulatory compliance

True end-to-end process orchestration and visibility are essential for effective process governance. This approach ensures processes are consistently managed, aligned with organizational objectives, and adaptable to evolving needs. It also provides the insights and accountability needed to execute, monitor, and improve processes while integrating new technologies seamlessly.

By implementing a composable architecture, organizations can easily audit processes for regulatory compliance and assess the impact of modifications. Change management, including documentation, compliance tracking, and performance monitoring, becomes more efficient and manageable.

Take advantage of Camunda’s composable architecture in your processes

With a truly composable architecture, you can reduce your risk while improving accuracy, efficiency, and compliance. If you want to obtain more information about the benefits of composability, please see our Composability for Best in Class Process Orchestration blog.

The post Automation through Composability appeared first on Camunda.

]]>
Revolutionizing Health Insurance Underwriting: Harnessing AI for Smarter, Faster, and Fairer Risk Assessment https://camunda.com/blog/2025/01/health-insurance-underwriting-ai-smarter-faster-fairer-risk-assessment/ Thu, 16 Jan 2025 19:57:19 +0000 https://camunda.com/?p=126240 Learn how AI, along with process orchestration and automation, can combine to make health insurance underwriting easier and more effective.

The post Revolutionizing Health Insurance Underwriting: Harnessing AI for Smarter, Faster, and Fairer Risk Assessment appeared first on Camunda.

]]>
Process orchestration (PO) and artificial intelligence (AI) can significantly enhance health insurance underwriting by streamlining processes, improving risk evaluation accuracy, and boosting overall efficiency. The key to achieving these improvements is combining the two.

Let’s start by taking a look at how artificial intelligence enhances fairness in risk assessment for healthcare insurance underwriting.

Enhance risk assessment

With the introduction of AI into your organization’s processes and operations, you can analyze vast amounts of structured and unstructured data including medical records, genetics, and lifestyle habits, significantly enhancing the accuracy and scope of risk assessments. By identifying intricate patterns that may be missed by human analysis, AI provides more detailed and personalized evaluations of individual health risks.

AI enhances your underwriting models by focusing on individual health factors rather than relying on broad demographic categories. This approach minimizes the potential for biased assessments based on age, gender, or ethnicity, emphasizing the unique health profiles and behaviors of individuals instead.

If you take advantage of these AI capabilities and integrate them into your underwriting process, you can make faster, more accurate decisions concerning risk. Models can forecast potential health risks, the likelihood of claims, and associated medical costs, which improves both underwriting accuracy and decision-making speed. With natural language processing (NLP) automatically extracting relevant information from data, the information can be simplified and summarized in advance for underwriters, streamlining the review process.

As shown in the example process below, prior to underwriter review, you can take advantage of AI to review and summarize various records as well as do an initial risk assessment. This streamlines the process and expedites the review by providing the underwriter with an overview of the applicant, extracting highlights and potential risks for the review process.

Additionally, machine learning continuously updates risk models with new data, allowing for ongoing improvement. This adaptability ensures that evaluations remain precise and aligned with evolving healthcare trends and risks.

Process orchestration and automation with AI

Now that we have addressed AI and risk assessment, let’s look at a few of the ways process orchestration and automation (PO&A) with AI can make a significant impact on your health insurance underwriting operations and provide fairer risk assessment.

Make decisions faster

Quick decisions can make or break the customer experience, but including AI in your process can help improve and even automate decisions for you.

There are several tasks that can be automated to help reduce the manual workload of underwriters. These can include policy renewals and eligibility checks, for example. By automating these repetitive tasks, you can speed up the underwriting process.

There are several different ways to automate these types of tasks, which can be further enhanced by adding AI in the mix. AI enables your process to pull real-time data from electronic health records (EHRs), wearable devices, and databases to make faster, more dynamic decisions.

Streamline workflow

Most underwriting process orchestrations have several decision points and steps, including data gathering, document processing, decision approval, and risk assessment. Integrating various systems, automating repetitive tasks and enhancing data gathering with AI can ensure faster and more efficient underwriting cycles.

With true process orchestration, you can enable cross-functional collaboration, coordinating tasks and communication between different departments and sharing data across those teams. Automating processes improves both transparency and customer satisfaction by allowing staff to provide quicker, more informed feedback to customers and agents.

Improve accuracy and reduce bias

Consistency in processes and decisions is essential for fairness. AI models reduce human error and variability, leading to more uniform and impartial risk assessments across applicants.

AI minimizes biases that stem from subjective judgment or misinterpretation of data. When designed and trained on unbiased datasets, AI systems focus on objective, data-driven predictions, ensuring fair and equitable evaluations.

Achieve regulatory compliance and enhance fraud detection

Integrating AI with process orchestration will enable you to proactively track regulatory changes and compliance by simplifying the management and updates of underwriting guidelines and policies.

You can also use AI to help you identify unusual patterns and subsequently flag them as potential fraud, which helps to safeguard your organization against the risks associated with fraudulent activities. An example of this type of flagging can be seen below.
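In code, one simple statistical form of such flagging is an outlier check over claim amounts. The field values and thresholds here are illustrative (with larger datasets a threshold of 3 standard deviations is more common); production fraud detection would use richer models than a z-score:

```python
from statistics import mean, stdev

def flag_unusual(amounts: list[float], threshold: float = 2.0) -> list[bool]:
    """Flag amounts more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [abs(a - mu) / sigma > threshold for a in amounts]

# Hypothetical claim amounts; the $4,500 claim stands out from the rest
claims = [120.0, 135.0, 110.0, 128.0, 4500.0, 131.0]
print(flag_unusual(claims))
# -> [False, False, False, False, True, False]
```

Flagged items can then be routed to a manual investigation branch of the process rather than straight-through approval.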

Transparency and Accountability

With a clearly defined process, you can use this information to justify any decisions that are made to insurers and regulators. This transparency helps build trust and ensures adherence to regulatory standards aimed at fairness. A true process orchestration solution with integrated AI can assist organizations to address regulations like the EU AI Act through this transparency and auditability of your process.

While AI can significantly improve fairness, it requires careful design and oversight to avoid perpetuating biases present in the training data. Ethical guidelines and rigorous testing are essential to ensure fairness in healthcare insurance underwriting. 

How Camunda can help

Camunda has a platform that allows organizations to integrate AI throughout your process by providing connectors to run certain models, and options like Camunda Copilot that uses generative AI to help simplify complex process modeling tasks. With Camunda Robotic Process Automation (RPA), you can automate repetitive tasks to streamline your underwriting process. You can also access legacy systems, such as your policy administration system, using RPA.

In fact, you can use Camunda RPA in combination with your own RPA tools in your process today because of our composable architecture. In addition, this architectural approach allows users to integrate and utilize AI models and connectors where they add the most value, leaving room to exchange them, if needed, in the future. This composability extends auditability and governance across your solution while remaining flexible for future requirements. This approach can significantly reduce your time to market (TTM).

With Camunda Intelligent Document Processing (IDP), you can simplify and automate how your documents are handled, minimizing manual errors and reducing operational costs typically associated with human-driven tasks. It enables you to extract actionable intelligence and insights from your documents, uncovering valuable information to enhance workflows, streamline processes, and support strategic decision-making.

You can also gain insights into your processes with Camunda Optimize. With Optimize, you can establish and monitor your key performance indicators (KPIs) and evaluate process consistency and bottlenecks.

Camunda provides an open and scalable platform to address your underwriting process.

What’s next?

Together, AI and process orchestration enable health insurers to optimize their underwriting processes, improving speed, accuracy, and efficiency while enhancing customer experience and profitability.

But, you don’t have to stop with the underwriting process. There is so much more you can achieve if you integrate AI and process orchestration into your organization. You can include process and AI into policy renewals, the appeals process, claims processing, policy changes like live events, and more.

The post Revolutionizing Health Insurance Underwriting: Harnessing AI for Smarter, Faster, and Fairer Risk Assessment appeared first on Camunda.

]]>
Composability for Best in Class Process Orchestration https://camunda.com/blog/2025/01/composability-for-best-in-class-process-orchestration/ Mon, 06 Jan 2025 19:21:45 +0000 https://camunda.com/?p=125541 Prepare for the future using Camunda’s composable architecture, providing visibility, audibility, and governance for your orchestration journey.

The post Composability for Best in Class Process Orchestration appeared first on Camunda.

]]>
As industries integrate artificial intelligence (AI) and automation in their business processes, they must determine where and when to make these changes. There is a fine line between automating for automation’s sake and making strategic decisions about the best way to orchestrate your process with the introduction of AI agents and other automation components.

With Camunda, organizations can be confident that they are investing in the right foundation for a flexible and scalable orchestration journey.

Camunda’s composable architecture allows users to combine integrations and AI agents where they add the most value but still leaves room to exchange them as needed in the future. This approach can take organizations from a tactical solution that solves a very specific problem all the way to a best-in-class technology that automates complex end-to-end business processes and manages multiple integrations. This best-in-class approach adds auditability and governance to your solution while remaining flexible.

But how do you get started on this journey? The key is having the right core product for these requirements, like composable process orchestration.

Process orchestration is the core

Organizations always need to automate processes—sometimes it’s a small, straightforward process; other times it’s more complex and integrated. These initial single-solution processes are often patched together over time, leading to poorly designed and performing workflows.

Process orchestration is a technology that coordinates the various moving parts (or endpoints) of a business process, and sometimes even ties multiple processes together. Process orchestration helps you work with the people, systems, and devices you already have—while achieving even the most ambitious goals around end-to-end process automation.

Introducing automation elements, like an AI agent or a bot, might improve the process, but you want to confirm that these are strategic and automating the right tasks effectively.

As you look at future requirements and technologies, you’ll want to steer clear of vendor lock-in to keep your processes flexible for future requirements and initiatives. Essentially, you must future-proof your solutions to allow for fluctuations in requirements, technologies, and regulations. You need to innovate your orchestration, and this is where Camunda comes in to help. With Camunda, process orchestration is the core of your solution.

Consider a building-block approach to this goal, as represented in the image below. Start with a strong foundation of integration capabilities like reusable connectors, AI agents, RPA bots and executable BPMN. Then build business capability processes, such as claims review, adjustor investigation, and adjudication, that use these integrations.

Block with layers, starting from bottom: integration capabilities, business capabilities, strategic end-to-end processes, customer journeys and value streams, and finally, Business area.

With this solid bedrock, you can build strategic end-to-end processes using these business capabilities—for example, automobile claims handling.

Now it’s time to review your customer journey and value streams that are then implemented as these end-to-end processes. Finally, you can work with senior leadership to build strategic value by business area.

This approach allows you to develop enterprise-scale process orchestration optimizing strategic value to your organization.

The importance of composability

We’ve talked quite a bit about the fact that Camunda is composable. What does that mean? It means that Camunda is both integrated and flexible.

Camunda’s process orchestration sits at the core of the automation technology stack while providing an open, collaborative, and scalable platform. With composability, you can combine anything to truly automate and optimize your process orchestrations, like the example shown below.

BPMN diagram of an automobile claims decision

As mentioned, to build that solid orchestration foundation, you want to start with what are commonly called task agents—reusable components that you can use in multiple processes to address specific tasks.

Task agents can be made up of executable BPMN and automated tasks like RPA bots that automate a specific task or set of tasks. This might be reviewing data and making decisions or interacting with other systems autonomously or semiautonomously.

Swapping out elements of an orchestrated process

These task agent building blocks can be used in multiple processes and can also be easily replaced as technology changes without modifying your strategic end-to-end processes. These are the key to a composable architecture, providing you the flexibility to integrate and automate while orchestrating larger, more complex processes.

The benefits of composability for process orchestration

The benefits are many with this architecture. Not only does it build the foundation for future-proofing your organization, but there are other key benefits. Let’s go over a few.

Flexibility

The core benefit of a composable architecture is the technical ability to add, swap, or remove individual automations or services when needed.

For example, you may have built a custom integration to a system that now offers a REST API that would streamline the integration. You can swap your existing task agent with an updated agent that embraces the new technology to access the REST API. Alternatively, you may no longer require access to a particular legacy system. You can now easily remove the deprecated component from your process.

As shown in the following diagram, you can exchange any bot in the process with the one that best suits the requirements for your business process. With a composable architecture, you can strive for best in class by selecting the right task agent for your needs.

Exchanging bots within a composable process

End-to-end process insight

Having comprehensive understanding and visibility into your entire process enables you to ensure strategic alignment with your business objectives. Using your key performance indicators (KPIs) and metrics, you can verify that customer expectations are being met and continue to optimize your processes within your organization.

Process auditability

With only individual automated tasks, you may not be able to clearly track and verify compliance across your overall processes. You may only have chunks of information that have to be reviewed independently, limiting the ability to trace specific events, decisions, and individuals that may have been part of your process.

However, if you’ve put the proper task agents in place in your end-to-end processes, you achieve auditability of the process from start to finish. With true auditability, you can confirm that processes are executed consistently, meeting performance metrics. Having auditability in your process allows you to identify risks and highlight areas for improvement.

Process governance

Process governance is the system of policies, roles, and frameworks that ensures business processes are effectively managed, consistently applied, and aligned with an organization’s objectives. It establishes the proper insight and accountability for the design, execution, monitoring, and continuous improvement of processes.

Process governance is nearly impossible without full process orchestration. With composability, you extend process governance to individual tasks as well as the overall process. Effective process governance ensures that business processes stay aligned with organizational goals, operate efficiently, and adapt to changes quickly, while maintaining transparency and control.

Using tools like BPMN and the ability to track process modifications, you have proper visibility into your operations. You can easily audit your processes to confirm compliance with regulations and the effectiveness of any modifications.

Proper change management with documentation, compliance requirements, and performance data are significantly easier to achieve with a composable architecture.

Future-proof your applications

Let’s face it, technology is a moving target. It’s important to implement a solution that allows you to be flexible enough to take advantage of new tech stacks when needed but keeps you on the right track for best-in-class process orchestration. A composable architecture allows you to plug in the right task agent for the job while still providing visibility into the end-to-end process.

For example, with a composable architecture, you can easily maximize the use of AI in your organization. You can choose, combine, and orchestrate your AI agent framework from pro-code to low-code, building best-in-class process orchestration solutions.

Conclusion

Camunda’s composable architecture for process orchestration is a powerful and adaptable solution for today’s business challenges. By leveraging its one-model approach to core process orchestration, organizations gain the flexibility to build, modify, and scale workflows with ease, ensuring alignment with new technologies.

Camunda excels in delivering best-in-class process orchestration capabilities. Implementing our composable architecture provides robust auditing, governance, and future-proofing, enabling businesses to maintain compliance, enhance transparency, and drive continuous improvement. As organizations navigate the complexities of their digital transformation, Camunda’s architecture empowers them to orchestrate processes with precision, adaptability, and confidence, setting a strong foundation for success.

The post Composability for Best in Class Process Orchestration appeared first on Camunda.

]]>
Putting AI Prompt Engineering Into Practice https://camunda.com/blog/2024/12/putting-ai-prompt-engineering-into-practice-part-2/ Mon, 23 Dec 2024 18:05:58 +0000 https://camunda.com/?p=125363 Learn why prompt engineering is key to getting the most out of generative AI models.

The post Putting AI Prompt Engineering Into Practice appeared first on Camunda.

]]>
Welcome to the second part of our blog series on AI Prompt Engineering where we examine the importance of prompt engineering when it is integrated into processes to automate tasks and achieve better outcomes.

Be sure to read our first blog post in this series, Understanding AI Prompt Engineering, where we lay the groundwork for how you can put this into practice in your organization’s business processes.

Using prompts and prompt engineering for automation

Our simple example in the previous blog post shows a human interacting directly with a chatbot in a “conversation,” reviewing results and fine-tuning right away. The approach is different when you implement this type of service in a business process, but it can still include iteration for the best results.

AI requires detailed instructions to mimic humans and create the most relevant and high-quality output. And now you understand that the objective of AI prompt engineering is to supply the AI with relevant context, clear instructions, and illustrative examples, enabling it to grasp the intent and generate meaningful responses.

There are other components that need to be considered in order to maximize results when integrating generative AI into your business operations. As you can imagine, these tools require quite a bit of horsepower, which converts to costs. In many cases, the selected LLM might be the key to the returned results. A common way to implement integration is using APIs.

When integrating models into workflows using APIs, you will want to become familiar with some additional terms so that you can configure how your application works with the LLM:

  • Model ID or name. You will need to note which model you want to invoke when making an API call to a specific LLM. Sometimes a change in model can improve your results.
  • System (full context). This is where instructions are provided to the model, the actual “prompt” telling the underlying AI provider what to do. For example: create an email to respond to a customer inquiry.
  • User. This is the input we are providing for the prompt. For example: here is the customer inquiry I received, to which I need a response.
  • Maximum tokens. More tokens allow the AI provider to tackle more complex tasks and provide more nuanced responses. Tokens are also how an AI provider charges you: the more tokens, the higher the cost. For example, 1,000 tokens equate to about 750 words. Setting maximum tokens caps the response length so you are not charged for excessively wordy responses. In most cases, 500 to 1,000 max tokens are appropriate, depending on the use case.
  • Temperature. This controls how “creative” the response can be: the higher the temperature, the more creative, while a lower temperature yields more consistent responses. Your choice depends on your use case. If consistency matters and you want all your users to have a similar experience, a lower temperature is indicated.

As with any API, you will need to have your authentication method and any appropriate access keys for that API.
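Put together, these parameters typically appear in the request body of a chat-style API call. The sketch below is a minimal, provider-agnostic illustration: the model ID and the exact field names are placeholders, so check the documentation of the provider you integrate with.

```python
import json

# A minimal sketch of a chat-completion request body. The model ID and field
# names are illustrative assumptions; real providers vary in naming.
payload = {
    "model": "example-model-id",              # which model to invoke
    "system": "You are an email responder.",  # full-context instructions
    "messages": [
        # the user input the prompt should act on
        {"role": "user", "content": "Here is the customer inquiry I received: ..."}
    ],
    "max_tokens": 750,    # cap response length, and therefore cost
    "temperature": 0.2,   # low value favors consistent, less "creative" replies
}

print(json.dumps(payload, indent=2))
```

Alongside this body, the authentication method and access keys mentioned above are supplied in the request headers or client configuration.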

Framing the task for optimal results

As discussed, the most important part of prompt engineering is providing all the right inputs in order to get optimal results. For example, when working with an LLM’s API, you need to “tell” the model the specifics of its job and the role it is going to play in your operations. For example:

You are an email responder.

You also want to provide primary instructions or guidance about what is going to happen and the goals of the interaction. This may be in the form of primary and secondary goals or instructions.

I will provide you with an email that I receive and you will respond to that email. My name is James, I work at AIPrompt Solutions, and I am in the Customer Support group.

Steps: 
1. Comprehend what is wanted in the email received - reference the subject line and body 
2. Write a response to the email using the following format: Full email where the subject is identified with “Subject:” and body is identified with “Body:” 

You may even provide an output format and example output. The available options depend on the API and model you choose to integrate.

Output Format: When providing an output, do not use "Subject:" or "Body:", just provide the relevant information for each of those sections. 

Example Output: Hello [name], Thank you for your interest in our new AI product. It features a simple interface, multiple generative AI models, advanced analytics, and customizable reports. For more detailed information, please see our product documentation. Let me know if you have any other questions. Thank you. Sincerely, James AIPrompt Solutions, Customer Support Specialist
Specific Instructions: Each email body should be a maximum of 4 sentences.

Providing all the correct components for guidance will set you up for success.
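The components above (role, primary instructions, steps, and specific constraints) are typically concatenated into a single system prompt string before the API call. A rough sketch, reusing the wording from the example; the joining scheme is an illustration, since providers differ in how they accept system text versus structured fields:

```python
# Assemble the guidance components into one system prompt string.
role = "You are an email responder."
primary = (
    "I will provide you with an email that I receive and you will respond to "
    "that email. My name is James, I work at AIPrompt Solutions, and I am in "
    "the Customer Support group."
)
steps = (
    "Steps:\n"
    "1. Comprehend what is wanted in the email received - reference the "
    "subject line and body\n"
    "2. Write a response to the email using the following format: Full email "
    'where the subject is identified with "Subject:" and body is identified '
    'with "Body:"'
)
constraints = "Specific Instructions: Each email body should be a maximum of 4 sentences."

system_prompt = "\n\n".join([role, primary, steps, constraints])
print(system_prompt)
```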

Note: The example shown was derived from this video.

Example of using this technique in a Camunda process

Let’s assume you work for a travel agency and you want to provide an experience for your customers that mimics a conversation that they might have with a travel agent for upcoming travel.

The customer is presented with a dialog to enter the details about their desired travel.

AI Travel Agent dialog

As you can see, you are providing a place for the customer to enter information that might help determine the best travel method for them. For example, they might put “I am a vegetarian,” or “I do not like to travel by air,” or “I prefer to travel in the early morning.” This type of information allows the LLM to take these details into consideration when providing travel suggestions.

You are also providing a place to put the travel details, for example:

I need to travel from Berlin, Germany to Barcelona, Spain leaving on Saturday, December 7, 2024 and returning on Thursday, December 12, 2024. I will not need a vehicle when in Spain. I do need lodging in Barcelona for this trip.

In this case, the following sample Camunda process is provided for explanation. Here, the traveler is prompted to enter the information in the form shown above. This information is sent to the loop (shown in the white box below) for processing by the correct AI agents.

Camunda sample process for AI travel agent

The Travel Agent component uses the AWS Bedrock service to determine, first, whether the traveler has provided enough information to process the request and, second, which other agents (flight, hotel, car) should be invoked to give the traveler a full picture of the options.

The Travel Agent component using the AWS Bedrock service

As part of the integration with the Bedrock API, you must provide the proper access and secret keys as well as the model to be used and the “payload,” which includes our sophisticated prompt telling the LLM our expectations as well as how to provide the response.

Providing payload for AWS Bedrock
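For an Anthropic model served through Bedrock, that payload is a JSON body passed to the runtime’s invoke call. The sketch below shows the general shape; the model ID, region, and prompt text are placeholders for illustration, not the exact values used in the sample process.

```python
import json

# Illustrative Bedrock request body for an Anthropic Claude model, following
# the Anthropic-on-Bedrock messages format. Prompt text is a placeholder.
body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 1000,
    "system": (
        "You are a routing agent for travel requests. Decide which specialist "
        "agents (flight, hotel, car) are needed to assemble the trip."
    ),
    "messages": [
        {"role": "user", "content": "I need to travel from Berlin to Barcelona ..."}
    ],
}

# With AWS credentials configured, the invocation would look roughly like:
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   response = client.invoke_model(
#       modelId="anthropic.claude-3-sonnet-20240229-v1:0",
#       body=json.dumps(body),
#   )
print(json.dumps(body, indent=2))
```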

In the response of this API call, we capture whether certain agents need to be invoked. Based on the outcome of this travel agent engine, a value is set for future agent invocation.

Setting a value for future agent invocation
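In process terms, this routing step boils down to parsing the model’s answer and setting boolean process variables that downstream gateways can branch on. A sketch with hypothetical field names, not actual Camunda or Bedrock output:

```python
import json

# Hypothetical routing result from the travel-agent prompt; the field names
# are assumptions for illustration only.
response_text = '{"invoke_flight": true, "invoke_hotel": true, "invoke_car": false}'
routing = json.loads(response_text)

# Values stored as process variables so later gateways can decide which
# specialist agents to invoke.
invoke_flight = routing.get("invoke_flight", False)
invoke_hotel = routing.get("invoke_hotel", False)
invoke_car = routing.get("invoke_car", False)

print(invoke_flight, invoke_hotel, invoke_car)  # True True False
```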

Let’s dissect the prompt provided to Bedrock in this example. Remember those terms we mentioned in the Using prompts and prompt engineering for automation section? This is a place where you see some of them—specifically, the max_tokens shown below.

Defined terms in the Bedrock prompt

Next, we can see the system prompt providing the LLM with the scope and parameters of the request. You can see that we are providing information about this Travel Agent acting as a routing agent for the travel request to assemble trips for the traveler.

System prompt providing LLM with scope and parameters

Guidance is then provided through the user role on what to do if the query is incomplete or missing information.

Guidance for what to do if the query is incomplete or missing information

Finally, each subsequent invoked agent has a section in the payload for the model.

Email agent

Email Agent payload

Flight agent

Flight Agent payload

Hotel agent

Hotel Agent payload

Car agent

Car Agent payload

All of these agents work together or separately depending on the requirements for the traveler.

Booking agent to take over booking

When the traveler is happy with the selections provided, the booking agent can take over to book the itinerary. Keep in mind, the prompts shown in this example were “engineered” so that the selected LLM and other parameters would provide the best results.

Want more information?

When using generative AI models in your organization, prompt engineering is key to getting the most out of this automation and experience.

If you want to take a look at these files in more detail, please access this GitHub repository.

The post Putting AI Prompt Engineering Into Practice appeared first on Camunda.

]]>