Solving the RPA Challenge with Camunda
https://camunda.com/blog/2025/06/solving-rpa-challenge-with-camunda/ | Fri, 06 Jun 2025
See how you can solve the RPA Challenge (and much more when it comes to orchestrating your RPA bots) with Camunda.

The RPA Challenge is a popular benchmark in the automation community, designed to test how well bots can handle dynamic web forms. The task involves filling out a form that changes its layout with each submission, using data from an Excel file. While this might seem tricky, Camunda’s RPA capabilities make it surprisingly straightforward.

In this post, we’ll walk through what RPA is and how to tackle the RPA Challenge using Camunda’s tools, from scripting the bot to deploying and executing it within a BPMN workflow. Finally, we’ll look at how process orchestration can supercharge your RPA bots.

What is RPA, and why should you care?

Robotic Process Automation (RPA) is a technology that allows you to automate repetitive, rule-based tasks typically performed by humans. Think of actions like copying data between systems, filling out web forms, or processing invoices—if it follows a predictable pattern, RPA can probably handle it. The goal is to free up people from mundane work so they can focus on higher-value tasks that require creativity, problem-solving, or empathy.

At the heart of RPA is the RPA bot, a small script that mimics human actions on a computer. These bots can click buttons, read emails, move files, enter data, and more. They’re like digital assistants that never sleep, don’t make typos, and follow instructions exactly as given. And unlike traditional software scripts, RPA bots are often designed to work with existing user interfaces, so you don’t need to rebuild backend systems to automate work.

If you already use process orchestration, why use RPA at all? Because it’s a fast, cost-effective way to automate existing business processes without major changes to your IT infrastructure. Whether you’re streamlining internal workflows or integrating legacy systems with modern platforms like Camunda, RPA gives you the power to get more done, faster—and with fewer errors. When combined with process orchestration, it becomes even more powerful, allowing bots to operate as part of larger, end-to-end business processes.

Follow along with the video

We’re going to dig into the RPA Challenge with a full tutorial below, but feel free to follow along with the video as well.

Understanding the RPA script

To solve the RPA Challenge, we used a script written in Robot Framework, a generic open-source automation framework. The script is built with Camunda’s RPA components, allowing seamless orchestration within a BPMN process. You can view the full script, as well as a BPMN model that uses the script, by clicking here. In this section, we’ll walk through the script in detail.

Settings

*** Settings ***
Documentation       Robot to solve the first challenge at rpachallenge.com,
...                 which consists of filling a form that randomly rearranges
...                 itself for ten times, with data taken from a provided
...                 Microsoft Excel file. Return Congratulation message to Camunda.
Library             Camunda.Browser.Selenium    auto_close=${False}
Library             Camunda.Excel.Files 
Library             Camunda.HTTP
Library             Camunda

The Settings section defines metadata and dependencies for the script:

  • Documentation: A human-readable description of what the robot does. In this case, it describes the task of filling a dynamically rearranging form using Excel data.
  • Library: These lines load external libraries required for the script to run. Robot Framework supports many built-in and third-party libraries. When using Camunda’s implementation, you can also use Camunda-specific RPA libraries tailored to browser automation, Excel file handling, HTTP actions, and integration with the Camunda platform.

When writing an RPA script, you can import as many libraries as needed to accomplish the task at hand. Here’s what each library used in this challenge does:

  • Camunda.Browser.Selenium: Enables browser automation via Selenium. The auto_close=${False} argument ensures the browser doesn’t automatically close after execution, useful for debugging.
  • Camunda.Excel.Files: Provides utilities to read data from Excel files, which is essential for this challenge.
  • Camunda.HTTP: Used to download the Excel file from the RPA Challenge site.
  • Camunda: Core Camunda RPA library that helps interact with the platform, such as setting output variables for process orchestration.

Tasks

*** Tasks ***
Complete the challenge
    Start the challenge
    Fill the forms
    Collect the results

The Tasks section defines high-level actions that the robot will execute. In Robot Framework, a “task” is essentially a named sequence of keyword calls and defines every action the robot will take.

Each task is given a friendly name (for this challenge, the task is named “Complete the challenge”). This task is composed of three keyword invocations:

  • Start the challenge: Opens the site and prepares the environment.
  • Fill the forms: Loops through the Excel data and fills out the form.
  • Collect the results: Captures the output message after the form submissions are complete.

Each of these steps corresponds to a custom keyword defined in the next section.

Keywords

The Keywords section is where we define reusable building blocks of the automation script. Keywords are like functions or procedures. Each keyword performs a specific operation, and you can call them from tasks or other keywords. The order in which keywords are defined does not matter; they are executed in the order they are called from the task.

Let’s break down each one.

Start the challenge

Start the challenge
    Open Browser   http://rpachallenge.com/         browser=chrome
    Maximize Browser Window
    Camunda.HTTP.Download
    ...    http://rpachallenge.com/assets/downloadFiles/challenge.xlsx
    ...    overwrite=True
    Click Button    xpath=//button[contains(text(), 'Start')]

This keyword sets up the browser environment and downloads the data file required for the challenge. It begins by launching a Chrome browser using the Open Browser keyword, which is part of the Camunda.Browser.Selenium library imported earlier, and navigates to rpachallenge.com. Once the site loads, the Maximize Browser Window keyword ensures that all elements on the page are fully visible and accessible for automation.

The script then uses the Camunda.HTTP.Download keyword from the Camunda.HTTP library to download the Excel file containing the challenge data; the overwrite=True argument ensures that an up-to-date version of the file is used each time the bot runs. Finally, it clicks the “Start” button on the page using the Click Button keyword, targeting the element via an XPath expression that identifies the button by its text. This action triggers the start of the challenge and reveals the form to be filled.

This keyword sets the stage for the main task by ensuring we’re on the correct page and have the necessary data.

Fill the forms

Fill the forms
    ${people}=    Get the list of people from the Excel file
    FOR    ${person}    IN    @{people}
        Fill and submit the form    ${person}
    END

This keyword performs the core automation logic by iterating over the data and filling out the form multiple times. It starts by calling the custom keyword Get the list of people from the Excel file, which reads the downloaded Excel file and returns its contents as a table—each row representing a different person.

The script then enters a loop using Robot Framework’s FOR ... IN ... END syntax, iterating through each person in the dataset. Within this loop, it calls the Fill and submit the form keyword, passing in the current person’s data. This step ensures that the form is filled and submitted once for every individual listed in the Excel file, effectively completing all ten iterations of the challenge.

The keyword demonstrates how modular and readable Robot Framework scripts can be. Each action is broken into self-contained logic blocks, which keeps the code clean and reusable.

Get the list of people from the Excel file

Get the list of people from the Excel file
    Open Workbook    challenge.xlsx
    ${table}=    Read Worksheet As Table    header=True
    Close Workbook
    RETURN    ${table}

This keyword is responsible for reading and parsing the data from the Excel file. It begins by using the Open Workbook keyword to open the downloaded challenge.xlsx file. Once the file is open, the Read Worksheet As Table keyword reads the contents of the worksheet and stores it as a table, with the header=True argument ensuring that the first row is treated as column headers—making the data easier to work with.

After reading the data, the Close Workbook keyword is called to properly close the file, which is a best practice to avoid file access issues. Finally, the keyword returns the parsed table using RETURN ${table}, allowing the calling keyword to loop through each row in the dataset.

The result is a list of dictionaries, where each dictionary represents a person’s data (e.g., first name, last name, email, etc.).

Fill and submit the form

Fill and submit the form
    [Arguments]    ${person}
    Input Text    xpath=//input[@ng-reflect-name="labelFirstName"]    ${person}[First Name]
    Input Text    xpath=//input[@ng-reflect-name="labelLastName"]    ${person}[Last Name]
    Input Text    xpath=//input[@ng-reflect-name="labelCompanyName"]    ${person}[Company Name]
    Input Text    xpath=//input[@ng-reflect-name="labelRole"]    ${person}[Role in Company]
    Input Text    xpath=//input[@ng-reflect-name="labelAddress"]    ${person}[Address]
    Input Text    xpath=//input[@ng-reflect-name="labelEmail"]    ${person}[Email]
    Input Text    xpath=//input[@ng-reflect-name="labelPhone"]    ${person}[Phone Number]
    Click Button    xpath=//input[@type='submit']

This keyword fills out the web form using data for a single person. It begins with the [Arguments] ${person} declaration, which accepts a dictionary containing one individual’s details—such as name, email, and company information—retrieved from the Excel file. The form fields are then populated using multiple Input Text keywords, each one targeting a specific input element on the page. These fields are identified using XPath expressions that match the ng-reflect-name attributes, ensuring the correct data is entered in the right place regardless of how the form rearranges itself. Once all fields are filled in, the Click Button keyword is used to submit the form, completing one iteration of the challenge.

The challenge dynamically rearranges form fields on each iteration, but these attributes remain consistent, making them a reliable way to target inputs.

Collect the results

Collect the results
    ${resultText}=    Get Text    xpath=//div[contains(@class, 'congratulations')]//div[contains(@class, 'message2')]
    Set Output Variable    resultText    ${resultText}
    Close Browser

After all the forms have been submitted, this keyword captures the final confirmation message displayed by the RPA Challenge. It starts by using the Get Text keyword to extract the congratulatory message shown on the screen, targeting the message element with an XPath expression that identifies the relevant section of the page.

The retrieved message is then passed to Camunda using the Set Output Variable keyword, which makes the result available to the surrounding BPMN process—allowing downstream tasks or process participants to use or display the outcome. Finally, the Close Browser keyword is called to shut down the browser window and clean up the automation environment.

Testing, deploying and executing with Camunda

Once your RPA script is ready, the next step is to test and integrate it into a Camunda workflow so it can be executed as part of a larger business process. To begin, you’ll need to download and install Camunda Modeler, a desktop application used to create BPMN diagrams and manage automation assets like RPA scripts. (Note: RPA scripts cannot be edited in Web Modeler currently; this feature is in development.) Desktop Modeler includes an RPA Script Editor, which allows you to open, write, test, and deploy Robot Framework scripts directly from within the application.

Before deploying the script to Camunda, it’s a good idea to test it locally. Start by launching a local RPA worker—a component that polls for and executes RPA jobs. You can find setup instructions for the worker in Camunda’s Getting Started with RPA guide. Once your worker is running, use the RPA Script Editor in Camunda Modeler to open your script and run it. This will launch the browser, execute your automation logic, and allow you to verify that the bot behaves as expected and completes the RPA Challenge successfully.

Test-rpa-script-camunda

After confirming the script works, you can deploy it to your Camunda 8 cluster. Ensure that you have set an “ID” for the RPA script; this ID is how your BPMN process will reference and invoke the script. In the Modeler, click the “Deploy” button in the RPA Script Editor and choose the target cluster.

Next, create a new BPMN model in the Camunda Modeler to orchestrate the RPA bot. Start with a simple diagram that includes a Start Event, a Task, and an End Event. Select the Task and change it to the “RPA Connector”. Then, in the input parameters for the task, set the “Script ID” parameter to the ID you set in the RPA script earlier.

Rpa-connector-camunda

Once your BPMN model is ready, deploy it to the same Camunda cluster. To execute the process, you can either use the Camunda Console’s UI to start a new process instance or call the REST API. The RPA Worker will pick up the job, run the associated script, and return any output variables—like the final confirmation message from the RPA Challenge—back to Camunda. You can monitor the execution and troubleshoot any issues in Camunda Operate, which provides visibility into your running and completed processes, including logs and variable values.

With that, your RPA script is fully integrated into a Camunda process. You now have a bot that not only completes the challenge but does so as part of a well-orchestrated, transparent, and scalable business workflow.

How Camunda supercharges your RPA bots

RPA is great at automating individual tasks—but what happens when you need to coordinate multiple bots, connect them with human workflows, or make them part of a larger business process? That’s where Camunda comes in.

Camunda is a process orchestration platform that helps you model, automate, and monitor complex business processes from end to end. Think of it as the brain that tells your RPA bots when to run, what data to use, and how they fit into the bigger picture. With Camunda, your bots are no longer isolated automation islands—they become integrated, managed components of scalable workflows.

For example, you might use a Camunda BPMN diagram to define a process where:

  1. A customer submits a request (via a form or API),
  2. An RPA bot retrieves data from a legacy system,
  3. A human reviews the output,
  4. Another bot sends the results via email.

Camunda handles all of this orchestration—making sure each task runs in the right order, managing exceptions, tracking progress, and providing visibility through tools like Camunda Operate. And because Camunda is standards-based (using BPMN), you get a clear, visual representation of your processes that both developers and business stakeholders can understand.

When you combine RPA with Camunda, you’re not just automating tasks—you’re transforming how your organization runs processes. You get flexibility, scalability, and transparency, all while reducing manual effort and human error. Whether you’re scaling up existing bots or orchestrating new workflows from scratch, Camunda makes your RPA investments go further.

Conclusion

Automating the RPA Challenge with Camunda showcases its ability to handle dynamic, UI-based tasks seamlessly. By combining Robot Framework scripting with Camunda’s orchestration capabilities, you can build robust automation workflows that integrate both modern and legacy systems.

Ready to take your automation to the next level? Explore Camunda’s RPA features and see how they can streamline your business processes. Be sure to check out our blog post on how to Build Your First Camunda RPA Task as well.

Build Your First Camunda RPA Task
https://camunda.com/blog/2025/05/build-your-first-camunda-rpa-task/ | Wed, 07 May 2025
Leverage robotic process automation (RPA) to automate tasks, enhance efficiency, and minimize human errors.

You may have heard that Camunda now provides Camunda Robotic Process Automation (RPA) for automating manual, repetitive tasks to streamline your orchestrated processes. But would you know where to begin?

This blog will provide step-by-step instructions so you can build your first Camunda RPA task and then run it.

RPA leverages software robots to automate tasks traditionally handled manually. By automating these processes, organizations can enhance efficiency and minimize human errors. These RPA tasks can be integrated into your end-to-end Camunda processes, connecting previously isolated bots.

Terminology

Before getting started, it is important to understand the terminology related to robotic process automation.

  • Bot/Robot: A software agent that executes tasks and automates processes by interacting with applications, systems, and data.
  • Robot script: The script that tells the robot what actions to execute, such as keystrokes and mouse clicks.

Overview of the model you will build

For this example, you’ll build a model similar to the one below.

Camunda RPA Task 1

In this scenario, you’re providing the end user with the ability to generate a QR code for a website. The RPA bot will access a QR website (www.qrcode-monkey.com) to generate the QR code and bring that back to the end user.

The process starts with a form for the URL that requires a QR code. Once the URL is entered and the form is submitted, the process will use a `.robot` script that provides this URL to the website to generate the QR code. Once generated, the QR code is displayed in a form that the end user can download if desired.

What you’re going to do today is:

  • Create a model in Camunda SaaS.
  • Install the RPA runtime on your local machine.
  • Create an RPA script on that local machine.
  • Deploy the RPA script to the cloud.
  • Run your process, which will launch the RPA runtime on your local machine.

Assumptions and initial configuration

If you’re using this step-by-step guide to create your first Camunda RPA bot process, let’s make sure you have a few things in order before you begin.

  • In order to take advantage of the latest and greatest functionality provided by Camunda, you’ll need to have a Camunda 8.7.x cluster or higher available for use in Camunda SaaS.
  • You will be using Web Modeler and Forms to create your model and human task interface.
  • You’ll create your RPA script in Desktop Modeler.
  • You’ll deploy and execute your process in Camunda SaaS.
  • You will also be running the Camunda RPA robot on your machine (RPA Worker), and you will be deploying your RPA script to the cloud so when your process runs, it will trigger the robot locally.

Required skills

It is assumed that those using this guide have the following skills with Camunda:

  • Form Editor—the ability to create forms for use in a process.
  • Web Modeler—the ability to create elements in BPMN and connect elements together properly, link forms, and update properties for connectors.
  • Desktop Modeler—understanding of installation and use of Desktop Modeler.
  • TaskList—the ability to open items and act upon them accordingly, as well as starting processes.

GitHub repository and video tutorial

If you don’t want to build this process from scratch, you can access the GitHub repository and download the individual components. We’ve also created a step-by-step video tutorial for you. The steps in this guide closely mirror the steps taken in the video tutorial.

Installations

Desktop Modeler

To build your RPA script, install Camunda Desktop Modeler if you don’t already have it. If it’s already installed, make sure it’s the latest version so you can take advantage of the Camunda RPA functionality within the application.

Download the appropriate version of Camunda Desktop Modeler for your machine. Make sure you’re using version 5.34.0 or higher of Desktop Modeler. Installation instructions are on the download page, but essentially you’ll unpack the archive (for Windows) or open the DMG (for macOS). Then you will start the Camunda Modeler (executable for Windows, application for macOS) when you want to run the application.

RPA worker

Download and install the RPA worker that will run your RPA script locally on your machine. You can find the RPA worker in GitHub. Select version 1.0.1 or later for this tutorial.

Scroll down to the assets and select the appropriate asset for your local machine:

  • For Windows: rpa-worker_1.0.1_win32_amd64.zip
  • For macOS: rpa-worker_1.0.1_darwin_aarch64.zip

Extract the contents of the .zip file on your local machine. This creates a directory of the same root name as the .zip file, which will contain your executable for the RPA-worker and an application.properties file for configuration purposes.

Updating the properties file

Update the application.properties file with the correct information for your configuration before starting the executable for the RPA worker. You can obtain many of these settings by creating an API Client and reviewing the settings and values.

We’ll cover how to do this and what to modify in that file later in this blog.

Notes for macOS installation

You may want to install Homebrew for running the RPA worker on your local macOS machine. To do so, run the following command:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Followed by:

echo >> ~/.zprofile
echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile
eval "$(/opt/homebrew/bin/brew shellenv)"

Confirm that you’re running Python 3.11. You’ll start the rpa-worker executable with this command: ./rpa-worker_1.0.1_darwin_aarch64

Do not start the executable until requested later in this tutorial.

Building your process

Follow these steps to create your RPA bot process.

Create a new project for your process model and any other associated assets (this example uses RPA Tutorial).

Using the drop down menu, create a new BPMN diagram.

Camunda RPA Task 2

Name your new process Get a QR Code.

Initial model

Now it’s time to build your process model in BPMN and the appropriate forms for any human tasks. You’ll be building the model represented below:

Camunda RPA Task 3

Begin by building your model in Design mode as shown below.

Camunda RPA Task 4

To create your initial model, name your start event. This example calls it URL needs QR.

Camunda RPA Task 5

Add an End Event and call it QR Generated.

Camunda RPA Task 6

Next, create a task called Create QR Code. This will be the RPA Connector task to run Camunda RPA.

Then create a human task called Display QR Code. This will display the code obtained through RPA in the prior step.

Camunda RPA Task 7

Next, create a form to serve as the frontend to the process providing the URL that needs a QR generated.

Click the Start task, select to Link Form, and select Create a New Form (the blue button).

All you really need on this form is a text field to enter the URL that needs to be generated. For the text field, enter the label. The key (or variable) should be url.
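For reference, a form like this corresponds roughly to the following Camunda form JSON (a simplified sketch; the label text is just an example, and the schema the form editor generates will also include ids, a schema version, and exporter metadata):

{
  "type": "default",
  "components": [
    {
      "type": "textfield",
      "label": "URL",
      "key": "url"
    }
  ]
}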

Camunda RPA Task 8

The form shown here includes a title field (Text view) in addition to the text field. This is not required.

Click Go to diagram -> to return to your diagram.

Select the first task, Create QR Code, and change the element type to RPA Connector.

Camunda RPA Task 9

If you do not see the template, search the Camunda Marketplace from Web Modeler and download the connector to your project. If you still do not see the connector, verify (in Implement mode) that you are checking problems against Zeebe 8.7 and not a prior version.

Switch to Implement mode in the Web Modeler and select your new RPA connector element.

In the properties for this element, enter the Script ID for the connector as RPA-Get-QR-Tutorial. This will be the script you’ll create using Desktop Modeler and deploy to the cloud later.

Camunda RPA Task 10

Select your human task to display the QR Code and link a form.

Camunda RPA Task 11

Select + Create new form to enter the form editor.

Camunda RPA Task 12

You’ll receive something back from the RPA bot, so create those items in the form. The first is a document preview for the document that will be coming back.

Enter some title (this example uses QR Code Results and then lists qrcode for the document reference—more on that coming up).

Camunda RPA Task 13

This form also creates a heading (Text view) field for the form. That is not necessary.

Click Go to diagram -> to return to your BPMN diagram.

Now that you have the initial model, let’s move on to setting up RPA and using Desktop Modeler to create the RPA script.

Create your RPA script

Now you need to create the RPA script that will tell the robot what tasks to complete, including launching the QR code website, entering the URL for the QR Code, and so on.

Launch Camunda Desktop Modeler.

Click RPA Script to generate a default script for getting started with your first RPA bot.

Camunda RPA Task 14

Connecting the RPA worker

The next step is to properly connect the RPA worker so you can execute RPA scripts properly.

When you select RPA script from the Camunda Modeler menu, you’ll see an example script that runs an RPA challenge.

Camunda RPA Task 15

At the bottom of the screen below the RPA initial script, you’ll see a note with a red icon stating that the RPA worker is not connected. You’ll take care of that next.

Camunda RPA Task 16

This warning message indicates that scripts can be created but not executed until the worker is properly connected.

Confirm that you’ve taken the proper steps to install the RPA worker covered at the beginning of this blog.

Now open a terminal window and run the appropriate executable from your RPA worker directory. For example, ./rpa-worker_1.0.1_darwin_aarch64.

When everything is working properly, the note at the bottom of the script page reveals that the RPA worker is connected and the icon turns green.

Camunda RPA Task 17

Testing the example script

To ensure that everything is working correctly, run the script to test it.

Click the Test Script icon to the immediate right of the RPA worker connected statement.

Camunda RPA Task 18

The script opens a browser, fills in fields quickly, and then completes. You can review the statistics from running the script in the testing output below your script.

You can see an example here:

Camunda RPA Task 19
Camunda RPA Task 20
Camunda RPA Task 21

The RPA script

Before you build the script, it’s important to understand what you want the script to accomplish. Let’s take a moment to review that. Essentially, you’ll be doing the following:

  • Open a browser.
  • Open the proper QR Code URL (https://www.qrcode-monkey.com).
  • Accept any required cookies by clicking Accept All Cookies.
  • Enter the URL provided in the Request a QR Code form in the proper form field on the website.
  • Click Create QR Code.
  • Copy the created QR code from the proper region to be presented in our final form in the process.

The browser screen should look something like this:

Camunda RPA Task 22

Create your script

Now that you’ve reviewed what the script needs to accomplish, go back to Camunda Modeler and remove everything under the last Library line so you can put together your script for this exercise.

You can also remove the Documentation section and remove the library for Camunda.Excel.Files, which you won’t be using.

So your starting script looks something like this:

*** Settings ***
Library             Camunda.Browser.Selenium
Library             Camunda.HTTP
Library             Camunda

Camunda RPA uses Robot Framework, an open source, Python-based RPA framework that lets you describe, step by step, the tasks you want to accomplish.

Let’s create your script.

Enter your Tasks section by adding the task name:

*** Tasks ***
Get QR Code

You’ll be creating two methods under this task:

Generate QR Code
Send QR Code

Now enter your Variables section:

*** Variables ***

You’ll want a single variable called url as shown here:

${url}	https://www.camunda.com

Your script should look something like this so far:

*** Settings ***
Library             Camunda.Browser.Selenium
Library             Camunda.HTTP
Library             Camunda


*** Tasks ***
Get QR Code
   Generate QR Code
   Send QR Code


*** Variables ***
${url}      https://www.camunda.com

Now create your Keywords section:

*** Keywords ***

This is where you will define the methods.

First define your Generate QR Code method with the following lines:

Generate QR Code
    Open Available Browser    https://www.qrcode-monkey.com
    Sleep    2s

You can elect to wait for a condition in lieu of the Sleep command if you like.
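For example, assuming the Camunda.Browser.Selenium library exposes the standard SeleniumLibrary waiting keywords, you could wait for a specific element to appear (such as the cookie-consent button used in the next step) instead of sleeping for a fixed time:

Wait Until Element Is Visible    id:onetrust-accept-btn-handler    timeout=10s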

You want to be able to click a button to accept the cookies. Use the Click Button method to do this:

Click Button		locator

Find the location for the Accept All Cookies button in your browser.

Camunda RPA Task 23

To do this, find the locator to enter in this statement. Open your web browser and go to https://qrcode-monkey.com. While hovering over Accept All Cookies, right click, and choose Inspect.

Camunda RPA Task 24

This displays the locator information for this button (onetrust-accept-btn-handler) as shown below:

Camunda RPA Task 25

Replace locator with id:onetrust-accept-btn-handler in the RPA script:

Click Button		id:onetrust-accept-btn-handler

Continue to find the applicable locators for the places where you’ll enter text on the web form. The next location will be where you enter the URL for the QR code generation. That locator is qrcodeUrl, so your next line will look like this:

Input Text    id:qrcodeUrl    ${url}

This ensures that you’re entering the variable from the form in Camunda as the text for this statement.

Next you want to be able to click a button to actually create the QR code, which will look like this:

Click Button    id:button-create-qr-code

In this case, run another Sleep so that the script doesn’t run too fast—you want to be able to see it running.

Sleep    2s

Now that you have the first method, Generate QR Code, you need to create the Send QR Code section of the RPA script.

You’ll want to capture a screenshot of your QR code, so use the Capture Element Screenshot statement. However, in this example, you’re not going to enter the locator ID. Instead, use xpath and search for an image as shown below:

Capture Element Screenshot    xpath://img[contains(@class, 'card-img-top')]    qr-code.png

Next, capture the URL that you used as well for the QR code:

Capture Element Screenshot    id:qrcodeUrl      qr-URL.png

Finally, upload the resulting screenshots for use in your Camunda process.

Upload Documents    **/*.png    qrCode

You’ll close the browser using Close Browser.

Your final script should look like the one shown below:

*** Settings ***
Library             Camunda.Browser.Selenium
Library             Camunda.HTTP
Library             Camunda


*** Tasks ***
Get QR Code
   Generate QR Code
   Send QR Code


*** Variables ***
${url}      https://www.camunda.com


*** Keywords ***


Generate QR Code
   Open Available Browser      https://www.qrcode-monkey.com/
   Sleep    2s
   Click Button    id:onetrust-accept-btn-handler
   Input Text      id:qrcodeUrl    ${url}
   Click Button    id:button-create-qr-code
   Sleep    2s


Send QR Code
   Capture Element Screenshot    xpath://img[contains(@class, 'card-img-top')]    qr-code.png
   Capture Element Screenshot    id:qrcodeUrl      qr-URL.png
   Upload Documents    **/*.png    qrCode
   Close Browser

Be sure to save your script. This example saves the script as getQRCode.rpa.

Test your script locally

Now that you’ve generated your script, it’s time to test it locally before connecting it to your Camunda SaaS Zeebe engine. To do this, temporarily remove the Upload Documents statement; you’ll add it back once the script is connected to the Camunda engine.

Confirm that your RPA worker is connected (see this section to review how to do this).

When your RPA worker is connected, click the Test Script icon.

Camunda RPA Task 18

A browser should open to the QR Code website. You’ll see the various buttons clicked, the URL filled in, and the screenshots created. This will happen quickly. You should receive a PASS status as shown below:

Camunda RPA Task 26
Camunda RPA Task 27

You can expand the various tasks for more detailed information on what took place in the script.

Camunda RPA Task 28
Camunda RPA Task 29

Deploying the RPA script

Now that you have a working script (be sure to add back in the Upload Documents statement), it’s time to deploy this script to your SaaS environment to make it available for use in process models.

Let’s take a little mental inventory on what you’ve done so far:

  • You have a model in Camunda Web Modeler that requests a URL from a user and then calls an RPA script to obtain a QR code for that URL and pass it back to the user for view.
  • You have RPA running on your local machine.
  • You have an RPA Script that will obtain the QR code by completing specific tasks.

A few final steps are needed before you can deploy your script, starting with making sure the ID in your RPA script matches the Script ID referenced in your cloud model.

Open your cloud model and select the Create QR Code task. Review the properties to find the Script ID and copy it.

Camunda RPA Task 30

You can see here the ID for this example is RPA-Get-QR-Tutorial.

Go back to Desktop Modeler. If it’s not already viewable, expand the properties in your Desktop Modeler while displaying the script.

Paste the RPA-Get-QR-Tutorial into the ID location for your script in Camunda Desktop Modeler.

Camunda RPA Task 31

To deploy your script, locate the Deploy icon at the bottom of your Desktop Modeler screen (the rocket ship). You’ll be prompted to enter some information for this deployment.

Camunda RPA Task 32

For your deployment, create an API client for Desktop Modeler in your SaaS Console; it provides the remaining values required for this dialog. First, open Console in your SaaS environment and select the appropriate cluster.

Click the API tab and then click Create new client.

Camunda RPA Task 33

Enter DesktopModeler as the name for the API client and select Zeebe for the required scope.

Camunda RPA Task 34

Click Create to create the credentials required. This displays the required items to fill into the dialog box in Desktop Modeler.

Camunda RPA Task 35

Before closing this dialog, be sure to click the Desktop Modeler tab to find the Cluster URL that you need.

Camunda RPA Task 36

Enter the Cluster URL, the Client ID, and the Client Secret, and then click Deploy.

Camunda RPA Task 37

You should receive verification that the script was properly deployed.

Connecting your RPA worker to the cloud

Just to clarify where you are again, the call to use the script is being orchestrated in the cloud, but it’s being run locally on your machine. Now you need to take the final steps to make sure everything is communicating properly in order to execute your RPA script.

Your next step is to connect your local RPA worker to the cloud engine.

Update your application.properties file in the RPA worker directory to the proper values for the following:

  • camunda.client.cluster-id
  • camunda.client.region
  • camunda.client.auth.client-id
  • camunda.client.auth.client-secret

To obtain these values, open your SaaS Console and create another API client. Select both Zeebe and Secrets scopes.

Camunda RPA Task 38

This example uses RPA-Tutorial-QR for this new client.

You can choose to download the credentials or copy/paste them into your application.properties file. Your file will look something like this:

## Camunda RPA Worker
## Full properties reference: https://github.com/camunda/rpa-worker?tab=readme-ov-file#configuration-reference

### General Configuration
#camunda.rpa.zeebe.worker-tags=default
#camunda.rpa.robot.default-timeout=PT5M
#camunda.rpa.robot.fail-fast=true
#camunda.rpa.python.extra-requirements=/path/to/extra-requirements.txt

### Zeebe Configuration
#### SaaS Production
camunda.client.mode=saas
camunda.client.auth.client-id=Sd6QUXtleuLio0Luk5wsPGM~RvyqDw~i
camunda.client.auth.client-secret=<SECRET HERE>
camunda.client.cloud.cluster-id=3c43f328-25bb-49c0-bd3f-7786af3b98c0
camunda.client.cloud.region=hel-1

Once you’ve updated your information, restart your RPA worker. It should read the new application.properties file that points to the proper configuration in SaaS.

Run your script

You’re almost there! Now it’s time to test your original BPMN diagram to make sure that everything is working correctly.

Go back to your Web Modeler and deploy your diagram to the proper cluster that you used to create your API clients.

Camunda RPA Task 39

Go to Tasklist, click the Processes tab, and start the process Get a QR Code.

Camunda RPA Task 40

You will be prompted to enter the URL for the QR Code in the form you created. Enter any URL you want to use here.

Camunda RPA Task 41

Click Start process.

You should see the browser open on your local machine. The URL will be entered, the screenshots generated, and then you’ll see another task appear in Tasklist (be sure to click on the Tasks tab in Tasklist).

Camunda RPA Task 42

When you open this task, you’ll see the generated QR code along with the URL that was entered in the first form.

Camunda RPA Task 43

Congratulations

You’ve completed your first RPA script! You’ve also deployed it, created a process model, and executed your Camunda RPA script in the process. We hope you enjoyed this step-by-step tutorial.

Be sure to check out the video tutorial to walk through the process.

How to Build a REST API with Spring Boot: A Step-by-Step Guide
https://camunda.com/blog/2025/05/how-to-build-a-rest-api-with-spring-boot-a-step-by-step-guide/ | Fri, 02 May 2025
From setting up your project to securing your endpoints, this guide lays the foundation for your API.

REST APIs are everywhere in today’s web and mobile applications. They play a crucial role in allowing different systems and services to communicate smoothly, making it easier to share data and functionality across platforms. It’s gotten to the point that knowing how to create a good RESTful API is a valuable skill.

Spring Boot stands out as one of the top choices for building RESTful services in Java. It’s favored by developers for its straightforward setup and how quickly you can get your API up and running. With Spring Boot, you don’t have to deal with complex configurations, allowing you to focus more on developing your application’s features. Additionally, it taps into the vast Java ecosystem, offering a wide range of libraries and tools that make adding functionalities like security, data management, and scalability much easier.

In this guide, we’ll take you through how Spring Boot can simplify and enhance the process, providing a more powerful and efficient way to build your REST API.

Note: If you’re just getting started with REST APIs in Java and want to learn the basics without using Spring Boot, check out our basic guide on creating a RESTful API in Java.

1. Setting up your Spring Boot project

Getting started with Spring Boot is straightforward, thanks to the tools and resources available. In this section, we’ll walk you through setting up your Spring Boot project, covering both the Spring Initializr method and alternative approaches using Maven or Gradle.

Create a new Spring Boot project

Spring Initializr is a web-based tool that simplifies the process of generating a Spring Boot project with the necessary dependencies. Here’s how to use it:

  1. Navigate to Spring Initializr at start.spring.io.
  2. Configure your project:
    • Project: Choose either Maven or Gradle as your build tool.
    • Language: Select Java.
    • Spring Boot: Choose the latest stable version.
    • Project metadata: Fill in details like Group (e.g., com.example) and Artifact (e.g., demo).
  3. Select dependencies:
    • Spring Web: Provides REST support.
    • Spring Boot DevTools: Enables hot reloading for faster development.
  4. Click Generate to download a ZIP file containing your new project.
  5. Extract the ZIP file and open the project in your preferred IDE (e.g., IntelliJ IDEA, Eclipse).

Alternative methods: Using Maven or Gradle

If you prefer setting up your project using the command line with Maven or Gradle, follow the instructions below.

Using Maven:

Open your terminal and run the following command to generate a Spring Boot project with Maven:

mvn archetype:generate -DgroupId=com.example -DartifactId=demo -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false

After generating the project, add the necessary Spring Boot dependencies to your pom.xml file.
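Note that a project generated from the quickstart archetype does not include Spring Boot, so in addition to the dependencies shown further below, you would typically add the Spring Boot parent POM to your pom.xml (the version here is only illustrative; use the current stable release):

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>3.3.0</version>
</parent>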

Using Gradle:

For Gradle users, execute the following command in your terminal:

gradle init --type java-application

Once the project is initialized, add the Spring Boot plugins and dependencies to your build.gradle file.
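As a rough sketch (plugin versions are illustrative; use the current stable releases), the relevant parts of your build.gradle might look like this:

plugins {
    id 'java'
    id 'org.springframework.boot' version '3.3.0'
    id 'io.spring.dependency-management' version '1.1.5'
}

repositories {
    mavenCentral()
}

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-web'
    developmentOnly 'org.springframework.boot:spring-boot-devtools'
}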

Code example: Generating a project with Spring Initializr

Here’s a quick example of how your pom.xml might look after selecting the necessary dependencies using Spring Initializr:

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-devtools</artifactId>
        <scope>runtime</scope>
    </dependency>
    <!-- Add other dependencies as needed -->
</dependencies>

Project structure overview

Once you’ve set up your project, it’s helpful to understand its basic structure. Here’s a breakdown of the main components we’ll be working with:

  • src/main/java: This folder contains all your application’s source code. You’ll typically organize your code into packages, such as controller, service, and repository, to maintain a clean structure.
  • src/main/resources: This directory holds configuration files and other resources. Key files include:
    • application.properties or application.yml: Configuration settings for your Spring Boot application.
    • Static resources: If you’re serving static content, such as HTML, CSS, or JavaScript files, they go here.
  • Application.java: Located in the src/main/java directory, this class serves as the entry point for your Spring Boot application. It contains the main method that bootstraps the application. Here’s what it typically looks like: 
package com.example.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

In this file, we have the following annotations:

  • @SpringBootApplication: This annotation marks the main class of a Spring Boot application. It combines three other annotations:
    • @EnableAutoConfiguration: Tells Spring Boot to start adding beans based on classpath settings, other beans, and various property settings.
    • @ComponentScan: Enables component scanning so that the web controller classes and other components you create will be automatically discovered and registered as beans in the Spring application context.
    • @Configuration: Allows you to register extra beans in the context or import additional configuration classes.

2. Creating your first REST API endpoint

Now that your Spring Boot project is set up, it’s time to create your first REST API endpoint. This involves building a Controller that will handle HTTP requests and send responses back to the client.

Build the controller

In Spring Boot, a controller is a crucial component that manages incoming HTTP requests and returns appropriate responses. It acts as the intermediary between the client and your application’s logic, processing requests and sending back data in formats like JSON or XML.

This also means that usually, your business logic is not inside the controller but rather somewhere else, and the controller just interacts with it. Keep this in mind when you build your own APIs—sometimes developers forget this part and couple the actual business logic with the API’s controller code, creating a complicated mess in their code.

With that out of the way, here’s how to create a simple controller in Spring Boot.

Let’s create a WelcomeController that responds to a GET request with a welcome message.

In your project’s src/main/java/com/example/demo directory (adjust the package name as necessary), create a new Java class named WelcomeController.java.

package com.example.demo;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class WelcomeController {

    @GetMapping("/welcome")
    public String welcome() {
        return "Welcome to the Spring Boot REST API!";
    }
}

Here’s what’s happening with this code:

  • @RestController: This annotation marks the class as a Controller where every method returns a domain object instead of a view. It’s a convenience annotation that combines @Controller and @ResponseBody.
  • @GetMapping("/welcome"): This annotation maps HTTP GET requests to the welcome() method. When a client sends a GET request to /welcome, this method is invoked.
  • public String welcome(): This method returns a simple welcome message as a plain text response. Spring Boot automatically converts this to the appropriate HTTP response.

Run the application

Start your Spring Boot application by running the Application class. You can do this from your IDE by right-clicking the Application class and selecting Run or by using the command line: 

./mvnw spring-boot:run

Or if you’re using Gradle:

./gradlew bootRun

Test the endpoint

Once the application is running, you can test your new endpoint. Open your web browser or use a tool like curl or Postman to send a GET request to http://localhost:8080/welcome. You should receive the following response:

Welcome to the Spring Boot REST API!
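Using curl from the command line, the same check looks like this:

curl http://localhost:8080/welcome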

This simple example demonstrates how to create a REST API endpoint using Spring Boot. The WelcomeController handles HTTP GET requests to the /welcome path and returns a welcome message.

3. Adding data models and business logic

With your first REST API endpoint up and running, it’s time to expand your application by introducing data models and business logic. This section will guide you through defining your data structures and creating a service layer to manage your application’s core functionality.

Define the data model

In any application, it’s essential to have a clear representation of the data you’ll be working with (e.g., your users, your products, the shopping cart, etc.). In Java, Plain Old Java Objects (POJOs) provide a simple and effective way to model your data.

POJO is just a fancy name for regular Java classes without any special restrictions or requirements, making them easy to create and maintain.

Create a User class that represents a user in your application. This class will have fields for id, name, and email.

In your project’s src/main/java/com/example/demo/model directory (you may need to create the model folder), create a new Java class named User.java. Add the following code:

package com.example.demo.model;

import jakarta.persistence.*;

@Entity
@Table(name="my_users")
public class User {
    @Id
    @GeneratedValue(strategy=GenerationType.IDENTITY)
    private Long id;
    private String name;
    private String email;

    // Default constructor
    public User() {
    }

    // Parameterized constructor
    public User(Long id, String name, String email) {
        this.id = id;
        this.name = name;
        this.email = email;
    }

    // Getters and Setters
    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getEmail() {
        return email;
    }

    public void setEmail(String email) {
        this.email = email;
    }

    // toString method
    @Override
    public String toString() {
        return "User{" +
                "id=" + id +
                ", name='" + name + '\'' +
                ", email='" + email + '\'' +
                '}';
    }
}

Here’s what’s happening in this file:

  • Fields: The User class has three private fields: id (of type Long), name, and email (both of type String).
  • Constructors:
    • Default constructor: Allows for the creation of a User object without setting any fields initially.
    • Parameterized constructor: Enables the creation of a User object with all fields initialized.
  • Getters and setters: Provide access to the private fields, allowing other parts of the application to retrieve and modify their values.
  • toString method: Offers a readable string representation of the User object, which can be useful for debugging and logging.

This simple POJO serves as the foundation for your data model, representing the structure of the user data your API will handle.

Create the service layer

Separating business logic from your controllers is a best practice in software development (remember, we gotta stay away from spaghetti code!). The service layer handles the core functionality of your application, such as processing data and applying business rules, while the controller layer manages HTTP requests and responses. This separation enhances the maintainability and scalability of your application.

Let’s create a UserService that manages a list of users. For this example, we’ll use an in-memory list to store user data.

In your project’s src/main/java/com/example/demo/service directory (create the service package if it doesn’t exist), create a new Java class named UserService.java. Add the following code:

package com.example.demo.service;

import com.example.demo.model.User;
import org.springframework.stereotype.Service;

import java.util.ArrayList;
import java.util.List;

@Service
public class UserService {
    private List<User> users = new ArrayList<>();

    // Constructor to initialize with some users
    public UserService() {
        users.add(new User(1L, "John Doe", "john.doe@example.com"));
        users.add(new User(2L, "Jane Smith", "jane.smith@example.com"));
    }

    // Method to retrieve all users
    public List<User> getAllUsers() {
        return users;
    }

    // Method to add a new user
    public void addUser(User user) {
        users.add(user);
    }

    // Method to find a user by ID
    public User getUserById(Long id) {
        return users.stream()
                .filter(user -> user.getId().equals(id))
                .findFirst()
                .orElse(null);
    }
}

And for this file, here’s what’s going on:

  • @Service annotation: Marks the UserService class as a Spring service component, making it eligible for component scanning and dependency injection.
  • User list: Maintains an in-memory list of User objects. In a real-world application, this would typically be replaced with a database.
  • Constructor: Initializes the service with a couple of sample users for demonstration purposes.
  • getAllUsers method: Returns the list of all users.
  • addUser method: Adds a new User to the list.
  • getUserById method: Searches for a User by their id and returns the user if found; otherwise, returns null.

Integrate the service with the controller

To use the UserService in your controller, inject it into your WelcomeController or create a new controller dedicated to user operations.

Here’s how you can modify the WelcomeController to include user-related endpoints:

package com.example.demo;

import com.example.demo.model.User;
import com.example.demo.service.UserService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

import java.util.List;

@RestController
public class WelcomeController {

    @Autowired
    private UserService userService;

    @GetMapping("/welcome")
    public String welcome() {
        return "Welcome to the Spring Boot REST API!";
    }

    @GetMapping("/users")
    public List<User> getAllUsers() {
        return userService.getAllUsers();
    }

    @GetMapping("/users/{id}")
    public User getUserById(@PathVariable Long id) {
        return userService.getUserById(id);
    }

    @PostMapping("/users")
    public void addUser(@RequestBody User user) {
        userService.addUser(user);
    }
}

Let’s take a closer look at what’s happening with this code:

  • @Autowired annotation: Injects the UserService into the WelcomeController, allowing the controller to use the service’s methods.
  • New endpoints:
    • GET /users: Retrieves the list of all users by calling userService.getAllUsers().
    • GET /users/{id}: Retrieves a specific user by their id using userService.getUserById(id).
    • POST /users: Adds a new user to the list by calling userService.addUser(user). The @RequestBody annotation indicates that the user data will be sent in the request body in JSON format.

Test the service layer

Restart your Spring Boot application and test the new endpoints to ensure everything is working as expected.

curl http://localhost:8080/users

You should get a response that looks somewhat like this:

[
  {
    "id": 1,
    "name": "John Doe",
    "email": "john.doe@example.com"
  },
  {
    "id": 2,
    "name": "Jane Smith",
    "email": "jane.smith@example.com"
  }
]

Retrieve a user by ID

curl http://localhost:8080/users/1

This endpoint should return the JSON containing the data for the user with ID 1:

{
  "id": 1,
  "name": "John Doe",
  "email": "john.doe@example.com"
}

Add a new user

curl -X POST -H "Content-Type: application/json" -d '{"id":3,"name":"Alice Johnson","email":"alice.johnson@example.com"}' http://localhost:8080/users

The response will be empty, as the addUser method returns void and no body is sent back as part of the creation process. By default, Spring responds with HTTP 200 OK and no content.

Verify the addition:

curl http://localhost:8080/users

The new user should now be returned as part of the list of users:

[
  {
    "id": 1,
    "name": "John Doe",
    "email": "john.doe@example.com"
  },
  {
    "id": 2,
    "name": "Jane Smith",
    "email": "jane.smith@example.com"
  },
  {
    "id": 3,
    "name": "Alice Johnson",
    "email": "alice.johnson@example.com"
  }
]

These tests confirm that your data model and service layer are functioning correctly, allowing you to manage user data through your REST API.
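One optional refinement: REST conventions usually return HTTP 201 Created when a new resource is added, whereas the handler above returns 200 OK with an empty body. A minimal sketch of how the POST handler could do this with ResponseEntity, assuming you also import org.springframework.http.HttpStatus and org.springframework.http.ResponseEntity in the controller:

@PostMapping("/users")
public ResponseEntity<User> addUser(@RequestBody User user) {
    // Delegate to the service layer, then report 201 Created along with the stored user
    userService.addUser(user);
    return ResponseEntity.status(HttpStatus.CREATED).body(user);
}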

4. Connecting to a database

Now let’s take this example to the next level by incorporating an actual database for persistence.

Spring Boot makes the process of connecting to a database seamless by integrating with Spring Data JPA, which allows you to interact with databases using Java objects. In this section, we’ll set up database connectivity using an in-memory H2 database for simplicity and create a JPA repository to manage your data.

Setting up a database

Spring Data JPA provides a robust and flexible way to interact with relational databases. For development and testing purposes, using an in-memory database like H2 is useful because it requires minimal configuration and doesn’t persist data after the application stops (so, it’s “kind of a database,” but you get the point).

Add the Spring Data JPA and H2 dependencies

First, include the necessary dependencies in your project to enable Spring Data JPA and the H2 database.

Using Maven:

Open your pom.xml file and add the following dependencies within the <dependencies> section:

<dependencies>
    <!-- Existing dependencies -->

    <!-- Spring Data JPA -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>

    <!-- H2 Database -->
    <dependency>
        <groupId>com.h2database</groupId>
        <artifactId>h2</artifactId>
        <scope>runtime</scope>
    </dependency>

    <!-- Other dependencies as needed -->
</dependencies>

Using Gradle:

Open your build.gradle file and add the following dependencies:

dependencies {
    // Existing dependencies

    // Spring Data JPA
    implementation 'org.springframework.boot:spring-boot-starter-data-jpa'

    // H2 Database
    runtimeOnly 'com.h2database:h2'

    // Other dependencies as needed
}

Configure the H2 Database

Next, configure Spring Boot to use the H2 in-memory database. Open the src/main/resources/application.properties file and add the following configurations:

# H2 Database Configuration
spring.datasource.url=jdbc:h2:mem:testdb
spring.datasource.driverClassName=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=
spring.jpa.database-platform=org.hibernate.dialect.H2Dialect

# Enable H2 Console
spring.h2.console.enabled=true
spring.h2.console.path=/h2-console

# Show SQL Statements in the Console (Optional)
spring.jpa.show-sql=true
spring.jpa.hibernate.ddl-auto=update

Here’s the explanation of the configuration from above:

  • spring.datasource.url: Specifies the JDBC URL for the H2 in-memory database named testdb.
  • spring.datasource.driverClassName: The driver class for H2.
  • spring.datasource.username & spring.datasource.password: Credentials for accessing the database. The default username for H2 is sa with an empty password.
  • spring.jpa.database-platform: Specifies the Hibernate dialect for H2.
  • spring.h2.console.enabled: Enables the H2 database console for easy access.
  • spring.h2.console.path: Sets the path to access the H2 console at http://localhost:8080/h2-console.
  • spring.jpa.show-sql: (Optional) Enables logging of SQL statements in the console.
  • spring.jpa.hibernate.ddl-auto: Automatically creates and updates the database schema based on your JPA entities.

Create a JPA repository

Spring Data JPA simplifies data access by providing repository interfaces that handle common CRUD operations. By extending Spring Data JPA’s JpaRepository interface, you can interact with the database without writing boilerplate code.

Create the repository interface

To manage User entities in the database, create a repository interface that extends JpaRepository. This interface provides various methods for performing CRUD operations.

In your project’s src/main/java/com/example/demo directory, create a new package named repository.

Inside the repository package, create a new Java interface named UserRepository.java with the following content: 

package com.example.demo.repository;

import com.example.demo.model.User;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Repository;

@Repository
public interface UserRepository extends JpaRepository<User, Long> {
    // Additional query methods can be defined here if needed
}

Here’s what’s happening with the above code:

  • @Repository annotation: Although not strictly necessary since Spring Data JPA automatically detects repository interfaces, adding @Repository enhances clarity and allows for exception translation.
  • Extending JpaRepository: By extending JpaRepository<User, Long>, the UserRepository interface inherits several methods for working with User persistence, including methods for saving, deleting, and finding User entities.
  • Generic parameters:
    • User: The type of the entity to manage.
    • Long: The type of the entity’s primary key.

Update the service layer to use the repository

With the repository in place, update your UserService to interact with the database instead of using an in-memory list.

Modify the UserService class: open UserService.java in the service package and update it as follows:

package com.example.demo.service;

import com.example.demo.exception.ResourceNotFoundException;
import com.example.demo.model.User;
import com.example.demo.repository.UserRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.util.List;
import java.util.Optional;

@Service
public class UserService {

    private final UserRepository userRepository;

    @Autowired
    public UserService(UserRepository userRepository) {
        this.userRepository = userRepository;
    }

    // Method to retrieve all users
    public List<User> getAllUsers() {
        return userRepository.findAll();
    }

    // Method to add a new user
    public User addUser(User user) {
        return userRepository.save(user);
    }

    // Method to find a user by ID
    public Optional<User> getUserById(Long id) {
        return userRepository.findById(id);
    }

    // Method to update a user
    public User updateUser(Long id, User userDetails) {
        User user = userRepository.findById(id)
                .orElseThrow(() -> new ResourceNotFoundException("User not found with id " + id));
        user.setName(userDetails.getName());
        user.setEmail(userDetails.getEmail());
        return userRepository.save(user);
    }

    // Method to delete a user
    public void deleteUser(Long id) {
        userRepository.deleteById(id);
    }
}

Here’s the detailed explanation of what’s happening with the updated code:

  • Dependency injection: The UserRepository is injected into the UserService using constructor injection, promoting immutability, and easier testing.
  • CRUD methods: The service methods now delegate data operations to the UserRepository, utilizing methods like findAll(), save(), and findById() provided by JpaRepository.
  • Optional return type: The getUserById method returns an Optional<User>, which helps handle cases where a user with the specified ID might not exist.
  • Additional methods: Methods for updating and deleting users have been added to demonstrate more comprehensive CRUD operations. Note that updateUser relies on a custom ResourceNotFoundException, which is defined a bit later in this guide and should live in a com.example.demo.exception package.

Update the controller to use the updated service

Finally, update your controller to use the updated UserService methods that interact with the database.

To modify the WelcomeController class, open WelcomeController.java and update it as follows:

package com.example.demo;

import com.example.demo.model.User;
import com.example.demo.exception.ResourceNotFoundException;
import com.example.demo.service.UserService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

import java.util.List;

@RestController
@RequestMapping("/api")
public class WelcomeController {

    private final UserService userService;

    @Autowired
    public WelcomeController(UserService userService) {
        this.userService = userService;
    }

    @GetMapping("/welcome")
    public String welcome() {
        return "Welcome to the Spring Boot REST API!";
    }

    @GetMapping("/users")
    public List<User> getAllUsers() {
        return userService.getAllUsers();
    }

    @GetMapping("/users/{id}")
    public ResponseEntity<User> getUserById(@PathVariable Long id) {
        User user = userService.getUserById(id)
                .orElseThrow(() -> new ResourceNotFoundException("User not found with id " + id));
        return ResponseEntity.ok(user);
    }

    @PostMapping("/users")
    public User addUser(@RequestBody User user) {
        return userService.addUser(user);
    }

    @PutMapping("/users/{id}")
    public ResponseEntity<User> updateUser(@PathVariable Long id, @RequestBody User userDetails) {
        User updatedUser = userService.updateUser(id, userDetails);
        return ResponseEntity.ok(updatedUser);
    }

    @DeleteMapping("/users/{id}")
    public ResponseEntity<Void> deleteUser(@PathVariable Long id) {
        userService.deleteUser(id);
        return ResponseEntity.noContent().build();
    }
}

This is what’s happening:

  • @RequestMapping("/api"): Sets a base path for all endpoints in the controller, e.g., /api/welcome and /api/users. Note that with this base path in place, the earlier test commands need the /api prefix (for example, http://localhost:8080/api/users).
  • CRUD endpoints:
    • GET /api/users: Retrieves all users.
    • GET /api/users/{id}: Retrieves a user by ID. Throws ResourceNotFoundException if the user doesn’t exist.
    • POST /api/users: Adds a new user.
    • PUT /api/users/{id}: Updates an existing user.
    • DELETE /api/users/{id}: Deletes a user by ID.
  • ResponseEntity: Provides more control over the HTTP response, allowing you to set status codes and headers as needed.

5. Handling HTTP methods: GET, POST, PUT, DELETE

With your Spring Boot project connected to a database and your data models in place, it’s time to implement the core functionalities of your REST API. This means handling the primary HTTP methods—GET, POST, PUT, and DELETE—to perform Create, Read, Update, and Delete (CRUD) operations on your User entities.

Implement CRUD operations with Spring Boot

CRUD operations are fundamental to any REST API, allowing clients to manage resources effectively. Here’s how you can implement each of these operations in your Spring Boot application.

GET request: retrieve a list of users or a specific user by ID

To fetch a list of all users, you’ll create a GET endpoint that returns a collection of User objects.

@GetMapping("/users")
public List<User> getAllUsers() {
    return userService.getAllUsers();
}

The code is quite straightforward, but here are the relevant parts:

  • @GetMapping("/users"): Maps HTTP GET requests to /api/users to this method.
  • public List<User> getAllUsers(): Returns a list of all users by invoking the getAllUsers() method from the UserService.

To fetch a single user by their ID, create another GET endpoint that accepts a path variable.

@GetMapping("/users/{id}")
public ResponseEntity<User> getUserById(@PathVariable Long id) {
    User user = userService.getUserById(id)
            .orElseThrow(() -> new ResourceNotFoundException("User not found with id " + id));
    return ResponseEntity.ok(user);
}

Here’s what’s happening:

  • @GetMapping("/users/{id}"): Maps HTTP GET requests to /api/users/{id} to this method.
  • @PathVariable Long id: Binds the {id} path variable to the id parameter.
  • userService.getUserById(id): Retrieves the user from the service layer.
  • ResponseEntity.ok(user): Returns the user with an HTTP 200 OK status.
  • Throws ResourceNotFoundException if the user is not found, which results in a 404 Not Found response.

POST request: add a new user

To add a new user to your database, create a POST endpoint that accepts user data in the request body.

@PostMapping("/users")
public User addUser(@RequestBody User user) {
    return userService.addUser(user);
}

Again, very straightforward, mainly thanks to JPA:

  • @PostMapping("/users"): Maps HTTP POST requests to /api/users to this method.
  • @RequestBody User user: Binds the incoming JSON payload to a User object.
  • userService.addUser(user): Saves the new user using the service layer.
  • Returns the saved User object, which includes the generated ID.

Example request body:

{
    "name": "Alice Johnson",
    "email": "alice.johnson@example.com"
}

PUT request: update an existing user

To update the details of an existing user, create a PUT endpoint that accepts the user ID and the updated data. This endpoint applies the changed properties to the user matching the provided ID.

@PutMapping("/users/{id}")
public ResponseEntity<User> updateUser(@PathVariable Long id, @RequestBody User userDetails) {
    User updatedUser = userService.updateUser(id, userDetails);
    return ResponseEntity.ok(updatedUser);
}

And here are the details of the new method:

  • @PutMapping("/users/{id}"): Maps HTTP PUT requests to /api/users/{id} to this method.
  • @PathVariable Long id: Binds the {id} path variable to the id parameter.
  • @RequestBody User userDetails: Binds the incoming JSON payload to a User object containing updated data.
  • userService.updateUser(id, userDetails): Updates the user using the service layer.
  • Returns the updated User object with an HTTP 200 OK status.

Example request body:

PUT /api/users/1
{
    "name": "Alice Smith",
    "email": "alice.smith@example.com"
}

DELETE request: delete a user by ID

To remove a user from your database, create a DELETE endpoint that accepts the user ID.

@DeleteMapping("/users/{id}")
public ResponseEntity<Void> deleteUser(@PathVariable Long id) {
    userService.deleteUser(id);
    return ResponseEntity.noContent().build();
}

This is a simple endpoint that has a few interesting points to notice:

  • @DeleteMapping("/users/{id}"): Maps HTTP DELETE requests to /api/users/{id} to this method.
  • @PathVariable Long id: Binds the {id} path variable to the id parameter.
  • userService.deleteUser(id): Deletes the user using the service layer.
  • Returns an HTTP 204 No Content status, indicating that the deletion was successful and there is no content to return.
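One caveat: deleteById does not report whether the given id actually existed, so this endpoint returns 204 No Content even for unknown users. If you would rather surface a 404 in that case, the service method could look the user up first. Here's a minimal, hedged sketch that reuses the ResourceNotFoundException introduced in the next section:

// In UserService: verify the user exists before deleting so missing ids surface as 404
public void deleteUser(Long id) {
    User user = userRepository.findById(id)
            .orElseThrow(() -> new ResourceNotFoundException("User not found with id " + id));
    userRepository.delete(user);
}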

ResourceNotFoundException class

To handle scenarios where a user is not found, create a custom exception class. This class helps in providing meaningful error responses to the client.

package com.example.demo.exception;

import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.ResponseStatus;

@ResponseStatus(value = HttpStatus.NOT_FOUND)
public class ResourceNotFoundException extends RuntimeException {
    public ResourceNotFoundException(String message) {
        super(message);
    }
}

And here’s what’s happening:

  • Package declaration: Place this class in an exception package within your project structure.
  • @ResponseStatus(HttpStatus.NOT_FOUND): Automatically sets the HTTP status to 404 Not Found when this exception is thrown.
  • Constructor: Accepts a custom error message that describes the exception.

6. Best practices for building REST APIs

Building a REST API is not just about making endpoints available; it’s also about ensuring that your API is reliable, maintainable, and secure. Adhering to best practices can significantly enhance the quality and usability of your API. In this section, we’ll cover some essential best practices to follow when building REST APIs with Spring Boot.

Follow REST principles

Adhering to REST (Representational State Transfer) principles ensures that your API is intuitive and easy to use. Key aspects include:

Use proper HTTP methods

Each HTTP method should correspond to a specific type of operation:

  • GET: Retrieve data from the server. Use for fetching resources without modifying them.
  • POST: Create a new resource on the server.
  • PUT: Update an existing resource entirely.
  • DELETE: Remove a resource from the server.

Use appropriate HTTP status codes

HTTP status codes communicate the result of an API request. Using the correct status codes helps clients understand the outcome of their requests.

  • 200 OK: The request was successful.
  • 201 Created: A new resource was successfully created.
  • 204 No Content: The request was successful, but there’s no content to return.
  • 400 Bad Request: The request was malformed or invalid.
  • 404 Not Found: The requested resource does not exist.
  • 500 Internal Server Error: An unexpected error occurred on the server.
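For example, the POST endpoint from earlier returns 200 OK by default. If you want to follow the 201 Created convention from the list above, it could be adapted as in this hedged sketch (it assumes the /api base path, that the User model exposes getId(), and an import of java.net.URI):

@PostMapping("/users")
public ResponseEntity<User> addUser(@RequestBody User user) {
    User saved = userService.addUser(user);
    // Point the Location header at the newly created resource
    URI location = URI.create("/api/users/" + saved.getId());
    return ResponseEntity.created(location).body(saved);
}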

Use versioning

API versioning helps ensure that changes or updates to your API don’t break existing clients. By versioning your API, you can introduce new features or make changes without disrupting service for users relying on older versions (this, of course, requires you to have multiple versions of the API deployed at the same time).

A common approach (and one of the most efficient) is to include the version number in the URL path.

@RestController
@RequestMapping("/api/v1")
public class UserControllerV1 {
    // Version 1 endpoints
}

@RestController
@RequestMapping("/api/v2")
public class UserControllerV2 {
    // Version 2 endpoints with updates or new features
}

Some benefits of versioning include:

  • Backward compatibility: Clients using older versions remain unaffected by changes.
  • Controlled rollouts: Gradually introduce new features and allow clients to migrate at their own pace.
  • Clear communication: Clearly indicate which version of the API is being used.

Paginate responses

When your API deals with large datasets, returning all records in a single response can lead to performance issues and increased load times. Implementing pagination helps manage data efficiently and improves the user experience.

Conveniently, Spring Data JPA provides built-in support for pagination through the Pageable interface, so you don't need to implement it by hand.

Controller example:

import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Pageable;

// Requires a matching service method that accepts a Pageable (see the sketch below)
@GetMapping("/users")
public Page<User> getAllUsers(
        @RequestParam(defaultValue = "0") int page,
        @RequestParam(defaultValue = "10") int size) {
    Pageable pageable = PageRequest.of(page, size);
    return userService.getAllUsers(pageable);
}
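The controller above assumes a service method that accepts a Pageable. A minimal sketch of what that could look like (the paged getAllUsers overload is an assumed addition, not part of the earlier code):

import org.springframework.data.domain.Page;
import org.springframework.data.domain.Pageable;

// In UserService: delegate paging to the repository, which inherits findAll(Pageable) from JpaRepository
public Page<User> getAllUsers(Pageable pageable) {
    return userRepository.findAll(pageable);
}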

Some benefits of pagination include:

  • Performance optimization: Reduces the amount of data transferred in each request.
  • Improved user experience: Faster response times and more manageable data chunks.
  • Scalability: Handles growth in data volume without degrading performance.

Additional best practices

While the four best practices outlined above are fundamental, consider incorporating the following additional practices to further enhance your REST API.

Use consistent naming conventions

A crucial factor for API adoption is the developer experience (DX) that you provide your users. One way you can improve the DX of your API is through the use of consistent naming conventions across all your endpoints and other user-facing aspects:

  • Endpoints: Make sure your endpoints are named after your resource and use plural nouns for resource names (e.g., /users instead of /user).
  • Path variables: Clearly define and consistently use path variables (e.g., /users/{id}).

Provide comprehensive documentation

In line with the idea of providing a good DX, make sure developers have everything they need to understand how to use your API.

Take advantage of tools like Swagger (OpenAPI) to generate interactive API documentation that developers can use to explore sample responses and validate their assumptions about the API.

Implement error handling

Error handling is often neglected by developers focused on the happy paths of their API endpoints. This leads to cryptic error messages or inconsistent error formats across endpoints, depending on who coded each one.

Consider creating a global exception handler to manage and format error responses consistently, and make sure your entire team uses it.

@ControllerAdvice
public class GlobalExceptionHandler {
    
    @ExceptionHandler(ResourceNotFoundException.class)
    public ResponseEntity<?> handleResourceNotFoundException(ResourceNotFoundException ex, WebRequest request) {
        ErrorDetails errorDetails = new ErrorDetails(new Date(), ex.getMessage(), request.getDescription(false));
        return new ResponseEntity<>(errorDetails, HttpStatus.NOT_FOUND);
    }
    
    // Handle other exceptions
}
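The handler above references an ErrorDetails class that isn't defined elsewhere in this guide; it's simply a small payload object describing the error. A hedged sketch of what it might look like:

import java.util.Date;

public class ErrorDetails {
    private final Date timestamp;
    private final String message;
    private final String details;

    public ErrorDetails(Date timestamp, String message, String details) {
        this.timestamp = timestamp;
        this.message = message;
        this.details = details;
    }

    public Date getTimestamp() { return timestamp; }
    public String getMessage() { return message; }
    public String getDetails() { return details; }
}

Note that the handler itself also needs the relevant imports (for example java.util.Date, org.springframework.web.context.request.WebRequest, and your ResourceNotFoundException class).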

Optimize performance

If you're starting to see longer response times on your endpoints, perhaps due to high traffic or an overloaded database, consider implementing some of the following optimization techniques.

  • Caching: Implement caching strategies to reduce database load and improve response times. This can be at the webserver level, or even at the application level by caching common requests to the database. First understand where the bottlenecks are, and then make sure caching is a valid option for them.
  • Asynchronous processing: Use asynchronous methods for long-running tasks to prevent blocking requests.
  • Ensure scalability: Design your API to handle increasing loads by implementing load balancing, horizontal scaling, and efficient resource management.
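As one illustration of application-level caching, Spring's cache abstraction can be layered onto the service with a couple of annotations. This is a hedged sketch, not part of the tutorial's code: the CachedUserService name is illustrative, and it assumes @EnableCaching is added to the application class and a cache starter or provider (for example spring-boot-starter-cache) is on the classpath.

package com.example.demo.service;

import com.example.demo.model.User;
import com.example.demo.repository.UserRepository;
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

import java.util.List;

@Service
public class CachedUserService {

    private final UserRepository userRepository;

    public CachedUserService(UserRepository userRepository) {
        this.userRepository = userRepository;
    }

    // Repeated reads are served from the "users" cache instead of hitting the database
    @Cacheable("users")
    public List<User> getAllUsers() {
        return userRepository.findAll();
    }

    // Invalidate the cached list whenever a user is added
    @CacheEvict(value = "users", allEntries = true)
    public User addUser(User user) {
        return userRepository.save(user);
    }
}

As the list above suggests, measure first: caching only helps if the read path is actually the bottleneck and the data tolerates being slightly stale.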

Conclusion

Building a REST API with Spring Boot involves several key steps, from setting up your project and defining data models to implementing CRUD operations and securing your endpoints. By following this guide, you’ve laid the foundation for a functional and secure API that can serve as the backbone of your web or mobile applications.

As you continue developing your API, consider exploring additional Spring Boot features such as advanced security configurations, caching strategies, and asynchronous processing. Experimenting with these tools will not only deepen your understanding of Spring Boot but also enable you to build more sophisticated and efficient APIs.

The post How to Build a REST API with Spring Boot: A Step-by-Step Guide appeared first on Camunda.

]]>
How to Succeed When Getting Started with Camunda 8 https://camunda.com/blog/2025/04/how-to-succeed-when-getting-started-with-camunda-8/ Tue, 29 Apr 2025 22:20:12 +0000 https://camunda.com/?p=136597 Avoid these four common pitfalls as you set up Camunda 8.

The post How to Succeed When Getting Started with Camunda 8 appeared first on Camunda.

]]>
After spending five years in Camunda support helping customers get up and running with the product, I’ve noticed a few recurring issues that are easy to avoid. Keep reading to learn about how you can ensure a good start with Camunda!

Don’t treat Camunda as a system of record

Using Camunda as a system of record can lead to several issues, such as a bloated data store that causes performance problems. There’s also the risk of storing personally identifiable information (PII) or other sensitive data in systems that don’t require it. To avoid these challenges, keep variable data to a minimum, only storing what’s necessary for the process.

For critical information, use a separate data store and reference it in the process using an ID, rather than storing it directly in Camunda. Ultimately, the data you store in Camunda should be strictly relevant to the process flow itself, helping to maintain both efficiency and security.

One way to keep variable data small is to make use of the Result Expression when using a connector. Most connectors will offer an Output Mapping to store the result of the connector. I commonly see users storing the entire result instead of making use of the Result Expression to store only the data they need.

For example, if you use the REST connector, you may get a result containing the status code of the response, some headers, and then some additional data from the endpoint. By using the Result Expression, you can map just the data you need to variables.

Prevent running into backpressure

Backpressure is a crucial mechanism in Zeebe that helps maintain system stability when processing slows down. It kicks in when the broker experiences high latency, preventing new events from being accepted until the system can catch up and the processing speeds return to normal. This ensures that the broker doesn’t become overwhelmed and continues to function efficiently.

To avoid hitting backpressure, there are several proactive steps you can take.

First, conduct thorough load testing to simulate real-world traffic and identify potential bottlenecks. The Camunda community provides two GitHub projects to help you perform load tests: the benchmarking toolset and the process automator.

Check out this in-depth blog post about benchmarking in Camunda.

Next, review your hardware specs to ensure they meet the performance requirements for your use case—insufficient resources can cause delays in processing. There are two common pitfalls when choosing your hardware, both relating to the hard drives attached to Zeebe brokers: be sure that your drives have a consistent minimum of 1000 IOPS and are not NFS. Slower drives, and the latency incurred by NFS, will cause your Zeebe brokers to perform inefficiently.

Make your system observable

Observability refers to the ability to track and analyze your system’s performance in real time. This gives you insight into how well your application is functioning and allows you to catch issues before they become significant problems.

Despite its importance, I often see people delay setting up observability until it’s too late. Being proactive about monitoring your application’s health can save a lot of time and effort in the long run by helping you identify and resolve issues early.

To help you get started monitoring your Camunda platform, Camunda comes out of the box with support for Prometheus and OpenTelemetry. You can review the metrics we provide in our documentation. We also provide dashboards for Grafana to make visualizing these metrics quick and easy.

In addition to monitoring the metrics provided by our application, be sure that you can collect and monitor application logs. These are critical for determining the root cause of issues that may arise. If accessing raw logs is an issue in your team, consider leveraging cloud vendor solutions like AWS CloudWatch or Azure Monitor.

Set up data retention early

Databases have limits, and as they grow, they can start to slow down. Storing unnecessary or outdated data causes your system to become bloated, which will negatively impact query performance and overall responsiveness. That’s where data retention policies come in.

By defining what data should be kept and what should be discarded, you ensure that your database stays lean and efficient, preventing performance issues as your system scales.

Don’t wait until your database becomes overwhelming to start cleaning it up. Make data retention a key part of your system planning from the beginning, so you can keep things running smoothly as your data grows.

Check out our documentation for more information on setting data retention policies for all the Camunda components.

While there is always more to learn as you get deeper into Camunda 8, avoiding these common pitfalls will help you get off to a strong start. And if you don’t want to deal with the hassle of hosting, sizing, and monitoring your platform, we also offer a SaaS option.

For anyone looking for a hand, of course, be sure to check out the docs, our forum, or contact your Camunda support representative directly, and we’ll be happy to help!

The post How to Succeed When Getting Started with Camunda 8 appeared first on Camunda.

]]>
An Advanced Ad-Hoc Sub-Process Tutorial https://camunda.com/blog/2025/04/an-advanced-ad-hoc-sub-process-tutorial/ Fri, 25 Apr 2025 02:09:15 +0000 https://camunda.com/?p=135934 Learn about the new ad-hoc sub-process capabilities and how you can take advantage of them to create dynamic process flows.

The post An Advanced Ad-Hoc Sub-Process Tutorial appeared first on Camunda.

]]>
Ad-hoc sub-processes are a new feature in Camunda 8.7 that allow you to define what task or tasks are to be performed during the execution of a process instance. Who or what decides which of the tasks are to be performed could be a person, rule, microservice, or artificial intelligence.

In this example, you’ll decide what those tasks are, and later on you’ll be able to add more tasks as you work through the process. We’ll use decision model and notation (DMN) rules along with Friendly Enough Expression Language (FEEL) expressions to carry out the logic. Let’s get started!

Table of contents

SaaS or C8Run?

Download and install Camunda 8 Run

Download and install Camunda Desktop Modeler

Create a process using an ad-hoc sub-process

Add logic for sequential or parallel tasks

Create a form to add more tasks and to include a breadcrumb trail for visibility

Run the process!

You’ve built your ad-hoc sub-process!

SaaS or C8Run?

You can choose either Camunda SaaS or Self-Managed. Camunda provides a free 30-day SaaS trial; if you go the Self-Managed route, I recommend using Camunda 8 Run to simplify standing up a local environment on your computer.

The next sections provide links to assist you in installing Camunda 8 Run and Desktop Modeler. If you’ve already installed Camunda or are using SaaS, you can skip to Create a process using an ad-hoc sub-process.

If using SaaS, be sure to create an 8.7 cluster first.

Download and install Camunda 8 Run

For detailed instructions on how to download and install Camunda 8.7 Run, refer to our documentation. Once you have it installed and running, continue on your journey right back here!

Download and install Camunda Desktop Modeler

Download and install Desktop Modeler. You may need to open the Alternative downloads dropdown to find your desired installation.

Select the appropriate operating system and follow the instructions to start Modeler up. We’ll use Desktop Modeler to create and deploy applications to Camunda 8 Run a little bit later.

Create a process using an ad-hoc sub-process

Start by creating a process that will let you select from a number of tasks to be executed in the ad-hoc sub-process.

Open Modeler and create a new process diagram. This post uses SaaS and Web Modeler, but the same principles apply to Desktop Modeler. Be sure to switch versions, if not set correctly already, to 8.7, as ad-hoc sub-processes are available to Camunda 8.7 and later versions.

ad-hoc sub-process 1

Next, add an ad-hoc sub-process after the start event; add a task and click the Change element icon.

ad-hoc sub-process 2

Your screen should look something like this. Notice the tilde (~) denoting the ad-hoc sub-process:

ad-hoc sub-process 3

Now add four User Tasks to the subprocess. We’ll label them Task A, Task B, Task C, and Task D. Be sure to update the ID for each of the tasks to Task_A, Task_B, Task_C, and Task_D. We’ll use these IDs later to determine which of the tasks to execute.

You can ignore the warnings indicating forms should be associated with User Tasks.

Add an end event after the ad-hoc sub-process as well.

ad-hoc sub-process 4

Add a collection (otherwise known as an array) to the ad-hoc sub-process that determines what task or tasks should be completed within it.

Put focus on the ad-hoc sub-process and add the variable activeElements to the Active elements collection property in the Properties panel of the ad-hoc sub-process. You’ll need to pass in this collection from the start of the process.

ad-hoc sub-process 5

Now you need to update the start event by giving it a name and adding a form to it. Put focus on the start event and enter in a name. It can be anything actually, but it’s always a best practice to name events. This post uses the name Select tasks.

Click the link icon above the start event and click Create new form.

ad-hoc sub-process 6

The form should take the name of the start event: Select tasks.

Now drag and drop a Tag list form element onto the Form Definition panel.

ad-hoc sub-process 7

The Tag list form element allows users to select from an array of items and pass it to Camunda as an array.

Next, update the Field label in the Tag list element to Select tasks and the Key to activeElements.

ad-hoc sub-process 8

By default, a Tag list uses Static options with one default option, and we’ll use that in this example. Add three more static options and rename the Label and Value of each to Task A, Task_A; Task B, Task_B; Task C, Task_C; and Task D, Task_D.

ad-hoc sub-process 9

Let’s run the process! Click Deploy and Run. For SaaS, be sure to switch back to the ad-hoc sub-process diagram to deploy and run it.

You can also shortcut this by simply running the process, as running a process also deploys it.

ad-hoc sub-process 10

You’ll receive a prompt asking which Camunda cluster to deploy to, but there is only one choice. See the documentation for how to deploy and run processes from Desktop Modeler to Camunda 8 Run.

Upon running a process instance, you should see the screen we created for the start event. Select one or more tasks and submit the form. This post selects Task A, Task B, and Task C. Click Run to start the process.

ad-hoc sub-process 11

A pop-up gives you a link to Camunda’s administrative console, Operate. If you happen to miss the pop-up, you can always click the grid icon in the upper left corner in Web Modeler. Select Operate in the menu.

ad-hoc sub-process 12
ad-hoc sub-process 13

Check out the documentation to see how to get to Operate in Camunda 8 Run.

Once in Operate, you should see your process definition. You can navigate to the process instance by clicking through the hyperlinks. If you caught the link in Web Modeler, you should be brought to the process instance directly. You should see something like this:

ad-hoc sub-process 14

As you can see, the process was started, the ad-hoc sub-process was invoked, and Task A, Task B, and Task C are all active. This was accomplished by passing in the activeElements variable set by the Tag list element in the Start form.

You can switch to Tasklist to complete the tasks. The ad-hoc sub-process will not complete until all three tasks are completed. Navigate to a task by clicking on it in the process diagram panel and clicking Open Tasklist in the dialog box.

ad-hoc sub-process 15

You should see all three tasks in Tasklist. Complete them by selecting each one, then click Assign to me and then click Complete Task.

ad-hoc sub-process 16

Once all three tasks are complete, you can return to Operate and confirm the process has completed.

ad-hoc sub-process 17

Now that you understand the basics of ad-hoc sub-processes, let’s add more advanced behavior:

  • What if you wanted to be able to decide whether those tasks are to be completed in parallel or in sequence?
  • What if you wanted to add more tasks to the process as you execute them?
  • What if you wanted a breadcrumb trail of the tasks that have been completed or will be completed?

In the next section, we’ll add rules and expressions to handle these scenarios. If you get turned around in the pursuit of building this example, we’ll provide solutions to help out.

Add logic for sequential or parallel tasks

Now we’ll add logic to allow the person starting the process to decide whether to run the selected tasks in sequence or in parallel. We’ll add a radio button group, an index variable, FEEL expressions, and rules to handle this.

Go back to the Select tasks form in Web Modeler. Add a Radio group element to the form.

ad-hoc sub-process 18

Update the Radio group element, selecting a Label of Sequential or Parallel and Static options of Sequential with a value of sequential and Parallel with a value of parallel. Update the Key to routingChoice and set the Default value to Sequential. Your screen should look something like this:

ad-hoc sub-process 19

Now you need to add some outputs to the Select tasks start event. Go back to the ad-hoc sub-process diagram and put focus on the Select tasks start event. Add the following Outputs, as shown below:

  • activeElements
  • index
  • tasksToExecute
ad-hoc sub-process 20

Next, update each with a FEEL expression. For activeElements, add the following expression:

{ "initialList": [],
  "appendedList": if routingChoice = "sequential" then append(initialList, tasksToExecute[1]) else tasksToExecute
}.appendedList

If you recall, activeElements is the collection of the task or tasks that are to be executed in the ad-hoc sub-process. Before, you simply passed the entire list, but now that you can choose between sequential or parallel behavior, you need to update the logic to account for that choice. If the choice is sequential, add the next task and that task only to activeElements.

If you’re not familiar with FEEL, let’s explain what you’re seeing here. This FEEL expression starts with the creation of a list called initialList. We then create another variable called appendedList by appending initialList with either the first task (if routingChoice is sequential) or the entire list (if routingChoice is parallel). We then pass back the contents of appendedList, as denoted by .appendedList on the last line, and populate activeElements.

ad-hoc sub-process 21

The index variable will be used to track where you are in the process. Set it to 1:

ad-hoc sub-process 22

In tasksToExecute, you’ll hold all of the tasks, whether in sequence or in parallel, in a list which you can use to display where you are in a breadcrumb trail. Use the following expression:

{ "initialList": [],
  "appendedList": if routingChoice = "parallel" then insert before(initialList, 1, tasksToExecute) else tasksToExecute
}.appendedList

In a similar fashion to activeElements, create a list variable called initialList. Next, insert tasks as a nested list if routingChoice is parallel or the entire list if routingChoice is sequential.

ad-hoc sub-process 23

Your screen should look something like this:

ad-hoc sub-process 24

Now you need to increase the index after completion of the ad-hoc sub-process and add some logic to determine if you’re done. In the process diagram, put focus on the ad-hoc sub-process and add an Output called index. Then add an expression of index + 1. Your screen should look something like this:

ad-hoc sub-process 25

Add two more Outputs to the ad-hoc sub-process, interjectYesNo with a value of no and interjectTasks with a value of null. We’ll be using these values later in a form inside the subprocess and this will set those variables to default values upon the conclusion of a sub-process iteration:

ad-hoc sub-process 26

Next, we’ll add a business rule task and a gateway to the process. Drag and drop a generic task from the palette on the left and change it to a Business rule task. Then drag and drop an Exclusive gateway from the palette after the Business rule task. You’ll probably need to move the End event to accommodate these items.

Your screen should look like this (you can see the palette on the left):

ad-hoc sub-process 27

Let’s create a rule set. Put focus on the Business rule task and click the link icon in the context pad that appears.

ad-hoc sub-process 28

In the dialog box that appears, click Create DMN diagram.

ad-hoc sub-process 29

In the decision requirements diagram (DRD) diagram that appears, set the Diagram and Decision names to Set next task.

ad-hoc sub-process 30

The names aren’t critical, but they should be descriptive.

Let’s write some rules! Click the blue list icon in the upper left corner of the Set next task decision table to open the DMN editor.

In the DMN editor, you’ll see a split-screen view. On the left is the DRD diagram with the Set next task decision table. On the right is the DMN editor where you can add and edit rules.

ad-hoc sub-process 31

First things first, update the Hit policy to First to keep things simple. The DMN will execute until it hits the first rule that matches. Check out the documentation for more information regarding Hit Policy.

ad-hoc sub-process 32

Let’s add some rules. In the DMN editor, you can add rule rows by clicking on the blue plus icon in the lower left. Add two rows to the decision table.

ad-hoc sub-process 33

Next, double click Input to open the expression editor. Your screen should look something like this:

ad-hoc sub-process 34

In this screen, enter the following expression: tasksToExecute[index]. Select Any for the Type. Your screen should look like this:

ad-hoc sub-process 35

Just to recap, you’ve incremented the index by one. Here you retrieve the next task or tasks, and now you’ll write rules to determine what to do based on what is retrieved.

In the first row input, enter the following FEEL expression: count(tasksToExecute[index]) > 1.

This checks whether the count of the tasksToExecute list at the new index is greater than one, which indicates parallel tasks. For now it’s not important, but it will be later. Next, double-click Output to open the expression editor.

ad-hoc sub-process 36

For Output name, enter activeElements, and for the Type, enter Any.

ad-hoc sub-process 37

In the first rule row output, enter the expression tasksToExecute[index].

If the count is greater than one, this means that there are parallel tasks to be executed next. All that’s needed is to pass on these tasks. The expression above does just that. You may also want to put in an annotation to remind yourself of the logic.

For example, you can enter Next set of tasks are parallel for the annotation.

Your screen should look like this:

ad-hoc sub-process 38

Next, add logic to the second row. Leave the otherwise notation - in for the input on the second row. Enter the following expression for the output of the second row:

{ "initialArray":[],  "appendedList": append (initialArray, tasksToExecute[index]) }.appendedList

What this does is create an empty list, add the next single task to be executed to the empty list, and then populate activeElements. You may want to add an annotation here as well: Next task is sequential.

Your screen should look like this:

ad-hoc sub-process 39

Now you need to add logic to the gateway to either end the process or to loop back to the ad-hoc sub-process. Go back to the ad-hoc sub-process in your project.

You might notice this in your process:

ad-hoc sub-process 40

Add a Result variable of activeElements and add a name of Set next task. Your screen should look like this:

ad-hoc sub-process 41

Add a name to the Exclusive gateway. Let’s use All tasks completed? Also, add the name Yes on the sequence flow from the gateway to the end event. Your screen should look like this:

ad-hoc sub-process 42

Change that sequence flow to a Default flow. Put focus on the sequence flow, click the Change element icon, and select Default flow.

ad-hoc sub-process 43

Notice the difference in the sequence flow now?

ad-hoc sub-process 44

Next, add a sequence flow from the All tasks completed? gateway back to the ad-hoc sub-process. Put focus on the gateway and click the arrow icon in the context pad.

ad-hoc sub-process 45

Draw the sequence flow back to the ad-hoc sub-process. You may need to adjust the sequence path for better clarity in the diagram.

ad-hoc sub-process 46

Add the name No to the sequence flow. Add the following Condition expression: activeElements[1] != null.

Your screen should look like this:

ad-hoc sub-process 47

Before running this process again, you need to deploy the Set next task rule. Switch over to the Set next task DMN and click Deploy.

ad-hoc sub-process 48

One update is needed in the starting form. Open the Select tasks form and go to the Select tasks form element. Change the Key from activeElements to tasksToExecute.

ad-hoc sub-process 49

If you recall, the outputs you defined in the Start event will add activeElements.

Go back to the ad-hoc sub-process diagram and click Run. This time, select Task A and Task B and leave the routing choice set to Sequential. Click Run.

ad-hoc sub-process 50

In your Tasklist, you should only see Task A. Claim and complete the task. Wait for a moment, and you should then see Task B in Tasklist. Claim and complete the task.

Now, if you go to Operate and view the completed process instance, it should look something like this:

ad-hoc sub-process 51

Start another ad-hoc sub-process but this time select a number of tasks and choose Parallel. Did you see the tasks execute in parallel? You should have!

In the next section, you’ll add a form to the tasks in the ad-hoc sub-process to allow users to add more parallel and sequential tasks during process execution. You’ll also add a breadcrumb trail to the form to provide users visibility into the tasks that have been completed and tasks that are yet to be completed.

Create a form to add more tasks and to include a breadcrumb trail for visibility

Go back to Web Modeler and make a duplicate of the start form. To do this, click the three-dot icon to the right of the form entry and click Duplicate.

ad-hoc sub-process 52

While you could use the same form for both the start of the process and task completion, it’ll be easier to make changes without being concerned about breaking other things in the short term. Name this duplicate Task completion.

ad-hoc sub-process 53

Click the Select tasks form element and change the Key to interjectTasks.

ad-hoc sub-process 54

We’ll add logic later to add to the tasksToExecute variable.

Next, add a condition to the form elements to show or hide them based on a variable. You’ll add this variable, based on a radio button group, soon. In the Select tasks form element, open the Condition property and enter the expression interjectYesNo = "no".

Your screen should look something like this:

ad-hoc sub-process 55

Repeat the same for the Sequential or parallel form element:

ad-hoc sub-process 56

You could just as easily put these elements into a container form element and set the condition property on the container instead of setting it on each element.

Next, add a Radio group to the form, above the Select tasks element. Set Field label to Interject any tasks?, Key to interjectYesNo, Static options to Yes and No with values of yes and no. Set Default value to No. Your screen should look like this:

ad-hoc sub-process 57

If you’ve done everything correctly, you should notice that the fields Select tasks and Sequential or parallel do not appear in the Form Preview pane. Given that No is selected in Interject any tasks?, this is the correct behavior. You should see both the Select tasks and Sequential or parallel fields if you select Yes in the Interject any tasks? radio group in Form Preview.

Next, you’ll add HTML to show a breadcrumb trail of tasks at the top of the form. Drag and drop an HTML view form element to the top of the form.

ad-hoc sub-process 58

Copy and paste the following into the Content property of the HTML view:

<div>
<style>
  .breadcrumb li {
    display: inline; /* Inline for horizontal list */
    margin-right: 5px;
  }

  .breadcrumb li:not(:last-child)::after {
    content: " > "; /* Insert " > " after all items except the last */
    padding-left: 5px;
  }

  .breadcrumb li:nth-child({{currentTask}}){ 
    font-weight: bold; /* Bold the current task */
    color: green;
  }
</style>
<ul class="breadcrumb">
    {{#loop breadcrumbTrail}}
      <li>{{this}}</li>
    {{/loop}}
  </ul>
</div>

Essentially this creates a breadcrumb trail using an HTML unordered list along with some CSS styling. You’ll need to provide two inputs, currentTask and breadcrumbTrail, which we’ll define next.

Your screen should look something like this:

ad-hoc sub-process 59

Let’s test the HTML view component. Copy and paste this into the Form Input pane:

{"breadcrumbTrail":["Task_A","Task_B","Task_C & Task_D"], "currentTask":2}

Your screen should look something like this (note that Task B is highlighted):

ad-hoc sub-process 60

Feel free to experiment with the CSS.

Go back to the ad-hoc sub-process diagram. You need to add inputs to the ad-hoc sub-process to feed this view. Be sure to put focus on the ad-hoc sub-process. Add an input called currentTask and set the value to index.

ad-hoc sub-process 61

Next, add an input called breadcrumbTrail and enter the following expression:

{  
  "build": [],
  parallelTasksFunction: function(tasks) string join(tasks, " & ") ,
  checkTaskFunction: function(task) if count(task) > 1 then parallelTasksFunction(task) else task,   
  "breadcrumbTrail": for task in tasksToExecute return concatenate (build, checkTaskFunction(task)),
  "breadcrumbTrail": flatten(breadcrumbTrail)
}.breadcrumbTrail

This expression takes the tasksToExecute variable and creates an HTML-friendly unordered list. It creates an empty array, build[], then defines a couple of functions:

  • parallelTasksFunction, that takes the parallel tasks and joins them together into a single string
  • checkTaskFunction, that sees if the list item is an array.

If the list item is an array, it calls the parallelTasksFunction. Otherwise it just returns the task. All the while, data is being added to the build[] list as defined in the loop in breadcrumbTrail. It is eventually flattened and returned for use by the HTML view to show the breadcrumb trail.

Your screen should look something like this:

ad-hoc sub-process 62

Next, link the four tasks in the ad-hoc sub-process to the Task Completion form.

ad-hoc sub-process 63

One last thing you need to do is add a rule set in the ad-hoc sub-process. This will add tasks to the tasksToExecute variable if users opt to add tasks as they complete them.

Add a Business rule task to the ad-hoc sub-process, add an exclusive gateway join, then add sequence flows from the tasks to the exclusive gateway join. Finally, add a sequence flow from the exclusive gateway join to the business rule task.

It might be easier to just view the next screenshot:

ad-hoc sub-process 64

Every time a task completes, it will also invoke the rule that you’re about to author.

Click the Business rule task and give it the name Update list of tasks. Click the link icon in the context pad, then click Create DMN diagram.

ad-hoc sub-process 65

You should see the DRD screen pop up. Click the blue list icon in the upper left corner of the Update list of tasks decision table.

ad-hoc sub-process 66

In the DMN editor, update the Hit Policy to First. Double-click Input and enter the following expression: interjectYesNo.

Optionally you can enter a label for the input, but we’ll leave it blank for now.

ad-hoc sub-process 67

Add another input to the table by clicking the plus sign button next to interjectYesNo.

ad-hoc sub-process 68

Once again double-click the second Input to open the expression editor and enter the following expression: routingChoice.

Double click Output to open the expression editor and enter the following: tasksToExecute.

ad-hoc sub-process 69

Just to recap—you’ll use the variables interjectYesNo and routingChoice from the form to determine what to do with tasksToExecute.

Let’s add the rules. Here is the matrix of rules if you don’t want to enter them manually:

interjectYesNo | routingChoice  | tasksToExecute
"no"           | -              | tasksToExecute
"yes"          | "sequential"   | concatenate(tasksToExecute, interjectTasks)
"yes"          | "parallel"     | if count(interjectTasks) > 1 then append(tasksToExecute, interjectTasks) else concatenate(tasksToExecute, interjectTasks)

Your screen should look something like this:

ad-hoc sub-process 70

There is an important difference between concatenate and append in FEEL here. In this context, concatenate adds the tasks as individual elements into the tasksToExecute list. Because the second argument of append takes any object, append adds that object whole; in this case, that means the entire list is added as a single nested element of tasksToExecute. It’s a subtle but important distinction.

You’ll need an additional check of the count of interjectTasks in row 3 of the Output, in the event the user selects Parallel but only selects one task. In that case, it’s treated like a sequential addition.

Don’t forget to click Deploy as the rule will not be automatically deployed to the server upon the execution of the process.

ad-hoc sub-process 71

Go back to ad-hoc sub-process and add the Result variable tasksToExecute.

ad-hoc sub-process 72

Run the process!

The moment of truth has arrived! Be sure to select the cluster you’ve created for running the process. Select Task A and Task B in the form. The form will default to a routing choice of sequential. Click Run. You should be presented with the start screen upon running the process.

ad-hoc sub-process 73

Check Operate, and your process instance should look something like this:

ad-hoc sub-process 74

Now check Tasklist and open Task A. It should look something like this:

ad-hoc sub-process 75

Click Assign to me to assign yourself the task. Select Yes to interject tasks. Next, select Task C and Task D and Parallel.

ad-hoc sub-process 76

Complete the task. You should see Task B appear in Tasklist. Select it and notice how Task C and Task D have been added in parallel to be executed after Task B.

Also note the current task highlighted in green. Assign yourself the task and complete it.

ad-hoc sub-process 77

You should now see Task C and Task D in Tasklist.

ad-hoc sub-process 78

Assign yourself Task C, interject Task A sequentially, and complete the task. You may need to clear out previous selections.

ad-hoc sub-process 79

Complete Task D without adding any more tasks. You’ll notice that Task A has not been picked up yet in the breadcrumb trail.

ad-hoc sub-process 80

Task A should appear in Tasklist:

ad-hoc sub-process 81

Notice the breadcrumb trail updates. Assign it to yourself and complete the task. Check Operate to ensure that the process has been completed.

ad-hoc sub-process 82

You can view the complete execution of the process in Instance History in the lower left pane.

You’ve built your ad-hoc sub-process!

Congratulations on completing the build and taking advantage of the power of ad-hoc sub-processes! Keep in mind that you can replace yourself in deciding which tasks to add, if any, by using rules, microservices, or even artificial intelligence.

Want to start working with AI agents in your ad-hoc sub-processes right now? Check out this guide for how to build an AI agent with Camunda.

Stay tuned for even more on how to make the most of this exciting new capability.

The post An Advanced Ad-Hoc Sub-Process Tutorial appeared first on Camunda.

]]>
Creating and Testing Custom Exporters Using Camunda 8 Run https://camunda.com/blog/2025/04/creating-testing-custom-exporters-camunda-8-run/ Fri, 18 Apr 2025 17:26:17 +0000 https://camunda.com/?p=134975 Learn how to create custom exporters and how to test them quickly using Camunda 8 Run.

The post Creating and Testing Custom Exporters Using Camunda 8 Run appeared first on Camunda.

]]>
If you’re familiar with Camunda 8, you’ll know that it includes exporters to Elasticsearch and Opensearch for user interfaces, reporting, and historical data storage. And many times folks want the ability to send data to other warehouses for their own purposes. While creating custom exporters has been available for some time, in this post we’ll explore how you can easily test them on your laptop using Camunda 8 Run (C8 Run).

C8 Run is specifically targeted for local development, making it faster and easier to build and test applications on your laptop before deploying it to a shared test environment. Thank you to our colleague Josh Wulf for this blog post detailing how to build an exporter.

Download and install Camunda 8 Run

For detailed instructions on how to download and install Camunda 8 Run, refer to our documentation here. Once you have it installed and running, continue on your journey right back here!

Download and install Camunda Desktop Modeler

You can download and install Desktop Modeler using the instructions found here. You may need to open the “Alternative downloads” dropdown to find your preferred installation. Select the appropriate operating system, follow the instructions, and start Modeler up. We’ll use Desktop Modeler to create and deploy sample applications to Camunda 8 Run a little bit later.

Create a sample exporter

First, we’ll create a very simple exporter, install it in your local C8 Run environment, and see the results. In this example, we’ll create a Maven project in IntelliJ, add the exporter dependency, and then create a Java class implementing the Exporter interface with straightforward logging to system out. Feel free to use your favorite integrated development environment and build automation tools.

Once you’ve created a sample Maven project, add the following dependency to the pom.xml file. Be sure to match the version of the dependency, at the very least the minor version, with your C8 Run installation.

<dependencies>
  <dependency>
      <groupId>io.camunda</groupId>
      <artifactId>zeebe-exporter-api</artifactId>
      <version>8.6.12</version>
  </dependency>
</dependencies>

After reloading the project with the updated dependency, go to the src/main/java folder and create a package called io.sample.exporter:

Sample-exporter

Next, create a class called SimpleExporter in the package:

Simple-exporter

In SimpleExporter add implements Exporter, and then you should be prompted to select an interface. Be sure to choose Exporter io.camunda.zeebe.exporter.api:

Exporter-interface

You’ll likely get a message saying you need to implement the export method of the interface. You’ll also want to implement the open method. Either select the option to implement the methods or create them yourself. The code should look something like this:

package io.sample.exporter;


import io.camunda.zeebe.exporter.api.Exporter;
import io.camunda.zeebe.exporter.api.context.Controller;
import io.camunda.zeebe.protocol.record.Record;


public class SimpleExporter implements Exporter
{
   @Override
   public void open(Controller controller) {
       Exporter.super.open(controller);
   }


   @Override
   public void export(Record<?> record) {
      
   }
}

Let’s make some updates. First, we’ll hold on to the Controller object, which provides a method to mark a record as exported and move the record position forward; otherwise, the Zeebe broker will never truncate its event log, which eventually leads to full disks. Add a field, Controller controller;, to the class and update the open method, replacing the generated code with: this.controller = controller;

Your code should now look something like this:

public class SimpleExporter implements Exporter
{
   Controller controller;


   @Override
   public void open(Controller controller) {
       this.controller = controller;
   }


   @Override
   public void export(Record<?> record) {
   }
}

Let’s implement the export method. We’ll print something to the log and move the record position forward. Add the following code to the export method:

if (!record.getValue().toString().contains("worker")) {
   System.out.println("SIMPLE_EXPORTER " + record.getValue().toString());
}
// Mark the record as exported so the broker can safely truncate its event log
controller.updateLastExportedRecordPosition(record.getPosition());

The connectors will generate a number of records and the if statement above will cut down on the noise so we can focus on events generated from processes. Your class should now look something like this:

public class SimpleExporter implements Exporter
{
   Controller controller;

   @Override
   public void open(Controller controller) {
       this.controller = controller;
   }

   @Override
   public void export(Record<?> record) {
       if (!record.getValue().toString().contains("worker")) {
           System.out.println("SIMPLE_EXPORTER " + record.getValue().toString());
       }
       // Mark the record as exported so the broker can safely truncate its event log
       controller.updateLastExportedRecordPosition(record.getPosition());
   }
}

Next, we’ll package this up as a jar file, add it to the Camunda 8 Run libraries, update the configuration file to point to this exporter and see it in action.

Add custom exporter to Camunda 8 Run

Using either Maven terminal commands (for example, mvn package) or your IDE’s Maven command interface, package the exporter. Depending on what you’ve defined for artifactId and version in your pom file, you should see a file named artifactId-version.jar in the target directory. Here is an example jar file with an artifactId of exporter and a version of 1.0-SNAPSHOT:

Example-jar-artifactid

You don’t have to copy this jar file into the Camunda 8 Run installation; as long as the Camunda 8 Run application can access the directory, you can place it anywhere. It’s a good idea, though. In this example, we’re placing the jar into the lib directory of the Camunda 8 Run installation at <Camunda 8 Run root directory>/camunda-zeebe-8.x.x/lib.
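
For example, using the artifactId and version above, the packaging and copying steps look like this from the project root:

mvn clean package
cp target/exporter-1.0-SNAPSHOT.jar "<Camunda 8 Run root directory>/camunda-zeebe-8.x.x/lib/"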

Lib-directory

Next, update the application.yaml configuration file to reference the custom exporter jar file. It can be found in the <Camunda 8 Run root directory>/camunda-zeebe-8.x.x/config directory.

Example Configuration:

zeebe:
  broker:
    exporters:
      customExporter:
        className: io.sample.exporter.SimpleExporter
        jarPath: <C8 Run dir>/camunda-zeebe-8.x.x/lib/exporter-1.0-SNAPSHOT.jar

This ensures that Camunda 8 Run recognizes and loads your custom exporter during startup.
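
If your exporter needs settings of its own (a target URL, credentials, and so on), the exporter entry also accepts an args block whose values you can read in the exporter’s configure method. Here’s a minimal sketch; the targetUrl argument is purely a hypothetical example:

// Hypothetical addition to application.yaml, beneath className and jarPath:
//   args:
//     targetUrl: http://localhost:9200

// And in SimpleExporter, override configure to read the argument:
@Override
public void configure(io.camunda.zeebe.exporter.api.context.Context context) {
    // getArguments() returns the args map defined in the exporter configuration
    Object targetUrl = context.getConfiguration().getArguments().get("targetUrl");
    context.getLogger().info("SimpleExporter configured with targetUrl={}", targetUrl);
}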

Now let’s start up Camunda 8 Run.

Start Camunda 8 Run and observe the custom exporter in action

Open a terminal window and change directory to the Camunda 8 Run root directory. In it you should find the start.sh or c8run.exe file, depending on your operating system. Start the appropriate one (./start.sh or c8run.exe).

Once Camunda 8 Run has started and you once again have a prompt, change directory to the log folder, i.e., <Camunda 8 Run root directory>/log. In that directory there should be three logs: camunda.log, connectors.log, and elasticsearch.log.

Log-directory

Start tailing or viewing camunda.log with your favorite tool. Next, we’ll create a very simple process, deploy it, and run it to view sample records from a process instance.
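
To follow just the exporter’s output while you work through the next steps, you might run something like this on macOS or Linux:

tail -f camunda.log | grep SIMPLE_EXPORTER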

Create and deploy a process flow in Desktop Modeler

Go to Modeler and create a new Camunda 8 BPMN diagram. Build a simple one-step process with a Start Event, a User Task, and an End Event. Deploy it to the Camunda 8 Run instance. Your Desktop Modeler should look something like this:

Process-camunda-desktop-modeler

You can then start a process instance from Desktop Modeler as shown here:

Start-instance-camunda-desktop-modeler

Go back to camunda.log and you should see entries that look something like this:

SIMPLE_EXPORTER {"resources":[],"processesMetadata":[{"bpmnProcessId":"Process_0nhopct","version":1,"processDefinitionKey":2251799813703443,"resourceName":"diagram_1.bpmn","checksum":"xbmiHFXd3lVQbwV1gq/UEQ==","isDuplicate":true,"tenantId":"<default>","deploymentKey":2251799813703442,"versionTag":""}],"decisionRequirementsMetadata":[],"decisionsMetadata":[],"formMetadata":[],"tenantId":"<default>","deploymentKey":2251799813704156}
SIMPLE_EXPORTER {"bpmnProcessId":"Process_0nhopct","processDefinitionKey":0,"processInstanceKey":-1,"version":-1,"variables":"gA==","fetchVariables":[],"startInstructions":[],"tenantId":"<default>"}
SIMPLE_EXPORTER {"bpmnElementType":"PROCESS","elementId":"Process_0nhopct","bpmnProcessId":"Process_0nhopct","version":1,"processDefinitionKey":2251799813703443,"processInstanceKey":2251799813704157,"flowScopeKey":-1,"bpmnEventType":"UNSPECIFIED","parentProcessInstanceKey":-1,"parentElementInstanceKey":-1,"tenantId":"<default>"}
SIMPLE_EXPORTER {"bpmnProcessId":"Process_0nhopct","processDefinitionKey":2251799813703443,"processInstanceKey":2251799813704157,"version":1,"variables":"gA==","fetchVariables":[],"startInstructions":[],"tenantId":"<default>"}
SIMPLE_EXPORTER {"bpmnElementType":"PROCESS","elementId":"Process_0nhopct","bpmnProcessId":"Process_0nhopct","version":1,"processDefinitionKey":2251799813703443,"processInstanceKey":2251799813704157,"flowScopeKey":-1,"bpmnEventType":"UNSPECIFIED","parentProcessInstanceKey":-1,"parentElementInstanceKey":-1,"tenantId":"<default>"}
SIMPLE_EXPORTER {"bpmnElementType":"PROCESS","elementId":"Process_0nhopct","bpmnProcessId":"Process_0nhopct","version":1,"processDefinitionKey":2251799813703443,"processInstanceKey":2251799813704157,"flowScopeKey":-1,"bpmnEventType":"UNSPECIFIED","parentProcessInstanceKey":-1,"parentElementInstanceKey":-1,"tenantId":"<default>"}
SIMPLE_EXPORTER {"bpmnElementType":"START_EVENT","elementId":"StartEvent_1","bpmnProcessId":"Process_0nhopct","version":1,"processDefinitionKey":2251799813703443,"processInstanceKey":2251799813704157,"flowScopeKey":2251799813704157,"bpmnEventType":"NONE","parentProcessInstanceKey":-1,"parentElementInstanceKey":-1,"tenantId":"<default>"}

Now you can experiment with extracting data from the JSON objects for your own purposes and with sending it to the warehouses of your choice. Enjoy!
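
For instance, here’s a minimal sketch of how you might narrow the exporter down to process instance events and hand the JSON off to a sink of your own. The sendToWarehouse method is just a placeholder for whatever client (HTTP, Kafka, JDBC, and so on) you want to experiment with:

package io.sample.exporter;

import io.camunda.zeebe.exporter.api.Exporter;
import io.camunda.zeebe.exporter.api.context.Controller;
import io.camunda.zeebe.protocol.record.Record;
import io.camunda.zeebe.protocol.record.RecordType;
import io.camunda.zeebe.protocol.record.ValueType;

public class FilteringExporter implements Exporter
{
   Controller controller;

   @Override
   public void open(Controller controller) {
       this.controller = controller;
   }

   @Override
   public void export(Record<?> record) {
       // Only forward process instance events; skip commands and other record types
       if (record.getRecordType() == RecordType.EVENT
               && record.getValueType() == ValueType.PROCESS_INSTANCE) {
           sendToWarehouse(record.toJson()); // toJson() serializes the whole record
       }
       // Always advance the exported position so the broker can truncate its event log
       controller.updateLastExportedRecordPosition(record.getPosition());
   }

   private void sendToWarehouse(String json) {
       // Placeholder: write to stdout; swap in your warehouse client here
       System.out.println("FILTERING_EXPORTER " + json);
   }
}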

Looking for more?

Camunda 8 Run is free for local development, but our complete agentic orchestration platform lets you take full advantage of composable, AI-powered, end-to-end process orchestration. Try it out today.

The post Creating and Testing Custom Exporters Using Camunda 8 Run appeared first on Camunda.

]]>
Introducing Camunda Process Test—The Next Generation Testing Library https://camunda.com/blog/2025/04/camunda-process-test-the-next-generation-testing-library/ Fri, 04 Apr 2025 18:36:08 +0000 https://camunda.com/?p=132737 Transition from Zeebe to Camunda Process Test with Camunda 8.8 for a more robust, flexible testing framework.

The post Introducing Camunda Process Test—The Next Generation Testing Library appeared first on Camunda.

]]>
At Camunda, we’re committed to continuously improving the developer experience and ensuring our customers have robust tools to build, test, and deploy processes with confidence. This year, we’re streamlining our architecture, APIs, and testing libraries to help developers build process applications more efficiently.

As part of this commitment, we are excited to announce a significant evolution in our testing libraries: Camunda Process Test, designed specifically for Camunda 8.

Why the change?

Until now, Camunda 8 users have relied on the Zeebe Process Test (ZPT) library to unit test BPMN processes. ZPT served us well, leveraging an in-memory Zeebe engine with gRPC to run tests and verify process behavior.

However, as our platform evolved, ZPT could no longer fully support the latest Camunda 8 features, including our new REST API and user task functionalities. Additionally, as part of our API streamlining strategy, most of our gRPC endpoints will be phased out by version 8.10, making ZPT incompatible moving forward.

To address these challenges and provide our customers with enhanced testing capabilities, we’ve developed a completely new testing library: Camunda Process Test (CPT).

Introducing Camunda Process Test

The Camunda Process Test library is our next-generation testing framework, designed and built for our customers’ evolving needs. CPT offers powerful testing capabilities and fully aligns with the new Camunda 8 REST API, enabling comprehensive testing of BPMN processes, connectors, user tasks, and more.

Here are some highlights of what CPT offers:

  • Improved developer experience: By leveraging technologies like TestContainers, CPT ensures faster test execution, simpler environment setup, and smoother integration into modern CI/CD workflows. Also, by using the in-memory H2 database as a secondary storage, the testing library keeps a small memory footprint.
  • REST API integration: CPT fully integrates with the Camunda 8 REST API, providing extensive test coverage for the latest features, including Camunda user tasks, connectors, and advanced client commands.
  • Enhanced assertions and test coverage: CPT provides a rich set of assertions and generates detailed test coverage reports after each test run. These enable developers to quickly pinpoint testing gaps and verify process behavior more accurately.
  • Automatic wait handling: CPT automatically manages process wait states, eliminating the need for manual waitForIdleState() or waitForBusyState() calls, significantly simplifying your test code.

Deprecation timeline for Zeebe Process Test (ZPT)

With the introduction of CPT in Camunda 8.8, we’re officially deprecating the Zeebe Process Test library. Here’s a clear timeline to help you plan your migration:

Camunda 8.8 (October 2025)

  • Introduce CPT as the recommended testing library.
  • Mark ZPT as deprecated (available but no longer actively enhanced).
  • Provide a comprehensive migration guide and assertion mapping documentation.

Camunda 8.9 (April 2026)

  • Transition: Both CPT and ZPT remain available and fully supported, allowing ample time for migration and testing.

Camunda 8.10 (October 2026)

  • Fully remove Zeebe Process Test library from repositories and documentation.
  • Customers must complete the migration to CPT before upgrading to 8.10.

Migration made simple

We understand that migrating to a new testing framework involves effort. To ensure a smooth transition, we’ll develop detailed resources, including:

  • A comprehensive step-by-step migration guide from ZPT to CPT. It will include:
    • Clear mapping of existing ZPT assertions and utilities to their CPT counterparts
    • Practical example code snippets covering common migration scenarios
  • Documentation featuring CPT’s new capabilities and best practices

These resources will be available together with the 8.8 release.

The migration involves the following steps:

  1. Review existing test cases: Identify ZPT usage and custom assertions within your test suite.
  2. Replace ZPT assertions with CPT equivalents: Use our assertion mapping guide for straightforward replacements.
  3. Adapt to structural changes: Remove manual wait states and leverage CPT’s built-in automatic handling.
  4. Migrate from Zeebe client to Camunda client: The Zeebe client is deprecated in favor of the Camunda client. CPT supports both clients until version 8.10.
  5. Transition to TestContainers: Update local development environments and CI pipelines to use TestContainers, enabling consistent and fast test environments.
  6. Utilize CPT’s enhanced capabilities: Leverage new test coverage reports, granular task lifecycle assertions, and improved connector testing.

Looking ahead

By transitioning to the Camunda Process Test, you’ll gain a more robust, flexible, and powerful testing framework aligned with the latest Camunda 8 features. While migration requires initial effort, the long-term benefits of improved test coverage, clearer assertions, and enhanced developer productivity are substantial.

We strongly encourage all customers to begin the migration process with the 8.8 release. Our documentation team and support resources will be ready to assist you in making this transition smoothly. Camunda documentation will provide detailed migration instructions and more information.

As always, we welcome your feedback and questions.

Happy testing!

The post Introducing Camunda Process Test—The Next Generation Testing Library appeared first on Camunda.

]]>
Continuous Integration and Continuous Deployment with Git Sync from Camunda https://camunda.com/blog/2025/02/continuous-integration-and-continuous-deployment-with-git-sync/ Thu, 27 Feb 2025 21:48:45 +0000 https://camunda.com/?p=130006 Reduce errors and foster collaboration with Camunda's Git integration and CI/CD pipeline blueprint.

The post Continuous Integration and Continuous Deployment with Git Sync from Camunda appeared first on Camunda.

]]>
Every project, including process orchestration initiatives, requires time and multiple iterations to achieve the desired outcome. Adopting continuous integration and continuous deployment (or continuous delivery), known as CI/CD, is the most effective way to automate code integration, testing, and application deployment. CI/CD enhances development efficiency, minimizes errors, and speeds up software delivery while ensuring high quality.

Camunda now enables organization owners and administrators to link their Web Modeler process applications to GitHub and GitLab. This ensures seamless synchronization between Web Modeler, Desktop Modeler, and official version control projects.

Why is this important?

CI/CD plays a crucial role in automating and optimizing the software development lifecycle, allowing teams to release updates more quickly, minimize errors, and uphold code quality. Continuous integration (CI) ensures frequent merging and testing of code changes to detect issues early, while continuous deployment/delivery (CD) streamlines the release process, reducing manual work and potential deployment risks. By adopting CI/CD, organizations can drive faster innovation, enhance collaboration, and achieve more reliable software delivery.

An analysis of over 12,000 open source repositories revealed that implementing CI/CD practices resulted in a 141.19% boost in commit velocity, highlighting significantly faster development cycles.

Many organizations have adopted GitLab and GitHub to manage the software development lifecycle; however, it can be challenging to integrate them into your development cycles and deployment processes. With Camunda’s integrated solution, developers can move files to and from the Git repository all the way through to production deployment.

This integration links a process application to a Git repository branch, making it easy for both nontechnical users and developers to access the source of truth and collaborate seamlessly across desktop and Web Modeler.

Git Sync with Camunda

In order to take advantage of this integration, a bit of setup is required. Once configured, however, syncing your Camunda process application with your Git repository is just a button click away. As mentioned, this integration works with GitLab and GitHub. The configuration is quite similar for both, and you can find detailed instructions in our documentation.

In the screenshots below, you can see the option to configure the Git integration by clicking the upper right button. After installing the Camunda Git Sync application in GitHub for your repository, enter the fields for your GitHub configuration (used for this example).

Space Mutiny
Configure repository connection

In this example case, a GitHub repository has files ready to be pulled to your Camunda process application.

GitHub repo with files to pull to application

You can now synchronize your process application and, in this case, pull down the contents of your GitHub repository (reflected below).

Sync with GitHub

This action will execute a pull from GitHub of all the latest commits to your Camunda process application as shown in Modeler here.

Modeler display of GitHub commits

Version control synchronization

As with any project, you are going to make changes, and you’ll want to make sure that these changes are captured in your Git repository with proper version information.

Proper version info in your repo

Camunda’s Git Sync allows you to synchronize any changes made to your project files to your repository with the proper associated version control information. For this version commit, the name of the main BPMN process was updated to be spelled correctly, as is shown in the Git repository.

Updating the name of the BPMN process

As expected, GitHub reflects that the original misspelled BPMN file was deleted from the Git repository and replaced with the file with the properly spelled name.

GitHub reflects changes

Let’s now look at some modifications to the elements of the process model itself and use Camunda’s tools to show how the versions can be compared.

By modifying the main model (Eligibility Check) and then synchronizing those changes with GitHub using a minor version change, you can see that GitHub shows your modifications to the committed files.

GitHub showing modification to committed files

Although you can see the changes between versions in GitHub, this might not be as easy to interpret as reviewing the changes in a more visual way by diffing the versions with Camunda Web Modeler.

Visual review of changes in Modeler

In Web Modeler, you can see the differences between versions with explanations in a graphical UI, which makes it easier to understand what changes were made.

Graphical UI makes change easier to see

Parallel feature development

Camunda’s Git sync also enables parallel feature development by allowing multiple process applications to connect to separate feature branches. This ensures teams can work on different features simultaneously without overlapping or disrupting each other’s progress.

Git Sync blueprint

In addition to Git sync functionality, Camunda also offers a CI/CD pipeline blueprint to help get you started. This blueprint showcases a flexible CI/CD pipeline for deploying Web Modeler folder content across various environments using GitLab and Camunda.

CI/CD pipeline blueprint

With this custom integration, you can fully orchestrate your release process and modify it to fit your specific requirements. While Web Modeler provides native Git Sync functionality, this blueprint allows you to connect your process application to a remote repository and, with a single click, sync your application files, create a new commit, and initiate your CI/CD pipeline.

This blueprint provides:

  • Version control. It enables the synchronization of a Web Modeler process application with a target GitLab (by default) repository by creating a merge request. Once merged, this request initiates the deployment pipeline.
  • Fully customizable. Although the blueprint is designed to be used with GitLab, you can adapt it to work with other CI/CD tools.
  • Multistate deployments. It simulates a deployment pipeline with three stages, incorporating a manual review and additional testing. A milestone is created after each successful deployment to track the deployment status.

With multistate deployments, you can use different projects within the same Web Modeler instance to represent different stages. You can allow developers access to the development project, which may be synced to a feature branch, while giving only a few users access to the production project, which is synced to the production branch.
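
To make the shape of such a pipeline concrete, here is a minimal, illustrative .gitlab-ci.yml sketch. This is not the blueprint itself; the stage names and the deploy.sh script are placeholders you would replace with the blueprint’s jobs or your own deployment commands:

stages:
  - dev
  - review
  - prod

deploy-dev:
  stage: dev
  script:
    # Placeholder: replace with your real deployment step,
    # e.g. the blueprint's job or a call to your Camunda deployment tooling
    - ./deploy.sh --environment dev

deploy-review:
  stage: review
  when: manual   # manual gate, mirroring the blueprint's review stage
  script:
    - ./deploy.sh --environment review

deploy-prod:
  stage: prod
  when: manual
  script:
    - ./deploy.sh --environment prod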

CI/CD with GitHub and GitLab

This offering from Camunda allows organizations to adhere to CI/CD procedures and guidelines, supporting the full pipeline from development to deployment. Developers can push process application changes to a Git repository and trigger the deployment process using the CI/CD pipeline blueprint.

By using CI/CD with Git and Camunda, teams can efficiently automate workflows, reduce manual intervention, and ensure reliable, continuous software delivery.

CI/CD is essential for automating and optimizing the software development lifecycle, enabling faster, more reliable, and high-quality software delivery for several reasons. For example:

  • CI ensures early issue detection by frequently merging and testing code changes, reducing integration challenges.
  • CD automates releases, minimizing manual effort, deployment risks, and time-to-market.

With Git integration from Camunda and our CI/CD pipeline blueprint, organizations can reduce errors and foster collaboration, empowering teams to innovate quickly while maintaining stability and consistency across development, testing, and production environments.

Try it yourself

If you want to dive in and try this out yourself, learn how to set up the integration.

You can also follow along with this step-by-step video tutorial that will walk you through how to set this up and take advantage of the Git sync feature.

The post Continuous Integration and Continuous Deployment with Git Sync from Camunda appeared first on Camunda.

]]>
One Exporter to Rule Them All: Exploring Camunda Exporter https://camunda.com/blog/2025/02/one-exporter-to-rule-them-all-exploring-camunda-exporter/ Fri, 14 Feb 2025 18:37:30 +0000 https://camunda.com/?p=128815 Achieve a more streamlined architecture and better performance and stability with the new Camunda Exporter.

The post One Exporter to Rule Them All: Exploring Camunda Exporter appeared first on Camunda.

]]>
When using Camunda 8, you might encounter the concept of an exporter. An exporter is used to push out historic data, generated by our processing engine, to a secondary storage.

Our web applications have historically used importers and archivers to consume, aggregate, and archive historical data provided by our Elasticsearch (ES) or OpenSearch (OS) Exporter.

In the past year, we’ve engineered a new Camunda Exporter, which brings the importer and archiving logic of web components (Tasklist and Operate) closer to our distributed platform (Zeebe). This allows us to simplify our installation, enable scalability for web apps, reduce the latency to show runtime and historical data, and reduce data duplication (resource consumption).

In this blog post, we want to share more details about this project and the related architecture changes.

Challenges with the current architecture

Before we introduce the Camunda Exporter, we want to go into more detail about the challenges with the current Camunda 8 architecture.

A diagram of the current state simplified
A simplified view of the architecture of Camunda 8.7, highlighting 6 process steps

When a user sends a command to the Zeebe cluster (1), it is acknowledged (2) and processed by the Zeebe engine. The engine will confirm the processing with an event.

The engine has its own primary data store for runtime data. The primary data store is optimized for low-latency local access. It contains the execution state that allows the engine to execute process instances and move its data along in their corresponding processes.

Our users need a way to search and visualize process data (running and historical data), so Camunda 8 makes use of Elasticsearch or OpenSearch (RDBMS in the future) as secondary storage. It allows the separation of concerns between runtime data for process execution and history data for querying.

Camunda’s exporters are a bridge between primary and secondary data stores. The exporters allow the Zeebe system to stream out data (events) (3). Within the Camunda 8 architecture, we support both the ES and OS exporters. For more information about this concept and supported exporters, please visit our documentation.

The exported data is stored in what we unofficially refer to as Zeebe indices inside ES or OS. Web applications like Tasklist and Operate make use of importers and archivers to read data from Zeebe indices (4), aggregate it, and write it back into their own indices (5). Based on these indices, users can query and search process data (6).

Performance

Customers have reported performance issues, which are inherited with this architecture. For example, the delay of data shown in Operate can range from around five seconds to, in the worst scenarios, minutes or hours.

The time is spent in processing, exporting, flushing, importing, and flushing again, before the users see any data change. For more detailed information, you can also take a look at this blog post.

This means the user is never able to follow a process instance in real time. But there is a general expectation that it is at least close to real time, meaning that it should take at most 1-2 seconds to show updates.

Reducing such latency and improving the general throughput needs a general architecture change.

Scalability

What we can see in the architecture above is that when we scale Zeebe clusters and partitions or set them up for large workloads, the web applications do not scale automatically with them, as they are not directly coupled.

This means additional effort is required to make sure the web applications can handle certain loads. The current architecture limits the general scalability of the web applications, due to the decoupled exporter-importer setup and the lack of real data partitioning in the secondary storage.

We want to make the web application more scalable to handle changing processing workloads.

Installation complexity

You can run the different components of the Camunda platform separately, e.g. separate deployments for Zeebe, Tasklist, Operate, etc. This gives you a lot of flexibility and allows for massive scale. But at the same time, this makes the installation harder to do—even with the help of Helm charts.

We want to support a simpler installation as an alternative. That wasn’t possible in this architecture because there was no single application and separate components were always required.

Data duplication and resource consumption

Web applications like Operate and Tasklist have historically been grown and developed separately. As we have seen, they could have been deployed separately as well.

This was also why they had separate schemas. Tasklist used a subset of the Operate schema but added the additional indices necessary to store information about user tasks, etc. When deploying both applications, this caused an unnecessary duplication of data in ES or OS.

As a consequence, we consume more disk space than necessary. Furthermore, ES/OS carries a higher indexing load than it should.

We want to reduce this to minimize the memory and disk footprint needed to run Camunda.

One exporter to rule them all

Understanding those challenges, we rearchitected our platform to address them. In the new architecture, we have built a Camunda Exporter to replace the exporter/importer from the old architecture.

Simplified diagram of 8.7 architecture
A simplified view of the new streamlined architecture

The Camunda Exporter brings the importer and archiving logic of web components (Tasklist and Operate) closer to the distributed platform (Zeebe).

The exporter consumes Zeebe records, aggregates data, and stores the related data into shared and harmonized indices that are used by both web applications. Archiving of data is done in the background, coupled with the exporter but not blocking the exporter’s progress.

Introducing this Camunda Exporter allows it to scale with Zeebe partitions and simplifies the installation, as importer and archiver deployments will be removed in the future.

The architecture diagram above is a simplified version of the actual work we have done. It shows an installation for a greenfield and a new cluster (no previous data).

More complex is a brownfield installation as shown in the diagram below, where data already exists.

Image1

We were able to harmonize the existing index schema used by Tasklist and Operate, reducing data duplication and resource consumption. Several indices can now be used by both applications without a need to duplicate the data.

With this new index structure, there is no need for additional Zeebe indices anymore.

Note: With 8.8, we likely will still have the importer/exporter (including Zeebe indices) to make use of Optimize (if enabled), but we aim to change that in the future as well.

Migration (brownfield installation)

Brownfield scenarios, where the data already exists and processes are running in an old architecture, are much more complex than greenfield installations. We have covered this in our solution design and want to briefly talk about it in this blog post. A more detailed update guide will follow with the Camunda 8.8 release.

When you update to the new Camunda version, there will be no additional effort for the user regarding data migration. We are providing an additional migration application that takes care of enhancing process data (in Operate indices) so it can be used by Tasklist. Other than that, all existing Operate indices can be used by Tasklist.

A simplified view of the brownfield (migration) scenario to the new streamlined architecture

Reducing the installation complexity is a slower process for brownfield installations. Importers still need to be executed to drain the preexisting data in indices created by ES or OS exporters.

After all older data (produced before the update) is consumed and aggregated, the importers and old exporters can be turned off, but they can also be kept for simplicity. The importers will signal that they are done via metrics and by writing to a special ES/OS index. More details will be provided in the upcoming update guide.

Conclusion

The new Camunda Exporter helps us achieve a more streamlined architecture, better performance, and stability (especially concerning ES/OS). The target release for the Camunda Exporter project is the 8.8 release.

To recap the highlights of the new Camunda Exporter, we can:

  1. Scale with Zeebe partitions, as exporters are part of partitions. The data injection and the data archiving scales inherently.
  2. Reduce resource consumption with harmonized schema. Data is not unnecessarily duplicated between web applications. ES and OS are not unnecessarily overloaded with index requests for duplicated data.
  3. Improve performance by removing an additional hop. As we do not need to wait for ES/OS to flush twice and make data available, we can remove one flush interval from our equation. We don’t need to import the data and store it in Zeebe indices, so we shorten the data pipeline. This was shown in one of our recent chaos days but needs to be further investigated and benchmarked, especially with higher load scenarios.
  4. Simplify installation by bringing business logic closer to the Zeebe system. We no longer need separate applications or components for importing and archiving data. It can be easily enabled within the Zeebe brokers. The Camunda Exporter has everything built in.

I hope this was insightful and helpful to understand what we are working on and what we want to achieve with the newest Camunda Exporter. Stay tuned for more information about benchmarks and other updates.

Join us at CamundaCon to learn more

Looking to learn more about this new architecture and how the Camunda Exporter will help you? I’ll be giving a talk on the new Camunda Exporter at CamundaCon Amsterdam in May. Join us there in person to catch the session and so much more.

The post One Exporter to Rule Them All: Exploring Camunda Exporter appeared first on Camunda.

]]>
Why AI Agents Need Orchestration https://camunda.com/blog/2025/02/why-ai-agents-needs-orchestration/ Mon, 10 Feb 2025 23:54:46 +0000 https://camunda.com/?p=128284 Help your AI make better choices and complete more complex tasks with agentic orchestration.

The post Why AI Agents Need Orchestration appeared first on Camunda.

]]>
Currently, most organizations are asking themselves how they can effectively integrate artificial intelligence (AI) agents. These are bots that can take in some natural language query and perform an action.

I’m sure you’ve already come across various experiments that aim to crowbar these little chaps into a product. Results can be mixed. They can range from baffling additions that hinder more than help to ingenious, often subtle enhancements that you can’t believe you ever lived without.

It’s exciting to see the innovation that’s going on, and, being the kind of chap I am, I’ve been wondering how we can build our way towards improving actual end-to-end business processes with AI agents. Naturally, this requires us to get to a point where we trust agents to make consequential decisions for us, and even trust them to action those decisions.

So, how do you build an infrastructure that uses what we’ve learned about the capabilities of AI agents without giving them too much or too little responsibility? And would end users ever be able to trust an AI to make consequential decisions?

How AI agents will evolve

Most people I know have already integrated some AI into their work in some great ways. I build proof of concepts with Camunda reasonably often and use Gemini or ChatGPT to generate test data or JSON objects—it’s very handy. This could be expanded into an AI agent by suggesting that it generate the data and also start an instance of the process with the given data. 

This also tends to be the way organizations are using AI agents—a black box that takes in user input and responds back with some (hopefully) useful response after taking a minor action.

AI agent responses are usually opaque and offer little reasoning

Those actions are always minor of course and it’s for good reason—it’s easy to deploy an AI agent if the worst it’s going to do is feed junk data into a PoC. The AI itself isn’t required to take any action or make any decisions that might have real consequences; if a human makes the decision to use a court filing generated by ChatGPT… Well, that’s just user error. 

For now, it’s safer to keep a distance from consequential decision-making and the erratic and sometimes flawed output of AI agents. This more or less rules out utilizing the full potential of AI agents—because at best you want them in production systems making decisions and taking consequential actions that a human might do.

It’s unrealistic, however, to assume that this will last for long. The logical conclusion of what we’ve seen so far is that AI agents will be given more responsibility regarding the actions they can take. What’s holding that step back is that no one trusts them because they simply don’t produce predictable, repeatable results. In most cases, you’d need them to be able to do that in order to make impactful decisions.

So what do we need to do to make this next step? Three things:

  • Decentralize
  • Orchestrate
  • Control

Agentic AI orchestration

As I mentioned, I use several AI tools daily. Not because I want to, but because no single AI tool can accurately answer the diversity of my queries. For example, I mentioned how I use Gemini to create JSON objects. I was building a small coffee order process and needed an object containing many orders.

{"orders" : [
  {
        "order_id": "20240726-001",
        "customer_name": "Alice Johnson",
        "order_date": "2024-07-26",
        "items": [
          {
            "name": "Latte",
            "size": "Grande",
            "quantity": 1,
            "price": 4.50
          },
          {
            "name": "Croissant",
            "quantity": 2,
            "price": 3.00
          }
        ],
        "payment_method": "Card"
  },
  {
        "order_id": "20240726-002",
        "customer_name": "Bob Williams",
        "order_date": "2024-07-26",
        "items": [
          {
            "name": "Espresso",
            "quantity": 1,
            "price": 3.00
          },
          {
            "name": "Muffin",
            "quantity": 1,
            "price": 2.50
          },
                {
            "name": "Iced Tea",
            "size": "Medium",
            "quantity": 1,
            "price": 3.50
          }
        ],
        "payment_method": "Cash"
  }
]}

I then needed to use Friendly Enough Expression Language (FEEL) to parse this object to get some kind of specific information.

I didn’t use Gemini for this because it reliably gives me bad information when I need a FEEL expression. This is for a few reasons. FEEL is a new and relatively niche expression language, so there’s less data for it to be trained on. Also, I’m specifically using Camunda’s FEEL implementation, which contains some additional functions and little quirks that need to be considered. If I ask Gemini to both create the data object and then use FEEL to get the first order in the array, I get this:

Incorrect FEEL response from Gemini

This response is a pack of lies. So instead I ask an AI agent which I know has been trained specifically and exclusively on Camunda’s technical documentation. The response is quite different and also quite correct.

Answer from properly trained AI

I’m usually confident that Camunda’s own AI copilot and assistants will give me the correct information. It not only generates the expression, it also runs it against the given data to make sure it works. The consequences here aren’t so drastic anyway: I know FEEL pretty well, so I’ll be able to spot any likely issues before putting an expression into production.
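
For reference, the expression in question is a one-liner. FEEL lists are 1-indexed, so the first order in the array above is simply:

orders[1]

Evaluated against the data object above, orders[1].customer_name returns “Alice Johnson”.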

In this scenario, I’m essentially working as an orchestrator of AI agents. I’m making decisions to use a specific agent based on two main factors. 

  1. Trust: Which Agent do I trust to give me the correct answer?
  2. Consequences: How impactful are the consequences of trusting the result?

This is what’s blocking the effectiveness of true end-to-end agentic processes. I don’t know if I can trust a given agent enough to decide something and then take action that might have real consequences. This is why people are okay with asking AI to summarize a text but not to purchase flowers for a wedding.

Truth and consequences

So enough theory, let’s talk about the practical steps to increase trust and control consequences in order to utilize AI Agents fully. As I like doing things sequentially, let’s take them one at a time.

Trust

We’ve all looked at a result from an AI model and asked ourselves, “Why?” The biggest reason to distrust AI agents is that, in most cases, you’ll never be able to get a good answer to why a result was given. In situations where you require some kind of audit of decision-making or strict guardrails, you really can’t rely on a black box like an AI agent.

There is a nice solution to this though—chain of thought. This is where the AI is clear about how the problem was broken down and subsequently lays out its thought process step by step. The clear hole in this solution is that someone is needed to look over the chain of thought, and here is where we can start seeing how orchestration can lend a hand.

Orchestration can link services together in a way that sends a query to multiple agents. When they have returned with their answers and chains of thought, a third agent can act as a judge to determine how accurate each result is.

Continuing with my example, it would be easier to tell a generic endpoint, “I’m using Camunda and need a FEEL expression for finding the first element in an array,” and have faith that this question will be routed to the agent best suited to answer it. In this case, that would be Camunda’s kapa.ai instance.

Building this with an orchestrator like Camunda that uses BPMN would be pretty easy.

In this process the query is sent into a process instance. Two different AI agents are triggered in parallel and asked who is best at handling this kind of request. The result is passed to a third agent that can review the chain of thought and make a determination from the results of both. In this case it’s probably clear that FEEL is something a Camunda AI would do a pretty good job of answering and you’d expect the process to be sent off in that direction.

In this case we’ve created a maintainable system where more trustworthy responses are passed back to the user along with a good indication of why a certain agent was chosen to be involved and why a certain response was given.

Consequences

Once trust is established, it’s not hard to imagine that you’d start to consider actions that should be taken. Let’s imagine that a Camunda customer has created a support ticket because they’re also having trouble getting the first element in an array. A Camunda support person could see that and think, That’s something I’m confident kapa.ai could answer—and in fact, I should just let the AI agent respond to this one.

In that case, we just need to make some adjustments to the model. 

In this model, we’ve introduced the action of accessing the ticketing system to find the relevant ticket and then updating the ticket with a trustworthy answer. Because of how we’ve designed the process, we would only do this in cases where we have a very high degree of trust. If we don’t, the information will be sent back to the support person, who can decide what to do next.

The future of AI orchestration

Providing independent, narrowly trained agents and then adding robust, auditable decision-making and orchestration around how and why they’re called upon will initially help users trust results and suggestions more. Beyond that, it will give architects and software designers the confidence to build in situations where direct action can be taken based on these trustworthy agents.

An orchestrator like Camunda is essential for achieving that step because it already specializes in integrating systems and lets designers tightly control how and why those systems are accessed. Another great advantage is far better auditability. Combining the data generated from stepping through the various paths of the process with the chain of thought output from each agent gives a complete picture of how and why certain actions were taken.

With these principles, it would be much easier to convince users that actions performed by AI without user supervision would be trustworthy and save a huge amount of time and money for people by removing remedial work like checking and verifying before taking additional steps.

Of course, it’s not true for everything, and I’m happy to say that I feel we should still leave court filings to humans. Eventually, though, I would expect we could offer AI agents not just the ability to action their suggestions, but also to choose the specific action.

BPMN has a construct called an ad-hoc subprocess in which a small part of the process decision-making can be handed over to a human or agent. This could be used to give an AI a limited amount of freedom about what action is best.

In the case above, I’ve added a way for an AI agent to choose to ask for more information about the request if it needs to. It might do this multiple times before eventually deciding to post a response to the ticket, but the key thing is that if the agent knows it would benefit from more information, it can perform an action that will help it make the final decision.

The future is trusting agents with what we believe they can achieve. If we give them access to actions that can help them make better choices and complete tasks, they can be fully integrated into end-to-end business processes.

Learn more about AI and Camunda

We’re excited to debut AI-enabled ad-hoc subprocesses and much more in our coming releases, so stay tuned. You can learn more about how you can already take advantage of AI-enabled process orchestration with Camunda here.

The post Why AI Agents Need Orchestration appeared first on Camunda.

]]>