Nathan Loding, Author at Camunda https://camunda.com Workflow and Decision Automation Platform Tue, 24 Jun 2025 19:51:21 +0000 en-US hourly 1 https://camunda.com/wp-content/uploads/2022/02/Secondary-Logo_Rounded-Black-150x150.png Nathan Loding, Author at Camunda https://camunda.com 32 32 Ensuring Responsible AI at Scale: Camunda’s Role in Governance and Control https://camunda.com/blog/2025/06/responsible-ai-at-scale-camunda-governance-and-control/ Tue, 24 Jun 2025 19:51:13 +0000 https://camunda.com/?p=142479 Camunda enables effective AI governance by acting as an operational backbone, so you can integrate, orchestrate and monitor AI usage in your processes.

The post Ensuring Responsible AI at Scale: Camunda’s Role in Governance and Control appeared first on Camunda.

]]>
If your organization is adding generative AI into its processes, you’ve probably hit the same wall as everyone else: “How do we govern this responsibly?”

It’s one thing to get a large language model (LLM) to generate a summary, write an email, or classify a support ticket. It’s another entirely to make sure that use of AI fits your company’s legal, ethical, operational, and technical policies. That’s where governance comes in—and frankly, it’s where most organizations are struggling to find their footing.

The challenge isn’t just technical. Sure, you need to worry about prompt injection attacks, hallucinations, and model drift. But you also need to think about compliance audits, cost control, human oversight, and the dreaded question from your CEO: “Can you explain why the AI made that decision?” These aren’t abstract concerns anymore—they’re real business risks that can derail AI initiatives faster than you can say “responsible deployment.”

That’s where Camunda comes into the picture. We’re not an AI governance platform in the abstract sense. We don’t decide your policies for you, and we’re not going to tell you whether your use case is ethical or compliant. But what we do provide is something absolutely essential: a controlled environment to integrate, orchestrate, and monitor AI usage inside your processes, complete with the guardrails and visibility that support enterprise-grade governance.

Think of it this way: if AI governance is about making sure your organization uses AI responsibly, then Camunda is the operational backbone that makes those policies actually enforceable in production systems. We’re the difference between having a beautiful AI ethics document sitting in a SharePoint folder somewhere and actually implementing those principles in your day-to-day business operations.

This post will explore how Camunda fits into the broader picture of AI governance, diving into specific features—from agent orchestration to prompt tracking—that help you operationalize your policies and build trustworthy, compliant automations.

What is AI governance, and where does Camunda fit?

Before we dive into the technical details, it’s worth stepping back and talking about what AI governance actually means. The term gets thrown around a lot, but in practice, it covers everything from high-level ethical principles to nitty-gritty technical controls.

We’re framing this discussion around the “AI Governance Framework” provided by ai-governance.eu, which defines a comprehensive model for responsible AI oversight in enterprise and public-sector settings. The framework covers organizational structures, procedural requirements, legal compliance, and technical implementations.

Ai-governance-eu

Camunda plays a vital role in many areas of governance, but none more so than the “Technical Controls (TeC)” category.. This is where the rubber meets the road—where your governance policies get translated into actual system behaviors. Technical controls include enforcing process-level constraints on AI use, ensuring explainability and traceability of AI decisions, supporting human oversight and fallback mechanisms, and monitoring inputs, outputs, and usage metrics across your entire AI ecosystem.

Here’s the crucial point: these technical controls don’t replace governance policies—they ensure that those policies are actually followed in production systems, rather than just existing as aspirational documents that nobody reads.

1. Fine-grained control over how AI is used

The first step to responsible AI isn’t choosing the right model or writing the perfect prompt—it’s being deliberate about when, where, and how AI is used in the first place. This sounds obvious, but many organizations end up with AI sprawl, where different teams spin up AI integrations without any coordinated approach to governance.

With Camunda, AI usage is modeled explicitly in BPMN (Business Process Model and Notation), which means every AI interaction is part of a documented, versioned, and auditable process flow.

Agentic-ai-camunda

You can design processes that use Service Tasks to call out to LLMs or other AI services, but only under specific conditions and with explicit input validation. User Tasks can involve human reviewers before or after an AI step, ensuring critical decisions always have human oversight. Decision Tables (DMN) can evaluate whether AI is actually needed based on specific inputs or context. Error events and boundary events capture and handle failed or ambiguous AI responses, building governance directly into your process logic.

Because the tasks executed by Camunda’s AI agents are defined with BPMN, those tasks can be deterministic workflows themselves, ensuring that, on a granular level, execution is still predictable.

This level of orchestration lets you inject AI into your business processes on your own terms, rather than letting the AI system dictate behavior. You’re not just calling an API and hoping for the best—you’re designing a controlled environment where AI operates within explicit boundaries.

Here’s a concrete example: if you’re processing insurance claims and want to use AI to classify them as high, medium, or low priority, you can insert a user task to verify all “high priority” classifications before they get routed to your fraud investigation team. You can also add decision logic that automatically escalates claims above a certain dollar amount, regardless of what the AI thinks. This way, you keep humans in the loop for critical decisions without slowing down routine processing.

2. Your models, your infrastructure, your rules

One of the most frequent concerns about enterprise AI adoption centers on data privacy and vendor risk. Many organizations have strict requirements that no customer data, internal business logic, or proprietary context can be sent to third-party APIs or cloud-hosted LLMs.

Camunda’s approach to agentic orchestration supports complete model flexibility without sacrificing governance capabilities. You can use OpenAI, Anthropic, Mistral, Hugging Face, or any provider you choose, and, starting with Camunda 8.8 (coming in October 2025), you can also route calls to self-hosted LLMs running on your own infrastructure. Whether you’re running LLaMA 3 on-premises, using Ollama for local development, or connecting to a private cloud deployment, Camunda treats all of these as different endpoints in your process orchestration.

There’s no “magic” behind our AI integration—we provide open, composable connectors and SDKs that integrate with standard AI frameworks like LangChain. You control the routing logic, prompt templates, authentication mechanisms, and access credentials. Most importantly, your data stays exactly where you want it.

For example, a financial services provider might route customer account inquiries to a cloud-hosted model, but keep transaction details and personal financial information on-premises. With Camunda, you can model this routing logic explicitly using decision tables to determine which endpoint to use based on content and context.

3. Design AI tasks with guardrails: Preventing prompt injection and hallucinations

Prompt injection isn’t just a theoretical attack—it’s a real risk that can have serious business consequences. Any time an AI model processes user-generated input, there’s potential for malicious content to manipulate the model’s behavior in unintended ways.

Camunda helps mitigate these risks by providing structured approaches to AI integration. All data can be validated and sanitized before it is used in a prompt, preventing raw input from reaching the models. Prompts are designed using FEEL (Friendly Enough Expression Language), allowing prompts to be flexible and dynamic. This centralized prompt design means prompts become part of your process documentation rather than buried in application code. Camunda’s powerful execution listeners can be utilized to analyze and sanitize the prompt before it is sent to the agent.

Ai-prompt-guardrails-camunda

Decision tables provide another layer of protection by filtering or flagging suspicious content before it reaches the model. You can build rules that automatically escalate requests containing certain keywords or patterns to human review.

When you build AI tasks with Camunda’s orchestration engine, you create a clear separation between the “business logic” of your process and the “creative output” of the model. This separation makes it much easier to test different scenarios, trace unexpected behaviors, and implement corrective measures. Camunda’s AI Task Agent supports guardrails, such as limiting the number of iterations it can perform, or the maximum number of tokens per request to help control costs.

4. Monitoring and auditing AI activity

You can’t govern what you can’t see. This might sound obvious, but many organizations deploy AI systems with minimal visibility into how they’re actually being used in production.

Optimize gives you comprehensive visibility into AI usage across all your processes. You can track the number of AI calls made per process or task, token usage (and therefore associated costs), response times and failure rates, and confidence scores or output quality metrics when available from your models.

This monitoring data supports multiple governance objectives. For cost control, you can spot overuse patterns and identify inefficient prompt chains. For policy compliance, you can prove that AI steps were reviewed when required. For performance tuning, you can compare model outputs over time or across different vendors to optimize both cost and quality.

You can build custom dashboards that break down AI usage by business unit, region, or product line, making AI usage measurable, accountable, and auditable. When auditors ask about your AI governance, you can show them actual data rather than just policy documents.

5. Multi-agent systems, modeled with guardrails

The future of enterprise AI isn’t just about better individual models—it’s about creating systems where multiple AI agents work together to achieve complex business goals.

Camunda’s agentic orchestration lets you design and govern these complex systems with the same rigor you’d apply to any other business process. Each agent—whether AI, human expert, or traditional software—gets modeled as a task within a larger orchestration flow. The platform defines how agents collaborate, hand off work, escalate problems, and recover from failures.

Ai-multiagent-guardrails-camunda

You can design parallel agent workflows with explicit coordination logic, conditional execution paths based on agent outputs, and human involvement at any point where governance requires it. Composable confidence checks ensure work only proceeds when all agents meet minimum quality thresholds.

Here’s a concrete example: in a legal document review process, one AI agent extracts key clauses, another summarizes the document, and a human attorney provides final review. Camunda coordinates these steps, tracks outcomes, and escalates if confidence scores are low or agents disagree on their assessments.

6. Enabling explainability and traceability

One of the most challenging aspects of AI governance is explainability. When an AI system makes a decision that affects your business or customers, stakeholders want to understand how and why that decision was made—and this is often a legal requirement in regulated industries.

Modern AI models are probabilistic systems that don’t provide neat explanations for their outputs. But Camunda addresses this by creating comprehensive audit trails that capture the context and process around every AI interaction.

For every AI step, Camunda persists the inputs provided to the model, outputs generated, and all prompt metadata. Each interaction gets correlated with the exact process instance that triggered it, creating a clear chain of causation. Version control for models, prompts, and orchestration logic means you can trace any historical decision back to the exact system configuration that was in place when it was made.

Through REST APIs, event streams, and Optimize reports, you can answer complex questions about AI usage patterns and decision outcomes. When regulators ask about specific decisions, you can provide comprehensive answers about what data was used, what models were involved, what confidence levels were reported, and whether human review occurred.

Camunda as a cornerstone of process-level AI governance

AI governance is a team sport that requires coordination across multiple organizational functions. You need clear policies, compliance frameworks, technical implementation, and ongoing oversight. No single platform can address all requirements, nor should it try to.

What Camunda brings to this collaborative effort is operational enforcement of governance policies at the process level. We’re not here to define your ethics policies—we provide the technical infrastructure to ensure that whatever policies you establish actually get implemented and enforced in your production AI systems.

Camunda gives you fine-grained control over exactly how AI gets used in your business processes, complete flexibility in model and hosting choices, robust orchestration of human-in-the-loop processes, comprehensive monitoring and auditing capabilities, protection against AI-specific risks like prompt injection, and support for cost tracking and usage visibility.

You bring the policies, compliance frameworks, and business requirements—Camunda helps you enforce them at runtime, at scale, and with the visibility and control that enterprise governance demands.

If you’re looking for a way to govern AI at the process layer—to bridge the gap between governance policy and operational reality—Camunda offers the controls, insights, and flexibility you need to do it safely, confidently, and sustainably as your AI initiatives grow and evolve.

Learn more

Looking to get started today? Download our ultimate guide to AI-powered process orchestration and automation to discover how to start effectively implementing AI into your business processes quickly.

The post Ensuring Responsible AI at Scale: Camunda’s Role in Governance and Control appeared first on Camunda.

]]>
Solving the RPA Challenge with Camunda https://camunda.com/blog/2025/06/solving-rpa-challenge-with-camunda/ Fri, 06 Jun 2025 21:06:34 +0000 https://camunda.com/?p=141263 See how you can solve the RPA Challenge (and much more when it comes to orchestrating your RPA bots) with Camunda.

The post Solving the RPA Challenge with Camunda appeared first on Camunda.

]]>
The RPA Challenge is a popular benchmark in the automation community, designed to test how well bots can handle dynamic web forms. The task involves filling out a form that changes its layout with each submission, using data from an Excel file. While this might seem tricky, Camunda’s RPA capabilities make it surprisingly straightforward.

In this post, we’ll walk through what RPA is, how to tackle the RPA Challenge using Camunda’s tools, from scripting the bot to deploying and executing it within a BPMN workflow, and finally how process orchestration can help super-charge your RPA bots.

What is RPA, and why should you care?

Robotic Process Automation (RPA) is a technology that allows you to automate repetitive, rule-based tasks typically performed by humans. Think of actions like copying data between systems, filling out web forms, or processing invoices—if it follows a predictable pattern, RPA can probably handle it. The goal is to free up people from mundane work so they can focus on higher-value tasks that require creativity, problem-solving, or empathy.

At the heart of RPA is the RPA bot, a small script that mimics human actions on a computer. These bots can click buttons, read emails, move files, enter data, and more. They’re like digital assistants that never sleep, don’t make typos, and follow instructions exactly as given. And unlike traditional software scripts, RPA bots are often designed to work with existing user interfaces, so you don’t need to rebuild backend systems to automate work.

If you already use process orchestration, why use RPA at all? Because it’s a fast, cost-effective way to automate existing business processes without major changes to your IT infrastructure. Whether you’re streamlining internal workflows or integrating legacy systems with modern platforms like Camunda, RPA gives you the power to get more done, faster—and with fewer errors. When combined with process orchestration, it becomes even more powerful, allowing bots to operate as part of larger, end-to-end business processes.

Follow along with the video

We’re going to dig into the RPA Challenge with a full tutorial below, but feel free to follow along with the video as well.

Understanding the RPA script

To solve the RPA Challenge, we used a script written in Robot Framework, a generic open-source automation framework. The script is built with Camunda’s RPA components, allowing seamless orchestration within a BPMN process. You can view the full script, as well as a BPMN model that uses the script, by clicking here. In this section, we’ll walk through the script in detail.

Settings

*** Settings ***
Documentation       Robot to solve the first challenge at rpachallenge.com,
...                 which consists of filling a form that randomly rearranges
...                 itself for ten times, with data taken from a provided
...                 Microsoft Excel file. Return Congratulation message to Camunda.
Library             Camunda.Browser.Selenium    auto_close=${False}
Library             Camunda.Excel.Files 
Library             Camunda.HTTP
Library             Camunda

The Settings section defines metadata and dependencies for the script:

  • Documentation: A human-readable description of what the robot does. In this case, it describes the task of filling a dynamically rearranging form using Excel data.
  • Library: These lines load external libraries required for the script to run. Robot Framework supports many built-in and third-party libraries. When using Camunda’s implementation, you can also use Camunda-specific RPA libraries tailored to browser automation, Excel file handling, HTTP actions, and integration with the Camunda platform.

When writing a RPA script, you can import as many libraries as needed to accomplish the task at hand. Here’s what each library used in this challenge does:

  • Camunda.Browser.Selenium: Enables browser automation via Selenium. The auto_close=${False} argument ensures the browser doesn’t automatically close after execution, useful for debugging.
  • Camunda.Excel.Files: Provides utilities to read data from Excel files, which is essential for this challenge.
  • Camunda.HTTP: Used to download the Excel file from the RPA Challenge site.
  • Camunda: Core Camunda RPA library that helps interact with the platform, such as setting output variables for process orchestration.

Tasks

*** Tasks ***
Complete the challenge
    Start the challenge
    Fill the forms
    Collect the results

The Tasks section defines high-level actions that the robot will execute. In Robot Framework, a “task” is essentially a named sequence of keyword calls and defines every action the robot will take.

Each task is given a friendly name (for this challenge, the task is named “Complete the challenge”). This task is composed of three keyword invocations:

  • Start the challenge: Opens the site and prepares the environment.
  • Fill the forms: Loops through the Excel data and fills out the form.
  • Collect the results: Captures the output message after the form submissions are complete.

Each of these steps corresponds to a custom keyword defined in the next section.

Keywords

The Keywords section is where we define reusable building blocks of the automation script. Keywords are like functions or procedures. Each keyword performs a specific operation, and you can call them from tasks or other keywords. The order keywords are defined does not matter, as they are executed individually as defined in the task.

Let’s break down each one.

Start the challenge

Start the challenge
    Open Browser   http://rpachallenge.com/         browser=chrome
    Maximize Browser Window
    Camunda.HTTP.Download
    ...    http://rpachallenge.com/assets/downloadFiles/challenge.xlsx
    ...    overwrite=True
    Click Button    xpath=//button[contains(text(), 'Start')]

This keyword sets up the browser environment and downloads the data file required for the challenge. It begins by launching a Chrome browser using the Open Browser keyword, which is part of the Camunda.Browser.Selenium library imported earlier, and navigates to rpachallenge.com. Once the site loads, the Maximize Browser Window keyword ensures that all elements on the page are fully visible and accessible for automation.

The script then uses the Camunda.HTTP.Download keyword from the Camunda.HTTP library to download the Excel file containing the challenge data; the overwrite=True argument ensures that an up-to-date version of the file is used each time the bot runs. Finally, it clicks the “Start” button on the page using the Click Button keyword, targeting the element via an XPath expression that identifies the button by its text. This action triggers the start of the challenge and reveals the form to be filled.

This keyword sets the stage for the main task by ensuring we’re on the correct page and have the necessary data.

Fill the forms

Fill the forms
    ${people}=    Get the list of people from the Excel file
    FOR    ${person}    IN    @{people}
        Fill and submit the form    ${person}
    END

This keyword performs the core automation logic by iterating over the data and filling out the form multiple times. It starts by calling the custom keyword Get the list of people from the Excel file, which reads the downloaded Excel file and returns its contents as a table—each row representing a different person.

The script then enters a loop using Robot Framework’s FOR ... IN ... END syntax, iterating through each person in the dataset. Within this loop, it calls the Fill and submit the form keyword, passing in the current person’s data. This step ensures that the form is filled and submitted once for every individual listed in the Excel file, effectively completing all ten iterations of the challenge.

The keyword demonstrates how modular and readable Robot Framework scripts can be. Each action is broken into self-contained logic blocks, which keeps the code clean and reusable.

Get the list of people from the Excel file

Get the list of people from the Excel file
    Open Workbook    challenge.xlsx
    ${table}=    Read Worksheet As Table    header=True
    Close Workbook
    RETURN    ${table}

This keyword is responsible for reading and parsing the data from the Excel file. It begins by using the Open Workbook keyword to open the downloaded challenge.xlsx file. Once the file is open, the Read Worksheet As Table keyword reads the contents of the worksheet and stores it as a table, with the header=True argument ensuring that the first row is treated as column headers—making the data easier to work with.

After reading the data, the Close Workbook keyword is called to properly close the file, which is a best practice to avoid file access issues. Finally, the keyword returns the parsed table using RETURN ${table}, allowing the calling keyword to loop through each row in the dataset.

The result is a list of dictionaries, where each dictionary represents a person’s data (e.g., first name, last name, email, etc.).

Fill and submit the form

Fill and submit the form
    [Arguments]    ${person}
    Input Text    xpath=//input[@ng-reflect-name="labelFirstName"]    ${person}[First Name]
    Input Text    xpath=//input[@ng-reflect-name="labelLastName"]    ${person}[Last Name]
    Input Text    xpath=//input[@ng-reflect-name="labelCompanyName"]    ${person}[Company Name]
    Input Text    xpath=//input[@ng-reflect-name="labelRole"]    ${person}[Role in Company]
    Input Text    xpath=//input[@ng-reflect-name="labelAddress"]    ${person}[Address]
    Input Text    xpath=//input[@ng-reflect-name="labelEmail"]    ${person}[Email]
    Input Text    xpath=//input[@ng-reflect-name="labelPhone"]    ${person}[Phone Number]
    Click Button    xpath=//input[@type='submit']

This keyword fills out the web form using data for a single person. It begins with the [Arguments] ${person} declaration, which accepts a dictionary containing one individual’s details—such as name, email, and company information—retrieved from the Excel file. The form fields are then populated using multiple Input Text keywords, each one targeting a specific input element on the page. These fields are identified using XPath expressions that match the ng-reflect-name attributes, ensuring the correct data is entered in the right place regardless of how the form rearranges itself. Once all fields are filled in, the Click Button keyword is used to submit the form, completing one iteration of the challenge.

The challenge dynamically rearranges form fields on each iteration, but these attributes remain consistent, making them a reliable way to target inputs.

Collect the results

Collect the results
    ${resultText}=    Get Text    xpath=//div[contains(@class, 'congratulations')]//div[contains(@class, 'message2')]
    Set Output Variable    resultText    ${resultText}
    Close Browser

After all the forms have been submitted, this keyword captures the final confirmation message displayed by the RPA Challenge. It starts by using the Get Text keyword to extract the congratulatory message shown on the screen, targeting the message element with an XPath expression that identifies the relevant section of the page.

The retrieved message is then passed to Camunda using the Set Output Variable keyword, which makes the result available to the surrounding BPMN process—allowing downstream tasks or process participants to use or display the outcome. Finally, the Close Browser keyword is called to shut down the browser window and clean up the automation environment.

Testing, deploying and executing with Camunda

Once your RPA script is ready, the next step is to test and integrate it into a Camunda workflow so it can be executed as part of a larger business process. To begin, you’ll need to download and install Camunda Modeler, a desktop application used to create BPMN diagrams and manage automation assets like RPA scripts. (Note: RPA scripts cannot be edited in Web Modeler currently; this feature is in development.) Desktop Modeler includes an RPA Script Editor, which allows you to open, write, test, and deploy Robot Framework scripts directly from within the application.

Before deploying the script to Camunda, it’s a good idea to test it locally. Start by launching a local RPA worker—a component that polls for and executes RPA jobs. You can find setup instructions for the worker in Camunda’s Getting Started with RPA guide. Once your worker is running, use the RPA Script Editor in Camunda Modeler to open your script and run it. This will launch the browser, execute your automation logic, and allow you to verify that the bot behaves as expected and completes the RPA Challenge successfully.

Test-rpa-script-camunda

After confirming the script works, you can deploy it to your Camunda 8 cluster. Ensure that you have set an “ID” for the RPA script; this ID is how your BPMN process will reference and invoke the script. In the Modeler, click the “Deploy” button in the RPA Script Editor and choose the target cluster.

Next, create a new BPMN model in the Camunda Modeler to orchestrate the RPA bot. Start with a simple diagram that includes a Start Event, a Task, and an End Event. Select the Task and change it to the “RPA Connector”. Then, in the input parameters for the task, set the “Script ID” parameter to the ID you set in the RPA script earlier.

Rpa-connector-camunda

Once your BPMN model is ready, deploy it to the same Camunda cluster. To execute the process, you can either use the Camunda Console’s UI to start a new process instance or call the REST API. The RPA Worker will pick up the job, run the associated script, and return any output variables—like the final confirmation message from the RPA Challenge—back to Camunda. You can monitor the execution and troubleshoot any issues in Camunda Operate, which provides visibility into your running and completed processes, including logs and variable values.

With that, your RPA script is fully integrated into a Camunda process. You now have a bot that not only completes the challenge but does so as part of a well-orchestrated, transparent, and scalable business workflow.

How Camunda supercharges your RPA bots

RPA is great at automating individual tasks—but what happens when you need to coordinate multiple bots, connect them with human workflows, or make them part of a larger business process? That’s where Camunda comes in.

Camunda is a process orchestration platform that helps you model, automate, and monitor complex business processes from end to end. Think of it as the brain that tells your RPA bots when to run, what data to use, and how they fit into the bigger picture. With Camunda, your bots are no longer isolated automation islands—they become integrated, managed components of scalable workflows.

For example, you might use a Camunda BPMN diagram to define a process where:

  1. A customer submits a request (via a form or API),
  2. An RPA bot retrieves data from a legacy system,
  3. A human reviews the output,
  4. Another bot sends the results via email.

Camunda handles all of this orchestration—making sure each task runs in the right order, managing exceptions, tracking progress, and providing visibility through tools like Camunda Operate. And because Camunda is standards-based (using BPMN), you get a clear, visual representation of your processes that both developers and business stakeholders can understand.

When you combine RPA with Camunda, you’re not just automating tasks—you’re transforming how your organization runs processes. You get flexibility, scalability, and transparency, all while reducing manual effort and human error. Whether you’re scaling up existing bots or orchestrating new workflows from scratch, Camunda makes your RPA investments go further.

Conclusion

Automating the RPA Challenge with Camunda showcases our ability to handle dynamic, UI-based tasks seamlessly. By combining Robot Framework scripting with Camunda’s orchestration capabilities, you can build robust automation workflows that integrate both modern and legacy systems.

Ready to take your automation to the next level? Explore Camunda’s RPA features and see how they can streamline your business processes. Be sure to check out our blog post on how to Build Your First Camunda RPA Task as well.

The post Solving the RPA Challenge with Camunda appeared first on Camunda.

]]>
Building Trustworthy AI Agents: How Camunda Aligns with Industry Best Practices https://camunda.com/blog/2025/05/ai-agent-design-patterns-in-camunda/ Fri, 09 May 2025 00:08:15 +0000 https://camunda.com/?p=137829 Build, deploy, and scale AI agents with an enterprise-ready framework that balances automation, control, speed, safety, complexity, and clarity.

The post Building Trustworthy AI Agents: How Camunda Aligns with Industry Best Practices appeared first on Camunda.

]]>
The rapid evolution of AI agents has triggered an industry-wide focus on design patterns that ensure reliability, safety, and scalability. Two major players—OpenAI and Anthropic—have each published detailed guidance on building effective AI agents. Camunda’s own approach to agentic orchestration shows how an enterprise-ready solution can embody these best practices.

Let’s take a look at how Camunda’s AI agent implementation aligns with the recommendations from OpenAI and Anthropic, and why this matters for enterprise success.

Clear task boundaries and explicit handoffs

Both Anthropic and OpenAI stress the importance of defining clear task boundaries for agents. According to Anthropic’s recommendations, ambiguity in agent responsibilities often leads to unpredictable behavior and systemic errors. OpenAI similarly highlights that agents should have narrowly scoped responsibilities to ensure predictability and reliability.

At Camunda, we address this by orchestrating agents through BPMN workflows. Each agent’s task is represented as a discrete service task with well-defined inputs and expected outputs. For example, in our example agent implementation, an email is sent only after a Generate Email Inquiry task completes its work and delivers validated output. This sequencing ensures that each agent knows precisely when to act, what data it receives, and what deliverables it is accountable for, thereby minimizing risks of cascading failures.

By visualizing these handoffs in BPMN diagrams, stakeholders across technical and nontechnical domains can easily understand the agent responsibilities, audit workflows, and troubleshoot when necessary.

AI agent inserted into BPMN diagram for process visibility

Narrow scope with composable capabilities

OpenAI’s guide highlights the benefits of agents that are designed with specialized, narrow scopes, which can then be composed into larger systems for more complex tasks. Anthropic echoes this, suggesting that mega-agents often become unwieldy and hard to trust.

Camunda’s architecture embraces this philosophy through microservices-style orchestration. Each AI agent within Camunda focuses on mastering a single task—for instance, information retrieval, natural language generation, decision support, or classification. These specialized agents can then be strung together through BPMN models to create sophisticated end-to-end business processes.

Let’s look at a practical example.

In an insurance claims process, Camunda orchestrates a Document Extraction agent to pull key fields, a Fraud Detection agent to assess risk, and a Claims Decision agent to recommend next steps. Each agent operates independently yet collaboratively, enhancing system resilience and allowing incremental upgrades without overhauling the entire workflow.

AI agents working together with their separate tasks
Each agent has its own limited set of specialized tasks, with the ability to compose tasks together within agents.

Monitoring, error handling, and human-in-the-loop

Both OpenAI and Anthropic emphasize that no agent should operate without proper supervision mechanisms. Agents must report their states, signal when they encounter issues, and escalate gracefully to human overseers.

Camunda is particularly strong in this area thanks to our suite of tools like Operate, Optimize, and Tasklist. Here’s how we achieve enterprise-grade monitoring and human-in-the-loop design:

  • Full observability: Camunda Operate provides real-time visibility into every process instance, showing exactly which agent did what, when, and with what outcome.
  • Error boundaries and fallbacks: BPMN error events and boundary timers allow processes to anticipate common failures (like timeouts or bad data) and take corrective actions, such as retrying, skipping, or escalating to a human operator.
  • Seamless human escalation: When agents cannot confidently complete a task—for example, due to ambiguity or ethical concerns—Camunda can dynamically activate a human task, prompting a person to step in, review, and make decisions.

In a future release—the 8.8 release scheduled for October—Camunda is taking this one step further by connecting these features directly to the agent. Failed tasks will automatically trigger the agent to reevaluate the prompt, allowing the agent to respond dynamically as the environment changes. Operate will provide real-time visibility into the agent, allowing seamless human escalation and recovery.

These capabilities ensure that agents augment rather than replace human judgment, a key principle recommended by both OpenAI and Anthropic.

Composability and reusability

Anthropic strongly recommends composable agent architectures to allow rapid iteration and minimize technical debt. Composable systems are more adaptable, easier to troubleshoot, and more cost-effective to maintain.

Camunda’s approach to process design aligns perfectly with this recommendation. Our BPMN models are built around modularity, enabling teams to:

  • Swap out individual agents without rewriting the entire workflow
  • Reuse standard subprocesses across different projects
  • Version-control agent behaviors separately, making it easy to A/B test and roll back changes

Drawing from IBM’s insights on agent design, Camunda’s platform allows enterprises to build libraries of reusable agent modules. These can be assembled like building blocks to rapidly create new processes or modify existing ones, significantly accelerating innovation cycles.

Transparent orchestration and explainability

OpenAI’s guide makes it clear: trustworthy AI systems must provide explainable decision pathways. Stakeholders need to understand why an agent acted a certain way, especially when decisions have legal, ethical, or financial consequences.

Camunda’s BPMN-driven orchestration inherently provides this transparency. Every agent interaction, every decision point, and every data handoff is visually modeled and logged. Teams can:

  • Trace the complete lineage of a decision from input to output
  • Generate audit trails automatically for compliance needs
  • Explain system behavior to both technical audiences and nontechnical stakeholders

In highly regulated industries like banking, healthcare, or insurance, this kind of transparency isn’t just a nice-to-have—it’s a nonnegotiable requirement. With Camunda, organizations can meet these standards confidently.

Centralized orchestration provides guardrails

Today, AI agents do not yet exhibit the level of trustworthiness, transparency, or security required to make a fully autonomous swarm of agents safe for enterprise contexts. In decentralized models, agents independently delegate tasks to one another, which can lead to a lack of oversight, unpredictable behavior, and challenges in ensuring compliance.

At Camunda, we believe that the decentralized agent pattern represents an exciting vision for the future. However, we see it as a pattern that is still years away from being viable for enterprise-grade AI systems.

For now, Camunda strongly supports centralized or manager patterns. With this approach, a single orchestrator (in Camunda’s case, the BPMN engine) manages when, why, and how agents act. This centralized orchestration ensures:

  • Full visibility into agent activities
  • Clear accountability for decision points
  • Easier implementation of security, compliance, and auditing mechanisms

Our philosophy is simple: while the future may hold promise for decentralized agent ecosystems, today’s enterprises need reliability, explainability, and control. Centralized orchestration, powered by Camunda, offers the safest and most effective path forward that you can utilize immediately, without sacrificing your flexibility for improvements and innovations in AI that may come in the future.

Enterprise-grade agentic orchestration is here!

By closely adhering to the industry best practices, Camunda delivers an enterprise-ready framework for building, deploying, and scaling AI agents. Our approach balances automation with control, speed with safety, and complexity with clarity.

We believe that AI agents should operate transparently, predictably, and with human-centric governance. With Camunda, enterprises gain not just a platform but a reliable foundation to scale AI responsibly and sustainably.

Want to learn more? Dive into our latest release announcement or check out our guide on building your first AI agent.

Stay tuned—the future of responsible, scalable AI is being built right now, and Camunda is at the forefront.

The post Building Trustworthy AI Agents: How Camunda Aligns with Industry Best Practices appeared first on Camunda.

]]>
Camunda 8.7 Preview: Intelligent Document Processing https://camunda.com/blog/2025/03/camunda-8-7-preview-intelligent-document-processing/ Mon, 17 Mar 2025 20:14:46 +0000 https://camunda.com/?p=131418 Native Intelligent Document Processing is coming to Camunda. In this preview of the alpha release, learn how you can get started with it today.

The post Camunda 8.7 Preview: Intelligent Document Processing appeared first on Camunda.

]]>
One of the most anticipated new features of the upcoming 8.7 release is Intelligent Document Processing (IDP). We teased IDP at CamundaCon in New York City last fall, and since then the engineering teams have been hard at work building a scalable document handling solution inside Camunda. With the latest alpha release, we are excited to announce that IDP is now available for testing for those on Self-Managed, with full SaaS support coming in next month’s 8.7 minor release.

As always, this is an alpha release, so you may encounter incomplete features. If you do, please let us know! You can share the issue on our forum or, if you’re an enterprise customer, let your AE or CSM know about the issue. This feedback helps our team as they work to finalize the features for the 8.7 release in April!

Requirements

Note: this configuration applies to an early-access alpha release of Camunda 8.7 and IDP. This configuration is likely to be different in the final 8.7 release.

There are a few requirements before you can start using IDP with Camunda. To get started, you’ll need API keys for Amazon Bedrock. Behind the scenes, Camunda connects to Bedrock to parse and understand the uploaded document. There are a few steps needed:

  • First, you need to configure a user in AWS IAM that has permission to Amazon Bedrock, AWS S3, and Amazon Textract.
  • Configure and save the access key pair for the IAM user. You need to save both the access key and secret access key.
  • Create an AWS S3 bucket. It can be named whatever you want, but remember the name and region as they will be needed next!

Next, you’ll need to start an 8.7.0-alpha5 cluster with IDP enabled. IDP is only supported with 8.7.0-alpha5 in Self-Managed. SaaS alpha releases do not support IDP yet! (Support for SaaS will be included in 8.7, but is not available in this alpha.) IDP is also not supported in Desktop Modeler; you must use Web Modeler to configure and test IDP.

The easiest way to get started with IDP in Self-Managed is to use the Camunda 8 Docker configuration. Once you’ve downloaded the Docker Compose files, you will need to:

  • Add the access keys, S3 bucket name, and region from AWS to the connector-secrets.txt file
  • Add the access keys, S3 bucket name, and region to the docker-compose.yaml file in two different places:
    • Under the zeebe container, and;
    • Under the tasklist container

And that’s it! Now you’re ready to train Camunda on how to extract data from a document!

Training document extraction

Before you can use IDP in your processes, you need to define how data should be extracted from each type of document. The first step is to create a new IDP Application inside Web Modeler.

Create-idp-application-camunda

Select your alpha cluster and give your IDP application a name. Each IDP application will store a set of document extraction templates which will be used to extract data from an uploaded document. When you add an IDP connector to your process, you link it to an IDP application, just like you link a user task to a form. Camunda will match the document to one of the templates, then execute that template to extract the data.

Currently, IDP only supports unstructured data extraction. This method uses AI to understand a document’s structure and find the necessary data. In the future, Camunda will support a structured data extraction method that will allow data extraction from well defined data structures, such as XML or JSON documents.

For example, let’s build a customer invoice IDP application. First, we will create an unstructured data extraction called “Invoice PDF.”

Create-extraction-project

The first step after creating the project is to upload sample documents. It is best to upload several different versions of the document you are trying to parse, to give Camunda enough data to accurately test against. By training the AI models against different variations of the document, it helps ensure that the model has the best chance at success.

After uploading a document, click on the “Extract” button to the right of it. The next screen might look a bit intimidating at first, so let’s break down the three major sections:

Extraction-details
(click to enlarge)
  1. On the right side of the page, you will see a preview of the document you uploaded. If you uploaded multiple documents, you can preview each of them using the dropdown just above the preview. This makes it easy to reference the document itself while defining the fields to be extracted.
  2. On the left is a list of fields to be extracted from the document. For each field, you give it a name (this is the name of the variable the data will be stored in, similar to how forms work), a data type, and a prompt that tells the AI what to extract from the document.
  3. Finally, there is the extraction model and a couple of buttons. The extraction model dropdown gives you a choice between multiple available AI models available in AWS; the “Extract document” button tests your prompts against the previewed document; and last, you can save the configuration as a test case.

The prompt you create for each field is the same type of prompt that you might give ChatGPT. In many ways, creating prompts is a skill that needs to be learned and practiced. For the simple example in the screenshot, the prompt of “find the name of the person who this invoice is for” might not be the most eloquently stated English sentence, but it is a prompt the AI understands fairly consistently.

Looking for more information about AI prompting? Check out this blog post!

You might be wondering why we offer multiple models. There are two primary reasons: cost and compliance. As enterprises adopt AI, they may have policies that restrict which models can be used. Different models also have different costs. You will likely find that you need different models for different documents. Each model has its own strengths and weaknesses, and there is no prescription here: test your model and refine your prompts to get the results your process needs. Camunda offers multiple models to allow you and your enterprise to find the right balance between capability, cost, and compliance. (Coming up on the roadmap is allowing enterprises to bring their own model!)

When you’ve finished adding a few fields and have selected a model, click the “Extract document” button to test your prompts. For each field you added, you should see the expected value in the “Extracted value” text box. If you are getting different data, try refining your prompt.

Creating a test case

Once you’re satisfied with the results of the extraction, it is time to save this data set as a test case. A test case makes it easy to test the model against multiple documents and ensure you are getting the level of consistency you expect.

How is this different from the “Extract document” test, you might be wondering? The previous test worked against the single document selected in the preview; this step runs against all the sample documents uploaded! It also validates the data extracted in the previous step against the data extracted during the test to ensure they match. (In other words, it is checking the newly extracted data against the data saved in the previous step to ensure accuracy.)

After selecting the AI model you want to use, click the “Test documents” button and review the results. You can expand each field and view the extracted value for each of the test documents. If you find yourself getting inconsistent or incorrect results, you will need to go back to the “Extract data” step and further refine your prompts.

Creating-test-case
(click to enlarge)

You’ll notice in this screenshot that the total field did not get the same values as the test case. This is because I used a different model to test with (Claude 3.5 Sonnet instead of Llama 3 8B Instruct). In order to resolve this issue, I can choose to move forward with the Llama 3 model, or I can go back to the “Extract data” step and refine my prompts to work better with Claude.

Once you are satisfied with the results, click the “Publish” button and select “Publish to project.” Here you can give your IDP application a version and description, as well as select which model will be used to extract the data. Similar to how connector templates work, after you’ve developed it, you must publish it so that you can use it within processes.

Adding IDP to a process

There are two important things to consider when adding IDP to a process: first, you need a way to upload a document; and second, you need to use the new “IDP Extraction Project” task type.

There are two ways to upload a document to a running process:

For this example, I built the simplest possible form in Web Modeler with only a file picker. I gave the file picker the key of document, so I know that is the variable name that will store the uploaded document(s).

Upload-document

Next, I added a task for document processing. If you scroll all the way to the bottom of the “Change element” list, you will see a new section below the connectors called “IDP Extraction Project.” You should see your published IDP application here. Select it!

Add-idp-task-camunda

You’ll notice in the details pane that some secrets are automatically populated. If you changed the name of the connector secrets when configuring your cluster, you will need to remember to change the name here too! Be sure to check all of the fields to make sure they match how you’ve defined your process and data (of course, don’t forget to define your variable output handling too!):

  • Authentication: ensure that the proper connector secrets are set.
  • AWS Properties: ensure that the AWS region for your S3 bucket is set.
  • Input Message Data:
    • The “Document” field should reference whatever variable you stored the uploaded document in. For my example, I used the key name document. Document uploads via the file picker are a list, so we need to ensure we are getting the first element of that list by setting this field to document[1].
    • The “AWS S3 Bucket Name” field should be the name of the S3 bucket you configured earlier. By default we assume the name is “idp-extraction-connector.”

And that’s it, you’re ready to run and test your process!

Exciting things ahead!

If you want to see IDP implemented end to end, showing the file upload and parsing of the document, check out this fantastic introduction video from our Senior Developer Advocate, Niall Deehan!

Curious for what else is coming in 8.7? Check out the latest alpha release blog for 8.7.0-alpha5! Ready to start experimenting with agentic AI? Learn about some essential BPMN patterns for agentic AI, and then build your own agent! And as always, if you have any questions, join our community forum!

Happy orchestrating!

The post Camunda 8.7 Preview: Intelligent Document Processing appeared first on Camunda.

]]>
FEEL for Citizen Developers https://camunda.com/blog/2024/09/feel-citizen-developers/ Fri, 27 Sep 2024 01:05:00 +0000 https://camunda.com/?p=118904 How to avoid pitfalls and make the most of FEEL for a citizen developer.

The post FEEL for Citizen Developers appeared first on Camunda.

]]>
You’re a business analyst, not a software engineer, and you’ve been asked to help implement parts of a new BPMN or DMN model for your business. Dragging and dropping the BPMN elements is pretty easy. So is adding labels. But then you need to write a small piece of code to make an exclusive gateway follow the right path. You’re not a programmer! How are you supposed to do that?!

Have no fear! FEEL is here! FEEL, or Friendly Enough Expression Language, lives up to its name: it is simple to get started with and understand, but powerful enough that you can write an expression for almost any data operation you need to perform.

What is FEEL, exactly?

FEEL is part of the DMN specification from the Object Management Group (OMG). FEEL is not a fully featured programming language. It is meant to create and execute “expressions”: generally speaking, an expression is a series of commands that produce a value. For instance, 1 + 1 is an expression that produces the value of 2. (Of course, in programming expressions are often more complex than this, but this is a usable definition for FEEL.) This means you can do some pretty complex data manipulation, but you can’t write a complete program using just FEEL.

One important feature of FEEL is that it uses “readable” language that feels more like writing a sentence than a computer instruction. Code often looks unintelligible (sometimes even to a veteran software developer!) with the use of lots of symbols (like dots, colons, asterisks, brackets, braces, and more). Different programming languages often have their own idiomatic way of writing code; being able to read one programming language doesn’t necessarily mean you can read another.

FEEL attempts to make the code feel less like “code” and more like “language.” Let’s look at a quick example to illustrate what I mean. Let’s say you want to transform a word to all uppercase letters. In a programming language like Java, you would write:

“hello”.toUpperCase();

In FEEL:

= upper case(“hello”)

There are similarities between the two. For instance, both words are surrounded by double quotes (more on that next!), but the FEEL syntax feels more like something you would say out loud.

Note: In a BPMN model or Camunda Form, FEEL expressions begin with an equals sign (“=”); in DMN, you don’t need the equals sign!

Understanding data in FEEL

In FEEL, every value has a type. This helps the computer understand what the value is, and allows us to perform evaluations with the value. There are several types supported by FEEL:

  • Strings: these are alpha-numeric values. String values are surrounded by double quotes, and can contain any character between the quotes.
  • Numbers: these are, as the name implies, only numbers. Number values are not surrounded by quotes. (100 is a number, but ”100” is a string!) FEEL supports both whole numbers as well as decimals.
  • True/False: called “boolean” values, you can represent a true or false value in FEEL. As with numbers, a boolean value does not have quotes. (true is a boolean, but ”true” is a string!)
  • Lists: sometimes referred to as arrays, a list is what it sounds like: a list of values. The values can be of any type, so you can mix and match values as needed. Lists are a set of comma-separated values inside of brackets: [ 1, “one”, true ] is a list of three items. You can also nest the data and have lists inside of lists! ([ [“list one”], “inside” “list two” ])
  • Contexts: sometimes referred to as objects, contexts are a collection of “key-value pairs” surrounded by braces. The key is a string, and the value can be of any type (including a list or another context): { “keyA”: “value A”, “keyB”: 123, “listKey”: [ “contains”, “a”, “list” ] }

FEEL also supports date, time, and duration values. I won’t be covering those in this post to keep things a little simpler;  if you need to work with date/time values, please read our FEEL documentation and ask any questions you have on the forum!

Data in FEEL is usually assigned to a variable, allowing the data to be referenced more easily in other parts of the process. Variables are names that contain a value. For instance, when you get a response back from the REST Connector, it is a variable named response that contains a context value.

Working with data

Now that we have a basic understanding of what data looks like with FEEL, it’s time to learn how to work with that data. There are many different operations you can perform with FEEL, and they can be categorized as either “expressions” or “functions.” (That’s how you’ll find these categorized in our documentation. I know it’s a bit confusing using “expression” in two different contexts, but it will make sense in the end!) Generally speaking, the expressions use the value (for instance, performing arithmetic on numbers or getting a value out of a list) and usually work with a single value, while the functions manipulate the value (for instance, converting a string to a number or merging two contexts) and usually work with multiple values. (Note: these are not strict definitions, but I find it helpful to think of them this way.)

Let’s use some real data to learn how these expressions and functions can be used together. If you find yourself feeling a bit lost at any point, I recommend referring back to the documentation then coming back to the example. (If you prefer video training, Camunda Academy has an excellent FEEL video series you can watch too!) For the real data, I’m going to use a fun data set from the Marvel API (yes, a data set about comics)! This data set contains information about the character Spider-Man and related comics, stories, and artists. Don’t worry, you don’t have to use the API for this; you can look at the data set for these examples by clicking here. After fetching the details for a single character, we need to transform the data into the following context shape:

{
  “name”: string,
  “description”: string,
  “numberOfAvailableComics”: number,
  “topCreators”: string,
  “eventSummary”: list of contexts [ {
    “eventName”: string,
    “endedOrContinuing”: string
  } ]
}

First, we get the name, description, and number of comics. These are elements directly in the data set, so all we need to do is reference the appropriate variable name. The data from the API comes as a JSON object, which is automatically converted to a FEEL context in Camunda. You can access individual properties of the context using a dot (.):

= {
  “name”: data.name,
  “description”: data.description,
  “numberOfAvailableComics”: data.comics.available
}

(Try it in the FEEL Playground!)

Next, we will fill in our list of creators. I want this to be a comma separated list with the names of the available creators. Looking in the data set, there are lists of creators available for each event. There are quite a few names available, so I decided I want to filter the list to just those with the role of “writer”, and select only up to the first five. Let’s see how we might do that in FEEL:

{
  "combinedListOfCreators": flatten(data.events.items.creators.items),
  "onlyWriters": combinedListOfCreators[item.role = "writer"],
  "onlyDistinctWriters": distinct values(onlyWriters),
  "limit5Writers": sublist(onlyDistinctWriters, 1, 5),
  "topCreators": string join(limit5Writers.name, ", ")
}

(Try it in the FEEL Playground! Scroll to the bottom of the output to see the top creators.)

Whoa, that looks complicated! It might look that way, but it makes sense once you break it down into its component parts. Let’s do that!

There are two events in the data set, each with a list of creators; the first step is to merge those two lists so that we have a single list to work from: flatten(data.events.items.creators.items). The “flatten” function turns multiple lists—including nested lists—into a single list. Doing it this way, we don’t need to know how many items there are in the list. The FEEL engine automatically discovers every nested list and flattens them all. If there were seven events instead of two, we would get the same single list result with the flatten function. This is one of the wonderful features of FEEL! We store this in a new variable named combinedListOfCreators.

I decided I only want to list creators with the role of “writer”, so the next step is to filter our combined list. To filter a list in FEEL, you place a filter condition inside of brackets. For instance, the expression =listOfNumbers[item > 10] will only return items that are greater than 10. The filter condition always has access to a variable named item which represents one item in the list. In our combined list, each item has a role property that we can filter on: =combinedListOfCreators[item.role = “writer”]. Now we have a variable named onlyWriters that contains only creators with the role of “writer”.

Of course, it’s possible that there are duplicate names between the two events! To ensure we don’t have duplicate names in our list, we can use the “distinct values” function to ensure only unique items exist in the list, and assign it to a new variable named onlyDistinctWriters: distinct values(onlyWriters)

I chose to limit the list to five names only, because there are lots of names available in the data. We can use the “sublist” function to create a list of a specific length (in our case, a length of 5): sublist(onlyDistinctWriters, 1, 5). What do the numbers mean? We want our sublist to start with the first writer and contain up to 5 names. (If we had a different data set and we needed to get the 10th through 20th results, we could use sublist(listName, 10, 10)!)

Last, I wanted to show the names a comma separated list. We can use the “string join” function to take each item in the list and make a single string out of it. Using the second parameter in the function, we can provide what the delimiter is. In this case, we want it to be a comma: string join(limit5Writers, “, “) (don’t forget the space after the comma!).

Now that we walked through that entire example, have a look at this FEEL expression:

= {
  "topCreators": string join(
    sublist(
      distinct values(
        flatten(data.events.items.creators.items)[item.role = "writer"]
      ), 1, 5).name, ", "
  )
}

(Try it in the FEEL Playground!)

This FEEL expression does the exact same thing as the one above, but it doesn’t use the intermediate variable names. FEEL can work both ways, and how you use it depends entirely on what you need to accomplish, and how comfortable you are working with FEEL!

One more example!

Now that we’ve gone through one example step by step, take a look at this next FEEL expression. See if you can figure out how it works before we walk through it:

{
  "topEvents": for event in data.events.items return {
    "eventName": event.title,
    "endedOrContinuing": if event.end != null then "Ended" else "Continuing"
  }
}

(Try it in the FEEL Playground!)

The last features of FEEL I’m going to cover in this post are conditionals, loops, and nulls. A conditional is an “if” statement: if something is true, then this, or else that. In FEEL, a conditional statement starts with the keyword “if”; the condition following “if” must have a true or false result. In other words, if 5 + 10 is not a valid condition, because “5 + 10” isn’t true or false. The example condition is first checking if the end property has a value (more on that next), and if it does, then it sets endedOrContinuing to the string value “Ended”; if the end property doesn’t have a value, then it sets endedOrContinuing to the string value “Continuing”.

The Marvel API returns anend property if the event or story has ended; if the event or story is continuing, there is no end property returned from the API. You check for a missing variable by checking if it is null. Sometimes, a variable can exist and also be set to a value of null. Either way, there is no value, so for both conditions, the FEEL engine returns true. (Try it in the FEEL Playground!)

Finally we have the loop. There are many cases where you need to go through each entry in a list and do something with that entry. In FEEL, you do this with the for loop: for each entry in a list, return something. In the example, we are looping through every entry in the list of events, and returning a new context that includes the name (or title) of the event and whether it has ended or is continuing. Just like in the previous example, you can combine multiple FEEL expressions and functions together into one single expression, or you can separate them into intermediate variables.

Bringing it all together

Let’s take a look at the final FEEL expression:

= {
  “name”: data.name,
  “description”: data.description,
  “numberOfAvailableComics”: data.comics.available,
  "topCreators": string join(
    sublist(
      distinct values(
        flatten(data.events.items.creators.items)[item.role = "writer"]
      ), 1, 5).name, ", "
  ),
  "topEvents": for event in data.events.items return {
    "eventName": event.title,
    "endedOrContinuing": if event.end != null then "Ended" else "Continuing"
  }
}

(Try it in the FEEL Playground!)

Congratulations, you made it to the end! You now know more about FEEL than you did just a few minutes ago. If you came into this post with no FEEL experience, I don’t expect you to now be an expert. My hope is that you are more comfortable getting started.

This post doesn’t cover everything FEEL can do. Not only are there additional data types that represent dates, times, and durations, but there are also a lot of additional expressions and functions for each data type too! So where next?

  • As always, the Camunda FEEL documentation is a great place to start. The docs cover all the available data types, expressions, and functions.
  • Camunda Academy has a wonderful FEEL video series that provides examples and exercises in addition to describing the different features of FEEL.
  • Find yourself having trouble getting your expression to work correctly? Join the Camunda community forum and ask your question there. Don’t forget to search, too!

There are some exciting new features coming to Camunda’s FEEL capabilities in the future too. In recent releases, we’ve improved the auto-complete in Camunda Modeler to better help with FEEL syntax. Keep an eye on future release announcements for some exciting new FEEL features too.

Happy coding!

The post FEEL for Citizen Developers appeared first on Camunda.

]]>
FEEL for Software Developers https://camunda.com/blog/2024/09/feel-for-software-developers/ Thu, 26 Sep 2024 23:23:24 +0000 https://camunda.com/?p=118896 How to avoid pitfalls and make the most of FEEL for a seasoned software developer.

The post FEEL for Software Developers appeared first on Camunda.

]]>
As a software engineer, you’ve been asked to help implement parts of a new BPMN or DMN model for your business. You feel quite comfortable with the APIs and code needed after reviewing the documentation, but you only took a quick look at the FEEL documentation. You’re a seasoned developer—it should be no problem to work with FEEL. You write your first FEEL expression to extract an element from a list, and suddenly the model doesn’t work, and you have errors saying your FEEL expression is invalid.

FEEL, or Friendly Enough Expression Language, generally lives up to its name: it is simple to get started with and understand, but powerful enough that you can write an expression for almost any data operation you need to perform. But for developers who are already familiar with other programming languages, there are some common pitfalls when getting started with FEEL.

What is FEEL exactly?

Before we look at those common pitfalls, let’s first define what, exactly, FEEL is and look at how it compares to other programming languages.

FEEL is part of the DMN specification from the Object Management Group (OMG). FEEL is not a fully featured programming language; rather, it is just for writing as expressions, as the name implies. You can chain those expressions together, so you can do some pretty complex data manipulation, but you can’t write a complete program using just FEEL.

While FEEL’s maintained spec is from OMG, there is no existing engine or runtime for it, so Camunda had to write its own! Camunda uses different FEEL interpreters, depending on what part of the application you are using.

Primarily, Camunda uses feel-scala. When Zeebe or the Connector Runtime evaluates a FEEL expression, it is using the feel-scala engine. When working with FEEL in an editor (for instance, inside Web Modeler), a linter is used to check for any syntax errors. In the few cases where Camunda needs to execute FEEL inside the browser (for instance, while creating Connector templates in Web Modeler), Camunda uses feelin, a JavaScript implementation of FEEL written by one of Camunda’s principal engineers, Nico Rehwaldt.

Types in FEEL vs. types in other languages

FEEL has several data types that you, as a developer, are most likely already familiar with. But FEEL uses some different terminology than other languages. (You can review all the data types and what Java types they map to here.) These are the key things you need to know:

  • Null: Anything that doesn’t exist in FEEL is null. Referencing a variable that doesn’t exist in the current scope returns a null value.
  • Numbers: Similar to JavaScript, there is one type to represent both integers and floating point numbers, signed or unsigned.
  • Date, time, and duration: FEEL supports multiple different date and time types, including durations. It supports different formats for the date and time, and I recommend reviewing the documentation for that. There are two supported durations: days-time and years-months. Days-time duration can have a duration of days, hours, minutes, and seconds; years-months durations are, as the name suggests, durations in years and months. Both have a unique format for specifying the duration, which you can review here.
  • Lists: Lists in FEEL are loosely typed arrays; they can contain any combination of other types.
  • Contexts: Contexts in FEEL are, essentially, JSON objects. They are made up of key-value pairs, and the value can be any data type or expression.

Common pitfalls

FEEL has a different syntax and behavior in some cases; they make sense within the scope of what FEEL is trying to accomplish but might, at first, feel unintuitive for an experienced developer.

Spaces in function names

One interesting aspect of FEEL that stands out to many developers I’ve spoken with is that FEEL allows spaces in function names! Nearly every developer has needed to get the index of an element in an array. It might look like this: myArray.indexOf(value). In FEEL, it looks like this: index of(myArray, value). Perhaps you need to join a list into a single string. In another language, it may look like this: String.join(myArray, “, “). In FEEL, it looks like this: string join(myArray, “, “).

Sometimes it still feels strange to me to put spaces in function names, even after writing FEEL expressions for several years. However, it makes a lot of sense when you consider what FEEL is trying to accomplish. FEEL is meant to be accessible to non-developers, and using a more “natural language” style feels more intuitive and comfortable for those with no software development experience. After all, you shouldn’t have to be a seasoned programmer to build a DMN table!

List indexing and filtering

Lists (or arrays) in FEEL use one-based indexing. Arrays in most programming languages use zero-based indexing.

Take this expression, for example: = [“a”, “b”, “c”, “d”][1]. If you wrote that in Java, you would expect the value b to be returned, because Java uses zero-based indexing. With FEEL, you get the value a. This one item is probably the most common issue developers encounter when they first start using FEEL.

Filtering a list also has a unique syntax in FEEL. Rather than pass the list and a condition to function, FEEL uses a condition inside the brackets where you typically specify the index. The filter exposes a variable named item which you can use to build your condition:

={
  “unfiltered”: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
  “filtered”: unfiltered[even(item)]
}

(Try it in the FEEL Playground!)

Type coercion and concatenation

FEEL does not do any type coercion. In JavaScript, you can do something like ”1” + 1 and get 11 as the output. In FEEL, you will get null instead. It doesn’t try to guess if you want to cast the string to a number and do addition, or 1 to a string and do concatenation. You must manually tell it what you want to happen: =”1” + string(1) or =number(“1”) + 1.

However, the number function cannot coerce a non-numeric string to a number. This expression results in a value of null: =number(‘string”) + 1.

It is important to cast the data to the correct type before doing the operation. If this isn’t something your application can do, one best practice would be to do this in any result expressions or output mappings in the process to ensure that the data is already of the expected type.

Combining expressions

I compared FEEL’s context type to JSON objects. This is very true, they do resemble and act like JSON objects in many ways. However, FEEL contexts come with one added superpower: the expressions within the context are evaluated sequentially, so you can reference a previous key within the same context.

Let’s look at an example using some real world data.

I’m going to use a fun data set from the Marvel API (yes, a data set about comics)! This data set contains information about the character Spider-Man and related comics, stories, and artists. Don’t worry, you don’t have to use the API for this; you can look at the data set for these examples by clicking here.

In a separate blog post that introduced FEEL to Camunda users without development experience, I included this example:

{
  "combinedListOfCreators": flatten(data.events.items.creators.items),
  "onlyWriters": combinedListOfCreators[item.role = "writer"],
  "onlyDistinctWriters": distinct values(onlyWriters),
  "limit5Writers": sublist(onlyDistinctWriters, 1, 5),
  "topCreators": string join(limit5Writers.name, ", ")
}

(Try it in the FEEL Playground!)

Notice how the onlyWriters property references the result of the combinedListOfCreators expression above it! This is a fun feature you can’t do in JSON. It allows you to build powerful expressions where needed.

Of course, not all those intermediate variables are necessary. You can combine it into one single expression:

= {
  "topCreators": string join(
    sublist(
      distinct values(
        flatten(data.events.items.creators.items)[item.role = "writer"]
      ), 1, 5).name, ", "
  )
}

(Try it in the FEEL Playground!)

What’s next?

Congratulations, you made it to the end! You now know more about FEEL than you did just a few minutes ago. If you’re an experienced software developer, it may take a few moments to get comfortable working with FEEL. If you’re like me, after writing a few expressions, something will click and why Camunda uses FEEL within Zeebe will make sense.

This post doesn’t cover everything FEEL can do. So where to next?

  • As always, the Camunda FEEL documentation is a great place to start. The docs cover all the available data types, expressions, and functions.
  • Camunda Academy has a wonderful FEEL video series that provides examples in addition to describing the different features of FEEL.
  • Find yourself having trouble getting your expression to work correctly? Join the Camunda community forum and ask your question there. Don’t forget to search, too!

There are some exciting new features coming to Camunda’s FEEL capabilities in the future, as well. In recent releases, we’ve improved the auto-complete in Camunda Modeler to better help with FEEL syntax. Keep an eye on future release announcements for other exciting new FEEL features.

Happy coding!

The post FEEL for Software Developers appeared first on Camunda.

]]>
From Monolith to Microservices Using Camunda https://camunda.com/blog/2024/07/from-monolith-to-microservices-using-camunda/ Wed, 10 Jul 2024 19:20:58 +0000 https://camunda.com/?p=113634 Overcome many of the challenges of a monolith-to-microservices migration with process orchestration.

The post From Monolith to Microservices Using Camunda appeared first on Camunda.

]]>
As I was writing my previous blog post, “A Developer’s Guide to Migrating an Existing Workflow to Camunda,” one question kept running through my mind: How can Camunda help development teams migrate away from legacy architectures?

Of course, migrating away from legacy systems is something Camunda and its customers have spoken about at length. At CamundaCon 2024 in Berlin, First American spoke about how they’re leveraging Camunda to modernize their business operations and how Camunda helped them migrate away from several legacy systems.

I was thinking about something closer to development teams, not broader business initiatives. For instance, how can development teams leverage Camunda to help manage tasks such as migrating from one cloud to another? Or, how can development teams use Camunda to migrate from a monolith to microservices?

Migrating legacy architectures before Camunda

One of the primary challenges faced by developers when migrating from an existing monolith to microservices—or any other architecture pattern—is business continuity. Dependencies between different services inside your monolith are more easily managed than dependencies between independently deployed services.

Changing from a monolith to microservices changes how those services communicate: while they used to communicate directly within your application, now they need to communicate using event buses or message queues. This means that, when migrating one service to its own deployment, all other services that previously called it also need to have changes deployed.

Other challenges include:

  • Domain complexity: The lines between each service is much less clear than in a microservices architecture, and migration requires deep knowledge of the application’s domains.
  • Data consistency: Ensuring consistency across all the newly developed services can be complex, especially when dealing with distributed transactions across multiple systems.
  • Distributed transactions: In a microservices architecture, transactions are spread across multiple services, requiring more complex rollback processes and the implementation of entirely new patterns, such as the saga pattern.

It’s unlikely that any product would be migrated entirely from a monolith to microservices in a single release. Not only would doing so introduce far too many risks, but the development process would take a very long time, putting new feature development on hold until the migration was complete. Instead, development teams often migrate an application incrementally, extracting individual services until the entire monolith has been migrated.

Let’s start another thought experiment using the loan application demo from the previous blog post, before implementing Camunda.

Consider the very first step that an application takes: validating the loan application form. After a user submits the application form, it needs to be validated to ensure that all the required data has been provided. Form validation is something every development team has experience with. Creating a standalone service that validates the user input is a simple and logical first step in migrating away from a monolith.

Once that service is complete, is it enough to deploy it and move onto the next task? Unfortunately no.

That service needs to do more than just validate the form. It also needs to tell the main application the result of the validation. For this thought experiment, let’s assume the team is using an event bus. That service needs to write an event to the bus, and the main application needs to listen to that bus.

It isn’t as simple as making changes to a single service; parts of the main application need to change also, resulting in two or more deployments for each feature. Is there a better way?

Migrating with a process engine

Continuing the same thought experiment with the loan application demo, what would the migration process look like after Camunda has been implemented?

Camunda is now managing the state of the application, moving each loan through the process from start to finish. No longer does the validate application service need to send an event back to the main application; Camunda handles the transition to the next task.

Of course, integrating with event and message queues may be needed for other applications; for instance, you may need to integrate Kafka into your Camunda process!

Phrased another way, the validate application service now has no dependencies on other parts of the application and can function atomically. Camunda sends the data to the service where it’s validated, and the result is returned to Camunda. There is no need, in this example, to write to an event bus or to configure other services to listen for that event.

Looking at it this way, the BPMN diagram becomes something of a migration planning roadmap.

Individual services can be developed, tested, and deployed independently of the rest. The only change needed for the application to continue functioning is to change the REST endpoint in the BPMN model and redeploy it.

All new processes going forward will call the newly deployed service instead of the monolith, and processing will continue as usual regardless of which services have been extracted.

The power of process orchestration

This is the power of process orchestration with Camunda. Where your services are deployed doesn’t matter; what framework or language your services are written in doesn’t matter; whether they used to be part of a monolith doesn’t matter. Camunda allows you to seamlessly integrate all services into a single, well-defined process.

By starting with process orchestration, many of the challenges faced during a monolith to microservices migration can be easily overcome:

  • Business continuity: The process is already running end to end in Camunda, with your application handling the data and the logic. As each service is migrated away from the monolith, the process continues to function as originally designed.
  • Service decomposition: The BPMN diagram can act as a roadmap of sorts, helping teams identify individual services to migrate
  • Domain complexity: One key benefit of BPMN is the visual documentation it provides. By having your process already modeled and defined, much of the domain complexity is already well understood and documented.
  • Data consistency: Camunda continues to run the process as designed, and ensures that the process reaches the end. In this model, Camunda is responsible for managing the eventual consistency of the data in your process by moving the data through the individual tasks.
  • Distributed transactions: Camunda supports distributed transactions and rollbacks out of the box. For instance, teams can implement the saga pattern using compensation events. With compensation events, your individual services don’t need to track each transaction individually; instead, Camunda executes the rollbacks as defined in the BPMN model, easing the complexity of rollbacks.

If you’re new to Camunda and process orchestration, I recommend heading to Camunda Academy to learn more about BPMN, DMN, process orchestration, and using Camunda.

If you’re ready to experiment with your own processes and applications, sign up for a free account or log in at https://camunda.io/. I also recommend you join our community forum and join other developers as they explore Camunda and its ecosystem.

Happy coding!

The post From Monolith to Microservices Using Camunda appeared first on Camunda.

]]>
A Developer’s Guide to Migrating an Existing Workflow to Camunda https://camunda.com/blog/2024/05/developer-guide-migrating-existing-workflow-camunda/ Fri, 24 May 2024 14:47:30 +0000 https://camunda.com/?p=108230 Wondering how to move a workflow from another application over to Camunda? Here's a real-world example for developers.

The post A Developer’s Guide to Migrating an Existing Workflow to Camunda appeared first on Camunda.

]]>
One of the more common questions I receive from developers at conferences and meetups is: “Can you walk me through how I would move an existing workflow in my application to Camunda?” It can be challenging to answer without knowing how their application works; instead, we talk through high-level concepts and how things like service tasks and Connectors work as integration points. With this blog post, I hope to show a real-world but basic end-to-end migration of an existing workflow.

The demo application and the workflow

I’ve created a very simple application to simulate a loan origination process. You can find it on GitHub here. It uses an in-memory database, so it is only useful for testing and has no external dependencies. The application exposes some API endpoints (more on those later) that would allow a front-end application to check the status of each loan application. Internally, the application uses a simple CQRS implementation to mimic an event-driven architecture.

There is a set of requirements for the workflow within the application. The workflow is triggered when a new loan application is submitted, and must move the loan application through a series of steps:

  1. The application must be validated to ensure it is for a loan supported by the financial institution. If the application is invalid, a rejection notice must be sent.
  2. A credit report needs to be pulled, and a risk assessment is performed.
  3. If the risk assessment passes, then the collateral needs to be validated.
  4. If everything passes the automated checks, it goes to an underwriter for a manual review. This step cannot be automated and requires a person to perform it.
  5. After the underwriter signs off, all the paperwork needs to be signed and the funds can be disbursed.
  6. If any of the checks fail, then a rejection notice is sent.

Of course, this is a pared-down version of what many financial institutions perform for every loan application, but I think it demonstrates a real-world process that is hardcoded into an application and would benefit from a process orchestration platform like Camunda!

Creating the model

There are many different ways to model this process. If you are feeling up for a challenge, try to model your own solution before reading any further! Even better, share your models in the comments!

I chose to experiment with error events for this process and came up with the following solution:

A BPMN process model showing the steps required for a loan application.
(click to expand)

(If you’d like to learn more about BPMN, and using event subprocesses and handling errors, please check out our BPMN documentation as well as our excellent BPMN course in Camunda Academy.)

There is also room to add DMN tables to the model. For instance, one of the process requirements is that any loan applications with a credit score below 500 are automatic rejections, and those with a score above 800 are automatic approvals. This could be implemented in a DMN table rather than hard-coding the logic into the web application!

Connecting the model to the application

Because the demo application exposes a set of RESTful API endpoints, we can quickly connect our model to the application using the REST Connector.

The REST Connector enables the engine to make a RESTful call to our web application. It supports various types of authentication (OAuth/OIDC, bearer token, and basic), and all the major HTTP verbs. We can define a JSON body and any required query string parameters for every request, then we can parse the response for the data needed to move the process forward.

Before we look at how to implement the REST Connector, I think it’s important to consider some best practices when handling data—especially sensitive data—in your processes. Data will always be needed for decisions, gateways, and tasks within your process. It is important to be purposeful about what data is needed.

With our fictional loan origination application, there is the potential for personally identifiable information (PII) to be exposed. Variables are logged and are searchable via Camunda’s APIs, for instance. However, no PII is required for this process to complete. We simply need to track the application ID, the ID of the person applying for the loan (applicant ID), and some anonymous details about the loan itself, such as the loan amount and the type of loan.

One last important note: if you are running the demo application locally (either with Maven or Docker), you must use a local Self-Managed instance of Camunda. The cloud does not know how to connect to “localhost” on your workstation! If you want to test this using the cloud, you will need to deploy the application somewhere that is accessible by Camunda.

Using the REST Connector

After the user has submitted the application, it is stored in a database and given an ID. That ID, along with the ID of the applicant, is sent to the process when it is started. The first step in the process is to validate the application: did the user fill out the form correctly and provide all the required information?

The API exposes an endpoint for validating the application: /api/loanApplications/{id}/validate. The id value is the ID of the loan application in the database, provided in the start event. If the ID of the application is “123”, then the URL would be /api/loanApplications/123/validate. The API returns a JSON object as the response:

{
  “loanApplicationId”: 123,
  “isValid”: false,
  “message”: “Why the validation failed.”
}

Let’s look at how that is implemented with the REST Connector:

Rest-connector-camunda
Rest-connector-camunda-feel

We are using FEEL to build the proper URL to call. This particular endpoint is an HTTP GET request (we will look at a POST request next), so we don’t need to provide a body, and it doesn’t require any special headers or query parameters. The “Result expression” maps parts of the response to new process variables. By explicitly mapping these values, rather than using the whole response object, we avoid accidentally exposing data we did not intend to.

The next step in the process I modeled is an exclusive gateway. If the loan application is invalid, it sends a rejection message and ends the process; otherwise, it continues down the happy path. We can use the isValid variable in the gateway by setting the condition expression to =not(isValid) and =isValid respectively.

Making a POST request

Skipping ahead in the process, after the credit report is retrieved, the next step is to perform a risk assessment. The API has an endpoint for this, of course: /api/riskAssessment. However, it is an HTTP POST request, which means it requires a body.

Post-request

Using the values from previous tasks, we can build the JSON body using FEEL. We can use the “Result expression” like before, to map only the data we want from the response to the process.

What’s next?

First, congratulations! You’ve successfully migrated a hard-coded process to a new business process built with BPMN and executed by Camunda. You can now remove all the logic that connected those services within your application! (For the demo application, it means you can remove all the CQRS commands and handlers!) Each API call becomes an atomic request, and the state of each loan application is managed by the process engine rather than the application.

So, what did we accomplish by moving the process state to the engine? Several things:

  • Your application is simplified and decoupled. Each service no longer needs to know what the next step in the process is, which allows for faster iterations with fewer complications.
  • The process is now well described with visual documentation that doesn’t require translation. In other words, the process model you review with your stakeholders is the same process that is executed by the engine. This reduces the chances of errors during development.
  • If your application is currently a monolith and you would like to migrate to a microservices architecture, it is now far easier to accomplish because your services are already decoupled. Each API endpoint could be its own microservice and the only thing you need to change is the URL the REST Connector is calling!
  • If you implemented business rule tasks with DMN in your process, it is now easier to implement additional rules as the business requirements change. Imagine sitting in a room with your stakeholders, making changes to the rules, and deploying them without writing any additional lines of code!

If you’re new to Camunda and process orchestration, I recommend heading to Camunda Academy to learn more about BPMN, DMN, process orchestration, and using Camunda. If you’re ready to experiment with your own processes and applications, sign up for a free account or log in at https://camunda.io/. I also recommend you join our community forum and join other developers as they explore Camunda and its ecosystem.

Happy coding!

The post A Developer’s Guide to Migrating an Existing Workflow to Camunda appeared first on Camunda.

]]>
Orchestrating Chaos: How Process Orchestration Tames Microservices https://camunda.com/blog/2024/05/how-process-orchestration-tames-microservices/ Mon, 06 May 2024 12:36:00 +0000 https://camunda.com/?p=106520 Use BPMN and process orchestration to automate processes that are run frequently or are long running rather than relying on message queues.

The post Orchestrating Chaos: How Process Orchestration Tames Microservices appeared first on Camunda.

]]>
If you attend a software development conference and ask the attendees whether the application(s) they work on use microservices, the answer is an overwhelming “yes.” Microservices provide a scalable infrastructure that not only improves performance by scaling up when there is a heavy load, but also helps save money by scaling them down when the extra processing power isn’t needed.

Follow that question with “How do you feel about microservices?” and you will get varying responses. Visit any online programming forum and you’ll find a wide range of opinions on microservices. Why? A single paradigm is rarely the answer to anything. There are places where it makes sense; there are places where it doesn’t; and most things fall somewhere in the middle, with people trying to find a balance that suits their company’s needs.

In the end, modern software architectures like microservices are great at optimizing resource usage and cost … but have you ever had to explain to a non-technical stakeholder what happens when a certain action is taken in your application? How does data get from A to B and what happens in between? Try diagramming it …

Complex diagram trying to show data and logic flows using an architecture diagram
Data and logic flows using an architecture diagram

… it looks like you threw spaghetti at the wall! It’s quite hard to diagram data and logic flows using an architecture diagram. You ultimately end up with a list of dependencies and relationships, instead of a meaningful picture of what happens when a user takes an action.

Not just that, but many modern software architectures have problems that are difficult to solve. For instance, what if a particular data flow requires manual approval? Or what if you need to add timers (alerts, stuck or idle processes, notifications) or interrupting messages? What happens when the stakeholders slide into your DMs with a non-trivial change to the existing business logic? These are, of course, not impossible issues to solve, but if you didn’t account for these needs when designing the system, they can often be quite difficult to add later.

What if there was a better way?

Forget services—think processes

As developers, we tend to think in terms of systems: this code gets deployed here; this data gets pushed to this database; this message goes into this queue, which is picked up by those systems. This is how I was taught to think about developing applications and it is still my default mental model: Start with the architecture.

For just a moment, forget that mental model. Close your eyes and take a deep breath. I want to introduce you to another way of thinking—not in terms of services or systems, but in terms of processes.

The dictionary defines a process as “…a collection of related, structured activities or tasks performed by people or equipment.” Put another way, your application is made up of a series of processes—some independent, some not—that have a defined start and end.

Flow chart showing loan application process

Take, for example, this loan application process. Nearly every financial institution in the world follows a similar process. It starts when the applicant submits the loan application. A series of steps is then taken to either approve or deny the loan. This isn’t a linear process: not all steps are taken for every application. There are decisions involved that change what happens next (in this example, whether the loan was approved or not).

As a thought experiment, try imagining what the software design/architecture may look like to support this process. I would guess that everyone who reads this article will imagine a different system. That’s because this process doesn’t care about whether you’re using microservices or a monolith, whether it’s running in the cloud or on prem, or what language it’s written in. Thinking instead about the process, not the services, allows the other implementation details to fit in later.

Let’s continue this thought experiment with a real world scenario.

Designing a real-world process

When you’re driving through town (in the United States at least) and you see a very large truck and trailer driving by, very often that truck had to apply and pay for a permit to lawfully drive on those roads. What permit they apply for and how much they have to pay depends on a lot of factors: the size of the truck and trailer, the route they are taking, how long it will take, whether it is a one way trip. But the process the trucking company follows is roughly the same every time: apply for a permit, get matched with a permit and a fee, pay the fee, and the permit is issued.

Linear diagram showing repeatable process
Note: click to enlarge this and other BPMN diagrams in this post

Continuing that thought experiment, take a moment and imagine what the software underneath looks like. In my head, I see a UI that serves a form to fill out, a service that takes the form data and matches a permit, another service to collect the payment, then a final service that issues the permit. Perhaps those are independent microservices or perhaps those are just different classes inside a larger application, but it feels like a very straightforward implementation to me.

But then…

NEW REQUIREMENT!

You’ve started building the software and have written a lot of the code, when the stakeholders show up with a new requirement: “We need the permit turned into a PDF, and email notifications sent to the user and to us.” Because you are trying to problem solve by thinking in terms of processes, you update your process diagram first:

Diagram for workflow adjusted to add new requirement
Adjusting a workflow to add a new requirement

Easy! This new requirement doesn’t change any of the existing plan, you just need to build two new services: one to generate a PDF, and one to send out emails. Then, when the permit is issued, the system can generate the PDF and send the notifications out. That’s not so bad! You write up the new user stories and get back to writing code…

NEW REQUIREMENT!

In the middle of working on one of the services, the stakeholders arrive with a new requirement: “A user tried to fill in the form, and it doesn’t match a permit, it needs a manual review!” You open your process diagram and update it…

Diagram for workflow adjusted to add a second new requirement
Adjusting a workflow to add a second new requirement

This is more complicated than the previous requirement, but at least the “Match Permit” service you already wrote doesn’t need to change much. Somehow you need to build a way for a manual validation to happen, and for that to route to the payment step…

NEW REQUIREMENT!

Another new requirement: “If they don’t pay in X days, the permit needs to be canceled!” This one sounds a little harder than the last requirement, and affects a service you’ve already started development on! You open the process diagram and stare at it, wondering how you should implement this new requirement… and how do you draw it in this process diagram?

Introducing BPMN

Want to know a secret? That process model you’ve been working on? It’s awfully close to a BPMN model! BPMN, or Business Process Model and Notation, is an open standard maintained by the OMG Group (who also maintain other open standards like UML). BPMN has a lot of advantages over drawing a process diagram with a tool like Visio or LucidCharts—primarily that it is XML under the hood, which means it can be easily read by software and also easily checked into your source control and versioned with your releases. Because BPMN is designed to model processes, it is a natural fit for this method of problem solving.

To better illustrate what BPMN is, let’s take a moment to update our previous process diagram to use BPMN:

Diagram updated using BPMN
Updating process diagram with BPMN

It doesn’t look significantly different. There is an added start and end event to clearly define the start and end; and there are two additional gateways (the diamond shapes) to make sure we bring all the paths together cleanly. The only other difference are those little icons on each box. In BPMN, each of those rectangles is a “Task” and each task has a specific type. BPMN supports many different task types. In this example we have two: a Service Task (the gear icon) and a User Task (the person icon).

However, this post isn’t intended to teach you BPMN (if you want a BPMN tutorial, head here). It’s meant to introduce a different way of thinking about solving problems. For now, it’s enough to know that BPMN is an open standard that allows you define processes in a visual manner, and to give some additional context to each task (for instance, with the task types).

NEW REQUIREMENT!

Remember the last requirement? “If they don’t pay in X days, the permit needs to be canceled!” Let’s not worry about how to implement it yet, but let’s start with adding it to our BPMN model:

BPMN model being updated with a new requirement
Adding a new requirement to a BPMN model

In my opinion, this diagram is still very understandable, even if you know almost nothing about BPMN or code. Some sort of timer event happens, and the permit is then canceled, and the process ends.

I can hear what you’re thinking: “That’s great, dude, but it’s still just a picture…”

An executed BPMN model

… or is it?

Introducing BPMN Engines

This is why BPMN matters: Because it is XML, and because it is an open standard, you can execute a BPMN model with a BPMN engine! There are many BPMN engines out there, but not all support the full BPMN specification. (You can view a list of engines and their support for BPMN here.) Camunda, of course, is a BPMN engine!

I’m sure you have a lot of questions right now—primarily, “That’s a neat animation, but how does an executable workflow diagram help me?” In order to answer that, let’s expand our diagram a little bit:

Expanded diagram connects tasks to services
Connecting the tasks of a diagram to the services of your application

Remember how each task has a type associated with it? We can “connect” each of those tasks to a service in our application. When the process encounters the “Match permit and get fee” task, it will call the “PermitMatch” service, get the response, then move to the next step.

So how does the engine know how to call the PermitMatch service? That’s the best part: it’s entirely up to you! Exactly how you implement it may vary depending on the engine you choose, but if you choose Camunda you have three primary options:

  1. A job worker: This is a piece of code that you write that runs in your environment and performs a single task.
  2. A custom Connector: A Connector is like a reusable job worker, with predefined inputs and outputs.
  3. A RESTful API call: A REST Connector is provided out of the box with Camunda, allowing you to make RESTful calls without writing any additional code.

However, these details don’t matter too greatly yet. Here’s the key point:

The state of a process is managed by the workflow engine, not the application. The data is managed by the application, not the engine.

NEW REQUIREMENT!

Just as you start learning about BPMN and BPMN engines, the stakeholders sneak in another new requirement! “For some permits, they need additional approval before collecting payment.” Great… the payment collection service, which you’ve already completed, now needs more changes!

I know we just spent time talking about BPMN, but take a moment and think about how you might accomplish this in code. The form is submitted by the user and it goes to a service to match a permit. If no permit is matched, it needs to go to a manual review. Otherwise, the process checks to see if additional approvals are needed before proceeding to payment. There is another path, too, when a manual review is needed. There are two different, unique places that the decision to require another approval may be coming from. It’s not impossible, but there isn’t a particularly elegant solution either.

Now let’s consider the BPMN model:

Diagram with room for a manual approval gateway
Adding gateways for manual approval when necessary

Another decision gateway is added if the permit needs an additional approval, and that’s it! You don’t need to make any changes to the core logic inside your application, because the process engine is handling the state. As long as the result is either an automatic permit match or a flag for manual review, the process will work exactly as needed to fulfill the new requirement!

NEW REQUIREMENT!

“Oh, and we should probably send a payment reminder before we cancel the permit.”  Of course the stakeholders come in with another requirement, but this time you’re ready with BPMN and your process engine:

Diagram showing easy updates to process with BPMN
It’s easy to update your process for more requirements with BPMN

Now the “Collect payment” task has two timer events. The one with the solid outline is an interrupting event, which will end the process execution when it triggers. The timer with the dashed outline is a non-interrupting event, so it can trigger multiple times without canceling the larger process.

The best part? Because you already have an email service to send notifications at the end of the process, you can simply call that existing service to send the payment reminder, and you have added the new requirement without writing any additional services or logic. You added the timer event to the process model, the new “Send payment reminder” task consumes your existing email service, and done!

(You may have noticed some other changes to the model. While you were at it, you cleaned up the diagram by adding an error event to the “Match permit” task. Now that task just needs to return an error that it couldn’t match a permit, rather than returning a flag saying it couldn’t. While it’s functional and valid BPMN either way, I personally find this model to be a bit nicer. What do you think?)

Let’s Review

My goal in writing this is to give you another tool in your developer toolbox. Certainly this approach doesn’t work for all problems; process automation with BPMN works best with processes that are run frequently or are long running. That one clean up job that executes every quarter? That still might work best as a cronjob.

When I was learning to be a software developer, I was never taught to think about solving a problem in this way. Every microservice orchestration tutorial I’ve read solved these problems with message queues (or a similar concept). After learning BPMN, I can think of several projects (including one related to truck permitting!) that could have benefited greatly from this approach.

If you are new to BPMN and business process management, you likely have a lot of questions. That’s OK. Camunda has some great resources to help you get started. I’ve linked some of them below. Or you can join us on our forums and see how others are using BPMN to solve their problems. I hope to see you there!

The post Orchestrating Chaos: How Process Orchestration Tames Microservices appeared first on Camunda.

]]>
Camunda Self-Managed for Absolute Beginners, Part 2—Ingress and TLS SSL https://camunda.com/blog/2024/01/camunda-self-managed-absolute-beginners-part-2-ingress-tls-ssl/ https://camunda.com/blog/2024/01/camunda-self-managed-absolute-beginners-part-2-ingress-tls-ssl/#comments Tue, 30 Jan 2024 20:06:32 +0000 https://camunda.com/?p=99941 Continue your journey from absolute beginner to getting an instance of Camunda Self-Managed live in this step-by-step guide, focusing on ingress and TLS SSL.

The post Camunda Self-Managed for Absolute Beginners, Part 2—Ingress and TLS SSL appeared first on Camunda.

]]>
If you haven’t read it yet, of course go read “Camunda Self-Managed for Absolute Beginners.” You wouldn’t start reading a book series by skipping the first book would you? (If you would, I have a lot of questions I’d like to ask you!)

After the first post was published, I received a lot of amazing feedback and questions from Camunda users who were new to containers and Kubernetes. The most common question I was asked was “How do I connect to the services I just installed?!”

You asked, Camunda answers! In this post we will add an “ingress” and secure it with a certificate.

Port forwarding

If you followed the steps in the previous post, you probably noticed that you couldn’t connect to any of the services. Port forwarding was briefly mentioned in the previous post (and ingress controllers mentioned in the discussion thread, if you followed that), but it wasn’t explained in any detail. Let’s remedy that first!

It’s important to think about your cluster as a separate network, even though it’s installed on your local workstation rather than in the cloud. Whether you start a single Docker container, or you build a local Kubernetes cluster, the effect is the same: that containerized service will be running on a virtual network. You need to tell both the cluster and your workstation how they can talk to one another.

There are two ways of doing this with Kubernetes: port forwarding, and using an ingress controller.

Port forwarding, sometimes referred to as “port mapping,” is the most basic solution. Keen eyed users may have noticed the output of the helm install command contains this:

Port-forwarding

If you want to access one of those services, simply copy and paste the command! Let’s use this command for Operate as an example: kubectl port-forward svc/camunda-platform-operate 8081:80. The Operate service is listening on port 80 (the port is configurable in the Helm values.yaml file if you wish to change it). Behind the scenes, kubectl is telling Kubernetes to listen on the first port (“8081”) and send the network traffic to the second port (“80”) inside the cluster.

It’s as simple as that! There is one important thing to remember when using the kubectl port-forward command: the command doesn’t return, which means your terminal will not return to a prompt. If you want to forward multiple ports, you will need to open multiple terminal windows or write a custom script.

But don’t worry, there are better options! Port forwarding is great for testing single ports, or if you need quick access to a single pod to test something. But it’s not a very robust solution when you need to work with multiple ports and services, and it isn’t scalable for a production environment.

Ingress controllers

I think Nginx provides the best short definition of an ingress controller: “An Ingress controller abstracts away the complexity of Kubernetes application traffic routing and provides a bridge between Kubernetes services and external ones.”

In other words, instead of manually configuring all the routes needed for your inbound traffic to get to the right services inside your cluster, the ingress controller handles it automatically. Ingress controllers also act as load balancers, routing traffic evenly across your distributed services. (When working with a local deployment, which these blog posts have focused on so far, the benefit of an ingress controller is in the routing capabilities; the load balancing matters much more with a cloud environment deployment.)

There are several different ingress controllers you can choose for your local deployment. Which one you choose depends on a number of factors, including the environment you are deploying it to. This blog series uses kind, which has existing configuration for three different ingress controllers. We will be using the ingress-nginx package for this example.

If you are getting ready to deploy to the cloud or a different Kubernetes environment, be sure to check their documentation. Many cloud providers offer their own ingress controllers that are better suited and easier to configure for those environments.

kind requires a small amount of additional configuration to make the ingress work. When creating your cluster you need to provide a configuration file. If you have already created a cluster from the previous blog post, you will need to delete it using the kind delete cluster --name camunda-platform command.

First, create a new file name kind.config with the following contents:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
  - containerPort: 443
    hostPort: 443
  - containerPort: 26500
    hostPort: 26500

Then, recreate the cluster using kind create cluster --name camunda-local --config kind.config, then deploy the Helm charts again with the same helm install camunda-platform camunda/camunda-platform -f camunda-values.yaml from the previous blog post.

Finally, run the following command to install the ingress controller: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml. (For more information about using kind with ingress controllers, refer to their documentation!)

Now that we have an ingress controller we need to configure Camunda’s services to work with the ingress. (More specifically, we need to configure the pods the services are running in to work with the ingress.)

Combined or separated ingress?

There are two ways to configure the ingress: combined or separated.

A combined ingress configuration uses the same domain for all the services, and routes based on the path. For instance, Identity would be available at https://domain.com/identity, Operate would be available at https://domain.com/operate, and so on. When using a separated ingress, each service is available on its own domain. For instance, Identity would be available at http://identity.domain.com/, Operate would be available at https://operate.domain.com/, and so on.

For this demo we will use the combined configuration. However, there is one quirk with this particular setup to be aware of! Zeebe Gateway uses gRPC, which uses HTTP/2. This means that Zeebe Gateway cannot be on a path. (Explaining computer networking is far outside the scope of this post, but the reason is because the URL https://domain.com/zeebe-gateway/ uses HTTP and not HTTP/2.)

Note: If you’re interested in using a separated setup, you can review our guide in the docs!

With that in mind, let’s look at the changes new values.yaml file:

global:
  ingress:
    enabled: true
    className: nginx
    host: "camunda.local"

operate:
  contextPath: "/operate"

tasklist:
  contextPath: "/tasklist"

zeebe-gateway:
  ingress:
    enabled: true
    className: nginx
    host: "zeebe.camunda.local"

Note: These are only the changes from the previous blog post, not the complete file! The complete file will be included at the bottom of this post.

The changes are pretty straightforward. Globally, we enable the ingress and give it a className of “nginx” because we are using the ingress-nginx controller. (If you are using a different controller, the className may be different, check the controllers documentation!) We also define the host: this is the domain that all the paths will use. For this example, I am using “camunda.local”, but you can use any domain name that doesn’t conflict with any other domain name. For Operate and Tasklist, we define what the path is. Last, for Zeebe Gateway, we define separate ingress using the subdomain “zeebe.camunda.local”.

The domain “camunda.local” doesn’t exist, which means that your workstation doesn’t know how to connect to it. You will need to add two entries to your workstation’s hosts file that resolve “camunda.local” and “zeebe.camunda.local” (or whatever domain you chose) to the IP address “127.0.0.1”. How you do this depends on your operating system, and you can follow this guide to edit your hosts file.

Configuring TLS/SSL

The last step to get everything working is to generate a certificate and secure the ingress with it. While Camunda does not require TLS to work, Nginx does require a certificate for HTTP/2. There are many ways to generate a certificate, but for simplicity we will use a self-signed certificate. (Learn more about self-signed vs CA-signed certificates.)

Note: Generating a self-signed certificate requires OpenSSL; if you don’t have OpenSSL, refer to their documentation for how to install it.

To generate a certificate, execute the following command. You will be asked a series of questions to configure the certificate: for this example, the values you enter do not matter, but refer to the OpenSSL documentation for more information on these values.

openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -sha256 -days 365 --nodes -addext 'subjectAltName=DNS:camunda.local'

I won’t cover all the parameters here, but there are four important values:

  • The -days parameter sets how long the certificate is valid for; in this example, it will expire in 1 year.
  • The -keyout parameter configures the file name of the private key file that the certificate is signed with. You will need this key to install the certificate.
  • The -out parameter configures the file name of the certificate itself.
  • The -addext parameter configures the domain that this certificate is valid for. Because I configured our ingress to use “camunda.local”, that is the domain used for this certificate.

However, we had to configure a separate ingress for Zeebe Gateway, which needs its own certificate. The command is nearly the same: just change the file names and the domain!

openssl req -x509 -newkey rsa:4096 -keyout key-zeebe.pem -out cert-zeebe.pem -sha256 -days 365 --nodes -addext 'subjectAltName=DNS:zeebe.camunda.local'

Next, we need to add the certificates to our Kubernetes clusters as Secrets. Secrets are how Kuberenetes saves sensitive information that shouldn’t be available in plaintext files like the values.yaml file. Instead, the values.yaml file references the secret name and Kuberenetes handles the rest. We will need to create two secrets, one for each certificate:

kubectl create secret tls tls-secret --cert=cert.pem --key=key.pem
kubectl create secret tls tls-secret-zeebe --cert=cert-zeebe.pem --key=key-zeebe.pem

Finally, we need to configure TLS in our values.yaml file, using the secret names we just created. The complete file, with the combined ingress and TLS configured, looks like this:

global:
  ingress:
    enabled: true
    className: nginx
    host: "camunda.local"
    tls:
      enabled: true
      secretName: "tls-secret"
  identity:
    auth:
      # Disable Identity authentication for local development
      # it will fall back to basic-auth: demo/demo as default user
      enabled: false

# Disable Identity for local development
identity:
  enabled: false

# Disable Optimize
optimize:
  enabled: false

operate:
  contextPath: "/operate"

tasklist:
  contextPath: "/tasklist"

# Reduce resource usage for Zeebe and Zeebe-Gateway
zeebe:
  clusterSize: 1
  partitionCount: 1
  replicationFactor: 1
  pvcSize: 10Gi
  resources: {}
  initResources: {}

zeebe-gateway:
  replicas: 1
  ingress:
    enabled: true
    className: nginx
    host: "zeebe.camunda.local"
    tls:
      enabled: true
      secretName: "tls-secret-zeebe"

# Enable Outbound Connectors only
connectors:
  enabled: true
  inbound:
    mode: "disabled"

# Configure Elasticsearch to make it running for local development
elasticsearch:
  resources: {}
  initResources: {}
  replicas: 1
  minimumMasterNodes: 1
  # Allow no backup for single node setups
  clusterHealthCheckParams: "wait_for_status=yellow&timeout=1s"

  # Request smaller persistent volumes.
  volumeClaimTemplate:
    accessModes: [ "ReadWriteOnce" ]
    storageClassName: "standard"
    resources:
      requests:
        storage: 15Gi

Install and test

That’s all of the configuration needed. Now you need to upgrade your Helm deployment with the newest configuration values. (If you are starting from scratch, just use the helm install command from the previous post!) To upgrade your Helm deployment, run the following command:

helm upgrade --install camunda-platform camunda/camunda-platform -f kind-combined-ingress.yaml

That’s it! Now it’s time to test! The first thing you can do is open https://camunda.local/operate or https://camunda.local/tasklist to make sure those applications open. Because we used a self-signed certificate, your browser may give a warning about not being able to verify the certificate. That is expected, you can click through the warning to get to the site. If you use a CA-signed certificate you will not see a warning.

The last thing to test is the gRPC connection to Zeebe Gateway. There are different ways to test this, but for this post I am going to use the zbctl command line utility. Follow the instructions in the documentation to install it, then run the following command:

zbctl status --certPath cert-zeebe.pem --address zeebe.camunda.local:443

We are providing the self-signed certificate to zbctl because without it, zbctl wouldn’t be able to validate the certificate and would fail with a warning similar to what you saw in your browser. We are also providing the address and port that we configured for the ingress, and the ingress controller is automatically routing that port to the gRPC port 26500 internally. If everything is set up correctly, you should see something similar to this:

Grpc-to-zeebe-gateway

What’s Next?

Congratulations! ???? Not only do you have Camunda Self-Managed running locally, it is now secured behind a certificate with a working ingress!

Here are some ideas for what to challenge yourself with next:

  • Add Identity and Optimize, configure the ingress, and test the authentication with zbctl
  • Enable Inbound Connectors
  • Deploy to a cloud provider such as AWS, GCP, OpenShift, or Azure

Challenge yourself! Leave a comment about this blog on our forum and let us know what you’d like to see next in the series! And as always, if you encounter any problems, let us know on the forums.

The post Camunda Self-Managed for Absolute Beginners, Part 2—Ingress and TLS SSL appeared first on Camunda.

]]>
https://camunda.com/blog/2024/01/camunda-self-managed-absolute-beginners-part-2-ingress-tls-ssl/feed/ 7