An Advanced Ad-Hoc Sub-Process Tutorial
https://camunda.com/blog/2025/04/an-advanced-ad-hoc-sub-process-tutorial/

Learn about the new ad-hoc sub-process capabilities and how you can take advantage of them to create dynamic process flows.

Ad-hoc sub-processes are a new feature in Camunda 8.7 that allow you to define what task or tasks are to be performed during the execution of a process instance. Who or what decides which of the tasks are to be performed could be a person, rule, microservice, or artificial intelligence.

In this example, you’ll decide what those tasks are, and later on you’ll be able to add more tasks as you work through the process. We’ll use decision model and notation (DMN) rules along with Friendly Enough Expression Language (FEEL) expressions to carry out the logic. Let’s get started!

Table of contents

SaaS or C8Run?

Download and install Camunda 8 Run

Download and install Camunda Desktop Modeler

Create a process using an ad-hoc sub-process

Add logic for sequential or parallel tasks

Create a form to add more tasks and to include a breadcrumb trail for visibility

Run the process!

You’ve built your ad-hoc sub-process!

SaaS or C8Run?

You can choose either Camunda SaaS or Self-Managed. Camunda provides a free 30-day SaaS trial; if you'd rather go Self-Managed, I recommend using Camunda 8 Run to simplify standing up a local environment on your computer.

The next sections provide links to assist you in installing Camunda 8 Run and Desktop Modeler. If you’ve already installed Camunda or are using SaaS, you can skip to Create a process using an ad-hoc sub-process.

If using SaaS, be sure to create an 8.7 cluster first.

Download and install Camunda 8 Run

For detailed instructions on how to download and install Camunda 8.7 Run, refer to our documentation. Once you have it installed and running, continue on your journey right back here!

Download and install Camunda Desktop Modeler

Download and install Desktop Modeler. You may need to open the Alternative downloads dropdown to find your desired installation.

Select the appropriate operating system and follow the instructions to start Modeler up. We’ll use Desktop Modeler to create and deploy applications to Camunda 8 Run a little bit later.

Create a process using an ad-hoc sub-process

Start by creating a process that will let you select from a number of tasks to be executed in the ad-hoc sub-process.

Open Modeler and create a new process diagram. This post uses SaaS and Web Modeler, but the same principles apply to Desktop Modeler. Be sure to set the version to 8.7 if it isn't already, as ad-hoc sub-processes are available in Camunda 8.7 and later versions.

ad-hoc sub-process 1

Next, add an ad-hoc sub-process after the start event: add a task, click the Change element icon, and change the element to an ad-hoc sub-process.

ad-hoc sub-process 2

Your screen should look something like this. Notice the tilde (~) denoting the ad-hoc sub-process:

ad-hoc sub-process 3

Now add four User Tasks to the subprocess. We’ll label them Task A, Task B, Task C, and Task D. Be sure to update the ID for each of the tasks to Task_A, Task_B, Task_C, and Task_D. We’ll use these IDs later to determine which of the tasks to execute.

You can ignore the warnings indicating forms should be associated with User Tasks.

Add an end event after the ad-hoc sub-process as well.

ad-hoc sub-process 4

Add a collection (otherwise known as an array) to the ad-hoc sub-process that determines what task or tasks should be completed within it.

Put focus on the ad-hoc sub-process and add the variable activeElements to the Active elements collection property in the Properties panel of the ad-hoc sub-process. You’ll need to pass in this collection from the start of the process.

ad-hoc sub-process 5

Now you need to update the start event by giving it a name and adding a form to it. Put focus on the start event and enter a name. It can be anything, but it's always a best practice to name events. This post uses the name Select tasks.

Click the link icon above the start event and click Create new form.

ad-hoc sub-process 6

The form should take the name of the start event: Select tasks.

Now drag and drop a Tag list form element onto the Form Definition panel.

ad-hoc sub-process 7

The Tag list form element allows users to select from an array of items and pass it to Camunda as an array.
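For example, if a user selects Task A and Task C (an illustrative selection, not one from the screenshots in this post), the form submits the selected values as a list under the element's key, which we'll set to activeElements in the next step:

["Task_A", "Task_C"]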

Next, update the Field label in the Tag list element to Select tasks and the Key to activeElements.

ad-hoc sub-process 8

By default, a Tag list uses Static options with one default option, and we’ll use that in this example. Add three more static options and rename the Label and Value of each to Task A, Task_A; Task B, Task_B; Task C, Task_C; and Task D, Task_D.

ad-hoc sub-process 9

Let’s run the process! Click Deploy and Run. For SaaS, be sure to switch back to the ad-hoc sub-process diagram to deploy and run it.

You can also shortcut this by simply running the process, as running a process also deploys it.

ad-hoc sub-process 10

You'll receive a prompt asking which Camunda cluster to deploy to; if you only have one cluster, there will be only one choice. If you're using Desktop Modeler, deploy and run the process to Camunda 8 Run.

Upon running a process instance, you should see the screen we created for the start event. Select one or more tasks and submit the form. This post selects Task A, Task B, and Task C. Click Run to start the process.

ad-hoc sub-process 11

A pop-up gives you a link to Camunda’s administrative console, Operate. If you happen to miss the pop-up, you can always click the grid icon in the upper left corner in Web Modeler. Select Operate in the menu.

ad-hoc sub-process 12
ad-hoc sub-process 13

Check out the documentation to see how to get to Operate in Camunda 8 Run.

Once in Operate, you should see your process definition. You can navigate to the process instance by clicking through the hyperlinks. If you caught the link in Web Modeler, you should be brought to the process instance directly. You should see something like this:

ad-hoc sub-process 14

As you can see, the process was started, the ad-hoc sub-process was invoked, and Task A, Task B, and Task C are all active. This was accomplished by passing in the activeElements variable set by the Tag list element in the Start form.

You can switch to Tasklist to complete the tasks. The ad-hoc sub-process will not complete until all three tasks are completed. Navigate to a task by clicking on it in the process diagram panel and clicking Open Tasklist in the dialog box.

ad-hoc sub-process 15

You should see all three tasks in Tasklist. Complete them by selecting each one, then click Assign to me and then click Complete Task.

ad-hoc sub-process 16

Once all three tasks are complete, you can return to Operate and confirm the process has completed.

ad-hoc sub-process 17

Now that you understand the basics of ad-hoc sub-processes, let’s add more advanced behavior:

  • What if you wanted to be able to decide whether those tasks are to be completed in parallel or in sequence?
  • What if you wanted to add more tasks to the process as you execute them?
  • What if you wanted a breadcrumb trail of the tasks that have been completed or will be completed?

In the next section, we’ll add rules and expressions to handle these scenarios. If you get turned around in the pursuit of building this example, we’ll provide solutions to help out.

Add logic for sequential or parallel tasks

Now we’ll add logic to allow the person starting the process to decide whether to run the selected tasks in sequence or in parallel. We’ll add a radio button group, an index variable, FEEL expressions, and rules to handle this.

Go back to the Select tasks form in Web Modeler. Add a Radio group element to the form.

ad-hoc sub-process 18

Update the Radio group element, selecting a Label of Sequential or Parallel and Static options of Sequential with a value of sequential and Parallel with a value of parallel. Update the Key to routingChoice and set the Default value to Sequential. Your screen should look something like this:

ad-hoc sub-process 19

Now you need to add some outputs to the Select tasks start event. Go back to the ad-hoc sub-process diagram and put focus on the Select tasks start event. Add the following Outputs, as shown below:

  • activeElements
  • index
  • tasksToExecute
ad-hoc sub-process 20

Next, update each with a FEEL expression. For activeElements, add the following expression:

{ "initialList": [],
  "appendedList": if routingChoice = "sequential" then append(initialList, tasksToExecute[1]) else tasksToExecute
}.appendedList

If you recall, activeElements is the collection of the task or tasks that are to be executed in the ad-hoc sub-process. Before, you simply passed the entire list, but now that you can choose between sequential or parallel behavior, you need to update the logic to account for that choice. If the choice is sequential, add the next task and that task only to activeElements.

If you're not familiar with FEEL, let's explain what you're seeing here. This FEEL expression starts with the creation of a list called initialList. We then create another variable called appendedList: if routingChoice is sequential, we append the first task in tasksToExecute to initialList; if it is parallel, we simply use the entire tasksToExecute list. We then pass back the contents of appendedList, as denoted by .appendedList on the last line, and use it to populate activeElements.
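To make this concrete, here is the same expression with hypothetical values substituted in (assume the user selected Task A and Task B):

{ "initialList": [],
  "appendedList": if "sequential" = "sequential" then append(initialList, ["Task_A", "Task_B"][1]) else ["Task_A", "Task_B"]
}.appendedList
// returns ["Task_A"]; with routingChoice = "parallel" it would return ["Task_A", "Task_B"] instead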

ad-hoc sub-process 21

The index variable will be used to track where you are in the process. Set it to 1:

ad-hoc sub-process 22

In tasksToExecute, you’ll hold all of the tasks, whether in sequence or in parallel, in a list which you can use to display where you are in a breadcrumb trail. Use the following expression:

{ "initialList": [],
  "appendedList": if routingChoice = "parallel" then insert before(initialList, 1, tasksToExecute) else tasksToExecute
}.appendedList

In a similar fashion to activeElements, this expression creates a list variable called initialList. If routingChoice is parallel, it inserts the selected tasks into it as a single nested list; if routingChoice is sequential, it simply uses the entire tasksToExecute list as-is.
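Again with hypothetical selections, the net effect is that a parallel choice becomes a single nested entry while a sequential choice stays flat:

// routingChoice = "parallel", tasksToExecute = ["Task_C", "Task_D"]
insert before([], 1, ["Task_C", "Task_D"])   // returns [["Task_C", "Task_D"]], one entry holding both tasks

// routingChoice = "sequential", tasksToExecute = ["Task_A", "Task_B"]
// the expression simply returns ["Task_A", "Task_B"] unchanged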

ad-hoc sub-process 23

Your screen should look something like this:

ad-hoc sub-process 24

Now you need to increase the index after completion of the ad-hoc sub-process and add some logic to determine if you’re done. In the process diagram, put focus on the ad-hoc sub-process and add an Output called index. Then add an expression of index + 1. Your screen should look something like this:

ad-hoc sub-process 25

Add two more Outputs to the ad-hoc sub-process: interjectYesNo with a value of no and interjectTasks with a value of null. We'll use these variables later in a form inside the subprocess; these output mappings reset them to their default values at the end of each sub-process iteration:

ad-hoc sub-process 26

Next, we’ll add a business rule task and a gateway to the process. Drag and drop a generic task from the palette on the left and change it to a Business rule task. Then drag and drop an Exclusive gateway from the palette after the Business rule task. You’ll probably need to move the End event to accommodate these items.

Your screen should look like this (you can see the palette on the left):

ad-hoc sub-process 27

Let’s create a rule set. Put focus on the Business rule task and click the link icon in the context pad that appears.

ad-hoc sub-process 28

In the dialog box that appears, click Create DMN diagram.

ad-hoc sub-process 29

In the decision requirements diagram (DRD) that appears, set the Diagram and Decision names to Set next task.

ad-hoc sub-process 30

The names aren’t critical, but they should be descriptive.

Let’s write some rules! Click the blue list icon in the upper left corner of the Set next task decision table to open the DMN editor.

In the DMN editor, you’ll see a split-screen view. On the left is the DRD diagram with the Set next task decision table. On the right is the DMN editor where you can add and edit rules.

ad-hoc sub-process 31

First things first, update the Hit policy to First to keep things simple. With this hit policy, the decision table evaluates rules in order and returns the output of the first rule that matches. Check out the documentation for more information regarding hit policies.

ad-hoc sub-process 32

Let’s add some rules. In the DMN editor, you can add rule rows by clicking on the blue plus icon in the lower left. Add two rows to the decision table.

ad-hoc sub-process 33

Next, double click Input to open the expression editor. Your screen should look something like this:

ad-hoc sub-process 34

In this screen, enter the following expression: tasksToExecute[index]. Select Any for the Type. Your screen should look like this:

ad-hoc sub-process 35

Just to recap, you’ve incremented the index by one. Here you retrieve the next task or tasks, and now you’ll write rules to determine what to do based on what is retrieved.
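As we build the rules below, it helps to picture what tasksToExecute[index] can return. Suppose (purely for illustration) that later in the process the list holds a sequential task followed by a pair of parallel tasks:

// tasksToExecute = ["Task_A", ["Task_C", "Task_D"]], index = 2
["Task_A", ["Task_C", "Task_D"]][2]          // returns ["Task_C", "Task_D"]
count(["Task_A", ["Task_C", "Task_D"]][2])   // returns 2, so the "parallel" rule row below will match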

In the first row input, enter the following FEEL expression: count(tasksToExecute[index]) > 1.

This checks whether the entry in tasksToExecute at the new index is itself a list containing more than one task, which indicates parallel tasks. That won't be the case on the first pass, but it will be once parallel tasks are interjected later on. Next, double-click Output to open the expression editor.

ad-hoc sub-process 36

For Output name, enter activeElements, and for the Type, enter Any.

ad-hoc sub-process 37

In the first rule row output, enter the expression tasksToExecute[index].

If the count is greater than one, this means that there are parallel tasks to be executed next. All that’s needed is to pass on these tasks. The expression above does just that. You may also want to put in an annotation to remind yourself of the logic.

For example, you can enter Next set of tasks are parallel for the annotation.

Your screen should look like this:

ad-hoc sub-process 38

Next, add logic to the second row. Leave the input of the second row as the dash (-), which acts as an "otherwise" and matches anything. Enter the following expression for the output of the second row:

{ "initialArray":[],  "appendedList": append (initialArray, tasksToExecute[index]) }.appendedList

What this does is create an empty list, add the next single task to be executed to the empty list, and then populate activeElements. You may want to add an annotation here as well: Next task is sequential.
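For instance (hypothetical values again), if tasksToExecute is ["Task_A", "Task_B"] and the incremented index is 2, this output row evaluates to a single-element list:

{ "initialArray": [],
  "appendedList": append(initialArray, ["Task_A", "Task_B"][2])
}.appendedList
// returns ["Task_B"], so only Task B becomes active in the next iteration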

Your screen should look like this:

ad-hoc sub-process 39

Now you need to add logic to the gateway to either end the process or to loop back to the ad-hoc sub-process. Go back to the ad-hoc sub-process in your project.

You might notice this in your process:

ad-hoc sub-process 40

Add a Result variable of activeElements and add a name of Set next task. Your screen should look like this:

ad-hoc sub-process 41

Add a name to the Exclusive gateway. Let’s use All tasks completed? Also, add the name Yes on the sequence flow from the gateway to the end event. Your screen should look like this:

ad-hoc sub-process 42

Change that sequence flow to a Default flow. Put focus on the sequence flow, click the Change element icon, and select Default flow.

ad-hoc sub-process 43

Notice the difference in the sequence flow now?

ad-hoc sub-process 44

Next, add a sequence flow from the All tasks completed? gateway back to the ad-hoc sub-process. Put focus on the gateway and click the arrow icon in the context pad.

ad-hoc sub-process 45

Draw the sequence flow back to the ad-hoc sub-process. You may need to adjust the sequence path for better clarity in the diagram.

ad-hoc sub-process 46

Add the name No to the sequence flow. Add the following Condition expression: activeElements[1] != null.
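To see why this condition works, here is a sketch (with hypothetical values) of what happens once every task has been completed: the index points past the end of tasksToExecute, the "otherwise" rule row appends a null, and the gateway falls through to the default flow.

// tasksToExecute = ["Task_A", "Task_B"], index = 3 (past the end of the list)
["Task_A", "Task_B"][3]               // returns null
append([], ["Task_A", "Task_B"][3])   // the decision output becomes [null]
[null][1] != null                     // returns false, so the default flow to the end event is taken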

Your screen should look like this:

ad-hoc sub-process 47

Before running this process again, you need to deploy the Set next task rule. Switch over to the Set next task DMN and click Deploy.

ad-hoc sub-process 48

One update is needed in the starting form. Open the Select tasks form and go to the Select tasks form element. Change the Key from activeElements to tasksToExecute.

ad-hoc sub-process 49

If you recall, the outputs you defined in the Start event will add activeElements.

Go back to the ad-hoc sub-process diagram and click Run. This time, select Task A and Task B and leave the routing choice set to Sequential. Click Run.

ad-hoc sub-process 50

In your Tasklist, you should only see Task A. Claim and complete the task. Wait for a moment, and you should then see Task B in Tasklist. Claim and complete the task.

Now, if you go to Operate and view the completed process instance, it should look something like this:

ad-hoc sub-process 51

Start another ad-hoc sub-process but this time select a number of tasks and choose Parallel. Did you see the tasks execute in parallel? You should have!

In the next section, you’ll add a form to the tasks in the ad-hoc sub-process to allow users to add more parallel and sequential tasks during process execution. You’ll also add a breadcrumb trail to the form to provide users visibility into the tasks that have been completed and tasks that are yet to be completed.

Create a form to add more tasks and to include a breadcrumb trail for visibility

Go back to Web Modeler and make a duplicate of the start form. To do this, click the three-dot icon to the right of the form entry and click Duplicate.

ad-hoc sub-process 52

While you could use the same form for both the start of the process and task completion, it’ll be easier to make changes without being concerned about breaking other things in the short term. Name this duplicate Task completion.

ad-hoc sub-process 53

Click the Select tasks form element and change the Key to interjectTasks.

ad-hoc sub-process 54

We’ll add logic later to add to the tasksToExecute variable.

Next, add a condition to the form elements to show or hide them based on a variable. You'll add this variable, based on a radio button group, soon. In the Select tasks form element, open the Condition property and enter the expression interjectYesNo = "no".

Your screen should look something like this:

ad-hoc sub-process 55

Repeat the same for the Sequential or parallel form element:

ad-hoc sub-process 56

Rather than setting the condition on each element individually, you could just as easily put these elements into a container form element and set the condition property on the container instead.

Next, add a Radio group to the form, above the Select tasks element. Set Field label to Interject any tasks?, Key to interjectYesNo, Static options to Yes and No with values of yes and no. Set Default value to No. Your screen should look like this:

ad-hoc sub-process 57

If you’ve done everything correctly, you should notice that the fields Select tasks and Sequential or parallel do not appear in the Form Preview pane. Given that No is selected in Interject any tasks?, this is the correct behavior. You should see both the Select tasks and Sequential or parallel fields if you select Yes in the Interject any tasks? radio group in Form Preview.

Next, you’ll add HTML to show a breadcrumb trail of tasks at the top of the form. Drag and drop an HTML view form element to the top of the form.

ad-hoc sub-process 58

Copy and paste the following into the Content property of the HTML view:

<div>
<style>
  .breadcrumb li {
    display: inline; /* Inline for horizontal list */
    margin-right: 5px;
  }

  .breadcrumb li:not(:last-child)::after {
    content: " > "; /* Insert " > " after all items except the last */
    padding-left: 5px;
  }

  .breadcrumb li:nth-child({{currentTask}}){
    font-weight: bold; /* Bold the current task */
    color: green;
  }
</style>
<ul class="breadcrumb">
    {{#loop breadcrumbTrail}}
      <li>{{this}}</li>
    {{/loop}}
</ul>
</div>

Essentially this creates a breadcrumb trail using an HTML unordered list along with some CSS styling. You’ll need to provide two inputs, currentTask and breadcrumbTrail, which we’ll define next.

Your screen should look something like this:

ad-hoc sub-process 59

Let’s test the HTML view component. Copy and paste this into the Form Input pane:

{"breadcrumbTrail":["Task_A","Task_B","Task_C & Task_D"], "currentTask":2}

Your screen should look something like this (note that Task B is highlighted):

ad-hoc sub-process 60

Feel free to experiment with the CSS.

Go back to the ad-hoc sub-process diagram. You need to add inputs to the ad-hoc sub-process to feed this view. Be sure to put focus on the ad-hoc sub-process. Add an input called currentTask and set the value to index.

ad-hoc sub-process 61

Next, add an input called breadcrumbTrail and enter the following expression:

{  
  "build": [],
  parallelTasksFunction: function(tasks) string join(tasks, " & ") ,
  checkTaskFunction: function(task) if count(task) > 1 then parallelTasksFunction(task) else task,   
  "breadcrumbTrail": for task in tasksToExecute return concatenate (build, checkTaskFunction(task)),
  "breadcrumbTrail": flatten(breadcrumbTrail)
}.breadcrumbTrail

Essentially, this expression takes the tasksToExecute variable and turns it into a flat list that the HTML view can display. It creates an empty list, build, and then defines a couple of functions:

  • parallelTasksFunction, which joins a set of parallel tasks into a single string
  • checkTaskFunction, which checks whether a list item is itself a list of parallel tasks

If the list item is a list, checkTaskFunction calls parallelTasksFunction; otherwise it simply returns the task. The for loop concatenates build with the result for each entry in tasksToExecute, and the resulting list is flattened and returned for use by the HTML view to show the breadcrumb trail.
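As a concrete (and again hypothetical) illustration, a mix of sequential and parallel entries produces a flat, display-ready list:

// tasksToExecute = ["Task_A", ["Task_C", "Task_D"], "Task_B"]
// breadcrumbTrail = ["Task_A", "Task_C & Task_D", "Task_B"]
string join(["Task_C", "Task_D"], " & ")   // returns "Task_C & Task_D", the joined parallel entry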

Your screen should look something like this:

ad-hoc sub-process 62

Next, link the four tasks in the ad-hoc sub-process to the Task Completion form.

ad-hoc sub-process 63

One last thing you need to do is add a rule set in the ad-hoc sub-process. This will add tasks to the tasksToExecute variable if users opt to add tasks as they complete them.

Add a Business rule task to the ad-hoc sub-process, add an exclusive gateway join, then add sequence flows from the tasks to the exclusive gateway join. Finally, add a sequence flow from the exclusive gateway join to the business rule task.

It might be easier to just view the next screenshot:

ad-hoc sub-process 64

Every time a task completes, it will also invoke the rule that you’re about to author.

Click the Business rule task and give it the name Update list of tasks. Click the link icon in the context pad, then click Create DMN diagram.

ad-hoc sub-process 65

You should see the DRD screen pop up. Click the blue list icon in the upper left corner of the Update list of tasks decision table.

ad-hoc sub-process 66

In the DMN editor, update the Hit Policy to First. Double-click Input and enter the following expression: interjectYesNo.

Optionally you can enter a label for the input, but we’ll leave it blank for now.

ad-hoc sub-process 67

Add another input to the table by clicking the plus sign button next to interjectYesNo.

ad-hoc sub-process 68

Once again double-click the second Input to open the expression editor and enter the following expression: routingChoice.

Double click Output to open the expression editor and enter the following: tasksToExecute.

ad-hoc sub-process 69

Just to recap—you’ll use the variables interjectYesNo and routingChoice from the form to determine what to do with tasksToExecute.

Let’s add the rules. Here is the matrix of rules if you don’t want to enter them manually:

interjectYesNo | routingChoice | tasksToExecute (output)
"no"           | -             | tasksToExecute
"yes"          | "sequential"  | concatenate(tasksToExecute, interjectTasks)
"yes"          | "parallel"    | if count(interjectTasks) > 1 then append(tasksToExecute, interjectTasks) else concatenate(tasksToExecute, interjectTasks)

Your screen should look something like this:

ad-hoc sub-process 70

Note the differences between concatenate and append in FEEL in this context. concatenate adds the interjected tasks as individual elements to the tasksToExecute list. append, whose second argument accepts any object, adds the entire object as a single element; in this case that's a list that needs to be added in its entirety to tasksToExecute. It's a subtle but important distinction.
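A quick FEEL comparison makes the distinction visible (the task lists are illustrative):

concatenate(["Task_A", "Task_B"], ["Task_C", "Task_D"])
// returns ["Task_A", "Task_B", "Task_C", "Task_D"] - the new tasks become individual, sequential entries

append(["Task_A", "Task_B"], ["Task_C", "Task_D"])
// returns ["Task_A", "Task_B", ["Task_C", "Task_D"]] - the new tasks stay together as one parallel entry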

You’ll need an additional check of the count of interjectTasks in row 3 of the Output, in the event the user selects Parallel but only selects one task. In that case, it’s treated like a sequential addition.

Don’t forget to click Deploy as the rule will not be automatically deployed to the server upon the execution of the process.

ad-hoc sub-process 71

Go back to the ad-hoc sub-process diagram and, on the Update list of tasks business rule task, add the Result variable tasksToExecute.

ad-hoc sub-process 72

Run the process!

The moment of truth has arrived! Click Run and be sure to select the cluster you've created for running the process. You should be presented with the start form. Select Task A and Task B, leave the routing choice at the default of Sequential, and click Run.

ad-hoc sub-process 73

Check Operate, and your process instance should look something like this:

ad-hoc sub-process 74

Now check Tasklist and open Task A. It should look something like this:

ad-hoc sub-process 75

Click Assign to me to assign yourself the task. Select Yes to interject tasks. Next, select Task C and Task D and Parallel.

ad-hoc sub-process 76

Complete the task. You should see Task B appear in Tasklist. Select it and notice how Task C and Task D have been added in parallel to be executed after Task B.

Also note the current task highlighted in green. Assign yourself the task and complete it.

ad-hoc sub-process 77

You should now see Task C and Task D in Tasklist.

ad-hoc sub-process 78

Assign yourself Task C, interject Task A sequentially, and complete the task. You may need to clear out previous selections.

ad-hoc sub-process 79

Complete Task D without adding any more tasks. You’ll notice that Task A has not been picked up yet in the breadcrumb trail.

ad-hoc sub-process 80

Task A should appear in Tasklist:

ad-hoc sub-process 81

Notice the breadcrumb trail updates. Assign it to yourself and complete the task. Check Operate to ensure that the process has been completed.

ad-hoc sub-process 82

You can view the complete execution of the process in Instance History in the lower left pane.

You’ve built your ad-hoc sub-process!

Congratulations on completing the build and taking advantage of the power of ad-hoc sub-processes! Keep in mind that you can replace yourself in deciding which tasks to add, if any, by using rules, microservices, or even artificial intelligence.

Want to start working with AI agents in your ad-hoc sub-processes right now? Check out this guide for how to build an AI agent with Camunda.

Stay tuned for even more on how to make the most of this exciting new capability.

Creating and Testing Custom Exporters Using Camunda 8 Run
https://camunda.com/blog/2025/04/creating-testing-custom-exporters-camunda-8-run/

Learn how to create custom exporters and how to test them quickly using Camunda 8 Run.

If you're familiar with Camunda 8, you'll know that it includes exporters to Elasticsearch and OpenSearch for user interfaces, reporting, and historical data storage. Many teams also want the ability to send data to other warehouses for their own purposes. While creating custom exporters has been possible for some time, in this post we'll explore how you can easily test them on your laptop using Camunda 8 Run (C8 Run).

C8 Run is specifically targeted at local development, making it faster and easier to build and test applications on your laptop before deploying them to a shared test environment. Thank you to our colleague Josh Wulf for this blog post detailing how to build an exporter.

Download and install Camunda 8 Run

For detailed instructions on how to download and install Camunda 8 Run, refer to our documentation here. Once you have it installed and running, continue on your journey right back here!

Download and install Camunda Desktop Modeler

You can download and install Desktop Modeler using the instructions found here. You may need to open the "Alternative downloads" dropdown to find your preferred installation. Select the appropriate operating system, follow the instructions, and be sure to start Modeler up. We'll use Desktop Modeler to create and deploy sample applications to Camunda 8 Run a little bit later.

Create a sample exporter

First, we'll create a very simple exporter, install it in your local C8 Run environment, and see the results. In this example, we'll create a Maven project in IntelliJ, add the exporter dependency, and then create a Java class that implements the Exporter interface with straightforward logging to system out. Feel free to use your favorite integrated development environment and build automation tools.

Once you’ve created a sample Maven project, add the following dependency to the pom.xml file. Be sure to match the version of the dependency, at the very least the minor version, with your C8 Run installation.

<dependencies>
  <dependency>
      <groupId>io.camunda</groupId>
      <artifactId>zeebe-exporter-api</artifactId>
      <version>8.6.12</version>
  </dependency>
</dependencies>

After reloading the project with the updated dependency, go to the src/main/java folder and create a package called io.sample.exporter:

Sample-exporter

Next, create a class called SimpleExporter in the package:

Simple-exporter

In SimpleExporter add implements Exporter, and then you should be prompted to select an interface. Be sure to choose Exporter io.camunda.zeebe.exporter.api:

Exporter-interface

You'll likely get a message saying you need to implement the export method of the interface. You'll want to implement the open method as well. Either select the option to implement the methods or create them yourself. The code should look something like this:

package io.sample.exporter;


import io.camunda.zeebe.exporter.api.Exporter;
import io.camunda.zeebe.exporter.api.context.Controller;
import io.camunda.zeebe.protocol.record.Record;


public class SimpleExporter implements Exporter
{
   @Override
   public void open(Controller controller) {
       Exporter.super.open(controller);
   }


   @Override
   public void export(Record<?> record) {
      
   }
}

Let's make some updates. First we'll add a Controller object, which provides a method to mark a record as exported and move the record position forward. Otherwise, the Zeebe broker will not truncate the event log, which will eventually lead to full disks. Add a controller field, Controller controller;, to the class and update the open method, replacing the generated code with this.controller = controller;

Your code should now look something like this:

public class SimpleExporter implements Exporter
{
   Controller controller;


   @Override
   public void open(Controller controller) {
       this.controller = controller;
   }


   @Override
   public void export(Record<?> record) {
   }
}

Let's implement the export method. We'll print something to the log and move the record position forward. Add the following code to the export method:

if(! record.getValue().toString().contains("worker")) {
   System.out.println("SIMPLE_EXPORTER " + record.getValue().toString());
}
// mark the record as exported so the broker can truncate the event log
controller.updateLastExportedRecordPosition(record.getPosition());

The connectors will generate a number of records and the if statement above will cut down on the noise so we can focus on events generated from processes. Your class should now look something like this:

public class SimpleExporter implements Exporter
{
   Controller controller;

   @Override
   public void open(Controller controller) {
       this.controller = controller;
   }

   @Override
   public void export(Record<?> record) {
       if(! record.getValue().toString().contains("worker")) {
           System.out.println("SIMPLE_EXPORTER " + record.getValue().toString());
       }
       // mark the record as exported so the broker can truncate the event log
       controller.updateLastExportedRecordPosition(record.getPosition());
   }
}

Next, we’ll package this up as a jar file, add it to the Camunda 8 Run libraries, update the configuration file to point to this exporter and see it in action.

Add custom exporter to Camunda 8 Run

Using either Maven terminal commands (i.e., mvn package) or your IDE's Maven command interface, package the exporter. Depending on what you've defined for artifactId and version in your pom file, you should see a file named artifactId-version.jar in the target directory. Here is an example jar file with an artifactId of exporter and a version of 1.0-SNAPSHOT:

Example-jar-artifactid

While you don’t have to copy and paste this jar file into the Camunda 8 installation, it’s a good idea. As long as the Camunda 8 Run application can access the directory, you can place it anywhere. In this example we’re placing the jar into the lib directory of the Camunda 8 Run installation in <Camunda 8 Run root directory>/camunda-zeebe-8.x.x/lib.

Lib-directory

Next, update the application.yaml configuration file to reference the custom exporter jar file. It can be found in the <Camunda 8 Run root directory>/camunda-zeebe-8.x.x/config directory.

Example Configuration:

zeebe:
  broker:
    exporters:
      customExporter:
        className: io.sample.exporter.SimpleExporter
        jarPath: <C8 Run dir>/camunda-zeebe-8.x.x/lib/exporter-1.0-SNAPSHOT.jar

This ensures that Camunda 8 Run recognizes and loads your custom exporter during startup.

Now let’s start up Camunda 8 Run.

Start Camunda 8 Run and observe the custom exporter in action

Open a terminal window and change directory to the Camunda 8 Run root directory. In it you should find the start.sh or c8run.exe file, depending on your operating system. Start either one (./start.sh or c8run.exe).

Once Camunda 8 Run has started and you once again have a prompt, change the directory to log, ie: <Camunda 8 Run root directory>/log. In that directory there should be three logs, camunda.log, connectors.log, and elasticsearch.log.

Log-directory

Start tailing or viewing camunda.log using your favorite tool. Next, what we’ll do is create a very simple process, deploy it, and run it to view sample records from a process instance.

Create and deploy a process flow in Desktop Modeler

Go to Modeler and create a new Camunda 8 BPMN diagram. Build a simple one-step process with a Start Event, a User Task, and an End Event. Deploy it to the Camunda 8 Run instance. Your Desktop Modeler should look something like this:

Process-camunda-desktop-modeler

You can then start a process instance from Desktop Modeler as shown here:

Start-instance-camunda-desktop-modeler

Go back to camunda.log and you should see entries that look something like this:

SIMPLE_EXPORTER {"resources":[],"processesMetadata":[{"bpmnProcessId":"Process_0nhopct","version":1,"processDefinitionKey":2251799813703443,"resourceName":"diagram_1.bpmn","checksum":"xbmiHFXd3lVQbwV1gq/UEQ==","isDuplicate":true,"tenantId":"<default>","deploymentKey":2251799813703442,"versionTag":""}],"decisionRequirementsMetadata":[],"decisionsMetadata":[],"formMetadata":[],"tenantId":"<default>","deploymentKey":2251799813704156}
SIMPLE_EXPORTER {"bpmnProcessId":"Process_0nhopct","processDefinitionKey":0,"processInstanceKey":-1,"version":-1,"variables":"gA==","fetchVariables":[],"startInstructions":[],"tenantId":"<default>"}
SIMPLE_EXPORTER {"bpmnElementType":"PROCESS","elementId":"Process_0nhopct","bpmnProcessId":"Process_0nhopct","version":1,"processDefinitionKey":2251799813703443,"processInstanceKey":2251799813704157,"flowScopeKey":-1,"bpmnEventType":"UNSPECIFIED","parentProcessInstanceKey":-1,"parentElementInstanceKey":-1,"tenantId":"<default>"}
SIMPLE_EXPORTER {"bpmnProcessId":"Process_0nhopct","processDefinitionKey":2251799813703443,"processInstanceKey":2251799813704157,"version":1,"variables":"gA==","fetchVariables":[],"startInstructions":[],"tenantId":"<default>"}
SIMPLE_EXPORTER {"bpmnElementType":"PROCESS","elementId":"Process_0nhopct","bpmnProcessId":"Process_0nhopct","version":1,"processDefinitionKey":2251799813703443,"processInstanceKey":2251799813704157,"flowScopeKey":-1,"bpmnEventType":"UNSPECIFIED","parentProcessInstanceKey":-1,"parentElementInstanceKey":-1,"tenantId":"<default>"}
SIMPLE_EXPORTER {"bpmnElementType":"PROCESS","elementId":"Process_0nhopct","bpmnProcessId":"Process_0nhopct","version":1,"processDefinitionKey":2251799813703443,"processInstanceKey":2251799813704157,"flowScopeKey":-1,"bpmnEventType":"UNSPECIFIED","parentProcessInstanceKey":-1,"parentElementInstanceKey":-1,"tenantId":"<default>"}
SIMPLE_EXPORTER {"bpmnElementType":"START_EVENT","elementId":"StartEvent_1","bpmnProcessId":"Process_0nhopct","version":1,"processDefinitionKey":2251799813703443,"processInstanceKey":2251799813704157,"flowScopeKey":2251799813704157,"bpmnEventType":"NONE","parentProcessInstanceKey":-1,"parentElementInstanceKey":-1,"tenantId":"<default>"}

Now you can experiment extracting data from the JSON objects for your own purposes and experiment with sending data to warehouses of your choice. Enjoy!

Looking for more?

Camunda 8 Run is free for local development, but our complete agentic orchestration platform lets you take full advantage of our leading platform for composable, AI-powered end-to-end process orchestration. Try it out today.

How to use Camunda's SOAP Connector
https://camunda.com/blog/2025/01/how-to-use-camunda-soap-connector/

Learn how to use Camunda's SOAP Connector to enable you to interact with applications that are exposed via Simple Object Access Protocol (SOAP).

SOAP has been around for quite some time, and there are many business applications using the protocol. We're asked on a regular basis how Camunda can help orchestrate processes that include these mission-critical applications. As a result of this feedback from our customers, we have provided a SOAP Connector to help speed time to value when creating your processes.

We'll be using Camunda 8 Run and Desktop Modeler in this example, though other options, like Web Modeler and Camunda 8 Self-Managed, can be used as well. We'll also be using the ubiquitous SoapUI application as an endpoint, though you can use your own endpoints and treat this as a tutorial if you'd like.

Download and install Camunda 8 Run

You can use an already functioning Camunda 8 environment as the SOAP Connector is bundled in the latest releases. If you’re new to Camunda or if you want to get an environment up and running on your computer quickly, we highly recommend using Camunda 8 Run. For detailed instructions on how to download and install it, refer to our documentation here. Once you have it installed and running, continue on your journey right back here!

Download and install Camunda Desktop Modeler

You can download and install Desktop Modeler using the instructions found here. You may need to scroll down to the Open Source Desktop Modeler section. Select the appropriate operating system, follow the instructions, and be sure to start Modeler up.

Download and install SoapUI

In this example, we’ll use SoapUI and one of its tutorials as the SOAP endpoint. You can download SoapUI here. When you install SoapUI, be sure to also install the tutorials:

Soapui-setup

We’re almost there. With the latest releases of Camunda Desktop Modeler you may have noticed this message pop up when starting it:

Camunda-connector-pop-up

Desktop Modeler creates connector template shortcuts for all of the out-of-the-box connectors, with the exception of the SOAP Connector. To add the SOAP Connector template, you'll need to download and install it; the template can be found here. Depending on which operating system you're using, you'll need to place the connector template in a particular folder, and details about that can be found here. You'll then need to either restart or reset Desktop Modeler (Ctrl-r or Cmd-r) for the connector template to be recognized.

Start SoapUI and start sample project

Start SoapUI locally and follow the instructions to import and start up the SOAP Sample Project as described here. Be sure to start ServiceSoapBinding MockService as shown here:

Start-mock-service

Once you have started the mock service you can explore the various web service requests. Expand ServiceSoapBinding in the navigator panel and expand the login node. Open login rq by double clicking on it. Your screen should look something like this:

Login-rq

You can run the request by clicking on the green play button to ensure that the service is running. You should get a response similar to the following:

Login-rq-response

Next, we'll replicate this using the SOAP Connector in Camunda.

Create a process flow in Modeler

Go to Modeler and create a new Camunda 8 BPMN diagram. Build a simple one-step process with a Start Event, a Task, and an End Event. Put focus on the task by clicking on it. To change the element, click the wrench icon in the context pad just to the right of the task and look for the SOAP Connector. In the dialog box that appears, you can search for it by typing 'soap' in the search field, or you can scroll through the templates. Your screen should look something like this:

Bpmn-model-soap-connector

Select SOAP Connector. Give the task a name. In this example the name is Call SOAP service. Now we need to fill in some parameters to make it work. You’ll notice the Service URL is in red to indicate a required field.

Service-url

Go back to SoapUI and login rq and find the service URL. You should see it at the top:

Login-rq-response

Copy and paste the URL into the Service URL field:

http://127.0.0.1:8088/mockServiceSoapBinding

We’ll leave the fields of Authentication, SOAP version, and SOAP Header using default values:

Soap-service

For the SOAP body we will define a template with placeholders for the variables that we'll pass in. For SOAP body, select Template from the dropdown. For the XML Template itself, you can copy the contents of <soapenv:Body> from login rq in SoapUI:

Login-rq-2

Paste them into the XML Template field and replace the values of username and password with {{username}} and {{password}} placeholders. The XML should look something like this:

<sam:login>
   <username>{{username}}</username>
   <password>{{password}}</password>
</sam:login>

Next, enter a variable name for the XML template context. We will pass a JSON object with that name into the process. For this example we used usernamePassword. That section of the properties panel should look something like this:

Xml-template-soap-connector

Now we need to enter the namespaces, which requires converting the XML namespace declarations into JSON. Heading back to login rq in SoapUI, we can copy the namespaces defined in the <soapenv:Envelope...> element, paste them into the Namespaces field, and edit them into JSON:

{ 
    "soapenv":"http://schemas.xmlsoap.org/soap/envelope/",
    "sam":"http://www.soapui.org/sample/"
}

Finally, set an output variable. You can dump the entire response into a single variable, and/or, if you know the structure of the response, use a FEEL expression to extract what you need. For now, let's just dump the entire contents into a single variable; afterward we can create a suitable FEEL expression to retrieve the contents we need. For the Result variable field we'll use result.

Output-variable-soap-connector

Save your work.

Run the process

In Desktop Modeler go to the lower left hand corner and click on the Deploy button which resembles a rocket ship:

Deploy-proces-camunda

A dialog box should appear. Provide a deployment name, select Camunda 8 Self-Managed, enter a Cluster endpoint of:

http://localhost:26500

And set Authentication to None. Your screen should look something like this:

Deployment-name

Click on Deploy. You should get a deployment successful message in Modeler:

Deployed-confirmation

Let's run a process. In the lower left-hand corner of Modeler, click the Run icon next to the Deploy button.

Run-process-camunda

A dialog box should appear prompting you to enter in some JSON data. Copy and paste this into the JSON field:

{ "usernamePassword": 
   {
      "username": "Login", 
      "password": "Login123"
   } 
}

The dialog box should look something like this:

Json-data

Click on Start.

View process in Operate

Open a browser and navigate to:

http://localhost:8080/operate

Log into Operate using demo/demo for username and password. The process will likely be completed by the time you navigate to the process instance in Operate. You should see the result variable with its contents. We’ll use this as a guide to extract the sessionid using FEEL in a little bit.
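The exact fields depend on the mock service's response, but the converted result variable should look roughly like the FEEL context below (the values shown are illustrative); note the path down to sessionid, which is what we'll extract shortly:

{
  "Envelope": {
    "Body": {
      "loginResponse": {
        "sessionid": "12345"
      }
    }
  }
}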

What happens if you run it again? What happens if you change the username or password?

Run-variations

Congratulations, you’ve successfully integrated with a SOAP endpoint using Camunda’s SOAP Connector!

Extracting the sessionid from the response

Before updating the process, you'll probably want to stop and restart ServiceSoapBinding MockService to avoid errors (e.g., "user already logged in" if you ran it multiple times) when running the process again (see the red box highlighting the service stop and start):

Stop-start-process

Back in Modeler, go to the process diagram and open the Call SOAP service task. In the Output Mapping section, in the Result expression field, use the following expression to create a variable called sessionid, which extracts the sessionid from the response. The response is held in a variable called, oddly enough, response:

{sessionid: response.Envelope.Body.loginResponse.sessionid}

Your screen should look something like this:

Result-expression-soap-connector

Run the process again and head over to Operate to view the results:

View-results-operate

You can see the new variable sessionid with the extracted data. You could also get to sessionid directly from the result variable using the following expression:

result.Envelope.Body.loginResponse.sessionid

However, it's more elegant to extract just the data you need. You can now use sessionid for the other calls in the SoapUI example – logout, search, buy. Have fun experimenting!

Try Camunda today

If you don’t already have a Camunda account, you can learn more about Camunda and get started with everything it has to offer for free. Note that Camunda 8 Run is a newly released distribution to help developers run Camunda 8 locally, so it requires a self-managed installation.

Testing Components in Camunda 8 Using a Blue-Green Approach
https://camunda.com/blog/2024/12/testing-components-in-camunda-8-using-a-blue-green-approach/

Learn how to test various components of a process application in Camunda 8 using a blue-green technique to ensure a new component works as expected.

While some would advise against testing anything in production, many use blue-green methods to test or validate components in production rather than trying to simulate production in nonproduction environments.

Typically, these components are tested with a small population of data rather than the entire population. In this way, a component can be battle-tested before being rolled out to a wider audience, limiting unexpected outcomes and subsequent fixes or rollbacks.

Testing different workers, connectors, rules, and subprocesses in Call Activities

What do workers, connectors, rules, and subprocesses have in common? They can all be dynamically invoked, meaning the types and IDs can be set to variables to use at runtime to enable dynamic behavior. Taking advantage of this feature will allow us to employ a blue-green strategy to test variants or entirely new components. We’ll also discuss ways to dynamically set those values via rules or expressions.

Blue-green testing in Camunda 8

Testing different workers and connectors

The easiest way to enable different workers is to set a different type in Service Tasks. While you can set the type statically, using a FEEL expression allows for the type to be set at runtime.

Here is an example of setting a Service Task type to a FEEL expression:

Setting a Service Task type to a FEEL expression

Clicking on the fx icon in the Job Type field changes it from static text to a FEEL expression. If you’re not familiar with FEEL (Friendly Enough Expression Language), it’s part of the decision model and notation (DMN) standard. It’s used to appeal to a wide range of people, not just developers, to create logical operations. Don’t let the name of the expression language fool you—FEEL is quite powerful. For more information on FEEL, see our documentation. We’ll see more FEEL examples later.

What you see in this example is a FEEL expression that points to a variable called aVariable. In this example, aVariable needs to be set somewhere in the process to enable dynamic behavior in Service Task.

Instead of trying to come up with a unique name for the type or trying to remember what type a particular worker is looking for, you can concatenate strings and variables:

Concatenate strings and variables

In this example we’ve taken the static string “myWorker.” and appended the variable version. We need to make sure version is set somewhere in the process. Let’s investigate some ways that it can be accomplished dynamically.
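For example, assuming version resolves to a value like "v2" (a hypothetical value for illustration), the expression produces a concrete job type at runtime:

"myWorker." + "v2"   // returns "myWorker.v2", the job type a worker for the newer version would subscribe to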

One option is to use rules to decide what worker to use. Take, for example:

Choosing a worker

While a business rule task could be added to the process, that approach makes the most sense when a rule task already exists to piggyback on. For something like this, if there is no existing rule task available, you may want to use a FEEL expression instead.

Here is the equivalent:

using a FEEL expression

At some point in the process, the variable amount needs to be set, and the FEEL expression will do the rest:

Setting amount

You can also randomize the use of the alternate version to employ a Canary deployment pattern with increasing probabilities as confidence in the newer version grows. A simple example would be to use the random number() FEEL function to generate a random number between 0 and 1 and set the probability percentage as needed.

Taking the example above, it would look something like:

Employing a Canary deployment pattern
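If you'd rather type it than read it off the screenshot, a minimal sketch of such an expression (assuming the same job-type prefix as above, version names v1 and v2, and a 20% canary share, all of which are illustrative) could look like this:

// route roughly 20% of process instances to the newer worker version
"myWorker." + (if random number() < 0.2 then "v2" else "v1")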

Let’s move ahead with the example without the use of the random number() function. Note the jobType in the service task. It’s been correctly appended with the version:

No randomnumber function

The FEEL expression can occur almost anywhere in the process but it makes the most sense to include it, as in this example, as an input variable to a task to make it easy to find and update. Otherwise you may find yourself tracking down a “bug” of unwanted behavior.

You can also test connectors in a similar way by updating the job type in a connector template. Typically, the job type in a connector is a hidden value in the template, meaning a user or developer cannot change it in the properties panel, though it can be changed in a process definition BPMN XML once a connector template has been applied.

While not the most elegant solution, it will work:

process definition BPMN XML

A more elegant solution would be to update the connector template to make the job type field visible to allow people to update the job type in the properties panel instead.

Testing different rules and subprocesses

Using similar techniques (FEEL expressions), rules and sub-processes in Call Activities can also be blue-green tested. The IDs for both can be FEEL expressions, which can be set at runtime to dynamically invoke a rule or process:

Rule

blue green testing a rule

Call Activity

blue green testing a call activity
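As a minimal sketch (the decision and process IDs below are hypothetical), the Called decision ID of a business rule task or the Called element of a call activity can be an expression built the same way:

// Called decision ID on a business rule task
"discountRules_" + version

// Called element (process ID) on a call activity
"orderSubprocess_" + version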

And there you have it! A few ways to blue-green test various components in Camunda to reduce impact and provide flexibility when introducing changes to your applications.

Using Camunda 8 JavaScript SDK for Node.js in an AWS Lambda Function
https://camunda.com/blog/2024/06/using-camunda-javascript-sdk-for-node-js-in-an-aws-lambda-function/

Learn how to use the Camunda 8 JavaScript SDK for Node.js in an AWS Lambda function to interact with Camunda SaaS.

Recently, Camunda introduced official support for the Camunda 8 JavaScript SDK for Node.js. Almost immediately, we began fielding questions asking what could be done using the SDK.

While you can do quite a lot with the SDK, one question from the community was the inspiration for this blog post:

Can I use it in an AWS Lambda function to start a process in Camunda SaaS?

I initially thought, “Sure, why not?” But of course having an actual example would be ideal for proving it does actually work. So let’s get started building a running demo!

Note: If you don't already have an active account, you can sign up to try Camunda Platform 8 for free and follow along.

Getting the Camunda 8 JavaScript SDK library into AWS

To be able to use libraries in an AWS Lambda function, you need to create a layer and upload a zip file of the library into the layer. First create a zip file of the SDK. Be sure to install Node.js along with npm.

Create a folder somewhere on your drive, like your user directory: /Users/you/blog_example.

Open a terminal window and cd to /Users/you/blog_example.

Make another directory in your blog_example folder called nodejs (it must be nodejs):

mkdir nodejs

Change directory to nodejs and create a new project using init. You can take all of the defaults:

npm init

Next, use npm to install the Camunda 8 JavaScript SDK for Node.js into this project:

npm i @camunda8/sdk

If you need to use a newer version of the SDK later on, you can create a new version of the layer in Layers.

Your blog_example folder should now look something like this:

Now you need to zip up the contents. Change directory to blog_example and zip up the contents of the nodejs folder.

cd ..
zip -r camunda8sdk.zip nodejs/*

This creates a zip file called camunda8sdk.zip, and it should contain everything you’ve just created and installed. You’ll upload this to AWS in the next section.

Create a layer in AWS and upload the SDK

Take the following steps to create a layer with the Camunda 8 SDK libraries.

First, log in to your AWS console and navigate to the Lambda function console and then to Layers. Click Create layer.

Sidebar showing Layers menu selected and Create Layer button

In the next screen, provide an arbitrary name, such as Camunda8SDK, and upload the zip file we created earlier. Select Node.js 20.x as a compatible runtime. Click Create to create the layer.

Next, you need to generate API credentials in Camunda SaaS and set those values in AWS.

Create Camunda 8 API credentials and use them in AWS

To access Camunda SaaS securely, you need to generate API credentials. If you already have a set of credentials, you can skip this section and go on to the next section where the credentials are used.

To create credentials, use the following steps (if you need help with these steps, refer to the documentation listed after each step):

Log into your Camunda SaaS console (Help).

If you haven't created a Camunda cluster, do so now (Creating a cluster).

Navigate to the API tab in your cluster and create credentials (Creating credentials).

  • Be sure to note the generated Client Secret, as it will only be shown upon initial creation.
  • You can select all scopes to access all Camunda components.
  • You can also download the credentials, including the secret, for later referral.
  • Download the credentials as Env Vars, as it will contain necessary URLs along with the credentials.
Screen for selecting credentials format, with Env Vars tab selected

      Using Camunda SaaS credentials in AWS

      Once you’ve generated credentials, you can use them in AWS. Go back to AWS and bring up the Lambda function console.

      In the Lambda console, select Functions and click Create function.

      Sidebar showing Functions selected and Create function button highlighted

      In the next screen, provide a name for the function, such as Camunda8SDKExample, and leave the defaults for Author from scratch, Node.js 20.x runtime, and x86_64 architecture.

      Click Create function.

      Create function window with Author from scratch radio button selected

Click the hyperlink in the list of functions to open the newly created function. In the next screen, click Configuration and then Edit to add environment variables.

      Function overview screen for Camunda8SDKExample

Add the following environment variables. HOME is a folder where the generated authorization token is stored.

CAMUNDA_OAUTH_URL: https://login.cloud.camunda.io/oauth/token
ZEEBE_ADDRESS: your Zeebe address from the client credentials
ZEEBE_CLIENT_ID: your Zeebe Client ID from the client credentials
ZEEBE_CLIENT_SECRET: your Zeebe Client Secret from the client credentials
HOME: /tmp

      Your screen should look something like this:

      Screen showing details for encrypted environment variables
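If you want to confirm the variables are actually visible to the function before wiring in the SDK, you can temporarily deploy a throwaway handler like the sketch below (my own addition, not part of the official example). It only inspects process.env and never prints the secret value.

exports.handler = async () => {
    // Names must match the environment variables configured above.
    const required = ['CAMUNDA_OAUTH_URL', 'ZEEBE_ADDRESS', 'ZEEBE_CLIENT_ID', 'ZEEBE_CLIENT_SECRET', 'HOME'];
    const missing = required.filter((name) => !process.env[name]);
    console.log(missing.length ? `Missing: ${missing.join(', ')}` : 'All Camunda variables are set');
    return { missing };
};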

      You may need to update the timeout of your function. The default is 3 seconds, which may not be long enough. In the Configuration tab, go to General Configuration and click Edit to access the timeout configuration.

Camunda8SDKExample function with general configuration details

      In the Edit basic settings page, update Timeout to 10 seconds.

      Basic settings options showing 128MB for Memory, 512MB for Ephemeral storage, and 10 sec for time out

      You still need to add the layer to the Lambda function code to be able to use it. Click the Code tab in the function.

      The Code tab for the Camunda8SDKExample function overview, showing the code source and Test button

      Next, scroll down in the Code tab until you see the Layers section. Click Add a layer.

      Layers Info page with the Add a layer button

      Add the Camunda 8 SDK layer you created earlier. Be sure to select Custom layers to see it. There should only be one version of this layer available.

      The Choose a Layer page with the Custom layers radio button selected

      Add code and test it

You may need to rename the code file; it may default to index.mjs (an ECMAScript module). Because the sample below uses CommonJS require, rename it to index.js.

      Menu showing the option to Rename the index.mjs folder

      It’s now the moment of truth—time to add some code and test it!

      Copy and paste this sample code into the code section. Be sure to update the process ID to something available in your cluster. You may have to create a sample process and deploy it to your cluster:

const { Camunda8 } = require('@camunda8/sdk');

exports.handler = async (event) => {
    const c8 = new Camunda8();
    const zeebe = c8.getZeebeGrpcApiClient();

    const p = await zeebe.createProcessInstance({
        bpmnProcessId: 'Your process id here',
        variables: {
            hello: "World",
        },
    });

    return p;
};
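If anything goes wrong (bad credentials, an unknown process ID), the handler above simply throws and Lambda reports a generic error. Here is a minimal sketch of the same handler with basic error handling; the try/catch wrapper and the HTTP-style result object are my own additions, not part of the official example:

const { Camunda8 } = require('@camunda8/sdk');

exports.handler = async (event) => {
    const c8 = new Camunda8();
    const zeebe = c8.getZeebeGrpcApiClient();

    try {
        const p = await zeebe.createProcessInstance({
            bpmnProcessId: 'Your process id here',
            variables: { hello: 'World' },
        });
        // Return only what the caller is likely to need.
        return { statusCode: 200, body: JSON.stringify(p) };
    } catch (err) {
        console.error('Failed to start process instance', err);
        return { statusCode: 500, body: JSON.stringify({ error: String(err) }) };
    }
};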

      Be sure to deploy your code:

      The Code Source Info tab showing a notice that says Changes not deployed

Click Test and use an empty JSON object ({}) for your test event for now. You can add a populated JSON object later and extract its contents to pass as variables to a process instance.

      Configure Test Event screen with the Create new event radio button selected

      Save your test event and then invoke a test:

      The Code Source Info page with Test button

      And voila! Check Operate for a started process instance.

      Process instance for Lambda Test

      As a further exercise, you can pass in a JSON object and extract its contents for variables. Here is the code snippet (be sure to deploy the updated code):

const { Camunda8 } = require('@camunda8/sdk');

exports.handler = async (event) => {
    // The Lambda event is already a parsed object; the stringify/parse round trip
    // simply copies it before the foo property is read.
    var evt = JSON.stringify(event);
    var parsedEvt = JSON.parse(evt);
    var value = parsedEvt.foo;

    const c8 = new Camunda8();
    const zeebe = c8.getZeebeGrpcApiClient();

    // Pass the extracted value into the new process instance as the variable "foo".
    const p = await zeebe.createProcessInstance({
        bpmnProcessId: 'Your process id',
        variables: {
            "foo": value,
        },
    });

    return p;
};

      And the updated test event:

      The Configure test event page with the Edit saved event radio button selected
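In case the screenshot is hard to read, the saved test event only needs a foo property for the snippet above to pick up; for example (the value is arbitrary):

{
    "foo": "bar"
}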

      And the result upon invocation:

      Menu showing name and value variables for Lambda Test

      Congratulations! You can now use the Camunda 8 JavaScript SDK for Node.js to interact with Camunda SaaS from Lambda functions in AWS!

      And remember, you can always sign up to try Camunda 8 for free.

      The post Using Camunda 8 JavaScript SDK for Node.js in an AWS Lambda Function appeared first on Camunda.

      ]]>
      Migrating Processes from Pega to Camunda: A Step-by-Step Tutorial https://camunda.com/blog/2023/05/migrating-from-pega-to-camunda-platform-tutorial/ Thu, 25 May 2023 20:31:00 +0000 https://camunda.com/?p=82123&preview=true&preview_id=82123 Learn how to migrate your process flows from Pega to Camunda quickly and easily with this guide, which walks you through how to use our free migration tool.

      The post Migrating Processes from Pega to Camunda: A Step-by-Step Tutorial appeared first on Camunda.

      ]]>
      Learn how to migrate your process flows from Pega to Camunda quickly and easily with this guide, which walks you through how to use our free migration tool.

      If you need to migrate away from Pega you can quickly run into a challenge—process flows in Pega don’t conform to any open standard, despite their BPMN-like appearance. This can make any migration a tedious process of manual recreation.

If you’re migrating to Camunda, the Camunda Consulting team has created a set of freely available tools for migrating process flows. The tools for migrating Pega process flows can be found here. We’ve split the tools into the original converter for Camunda Platform 7, which we released a few years ago, and a newly added converter for Camunda Platform 8. While the BPMN XML for both is largely the same, there are slight schema differences that distinguish the models for each.

You’ll quickly notice both are Maven projects, which can be opened in just about any integrated development environment. Eclipse and IntelliJ are two of the more popular IDEs, but first you’ll need to clone or download the migration tools repository here.

Note: If you don’t already have an active account, you can sign up to try Camunda Platform 8 for free and follow along.

      Converting Pega XML to BPMN

      For this tutorial we’ll use Eclipse as our IDE. Take the following steps to migrate Pega XML to BPMN:

1. Once you’ve cloned or downloaded the Git repository, copy the contents of the Pega converter tool repository into a fresh workspace. If, for example, your Git repository is located at /Users/you/Documents/GitHub, you’ll find the Pega converter at /Users/you/Documents/GitHub/migrate-to-camunda-tools/Pega/create.
      2. Copy the entire folder to the workspace of your choice.
      3. Start Eclipse and select the workspace you just copied the contents into.
      4. Once Eclipse has started, navigate to File > Import > Maven > Existing Maven Projects.
      5. Click Next.
      6. In the dialog box that appears, click Browse and navigate to the folder you just copied into your workspace. Your screen should look something like this (see below). A dialog box showing the root directory of the project and the pom.xml file.
      7. Click Finish.

      The project will be imported into your workspace. You may want to update any Java compiler differences between the provided code and your environment, but it should work as is.

      Create a Run configuration to run the converter in Eclipse

      Take the following steps to now create a Run configuration and run the converter in Eclipse:

      1. Right click on the root project folder and select Run As > Run Configurations… 
      2. In the dialog box that appears, double click on Java Application to create a new configuration. The project name should already be filled out in the dialog box. You can give this configuration a new name if you want.
3. Select a main class. Click the Search button, select BPMNGenFromPega – org.camunda.bpmn.generator, and click OK. Your screen should look something like this: A dialog box showing the root directory of the project and the pom.xml file.

      Add input and output arguments to Run configuration

      Now, you’ll need to provide two arguments: the XML export from Pega and the name of the converted file. Enter the path and file names in the Program arguments section of the Arguments tab enclosed by quotation marks, just in case. There is a provided sample Pega XML file to get you started. To use this sample, enter the following for the input and output files:

      “./src/main/resources/SamplePegaProcess.xml”  “./src/main/resources/ConvertedProcessFromPega.bpmn”

      Your screen should look something like this:

      Input-output-args-run-configuration

      Click Run. A console window should open and you should see the following in the console:

Diagram ./src/main/resources/SamplePegaProcess.xml converted from Pega and can be found at ./src/main/resources/ConvertedProcessFromPega.bpmn

      You may need to refresh Project Explorer to see the generated BPMN file. The resources folder contains a PNG file (samplePegaProcessDiagram.png) of the original process in Pega:

      The original process in Pega

      Using Camunda Modeler, open ConvertedProcessFromPega.bpmn:

      The converted process, now in BPMN

      Creating a jar file

      If you’d like to create a jar file of the utility, you have two options:

      • Right click on the pom.xml file and select Run As > Maven install.
      • Right click on the root folder,  select Show in Local Terminal, and issue the following Maven command: mvn clean package install.

      In either case (or using your own preferred method) you should get a jar file in your /target folder. Copy that jar wherever you’d like and issue the following command in a terminal:

      java -jar yourGeneratedJarFile.jar “your input file” “your output file”

      That’s it! Please feel free to provide feedback in our Forum and watch this Git repository for additional converters as they become available.

      The post Migrating Processes from Pega to Camunda: A Step-by-Step Tutorial appeared first on Camunda.

      ]]>
      How to call the Camunda Inbound Webhook from Postman https://camunda.com/blog/2023/04/how-to-call-camunda-inbound-webhook-postman/ Wed, 26 Apr 2023 12:00:00 +0000 https://camunda.com/?p=79381&preview=true&preview_id=79381 Learn how to create processes using the Camunda Webhook Connector and to start them via Postman both in unauthenticated and authenticated modes.

      The post How to call the Camunda Inbound Webhook from Postman appeared first on Camunda.

      ]]>
By now you’ve probably heard about Camunda’s new Inbound Webhook Connector, which provides a way for applications to start processes on Camunda Platform via HTTP. A favorite tool for invoking HTTP calls is Postman, and in this blog entry we’ll discuss how you can create processes using the Webhook Connector and how to start them via Postman in both unauthenticated and authenticated modes. Let’s get started.

      Creating your Webhook Connector in Web Modeler

We’ll be using Camunda Platform 8 SaaS for the purposes of this post, though the same principles apply for Self-Managed. We’ll create a very simple process and apply the Webhook template to the start event. We’ll then deploy the process and start an instance using Postman. Finally, we’ll add HMAC authentication to the Webhook configuration, redeploy, and update Postman to start an instance using proper authentication.

      Log into your SaaS account and open Web Modeler. Create a project if there isn’t one created already, and create a new process diagram. Your screen should look something like the below. I’ve named the process Test Webhook. If you click on the Start event you’ll see a Template section in the Properties panel where you can select among the available Inbound Connectors.

      Selecting Inbound Webhook Connector from the available templates in Web Modeler.

      Select Webhook Connector:

      The available templates include an HTTP Webhook Connector and a GitHub Webhook Connector.

      After the selection you’ll be brought back to the process canvas, where you’ll see a Webhook ID has been generated. The ID, as we’ll see later, will form a part of the URL that will be called from Postman. You can update the ID if desired. For now, we’ll take the default. We’ll also leave HMAC Authentication disabled (though it’ll be enabled later).

      The default Webhook ID.

      Next, add a User Task and an End Event to complete the process. You may also want to provide a name for your process for easier identification later.

      Naming our test webhook process.

      Next, deploy the process to your favorite cluster:

      Deploying the process to a cluster.

Once the process is deployed, we can check back in Web Modeler and get the URL endpoint for the webhook. It can be found in the Webhook tab of the right side panel when you click on the Start event. You’ll notice the Webhook ID forms the last bit of the URL; the other ID after connectors.camunda.io is your cluster ID. We’ll use this URL in Postman to kick off a new instance. You can copy the URL by clicking the copy icon in its upper right corner.

      The webhook's URL endpoint in Modeler.

      Starting our process with Postman

      Be sure to download and install Postman. Create a new request, change the method to POST and paste the URL. Click on Send and it should be successful:

      A new request in Postman.

      Check Operate to ensure the process has started:

      Viewing our process in Operate and verifying that the process has begun.

      Enabling authentication in the Connector

      It’s not a good idea to deploy processes using Inbound Connectors without authentication. So now let’s configure a secret in the cluster, enable authentication on the Webhook Connector, use that secret in the Connector configuration, and properly hash that secret in Postman to be able to send an authenticated request.

      Creating a secret in the cluster

      Go to your Cloud Console and navigate to your cluster and then to the Connector Secrets tab.  

      Creating a secret in the Connector Secrets section of the Console.

      Click on Create new secret. Be sure to put in hmacSecret for the Key. The Value can be anything you want. Just be sure to remember it for later. By default the Value is obscured, password style, but can be exposed by clicking on the eye icon. Click on Create.

      Creating a new secret, with a key of hmacSecret, and a value of thisisnotverysecret.

      Your screen should look something like this:

      A view of the secrets page after we have created the secret

      Updating Web Modeler for authentication

Head back to Web Modeler. Open the Properties of the Start event and enable HMAC Authentication. Once you do that, you’ll notice additional fields appearing. The first is HMAC Secret Key. This is the key that is shared between the Connector and the applications calling the Connector. Enter the name of the secret we just created prefixed with secrets. (that is, secrets.hmacSecret) so that the engine knows to look for the value among the Connector Secrets.

      Next, we’ll define where in the request header the hashed secret from the calling application can be found using the HMAC Header parameter. Let’s use HMAC-Header for the value of the parameter although it can be anything you want. The last field, HMAC Algorithm, defaults to SHA-256 and can be left as-is. Be sure to Deploy the updated process. Check Operate to ensure there is a second version of your process.

      A view of Web Modeler that includes updated Secret Keys for authentication

      Updating Postman for authentication

So now we need to hash the secret and store it in the header if we want to be able to start a process. If you click on Send in Postman now, you’ll get a message saying the Connector isn’t authorized.

      Let’s fix that.

Included in the libraries of Postman is crypto-js, a JavaScript library of various crypto standards. We’ll use HMAC with SHA-256 to sign the request body (empty, in our case) with the key (aka secret) we defined earlier in the cluster. Using the Pre-request Script tab of the call in Postman, enter the following code:

      var bytes = CryptoJS.HmacSHA256(pm.request.body.raw, 'thisisnotverysecret');

      Now we need to transform the bytes into a hex string suitable for an HTTP request:

      var hex = CryptoJS.enc.Hex.stringify(bytes);

      Lastly, put the hex string into the request header. The name of the header needs to match what was defined in the Connector.

      pm.request.headers.add({
         key: "HMAC-Header",
         value: hex
      });

      The code in the Pre-request Script should look something like this:

      A view of the code discussed above within Postman.
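Since the screenshot can be awkward to copy from, here are the same three pieces assembled into a single Pre-request Script, with comments added for orientation:

// Sign the (empty) request body with the shared secret using HMAC-SHA-256.
var bytes = CryptoJS.HmacSHA256(pm.request.body.raw, 'thisisnotverysecret');

// Convert the raw bytes into a hex string suitable for an HTTP header.
var hex = CryptoJS.enc.Hex.stringify(bytes);

// Attach the signature using the header name configured on the Connector.
pm.request.headers.add({
   key: "HMAC-Header",
   value: hex
});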

Click on Send and this time it should work.

      Check with Operate to ensure a new instance running version 2 has started: 

      Verifying that a new instance with a version number of 2 has started.
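As an aside, you don’t have to use Postman; any client that can compute the same signature can call the webhook. Below is a minimal Node.js sketch using the built-in crypto module and the global fetch available in Node 18+. The URL and secret are placeholders you’d replace with your own values, and the empty body mirrors the Postman request above:

const crypto = require('crypto');

// Placeholders: paste your own webhook URL from Web Modeler and the value stored under hmacSecret.
const webhookUrl = 'https://your-connectors-endpoint/your-webhook-id';
const secret = 'thisisnotverysecret';
const body = ''; // same empty body the Postman request sends

// HMAC-SHA-256 of the request body, hex encoded, placed in the header the Connector expects.
const signature = crypto.createHmac('sha256', secret).update(body).digest('hex');

fetch(webhookUrl, {
    method: 'POST',
    headers: { 'HMAC-Header': signature },
    body,
})
    .then((res) => console.log('Webhook response status:', res.status))
    .catch((err) => console.error('Webhook call failed', err));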

      And there you have it, a tutorial on how to use and call Camunda’s Inbound Webhook Connector.

      Key Takeaways

Camunda 8 SaaS is an easy way to begin, or even continue, your process automation journey. You can create, manage, simulate, deploy, run, and analyze processes without having to download anything. Create your free 30-day trial today and get started within minutes. You can also read about Camunda’s newest Inbound Connectors on our blog here. In this tutorial, you learned how to create, deploy, and call an Inbound Connector in a secure fashion using Postman.

      The post How to call the Camunda Inbound Webhook from Postman appeared first on Camunda.

      ]]>
      Migrating processes from Pega to Camunda Platform 7 – Step-by-step Tutorial https://camunda.com/blog/2020/06/migrating-processes-from-pega-to-camunda-step-by-step-tutorial/ Tue, 09 Jun 2020 11:00:00 +0000 https://wp-camunda.test/migrating-processes-from-pega-to-camunda-step-by-step-tutorial/ It’s well known that process flows created in Pega don’t conform to any open standard, despite looking rather BPMN-like. Folks who are looking to jump start their migration from Pega to Camunda are stuck having to manually redraw processes in Modeler. But manually redrawing process flows is tedious and time consuming, especially if there are many or complex processes to convert. In this tutorial we’ll step you through a utility that can help you generate a BPMN compliant process that can serve as a starting point for your Pega to Camunda conversion. Pega XML to BPMN converter tutorial The Camunda Consulting team has created a set of freely available tools for migrating Pega process flows. You’ll quickly notice it is...

      The post Migrating processes from Pega to Camunda Platform 7 – Step-by-step Tutorial appeared first on Camunda.

      ]]>

      Note

      If you’re using Camunda Platform 8, our cloud-native process orchestration solution, please see this post on migrating from Pega to Camunda Platform 8 for an updated guide.

      It’s well known that process flows created in Pega don’t conform to any open standard, despite looking rather BPMN-like. Folks who are looking to jump start their migration from Pega to Camunda are stuck having to manually redraw processes in Modeler. But manually redrawing process flows is tedious and time consuming, especially if there are many or complex processes to convert. In this tutorial we’ll step you through a utility that can help you generate a BPMN compliant process that can serve as a starting point for your Pega to Camunda conversion.

      Pega XML to BPMN converter tutorial

      The Camunda Consulting team has created a set of freely available tools for migrating Pega process flows. You’ll quickly notice it is a Maven project which can be opened in just about any integrated development environment. Eclipse and Intellij are two of the more popular IDEs. But first you’ll need to clone or download the migration tools repository.

      For this tutorial we’ll use Eclipse as our IDE.

• Once you’ve cloned or downloaded the Git repository, copy the contents of the Pega converter tool repository into a fresh workspace. If, for example, your Git repository is at C:\gitRepos then you’ll find the Pega converter at C:\gitRepos\migrate-to-camunda-tools\Pega\create BPMN from Pega XML
      • Copy the entire folder to the workspace of your choice.
      • Next, start Eclipse and select the workspace you just copied the contents into. Once Eclipse has started, navigate to File > Import > General > Projects from Folder or Archive.
      • Click on Next.
      • In the dialog box that appears click on Directory and navigate to the folder you just copied into your workspace. Your screen should look something like this (see below)
      • Click on Finish.

      Import Projects from File System or Archive

      The project will be imported into your workspace. You may want to update any Java compiler differences between the provided code and your environment, but it should work as is.

      Next we’ll create a Run configuration to allow you to run the converter in Eclipse:

      • Right click on the root project folder and select Run As > Run Configurations…
      • In the dialog box that appears click on Java Application to create a new configuration. The project name should already be filled out in the dialog box. You can give this configuration a new name if you want.
      • Next, you’ll need to select a main class. Click on the Search button and be sure to select – BPMNGenFromPega – org.camunda.bpmn.generator. Select it and click on OK.
      • Your screen should look something like this:

      Create Manage and Run Configurations

      Now you’ll need to provide two arguments, the first one being the XML export from Pega and the second being the name of the converted file. Enter the path and file names in the Program arguments section of the Arguments tab enclosed by quotation marks, just in case. There is a provided sample Pega xml file to get you started. To use this sample enter the following for the input and output files:

“./src/main/resources/SamplePegaProcess.xml” “./src/main/resources/ConvertedProcessFromPega.bpmn”

      Your screen should look something like this:

      Create Manage and Run Configurations - Arguments

      Click on Run. A console window should open and you should see the following in the console:

Diagram ./src/main/resources/SamplePegaProcess.xml converted from Pega and can be found at ./src/main/resources/ConvertedProcessFromPega.bpmn

      The resources folder contains a PNG file (samplePegaProcessDiagram.png) of the original process in Pega and it will look like this:

      Converted Process From Pega

      Using Camunda Modeler, open ConvertedProcessFromPega.bpmn and it should look something like this:

      Converted Process From Pega

      Creating a jar file

      If you’d like to simply create a jar file of the utility, you have some options:

      • Either right click on the pom.xml file and select Run As > Maven install.
      • Or right click on the root folder and select Show in Local Terminal and issue the following Maven command: mvn clean package install.

      In either case (or using your own preferred method) you should get a jar file in your /target folder. Copy that jar wherever you’d like and issue the following command in a terminal:

      java -jar yourGeneratedJarFile.jar “your input file” “your output file”

      That’s it! Please feel free to provide feedback in our forum and watch this Git repository for additional converters as they become available.

      Getting Started

      Getting started on Camunda is easy thanks to our robust documentation and tutorials

      The post Migrating processes from Pega to Camunda Platform 7 – Step-by-step Tutorial appeared first on Camunda.

      ]]>
      Migrating process BPMN from IBM BPM to Camunda – Step-by-step Tutorial https://camunda.com/blog/2020/04/migrating-process-bpmn-from-ibm-bpm-to-camunda-step-by-step-tutorial/ Thu, 30 Apr 2020 11:00:00 +0000 https://wp-camunda.test/migrating-process-bpmn-from-ibm-bpm-to-camunda-step-by-step-tutorial/ If you’re thinking you can export BPMN from IBM expecting to be able to open it in Camunda Modeler you might be in for a surprise. As has been discovered, IBM BPMN exports do not include diagram information that tools like Camunda Modeler use to render a diagram. In this tutorial we’ll step you through two approaches taking advantage of utilities developed by our consulting team to help you create a complete diagram that can be opened and viewed not only in Camunda Modeler but in any BPMN compliant design tool. The Camunda Consulting team has created a set of freely available tools for migrating IBM process flows. You’ll notice there are currently two tools available for IBM. One is...

      The post Migrating process BPMN from IBM BPM to Camunda – Step-by-step Tutorial appeared first on Camunda.

      ]]>
      If you’re thinking you can export BPMN from IBM expecting to be able to open it in Camunda Modeler you might be in for a surprise. As has been discovered, IBM BPMN exports do not include diagram information that tools like Camunda Modeler use to render a diagram. In this tutorial we’ll step you through two approaches taking advantage of utilities developed by our consulting team to help you create a complete diagram that can be opened and viewed not only in Camunda Modeler but in any BPMN compliant design tool.

      The Camunda Consulting team has created a set of freely available tools for migrating IBM process flows. You’ll notice there are currently two tools available for IBM. One is a BPMN converter and the other is a Teamworks file, aka .twx, converter. We’ll go through the BPMN converter tutorial first and then we’ll step through the .twx converter.

      BPMN converter tutorial

      The BPMN Converter is a Maven project which can be opened in just about any integrated development environment. Eclipse and Intellij are two of the more popular IDEs. But first you’ll need to clone or download the migration tools repository.

      For this tutorial we’ll use Eclipse as our IDE.

• Once you’ve cloned or downloaded the Git repository, copy the contents of the IBM BPMN export converter tool repository into a fresh workspace. If, for example, your Git repository is at C:\gitRepos then you’ll find the IBM BPMN converter at C:\gitRepos\migrate-to-camunda-tools\IBM\create diagram from exported BPMN.
      • Copy the entire folder to the workspace of your choice.
      • Next, start Eclipse and select the workspace you just copied the contents into. Once Eclipse has started, navigate to File > Import > General > Projects from Folder or Archive.
      • Click on Next. In the dialog box that appears click on Directory and navigate to the folder you just copied into your workspace. Your screen should look something like this (see below)
      • Click on Finish.

      Import Projects Screenshot

      The project will be imported into your workspace. You may want to update any Java compiler differences between the provided code and your environment but it should work as is.

      Next we’ll create a Run configuration to allow you to run the converter in Eclipse:

      • Right click on the root project folder and select Run As > Run Configurations…
      • In the dialog box that appears double click on Java Application to create a new configuration. The project name is already filled out in the dialog box. You can give this configuration a new name if you want.
      • Next, you’ll need to select a main class. Click on the Search button and you should only see one class available – BPMNDiagramGenerator. Select it and click on OK.
      • Your screen should look something like this:

      Run Configurations Screenshot

      Now you’ll need to provide two arguments, the first one being the BPMN export from IBM and the second being the name of the converted file. Enter the path and file names in the Program arguments section of the Arguments tab enclosed by quotation marks, just in case. There is a provided sample BPMN file to get you started. To use this sample enter the following for the input and output files:

      “./src/main/resources/SampleBPMNfromIBM.bpmn”
      “./src/main/resources/Converted.bpmn”

      Your screen should look something like this:

      Run Configurations Arguments tab Screenshot

      Click on Run. A console window should open and you should see the following in the console:

      BPMN diagram generated
      Diagram ./src/main/resources/SampleBPMNfromIBM.bpmn converted from IBM BPMN and can be found at ./src/main/resources/Converted.bpmn

Using Camunda Modeler, open the Converted.bpmn file. Among the things you’ll notice is that the ‘swimlane’, now a ‘pool’ in Camunda BPMN, does not quite fit as you might expect.

The algorithm just sets arbitrary values for height and width, so you’ll need to adjust the size of the lane accordingly. You’ll also notice that the converted diagram looks nothing like the original. This is expected, as there is nothing in the exported BPMN to indicate any coordinates; that will be addressed in the next section. Lastly, the sequence flows are not your typical rectilinear lines but simple point-to-point lines that change to the more familiar rectilinear lines as you move objects around.

      Here is an example of a process created in Blueworks Live and exported as BPMN:

      example of a process created in Blueworks Live and exported as BPMN

      And here is the process in Camunda Modeler after the missing diagram has been generated and pool has been adjusted accordingly:

      process in Camunda Modeler

      In the next section you’ll step through another tool that uses another IBM BPM export format which will retain the fidelity of the original diagram.

      Converting IBM BPM .twx file exports

      If diagram fidelity is desired and you can export your processes in a .twx (aka Teamworks) format, the .twx migration tool is the way to go. A .twx file is a project interchange format for IBM BPM which contains diagram information in its zipped xml files. The xml files that describe the processes are BPMN-like but most certainly aren’t BPMN. The project we’ll be working with does contain a sample xml file but we’ll step you through how you can extract the required files from your own twx file.

      If you’ve already cloned/downloaded the git repository just repeat the steps detailed earlier to copy and open the project in an Eclipse workspace. You can even use the workspace you created earlier in the tutorial. Just make sure the .twx tool is copied into a separate directory.

Next, we’ll create a Run configuration for the sample included in the project:

• Again, right click on the project root folder and select Run As > Run Configurations….
• In the dialog box that appears, double click on Java Application to create a new configuration. If you’re using the same workspace from before, be sure that create BPMN from TWX export has been selected as the project.
• Search for the main class; this time you may see more than one choice. Be sure to select BPMNGenFromTWX as your main class.
• Lastly, provide two arguments for the class, one for the input and one for the output, like before:

      “./src/main/resources/TWXOriginal.xml” “./src/main/resources/TWXConverted.bpmn”

      Click on Run. A console window should open and you should see the following in the console:

      BPMN diagram generated

      Diagram ./src/main/resources/TWXOriginal.xml converted from IBM .twx export and can be found at ./src/main/resources/TWXConverted.bpmn

      The resources folder contains a PNG file (PictureOfProcess.PNG) of the original process in IBM and it will look like this:

      original process in IBM

      Using Modeler, open the TWXConverted.bpmn and it should look like this:

      process in modeler

      As you can see, by using the .twx export approach you can maintain the original diagram fidelity since the .twx export contains the pertinent diagram information, though not in a BPMN-compliant form. There will be slight differences due to the default shape scaling in IBM and Camunda.

      Your own processes in IBM BPM

      Next, we’ll discuss how you can extract your process xmls from a .twx file and use those extracts as inputs to the tool. The .twx file is just a zip and the easiest way to unzip it is to change the extension from .twx to .zip and, using your favorite zip utility, extract the contents to a folder.

Once the contents have been extracted, navigate to the root folder and then continue on to the /objects folder. As you’ll see, it contains a number of XML files, which include processes along with coach flows. Typically, processes are the largest files, and their names start with "25." followed by a long string of alphanumeric characters. Open these candidate files in your favorite text editor; near the top of each file you’ll find the process definition, including its name.

Search for the process you’d like to convert, make a copy of the file using an easier-to-remember name, and use that copy as the input for another run of the tool. You may also want to change the name of the output file. Happy converting!

      Creating a jar file

      If you’d like to simply create a jar file of either utility, you have some options:

      • One would be to right click on the pom.xml file and select Run As > Maven install.
      • Another would be to right click on the root folder and select Show in Local Terminal and issue the following Maven command: mvn clean package install.

      In either case (or using your own preferred method) you should get a jar file in your /target folder. Copy that jar wherever you’d like and issue the following command in a terminal:

      java -jar yourGeneratedJarFile.jar “your input file” “your output file”

      That’s it! Please feel free to provide feedback in our forum and watch this Git repository for additional converters as they become available.

      Plus, if you’re new to Camunda and want to get hands-on with our product stack for setting up and running processes, join our Camunda Code Studio on May 19th. This free, hands-on online workshop will get you up and running in just three hours, focusing on a Java and Spring Boot use case.

      The post Migrating process BPMN from IBM BPM to Camunda – Step-by-step Tutorial appeared first on Camunda.

      ]]>
      Migrating processes from other vendors to Camunda https://camunda.com/blog/2020/03/migrating-processes-from-other-vendors-to-camunda/ Tue, 17 Mar 2020 11:00:00 +0000 https://wp-camunda.test/migrating-processes-from-other-vendors-to-camunda/ Many vendors support the BPMN standard, but others do not. And even those vendors who support the BPMN standard will omit or extend aspects of it, which can create challenges when you attempt to migrate processes to Camunda. However, there’s a set of tools available to help you migrate processes developed in other vendor platforms. The Camunda Consulting team has created a set of tools for migrating process flows from IBM BPM, IBM Blueworks Live, and TIBCO that can be found here. These tools will generate a BPMN compliant file which can then be further developed into an executable application. While IBM BPM and BlueworksLive are BPMN compliant, IBM does not include diagram information in its BPMN exports. Attempting to...

      The post Migrating processes from other vendors to Camunda appeared first on Camunda.

      ]]>
      Many vendors support the BPMN standard, but others do not. And even those vendors who support the BPMN standard will omit or extend aspects of it, which can create challenges when you attempt to migrate processes to Camunda. However, there’s a set of tools available to help you migrate processes developed in other vendor platforms.

The Camunda Consulting team has created a set of tools for migrating process flows from IBM BPM, IBM Blueworks Live, and TIBCO that can be found here. These tools will generate a BPMN-compliant file which can then be further developed into an executable application. While IBM BPM and Blueworks Live are BPMN compliant, IBM does not include diagram information in its BPMN exports. Attempting to open those exports in Camunda’s Modeler results in a ‘No Diagram To Display’ error message. One of the available migration tools will generate diagram information based on the available BPMN information. It draws the process in a roughly grid-like pattern but does not retain the fidelity of the original diagram. Here is an example of a process created in Blueworks Live and exported as BPMN:

      discovery map

      And here is the process in Camunda Modeler after the missing diagram has been generated:

      twx files

      .twx files

      If diagram fidelity is desired, the .twx migration tool is the way to go. A .twx file is a project interchange format for IBM BPM which contains diagram information in its zipped xml files. For the .twx migration tool to work you’ll need to extract or unzip the .twx file and locate the appropriate process xml in the /objects folder. The process xmls are in a human-readable but seemingly proprietary format. Here is an example of a process diagram in IBM BPM using all of the available BPMN elements in IBM BPM:

      monitoring

      And after conversion using the .twx tool:

      after conversion

The diagram coordinates are retained, although the default shape scaling is a bit different. The shape sizes used are Camunda defaults, though they can be changed by updating the code to reflect your preferences.

      XPDL to BPMN

      Lastly, we have the XPDL to BPMN migration tool. While there are many XPDL to BPMN conversion tools available, most are online tools that either require you to register or upload your process. If you’re not comfortable with either requirement, you can use the XPDL to BPMN migration tool provided here. And since you have access to the source code, you can update the code for extensions or modifications to meet your needs. Here is an example of a BPMN process converted to XPDL:

      XPDL-analog

The XPDL is then converted back to BPMN. A few BPMN notations seemingly have no XPDL analog, so you’ll notice items that are ‘lost’ in translation (escalations, abstract tasks, business rule tasks, none event types, and non-interrupting events); these are subjects for further refinement:

      XPDL-analog

      Have a look at the tools, give them a try, and provide feedback as we’re always interested in your observations.

      Still puzzling over importing processes from other vendors?

Join us for our virtual user conference, which will bring the Camunda Community together from all across the globe. We’re working on finalizing the details right now, but you can sign up for updates and be the first to hear the news as it happens.

Joe will walk through the obstacles you can run into when you attempt to migrate processes from vendors like IBM in his technical track, “Why can’t I import this BPMN?”, and explain how we can help you overcome those challenges with freely available open-source converter utilities. He’ll discuss how they came to be, how they work, and how you can use them.

Automate Any Process, Anywhere

      Digital transformation initiatives can’t avoid all potential roadblocks. Learn how to overcome them when they arise.

      The post Migrating processes from other vendors to Camunda appeared first on Camunda.

      ]]>