Bernd Ruecker, Author at Camunda
https://camunda.com

Migrating Solutions from Camunda 7 to Camunda 8—A Strategy Update
https://camunda.com/blog/2025/02/migrating-solutions-camunda-7-camunda-8-strategy-update/ (Fri, 28 Feb 2025)
We want to make migration to Camunda 8 as easy as we can for you. Read on to learn the latest journey and new strategies you can take to get there.

With the EOL (end of life) of the Camunda 7 CE (Community Edition) in October 2025, we get a lot of requests around migrating existing solutions based on Camunda 7 to Camunda 8.

Camunda 8 is not a direct drop-in replacement for Camunda 7, meaning a simple library swap is insufficient—your solution must be adapted. This post outlines the typical migration journey, available tooling to assist migration, and important timeline considerations.

We have recently adjusted our migration strategy based on learnings from the past year(s), so this information may differ from what you have seen before.

But let’s go step-by-step.

The migration journey

Most of our customers go through the following journey, which is also the basis of our recently refreshed migration guide that walks you through it in detail.

A diagram of the migration journey from Camunda 7 to Camunda 8

Some solutions are easier to migrate and may not require a full transition process. If your solution adheres to Camunda best practices, does not require data migration, and does not involve complex parallel run scenarios, migration may be relatively straightforward.

A diagram of a simpler migration journey from Camunda 7 to Camunda 8

This blog post will not go into all the details of those journeys, which is what the migration guide itself does—but to give you an idea, the “orient yourself” phase describes how to:

When to migrate?

It goes without saying that any new projects should be started using Camunda 8.

For existing Camunda 7 solutions, consider the following support timeline:

  • Camunda 7 CE (Community Edition) will reach EOL in October 2025, with a final release (v7.24) on Oct 14, 2025. No further Camunda 7 CE releases will occur after this date. The GitHub repository will be archived, and issues/pull requests will be closed.
  • Camunda 7 EE (Enterprise Edition) customers will continue to receive security patches and bug fixes until at least 2030.

Although there is some urgency to start planning migration, you still have time to execute it effectively.

There is a second aspect to the timeline. Camunda 8 is a completely rearchitected platform, meaning some features are still being reintroduced. If your solution depends on features not yet available, you may need to wait for the appropriate Camunda 8 version. Prominent examples are task listeners (planned for 8.8) or the business key (planned for 8.9). We are also running an architecture streamlining initiative to improve the core architecture, which will be released with Camunda 8.8 and introduces a new, harmonized API. Hence, unless you are under time pressure or would lose momentum, we generally recommend waiting for 8.8 and considering 8.8 or 8.9 as the ideal releases to migrate to.

Check the public feature roadmap to see when your required features will be available.

Feature-timeline

That said, targeting the 8.9 release, for example, doesn’t mean you should wait for it and postpone migration planning. Many preparatory steps—analysis, budgeting, and project planning—should begin as soon as possible. Migration tasks can often be performed in advance or using early alpha versions of upcoming releases.

Preparedness-timeline

Migration tooling

To support migration, we have several tools available, most importantly:

These are the tools our consultants use with great success at customers. The tools are open source (Apache 2.0 license) and can easily be adapted or extended for your environment.

However, we acknowledge that some tools are not as self-explanatory as they should be. We have also seen a growing need for additional migration tooling, which is why we are investing in the following tools, targeted for the Camunda 8.8 (October 2025) release:

  • Migration Analyzer: Enhancing user experience around the diagram converter and adding DMN support.
  • Data Migrator: Migrates runtime instances (running process instances) and copies audit data (history) from Camunda 7 to Camunda 8. (Limited to Camunda 8 running on RDBMS, planned for 8.9 release.)
  • Code Converter: A collection of code conversion patterns (e.g., refactoring a JavaDelegate to a JobWorker) with automation guidance (likely provided as OpenRewrite recipes).

These tools aim to simplify and streamline the migration process.

Migrating from Camunda 7 CE (Community Edition)

We also regularly get the question of whether migration from the CE is possible. Of course it is, and it works exactly the same way as for our EE (Enterprise Edition).

If you are worried about the timeline because of the EOL of the community edition, you can switch to our Camunda 7 Enterprise Edition now and benefit from the extended support timelines right away.

Where do I get help?

With the updated migration guide, we aim to provide clearer guidance on migration tasks. We will continue improving this guide iteratively—please share your feedback via GitHub or our forum.

You can further leverage:

Next steps

As a Camunda 7 user, your next steps towards migration are:

  1. Orient yourself and analyze your existing solution. This will help you understand the necessary tasks and effort so you can plan and budget your project. This can ideally be supported by a Camunda consultant or certified Camunda partner. It will also inform your migration timeline, ideally targeting Camunda 8.8 or 8.9.
  2. Migrate your solution, adjusting your models and code.
  3. Plan data migration and roll out your migrated solution.

Let’s go!

We know that some of you felt a bit lost with migration in the last year and we are truly sorry for any confusion around the topic. Our priority has been to build the best process orchestration and automation platform in the world—but we fully recognize that supporting existing Camunda 7 users to get to this future is equally critical.

In 2025, migration support will be a top priority, led by a strategic task force headed by Mary Thengvall and myself (Bernd Ruecker). We are committed to making this transition as smooth as possible.

Looking forward to discussing migration with you! Join the conversation in our forum.

Navigating Technical Transactions with Camunda 8 and Spring
https://camunda.com/blog/2023/12/navigating-technical-transactions-camunda-8-spring/ (Wed, 13 Dec 2023)
Wondering how technical transactions work with Camunda and the Spring framework? Learn more about transactional behavior in this helpful article.

We regularly answer questions around how technical transactions work when using Camunda (in the latest version 8.x) and the Spring framework. For example, what happens if you have two service tasks, and the second call fails with an exception? In this blog post, I’ll sketch typical scenarios to make the behavior more tangible. I will use code examples using Java 17, Camunda 8.3, Spring Zeebe 8.3, Spring Boot 2.7 and Spring Framework 5.3.

Let’s use the simple BPMN process below:

A sample BPMN process showing three service tasks in a sequence.

Every service task has an associated job worker, and every job worker will write two different JPA entities to a single database using two different Spring Data Repositories:

Job-worker-repositories

We can use our example to show technical implications of how you write these workers. The three workers (for task A, B, and C) are implemented slightly differently.

Let’s go over the different scenarios one by one.

Scenario A: Job worker calls repositories directly

The job worker is a Spring bean and gets the repositories injected. The job worker uses these repositories to save the new entities:

@Autowired
private SpringRepository1 repository1;

@Autowired
private SpringRepository2 repository2;

@JobWorker(type = "taskA")
public void executeServiceALogic() {
   repository1.save(new EntityA());
   repository2.save(new EntityB());
}

Note that we haven’t configured anything about transaction management yet. Hence, the calls to the repositories will not run within one shared open transaction; instead, each repository creates its own transaction, which is committed right after saving the entity. The two repository calls therefore do not span a joined transaction. This is also visualized here:

Job-worker-repositories-directly

Completing the job within Zeebe comes after both transactions are committed. Zeebe does not need a transaction manager and cannot join one.

If you are more into sequence diagrams, you can see the same information presented here:

Job-worker-repositories-directly-sequence

What does this mean for you? To understand the implications of transactional behavior, you need to look at failure cases. In the example above, we could have the following interesting error scenarios:

  1. The worker crashes after the job is activated.
  2. The worker crashes after the first repository has successfully saved its entity.
  3. The worker crashes after the second repository has successfully saved its entity.
  4. Something crashes after the job completion was sent to Zeebe.

The error cases are indicated in the sequence diagram below:

Job-worker-repositories-directly-sequence-errors

Let’s go over these scenarios one by one.

#1 The worker crashes after the job was activated

Nothing really happened so far. The job is locked for the specific job worker for some time, but after this timeout the job will be picked up by any other job worker instance. This is normal job retrying and no problem at all.

#2 The worker crashes after the first repository successfully saved its entity

The transaction of Repository1 is already committed, so EntityA is already persisted in the database. This will not be rolled back.

As the job worker crashed, EntityB will never be written and the job will not be completed in Zeebe. Now, the retrying logic of Zeebe will make sure that another job worker will execute this job again (after the lock timeout expires).

This has two important implications:

  1. Because of the retry, the repository1.save method will be called again. That means we have to make sure this isn’t a problem, which is known as idempotency. We’ll revisit this later.
  2. We might have an inconsistent business state for a (short) period of time, as the business might expect that EntityA and EntityB always exist together. Assume a more business-relevant example, where you deduct credit points in a first transaction to extend a subscription in a second transaction. The inconsistency is then a customer with reduced credits but the same old subscription. This is also known as eventual consistency, and it is a typical challenge in microservice environments. I talked about it in Lost in Transaction. The gist is that you have two possibilities here: (1) decide that this is unbearable and adjust your transaction boundaries, which I will discuss later, or (2) live with this inconsistency, as the retrying ensures it is resolved eventually.

In our example, consistency is restored after the retry succeeded and all methods were correctly called, so this might not be a problem at all. See also embracing business transactions and eventual consistency in the Camunda best practices.

Sometimes people complain about why Camunda can’t simply “do transactions” to avoid thinking about those scenarios. I already wrote about achieving consistency without transaction managers and I still believe that distributed systems are the reality for most of what we do nowadays. Additionally, distributed systems are by no means transactional. We should embrace this and get familiar with the resulting patterns. It is actually also not too hard—the above two implications are already the most important ones, and they can be handled.

Idempotency

Let’s get back to idempotency. I see two easy ways to sort this out (see also 3 common pitfalls in microservice integration — and how to avoid them):

  • Natural idempotency. Some methods can be executed as often as you want because they just flip some state. Example: confirmCustomer()
  • Business idempotency. Sometimes you have business identifiers that allow you to detect duplicate calls. Example: createCustomer(email)

If these approaches will not work, you will need to add your own idempotency handling:

  • Unique Id. You can generate a unique identifier and add it to the call. Example: charge(transactionId, amount). This has to be created early in the call chain.
  • Request hash. If you use messaging you can do the same thing by storing hashes of your messages.

In our scenario above, we might be able to store a UUID in the process and in the entities, and that allows us to do a duplicate check before we insert entities:

@JobWorker(type = "taskA-alternative-idempotent")
public void executeServiceALogic(@Variable String someRequestUuid) {
   repository1.save(new EntityA().setSomeUuid(someRequestUuid));
   repository2.save(new EntityB().setSomeUuid(someRequestUuid));
}
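
For completeness, the duplicate check itself could look roughly like the following sketch. Note that findBySomeUuid is a hypothetical Spring Data query method you would add to the repositories; the exact check depends on your data model:

@JobWorker(type = "taskA-alternative-idempotent")
public void executeServiceALogic(@Variable String someRequestUuid) {
   // Only insert an entity if none exists yet for this request UUID
   // (findBySomeUuid is a hypothetical derived query method returning an Optional)
   if (repository1.findBySomeUuid(someRequestUuid).isEmpty()) {
      repository1.save(new EntityA().setSomeUuid(someRequestUuid));
   }
   if (repository2.findBySomeUuid(someRequestUuid).isEmpty()) {
      repository2.save(new EntityB().setSomeUuid(someRequestUuid));
   }
}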

But without knowing the exact context, it is impossible to advise on the best strategy. Because of this, it is especially important to have those problems top of mind to make sure to plan for the right identifiers to be created at the right time and added to relevant APIs.

See also writing idempotent workers in the Camunda best practices.

#3 The worker crashes after the second repository successfully saved its entity

This is very comparable to #2, but this time both entities were written to the database before the crash. So with the retry, both calls will be re-executed. Therefore, the call to repository2 needs to be idempotent.

#4 The worker or network crashes after the job completion was sent to Zeebe

After sending the job complete command to Zeebe, which is done automatically by Spring Zeebe for you, either the server, the network, or even the client might crash. In all of those situations we don’t know if the job completion was accepted by the Zeebe engine.

Just for the sake of completeness, Zeebe has a transactional concept internally. There is a well-defined state for every incoming command, and only once a command is committed (which includes replication to the other brokers) will it be executed.

So if it is not yet committed, we are back in situation #3 and will retry the job. If it is committed, the workflow will move on. In case of a network failure, the client application will not know that everything worked fine but will catch an exception instead.

This is not really a problem and the business state is consistent, but you should not depend on successful job completion to trigger further business logic in your client application, as that code might not be executed in case of an exception. Let’s revisit this when talking about Service Task C.

Scenario B: JobWorker calls @Transactional bean

Instead of calling the repositories directly from the job worker, you might have a separate bean containing the business logic to do those calls, and then call this bean from your job worker:

@Autowired
private TransactionalBean transactionalBean;

@JobWorker(type = "taskB")
public void executeServiceBLogic() {
   transactionalBean.doSomeBusinessStuff();
}

This might be a better design anyway, as the job worker is just an adapter to call business logic, not a place to implement business logic.

Beyond that, you can now use the @Transactional annotation to change the transactional behavior. This ensures all repository calls within that bean use the same transaction manager, which will either commit or roll back everything together.

Jobworker-transactional-bean
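
As a rough sketch (reusing the placeholder names from the snippets above; @Component, @Autowired, and @Transactional come from the Spring Framework), such a bean could look like this:

@Component
public class TransactionalBean {

   @Autowired
   private SpringRepository1 repository1;

   @Autowired
   private SpringRepository2 repository2;

   // Both saves run in one Spring-managed transaction:
   // either both entities are committed, or everything is rolled back
   @Transactional
   public void doSomeBusinessStuff() {
      repository1.save(new EntityA());
      repository2.save(new EntityB());
   }
}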

This influences the second error case from above (and similarly the third): if the worker application crashes after writing the first entity but before the transaction commits, nothing is committed and no entity is in the database.

While this is probably a great improvement for your application, and it might also fit your consistency requirements better, note that it does not solve the other error scenarios, and you still have to think about idempotency.

Note that technically you could also annotate the job worker method itself with @Transactional (leading to the same behavior just described), but we typically advise against this, as it can easily lead to confusion instead of clarity about the transaction boundaries.

Scenario C: Job completion is called within the transaction boundary

Now let’s look into the third possible scenario: you could disable the Spring Zeebe auto-completion of the job and issue the API call to complete the job yourself. This lets you control the exact point in time this call happens, which makes it possible to call it from within your transactional bean.

Job-completion-transaction-boundary
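
Sketched in code, this could look roughly like the following. The autoComplete = false flag and the injected JobClient/ActivatedJob parameters are based on the Spring Zeebe API, and doBusinessStuffAndCompleteJob is a made-up method name; details may differ in your setup:

@JobWorker(type = "taskC", autoComplete = false)
public void executeServiceCLogic(final JobClient client, final ActivatedJob job) {
   transactionalBean.doBusinessStuffAndCompleteJob(client, job);
}

And in the transactional bean:

@Transactional
public void doBusinessStuffAndCompleteJob(JobClient client, ActivatedJob job) {
   repository1.save(new EntityA());
   repository2.save(new EntityB());
   // Complete the job while the Spring transaction is still open:
   // if this call throws (e.g. due to a network failure), the transaction is rolled back,
   // even though the engine may already have committed the completion
   client.newCompleteCommand(job.getKey()).send().join();
}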

For error scenarios 2 and 3 from above (the job worker crashes after entity A or B was inserted), nothing has changed: the error leads to a normal rollback, nothing has been persisted, and retries will take care of things.

But consider error scenario 4, where the behavior changes significantly. Assume the job completion command was committed properly on the workflow engine, but the network failed to deliver the result back to your client application. In this case, the blocking call completeCommand.send().join() will result in an exception. This in turn leads to the Spring transaction being aborted and rolled back, which means that the entities will not be written to the database.

I want to emphasize this: The business logic was not executed, but the process instance moved on. There will be no more retries.

At-least-once vs. at-most-once

So we just changed the behavior to what is known as at-most-once semantics: we make sure the business logic is called at most once, but not more often. The catch is that it might never be called at all (otherwise it would be exactly once).

This contrasts with scenarios A and B, where we had at-least-once semantics: we make sure the business logic is called at least once, but we might actually call it more often (due to the retries). The following illustration taken from our developer training emphasizes this important difference:

At-least-once-vs-at-most-once

You might want to revisit achieving consistency without transaction managers to read more about at-least-once vs at-most-once semantics, and why exactly once is not a practical way to achieve consistency in typical distributed systems.

Note that there is one other interesting implication of at-most-once scenarios that is probably not obvious: the workflow engine can move on before the business logic is committed. So in the above example, the job worker for service task B might actually be started before the changes of service task A are committed and, for example, visible in the database. If B expects to see that data, this might lead to problems you have to be aware of.

To summarize, this change might not make sense in our example. Use cases for at-most-once semantics are really rare; one example could be customer notifications that you would rather lose than send multiple times and confuse a customer. The default is at-least-once, which is why Spring Zeebe’s auto-completion also makes sense.

Thinking about transaction boundaries

I wanted to give you one last piece of food for thought in this already too long post. This is about a question we also get regularly: can’t we do one big transaction spanning multiple service tasks in BPMN? So basically this:

Transaction boundaries

The visual already gives you a clue: this is neither technically possible with Camunda 8, nor is it desirable. As mentioned, Camunda will not take part in any transaction, which is why it is not possible. But let me quickly explain why you also do not want it.

1. It couples the process model to the technical environment

Assume you have the model above in production and rely on the functionality that in case of any error, the process instance is simply rolled back. Now, a year later, you want to change the process model. The business decides that before doing Task B you first need to manually inspect suspicious process instances. The BPMN change is simple:

Bpmn-process-model-environment-coupled

But notice that now you will no longer roll back Service Task A when B fails for cases that go through the user task.

You would also not see the exact transaction boundaries visually in your process model, so you would need to explain this peculiarity every time you discuss the model. But good models should not need too many additional words!

2. It is only possible in edge cases

Such a transactional integration would only work when you use components that either work in one single database only, or that support two-phase commit, also known as XA transactions. At this point I want to quote Pat Helland from Amazon: “Grown-ups don’t use distributed transactions.” Distributed transactions don’t scale, and a lot of modern systems don’t provide support for it anyway (think for example of REST, Apache Kafka, or AWS SQS). To sum this up: in real-life, I don’t see XA transactions used in distributed systems successfully.

If you are interested in such discussions, the domain-driven design or microservices community has a lot of material on it. In Lost in Transaction I also look at how to define consistency (= transaction) boundaries, which are typically tied to one domain. Translated to the problem at hand, I would argue that if something has to happen transactionally, it should probably happen in one service call, which boils down to one transactional Spring bean. I know this might simplify things a bit—but the general direction of thought is helpful.

Summary

So this was a lot, let me recap:

1. Camunda 8 does not take part in Spring-driven transactions. This typically leads to at-least-once semantics due to the retrying of the engine.

2. You can have transactional behavior within the scope of one service task. To achieve this, delegate to a @Transactional method in your own Spring bean.

3. You should have a basic understanding of idempotency and eventual consistency, which you need for any kind of remote call anyway, starting with the first REST call in your system!

4. Failure scenarios are still clearly defined and can be taken care of; not using XA transactions and two-phase commit doesn’t mean we are going back to chaos!

As always: I love to hear your feedback and am happy to discuss.

Pro-code, Low-code, and the Role of Camunda
https://camunda.com/blog/2023/12/pro-code-low-code-role-of-camunda/ (Fri, 08 Dec 2023)
Pro-code is our heart and soul, but people and processes are diverse. Our optional low-code features support more use cases without getting in the way of pro-code developers.

Developers regularly ask me about Camunda’s product strategy. Especially around the Camunda 8 launch they raised concerns that we “forgot our roots” or “abandoned our developer-friendliness”—the exact attributes that developers love us for. They presume that we “jumped on the low-code train” instead, because we now have funding and need to “chase the big dollars.” As a developer at heart myself I can tell you that nothing is further from the truth, so let me explain our strategy in this post.

Here is the TL;DR: We will stay 100% developer-friendly, and pro-code is our heart and soul (or bread and butter if you prefer). But the people who create process solutions are diverse, as are the processes that need to be automated. So for some use cases low-code does make sense, and it is great to be able to support those cases. But low-code features in Camunda are optional and do not get in the way of pro-code developers.

For example, your worker code can become a reusable Connector (or be replaced by an out-of-the-box one) that is configured in the BPMN model using element templates. But you don’t have to use that and can just stay in your development environment to code your way forward. This flexibility allows you to use Camunda for a wide variety of use cases, which prevents business departments from being forced into shaky low-code solutions just because IT lacks resources.

But step by step…

Camunda 8 loves developers

First of all, Camunda 8 focuses on the developer experience in the same way as—or even more strongly than—former Camunda versions. The whole point of providing Camunda as a product was to break out of unwieldy, huge BPM or low-code suites that are simply impossible to use in professional software engineering projects (see the Camunda story here for example). This hasn’t changed. The heart of Camunda is about bringing process orchestration into the professional software developer’s tool belt.

Especially with Camunda 8, we put a lot of focus on providing an excellent developer experience and a great programming model. And we now also extend that beyond the Java ecosystem. We might still have to do some homework here and there (for example, getting the Spring integration to be a supported product component in 2024)—but it is very close to what we always had. Let me give you some short examples (you can find working code on GitHub).

Writing worker code (aka Java Delegates):

@JobWorker(type = "payment")
public void retrievePayment(ActivatedJob job) {
  // Do whatever you need to, e.g. invoke a remote service
  String orderId = (String) job.getVariablesAsMap().get("orderId");
  paymentRestClient.invoke(...);
}

Using the Spring Boot Starter as Maven dependency:

<dependency>
   <groupId>io.camunda</groupId>
   <artifactId>spring-zeebe-starter</artifactId>
   <version>${camunda.version}</version>
</dependency>

Writing a JUnit test case (with an in-memory engine):

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE)
@ZeebeSpringTest
public class TestCustomerOnboardingProcess {
  // ...
  @Test
  public void testAutomaticOnboarding() throws Exception {
      // Define expectations on the REST calls
      // 1. http://localhost:8080/crm/customer
      mockRestServer
              .expect(requestTo("http://localhost:8080/crm/customer")) //
              .andExpect(method(HttpMethod.PUT))
              .andRespond(withSuccess("{\"customerId\": \"12345\"}", MediaType.APPLICATION_JSON));

      // given a REST call
      customerOnboardingRestController.onboardCustomer();

      // Retrieve process instances started because of the above call
      InspectedProcessInstance processInstance = InspectionUtility.findProcessInstances().findLastProcessInstance().get();

      // We expect to have a user task, complete it
      waitForUserTaskAndComplete("TaskApproveCustomerOrder", Collections.singletonMap("approved", true));

      // Now the process should run to the end
      waitForProcessInstanceCompleted(processInstance, Duration.ofSeconds(10));

      // Let's assert that it passed certain BPMN elements (more to show off features here)
      assertThat(processInstance)
              .hasPassedElement("EndEventProcessed")
              .isCompleted();

      // And verify it caused the right side effects on the REST endpoints
      mockRestServer.verify();
  }
}

The only real change from Camunda version 7 to 8 is that the orchestration engine (or workflow engine if you prefer that term) runs as a separate Java process. So the above Spring Boot Starter actually starts a client that connects to the engine, not the whole engine itself. I wrote about why this is a huge advantage in moving from embedded to remote workflow engines. Summarized, it is about isolating your code from the engine’s code and simplifying your overall solution project (think about optimizing the engine configuration or resolving third-party dependency version incompatibilities).

The adjusted architecture without a relational database allows us to continuously work on scalability and performance and make big leaps with Camunda 8, allowing use cases we could not tackle with Camunda 7 (e.g. multiple thousands of process instances per second, geo-redundant active/active data centers, etc.).

A common misconception is that you have to use our cloud/SaaS offering, but this is not true. You can run the engine self-managed as well and there are different options to do that. The SaaS offering is an additional possibility you can leverage, freeing you from thinking about how to run and operate Camunda, but it is up to you if you want to make use of it.

This is a general recurring theme in Camunda 8: We added more possibilities you can leverage to make your own life easier—but we do not force anyone to use them.

The prime example of new possibilities are our low-code accelerators (e.g. Connectors). Let’s quickly dive into why we do low-code next before touching on how Connectors can help more concretely.

Existing customers adopt Camunda for many use cases

We learned from our customers that they want to use Camunda for a wide variety of use cases. Many of the use cases are core end-to-end business processes, like customer onboarding, order fulfillment, claim settlement, payment processing, trading, or the like.

But customers also need to automate simpler processes. Those processes are less complex, less critical, and typically less valuable, but they still exist, and automating them has a return on investment or is simply necessary to fulfill customer expectations. Good examples are master data changes (e.g. address or bank account data), bank transfer limits, annual mileage reports for insurers, delay compensation, and so on.

Process-complexity
Process != process, there are typically some highly critical core processes, but also a long tail of simpler ones

In the past, organizations often did not consider using Camunda for those processes, as they could not set up and staff software development projects for simpler, less critical processes.

And the non-functional requirements for those simpler process automation solutions differ. While the super-critical, highly complex use cases are always implemented with the help of the IT team, to make sure the quality meets the expectations for this kind of solution and everything runs smoothly, the use cases on the lower end of that spectrum don’t have to comply with the same requirements. If they are down, it might not be the end of the world. If they get hacked, it might not be headline news. If there are weird bugs, it might just be annoying. So it is probably OK to apply a different approach to create solutions for these less critical processes.

Categorizing use cases

The important thing is to make a conscious choice and not apply the wrong approach for the process at hand. What we have seen working successfully is to categorize use cases and place them into three buckets:

  • Red: Processes are mission critical for the organization. They are also complex to automate and probably need to operate at scale. Performance and information security can be very relevant, and regulatory requirements might need to be fulfilled. Often we talk about core end-to-end business processes here, but sometimes also other processes might be that critical. For these use cases you need to do professional software engineering using industry best practices like version control, automated testing, continuous integration and continuous delivery. The organization wants to apply some governance, for example around which tools can be used and what best practices need to be applied.
  • Yellow: Processes are less critical, but still the organization’s operations would be seriously affected if there are problems. So you need to apply a healthy level of governance, but need to accept that solutions are not created in the same quality as for red use cases, mostly because you simply have a shortage of software developers.
  • Green: Simple automations, often very local to one business unit or even an individual. These are often quick fixes stitched together to make one’s life a bit easier, and the overall organization might not even notice if they break apart. For those uncritical use cases, the organization can afford to leave a lot of freedom to people, so typically there is no governance or quality assurance applied.

While the red use cases are traditionally done with Camunda, and the green use cases are traditionally done with Office-like tooling or low-code solutions (like Airtable or Zapier), the yellow bucket gets interesting. And this is a long tail of processes that all need to be automated with a fair level of governance, quality assurance, and information security.

A chart sorting different processes into red, yellow and green buckets based on their complexity.
Categorizing processes by their criticality and complexity, as this influences non-functional requirements of automation solutions

We already know organizations using Camunda for those yellow use cases. In order to do this and to ease solution creation, they developed low-code tooling on top of Camunda. A prime example is Goldman Sachs, who built a quite extensive platform based on Camunda 7 (side note: in later presentations they also talk about a differentiation between core banking use cases and the long tail of simpler processes across the firm). Speaking to those customers, we found a recurring theme and used this feedback to design product extensions that those organizations could have used off-the-shelf (if they had been there when they started). And we designed this solution to not get in the way of professional software developers when implementing red use cases around critical core processes.

I am not going into too much detail around all of those low-code accelerators in this post, but it is mostly about Connectors, rich forms, data handling, the out-of-the-box experience of tools like Tasklist, and browser-based tooling.

For me it is important to re-emphasize the pattern mentioned earlier: those accelerators are an offer—you don’t have to use them. And if you look deeper, those accelerators are not mystical black boxes. A Connector, for example, is “just” a reusable job worker with a focused properties panel (if you are interested in code, check out any of our existing out-of-the-box Connectors), and the properties panel can even be generated from Java code. Camunda Marketplace helps you make this reusable piece of functionality discoverable. Existing Connectors are available in source form and can be extended if needed.

A model showing Camunda Connectors consist of ConnectorCode (Java) and ModelerUI (Element Templates) between Camunda and an Endpoint.
Connectors are essentially reusable job workers with a property panel definition for easy configuration
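
To make the “just a reusable job worker” point concrete, here is a minimal sketch of an outbound Connector based on the Connector SDK. The class, type string, and variable names are made up, and the exact variable-binding call can differ between SDK versions:

@OutboundConnector(name = "ping", inputVariables = {"message"}, type = "io.example:ping:1")
public class PingConnectorFunction implements OutboundConnectorFunction {

   @Override
   public Object execute(OutboundConnectorContext context) {
      // Bind the input variables configured via the element template to a request object
      PingRequest request = context.bindVariables(PingRequest.class);
      // Call the third-party endpoint here and return the result as process variables
      return Map.of("echo", request.message());
   }

   record PingRequest(String message) {}
}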

Democratization and acceleration by Connectors

There are two main motivations to use Connectors.

Software developers might simply become more productive by using them, and this is what we call acceleration. For example, it might simply be quicker to use a Twilio Connector instead of figuring out the REST API for sending an SMS and how it is best called from Java. As mentioned, if this is not true for you, e.g. because you already have an internal library that hides the complexity of using Twilio, that is great; just keep using it. Also, when you want to write more JUnit tests, it might be simpler to write the integration code in Java yourself. This is fine! You are not forced to use Connectors; they are an offer, and if they make your life easier, use them.

The other more important advantage is that it allows a more diverse set of people to take part in solution creation, which is referred to as democratization. So for example, a tech-savvy business person could probably stitch together a simpler process using Connectors, even if they cannot write any programming code. Remember, we are talking about the long tail of simpler processes (yellow) here.

A powerful pattern then is that software developers enable other roles within the organization. One way of doing this is to have a Center of Excellence where custom Connectors are built, shaped specifically around the needs of the organization. Those Connectors are then used by other roles to stitch together the processes. One big advantage is that your IT team has control over how Connectors are built and used, allowing them to enforce important governance rules, e.g. around information security or secret handling (something that is a huge problem with typical low-code solutions).

You could also mix different roles in one team creating a solution: the developer focuses on the technical work of setting up Connectors properly, while more business-oriented people concentrate on the process model. And of course, there are many nuances in between.

An image showing how experienced developers can enable low-code developers or power users.
Developers can enable other roles to take part in solution creation, e.g. by preparing reusable Connectors

This is comparable to a situation we know from software vendors embedding Camunda into their software for customization. Their software product then typically comes with a default process model, and consultants can customize the processes to end-customer needs within the limits the software team built in.

Avoiding the danger zone when doing vendor rationalization and tool harmonization

Many organizations currently try to reduce the number of vendors and tools they are using. This is understandable on many levels, but it is very risky if the different non-functional requirements of green, yellow, and red processes are ignored.

For example, procurement departments might not want to have multiple process automation tools. But for them, the difference between Camunda and a low-code vendor is not very tangible as they both automate processes.

For red use cases, customers can still easily argue why they cannot use a low-code tool, because those tools simply don’t fit into professional software development approaches. But for yellow use cases, this gets much harder to argue. This can lead to a situation where low-code tools, made for green use cases, are applied to yellow ones. This might work for very simple yellow processes, but it can easily become risky if processes get too complex, or simply if requirements around stability, resilience, maintainability, scalability, or information security rise over time. This is why I consider this a big danger zone for companies to be in.

Danger-zone
Applying low-code tools for yellow use cases can be risky, especially if non-functional requirements change over time

Camunda’s low-code acceleration features allow you to use Camunda in more yellow use cases, as you don’t have to involve software developers for everything. But if non-functional requirements rise, you can always fulfill those with Camunda, as it is built for red use cases as well. Just as an example, you could start adding automated tests whenever the solution starts to become too shaky. Or you could scale operations if you face unexpectedly high demand (think of flight cancellations around the Covid pandemic—this was a yellow use case for airlines, but it became highly important to process them efficiently basically overnight).

To summarize: It’s better to target yellow use cases with a pro-code solution like Camunda with added low-code acceleration layers that you can use, but don’t have to. This prevents risky situations with low-code solutions that cannot cope with rising non-functional requirements.

And to link back to our product strategy: With Camunda 8 we worked hard to allow even “redder” use cases (because of improved performance, scalability, and resilience), as well as more yellow use cases at the same time. So you can go further left (red) and right (yellow) at the same time.

Summary

In today’s post I re-emphasized that Camunda is and will remain developer-friendly. Pro-code (red) use cases are our bread-and-butter business, and honestly those are super exciting use cases where we can play to our strengths. This is strategically highly relevant, even if you might see a lot of marketing messaging around low-code acceleration at the moment.

Those low-code accelerators allow building less complex solutions (yellow) too, where typically other roles take part in solution creation (democratization, acceleration, and enablement). This helps you reduce the risk of using the wrong tool for yellow use cases and ending up in headline news.

You can read more about our vision for low-code here, or if you’re curious about how our Connectors work, feel free to check out our docs to learn more.

Using the SQL Connector to Streamline Trade Ops Processes
https://camunda.com/blog/2023/04/using-sql-connector-streamline-trade-ops-processes/ (Thu, 13 Apr 2023)
There are new Connectors available for SQL databases. Here's how you can get them and a quick example of how you could use them today in financial services.

Camunda’s partner Infosys recently contributed some Connectors for various SQL databases (Oracle, MS-SQL, PostgreSQL, and MySQL) to the Camunda Community. These Connectors allow you to execute common data definition tasks (called DDL, for example creating a table), do data manipulations (called DML, for example inserting data into a table), and query data from the database.

I was immediately reminded of a recent customer scenario in the financial industry where that would have been quite handy to have. Based on this business example from trading operations, I created the example for today’s blog post and want to walk you through it using an SQL Connector in Camunda Platform 8. Of course, I modified the real-life scenario, so if you work in banking don’t expect it to be 100% accurate.

A new requirement around beneficial owners

The customer operated a trading application, where new stock trades needed to be matched with the stock exchange and passed on to custodian banks for execution. New compliance regulations required the broker to transfer beneficial owner information about sellers (beneficial owner information is about the natural persons behind legal entities, even over a hierarchy of legal entities). Such information needed to be captured, but their existing home-grown trading system did not do this out of the box. While they wanted to integrate such information in that system, they lacked the time to do this before the given deadline. So, they went for a quick workaround instead.

While you can decide for yourself if such a workaround is good or bad, it was impressive to see that they could add it within days to their daily operations.

They already had Camunda as the process orchestrator in production, and already executed the customer onboarding process on Camunda. This gave them a great opportunity to simply add the proper activity to store beneficial owner information at the right point in time.

As a backend, they created a simple database table in their existing PostgreSQL installation:

CREATE TABLE BeneficialOwner (customerId VARCHAR, individual VARCHAR, organization VARCHAR, share VARCHAR)

The customer onboarding process was extended by the activity leveraging the PostgreSQL Connector:

A BPMN diagram of customer onboarding using the PostgreSQL connector

At the same time, they also run trade execution processes on Camunda, so they could add an activity there to query this information and transfer it to the custodian as required:

A BPMN diagram of the trade execution process where the activity can be added

Again, I agree that this might not be a beautiful architecture and you might very well prefer a proper microservice or even to extend the trading or CRM system to fulfill the requirements. Still, having the possibility to adjust processes this easily is a great opportunity to add resilience to your processes, maybe just as a temporary workaround to buy some time. However, you might also want to keep in mind that processes differ very much in their criticality for the business, so while such workarounds might scare architects responsible for payments or trading, such an architecture might also be just right for other processes like simple approvals.

Deploying the PostgreSQL Connector

Let’s turn our attention to how to run the PostgreSQL Connector. The Connector is not provided out of the box by Camunda, so it is also not ready for use when you spin up a new cluster in Camunda Platform 8 SaaS.

Instead, the Connector itself is provided as an open source component by Camunda’s partner Infosys on their own GitHub repository: https://github.com/Infosys/camunda-connectors/blob/main/connector-postgresql/. As Infosys is not yet providing a binary, you must build the Connector from the sources yourself:

git clone https://github.com/Infosys/camunda-connectors
cd camunda-connectors/connector-postgresql
mvn clean install

A Connector then needs a runtime to work. This runtime is basically a simple Java application that contains one or more Connectors and can connect to Camunda Platform as illustrated in the following picture:

A diagram showing how the Connector Runtime can connect to Camunda Platform

One great advantage of this architecture is that you can run one or more Connector runtimes, depending on your exact requirements. For example, you might want to co-locate the Connector runtime for PostgreSQL next to your database. This allows you to leverage Camunda Platform 8 SaaS while connecting to PostgreSQL only from the Connector runtime that runs locally in the network of the database. Or, you might separate Connectors that have strict performance requirements.

Technically speaking, you can create a Maven project that pulls in the Connectors you want to run as dependencies, as I did for the PostgreSQL Connector here:

<dependencies>
  <dependency>
    <groupId>io.camunda</groupId>
    <artifactId>spring-zeebe-connector-runtime</artifactId>
  </dependency>
  <dependency>
    <groupId>com.infosys.camundaconnectors.db.postgresql</groupId>
    <artifactId>connector-postgresql</artifactId>
  </dependency>
</dependencies>

The full pom.xml is available as an example on GitHub:

https://github.com/berndruecker/trade-ops-camunda-8-sql-connector/tree/main/connector-runtime.

To play around, you could add the connection information for a Camunda Platform 8 SaaS cluster to the file src/main/resources/application.properties and start the Java application afterwards. Then you will have the PostgreSQL Connector ready to do any work.

To allow modelers to make use of the Connector, you will also need the so-called element template (the UI part of the Connector) deployed to your Camunda Modeler. You can find the element template, which is a JSON file, for PostgreSQL here: https://github.com/Infosys/camunda-connectors/blob/main/connector-postgresql/element-templates/postgresql-database-connector.json.

If you use Web Modeler, you can import the file there:

You can upload the files to Web Modeler through a dropdown menu.

If you use Desktop Modeler, you can add the JSON file to the right place on your local disk (for me on Windows this is %APPDATA%\camunda-modeler\resources\element-templates), see the documentation for details.

Using the PostgreSQL Connector

Now you are ready to use the Connector in your BPMN model and configure these new PostgreSQL tasks properly in the palette (I used Web Modeler for the screenshot):

Using the Connector in your BPMN model.

You can find two executable BPMN models to import and start with on GitHub as well: https://github.com/berndruecker/trade-ops-camunda-8-sql-connector/tree/main/models. One is for onboarding new customers (this saves beneficial owner information in PostgreSQL) and one is for trade execution, querying it from there. I leave it up to the reader to dive into the various settings to make those queries work, but as you can already see in the screenshot above, it is not too much to configure.

Camunda Platform 8 offers built-in support for secrets management, which makes it easy to move the secrets for PostgreSQL to an externally provided secret that can differ for every environment.

For my demo I have used a managed instance of PostgreSQL via ElephantSQL.

Want to see it in action?

I recorded a quick walkthrough here

Next steps

To connect to PostgreSQL, MySQL, MS-SQL, or Oracle, please check out the Connectors from Infosys. To connect to other endpoints, check out our awesome list of existing Connectors, which also points to other open source initiatives providing Connectors. If there is nothing that suits your needs, you are of course welcome to develop your own Connector and share it with the community via the Camunda Community Hub.

As always, if you have questions or feedback, don’t hesitate to reach out, ideally via our forum.

Running Camunda 8 on OpenShift
https://camunda.com/blog/2023/03/running-camunda-8-on-openshift/ (Tue, 21 Mar 2023)
Do you use Red Hat OpenShift? We've greatly improved how Camunda Platform 8 runs on OpenShift. Learn what's new and see an installation step by step.

Many customers use Red Hat OpenShift internally as their Kubernetes installation of choice. Since version 8.1, you can install Camunda Platform 8 on OpenShift with more ease, which is described in detail in the documentation. Today’s blog post will sketch the big picture and then walk you through a sample installation step-by-step.

The big picture

Camunda Platform ships with a few Components to maximize the available features.

A diagram showing the Camunda Components and Architecture

To ease management of the Components outlined above and their integrations, the preferred environment to run Camunda Platform 8 in is Kubernetes (which is not completely true, by the way: the actual recommendation is simply to use our SaaS offering, but if that does not work for you, running on Kubernetes is the second-best choice). You can read more about all deployment options in the documentation.

There are Helm charts available to properly install all components into Kubernetes. While we internally use a Kubernetes Operator for our own SaaS offering, this is not yet available for Self-Managed environments.

These Helm charts are also exactly what you can use on OpenShift. Let’s look at the process step by step next.

Step-by-step installation

For this blog post, ensure you have access to an OpenShift installation (see version compatibility). I will show the installation using the managed version of OpenShift from Red Hat. I created a dedicated cluster backed by four AWS EC2 instances (m5.xlarge) using the web console, which results in the following OpenShift cluster:

A chart showing the details of the OpenShift cluster

To install Camunda, I leverage Helm. The whole process is described in more detail at https://docs.camunda.io/docs/self-managed/platform-deployment/helm-kubernetes/deploy/. To summarize, install the correct chart providing the correct configuration values (and read on for some OpenShift specifics mentioned later):

helm repo add camunda https://helm.camunda.io
helm install camunda-platform camunda/camunda-platform -f openshift.yaml -f values.yaml

Note that we will step through the configuration values later in this document.

I know most folks will love doing this using the command line or terminal. Still, I decided to try out the OpenShift web interface for this task, as I was curious how well it works and whether I could avoid the command line altogether. You can. Here is a recording of the whole setup:

Let’s take this step by step:

1. I created a new project (basically a namespace in Kubernetes lingo):

Creating a Project in OpenShift named "Camunda"

2. I added the Camunda Helm chart repository (helm repo add camunda https://helm.camunda.io):

Adding the Camunda Helm chart repository

3. This gave me an easy possibility to pick and install the Camunda Platform (helm install camunda-platform camunda/camunda-platform):

Picking the Camunda Platform option for Helm Charts

So far, this is plain Kubernetes. Now, we’ll briefly discuss the specifics of OpenShift.

OpenShift defines Security Context Constraints (SCC) to manage permissions for pods and containers. For example, the default policy in OpenShift does not allow a container to run as root. Additionally, it defines a specific user id range to be used.

These default settings require modifications to the Helm charts for deployment to avoid running into security restrictions. Basically, you have three options:

  1. Disable security restrictions and allow unrestricted deployments. While this is probably not your choice in production, it can be the easiest way forward during development. It is what I did for this blog post, so I added a RoleBinding for the role system:openshift:scc:privileged to the ServiceAccount default and camunda-platform-key. With this, you can use the Helm charts out of the box.
  2. Set proper user ids for all containers as described in the documentation.
  3. Let OpenShift add the Security Context to all pods automatically. For this to work, no security settings may be present in the charts, which is true for the Camunda components but not for third-party containers like Elasticsearch or PostgreSQL. Due to a known bug in Helm, it is unfortunately not super straightforward to remove values from the default charts on your own, but it is doable, as described in the documentation.

Note that you can adjust the Helm chart via the OpenShift console, but the workaround mentioned above only works via the command line. Anyway, I merged the default values with the OpenShift-specific ones manually and then copied the YAML into the web console:

Installing the Helm Chart with YAML

You can find the YAML file I used here:

https://gist.github.com/berndruecker/09e486e8fad97631ba159f8956c80b37 

Note that this YAML contains full URL endpoints for some applications to do proper redirects on authentication, which you need to adjust when installing on your own cluster (as I hope you will have a different domain name than me ;-)).

Applying the chart installs all necessary components for Camunda Platform 8 into your OpenShift cluster. When done, the web console directly jumps to a nice topology view:

The topology view of the web console showing Camunda Components

If everything goes well, you should not see any errors here. Congratulations, Camunda is now properly running on your OpenShift cluster!

To access the web applications of Camunda, you need an ingress. OpenShift has a proprietary concept for this called Routes. So let’s add a route for each of the web applications, which is really straightforward as you just point it to the right service. For example, for Operate this looks like the following:

Editing the Route for Operate

On this page, you will also see the URLs, which should fit the URLs you configured in your YAML earlier on.

The URLs of all your Routes

That’s it. Now Camunda Platform 8 is running properly and the web applications are reachable. You can either log in using the default demo user (user: demo, password: demo) or add your own users to Keycloak. To do so, go to the Keycloak console (in my case: http://keycloak-camunda.apps.open-shift-test.1dms.p1.openshiftapps.com/auth/admin/master/console/) and log in with your admin credentials (username: admin); the password can be found in the Secrets stored in OpenShift:

Using Keycloak for security

Now, add a user of your choice and directly set a password:

Adding a user in Keycloak
The user details screen for Bernd, showing credentials

Ensure you assign the necessary roles to that user so they can log in afterwards:

The user details screen for Bernd, showing role mappings

That's it: now you can access the web apps (e.g. Operate) and will be forwarded to the login screen:

The log in screen, with fields for username and password

After login you will see Operate:

Operate showing no current processes deployed

Next Steps

To use Camunda Platform 8, you might now develop a process solution and deploy it to the same OpenShift cluster; you could, for example, follow our get started with microservices guide. You could also use normal Kubernetes port forwarding to talk to Zeebe directly via gRPC, for example from a local development project or Camunda Desktop Modeler.
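If you go the port-forwarding route, connecting from a local Java project is straightforward. The following is a minimal sketch, assuming you forwarded the Zeebe gateway service to localhost:26500 without TLS; the service and BPMN file names are illustrative and depend on your Helm release and project:

// Minimal sketch: connect a local Zeebe Java client through a port-forwarded gateway.
// Assumes something like: oc port-forward svc/camunda-platform-zeebe-gateway 26500:26500
// (the service name depends on your Helm release and is only illustrative here).
import io.camunda.zeebe.client.ZeebeClient;

public class LocalZeebeConnection {
  public static void main(String[] args) {
    try (ZeebeClient client = ZeebeClient.newClientBuilder()
        .gatewayAddress("localhost:26500")
        .usePlaintext() // assumes no TLS on the forwarded port
        .build()) {
      // Deploy a process model from the local classpath as a quick smoke test
      client.newDeployResourceCommand()
          .addResourceFromClasspath("order-process.bpmn") // hypothetical file name
          .send()
          .join();
      System.out.println("Deployment worked, the gateway is reachable.");
    }
  }
}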

Summary

Let's quickly recap: You can install Camunda Platform 8 on OpenShift with ease using the provided Helm charts, but all while respecting certain important configurations for OpenShift, most prominently Security Context Constraints. After adding some routes and users with credentials, you can quickly start orchestrating everything.

In case of any questions or feedback, never hesitate to reach out and post in our forum.

The post Running Camunda 8 on OpenShift appeared first on Camunda.

]]>
A technical sneak peek into inbound Connectors alongside Camunda 8.1 https://camunda.com/blog/2022/10/technical-sneak-peek-inbound-connectors-camunda-8-1/ Wed, 26 Oct 2022 14:34:22 +0000 https://camunda.com/?p=66235 Learn how inbound Connectors work in Camunda Platform 8.1, and how you can get started with them today.

The post A technical sneak peek into inbound Connectors alongside Camunda 8.1 appeared first on Camunda.

]]>
When Camunda Platform 8 launched earlier this year, we introduced Connectors. Now, we’ve published the source code for our out-of-the-box Connectors with the latest 8.1 release. This opens the door to run those Connectors in Self-Managed environments, and also allows users to dive into their source code while building their own Connectors. 

However, the existing out-of-the-box Connectors are outbound Connectors. As discussed in my first technical sneak peek into Connectors, outbound Connectors are helpful if something needs to happen in the third-party system as soon as a process reaches a service task. For example, calling a REST endpoint or publishing a message to Slack.

In addition, there is also a need for inbound Connectors. With inbound Connectors, something needs to happen within the workflow engine because of an external event in the third-party system. For example, a published Slack message or a called REST endpoint starts a new process instance. There are three different types of inbound Connectors:

  1. Webhook: An HTTP endpoint is made available to the outside, which when called, can start a process instance, for example.
  2. Subscription: A subscription is opened on the third-party system, like messaging or Apache Kafka, and new entries are then received and correlated to a waiting process instance in Camunda, for example.
  3. Polling: Some external API needs to be regularly queried for new entries, such as a drop folder on Google Drive or FTP.

To implement inbound Connectors, we first had to create a bit of infrastructure in Camunda Platform 8. This occurred with the latest 8.1 release by adding generic extension points to BPMN.

Generic properties in BPMN

Let’s briefly explore what a generic property is, and how we can leverage it to build inbound Connectors. Interestingly enough, this feature also allows our users to build their own extensions to Camunda Platform 8.

The end-to-end story looks like this:

  1. You can add custom properties (basically key-value pairs) to any BPMN symbol.
  2. Those properties are passed on by the workflow engine, even if it does not use them itself.
  3. The properties can be read by third-party components to do whatever they want to do with it.

For inbound Connectors, this allows one to define and store properties, e.g. on the start event of a process model, as shown in the following BPMN XML file:

<bpmn:startEvent id="StartEvent" name="Order received">
  <bpmn:extensionElements>
    <zeebe:properties>
      <zeebe:property name="inbound.type" value="webhook" />
      <zeebe:property name="inbound.context" value="MY_CONTEXT" />
      ...
    </zeebe:properties>
  </bpmn:extensionElements>
  ...

Those properties do not need to be edited on the XML level; instead, you can leverage element templates in Camunda Modeler to provide an easy-to-use modeling experience for your properties.

This metadata can be read from the BPMN process models later to recognize when a new process with Connector properties is deployed. Then, you could open a new webhook, for example. 

Currently, we poll for new process definitions using the public Operate API. To make obtaining new process definitions more efficient, a public API that properly notifies about Zeebe events, like a process deployment, is on the roadmap. However, the existing design already provides a lot of flexibility in where you run Connectors.

For example, you can run the Connector runtime next to your own Kafka, Vault, or whatever system you don’t want to expose to the outside world. As soon as a more efficient public API for deployment events becomes available in Camunda Platform, we will replace the polling mechanism under the hood, without the need to adjust the Connector architecture itself.

Using this basis, an inbound Connector runtime can start the required inbound Connector for process definitions. In the example above, the runtime would provide a new endpoint under a specific URL (e.g. http://cluster-address/inbound/MY_CONTEXT). Whenever there is a call to it, a process instance will kick off, taking the various other configuration parameters into account. 

Example inbound Connector: GitHub webhook

Let’s get to something more specific: starting a process instance if something happens on GitHub. As you can see in the following screenshot, you will need to set a couple of properties in your Camunda process model on the start event. For example, you need to define a path to be used to create a URL endpoint, provide a secret (normally to be looked up from the secret store used with Connectors), variable mapping, and so forth.

Next, register this webhook within GitHub.

Pretty straightforward, isn’t it? 

In the code, the magic to make it happen is basically contained in two pieces. First, the Connector runtime needs to query process definitions and scan for Connector properties on them, which looks roughly like this:

@Scheduled(fixedDelay = 5000)
public void scheduleImport() throws OperateException {
  List<ProcessDefinition> processDefinitions = camundaOperateClient
    .searchProcessDefinitions();
  for (ProcessDefinition processDefinition: processDefinitions) {
    if (!registry.processDefinitionChecked(processDefinition.getKey())) {
      processBpmnXml(
        processDefinition,
        camundaOperateClient.getProcessDefinitionXml(processDefinition.getKey()));
      registry.markProcessDefinitionChecked(processDefinition.getKey());
    }
  }
}

private void processBpmnXml(ProcessDefinition processDefinition, String resource) {
  final BpmnModelInstance bpmnModelInstance = Bpmn.readModelFromStream(
    new ByteArrayInputStream(resource.getBytes()));
  bpmnModelInstance.getDefinitions()
    .getChildElementsByType(Process.class)
    .stream().flatMap(
       process -> process.getChildElementsByType(StartEvent.class).stream()
    )
    .map(startEvent -> startEvent.getSingleExtensionElement(ZeebeProperties.class))
    .filter(Objects::nonNull)
    .forEach(zeebeProperties -> processZeebeProperties(processDefinition, zeebeProperties));
  // TODO: Also process intermediate catching message events and Receive Tasks
}

private void processZeebeProperties(ProcessDefinition processDefinition, ZeebeProperties zeebeProperties) {
  InboundConnectorProperties properties = new InboundConnectorProperties(
    processDefinition.getBpmnProcessId(),
    processDefinition.getVersion().intValue(),
    processDefinition.getKey(),
    zeebeProperties.getProperties().stream()
      .collect(Collectors.toMap(ZeebeProperty::getName, ZeebeProperty::getValue)));

  if (InboundConnectorProperties.TYPE_WEBHOOK.equals(properties.getType())) {
      registry.registerWebhookConnector(properties);
  } 
  // ...

Now, another part of the runtime can provide an endpoint, and if called, check if there is a Connector registered for the called endpoint. If so, the configuration is used to start a new process instance:

// Part of the runtime that provides webhook endpoints
@PostMapping("/inbound/{context}")
public ResponseEntity<ProcessInstanceEvent> inbound(
    @PathVariable String context,
    @RequestBody Map<String, Object> body,
    @RequestHeader Map<String, String> headers) {

  if (!registry.containsContextPath(context)) {
    throw new ResponseStatusException(HttpStatus.NOT_FOUND, "No webhook found for context: " + context);
  }
  WebhookConnectorProperties connectorProperties = registry.getWebhookConnectorByContextPath(context);
  // Bundle the raw request data (body and headers) so it can be validated and mapped to variables
  Map<String, Object> webhookContext = Map.of("body", body, "headers", headers);
  boolean valid = validateSecret(connectorProperties, webhookContext);
  if (!valid) {
    return ResponseEntity.status(HttpStatus.BAD_REQUEST).build();
  }
  Map<String, Object> variables = extractVariables(connectorProperties, webhookContext);
  ProcessInstanceEvent processInstanceEvent = zeebeClient
    .newCreateInstanceCommand()
    .bpmnProcessId(connectorProperties.bpmnProcessId())
    .version(connectorProperties.version())
    .variables(variables)
    .send()
    .join(); // TODO: Switch to reactive HTTP client
  return ResponseEntity.status(HttpStatus.CREATED).body(processInstanceEvent);
}

Of course, the code above is a bit simplified and taken from the first increment of the code (read: not yet using all coding best practices :-)), but it should give you an idea of how an inbound webhook Connector will generally work.

You can see a quick walkthrough of this Connector in action here: 

What’s next?

Important infrastructure to build extensions landed in Camunda 8.1, but we will further improve this infrastructure in the upcoming release. On this basis, we are currently building inbound Connectors, specifically REST webhooks end to end. As part of this effort, we will add bits and pieces to the Connector SDK that allow everybody, including you, to build your own inbound Connectors. At the same time, we also plan to prototype subscription Connectors, so stay tuned!

Want to get started with the new Camunda connectors today? Be sure to check out our Connector SDK to learn more, and watch that space for updates. If you’re new to Camunda, you can always sign up for a free SaaS account here.

The post A technical sneak peek into inbound Connectors alongside Camunda 8.1 appeared first on Camunda.

]]>
CamundaCon 2022: Q&A with Bernd and Daniel https://camunda.com/blog/2022/10/camundacon-2022-qa-bernd-daniel/ Fri, 14 Oct 2022 20:00:27 +0000 https://camunda.com/?p=65420 Learn from Bernd Ruecker, Camunda Co-founder/Chief Technologist, and Daniel Meyer, Camunda CTO, as they tackle questions on connectors, modeling & more.

The post CamundaCon 2022: Q&A with Bernd and Daniel appeared first on Camunda.

]]>
Check out this extended Q&A from the CamundaCon Day 2 keynote with answers from Bernd Ruecker, Camunda Co-founder and Chief Technologist, and Daniel Meyer, Camunda CTO, that we couldn’t get to live. Read on to learn about our plans for connectors, the modeling and developer experience, and more.

CamundaCon 2022 wrapped up last week, and it was packed with amazing sessions. Daniel Meyer and I were glad to give an exciting keynote to open the second day, and it received a lot of great audience questions. We didn't have time to answer them all live, but we wanted to make sure we got to them and shared the answers with the community.

If you missed the keynote or any other session of CamundaCon 2022, the recordings are already available on-demand – just click the link below to check them out, or keep reading for answers from myself and Daniel. Note that some questions have been combined/edited for clarity. 

Connectors

Will there be a connector marketplace?

There were many questions around sharing connectors, which is great, because this is one important pillar of our connectivity vision for Camunda 8. We do not have a marketplace ready yet. Still, it is pretty clear what a connector should look like so that it contains all important artifacts in a way that a possible connector marketplace (or "bazaar," as one question framed it :-)) could later use.

You can check out our out-of-the-box connectors as good examples. All sources are available and linked from here: https://github.com/camunda/connectors-bundle.

This is for example what the REST Connector looks like:

Sample REST Connector

One question also asked whether Camunda verifies or certifies connectors in some way. At the moment, this is not a pressing problem, as we of course verify our own out-of-the-box connectors, since we build and support them. But it becomes a very interesting question once you look forward to a marketplace of connectors. Honestly, we haven't yet defined all the nuts and bolts here, but I assume that we will end up with different "quality levels" of connectors. So, for example, every connector could be:

  • Supported by Camunda
  • Quality assured by Camunda (but maintained by a third party)
  • Community-based (like all other existing Community Extensions, then also following the defined lifecycle stages)

And I am pretty sure that our partners that develop their own connectors will have their own quality assurance. 

Can we write connectors in other programming languages?

The Connector SDK and the Connector runtimes focus on Java only, so you cannot write Connectors in any other language. We are still discussing if we want to open this up to other languages (especially Node.js and Python), but have more important things to do first (e.g. Inbound Connectors).

Does the REST Connector support OpenAPI?

Providing OpenAPI support directly in the connector is an interesting path to explore. We did first experiments with a community extension that can generate element templates for a connector from an OpenAPI specification. We are currently discussing how to productize this exactly. Expect something to land in one of the next releases.

Can we use out-of-the-box connectors in self-managed environments?

Starting with the just-released Camunda Platform 8.1, all out-of-the-box connectors were made available on GitHub: https://github.com/camunda/connectors-bundle. You can also run those connectors self-managed, either by providing your own runtime (e.g. based on Spring Zeebe) or by using the provided Docker container.

In our SaaS environment, we have a slightly different runtime (using lambdas for code execution), but this is not available for self-managed. But talk to us if you see requirements to have a runtime fitting into your SaaS environment (in a self-managed scenario).

Can a connector retrieve secrets from external vaults or secret stores?

The Connector SDK contains a simple abstraction for any SecretStore. In Spring Boot, for example, the default is to fall back to the environment (e.g. application.properties or environment variables), but in our SaaS environment we use the Google Secrets Manager to make sure credentials are safe.

If you manage the connector runtime yourself, you could also plug in your own implementation of a secret store, like Vault, probably using existing libraries from the Spring universe. We have not yet created a proof of concept (POC) around this, but I expect this to happen soon. So: yes, this should definitely be possible.
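To give an idea of what such a plug-in point could look like, here is a minimal, purely illustrative sketch; the interface and class names below are my own and not the actual SDK contract, so check the Connector SDK for the real abstraction:

// Illustrative sketch only: this interface mimics the idea of the SDK's secret
// abstraction; it is not the real contract. The point is that the runtime resolves
// secret names through one pluggable component.
interface SecretStore {
  String getSecret(String name);
}

// Default-style behavior mentioned above: fall back to environment variables
class EnvironmentSecretStore implements SecretStore {
  @Override
  public String getSecret(String name) {
    return System.getenv(name);
  }
}

// A Vault-backed variant would implement the same interface and delegate to a Vault
// client library (e.g. from the Spring ecosystem); it is omitted here because it
// depends on your concrete Vault setup.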

What is the difference between a Job Worker and a Connector?

The difference is actually not too big. Especially the code is pretty similar. But the connector works on a slightly higher abstraction. For example, you cannot simply access all process variables, but need to explicitly define what data will be available. Additionally, you will have support to handle secrets properly, which you have to handle on your own in a Job Worker.

This reduces the scope of what a connector can do, which allows us to build different runtimes. So while in Spring Boot the difference is hard to see, you can run a connector function in SaaS in a stateless Google Function, which is not the case for a Job Worker.

Anyway, I expect that we will add some more syntactic sugar to connectors over time, so this might even become the default way of writing Java glue code in the future, making it unnecessary to leverage the "low level" Job Worker API. But honestly, this also depends on how the community picks things up.
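For comparison, this is roughly what that "low level" Job Worker API looks like with the plain Zeebe Java client; a minimal sketch with an illustrative job type and gateway address:

// Minimal sketch of a plain job worker using the Zeebe Java client.
// The job type "charge-credit-card" and the gateway address are illustrative.
import io.camunda.zeebe.client.ZeebeClient;

public class ChargeCreditCardWorker {
  public static void main(String[] args) throws InterruptedException {
    try (ZeebeClient client = ZeebeClient.newClientBuilder()
        .gatewayAddress("localhost:26500")
        .usePlaintext()
        .build()) {
      client.newWorker()
          .jobType("charge-credit-card")
          .handler((jobClient, job) -> {
            // Unlike a connector, the worker can read all process variables directly
            // and is responsible for its own secret handling.
            var variables = job.getVariablesAsMap();
            // ... call the payment service here ...
            jobClient.newCompleteCommand(job.getKey()).send().join();
          })
          .open();
      Thread.sleep(Long.MAX_VALUE); // keep the worker running
    }
  }
}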

Is there a way to provide different versions of a connector?

Yes, the task type defined for a connector contains an encoded version, so you can run multiple versions of a connector at the same time.

Is there any framework to extend and implement custom connectors?

Yes, the Connector SDK. We are also working on a tutorial blog post that will come up soon.

Modeling and Developer Experience

Great to see Modeler is becoming more like a powerful IDE! I am curious about any plan to help TESTING BPMN files?

There are many ideas and also some prototypes around testing BPMN processes and DMN decisions on this level, but at the moment there is no specific feature available or even planned on the near term roadmap. So this might take a bit to land in Camunda Platform 8, but I could very well imagine some Community Extension forming around this fascinating topic.

How can the artifacts that are being generated using the low-code UIs be stored in a versioning system like Git?

We still produce “normal” BPMN and DMN models as target artifacts. At the end of the day, those are XML files that can be downloaded and stored anywhere, including any version control system, of course including Git. The Desktop Modeler already has features built in to connect to the SaaS Modeler and synchronize this file by the push of a button. So it is not only pretty simple, but also allows you to control which versions should end up exactly in Git.

We are also looking into a direct synchronization of our SaaS repository with customers’ version control systems, but this is something for the future.

Other Questions

Will there be training videos on FEEL in the Camunda Academy?

FEEL is not only an exciting expression language, but also an important core component in the Camunda Platform 8 stack, so yes, we do plan to extend the educational material around FEEL.

Do you have any plans for supporting Signal Event types in Camunda?

BPMN Signal events are not yet available but planned to be available with one of the next releases. You can always check the current BPMN coverage online.

I know how to scale Zeebe, however I want to know how to scale the clients to be synchronized with Zeebe?

In general, you can connect as many clients to Zeebe as you like. The decision of course depends on your scenario. You could either just scale out your one application connecting to Zeebe, or even scale different workers for certain tasks in a process separately. The determination of how many client applications you run solely depends on what they need to do. For example, if they mostly call web endpoints, they might just wait for IO most of the time, so if programmed properly, you don’t need to scale them too much. But if you have workers doing complex calculations requiring a lot of CPU, you do need to run more of them to process high loads. 

Our best practice around writing good workers might help to understand this.

Thank You

CamundaCon, and Camunda in general, can only be as great as it is thanks to our amazing community. Thank you as always for your questions, contributions and support. Don’t forget to check out all the CamundaCon 2022 videos and get excited for CamundaCon 2023. It’s already planned for next September and we can’t wait to see you there!

The post CamundaCon 2022: Q&A with Bernd and Daniel appeared first on Camunda.

]]>
How Camunda 8 supports process automation at enterprise scale https://camunda.com/blog/2022/09/how-camunda-8-supports-process-automation-at-enterprise-scale/ Thu, 29 Sep 2022 19:30:00 +0000 https://camunda.com/?p=63983 Scaling process automation across an enterprise naturally includes a wide array of challenges. To counter these challenges, more and more organizations are evolving their organizational structures accordingly, often creating Centers of Excellence (CoE) and/or communities of practice (CoP). Please note that during this post we will use the term CoE, but keep in mind your organization might call this very differently. That said, in Camunda Platform 8, we included new functionalities that will help companies to scale process automation successfully.  This blog post discusses different aspects of scaling process automation, the role of CoEs, and how new Camunda 8 features support those initiatives.  What does scaling mean? Organizations that want to create a competitive edge through digital transformation (and frankly: which...

The post How Camunda 8 supports process automation at enterprise scale appeared first on Camunda.

]]>
Scaling process automation across an enterprise naturally includes a wide array of challenges.

To counter these challenges, more and more organizations are evolving their organizational structures accordingly, often creating Centers of Excellence (CoE) and/or communities of practice (CoP). Please note that throughout this post we will use the term CoE, but keep in mind your organization might call it something entirely different. That said, in Camunda Platform 8, we included new functionality that will help companies scale process automation successfully.

This blog post discusses different aspects of scaling process automation, the role of CoEs, and how new Camunda 8 features support those initiatives. 

What does scaling mean?

Organizations that want to create a competitive edge through digital transformation (and frankly: which don’t?) need to figure out how to scale process automation.

Scale in this context has two main dimensions: 

  1. Technical scale: The technical capability to run more load, which is for example supported by horizontal scalability of Zeebe, the workflow engine within Camunda Platform 8. 
  2. Organizational scale: The organizational ability to implement, maintain and operate an increasing amount of automation solutions across the enterprise, while also leveraging economies of scale.

While the technical abilities are a crucial foundation for scaling process automation, it’s actually organizational aspects that can determine the success or failure of strategic automation initiatives.

Accordingly, as a recent survey by McKinsey points out, the more parts of the organization are involved, the more likely automation initiatives are to succeed. Those parts of the organization might include different stakeholders and business domains. In Goldman Sachs' talk at CamundaCon 2020, for example, they call this "process automation at enterprise scale". This post focuses on these organizational challenges of scaling process automation and how Camunda 8 can help.

Dimensions of Organizational scaling and its challenges

Organizational scaling includes multiple dimensions. For this post, we want to focus on the two most important: 

  • Dimension 1: Multiple process automation projects are running in different parts of the organization — meaning that more and more people and teams will be facing similar questions and challenges, such as architecture decisions, or developing the necessary skills (e.g. for modeling BPMN). 
  • Dimension 2: Every automated process will touch multiple services handled by different teams or business units, potentially resulting in a multitude of workflow engines being deployed across the enterprise. This might look like the sketch in the illustration below.
Scaling Camunda engines in your organization

How CoEs help overcome automation challenges 

To counter these challenges, organizations are more frequently establishing CoEs.

A staggering 91% of all respondents to our State of Process Automation Report 2020 reported that they either already have a CoE in place or are actively working on or planning to implement one. According to the McKinsey survey mentioned earlier, companies leading in automation meanwhile follow a trend towards federated CoE governance. As the analysts point out: "In this model, a central support team provides the necessary tools and capabilities to enable parts of the business to automate processes autonomously."

Correspondingly, key tasks that we’re seeing for CoEs are:

  • Communication, e.g. evangelizing for automation in the organization
  • Enablement, e.g. providing best practices and build skills for automation teams
  • Governance, e.g. providing getting started packages, frameworks and architectural guidelines for automation teams, without restricting them

It is important to highlight that CoEs should not provide overly restrictive guidelines and become too bureaucratic, or spend too much time with initiatives that don’t generate value for automation teams. If set up properly, a CoE will be instrumental in scaling process automation while generating excitement and buy-in.

CoE tasks

Camunda 8 helps to scale process automation 

With Camunda 8, we improved support for scaling process automation across the enterprise, which will also help CoEs do their job. This is mostly about workflow engine provisioning and governance, simplifying collaboration between stakeholders using process models, accelerating implementation, and allowing connectors to third-party systems to be reused.

Let’s go over this one by one.

Engine provisioning and governance

With Camunda 8, we introduced a lot of developer convenience around remote workflow engines, making it much easier for teams to use them. At the same time, this setup allows separation of the development of the process solution from the actual configuration and operation of the workflow engine. Now that dedicated teams take care of running the workflow engine, others can focus on their key tasks: Implementing and delivering automation projects to solve business problems.

In the most efficient cases, the hosting is outsourced to Camunda itself. Project teams can simply use the Camunda SaaS offering. Nobody has to worry about how to install, configure, or secure Camunda. You can easily see all workflow engines running for your organization, the Camunda version they run on, their health status, and various metrics like instance counts or incidents. In this setup, you can trigger updates on the workflow engine version without even touching the automated process itself. You could set up generic alarms for technical incidents or service level agreements (SLA) for all process solutions. This will all increase the availability and quality of your automated processes and helps fundamentally with the governance challenges of scaling process automation.

If using a SaaS model is not possible, your CoE can also set up a comparable in-house SaaS model. This allows your teams to centralize knowledge on how to install Camunda, which Kubernetes cluster to use, what Elasticsearch to connect to, or how to apply a helm chart, so that development teams don’t have to build up this infrastructure knowledge. Ideally, the CoE offers the workflow engine capability as a self-service to your company. 

Simplified collaboration using Web Modeler

The Camunda Web Modeler lays the groundwork for deep collaboration between business and IT.

You have a lot of features at your disposal to make BPMN diagrams the centerpiece of communication in your process automation projects. For example, you can use comments directly on BPMN elements to discuss synchronously or asynchronously with the different stakeholders. All diagrams can be easily shared with project teams or individuals. They can further be embedded anywhere in a read-only mode, for example in your Confluence or documentation portal, which is especially useful if you need to involve a lot of stakeholders in an enterprise context. These collaboration capabilities further improve business-IT collaboration through BPMN, as well as the time to value for automation projects.

For us, the current state of Web Modeler is a solid starting point, and the target state promises even more. We plan to go further by providing simulation or debugging capabilities, and by linking in insights from other parts of the platform, e.g. around deployed versions, current performance of the process in production, incident status, or the like.

Solution acceleration and connectors

One key success factor to scale process automation is to speed up the time to deploy any new project into production. This can be done in many ways, like for example, using industry best practices from software engineering around continuous delivery. But there are also specific areas, where a process orchestration solution like Camunda can help accelerate projects.

Besides the many small things we do to enhance developer productivity (e.g. the upcoming expression designer, or debugging capabilities), we also reinvented connectors for Camunda 8 with what we call the integration framework.  A connector is a component that talks to a third-party system via an API and thus allows orchestrating that system via Camunda (or lets that system influence Camunda’s orchestration). 

The connector consists of a bit of programming code needed to talk to the third-party system and some UI parts hooked into Camunda Modeler. Because we define a software development kit (SDK), anybody can easily develop their own connectors, which will then be usable in the Camunda stack (see this technical sneak peek into Camunda's Connector architecture for more technical details).

This means that there will be out-of-the-box connectors available for standard problems (we currently started with REST, SendGrid, and Slack), either provided by Camunda, partners, or the community. Those will make it easy to connect to certain systems. Using connectors is always optional, and connectors will mostly be available open source and thus be extensible, so they do not get in the way of the developer-friendliness we value so much.

The integration framework also means that organizations, and possibly their CoEs, can implement their own connectors to be reused internally. Experience clearly shows that there is typically a small number of legacy systems that a lot of processes have to connect to. Prebuilding that integration logic and making it available for reuse can significantly accelerate process automation projects.

Some organizations might even use this to enable less technical roles, like citizen developers, to build process automation solutions, which can free up IT capacity and further improve time to value for automation projects. In such a scenario, the Web Modeler can be used not only for modeling BPMN, but also to configure predefined connectors through the new properties panel to implement a fully executable process.

Intelligence

Process intelligence is instrumental when establishing process automation within organizations, as corresponding tools allow ever closer involvement of business stakeholders in projects that are very often IT-heavy. 

The possibility to see real-time business key performance indicators (KPIs) from runtime engines and processes strongly underlines the business case of automation projects. This helps business teams and CoEs to showcase and communicate the ROI of their automation initiatives. Additionally, it makes it easy to identify process inefficiencies or bottlenecks, allowing you to continuously improve automation projects. This approach helps you build trust and awareness around the advantages of process automation tooling.

With Camunda 8, we continue to improve Optimize, our process intelligence tool, and make it especially easier to create helpful analysis and actionable insights out-of-the-box. 

dashboard

Conclusion

We looked at the aspects of scaling process automation and particularly focused on the corresponding organizational challenges and the role of CoEs in this context. The aim of supporting our customers in implementing "automation at enterprise scale" led to the release of the new Camunda 8 features discussed here.

If this is interesting to you and you want to learn more, feel free to get in touch with your Camunda contacts, or check out our other resources, such as our blog, whitepapers, and events.

The post How Camunda 8 supports process automation at enterprise scale appeared first on Camunda.

]]>
A technical sneak peek into Camunda’s connector architecture https://camunda.com/blog/2022/07/a-technical-sneak-peek-into-camundas-connector-architecture/ Thu, 28 Jul 2022 16:00:00 +0000 https://camunda.com/?p=59035 Learn what a connector is made of, how the code for a connector roughly looks, and how connectors can be operated in various scenarios.

The post A technical sneak peek into Camunda’s connector architecture appeared first on Camunda.

]]>
When Camunda Platform 8 launched earlier this year, we announced connectors and provided some preview connectors available in our SaaS offering, such as sending an email using SendGrid, invoking a REST API, or sending a message to Slack.

Since then, many people have asked us what a connector is, how such a connector is developed, and how it can be used in Self-Managed. We haven’t yet published much information on the technical architecture of connectors as it is still under development, but at the same time, I totally understand that perhaps you want to know more to feel as excited as me about connectors.

In this blog post, I’ll briefly share what a connector is made of, how the code for a connector roughly looks, and how connectors can be operated in various scenarios. Note that the information is a preview, and details are subject to change.

What is a connector?

A connector is a component that talks to a third-party system via an API and thus allows orchestrating that system via Camunda (or lets that system influence Camunda's orchestration).

Visualization of a basic example connector architecture, showing how connectors communicate between Camunda Platform 8 and a third-party system

The connector consists of a bit of programming code needed to talk to the third-party system and some UI parts hooked into Camunda Modeler.

This is pretty generic, I know. Let’s get a bit more concrete and differentiate types of connectors:

  1. Outbound connectors: Something needs to happen in the third-party system if a process reaches a service task. For example, calling a REST endpoint or publishing some message to Slack.
  2. Inbound connectors: Something needs to happen within the workflow engine because of an external event in the third-party system. For example, because a Slack message was published or a REST endpoint is called. Inbound connectors now can be of three different kinds:
    • Webhook: An HTTP endpoint is made available to the outside, which when called, can start a process instance, for example.
    • Subscription: A subscription is opened on the third-party system, like messaging or Apache Kafka, and new entries are then received and correlated to a waiting process instance in Camunda, for example.
    • Polling: Some external API needs to be regularly queried for new entries, such as a drop folder on Google Drive or FTP.

Outbound example

Let’s briefly look at one outbound connector: the REST connector. You can define a couple of properties, like which URL to invoke using which HTTP method. This is configured via Web Modeler, which basically means those properties end up in the XML of the BPMN process model. The translation of the UI to the XML is done by the element template mechanism. This makes connectors convenient to use. 

Note: Want to watch a quick demo of the REST Connector in action? Check out this free short course (<5m) in Camunda Academy.

screenshot showing the configuration of a REST connector

Now there is also code required to really do the outbound call. The overall Camunda Platform 8 integration framework provides a software development kit (SDK) to program such a connector against. Simplified, an outbound REST connector provides an execute method that is called whenever a process instance needs to invoke the connector, and a context is provided with all input data, configuration, and abstraction for the secret store.

public class HttpJsonFunction implements OutboundConnectorFunction {
  @Override
  public Object execute(OutboundConnectorContext context) {
    final var json = context.getVariables();
    final var request = GSON.fromJson(json, HttpJsonRequest.class);

    final var validator = new Validator();
    request.validate(validator);
    validator.validate();

    request.replaceSecrets(context.getSecretStore());

    try {
      return handleRequest(request);
    } catch (final Exception e) {
      throw ConnectorResult.failed(e);
    }
  }
  
  protected HttpJsonResult handleRequest(final HttpJsonRequest request) throws IOException {
    //...
    final var method = Objects.requireNonNull(request.getMethod(), "Missing method parameter").toUpperCase();
    final var url = Objects.requireNonNull(request.getUrl(), "Missing URL parameter");
    //...

Now there needs to be some glue code calling this function whenever a process instance reaches the respective service task. This is the job of the connector runtime. This runtime registers job workers with Zeebe and calls the outbound connector function whenever there are new jobs.
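Conceptually, that glue code is not much more than a job worker delegating to the connector function. A heavily simplified sketch, not the actual runtime code, could look like this (the context-building helper is hypothetical):

// Heavily simplified sketch of the glue-code idea: a plain job worker that delegates
// to an OutboundConnectorFunction. The context adapter is hypothetical and stands in
// for whatever the runtime actually builds from the activated job.
void registerConnector(ZeebeClient zeebeClient, OutboundConnectorFunction connectorFunction) {
  zeebeClient.newWorker()
      .jobType("io.camunda:http-json:1") // taken from the connector's metadata
      .handler((jobClient, job) -> {
        OutboundConnectorContext context = createContextFromJob(job); // hypothetical helper
        Object result = connectorFunction.execute(context);
        jobClient.newCompleteCommand(job.getKey())
            .variables(result)
            .send()
            .join();
      })
      .open();
}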

Visualization of the outbound connector architecture

This connector runtime is independent of the concrete connector code executed. In fact, a connector runtime can handle multiple connectors at the same time. Therefore, a connector brings its own metadata:

@ZeebeOutboundConnector(
    name = "http-json",
    taskType = "io.camunda:http-json:1",
    variablesToFetch = {"restUrl", "username", "password", "..."})
public class RestOutboundConnectorFunction implements OutboundConnectorFunction {
  //...

With this, we’ve built a Spring Boot-based runtime that can discover all outbound connectors on the classpath and register the required job workers. This makes it super easy to test a single connector, as you can run it locally, but you can also stitch together a Spring Boot application with all the connectors you want to run in your Camunda Platform 8 Self-Managed installation.

At the same time, we have also built a connector runtime for our own SaaS offering, running in Google Cloud. While we also run a generic, Java-based connector runtime, all outbound connectors themselves are deployed as Google Cloud Functions. Secrets are handled by Google Cloud Secret Manager in this case.

Visualization of an outbound connector architecture running in Google Cloud

The great thing here is that the connector code itself does not know anything about the environment it runs in, making connectors available in the whole Camunda Platform 8 ecosystem.

Inbound example

Having talked about outbound, inbound is a very different beast. An inbound connector needs to either open up an HTTP endpoint, open a subscription, or start polling. It might even require some kind of state, for example, to remember what was already polled. Exceptions in a connector should be visible to an operator, even if there is no process instance to pinpoint them to.

We are currently designing and validating the architecture on this end, so consider it in flux. Still, some of the primitives will also hold true for inbound connectors:

  • Parameters can be configured via the Modeler UI and stored in the BPMN process.
  • The core connector code will be runnable in different environments.
  • Metadata will be provided so that the connector runtime can easily pick up new connectors.

A prototypical connector receiving AMQP messages (e.g., from RabbitMQ) looks like this:

@ZeebeInboundConnector(name = "io.camunda.community:amqp:1")
public class AmqpInboundSubscription implements SubscriptionInboundConnector {

  @Override
  public void activate(InboundConnectorConfig config, InboundConnectorContext context) throws Exception {
    initialize(context, config.getConfiguredParameters());
    consumerTemplate = createConsumerTemplate();
    consumerTemplate.start();
    while (running) {
      Exchange receive = consumerTemplate.receive(getParameters().getEndpointUri());
      Message message = receive.getIn();
      try {
        String payload = processMessageContent(message);
        processIncomingAmqpMessage(config, context, payload);
      } catch (Exception ex) {
        // ...
      }
    }
  }

  private void processIncomingAmqpMessage(InboundConnectorConfig config, InboundConnectorContext context, String payload) {
    if (InboundConnectorConfig.CONNECTOR_EFFECT_CORRELATE_MESSAGE.equals(config.getConnectorEffect())) {
      String correlationKeyValue = context.getFeelEngine().evaluate(
          config.getCorrelationKey(),
          payload);

      PublishMessageCommandStep3 cmd = context.getZeebeClient().newPublishMessageCommand()
          .messageName(config.getMessageName())
          .correlationKey(correlationKeyValue)
          .variables(payload);
      if (config.getMessageId() != null) {
        String messageIdValue = context.getFeelEngine().evaluate(config.getMessageId(), payload);
        cmd = cmd.messageId(messageIdValue);
      }
      cmd.send().join();
      //...
And here is the related visualization:

Visualization of a prototypical inbound connector receiving AMQP messages (e.g., from RabbitMQ)

Status and next steps

Currently, only a fraction of what we are working on is publicly visible. Therefore, there are some limitations on connectors in Camunda Platform version 8.0, mainly:

  • The SDK for connectors is not open to the public simply because we need to finalize some things first, as we want to avoid people building connectors that need to be changed later on. 
  • The code of existing connectors (REST, SendGrid, and Slack) is not available and cannot be run on Self-Managed environments yet.
  • The UI support is only available within Web Modeler, not yet within Desktop Modeler.

We are working on all of these areas and plan to release the connector SDK later this year. We can then provide sources and binaries to existing connectors to run them in Self-Managed environments or to understand their inner workings. Along with the SDK, we plan to release connector templates that allow you to easily design the UI attributes and parameters required for your connector and provide you with the ability to share the connector template with your project team.

At the same time, we are also working on providing more out-of-the-box connectors (like the Slack connector that was just released last week) and making them available open source. We are also in touch with partners who are eager to provide connectors to the Camunda ecosystem. As a result, we plan to offer some kind of exchange where you can easily see which connectors are available, their guarantees, and their limitations.

Still, the whole connector architecture is built to allow everybody to build their own connectors. This especially also enables you to build private connectors for your own legacy systems that can be reused across your organization.

Summary

The main building block for implementing connectors is our SDK for inbound and outbound connectors, whereas inbound connectors can be based on webhooks, subscriptions, or polling. This allows writing connector code that is independent of the connector runtime so that you can leverage connectors in the Camunda SaaS offering and your own Self-Managed environment.

At the same time, connector templates will allow a great modeling experience when using connectors within your own models. We are making great progress, and you can expect to see more later this year. Exciting times ahead!

The post A technical sneak peek into Camunda’s connector architecture appeared first on Camunda.

]]>
Why process orchestration needs advanced workflow patterns https://camunda.com/blog/2022/07/why-process-orchestration-needs-advanced-workflow-patterns/ Thu, 21 Jul 2022 12:00:00 +0000 https://camunda.com/?p=58811 The reality of a business process requires advanced workflow patterns. Ensure you use an orchestration product capable of supporting them.

The post Why process orchestration needs advanced workflow patterns appeared first on Camunda.

]]>
Life is seldom a straight line, and the same is true for processes. Therefore, you must be able to accurately express all the things happening in your business processes for proper end-to-end process orchestration. This requires workflow patterns that go beyond basic control flow patterns (like sequence or condition). If your orchestration tool does not provide those advanced workflow patterns, you will experience confusion amongst developers, you will need to implement time-consuming workarounds, and you will end up with confusing models. Let’s explore this by examining an example of why these advanced workflow patterns matter in today’s blog post.

Initial process example

Let’s assume you’re processing incoming orders of hand-crafted goods to be shipped individually. Each order consists of many different order positions, which you want to work on in parallel with your team to save time and deliver quicker. However, while your team is working on the order, the customer is still able to cancel, and in that case, you need to be able to revoke any deliveries that have been scheduled already. A quick drawing on the whiteboard yields the following sketch of this example:

whiteboard sketch of this example

Let’s create an executable process model for this use case. I will first show you a possible process using ASL (Amazon States Language) and AWS Step Functions, and secondly with Camunda Platform and BPMN (Business Process Model and Notation) to illustrate the differences between these underlying workflow languages. 

Modeling using AWS Step Functions

The following model is created using ASL, which is part of AWS Step Functions and, as such, a bespoke language. Let’s look at the resulting diagram:

ASL diagram

To discuss it, I will use workflow patterns, which are a proven set of patterns you will need to express any workflow. 

The good news is that ASL can execute a workflow pattern called “dynamic parallel branches,” which allows parallelizing execution of the order positions. This is good; otherwise, we would need to start multiple workflow instances for the order positions and do all synchronizations by hand.

But this is where things get complicated. ASL does not offer reactions to external messages; thus, you cannot interrupt your running workflow instance if an external event happens, like the customer canceling their order. Therefore, you need a workaround. One possibility is to use a parallel branch that waits for the cancellation event in parallel to executing the multiple instance tasks, marked with (1) in the illustration above.

When implementing that wait state around cancelation, you will undoubtedly miss a proper correlation mechanism, as you cannot easily correlate events from the outside to the running workflow instance. Instead, you could leverage the task token generated from AWS and keep it in an external data store so that you can locate the correct task token for a given order id. This means you have to implement a bespoke message correlation mechanism yourself, including persistence as described in Integrating AWS Step Functions callbacks and external systems.

When the cancelation message comes in, the workflow advances in that workaround path and needs to raise an error so all order delivery tasks are canceled, and the process can directly move on to cancelation, marked with (2) in the above illustration. 

But even in the desired case that the order does not get canceled, you need to leverage an error. This is marked with (3) in the illustration above. This is necessary to interrupt the task of waiting for the cancelation message.

You need to use a similar workaround again when you want to wait for payment, but stop this waiting after a specified timeout. Therefore, you will start a timer in parallel, marked with (4), and use an error to stop it later, marked with (5). 

Note that when you configure the wait state, you might feel like you are misusing Step Functions here, as you configure the time in seconds, meaning you have to enter a big number (864,000 seconds) to wait ten days.

Of course, you could also implement your requirements differently. For example, you might implement all order cancelation logic entirely outside of the process model and just terminate the running order fulfillment instance via API. But note that by doing so, you will lose a lot of visibility around what happens in your process, not only during design time but also during operations or improvement endeavors. 

Additionally, you distribute logic that belongs together all over the place (step function, code, etc.). For example, a change in order fulfillment might mean you have to rethink your cancelation procedure, which is obvious if cancelation is part of the model.

To summarize, the lack of advanced workflow patterns requires workarounds, which are not only hard to implement but also make the model hard to understand and thus weaken the value proposition of an orchestration engine.

Modeling with BPMN

Now let’s contrast this with modeling using the ISO standard BPMN within Camunda:

model using the ISO standard BPMN within Camunda

This model is directly executable on engines that support BPMN, like Camunda. As you can see, BPMN supports all required advanced workflow patterns, which not only makes this process easy to model but also yields a very understandable model.

Let’s briefly call out the workflow patterns (besides the basics like sequence, condition, and wait) that helped to make this process so easy to implement:

  1. Dynamic parallel branches
  2. Reacting to external message events with correlation mechanisms
  3. Reacting to time-based events

This model can be perfectly used to discuss the process with various stakeholders, and can further be shown in technical operations (e.g., if some process instance gets stuck) or business analysis (e.g., to understand which orders are canceled most and in which state of the process execution). Below is a sample screenshot of the operations tooling showing a process instance with six order items, where one raised an incident. You can see how easy it gets to dive into potential operational problems.

screenshot of an incident occurring in Operate

Let’s not let history repeat itself!

I remember one of my projects using the workflow engine JBoss jBPM 3.x back in 2009. I was in Switzerland for a couple of weeks, sorting out exception scenarios and describing patterns on how to deal with those. Looking back, this was hard because jBPM 3 lacked a lot of essential workflow patterns, especially around the reaction to events or error scopes, which I did not know back then. In case you enjoy nostalgic pictures as much as I do, this is a model from back then:

jBPM model example

I'm happy to see that BPMN removed the need for all of those workarounds, which created a lot of frustration among developers. Additionally, the improved visualization really allowed me to discuss process models with a larger group of people with various experience levels and backgrounds in process orchestration.

Interestingly enough, many modern workflow or orchestration engines lack the advanced workflow patterns described above. Often, this comes with the promise of being simpler than BPMN. But in reality, this simplicity means they lack essential patterns. Hence, if you follow the development of these modeling languages over time, you will see that they add patterns once in a while, and whenever such a tool is successful, it almost inevitably ends up with a language complexity comparable to BPMN, but in a proprietary way. As a result, process models in those languages are typically harder to understand.

At the same time, developing a workflow language is very hard, so chances are high that vendors will take a long time to develop proper pattern support. I personally don’t understand this motivation, as the knowledge about workflow patterns is available, and BPMN implements it in an industry-proven way, even as an ISO standard. 

Conclusion

The reality of a business process requires advanced workflow patterns. If a product does not natively support them, its users will need to create technical workarounds, as you could see in the example earlier: 

  • ASL lacked patterns and required complex workarounds.
  • BPMN supports all required patterns and produces a very comprehensible model.

Emulating advanced patterns with basic constructs and/or programming code, as necessary for ASL, means:

  • Your development takes longer.
  • Your solution might come with technical weaknesses, like limited scalability or observability.
  • You cannot use the executable process model as a communication vehicle for business and IT.

To summarize, ensure you use an orchestration product that supports all important workflow patterns, such as Camunda, which uses BPMN as its workflow language.

The post Why process orchestration needs advanced workflow patterns appeared first on Camunda.

]]>