Version: 8.8

Build your first AI agent

Beginner · Time estimate: 45 minutes

Get started with Camunda agentic orchestration by building and running your first AI agent.

About

AI agents represent the practical implementation of agentic process orchestration within Camunda, combining the flexibility of AI with the reliability of traditional process automation.

In Camunda, an AI agent refers to an automation solution that uses ad-hoc sub-processes to perform tasks with non-deterministic behavior.

In this guide, you will:

  • Install an AI agent model blueprint from the Camunda Marketplace.
  • Configure the AI Agent connector for your chosen model.
  • Deploy and test the example AI agent process.
  • Extend the agent by adding your first custom tool.

After completing it, you will have an example AI agent running in Camunda 8.

Prerequisites

To build your first AI agent, review the prerequisites below. They depend on:

  • Your working environment.
  • Your chosen model.

Camunda 8 environment

To run your agent, you must have Camunda 8 (version 8.8 or newer) running, using either:

  • Camunda 8 SaaS.
  • Camunda 8 Self-Managed.

Supported models

The AI Agent connector makes it easy to integrate LLMs into your process workflows, with out-of-the-box support for popular model providers. It can also connect to any additional LLM that exposes an OpenAI-compatible API. See all supported model providers for more details.
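For example, an OpenAI-compatible API accepts a standard chat-completions request body. The sketch below builds such a payload in Python; the endpoint URL and model name are placeholders for whatever OpenAI-compatible server you run (such as a local Ollama instance), not Camunda defaults:

```python
import json

# Minimal chat-completions payload accepted by any OpenAI-compatible API.
# The endpoint URL and model name are placeholders, not Camunda defaults.
endpoint = "http://localhost:11434/v1/chat/completions"  # e.g. a local Ollama server
payload = {
    "model": "gpt-oss:20b",
    "messages": [
        {"role": "system", "content": "You are a helpful process assistant."},
        {"role": "user", "content": "Tell me a joke"},
    ],
}

# Serialize the request body; sending it is left to your HTTP client of choice.
body = json.dumps(payload)
```

Any model provider that understands this request shape can be wired into the AI Agent connector.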

In this guide, you can try two use cases:

SetupModel providerModel usedPrerequisites
CloudAWS BedrockClaude Sonnet 4

LocalOllamaGPT-OSS:20b

important

Running LLMs locally requires substantial disk space and memory. GPT-OSS:20b requires more than 20GB of RAM to function and 14GB of free disk space to download.

Step 1: Install the model blueprint

To start building your first AI agent, you can use a model blueprint from the Camunda Marketplace.

In this guide, you will use the AI Agent Chat Quick Start model blueprint. Depending on your working environment, follow the corresponding steps below.

  1. On the blueprint page, click For SaaS and select or create a project to save the blueprint.
  2. The blueprint BPMN diagram opens in Web Modeler.

About the example AI agent process

The example AI agent process is a chatbot that you can interact with via a user task form.

An example AI agent BPMN process diagram

This process showcases how an AI agent can:

  • Make autonomous decisions about which tasks to execute based on your input.
  • Adapt its behavior dynamically using the context provided.
  • Handle complex scenarios by selecting and combining different tools.
  • Integrate seamlessly with other process components.

The example includes a form linked to the start event, allowing you to submit requests ranging from simple questions to more complex tasks, such as document uploads.

Example AI agent start form
Understand the decision model behind this example

To make this agent reliable, treat each activity in the ad-hoc sub-process as a documented tool. Learn why this matters in AI agents: Why tool documentation in ad-hoc sub-processes matters.

For a runtime view of what the LLM decides vs. what Camunda orchestrates, see Design and architecture: How execution works in an AI agent.

For prompt configuration details, see AI Agent connector: System prompt, user prompt, and tool descriptions.

Step 2: Configure the AI Agent connector

Depending on your model choice, configure the AI Agent connector accordingly.

Configure the connector's authentication and template for AWS Bedrock.

Configure authentication

The example blueprint downloaded in step one is preconfigured to use AWS Bedrock. For authentication, it uses the following connector secrets:

  • AWS_BEDROCK_ACCESS_KEY: The AWS Access Key ID for your AWS account able to call the Bedrock Converse API.
  • AWS_BEDROCK_SECRET_KEY: The AWS Secret Access Key for your AWS account.

You will configure these secrets differently depending on your working environment.

Configure the secrets using the Console.

See Amazon Bedrock model provider for more information about other available authentication methods.

Configure properties

In the blueprint BPMN diagram, the AI agent is implemented using the AI Agent Sub-process connector.

You can keep the default configuration or adjust it to test other setups. To do so, use the properties panel:

AI agent properties panel
tip

When configuring connectors, click the fx icon to use FEEL expressions that reference process variables and create dynamic prompts based on runtime data.

Step 3: Test your AI agent

Deploy and run your AI agent in your Camunda cluster.

important

Whether you are testing your agent in Camunda 8 SaaS or locally with Camunda 8 Self-Managed, make sure you’re running a cluster with version 8.8 or higher.

Depending on your working environment, test your agent by following the corresponding steps below.

  1. Open Web Modeler.
  2. Select the Play tab.
  3. Select the cluster you want to deploy and play the process on.
  4. Open the Start form and add a prompt for the AI agent. For example, enter "Tell me a joke" in the How can I help you today? field, and click Start instance.
  5. The AI agent analyzes your prompt, decides what tools to use, and responds with an answer. Open the Task form to view the result.
  6. You can monitor the process execution in Operate.
  7. You can follow up with more prompts to continue testing the AI agent. Select the Are you satisfied with the result? checkbox when you want to finish your testing and complete the process.
tip

Instead of using Play, you can also test the process within the Implement tab using Deploy & Run, and use Tasklist to complete the form.

What to expect during execution

When you run the AI agent process:

  1. The AI agent receives your prompt and analyzes it together with the configured system prompt and tool descriptions.
  2. The LLM determines which tools from the ad-hoc sub-process should be activated.
  3. Camunda executes the selected BPMN activities.
  4. Tasks can execute in parallel or sequentially, depending on the agent's decisions and process state.
  5. Process variables are updated as each tool completes its execution.
  6. The agent may iterate through multiple tool calls to handle complex requests.

You can observe this dynamic behavior in real-time through Operate, where you'll see which tasks were activated and in what order.
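The execution loop above can be sketched as follows. This is a simplified illustration of the decide-execute-iterate cycle, not Camunda's implementation; `llm_decide` and the `TOOLS` registry are hypothetical stand-ins:

```python
# Simplified sketch of the agent loop: the LLM decides, the engine executes,
# results feed back into the next LLM decision until no tool calls remain.
# llm_decide and TOOLS are hypothetical stand-ins, not Camunda APIs.

def llm_decide(prompt, context):
    """Pretend LLM: requests a tool on the first pass, then finishes."""
    if not context:
        return [{"tool": "get_current_weather",
                 "args": {"latitude": "48.85", "longitude": "2.35"}}]
    return []  # no further tool calls: the agent answers directly

TOOLS = {
    "get_current_weather": lambda args: {"temperature_celsius": 21.0},
}

def run_agent(prompt):
    context = []  # accumulated tool results fed back to the LLM
    while True:
        tool_calls = llm_decide(prompt, context)
        if not tool_calls:  # the LLM decided it is done
            break
        for call in tool_calls:  # the engine executes the selected activities
            result = TOOLS[call["tool"]](call["args"])
            context.append({"tool": call["tool"], "result": result})
    return context

results = run_agent("What's the weather in Paris right now?")
```

In the real process, the "execute" step is Camunda activating BPMN activities inside the ad-hoc sub-process, and the loop continues until the LLM produces a final answer instead of more tool calls.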

Step 4: Add your first tool

You can customize your AI agent by adding tools. In this section, you will add a tool that fetches weather conditions for a given location using the Open-Meteo API.

Add a REST connector task

  1. Inside the AI agent sub-process, add a new task element.
  2. Change the task type to REST Outbound Connector using the Change element menu.
  3. Name the task. For example, Get current weather. This name is visible to the LLM as the tool name.

Write a tool description

The LLM selects tools based on their description. Open the Documentation field in the properties panel and add a clear description of what the tool does and when to use it. For example:

Fetches current weather conditions for a given location. Use this tool when the user asks about weather, temperature, wind, or climate conditions for a city or place. Returns temperature in Celsius, wind speed, and a weather description.
tip

Provide as much context as possible in tool descriptions to help the LLM select the right tool and generate proper inputs.

Configure the REST connector

Set up the HTTP request in the properties panel:

  1. In the Authentication section, select None.

  2. In the HTTP Endpoint section:

    • Set Method to GET.

    • Set URL to the following FEEL expression by clicking the fx icon:

      "https://api.open-meteo.com/v1/forecast"
    • Set Query parameters to:

      {
        latitude: fromAi(toolCall.latitude, "Latitude of the location to check weather for", "string"),
        longitude: fromAi(toolCall.longitude, "Longitude of the location to check weather for", "string"),
        current: "temperature_2m,wind_speed_10m,weather_code"
      }

The fromAi() calls tell the AI Agent connector which parameters the LLM must provide. At runtime, the LLM generates the latitude and longitude values based on the user's request, while the current parameter is a fixed value that selects which weather fields to return.
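Conceptually, the connector merges the LLM-supplied arguments with the fixed parameters before making the HTTP call. The Python sketch below illustrates that merging idea only; it is not the connector's actual code:

```python
# Illustration of how fromAi() parameters conceptually combine with fixed ones.
# This is not the connector's real implementation, just the merging idea.

def resolve_query_params(tool_call_args):
    """Merge LLM-generated arguments with the fixed 'current' parameter."""
    return {
        "latitude": tool_call_args["latitude"],    # supplied by the LLM via fromAi()
        "longitude": tool_call_args["longitude"],  # supplied by the LLM via fromAi()
        "current": "temperature_2m,wind_speed_10m,weather_code",  # fixed value
    }

# Example: the LLM resolved "Paris" to coordinates in its tool call.
params = resolve_query_params({"latitude": "48.8566", "longitude": "2.3522"})
```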

Map the response to toolCallResult

Each tool within the AI agent sub-process must return its result in a toolCallResult variable so the AI Agent connector can pass it back to the LLM.

In the Output Mapping section, set Result Expression to:

{
  toolCallResult: {
    latitude: response.body.latitude,
    longitude: response.body.longitude,
    temperature_celsius: response.body.current.temperature_2m,
    wind_speed_kmh: response.body.current.wind_speed_10m,
    weather_code: response.body.current.weather_code
  }
}

This extracts the relevant fields from the Open-Meteo API response and returns them in a structure the LLM can interpret and summarize for the user.
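The mapping behaves like the following Python sketch against a sample Open-Meteo response shape; the values are illustrative, not real API output:

```python
# Sample shape of an Open-Meteo /v1/forecast response (values are illustrative).
response_body = {
    "latitude": 48.86,
    "longitude": 2.35,
    "current": {"temperature_2m": 21.4, "wind_speed_10m": 9.7, "weather_code": 2},
}

# Equivalent of the FEEL result expression: pick out the fields the LLM needs.
tool_call_result = {
    "latitude": response_body["latitude"],
    "longitude": response_body["longitude"],
    "temperature_celsius": response_body["current"]["temperature_2m"],
    "wind_speed_kmh": response_body["current"]["wind_speed_10m"],
    "weather_code": response_body["current"]["weather_code"],
}
```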

Test the new tool

Deploy the updated process and start a new instance. Try prompts like:

  • "What's the weather in Paris right now?"
  • "Is it windy in Tokyo?"
  • "Tell me the temperature in New York"

The LLM will recognize these as weather requests, select the Get current weather tool, provide the appropriate latitude and longitude values, and summarize the response in natural language.

Add your own tools

To add more tools to your agent, follow the same pattern:

  1. Add a task inside the ad-hoc sub-process and apply a connector or configure a job worker.
  2. Write a clear tool name and Documentation description so the LLM knows when to use it.
  3. Use fromAi() in input mappings to define the parameters the LLM must provide.
  4. Return toolCallResult in the result expression or output mapping.

At runtime, each tool call produces one toolCallResult, and the ad-hoc multi-instance output collection aggregates them into toolCallResults for the AI Agent connector.
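Conceptually, the aggregation works like this sketch; in practice, Camunda's multi-instance output collection performs this step for you:

```python
# Sketch of multi-instance output aggregation: each completed tool instance
# yields one toolCallResult; the sub-process collects them in order.
tool_call_outputs = [
    {"toolCallResult": {"temperature_celsius": 21.4}},
    {"toolCallResult": {"status": "uploaded"}},
]

# Equivalent of the output collection: gather every instance's result
# into the toolCallResults list the AI Agent connector hands back to the LLM.
tool_call_results = [o["toolCallResult"] for o in tool_call_outputs]
```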

tip

For more examples, review the tasks already available in this blueprint and the AI Agent tool definitions documentation.

Next steps

Now that you’ve built your first Camunda AI agent, you can tailor it further. For example:

  • Add and configure more tools.
  • Update the system prompt to adjust the AI agent's behavior.
  • Experiment with different model providers and configurations in the AI Agent connector.

Learn more about Camunda agentic orchestration and the AI Agent connector.

Camunda Academy

Register for the free Camunda 8 - Agentic Orchestration course to learn how to model, deploy, and manage AI agents in your end-to-end processes.