AI Agent connector
Use the AI Agent outbound connector to integrate Large Language Models (LLMs) with AI agents.
About this connector
The AI Agent connector enables AI agents to integrate with an LLM to provide interaction/reasoning capabilities. This connector is designed for use with an ad-hoc sub-process in a feedback loop, providing automated user interaction and tool selection.
For example, use this connector to enable an AI agent to autonomously select and execute tasks within ad-hoc sub-processes by evaluating the current process context and determining the relevant tasks and tools to use in response. You can also use the AI Agent connector independently, although it is designed to be used with an ad-hoc sub-process to define the tools an AI agent can use.
Core features include:
Feature | Description |
---|---|
LLM provider support | Supports a range of LLM providers, such as OpenAI and Anthropic. |
Memory | Provides conversational/short-term memory handling to enable feedback loops. For example, this allows a user to ask follow-up questions to an AI agent response. |
Tool calling | Support for an AI agent to interact with tasks within an ad-hoc sub-process, allowing use of all Camunda features such as connectors and user tasks (human-in-the-loop). Automatic tool resolution allows an AI agent to identify the tools available in an ad-hoc sub-process. |
New to agentic orchestration?
- See the example AI Agent connector integration for a worked example of a simple AI agent feedback loop model.
- See additional resources for examples of how you can use the AI Agent connector.
How to use this connector
This connector is typically used in a feedback loop, with the process repeatedly looping back to the connector task during an AI agent interaction.
For example, the following diagram shows a tool calling loop:
- A request is made to the AI agent connector task, where the LLM determines what action to take.
- If the AI agent decides that further action is needed, the process enters the ad-hoc sub-process and calls any tools deemed necessary to satisfactorily resolve the request.
- The process loops back and re-enters the AI agent connector task, where the LLM decides (with contextual memory) if more action is needed before the process can continue. The process loops repeatedly in this manner until the AI agent decides it is complete, and passes the AI agent response to the next step in the process.
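For example, an exclusive gateway after the connector task can route into the ad-hoc sub-process only while the agent still requests tool calls. The following is a minimal FEEL sketch of such a gateway condition, assuming the connector result variable is named `agent` and exposes a `toolCalls` list (the exact field name may differ in your setup):

```feel
// Hypothetical gateway condition: enter the tools loop while the agent
// still requests tool calls ("agent" and "toolCalls" are assumptions)
agent.toolCalls != null and count(agent.toolCalls) > 0
```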
Feedback loop use cases
Typical feedback loop use cases for this connector include the following:
Use case | Description |
---|---|
Tool calling | In combination with an ad-hoc sub-process, the AI Agent connector resolves the available tools and their input parameters, and passes these tool definitions to the LLM. |
Response interaction | After returning a response (and without calling any tools), model the process to act upon the response. For example, present the response to a user who can then ask follow-up questions back to the AI Agent connector. |
As the agent preserves the context of the conversation, follow-up questions/tasks and handling of tool call results can relate to the previous interaction with the LLM, allowing the LLM to provide more relevant responses.
An important concept to understand is the Agent context process variable, which stores the information required to re-enter the AI Agent connector task with the same context as before. This variable is mapped as both an input and an output variable of the connector, and is updated with each agent execution.
When modeling an AI agent, you must align the agent context input variable with the response variable/expression, so that the context update is correctly passed to the next execution of the AI Agent connector task.
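For example, a minimal consistent setup (a sketch, assuming the result variable is named `agent`, matching the tip in Output mapping):

```text
Result variable:        agent
Agent Context (input):  = agent.context
```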
Example conversation
The following is a high-level example conversation with the AI Agent connector, including both user and tool feedback loops. The conversational awareness provided by the agent context allows use cases such as the user responding only with `Yes, please proceed`, with the agent understanding what to do next.
```text
# Initial input/user prompt
User: Is John Doe eligible for a credit card?

# Tool feedback loop
AI Agent: Call the `Check_Credit_Card_Eligibility` tool with the following parameters: {"name": "John Doe"}
<process routes through ad-hoc sub-process>
Tool Call Result: {"Check_Credit_Card_Eligibility": {"eligible": true}}

# User feedback loop
AI Agent: John Doe is eligible for a credit card. Would you like to proceed?
<process routes to a user task as no tool calls are requested>
User: Yes, please proceed.
AI Agent: Call the `Create_Credit_Card` tool with the following parameters: {"name": "John Doe"}
Tool Call Result: {"Create_Credit_Card": {"success": true}}
AI Agent: John Doe's credit card has been created successfully.
```
Prerequisites
The following prerequisites are required to use this connector:
Prerequisite | Description |
---|---|
Set up your LLM model provider and authentication | Before using this connector, you must have set up an account with access and authentication details for the supported LLM provider you want to use (for example, an Anthropic API key, AWS credentials for Amazon Bedrock, or an OpenAI API key). |
Configuration
Model Provider
Select and configure authentication for the LLM model provider you want to use, from the following supported providers:
- Anthropic (Claude models)
- Amazon Bedrock
- OpenAI
- Different setup/authentication fields are shown depending on the provider you select.
- Use connector secrets to store credentials and avoid exposing sensitive information directly from the process.
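For example, an API key field can reference a connector secret instead of a literal value (a sketch, assuming you created a secret named `ANTHROPIC_API_KEY` in your cluster):

```text
Anthropic API Key: {{secrets.ANTHROPIC_API_KEY}}
```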
Anthropic
Select this option to use an Anthropic Claude LLM model (uses the Anthropic Messages API).
Field | Required | Description |
---|---|---|
Anthropic API Key | Yes | Your Anthropic account API Key for authorization to the Anthropic Messages API. |
For more information about Anthropic Claude LLM models, refer to the Claude models overview.
Bedrock
Select this option to use a model provided by the Amazon Bedrock service, using the Converse API.
Field | Required | Description |
---|---|---|
Region | Yes | The AWS region. Example: us-east-1 |
Authentication | Yes | Select the authentication type you want to use to authenticate the connector with AWS. To learn more about configuring AWS authentication, see Amazon Bedrock connector authentication. |
Model availability depends on the region and model you use. You might need to request that a model is made available for your account. To learn more about configuring access to foundation models, refer to access to Amazon Bedrock foundation models.
For a list of Amazon Bedrock LLM models, refer to supported foundation models in Amazon Bedrock.
OpenAI
Select this option to use the OpenAI Chat Completion API.
Field | Required | Description |
---|---|---|
OpenAI API Key | Yes | Your OpenAI account API Key for authorization. |
Organization ID | No | If you belong to multiple organizations, specify the organization ID to use for API requests made with this connector. |
Project ID | No | If you access projects through a legacy user API key, specify the project ID to use for API requests with this connector. |
Custom API endpoint | No | Optional custom API endpoint. |
Custom headers | No | Map of custom headers to add to the request. |
To learn more about authentication to the OpenAI API, refer to the OpenAI platform API reference.
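For example, the Custom headers field accepts a FEEL map (a sketch; the header name and value are placeholders for your own endpoint's requirements):

```feel
// Hypothetical custom headers for a custom OpenAI-compatible endpoint
= {
  "X-Example-Header": "example-value"
}
```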
Model
Select the model you want to use for the selected provider, and specify any additional model parameters.
Field | Required | Description |
---|---|---|
Model | Yes | Specify the model ID for the model you want to use. Example: `gpt-4o` |
Maximum tokens | No | The maximum number of tokens per request to allow in the generated response. |
Maximum Completion Tokens | No | The maximum number of tokens per request to generate before stopping. |
Temperature | No | Floating point number, typically between 0 and 1 (0 and 2 for OpenAI). The higher the number, the more randomness will be injected into the response. |
top P | No | Floating point number, typically between 0 and 1. Recommended for advanced use cases only (usually you only need to use temperature). |
top K | No | Integer greater than 0. Recommended for advanced use cases only (you usually only need to use temperature). |
- Different model parameter fields are shown depending on the provider/model you select. Additionally, some parameters may differ or have different value ranges (for example, OpenAI Temperature uses a range between 0 and 2, whereas other models use a range between 0 and 1).
- For more information on each model parameter, refer to the provider documentation links in the element template.
- Parameters that set maximum values (such as maximum tokens) are considered per LLM request, not for the whole conversation. Depending on the provider, the exact meaning of these parameters may vary.
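For example, a configuration biased toward deterministic answers might look like the following sketch (illustrative values only; `gpt-4o` is one example OpenAI model ID, and valid ranges vary by provider):

```text
Model:           gpt-4o
Maximum tokens:  1024
Temperature:     0.2
```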
System Prompt
The System Prompt is a crucial part of the AI Agent connector configuration, as it defines the behavior and goal of the agent, and instructs the LLM on how to act.
Field | Required | Description |
---|---|---|
System Prompt | Yes | Specify a system prompt to define how the LLM should act. |
System Prompt Parameters | No | Define a map of parameters you can use in the system prompt text. A set of default parameters is also available without being explicitly defined. |
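For example, a parameterized system prompt might look like the following sketch. The parameter names `agentName` and `domain` are hypothetical, and this assumes parameters are referenced as `{{name}}` placeholders in the prompt text (check the element template documentation for the exact syntax):

```text
System Prompt:
  You are {{agentName}}, a helpful assistant answering {{domain}} questions.
  Only answer using the results of the tools available to you.

System Prompt Parameters:
  = { agentName: "SupportBot", domain: "billing" }
```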
User Prompt
The User Prompt contains the actual request to the LLM model.
Field | Required | Description |
---|---|---|
User Prompt | Yes | Specify the user prompt. This could contain either the initial request or a follow-up request as part of a response interaction feedback loop. |
User Prompt Parameters | No | Define a map of parameters you can use in the user prompt text. A set of default parameters is also available without being explicitly defined. |
Documents | No | Add a document references list to allow an AI agent to interact with documents and images. |
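For example, the User Prompt can be assembled dynamically from process variables with a FEEL expression (a sketch; `customerName` is a hypothetical process variable):

```feel
// Hypothetical user prompt built from a process variable
= "Is " + customerName + " eligible for a credit card?"
```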
Supported document types
As file type support varies by LLM provider/model, you must test your document use case with the provider you are using.
File type | Supported | Description |
---|---|---|
Text | Yes | Text files (MIME types matching `text/*`, `application/xml`, `application/json`, or `application/yaml`) are passed as plain text content blocks. |
PDF | Yes | PDF files (MIME type `application/pdf`) are passed as base64-encoded content blocks. |
Image | Yes | Image files (MIME types matching `image/jpg`, `image/png`, `image/gif`, or `image/webp`) are passed as base64-encoded content blocks. |
Audio/video/other | No | Audio and video files are not currently supported and result in an error if passed. All other unsupported file types not listed here also result in an error if passed. |
To learn more about storing, tracking, and managing documents in Camunda 8, see document handling.
Tools
Specify the tool resolution for an accompanying ad-hoc sub-process.
Field | Required | Description |
---|---|---|
Ad-hoc sub-process ID | No | Specify the element ID of the ad-hoc sub-process to use for tool resolution (see Tool Resolution). When entering the AI Agent connector task, the connector resolves the tools available in the ad-hoc sub-process and passes these to the LLM as part of the prompt. |
Tool Call Results | No | Specify the results collection of the ad-hoc sub-process multi-instance execution. |
- Leave this section empty if using this connector independently, without an accompanying ad-hoc sub-process.
- To actually use the tools, you must model your process to include a tools feedback loop, routing into the ad-hoc sub-process and back to the AI Agent connector (see the example tools feedback loop and the sketch below).
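For example, a typical tools configuration for the loop described above (a sketch; `Agent_Tools` and `toolCallResults` are hypothetical names from your own model):

```text
Ad-hoc sub-process ID:  Agent_Tools
Tool Call Results:      = toolCallResults
```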
Memory
Configure the Agent's short-term/conversational memory.
Field | Required | Description |
---|---|---|
Agent Context | Yes | Specify an agent context variable to store all relevant data for the agent to support a feedback loop between user requests, tool calls, and LLM responses. Make sure this variable points to the `context` field of the connector result, and that it is aligned with the Output mapping Result variable and Result expression for this connector, as this is required to make a feedback loop work correctly. Example: `agent.context` |
Maximum messages | No | Specify the maximum number of messages to keep in context and pass to the LLM on every call. |
Limits
Set limits for the agent interaction to prevent unexpected behavior or unexpected cost due to infinite loops.
Field | Required | Description |
---|---|---|
Maximum model calls | No | Specify the maximum number of model calls. As a safeguard, this limit defaults to 10 if you do not configure a value. |
Despite these limits, you must closely monitor your LLM API usage and cost, and set appropriate limits on the provider side.
Response
Configure the response from the AI Agent connector for further processing.
The LLM call typically returns one text content block plus additional metadata such as token usage, but the response could contain multiple content blocks, depending on the LLM provider and selected model.
Field | Required | Description |
---|---|---|
Include text output | No | Returns the first text block returned by the LLM as `responseText`. Select this option if you only need the text of the response. |
Include assistant message | No | Returns the entire message returned by the LLM as `responseMessage`, including any additional content blocks and metadata. Select this option if you need more than just the first response text. |
If you select both options, the response object contains both `responseText` and `responseMessage` fields, for example:
```json
{
  "responseText": "Based on the result from the GetDateAndTime function, the current date and time is:\n\nJune 2, 2025, 09:15:38 AM (Central European Summer Time).",
  "responseMessage": {
    "role": "assistant",
    "content": [
      {
        "type": "text",
        "text": "Based on the result from the GetDateAndTime function, the current date and time is:\n\nJune 2, 2025, 09:15:38 AM (Central European Summer Time)."
      }
    ],
    "metadata": {
      "framework": {
        "tokenUsage": {
          "inputTokenCount": 1563,
          "outputTokenCount": 95,
          "totalTokenCount": 1658
        },
        "finishReason": "STOP"
      }
    }
  }
}
```
To retrieve the response text from the `responseMessage` object, use the following FEEL expression (assuming the response variable is named `agent`):

```feel
agent.responseMessage.content[type = "text"][1].text
```
Output mapping
Specify the process variables that you want to map the AI Agent connector response into.
Field | Required | Description |
---|---|---|
Result variable | Yes | The result of the AI Agent connector is a context containing the agent context and, depending on the response options you select, the `responseText` and `responseMessage` fields. |
Result expression | No | In addition, you can choose to unpack the content of the response into multiple process variables using the Result expression field, as a FEEL Context Expression. |
An easy approach to get started with modeling your first AI agent is to use the result variable (for example, `agent`) and configure the Agent Context as `agent.context`.
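For example, a Result expression can unpack selected fields of the response into separate process variables. The following is a sketch that assumes the standard connector `response` variable and the `context`/`responseText` fields described in this section:

```feel
// Unpack selected response fields into process variables
= {
  agentContext: response.context,
  answer: response.responseText
}
```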
To learn more about output mapping, see variable/response mapping.
Error handling
If an error occurs, the AI Agent connector throws an error and includes the error response in the error variable in Operate.
Field | Required | Description |
---|---|---|
Error expression | No | You can handle an AI Agent connector error using an Error Boundary Event and error expressions. |
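For example, an Error expression can convert a specific failure into a BPMN error that a boundary event can catch (a sketch using the connector `bpmnError` FEEL function; the error code and message pattern are hypothetical):

```feel
// Raise a BPMN error for a hypothetical rate-limit failure
= if matches(error.message, ".*rate limit.*")
  then bpmnError("AI_AGENT_RATE_LIMITED", error.message)
  else null
```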
Retries
Specify connector execution retry behavior if execution fails.
Field | Required | Description |
---|---|---|
Retries | No | Specify the number of times the connector retries execution if it fails. |
Retry backoff | No | Specify a custom Retry backoff interval between retries instead of the default behavior of retrying immediately. |
Execution listeners
Add and manage execution listeners to allow users to react to events in the workflow execution lifecycle by executing custom logic.
Additional resources
- Intelligent by Design: A Step-by-Step Guide to AI Task Agents in Camunda.
- AI Email Support Agent Blueprint on the Camunda Marketplace.
- Working examples in the AI integration GitHub repository.