
AI Agent connector

Use the AI Agent outbound connector to integrate Large Language Models (LLMs) with AI agents.

About this connector

The AI Agent connector enables AI agents to integrate with an LLM to provide interaction/reasoning capabilities. This connector is designed for use with an ad-hoc sub-process in a feedback loop, providing automated user interaction and tool selection.

For example, use this connector to enable an AI agent to autonomously select and execute tasks within ad-hoc sub-processes by evaluating the current process context and determining the relevant tasks and tools to use in response. You can also use the AI Agent connector independently, although it is designed to be used with an ad-hoc sub-process to define the tools an AI agent can use.

Core features include:

| Feature | Description |
| --- | --- |
| LLM provider support | Supports a range of LLM providers, such as OpenAI and Anthropic. |
| Memory | Provides conversational/short-term memory handling to enable feedback loops. For example, this allows a user to ask follow-up questions to an AI agent response. |
| Tool calling | Support for an AI agent to interact with tasks within an ad-hoc sub-process, allowing use of all Camunda features such as connectors and user tasks (human-in-the-loop). Automatic tool resolution allows an AI agent to identify the tools available in an ad-hoc sub-process. |
tip

New to agentic orchestration?

How to use this connector

This connector is typically used in a feedback loop, with the process repeatedly looping back to the connector task while the AI agent works on a request.

For example, the following diagram shows a tool calling loop:

(Diagram: AI Agent connector tool calling feedback loop)

  1. A request is made to the AI Agent connector task, where the LLM determines what action to take.
  2. If the AI agent decides that further action is needed, the process enters the ad-hoc sub-process and calls any tools deemed necessary to resolve the request.
  3. The process loops back and re-enters the AI Agent connector task, where the LLM decides (with contextual memory) if more action is needed before the process can continue. The process loops in this manner until the AI agent decides the request is complete and passes its response to the next step in the process.
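
For example, the gateway that drives this loop can route into the ad-hoc sub-process only while the agent requests tools. A minimal sketch of the gateway condition as a FEEL expression, assuming the connector result variable is named agent:

= agent.toolCalls != null and count(agent.toolCalls) > 0

The inverse condition routes the process onward once the agent returns a final response instead of tool call requests.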

Feedback loop use cases

Typical feedback loop use cases for this connector include the following:

Tool calling

In combination with an ad-hoc sub-process, the AI Agent connector resolves the available tools and their input parameters, and passes these tool definitions to the LLM.

  • The LLM generates a response that might include tool calls (a request to call a tool, paired with input parameters).

  • If tool calls are requested, model the process to pass them to the ad-hoc sub-process and to return the tool call results to the AI Agent task by modeling the feedback loop.

Response interaction

After the agent returns a response without requesting any tool calls, model the process to act upon the response. For example, present the response to a user who can then ask follow-up questions back to the AI Agent connector.

As the agent preserves the context of the conversation, follow-up questions/tasks and handling of tool call results can relate to the previous interaction with the LLM, allowing the LLM to provide more relevant responses.

An important concept to understand is the Agent context process variable, which stores the information required to re-enter the AI Agent connector task with the same context as before. This variable is mapped as both an input and an output variable of the connector and is updated with each agent execution.

important

When modeling an AI agent, you must align the agent context input variable with the response variable/expression so that the context update is correctly passed to the next execution of the AI Agent connector task.
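
For example, a consistent pairing might look like the following (a sketch, assuming the connector result variable is named agent):

Agent Context (input):    =agent.context
Result variable (output): agent

With this pairing, each execution of the connector task reads the context written by the previous execution, so the feedback loop keeps the conversation state intact.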

Example conversation

The following is a high-level example conversation with the AI Agent connector, including both user and tool feedback loops. The conversational awareness provided by the agent context allows use cases such as the user only responding with Yes, please proceed, with the agent understanding what to do next.

# Initial input/user prompt
User: Is John Doe eligible for a credit card?

# Tool feedback loop
AI Agent: Call the `Check_Credit_Card_Eligibility` tool with the following parameters: {"name": "John Doe"}
<process routes through ad-hoc sub-process>
Tool Call Result: {"Check_Credit_Card_Eligibility": {"eligible": true}}

# User feedback loop
AI Agent: John Doe is eligible for a credit card. Would you like to proceed?
<process routes to a user task as no tool calls are requested>
User: Yes, please proceed.

AI Agent: Call the `Create_Credit_Card` tool with the following parameters: {"name": "John Doe"}
Tool Call Result: {"Create_Credit_Card": {"success": true}}

AI Agent: John Doe's credit card has been created successfully.

Prerequisites

The following prerequisites are required to use this connector:

Prerequisite: Set up your LLM model provider and authentication

Before using this connector, you must set up an account with access and authentication details for the supported LLM model provider you want to use.

For example:

  • To use an LLM model provided by Amazon Bedrock, you must have an AWS account with an access key and secret key to execute Converse actions.

  • For OpenAI, you must configure the OpenAI model and obtain an OpenAI API key to use for authentication.

Configuration

Model Provider

Select and configure authentication for the LLM model provider you want to use, from the following supported providers:

note
  • Different setup/authentication fields are shown depending on the provider you select.
  • Use connector secrets to store credentials and avoid exposing sensitive information directly from the process.
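
For example, credential fields such as API keys can reference a connector secret instead of a literal value. A sketch, assuming a secret named ANTHROPIC_API_KEY has been created in your cluster:

{{secrets.ANTHROPIC_API_KEY}}

This keeps the key out of the process definition and out of exported process data.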

Anthropic

Select this option to use an Anthropic Claude LLM model (uses the Anthropic Messages API).

| Field | Required | Description |
| --- | --- | --- |
| Anthropic API Key | Yes | Your Anthropic account API key for authorization to the Anthropic Messages API. |
info

For more information about Anthropic Claude LLM models, refer to the Claude models overview.

Bedrock

Select this option to use a model provided by the Amazon Bedrock service, using the Converse API.

| Field | Required | Description |
| --- | --- | --- |
| Region | Yes | The AWS region. Example: us-east-1 |
| Authentication | Yes | Select the authentication type you want to use to authenticate the connector with AWS. To learn more about configuring AWS authentication, see Amazon Bedrock connector authentication. |

Model availability depends on the region and model you use. You might need to request that a model be made available for your account. To learn more about configuring access to foundation models, refer to access to Amazon Bedrock foundation models.

info

For a list of Amazon Bedrock LLM models, refer to supported foundation models in Amazon Bedrock.

OpenAI

Select this option to use the OpenAI Chat Completion API.

| Field | Required | Description |
| --- | --- | --- |
| OpenAI API Key | Yes | Your OpenAI account API key for authorization. |
| Organization ID | No | If you belong to multiple organizations, specify the organization ID to use for API requests with this connector. |
| Project ID | No | If you access projects through a legacy user API key, specify the project ID to use for API requests with this connector. |
| Custom API endpoint | No | An optional custom API endpoint. |
| Custom headers | No | A map of custom headers to add to the request. |
info

To learn more about authentication to the OpenAI API, refer to the OpenAI Platform API reference.
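
For example, the Custom headers field accepts a FEEL context that maps header names to values. A sketch with hypothetical header names and a hypothetical traceId process variable:

= {
  "X-Request-Source": "camunda-ai-agent",
  "X-Trace-Id": traceId
}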

Model

Select the model you want to use for the selected provider, and specify any additional model parameters.

| Field | Required | Description |
| --- | --- | --- |
| Model | Yes | Specify the model ID for the model you want to use. Example: anthropic.claude-3-5-sonnet-20240620-v1:0 |
| Maximum tokens | No | The maximum number of tokens per request to allow in the generated response. |
| Maximum completion tokens | No | The maximum number of tokens per request to generate before stopping. |
| Temperature | No | A floating point number, typically between 0 and 1 (0 and 2 for OpenAI). The higher the number, the more randomness is injected into the response. |
| Top P | No | A floating point number, typically between 0 and 1. Recommended for advanced use cases only (you usually only need to use temperature). |
| Top K | No | An integer greater than 0. Recommended for advanced use cases only (you usually only need to use temperature). |
note
  • Different model parameter fields are shown depending on the provider/model you select. Additionally, some parameters may differ or have different value ranges (for example, OpenAI Temperature uses a range from 0 to 2, whereas other models use a range from 0 to 1).
  • For more information on each model parameter, refer to the provider documentation links in the element template.
  • Parameters that set maximum values (such as maximum tokens) are considered per LLM request, not for the whole conversation. Depending on the provider, the exact meaning of these parameters may vary.

System Prompt

The System Prompt is a crucial part of the AI Agent connector configuration, as it defines the behavior and goal of the agent, and instructs the LLM on how to act.

System Prompt (required)

Specify a system prompt to define how the LLM should act.

  • A minimal example system prompt is provided as a starting point for you to customize.

  • You can use FEEL expressions or inject parameters into the text in this field, using the {{parameter}} syntax to inject any parameters defined in the System Prompt Parameters field (a FEEL context).

    Example: {{current_date_time}}.

System Prompt Parameters (optional)

Define a map of parameters you can use in {{parameter}} format in the System Prompt field.

The default parameters (current_date, current_time, current_date_time) do not need to be explicitly defined in this field.
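
For example, the System Prompt Parameters field holds a FEEL context whose keys become available as {{parameter}} placeholders in the System Prompt. A sketch using a hypothetical companyName process variable:

= {
  agent_name: "Support Agent",
  company_name: companyName
}

System Prompt excerpt: You are {{agent_name}}, a helpful assistant for {{company_name}}. The current date and time is {{current_date_time}}.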

User Prompt

The User Prompt contains the actual request to the LLM model.

User Prompt (required)

This can contain either the initial request or a follow-up request as part of a response interaction feedback loop.

  • The value provided as part of this field is added to the conversation memory and passed to the LLM call.

  • For example, in the example conversation, this would be the messages prefixed with User:.

  • You can use FEEL expressions or inject parameters into the text in this field, using the {{parameter}} syntax to inject any parameters defined in the User Prompt Parameters field (a FEEL context).

    Example: {{current_date_time}}.

User Prompt Parameters (optional)

Define a map of parameters you can use in {{parameter}} format in the User Prompt field.

The default parameters (current_date, current_time, current_date_time) do not need to be explicitly defined in this field.

Documents (optional)

Add a list of document references to allow an AI agent to interact with documents and images.

  • This list is internally resolved and passed to the LLM model if the document type is supported.

  • LLM APIs provide a way to specify the user prompt as a list of content blocks. If document references are passed, they are resolved to a corresponding content block and passed as part of the user message.

  • For examples of how LLM providers accept document content blocks, refer to the Anthropic and OpenAI documentation.
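
For example, the Documents field accepts a FEEL list of document references, typically produced by an upstream task. A sketch, assuming a previous step stored a document reference in an invoiceDocument process variable:

= [ invoiceDocument ]

Each referenced document is resolved to a content block and passed with the user message, provided its file type is supported (see below).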

Supported document types

As file type support varies by LLM provider/model, you must test your document use case with the provider you are using.

| File type | Supported | Description |
| --- | --- | --- |
| Text | Yes | Text files (MIME types matching text/*, application/xml, application/json, or application/yaml) are passed as plain text content blocks. |
| PDF | Yes | PDF files (MIME type application/pdf) are passed as base64-encoded content blocks. |
| Image | Yes | Image files (MIME types matching image/jpg, image/png, image/gif, or image/webp) are passed as base64-encoded content blocks. |
| Audio/video/other | No | Audio and video files are not currently supported and result in an error if passed, as do all other unsupported file types not listed here. |
info

To learn more about storing, tracking, and managing documents in Camunda 8, see document handling.

Tools

Specify the tool resolution for an accompanying ad-hoc sub-process.

Ad-hoc sub-process ID (optional)

Specify the element ID of the ad-hoc sub-process to use for tool resolution (see Tool Resolution).

When the process enters the AI Agent connector task, the connector resolves the tools available in the ad-hoc sub-process and passes them to the LLM as part of the prompt.

Tool Call Results (optional)

Specify the results collection of the ad-hoc sub-process multi-instance execution.

Example: =toolCallResults

note
  • Leave this section empty if using this connector independently, without an accompanying ad-hoc sub-process.
  • To actually use the tools, you must model your process to include a tools feedback loop, routing into the ad-hoc sub-process and back to the AI agent connector. See example tools feedback loop.
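
For example, a minimal wiring between the connector and the ad-hoc sub-process might look like the following (a sketch, assuming the connector result variable is named agent and the ad-hoc sub-process is configured as a multi-instance over the requested tool calls):

Ad-hoc sub-process input collection:  =agent.toolCalls
Ad-hoc sub-process output collection: toolCallResults
Tool Call Results (connector input):  =toolCallResults

The exact input/output element mappings depend on how the tools in your sub-process consume tool call parameters and produce results.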

Memory

Configure the agent's short-term/conversational memory.

Agent Context (required)

Specify an agent context variable to store all relevant data for the agent to support a feedback loop between user requests, tool calls, and LLM responses. Make sure this variable points to the context variable that is returned from the agent response.

This is an important variable required to make a feedback loop work correctly. This variable must be aligned with the Output mapping Result variable and Result expression for this connector.

Example: =agent.context

Maximum messages (optional)

Specify the maximum number of messages to keep in context and pass to the LLM on every call.

  • Configuring this is a trade-off between cost/tokens and the context window supported by the model you use.

  • When the conversation exceeds the maximum number of messages, the oldest messages from past feedback loops are removed first. The system prompt is always kept in the context.

Limits

Set limits for the agent interaction to prevent unexpected behavior or unexpected cost due to infinite loops.

| Field | Required | Description |
| --- | --- | --- |
| Maximum model calls | No | Specify the maximum number of model calls. As a safeguard, this limit defaults to 10 if you do not configure a value. |
caution

Despite these limits, you must closely monitor your LLM API usage and cost, and set appropriate limits on the provider side.

Response

Configure the response from the AI Agent connector for further processing.

An LLM call typically returns one text content block plus additional metadata such as token usage, but can contain multiple content blocks, depending on the LLM provider and selected model.

Include text output (optional)

Returns the first text content block of the LLM response as responseText.

  • Typically a good option if you want to use the agent's text output for further processing.

  • This option is selected by default.

Include assistant message (optional)

Returns the entire message returned by the LLM as responseMessage, including any additional content blocks and metadata.

Select this option if you need more than just the first response text.

If you select both options, the response object contains both responseText and responseMessage fields, for example:

{
  "responseText": "Based on the result from the GetDateAndTime function, the current date and time is:\n\nJune 2, 2025, 09:15:38 AM (Central European Summer Time).",
  "responseMessage": {
    "role": "assistant",
    "content": [
      {
        "type": "text",
        "text": "Based on the result from the GetDateAndTime function, the current date and time is:\n\nJune 2, 2025, 09:15:38 AM (Central European Summer Time)."
      }
    ],
    "metadata": {
      "framework": {
        "tokenUsage": {
          "inputTokenCount": 1563,
          "outputTokenCount": 95,
          "totalTokenCount": 1658
        },
        "finishReason": "STOP"
      }
    }
  }
}

To retrieve the response text from the responseMessage object, use the following FEEL expression (assuming the response variable is named agent):

agent.responseMessage.content[type = "text"][1].text

Output mapping

Specify the process variables into which you want to map the AI Agent connector response.

Result variable (required)

The result of the AI Agent connector is a context containing the following fields:

  • context: The updated Agent Context. Make sure you map this to a process variable and re-inject this variable in the Agent Context input field if your AI agent is part of a feedback loop.

  • toolCalls: Tool call requests provided by the LLM that need to be routed to the ad-hoc sub-process.

  • responseText: The last response text provided by the LLM if the Include text output option is enabled in the Response section.

  • responseMessage: The last response message provided by the LLM if the Include assistant message option is enabled in the Response section.

Result expression (optional)

In addition, you can choose to unpack the content of the response into multiple process variables using the Result expression field, as a FEEL context expression.
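
For example, a Result expression can unpack individual response fields into dedicated process variables. A minimal sketch, assuming the Include text output option is enabled and using a hypothetical agentResponseText variable name (within a Result expression, the connector response is accessible as the response variable):

= {
  agentResponseText: response.responseText
}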
tip

An easy approach to get started with modeling your first AI Agent is to use the result variable (for example, agent) and configure the Agent Context as agent.context.

info

To learn more about output mapping, see variable/response mapping.

Error handling

If an error occurs, the AI Agent connector throws an error and includes the error response in the error variable in Operate.

| Field | Required | Description |
| --- | --- | --- |
| Error expression | No | You can handle an AI Agent connector error using an Error Boundary Event and error expressions. |
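
For example, an Error expression can convert a connector failure into a BPMN error that a boundary event can catch. A minimal sketch, using a hypothetical AI_AGENT_ERROR error code and assuming the connector error is available in the expression as the error variable:

= if error.code != null then bpmnError("AI_AGENT_ERROR", error.message) else null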

Retries

Specify connector execution retry behavior if execution fails.

| Field | Required | Description |
| --- | --- | --- |
| Retries | No | The number of times the connector retries execution if it fails. |
| Retry backoff | No | A custom retry backoff interval between retries, instead of the default behavior of retrying immediately. |

Execution listeners

Add and manage execution listeners to allow users to react to events in the workflow execution lifecycle by executing custom logic.

Additional resources