
AI Agent Task connector

Implement an AI agent using an AI Agent connector applied to a service task, paired with an optional ad-hoc sub-process to provide tools usable by the AI.

info
  • For more information and usage examples, see AI Agent Task.
  • The example integration page outlines how to model an agentic AI process using the AI Agent Task implementation.
[Image: AI Agent Task with tool calling feedback loop]

Configuration

Model Provider

Select and configure authentication for the LLM provider you want to use, from the following supported providers:

note
  • Different setup/authentication fields are shown depending on the provider you select.
  • Use connector secrets to store credentials and avoid exposing sensitive information directly from the process.

Anthropic

Select this option to use an Anthropic Claude model (via the Anthropic Messages API).

| Field | Required | Description |
| --- | --- | --- |
| Anthropic API key | Yes | Your Anthropic account API key for authorization to the Anthropic Messages API. |
info

For more information about Anthropic Claude LLM models, refer to the Claude models overview.

Bedrock

Select this option to use a model provided by the Amazon Bedrock service, using the Converse API.

| Field | Required | Description |
| --- | --- | --- |
| Region | Yes | The AWS region. Example: us-east-1 |
| Authentication | Yes | Select the authentication type you want to use to authenticate the connector with AWS. To learn more about configuring AWS authentication, see Amazon Bedrock connector authentication. |

Model availability depends on the region and model you use. You might need to request that a model be made available for your account. To learn more about configuring access to foundation models, refer to access to Amazon Bedrock foundation models.

info

For a list of Amazon Bedrock LLM models, refer to supported foundation models in Amazon Bedrock.

Azure OpenAI

Select this option to use Azure OpenAI models.

| Field | Required | Description |
| --- | --- | --- |
| Endpoint | Yes | The Azure OpenAI endpoint URL. Example: https://<your-resource-name>.openai.azure.com/ |
| Authentication | Yes | Select the authentication type you want to use to authenticate the connector with Azure OpenAI. |

Two authentication methods are currently supported:

  • API key: Authenticate using an Azure OpenAI API key, available in the Azure AI Foundry portal.

  • Client credentials: Authenticate using a client ID and secret. This method requires registering an application in Microsoft Entra ID. Provide the following fields:

    • Client ID – The Microsoft Entra application ID.
    • Client secret – The application’s client secret.
    • Tenant ID – The Microsoft Entra tenant ID.
    • Authority host – (Optional) The authority host URL. Defaults to https://login.microsoftonline.com/. This can also be an OAuth 2.0 token endpoint.
note

To use an Azure OpenAI model, you must first deploy it in the Azure AI Foundry portal. For details, see Deploy a model in Azure OpenAI. The deployment ID must then be provided in the Model field.

Google Vertex AI

Select this option to use Google Vertex AI models.

| Field | Required | Description |
| --- | --- | --- |
| Project ID | Yes | The Google Cloud project ID. |
| Region | Yes | The region where AI inference should take place. |
| Authentication | Yes | Select the authentication type to use for connecting to Google Cloud. |

Two authentication methods are currently supported:

  • Service Account Credentials: Authenticate using a service account key in JSON format.
  • Application Default Credentials (ADC): Authenticate using the default credentials available in your environment.
    This method is only supported in Self-Managed or hybrid environments.
    To set up ADC in a local development environment, follow the instructions here.
info

For more information about Google Vertex AI models, see the Vertex AI documentation.

OpenAI

Select this option to use the OpenAI Chat Completion API.

| Field | Required | Description |
| --- | --- | --- |
| OpenAI API key | Yes | Your OpenAI account API key for authorization. |
| Organization ID | No | If you belong to multiple organizations, specify the organization ID to use for API requests made with this connector. |
| Project ID | No | If you access projects through a legacy user API key, specify the project ID to use for API requests made with this connector. |
info

To learn more about authentication to the OpenAI API, refer to the OpenAI platform API reference.

OpenAI-compatible

Select this option to use an LLM provider that exposes OpenAI-compatible endpoints.

| Field | Required | Description |
| --- | --- | --- |
| API endpoint | Yes | The base URL of the OpenAI-compatible endpoint. Example value: https://api.your-llm-provider.com/v1 |
| API key | No | The API key for authentication. Leave blank if using HTTP headers for authentication. If an Authorization header is specified in the headers, the API key is ignored. |
| Headers | No | Optional HTTP headers to include in the request to the OpenAI-compatible endpoint. |
note

A Custom parameters field is available in the model parameters to provide any additional parameters supported by your OpenAI-compatible provider.
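
For illustration, a minimal sketch of what such a custom parameters map could look like, assuming the field accepts a FEEL context; the parameter names shown are hypothetical and depend entirely on your OpenAI-compatible provider:

```
// Custom parameters (FEEL context; parameter names are hypothetical
// and must be supported by your OpenAI-compatible provider)
= {
  "repetition_penalty": 1.1,
  "seed": 42
}
```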

Model

Select the model you want to use for the selected provider, and specify any additional model parameters.

| Field | Required | Description |
| --- | --- | --- |
| Model | Yes | Specify the model ID of the model you want to use. Example: anthropic.claude-3-5-sonnet-20240620-v1:0 |
| Maximum tokens | No | The maximum number of tokens per request to allow in the generated response. |
| Maximum completion tokens | No | The maximum number of tokens per request to generate before stopping. |
| Temperature | No | Floating point number, typically between 0 and 1 (0 and 2 for OpenAI). The higher the number, the more randomness is injected into the response. |
| top P | No | Floating point number, typically between 0 and 1. Recommended for advanced use cases only (usually you only need to adjust temperature). |
| top K | No | Integer greater than 0. Recommended for advanced use cases only (usually you only need to adjust temperature). |
note
  • Different model parameter fields are shown depending on the provider/model you select. Additionally, some parameters may differ or have different value ranges (for example, OpenAI Temperature uses a range between 0 and 2, whereas other models use a range between 0 and 1).
  • For more information on each model parameter, refer to the provider documentation links in the element template.
  • Parameters that set maximum values (such as maximum tokens) are considered per LLM request, not for the whole conversation. Depending on the provider, the exact meaning of these parameters may vary.

System Prompt

The System Prompt is a crucial part of the AI Agent connector configuration, as it defines the behavior and goal of the agent and instructs the LLM on how to act.

| Field | Required | Description |
| --- | --- | --- |
| System prompt | Yes | Specify a system prompt to define how the LLM should act. A minimal example system prompt is provided as a starting point for you to customize. You can use FEEL expressions to add dynamic values into the text. |
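
For illustration, a minimal sketch of a system prompt provided as a FEEL expression; the userLanguage process variable is a hypothetical example of a dynamic value:

```
// System prompt as a FEEL expression (userLanguage is a hypothetical process variable)
= "You are a polite customer support assistant. " +
  "Always answer in " + userLanguage + ". " +
  "If you cannot answer a request, say so instead of guessing."
```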

User Prompt

The User Prompt contains the actual request to the LLM.

| Field | Required | Description |
| --- | --- | --- |
| User prompt | Yes | This can contain either the initial request or a follow-up request as part of a response interaction feedback loop. The value provided in this field is added to the conversation memory and passed to the LLM call. For example, in the example conversation, these would be the messages prefixed with User:. You can use FEEL expressions to add dynamic values into the text. |
| Documents | No | Add a list of document references to allow the AI agent to interact with documents and images. The list is internally resolved and passed to the LLM if the document type is supported. LLM APIs provide a way to specify the user prompt as a list of content blocks; if document references are passed, they are resolved to corresponding content blocks and passed as part of the user message. For examples of how LLM providers accept document content blocks, refer to the Anthropic and OpenAI documentation. |
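
For illustration, a sketch of how these fields might be filled, assuming hypothetical process variables userRequest (a string) and invoiceDocument (a document reference, for example from a form upload):

```
// User prompt (FEEL expression; userRequest is a hypothetical process variable)
= "Answer the following customer request based on the attached invoice: " + userRequest

// Documents (FEEL list of document references; invoiceDocument is hypothetical)
= [invoiceDocument]
```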

Supported document types

As file type support varies by LLM provider/model, you must test your document use case with the provider you are using.

| File type | Supported | Description |
| --- | --- | --- |
| Text | Yes | Text files (MIME types matching text/*, application/xml, application/json, or application/yaml) are passed as plain text content blocks. |
| PDF | Yes | PDF files (MIME type application/pdf) are passed as base64-encoded content blocks. |
| Image | Yes | Image files (MIME types matching image/jpg, image/png, image/gif, or image/webp) are passed as base64-encoded content blocks. |
| Audio/video/other | No | Audio and video files are not currently supported and result in an error if passed. All other unsupported file types also result in an error if passed. |
info

To learn more about storing, tracking, and managing documents in Camunda 8, see document handling.

Tools

Specify the tool resolution for an accompanying ad-hoc sub-process.

| Field | Required | Description |
| --- | --- | --- |
| Ad-hoc sub-process ID | No | Specify the element ID of the ad-hoc sub-process to use for tool resolution (see Tool Definitions). When entering the AI Agent connector, the connector resolves the tools available in the ad-hoc sub-process and passes them to the LLM as part of the prompt. |
| Tool call results | No | Specify the results collection of the ad-hoc sub-process multi-instance execution. Example: =toolCallResults |

note
  • Leave this section empty if using this connector independently, without an accompanying ad-hoc sub-process.
  • To actually use the tools, you must model your process to include a tools feedback loop, routing into the ad-hoc sub-process and back to the AI agent connector. See example tools feedback loop.
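
As an illustrative sketch of that wiring, assuming the agent's result variable is named agent and the ad-hoc sub-process is configured as a multi-instance activity (all names below are examples, not fixed requirements):

```
// Multi-instance configuration of the ad-hoc sub-process (all names are examples)
Input collection:  = agent.toolCalls    // tool call requests produced by the agent
Input element:     toolCall             // one tool call request per instance
Output collection: toolCallResults      // referenced in the Tool call results field
```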

Memory

Configure the agent's short-term/conversational memory.

For the AI Agent Task implementation, the agent context field is required to enable a feedback loop between user requests, tool calls, and LLM responses.

| Field | Required | Description |
| --- | --- | --- |
| Agent context | Yes | Specify an agent context variable to store all relevant data for the agent to support a feedback loop between user requests, tool calls, and LLM responses. Make sure this variable points to the context variable returned from the agent response. This variable is required for the feedback loop to work correctly, and must be aligned with the Output mapping Result variable and Result expression for this connector. Avoid reusing the agent context variable across different agent tasks: define a dedicated result variable name for each agent and align it in the context and result configuration. Examples: =agent.context, =anotherAgent.context |

Depending on your use case, you can store the conversation memory in different storage backends.

| Field | Required | Description |
| --- | --- | --- |
| Memory storage type | Yes | Specify how the conversation memory is stored. In Process (part of agent context): conversation messages are stored as a process variable and are subject to variable size limitations; this is the default value. Camunda Document Storage: messages are stored as a JSON document in document storage. Custom Implementation (Hybrid/Self-Managed only): a custom storage implementation using a customized connector runtime. |
| Context window size | No | Specify the maximum number of messages to pass to the LLM on every call. Defaults to 20 if not configured. Configuring this is a trade-off between cost/tokens and the context window supported by the model you use. When the conversation exceeds the configured context window size, the oldest messages from past feedback loops are omitted from the model API call first. The system prompt is always kept in the list of messages passed to the LLM. |

In-process storage

Messages passed between the AI agent and the model are stored within the agent context variable and are directly visible in Operate.

This is suitable for many use cases, but be aware of variable size limitations, which cap the amount of data that can be stored in a process variable.

Camunda document storage

Messages passed between the AI agent and the model are not directly available as a process variable; instead, the agent context references a JSON document stored in document storage.

Documents are subject to expiration. To avoid losing the conversation history, estimate the expected lifetime of your process and configure the document time-to-live (TTL) accordingly.

| Field | Required | Description |
| --- | --- | --- |
| Document TTL | No | Time-to-live (TTL) for documents containing the conversation history. Use this field to set a custom TTL matching your expected process lifetime. The default cluster TTL is used if this value is not configured. |
| Custom document properties | No | Optional map of properties to store with the document. Use this option to reference custom metadata you might want to use when further processing conversation documents. |
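
For illustration, hedged example values for these fields; the TTL is assumed to be an ISO 8601 duration, and the property keys and values are hypothetical:

```
// Document TTL (assumed ISO 8601 duration: keep conversation documents for 30 days)
P30D

// Custom document properties (FEEL context; keys and values are hypothetical)
= {
  "source": "ai-agent-conversation",
  "caseId": caseId
}
```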

Custom implementation

info

This option is only supported if you are using a customized AI Agent connector in a Self-Managed or hybrid setup. See customization for more details.

| Field | Required | Description |
| --- | --- | --- |
| Implementation type | Yes | The type identifier of your custom storage implementation. See customization for an example. |
| Parameters | No | Optional map of parameters to be passed to the storage implementation. |

Limits

Set limits for the agent interaction to prevent unexpected behavior or unexpected cost due to infinite loops.

| Field | Required | Description |
| --- | --- | --- |
| Maximum model calls | No | Specify the maximum number of model calls. As a safeguard, this limit defaults to 10 if you do not configure a value. |
caution

Despite these limits, you must closely monitor your LLM API usage and cost, and set appropriate limits on the provider side.

Response

Configure the response format by specifying how the model should return its output (text or JSON) and how the connector should process and handle the returned response.

The outcome of an LLM call is stored as an assistant message, which can contain multiple content blocks.

  • This message always contains a single text content block for the currently supported providers/models.
  • The connector returns the first content block when handling the response, either as a text string or as a parsed JSON object.
| Field | Required | Description |
| --- | --- | --- |
| Response format | Yes | Instructs the model which format to return: either text or JSON. JSON format support varies by provider and model. |
| Include assistant message | No | Returns the entire message returned by the LLM as responseMessage, including any additional content blocks and metadata. Select this option if you need more than just the first response text. |

Text response format

If not configured otherwise, this format is used by default and returns a responseText string as part of the connector response.

| Field | Required | Description |
| --- | --- | --- |
| Parse text as JSON | No | If this option is selected, the connector attempts to parse the response text as JSON and returns the parsed object as responseJson in the connector response. Use this option for models that do not support setting JSON as the response format (such as Anthropic models), in combination with a prompt instructing the model to return a JSON response. If parsing fails, the connector does not return a responseJson object, but only returns the original response text as responseText. |

For example, the following prompt instructs the model to return a JSON response (see the Anthropic documentation):

Output in JSON format with keys: "sentiment" (positive/negative/neutral), "key_issues" (list), and "action_items" (list of dicts with "team" and "task").

JSON response format

note

The JSON response format is currently only supported for OpenAI and Google Vertex AI models. Use the text response format in combination with the Parse text as JSON option for other providers.

If the model supports it, selecting JSON as response format instructs the model to always return a JSON response. If the model does not return a valid JSON response, the connector throws an error.

To ensure the model generates data according to a specific JSON structure, you can optionally provide a JSON Schema. Alternatively, you can instruct the model to return JSON following a specific structure as shown in the text example above.

Support for JSON responses varies by provider and model.

For OpenAI, selecting the JSON response format is equivalent to using the JSON mode. Providing a JSON Schema instructs the model to return structured outputs.

| Field | Required | Description |
| --- | --- | --- |
| Response JSON schema | No | Describes the desired response format as a JSON Schema. |
| Response JSON schema name | No | Depending on the provider, the schema must be configured with a name (such as Person). Ideally, this name describes the purpose of the schema to make the model aware of the expected data. |

For example, the following JSON Schema describes the expected response format for a user profile:

```
= {
  "type": "object",
  "properties": {
    "userId": {
      "type": "number"
    },
    "firstname": {
      "type": "string"
    },
    "lastname": {
      "type": "string"
    }
  },
  "required": [
    "userId",
    "firstname",
    "lastname"
  ]
}
```

Assistant message

If the Include assistant message option is selected, the response from the AI Agent connector contains a responseMessage object holding the assistant message, with all content blocks and metadata. For example:

```
{
  "responseMessage": {
    "role": "assistant",
    "content": [
      {
        "type": "text",
        "text": "Based on the result from the GetDateAndTime function, the current date and time is:\n\nJune 2, 2025, 09:15:38 AM (Central European Summer Time)."
      }
    ],
    "metadata": {
      "framework": {
        "tokenUsage": {
          "inputTokenCount": 1563,
          "outputTokenCount": 95,
          "totalTokenCount": 1658
        },
        "finishReason": "STOP"
      }
    }
  }
}
```

To retrieve the response text from the responseMessage object, use the following FEEL expression (assuming the response variable is named agent):

agent.responseMessage.content[type = "text"][1].text

Output mapping

Specify the process variables into which you want to map the AI Agent connector response.

| Field | Required | Description |
| --- | --- | --- |
| Result variable | Yes | The result of the AI Agent connector is a context containing the fields listed below. Set this to a unique value for every agent task in your process to avoid interference between agents. |
| Result expression | No | In addition, you can choose to unpack the content of the response into multiple process variables using the Result expression field, as a FEEL context expression. |

The result context contains the following fields:

  • context: The updated Agent Context. Make sure you map this to a process variable and re-inject this variable in the Agent Context input field if your AI agent is part of a feedback loop.
  • toolCalls: Tool call requests provided by the LLM that need to be routed to the ad-hoc sub-process.

Response fields depend on how the Response is configured:

  • responseText: The last response text provided by the LLM if the Response format is set to Text.
  • responseJson: The last response text provided by the LLM, parsed as a JSON object, if the Response format is set to JSON or if the Parse text as JSON option is enabled.
  • responseMessage: The assistant message provided by the LLM if the Include assistant message option is enabled.
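
For illustration, a minimal sketch of a result expression, assuming the connector response is available as response (as with other outbound connectors) and the default text response format; agentAnswer is a hypothetical variable name:

```
// Result expression (FEEL context expression; agentAnswer is hypothetical)
= {
  agentAnswer: response.responseText
}
```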
tip

To model your first AI Agent, you can use the default result variable (agent) and configure the Agent Context as agent.context.

When adding a second AI Agent connector, use a different variable name (such as mySecondAgent) and align the context variable accordingly (for example, mySecondAgent.context) to avoid interference and unexpected results between different agents.

info

To learn more about output mapping, see variable/response mapping.

Error handling

If an error occurs, the AI Agent connector throws an error and includes the error response in the error variable in Operate.

| Field | Required | Description |
| --- | --- | --- |
| Error expression | No | You can handle an AI Agent connector error using an Error Boundary Event and error expressions. |
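
For illustration, a hedged sketch of an error expression that maps a provider error to a BPMN error, assuming the error object exposes a code field; the error code value and BPMN error code are hypothetical:

```
// Error expression (FEEL; error code values are hypothetical)
= if error.code = "429" then
    bpmnError("LLM_RATE_LIMITED", "The LLM provider rejected the request due to rate limiting")
  else
    null
```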

Retries

Specify connector execution retry behavior if execution fails.

| Field | Required | Description |
| --- | --- | --- |
| Retries | No | Specify the number of times the connector retries execution if it fails. |
| Retry backoff | No | Specify a custom retry backoff interval between retries, instead of the default behavior of retrying immediately. |

Execution listeners

Add and manage execution listeners to allow users to react to events in the workflow execution lifecycle by executing custom logic.

Limitations

No event handling support

Unlike the AI Agent Process implementation, the AI Agent Task implementation does not support event handling as part of an event subprocess.

If you want to handle events while the AI agent is working on a task, use the AI Agent Process implementation instead.

Process definition not found errors when running the AI Agent for the first time

The AI Agent Task implementation relies on the eventually consistent Get process definition XML API to fetch the BPMN XML source when resolving available tool definitions.

  • If you deploy a new or changed process and run it immediately afterwards (for example, using Deploy & Run), the process definition might not yet be available when the AI Agent attempts to fetch the process definition XML.
  • The connector retries fetching the definition several times, but if the definition is still not available after the retries are exhausted, the connector fails with a "Process definition not found" error and raises an incident.

To avoid this error, wait a few seconds before running a newly deployed or changed process, to allow the exporter to make the process definition available via the API.