
Example AI Agent connector integration

This worked example shows how you can integrate an AI Agent connector into a feedback loop model.

About this worked example

This worked example demonstrates how you can use the AI Agent connector and an ad-hoc sub-process to model AI Agent tools and response interaction feedback loops.

Example tools feedback loop

First, an AI Agent connector is added and configured in the process diagram. Next, an ad-hoc sub-process is added in a feedback loop to connect the agent to the tools it needs.

[Image: AI Agent connector with an empty tools feedback loop (aiagent-tools-loop-empty.png)]

Add ad-hoc sub-process and loop

  1. An ad-hoc sub-process is added and marked as a parallel multi-instance. This allows the process to execute the tools in parallel and wait for all tool calls to complete before continuing.

  2. A descriptive ID is configured for the ad-hoc sub-process. This ID is then entered in the Ad-hoc sub-process ID field in the AI Agent connector's tools section.

  3. A loop is modeled into the sub-process and back to the AI Agent connector.

    • The no flow of the Contains tool calls? gateway is marked as the default flow.

    • The yes flow condition is configured to activate when the AI Agent response contains a list of tool calls. For example, if the suggested default values for the result variable/expression are used, this condition could be configured as follows:

      not(agent.toolCalls = null) and count(agent.toolCalls) > 0

      The process execution routes through the ad-hoc sub-process if the LLM response requests one or more tools to be called.

Configure multi-instance execution

The ad-hoc sub-process in this example is configured as a parallel multi-instance sub-process (instead of sequential multi-instance).

This allows:

  • Tools to be called independently of each other, each with its own set of input parameters. This also means the same tool can be called multiple times with different parameters within the same ad-hoc sub-process execution; for example, a Lookup user tool could be called multiple times with different user IDs (see the sketch after this list).

  • The process to wait until all requested tools have been executed before passing the results back to the AI Agent connector and its underlying LLM.
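
For illustration, the input collection the multi-instance iterates over could look roughly like the following for the Lookup user example above. This is a sketch: only the _meta fields and the flat parameter layout used elsewhere in this guide are assumed, and the IDs, tool name, and userId parameter are hypothetical:

[
  { _meta: { id: "call_1", name: "Lookup_user" }, userId: "1001" },
  { _meta: { id: "call_2", name: "Lookup_user" }, userId: "1002" }
]

With a parallel multi-instance configuration, each entry in this list becomes its own instance, so both lookups run independently of each other.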

Configure properties

Configure the following properties for the ad-hoc sub-process. You can use the suggested values as a starting point and change them as required, for example when dealing with multiple agents within the same process.

  • Input collection: Set this to the list of tool calls your AI Agent connector returns, for example agent.toolCalls.
  • Input element: Contains the individual tool call, including LLM-generated input parameters based on the tool definition. Suggested value: toolCall. This must be aligned with the fromAi function calls in the tool definition.
  • Output collection: Collects the results of all the requested tool calls. Suggested value: toolCallResults. Make sure you pass this value as Tool Call Results in the AI Agent configuration.
  • Output element: Collects the individual tool call result as returned by an individual tool (see Tool Call Responses). Suggested value: a context that wraps the individual result together with the tool call metadata:

    {
      id: toolCall._meta.id,
      name: toolCall._meta.name,
      content: toolCallResult
    }

    When changing toolCallResult to a different value, make sure you also change your tools to write to the updated variable name.

As a final step, configure which elements to activate within the ad-hoc sub-process.

  • When using a multi-instance configuration, this is always the ID of the single task executed as the tool in the individual instance.
  • Set the Active elements collection to [toolCall._meta.name], so that each instance activates exactly the element matching the requested tool call name.

For example, the completed ad-hoc sub-process configuration would look as follows:

[Image: Completed ad-hoc sub-process multi-instance configuration (agenticai-ad-hoc-sub-process-multi-instance.png)]
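
In text form, and using the suggested values from this section, the configuration corresponds to the following field values (shown as the FEEL expressions entered in each field):

Input collection:            agent.toolCalls
Input element:               toolCall
Output collection:           toolCallResults
Output element:              { id: toolCall._meta.id, name: toolCall._meta.name, content: toolCallResult }
Active elements collection:  [toolCall._meta.name]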

Example response interaction feedback loop

Similar to the tools feedback loop, you can add another feedback loop that acts on the agent response by re-entering the AI Agent connector with new information. You must model your user prompt so that it passes the follow-up data instead of the initial request.

For example, your User Prompt field could contain the following FEEL expression to make sure it acts upon follow-up input:

=if (is defined(followUpInput)) then followUpInput else initialUserInput

[Image: Response interaction feedback loop with a user task (agenticai-user-feedback-loop.png)]
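
One way to populate followUpInput in the loop shown above is an output mapping on the feedback user task. This is a minimal sketch with generic field labels; the feedback form field is hypothetical and depends on how you model the task:

Output mapping on the user task:
  Source expression: = feedback
  Target variable:   followUpInput

On subsequent loop iterations, followUpInput is then defined, and the user prompt expression above picks it up instead of initialUserInput.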

note

How you model this type of feedback loop greatly depends on your specific use case.

  • The example feedback loop expects a simple feedback action based on a user task, but this could also interact with other process flows or another agent process.
  • Instead of the user task, you could also use another LLM connector to verify the response of the AI Agent. For an example of this pattern, see the fraud detection example.

Tool Resolution

When resolving the available tools within an ad-hoc sub-process, the AI Agent connector takes into account all activities that have no incoming flows (root nodes within the ad-hoc sub-process) and are not boundary events.

For example, in the following image the activities marked in red are the ones that will be considered as tools:

[Image: Tool resolution within the ad-hoc sub-process, with resolved tools marked in red (agenticai-tool-resolution.png)]

You can use any BPMN elements and connectors as tools and to model sub-flows within the ad-hoc sub-process.

To resolve the available tools, the AI Agent connector:

  • Reads the BPMN model and looks up the ad-hoc sub-process using the configured ID. If not found, the connector throws an error.
  • Iterates over all activities within the ad-hoc sub-process and checks whether they are root nodes (no incoming flows) and not boundary events.
  • For each activity found, analyzes the input/output mappings and looks for the fromAi function calls that define the parameters that need to be provided by the LLM.
  • Creates a tool definition for each activity found, and passes these tool definitions to the LLM as part of the prompt.
note

Refer to the Anthropic and OpenAI documentation for examples of how tool/function calling works in combination with an LLM.

Tool Definitions

important

The AI Agent connector only considers the root node of the sub-flow when resolving a tool definition.

A tool definition consists of the following properties which will be passed to the LLM. The tool definition is closely modeled after the list tools response as defined in the Model Context Protocol (MCP).

  • name: The name of the tool. This is the ID of the activity in the ad-hoc sub-process.
  • description: The description of the tool, used to inform the LLM of the tool's purpose. If the documentation of the activity is set, this is used as the description, otherwise the name of the activity is used. Make sure you provide a meaningful description to help the LLM understand the purpose of the tool.
  • inputSchema: The input schema of the tool, describing the input parameters of the tool. The connector will analyze all input/output mappings of the activity and create a JSON Schema based on the fromAi function calls defined in these mappings. If no fromAi function calls are found, an empty JSON Schema object is returned.
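
For illustration, the tool definition generated for a hypothetical Lookup_user activity (with documentation text set and a single fromAi parameter) might look roughly like the following, shown here as a FEEL-style context. The exact JSON Schema produced depends on the fromAi definitions in the activity and on your connector version:

{
  name: "Lookup_user",
  description: "Looks up a user record by its user ID.",
  inputSchema: {
    type: "object",
    properties: {
      userId: { type: "string", description: "The ID of the user to look up." }
    }
  }
}
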
note

Provide as much context and guidance in tool definitions and input parameter definitions as you can to ensure the LLM selects the right tool and generates proper input values.

Refer to the Anthropic documentation for tool definition best practices.

AI-generated parameters via fromAi

Within an activity, you can define parameters which should be AI-generated by tagging them with the fromAi FEEL function in input/output mappings.

The function itself does not implement any logic (it simply returns the first argument it receives), but provides a way to configure all the necessary metadata (for example, description, type) to generate an input schema definition. The tools schema resolution will collect all fromAi definitions within an activity and combine them into an input schema for the activity.

important

The first argument passed to the fromAi function must be a reference type (for example, not a static string), referencing a value within the variable defined as Input element in the multi-instance configuration. In the examples provided, toolCall is typically used as the input element. Example value: toolCall.myParameter.

By using the fromAi function call as a wrapper around the actual value, the connector can both describe the parameter for the LLM (by generating a JSON Schema from the function calls) and, at the same time, use the LLM-generated value as it can with any other process variable.

You can use the fromAi function in:

  • Input & Output mappings (for example, service task, script task, user task).
  • Custom input fields provided by an element template, if an element template is applied to the activity, as these are technically handled as input mappings.

For example, the following image shows fromAi function usage on a REST outbound connector:

[Image: fromAi function usage on a REST outbound connector (agenticai-tool-resolution-fromAi.png)]

fromAi examples

The fromAi FEEL function can be called with a varying number of parameters to define simple or complex inputs. The simplest form is to just pass a value.

fromAi(toolCall.url)

This makes the LLM aware that it needs to provide a value for the url parameter. Because the first argument to fromAi must be a variable reference, the last segment of the reference is used as the parameter name (url in this case).

To make an LLM understand the purpose of the input, you can add a description:

fromAi(toolCall.url, "Fetches the contents of a given URL. Only accepts valid RFC 3986/RFC 7230 HTTP(s) URLs.")

To define the type of the input, you can add a type (if no type is given, it will default to string):

fromAi(toolCall.firstNumber, "The first number.", "number")

fromAi(toolCall.shouldCalculate, "Defines if the calculation should be executed.", "boolean")

For more complex type definitions, the fourth parameter of the function allows you to specify a JSON Schema as a FEEL context. Note that support for JSON Schema features depends on your AI integration. For a list of examples, refer to the JSON Schema documentation.

fromAi(
  toolCall.myComplexObject,
  "A complex object",
  "string",
  { enum: ["first", "second"] }
)

You can combine multiple parameters within the same FEEL expression, for example:

fromAi(toolCall.firstNumber, "The first number.", "number") + fromAi(toolCall.secondNumber, "The second number.", "number")

Tool Call Responses

To collect the output of a called tool and pass it back to the agent, the task within the ad-hoc sub-process needs to write its output to the variable referenced as content in the Output element of the multi-instance setup. This variable is typically named toolCallResult and can be used from every tool call within the ad-hoc sub-process, as the multi-instance execution takes care of isolating individual tool calls.

Depending on the task used, this can be achieved in multiple ways, for example as:

  • A result variable or a result expression containing a toolCallResult key
  • An output mapping creating the toolCallResult variable or adding to a part of the toolCallResult variable (for example, an output mapping could be set to toolCallResult.statusCode)
  • A script task that sets the toolCallResult variable

Tool call results can be either primitive values (for example, a string) or complex values such as a FEEL context, which is serialized to a JSON string before being passed to the LLM.
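
For example, a hypothetical Add numbers tool implemented as a script task could declare its AI-generated parameters via fromAi input mappings and write its result by setting the script's result variable to toolCallResult. This is a sketch; the tool name, parameter names, and field layout are illustrative:

Input mappings:
  firstNumber:  = fromAi(toolCall.firstNumber, "The first number.", "number")
  secondNumber: = fromAi(toolCall.secondNumber, "The second number.", "number")

Script task:
  FEEL expression: = { operation: "sum", result: firstNumber + secondNumber }
  Result variable: toolCallResult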

Document support

Similar to the user prompt Documents field, tool call responses can contain Camunda Document references within arbitrary structures (supporting the same file types as for the user prompt).

When serializing the tool call response to JSON, document references are transformed into a content block containing the plain text or base64 encoded document content, before being passed to the LLM.
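
For example, a tool that produces a file could include the document reference directly in its result. Assuming a hypothetical generatedDocument variable holding a Camunda document reference created earlier in the tool's sub-flow, the tool could write the following context to toolCallResult; the document reference is then expanded into a content block when the response is serialized for the LLM:

{
  status: "generated",
  document: generatedDocument
}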