Configure IDP
Configure IDP for your Camunda 8 setup and make sure IDP can access the required components and credentials.
Prerequisites
The following prerequisites are required for IDP:
Amazon Web Services (AWS)
When using AWS as your cloud provider, the following AWS-specific prerequisites are required. IDP supports both structured and unstructured document extraction with AWS services:
Prerequisite | Description |
---|---|
Amazon Web Services (AWS) IAM user and permissions | An AWS IAM user with the permissions required for document extraction, and an access key pair (access key ID and secret access key) for that user. |
Amazon S3 bucket | An Amazon S3 bucket where documents can be temporarily stored during extraction. |
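As a rough illustration, the IAM user's permissions might look like the following policy. This is a minimal sketch, not an official policy: the exact set of Textract, Bedrock, and S3 actions IDP needs, and the bucket name idp-extraction-connector, are assumptions you should adapt to your own setup.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "IdpStructuredExtraction",
      "Effect": "Allow",
      "Action": [
        "textract:AnalyzeDocument",
        "textract:StartDocumentAnalysis",
        "textract:GetDocumentAnalysis"
      ],
      "Resource": "*"
    },
    {
      "Sid": "IdpUnstructuredExtraction",
      "Effect": "Allow",
      "Action": ["bedrock:InvokeModel"],
      "Resource": "*"
    },
    {
      "Sid": "IdpTemporaryDocumentStorage",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::idp-extraction-connector/*"
    }
  ]
}
```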
Google Cloud Platform (GCP)
When using GCP as your cloud provider, the following GCP-specific prerequisites are required. IDP supports both structured and unstructured document extraction with GCP services:
Prerequisite | Description |
---|---|
Google Cloud Vertex AI (required for unstructured extraction) | Access to Vertex AI in the GCP project and region you want to use for unstructured extraction. |
Google Cloud Storage (GCS) bucket (required for unstructured extraction) | A Google Cloud Storage bucket where documents can be temporarily stored during Vertex AI analysis. |
Google Cloud Document AI (required for structured extraction) | A Document AI processor set up in the GCP project and region you want to use for structured extraction. |
Google Cloud Service Account | A GCP service account with access to the services above, together with its JSON key file. |
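As a rough sketch, a service account for IDP could be prepared with gcloud as follows. The service account name, project ID, bucket name, and the exact role set (Vertex AI, Document AI, and Cloud Storage access) are assumptions; grant only what your setup actually needs.

```bash
# Create a service account for IDP (names and project ID are placeholders)
gcloud iam service-accounts create idp-extraction --project=my-gcp-project

# Grant access to Vertex AI, Document AI, and the GCS bucket (assumed role set)
gcloud projects add-iam-policy-binding my-gcp-project \
  --member="serviceAccount:idp-extraction@my-gcp-project.iam.gserviceaccount.com" \
  --role="roles/aiplatform.user"
gcloud projects add-iam-policy-binding my-gcp-project \
  --member="serviceAccount:idp-extraction@my-gcp-project.iam.gserviceaccount.com" \
  --role="roles/documentai.apiUser"
gcloud storage buckets add-iam-policy-binding gs://idp-vertex-documents \
  --member="serviceAccount:idp-extraction@my-gcp-project.iam.gserviceaccount.com" \
  --role="roles/storage.objectAdmin"

# Create the JSON key whose content goes into the IDP_GCP_SERVICE_ACCOUNT secret
gcloud iam service-accounts keys create idp-sa-key.json \
  --iam-account=idp-extraction@my-gcp-project.iam.gserviceaccount.com
```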
Microsoft Azure
When using Azure as your cloud provider, the following Azure-specific prerequisites are required. IDP currently supports unstructured document extraction with Azure services:
Prerequisite | Description |
---|---|
Azure AI Document Intelligence | An Azure AI Document Intelligence resource, along with its endpoint URL and access key. |
Azure AI Foundry | An Azure AI Foundry resource, along with its endpoint URL and access key. An Azure OpenAI resource is only required if you want to use OpenAI models. |
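If you already have these resources, a quick way to look up the values you will need later is the Azure CLI. This is a hedged sketch: the resource and resource group names are placeholders, and you can just as well copy the endpoint and keys from the Azure portal.

```bash
# Endpoint and access keys of the Azure AI Document Intelligence resource
# (resource and resource group names are placeholders)
az cognitiveservices account show \
  --name my-doc-intelligence --resource-group my-idp-rg \
  --query "properties.endpoint" --output tsv
az cognitiveservices account keys list \
  --name my-doc-intelligence --resource-group my-idp-rg
```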
General requirements
The following prerequisites apply regardless of your cloud provider:
Prerequisite | Description |
---|---|
Web Modeler | Access to Web Modeler, where IDP applications are created and managed within projects. |
Cluster requirements
The following requirements apply for IDP application clusters:
Requirement | Description |
---|---|
Connector secrets | You must configure the required IDP connector secrets for your chosen cloud provider on any cluster used with IDP. |
Document handling | IDP requires a cluster that supports document handling, such as a version 8.7 or higher cluster. |
Cluster health | IDP applications and projects are only fully operational when linked to a healthy, active cluster. If needed, you can select an unstable or unhealthy cluster when first creating an IDP application, and change to a stable cluster when one is available. |
To learn more about storing, tracking, and managing documents in Camunda 8, see document handling.
Identity
If you are using an identity-enabled cluster, the following authorizations are required for IDP operations:
Resource type | Permission | Owner type | Owner | Description |
---|---|---|---|---|
DOCUMENT | READ | Role | Connectors | Required for the IDP connector to read documents from the cluster. |
DOCUMENT | CREATE | User | The user's email | Required to upload documents to the cluster during IDP extraction. |
RESOURCE | CREATE | User | The user's email | Required to deploy processes. |
PROCESS_DEFINITION | CREATE_PROCESS_INSTANCE | User | The user's email | Required to start process instances. |
Configure IDP
Once you have completed all the required prerequisites, configure IDP in a suitable dev cluster as follows. You only need to add the connector secrets for the cloud provider you plan to use.
Add AWS connector secrets to cluster
If you are using AWS as your cloud provider, add the following AWS connector secrets required for IDP.
- SaaS: Create and configure these as connector secrets.
- Self-Managed: Connector secrets are generally provided as environment variables, set via `values.yaml` or the command line. Add these connector secrets as environment variables for the Tasklist and Zeebe components (see the `values.yaml` sketch after the table below). To learn more about using connector secrets in Self-Managed, see managing secrets in Helm charts and secrets in manual installations.
Connector secret key | Required | Description |
---|---|---|
IDP_AWS_ACCESSKEY | Yes | The AWS access key ID used to interact with the Amazon S3 bucket. |
IDP_AWS_SECRETKEY | Yes | The AWS secret access key associated with the IDP_AWS_ACCESSKEY. |
IDP_AWS_REGION | Yes | The AWS region where documents can be temporarily stored during Amazon Textract analysis. This should match the region where the Amazon S3 bucket is located. Example: us-east-1 |
IDP_AWS_BUCKET_NAME | Yes | The name of the Amazon S3 bucket you want to use for document storage during extraction. Example: idp-extraction-connector |
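For Self-Managed, a `values.yaml` fragment could expose these keys to the Tasklist and Zeebe components roughly as follows. This is a minimal sketch, assuming the Helm chart's `zeebe.env` and `tasklist.env` lists; plain values are shown for brevity, but a real deployment should reference a Kubernetes Secret instead.

```yaml
# Hedged sketch: plain values shown for brevity; in practice, prefer
# referencing a Kubernetes Secret via valueFrom.secretKeyRef.
zeebe:
  env:
    - name: IDP_AWS_ACCESSKEY
      value: "AWSACCESSKEYID"
    - name: IDP_AWS_SECRETKEY
      value: "AWSSECRETACCESSKEYGOESHERE"
    - name: IDP_AWS_REGION
      value: "us-east-1"
    - name: IDP_AWS_BUCKET_NAME
      value: "idp-extraction-connector"
tasklist:
  env:
    # Repeat the same four variables for the Tasklist component
    - name: IDP_AWS_ACCESSKEY
      value: "AWSACCESSKEYID"
    - name: IDP_AWS_SECRETKEY
      value: "AWSSECRETACCESSKEYGOESHERE"
    - name: IDP_AWS_REGION
      value: "us-east-1"
    - name: IDP_AWS_BUCKET_NAME
      value: "idp-extraction-connector"
```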
Add GCP connector secrets to cluster
If you are using GCP as your cloud provider, add the following GCP connector secrets required for IDP. The secrets you need depend on which type of extraction you plan to use.
Connector secret key | Required for unstructured extraction | Required for structured extraction | Description |
---|---|---|---|
IDP_GCP_SERVICE_ACCOUNT | Yes | Yes | The content of the GCP service account JSON key file used for authentication with GCP services. |
IDP_GCP_VERTEX_REGION | Yes | No | The GCP region where Vertex AI resources are located. |
IDP_GCP_VERTEX_PROJECT_ID | Yes | No | The GCP project ID where Vertex AI resources are configured. |
IDP_GCP_VERTEX_BUCKET_NAME | Yes | No | The name of the Google Cloud Storage bucket used for temporary document storage during Vertex AI analysis. |
IDP_GCP_DOCUMENT_AI_REGION | No | Yes | The GCP region where Document AI resources are located. Only regions supported by Document AI can be used. |
IDP_GCP_DOCUMENT_AI_PROJECT_ID | No | Yes | The GCP project ID where Document AI resources are configured. |
IDP_GCP_DOCUMENT_AI_PROCESSOR_ID | No | Yes | The ID of the Document AI processor you want to use for structured extraction. |
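For example, in a Self-Managed environment file the GCP secrets might look roughly like this. All values below are placeholders, and the service account value is the full content of the JSON key file.

```
IDP_GCP_SERVICE_ACCOUNT={"type":"service_account","project_id":"my-gcp-project", ...}
IDP_GCP_VERTEX_REGION=us-central1
IDP_GCP_VERTEX_PROJECT_ID=my-gcp-project
IDP_GCP_VERTEX_BUCKET_NAME=idp-vertex-documents
IDP_GCP_DOCUMENT_AI_REGION=us
IDP_GCP_DOCUMENT_AI_PROJECT_ID=my-gcp-project
IDP_GCP_DOCUMENT_AI_PROCESSOR_ID=a1b2c3d4e5f6a7b8
```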
Add Azure connector secrets to cluster
If you are using Azure as your cloud provider, add the following Azure connector secrets required for IDP.
Connector secret key | Required | Description |
---|---|---|
IDP_AZURE_DOCUMENT_INTELLIGENCE_ENDPOINT | Yes | The endpoint URL for your Azure AI Document Intelligence resource. |
IDP_AZURE_DOCUMENT_INTELLIGENCE_KEY | Yes | The access key for your Azure AI Document Intelligence resource. |
IDP_AZURE_AI_FOUNDRY_ENDPOINT | Yes | The endpoint URL for your Azure AI Foundry resource. Construct this URL using the pattern https://<resource-name>.services.ai.azure.com/models. |
IDP_AZURE_AI_FOUNDRY_KEY | Yes | The access key for your Azure AI Foundry resource. You can find this key on the details page of deployed base models or on the Azure AI Foundry "Overview" page. |
IDP_AZURE_OPEN_AI_ENDPOINT | Optional | The endpoint URL for your Azure OpenAI resource. Required only if you want to use OpenAI models. You can find this endpoint on the "Models + endpoints" page in the Azure AI Foundry dashboard. |
IDP_AZURE_OPEN_AI_KEY | Optional | The access key for your Azure OpenAI resource. Required only if you want to use OpenAI models. You can find this key on the "Models + endpoints" page in the Azure AI Foundry dashboard. |
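For example, the Azure secrets could look roughly like the following. The resource names and key placeholders are illustrative; only the AI Foundry endpoint pattern is prescribed above.

```
IDP_AZURE_DOCUMENT_INTELLIGENCE_ENDPOINT=https://my-doc-intelligence.cognitiveservices.azure.com/
IDP_AZURE_DOCUMENT_INTELLIGENCE_KEY=<document-intelligence-access-key>
IDP_AZURE_AI_FOUNDRY_ENDPOINT=https://my-ai-foundry.services.ai.azure.com/models
IDP_AZURE_AI_FOUNDRY_KEY=<ai-foundry-access-key>
```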
- These connector secrets are used in IDP document extraction templates. See integrate IDP into your processes.
- You can rename these connector secrets if you want to change the testing configuration used in other environments (such as `test`, `stage`, or `prod`). If you do this, you must also change these names to match within the Authentication section of the Properties panel for any related published document extraction templates, as shown in the sketch after this list.
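For example, if you renamed the AWS secrets with a PROD_ prefix for a production cluster, the Authentication fields of a published extraction template would need to reference the new names using the usual connector secret placeholder syntax. The field labels and the PROD_ prefix below are purely illustrative.

```
Access key:        {{secrets.PROD_IDP_AWS_ACCESSKEY}}
Secret access key: {{secrets.PROD_IDP_AWS_SECRETKEY}}
```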
Example IDP deployment
The following examples show how you can deploy and configure IDP in your local development environment.
Camunda 8 Run
To use Camunda 8 Run to deploy and run Camunda 8 with IDP in a local development environment:
1. Ensure you have completed the IDP Amazon Web Services (AWS) prerequisites and have obtained your AWS access key pair (access key ID and secret access key).
2. Install Camunda 8 Run. For example, download the latest release of Camunda 8 Run for your operating system and architecture, and open the .tgz file to extract the Camunda 8 Run script into a new directory.
3. Navigate to the `docker-compose-8.x` folder in the new c8run directory.
4. Open the `connector-secrets.txt` file and add your AWS connector secrets. For example:

   ```
   IDP_AWS_ACCESSKEY=AWSACCESSKEYID
   IDP_AWS_SECRETKEY=AWSSECRETACCESSKEYGOESHERE
   IDP_AWS_REGION=us-east-1
   IDP_AWS_BUCKET_NAME=idp-extraction-connector
   ```

5. Save and close the file.
6. Configure the document handling environment variables for the Tasklist and Zeebe components (for example, in the `.env` file).
7. Start Camunda 8 Run via Docker Compose. For example, run `./start.sh --docker` (or `.\c8run.exe start -docker` on Windows) in your terminal.
8. Launch Web Modeler at http://localhost:8070 and log in with the username `demo` and password `demo`.
9. Get started with IDP by creating a new IDP application in a Web Modeler project.
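Put together, the terminal side of this procedure could look roughly like the following on Linux or macOS. The archive file name depends on the release you downloaded, the target directory name is an assumption, and the secret values are the placeholders from step 4.

```bash
# Extract the downloaded Camunda 8 Run archive into a new directory
# (the archive file name depends on the release and platform you downloaded)
mkdir c8run && tar -xzf camunda8-run-*.tgz -C c8run
cd c8run

# Append the IDP connector secrets used by the Docker Compose setup
cat >> docker-compose-8.x/connector-secrets.txt <<'EOF'
IDP_AWS_ACCESSKEY=AWSACCESSKEYID
IDP_AWS_SECRETKEY=AWSSECRETACCESSKEYGOESHERE
IDP_AWS_REGION=us-east-1
IDP_AWS_BUCKET_NAME=idp-extraction-connector
EOF

# Start Camunda 8 Run via Docker Compose (run start.sh from the directory
# where it was extracted), then open Web Modeler at http://localhost:8070
./start.sh --docker
```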
To learn more about using Camunda 8 Run to run Camunda Self-Managed locally, see Camunda 8 Run.
Docker
To use Docker to deploy and run Camunda 8 with IDP in a local development environment:
1. Ensure you have completed the IDP Amazon Web Services (AWS) prerequisites and have obtained your AWS access key pair (access key ID and secret access key).
2. Download the latest Camunda Docker Compose release artifact from the camunda-distributions GitHub repository, and extract the file contents to your desired directory.
3. In the extracted directory:
   1. Open the `connector-secrets.txt` file and add your AWS connector secrets. For example:

      ```
      IDP_AWS_ACCESSKEY=AWSACCESSKEYID
      IDP_AWS_SECRETKEY=AWSSECRETACCESSKEYGOESHERE
      IDP_AWS_REGION=us-east-1
      IDP_AWS_BUCKET_NAME=idp-extraction-connector
      ```

   2. Save and close the file.
4. Configure the document handling environment variables for the Tasklist and Zeebe components.
5. Run Camunda 8 with Docker Compose. For example, run the following command in the extracted directory:

   ```
   docker compose up -d
   ```

6. Launch Web Modeler at http://localhost:8070 and log in with the username `demo` and password `demo`.
7. Get started with IDP by creating a new IDP application in a Web Modeler project.
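After starting the stack, you can optionally check that the secrets were picked up. The service name `connectors` below is an assumption based on the default Compose file; adjust it to whichever service loads `connector-secrets.txt` in your distribution.

```bash
# Start the stack in the extracted directory
docker compose up -d

# Optional: confirm the IDP connector secrets are visible inside the connectors runtime
docker compose exec connectors env | grep '^IDP_AWS_'
```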
To learn more about using Docker Compose to run Camunda Self-Managed locally, see Docker Compose.