Deploy Kafka Connect in Docker
The Redpanda Connectors Docker image is a community-supported artifact. Redpanda Data does not provide enterprise support for this image. For support, reach out to the Redpanda team in the Redpanda Community Slack.
The Redpanda Connectors Docker image includes a pre-configured instance of Kafka Connect that works with Redpanda. This image contains only the MirrorMaker2 connectors, but you can build a custom image to install additional connectors.
The latest Docker image contains:

- Red Hat Enterprise Linux 8.9
- OpenJDK 21 LTS
- Kafka Connect
- JMX Exporter (JMX Prometheus JavaAgent)
The image also includes the following connectors as plugins:

- MirrorSourceConnector
- MirrorCheckpointConnector
- MirrorHeartbeatConnector
Docker image configuration properties

The following table lists the available Docker image properties.

Property | Description |
---|---|
`CONNECT_BOOTSTRAP_SERVERS` | Comma-separated list of host and port pairs that are the addresses of the Redpanda brokers. |
`CONNECT_CONFIGURATION` | Properties-based Kafka Connect configuration. |
`CONNECT_ADDITIONAL_CONFIGURATION` | Comma-separated Kafka Connect properties. Can be used as an alternative to the `CONNECT_CONFIGURATION` property. |
`CONNECT_SASL_MECHANISM` | SASL mechanism, for example `scram-sha-256`. Default: not set. |
`CONNECT_SASL_USERNAME` | SASL username used to authenticate when connecting to a Redpanda broker. Default: not set. |
`CONNECT_SASL_PASSWORD_FILE` | Path to a file containing the SASL password, relative to the `/opt/kafka/connect-password` directory. Default: not set. |
`CONNECT_TLS_ENABLED` | Set to `"true"` to enable TLS. Default: `"false"`. |
`CONNECT_TLS_AUTH_CERT` | TLS authentication certificate location (relative path from `/opt/kafka/connect-certs`). For example: `user-secret/client.crt`. |
`CONNECT_TLS_AUTH_KEY` | TLS authentication key location (relative path from `/opt/kafka/connect-certs`). For example: `user-secret/client.key`. |
`CONNECT_TRUSTED_CERTS` | Truststore locations (relative path from `/opt/kafka/connect-certs`). For example: `user-secret/ca.crt`. |
`CONNECT_ADDITIONAL_TLS_AUTH_CERT` | Additional TLS authentication certificate location, used, for example, to connect with the source MM2 cluster (relative path from `/opt/kafka/connect-certs`). |
`CONNECT_ADDITIONAL_TLS_AUTH_KEY` | Additional TLS authentication key location, used, for example, to connect with the source MM2 cluster (relative path from `/opt/kafka/connect-certs`). |
`CONNECT_ADDITIONAL_TRUSTED_CERTS` | Additional truststore locations, used, for example, to connect with the source MM2 cluster (relative path from `/opt/kafka/connect-certs`). |
`CONNECT_METRICS_ENABLED` | Set to `"false"` to disable the JMX Exporter metrics endpoint. |
`CONNECT_PLUGIN_PATH` | Comma-separated list of directories with plugins to load by Kafka Connect. |
`CONNECT_GC_LOG_ENABLED` | Set to `"false"` to disable GC logging. |
`CONNECT_HEAP_OPTS` | JVM heap options. For example: `-Xms1G -Xmx1G`. |
`CONNECT_LOG_LEVEL` | Kafka Connect logging level. By default, Kafka Connect logs at `INFO` level. |
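To illustrate how the path-relative properties in the table compose, the following sketch mimics how `CONNECT_SASL_PASSWORD_FILE` is resolved against the `/opt/kafka/connect-password` directory. A temporary directory stands in for the container path, and the password value is a hypothetical example:

```bash
# Stand-in for /opt/kafka/connect-password inside the container.
base="$(mktemp -d)/connect-password"
mkdir -p "$base/redpanda-password"
printf 'secret-password' > "$base/redpanda-password/password"

# The property holds a path relative to the base directory.
CONNECT_SASL_PASSWORD_FILE="redpanda-password/password"
resolved="$base/$CONNECT_SASL_PASSWORD_FILE"

cat "$resolved"   # prints the password the worker will use
```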
Install new connector type

To install a new connector type:

1. Prepare the new connector JAR. Place a fat JAR, or a JAR together with all of its dependency JARs, in a dedicated directory. For example:

   ```
   ./connect-plugins/snowflake-sink/snowflake-sink-fat.jar
   ```

2. Mount a volume to bind the directory with a container. For example, make the `./connect-plugins` directory content visible in `/opt/kafka/connect-plugins` in a container:

   ```yaml
   volumes:
     - ./connect-plugins:/opt/kafka/connect-plugins
   ```

3. Use the `CONNECT_PLUGIN_PATH` image property to configure the directory with the new connector, so that Kafka Connect can discover it. For example:

   ```yaml
   CONNECT_PLUGIN_PATH: "/opt/kafka/connect-plugins"
   ```

4. Kafka Connect should discover the new connector type automatically on startup. Use the `/connector-plugins` Kafka Connect REST endpoint to check the available connector types. For example:

   ```bash
   curl localhost:8083/connector-plugins
   ```

Create a separate child directory for each connector, and place the connector's JAR files and other resource files in that child directory.
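The layout described above can be sketched with a few shell commands; the connector name and JAR file are hypothetical placeholders:

```bash
# Stand-in for the ./connect-plugins directory on the Docker host.
plugins="$(mktemp -d)"

# One child directory per connector, holding its fat jar (empty placeholder here).
mkdir -p "$plugins/snowflake-sink"
: > "$plugins/snowflake-sink/snowflake-sink-fat.jar"

# List the jars Kafka Connect would pick up via CONNECT_PLUGIN_PATH.
find "$plugins" -name '*.jar'
```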
Configure SASL

To configure SASL:

1. Prepare the SASL user and password, making sure the user has the necessary permissions:

   - Required: write access to the internal topics and access to consumer groups (so all workers in the cluster can communicate with each other).
   - The remaining ACLs depend on the connector type (source/sink) and the topics used by the connectors.

2. Create a file containing the plain-text password in a dedicated directory. For example, `./connect-password/redpanda-password/password`, where the `password` file contains just the password.

3. Mount a volume to bind the directory with a container. For example, make the `./connect-password` directory content visible in `/opt/kafka/connect-password` in a container:

   ```yaml
   volumes:
     - ./connect-password:/opt/kafka/connect-password
   ```

4. Use `CONNECT_SASL_USERNAME` to set the SASL username, and use `CONNECT_SASL_PASSWORD_FILE` to set the relative path to the password file. For example, if the file is in `/opt/kafka/connect-password/redpanda-password/password`, use the value `redpanda-password/password`:

   ```yaml
   CONNECT_SASL_USERNAME: "connect-user"
   CONNECT_SASL_PASSWORD_FILE: "redpanda-password/password"
   ```
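Because the password is stored as plain text, it is worth restricting the file's permissions before mounting the directory. A minimal sketch, using the directory names from the example above and a hypothetical password:

```bash
# Work in a scratch directory so the sketch can run anywhere.
cd "$(mktemp -d)"

mkdir -p connect-password/redpanda-password
printf '%s' 'secret-password' > connect-password/redpanda-password/password

# Owner-only read/write; the container still reads the file through the volume mount.
chmod 600 connect-password/redpanda-password/password
```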
Configure TLS

To configure TLS:

1. Prepare the Redpanda cluster certificate and key, and place them in a dedicated directory. For example:

   ```
   ./connect-certs/ca.crt
   ./connect-certs/client.crt
   ./connect-certs/client.key
   ```

2. Mount a volume to bind the directory with a container. For example, make the `./connect-certs` directory content visible in `/opt/kafka/connect-certs/user-secret` in a container:

   ```yaml
   volumes:
     - ./connect-certs:/opt/kafka/connect-certs/user-secret
   ```

3. Set the `CONNECT_TLS_ENABLED` property to `"true"`.

4. Use the `CONNECT_TLS_AUTH_CERT`, `CONNECT_TRUSTED_CERTS`, and `CONNECT_TLS_AUTH_KEY` image properties to configure the relative paths to the certificate and key. For example, if the files are in `/opt/kafka/connect-certs/user-secret`, use:

   ```yaml
   CONNECT_TRUSTED_CERTS: "user-secret/ca.crt"
   CONNECT_TLS_AUTH_CERT: "user-secret/client.crt"
   CONNECT_TLS_AUTH_KEY: "user-secret/client.key"
   ```
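Putting the pieces together, this sketch shows the host-side layout and the matching property values. The certificate files are empty placeholders; a real deployment copies the cluster CA certificate and the client certificate and key into this directory:

```bash
# Stand-in for the ./connect-certs directory on the Docker host.
certs="$(mktemp -d)"
: > "$certs/ca.crt"       # cluster CA certificate (placeholder)
: > "$certs/client.crt"   # client certificate (placeholder)
: > "$certs/client.key"   # client key (placeholder)

# With ./connect-certs mounted at /opt/kafka/connect-certs/user-secret,
# the properties reference files relative to /opt/kafka/connect-certs:
CONNECT_TLS_ENABLED="true"
CONNECT_TRUSTED_CERTS="user-secret/ca.crt"
CONNECT_TLS_AUTH_CERT="user-secret/client.crt"
CONNECT_TLS_AUTH_KEY="user-secret/client.key"
```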
Connect with Docker Compose
You can use the following Docker Compose sample file to connect:
```yaml
version: '3.8'
services:
  connect:
    image: docker.redpanda.com/redpandadata/connectors:latest
    volumes:
      - ./connect-password:/opt/kafka/connect-password
      - ./connect-plugins:/opt/kafka/connect-plugins
      - ./connect-certs:/opt/kafka/connect-certs/user-secret
    hostname: connect
    ports:
      - "8083:8083"
    environment:
      CONNECT_CONFIGURATION: |
        key.converter=org.apache.kafka.connect.converters.ByteArrayConverter
        value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
        group.id=connectors-group
        offset.storage.topic=_connectors_offsets
        config.storage.topic=_connectors_configs
        status.storage.topic=_connectors_status
        config.storage.replication.factor=-1
        offset.storage.replication.factor=-1
        status.storage.replication.factor=-1
      CONNECT_BOOTSTRAP_SERVERS: ...data.redpanda:30499,...data.redpanda:30499,...data.redpanda:30499
      CONNECT_GC_LOG_ENABLED: "false"
      CONNECT_HEAP_OPTS: -Xms1G -Xmx1G
      CONNECT_METRICS_ENABLED: "false"
      CONNECT_SASL_MECHANISM: "scram-sha-256"
      CONNECT_SASL_USERNAME: "connect-user"
      CONNECT_SASL_PASSWORD_FILE: "redpanda-password/password"
      CONNECT_TLS_ENABLED: "true"
      CONNECT_TRUSTED_CERTS: "user-secret/ca.crt"
      CONNECT_TLS_AUTH_CERT: "user-secret/client.crt"
      CONNECT_TLS_AUTH_KEY: "user-secret/client.key"
      CONNECT_PLUGIN_PATH: "/opt/kafka/connect-plugins"
```
The expected directory structure:

```
├── ...
├── connect-certs
│   ├── ca.crt        # A file with the Redpanda cluster CA cert
│   ├── client.crt    # A file with the Redpanda cluster cert
│   └── client.key    # A file with the Redpanda cluster key
├── connect-password
│   └── redpanda-password
│       └── password  # A file with the SASL password
├── connect-plugins
│   └── custom-connector
│       └── custom-sink-connector-fat.jar  # Connector fat jar, or jar plus dependency jars
└── docker-compose.yaml                    # A docker-compose file
```
To connect with Docker Compose:

1. From the directory containing the `docker-compose.yaml` file, run:

   ```bash
   docker-compose up
   ```

2. To list the installed plugins, run:

   ```bash
   curl localhost:8083/connector-plugins
   ```

3. To get basic Kafka Connect information, run:

   ```bash
   curl localhost:8083/
   ```

4. Metrics are available at `localhost:9404/`.

5. Use Redpanda Console or the Kafka Connect REST API to manage connectors.
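Because `docker-compose up` returns before the worker's REST API is ready, scripts that create connectors immediately afterward may need to wait. A small sketch (the retry count and sleep interval are arbitrary choices):

```bash
# Poll the Kafka Connect REST root until it responds, or give up after 30 tries.
wait_for_connect() {
  for _ in $(seq 1 30); do
    if curl -fsS localhost:8083/ >/dev/null 2>&1; then
      echo "Kafka Connect is up"
      return 0
    fi
    sleep 2
  done
  echo "Kafka Connect did not start in time" >&2
  return 1
}
```

Call `wait_for_connect` before issuing requests such as `curl localhost:8083/connector-plugins`.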
Connect to a Redpanda Cloud cluster

To connect to a Redpanda Cloud cluster with Docker Compose:

1. Use `rpk` or Redpanda Console (Security tab) to create a Redpanda user.

2. Create ACLs for the user.

3. Set the username in the `CONNECT_SASL_USERNAME` property.

4. Create a file containing the user password (for example, in the path `passwords/redpanda-password/password`). Specify this path in the `CONNECT_SASL_PASSWORD_FILE` property.

5. Specify a value in the `CONNECT_BOOTSTRAP_SERVERS` property. You can view this value in Redpanda Console > Overview > Kafka API, in the `Bootstrap server URL` option.

6. Set the `CONNECT_SASL_MECHANISM` property value to `"scram-sha-256"`.

7. Set the `CONNECT_TLS_ENABLED` property value to `"true"`.
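The checklist above can be condensed into a quick shell sanity check before running `docker-compose up`. The values are the hypothetical ones used in this example:

```bash
# Values from the walkthrough above (username and password path are examples).
CONNECT_SASL_USERNAME="connectors-user"
CONNECT_SASL_PASSWORD_FILE="passwords/redpanda-password/password"
CONNECT_SASL_MECHANISM="scram-sha-256"
CONNECT_TLS_ENABLED="true"

# The cloud connection uses SASL over TLS, so all four must be set.
for v in CONNECT_SASL_USERNAME CONNECT_SASL_PASSWORD_FILE \
         CONNECT_SASL_MECHANISM CONNECT_TLS_ENABLED; do
  eval "val=\$$v"
  [ -n "$val" ] || { echo "missing: $v" >&2; exit 1; }
done
echo "cloud connection properties look complete"
```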
For example:

```yaml
version: '3.8'
services:
  connect:
    image: docker.redpanda.com/redpandadata/connectors:latest
    volumes:
      - ./passwords:/opt/kafka/connect-password/passwords
    hostname: connect
    ports:
      - "8083:8083"
    environment:
      CONNECT_CONFIGURATION: |
        key.converter=org.apache.kafka.connect.converters.ByteArrayConverter
        value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
        group.id=connectors-group
        offset.storage.topic=_connectors_offsets
        config.storage.topic=_connectors_configs
        status.storage.topic=_connectors_status
        config.storage.replication.factor=-1
        offset.storage.replication.factor=-1
        status.storage.replication.factor=-1
      CONNECT_BOOTSTRAP_SERVERS: seed-....redpanda.com:9092
      CONNECT_GC_LOG_ENABLED: "false"
      CONNECT_HEAP_OPTS: -Xms1G -Xmx1G
      CONNECT_SASL_MECHANISM: "scram-sha-256"
      CONNECT_SASL_USERNAME: "connectors-user"
      CONNECT_SASL_PASSWORD_FILE: "passwords/redpanda-password/password"
      CONNECT_TLS_ENABLED: "true"
```
The expected directory structure:

```
├── ...
├── passwords
│   └── redpanda-password
│       └── password  # A file with the SASL password
└── docker-compose.yaml  # A docker-compose file
```
Redpanda Cloud Schema Registry

For converters that use Schema Registry (such as AvroConverter or JsonConverter), use the following connector configuration properties to set up the connection to Schema Registry:

Property | Description |
---|---|
`key.converter` | Key converter class to use for the connector. |
`key.converter.schema.registry.url` | Key converter Schema Registry URL, which you can view in the cluster Overview > Schema Registry. |
`key.converter.basic.auth.credentials.source` | Key converter authentication method. Must be `USER_INFO`. |
`key.converter.basic.auth.user.info` | Key converter user and password used for authentication, separated by a colon. |
`value.converter` | Value converter class to use for the connector. |
`value.converter.schema.registry.url` | Value converter Schema Registry URL, which you can view in the cluster Overview > Schema Registry. |
`value.converter.basic.auth.credentials.source` | Value converter authentication method. Must be `USER_INFO`. |
`value.converter.basic.auth.user.info` | Value converter user and password used for authentication, separated by a colon. |
Example:
```json
{
  ....
  "value.converter.schema.registry.url": "https://schema-registry-....redpanda.com:30081",
  "value.converter.basic.auth.credentials.source": "USER_INFO",
  "value.converter.basic.auth.user.info": "connect-user:secret-password"
}
```
Manage connectors with Kafka Connect
You can manage connectors using the Kafka Connect REST API.
View version of Kafka Connect worker
To view the version of the Kafka Connect worker, run:
```bash
curl localhost:8083 | jq
```
View list of connector plugins
To view the list of available connector plugins, run:
```bash
curl localhost:8083/connector-plugins | jq
```
View list of active connectors
To view the list of active connectors, run:
```bash
curl 'http://localhost:8083/connectors?expand=status&expand=info' | jq
```
Create connector
To create the connector, run:
```bash
curl "localhost:8083/connectors" -H 'Content-Type: application/json' --data-raw '<connector-config>'
```
For example:
```bash
curl "localhost:8083/connectors" \
  -H 'Content-Type: application/json' \
  --data-raw '{
    "name": "heartbeat-connector",
    "config": {
      "connector.class": "org.apache.kafka.connect.mirror.MirrorHeartbeatConnector",
      "heartbeats.topic.replication.factor": "1",
      "replication.factor": "1",
      "source.cluster.alias": "source",
      "source.cluster.bootstrap.servers": "redpanda:29092",
      "target.cluster.bootstrap.servers": "redpanda:29092"
    }
  }'
```
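After creating a connector, you can poll its status endpoint. A sketch that extracts the top-level state using only standard tools (the connector name matches the example above; with `jq` installed, `jq -r .connector.state` is simpler):

```bash
# Return a connector's state, such as RUNNING or FAILED, via the REST API.
connector_state() {
  curl -s "localhost:8083/connectors/$1/status" |
    grep -o '"state":"[A-Z]*"' | head -n 1 |
    sed 's/.*"state":"\([A-Z]*\)".*/\1/'
}
```

For example, `connector_state heartbeat-connector`.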
Manage connectors with Redpanda Console
Redpanda Console provides a user interface that lets you manage multiple Kafka Connect clusters. You can inspect or patch connectors; restart, pause, and resume connector tasks; and delete connectors.
For details on how to set it up, see Connect Redpanda Console to Kafka Connect Clusters.