Advanced parameters for Aiven for Apache Kafka®

See the configuration options available for Aiven for Apache Kafka®:
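
These parameters are applied through the service's user configuration (for example via the Aiven API, CLI, or Terraform). As a rough sketch of how the values on this page nest together (the underscore key form and the sample values are assumptions to verify against your client's schema):

```python
# Illustrative sketch only: how the parameters on this page nest inside a
# service user configuration. Key names are shown in underscore form, an
# assumption to check against the schema your API client or Terraform
# provider exposes; the values are arbitrary examples.
user_config = {
    "kafka_version": "3.8",        # top-level string parameter
    "static_ips": False,           # top-level boolean parameter
    "kafka": {                     # "Kafka broker configuration values"
        "log_retention_hours": 168,
        "min_insync_replicas": 2,
    },
    "public_access": {             # "Allow access ... from the public Internet"
        "kafka": False,
        "prometheus": True,
    },
}
```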

custom_domain

string,null

Custom domain

Serve the web frontend using a custom CNAME pointing to the Aiven DNS name

IP filter

Allow incoming connections from CIDR address block, e.g. '10.20.0.0/16'

  • default: 0.0.0.0/0

service_log

boolean,null

Service logging

Store logs for the service so that they are available in the HTTP API and console.

static_ips

boolean

Static IP addresses

Use static public IP addresses

Single-zone configuration

Enabled

Whether to allocate nodes on the same availability zone or spread them across the available zones. By default, service nodes are spread across different AZs. Single-AZ support is best-effort and may temporarily allocate nodes in different AZs, e.g. in case of capacity limitations in one AZ.

Allow access to selected service ports from private networks

Allow clients to connect to kafka with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations

Allow clients to connect to kafka_connect with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations

Allow clients to connect to kafka_rest with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations

Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations

Allow clients to connect to schema_registry with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations

Allow access to selected service ports from the public Internet

Allow clients to connect to kafka from the public internet for service nodes that are in a project VPC or another type of private network

Allow clients to connect to kafka_connect from the public internet for service nodes that are in a project VPC or another type of private network

Allow clients to connect to kafka_rest from the public internet for service nodes that are in a project VPC or another type of private network

Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network

Allow clients to connect to schema_registry from the public internet for service nodes that are in a project VPC or another type of private network

Allow access to selected service components through Privatelink

Enable jolokia

Enable kafka

Enable kafka_connect

Enable kafka_rest

Enable prometheus

Enable schema_registry

Use Letsencrypt CA for Kafka SASL via Privatelink

kafka

object

Kafka broker configuration values

compression.type

Specify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy', 'lz4', 'zstd'). It additionally accepts 'uncompressed', which is equivalent to no compression, and 'producer', which means retain the original compression codec set by the producer. (Default: producer)
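
Because the default is 'producer', the codec chosen on the client side is what ends up in the topic's log. A minimal producer sketch with kafka-python (the client library, broker address, and topic name are illustrative assumptions, not part of this page):

```python
# With the broker's compression.type left at 'producer', the codec set
# here is retained in the topic's log segments.
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="my-kafka.example.com:9092",  # placeholder address
    compression_type="gzip",  # or snappy / lz4 / zstd
)
producer.send("my-topic", b"hello")
producer.flush()
```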

group.initial.rebalance.delay.ms

The amount of time, in milliseconds, the group coordinator will wait for more consumers to join a new group before performing the first rebalance. A longer delay means potentially fewer rebalances, but increases the time until processing begins. During development and testing it might be desirable to set this to 0 in order not to delay test execution time. (Default: 3000 ms (3 seconds))

  • max: 300000

group.min.session.timeout.ms

The minimum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in between heartbeats at the cost of a longer time to detect failures. (Default: 6000 ms (6 seconds))

  • max: 60000

group.max.session.timeout.ms

The maximum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in between heartbeats at the cost of a longer time to detect failures. (Default: 1800000 ms (30 minutes))

  • max: 1800000
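
A consumer's own session timeout must fall inside the window these two broker settings define, or the broker rejects it when it joins the group. A hedged kafka-python sketch (addresses and names are placeholders):

```python
# session_timeout_ms must lie within the broker's
# [group.min.session.timeout.ms, group.max.session.timeout.ms] window.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "my-topic",
    bootstrap_servers="my-kafka.example.com:9092",
    group_id="my-group",
    session_timeout_ms=45000,     # must be within the broker's allowed window
    heartbeat_interval_ms=15000,  # conventionally about 1/3 of the session timeout
)
```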

connections.max.idle.ms

Idle connections timeout: the server socket processor threads close the connections that idle for longer than this. (Default: 600000 ms (10 minutes))

  • min: 1000
  • max: 3600000

max.incremental.fetch.session.cache.slots

The maximum number of incremental fetch sessions that the broker will maintain. (Default: 1000)

  • min: 1000
  • max: 10000

message.max.bytes

The maximum size of a message that the server can receive. (Default: 1048588 bytes (1 mebibyte + 12 bytes))

  • max: 100001200
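
Producers have a matching client-side limit, so the two are usually raised together. A hedged kafka-python sketch (placeholders throughout):

```python
# A producer batch larger than the broker's message.max.bytes is rejected
# with MessageSizeTooLargeError, so keep max_request_size <= the broker value.
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="my-kafka.example.com:9092",
    max_request_size=5 * 1024 * 1024,  # 5 MiB, assuming the broker allows it
)
```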

offsets.retention.minutes

Log retention window in minutes for offsets topic (Default: 10080 minutes (7 days))

  • min: 1
  • max: 2147483647

log.cleaner.delete.retention.ms

How long delete records (tombstones) are retained. (Default: 86400000 ms (1 day))

  • max: 315569260000

log.cleaner.min.cleanable.ratio

Controls log compactor frequency. Larger value means more frequent compactions but also more space wasted for logs. Consider setting log.cleaner.max.compaction.lag.ms to enforce compactions sooner, instead of setting a very high value for this option. (Default: 0.5)

  • min: 0.2
  • max: 0.9

log.cleaner.max.compaction.lag.ms

The maximum amount of time a message will remain uncompacted. Only applicable for logs that are being compacted. (Default: 9223372036854775807 ms (Long.MAX_VALUE))

  • min: 30000
  • max: 9223372036854776000

log.cleaner.min.compaction.lag.ms

The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted. (Default: 0 ms)

  • max: 9223372036854776000

log.cleanup.policy

The default cleanup policy for segments beyond the retention window (Default: delete)

log.flush.interval.messages

The number of messages accumulated on a log partition before messages are flushed to disk (Default: 9223372036854775807 (Long.MAX_VALUE))

  • min: 1
  • max: 9223372036854776000

log.flush.interval.ms

The maximum time in ms that a message in any topic is kept in memory (page-cache) before flushed to disk. If not set, the value in log.flush.scheduler.interval.ms is used (Default: null)

  • max: 9223372036854776000

log.index.interval.bytes

The interval with which Kafka adds an entry to the offset index (Default: 4096 bytes (4 kibibytes))

  • max: 104857600

log.index.size.max.bytes

The maximum size in bytes of the offset index (Default: 10485760 (10 mebibytes))

  • min: 1048576
  • max: 104857600

log.local.retention.ms

The number of milliseconds to keep the local log segments before they become eligible for deletion. If set to -2, the value of log.retention.ms is used. The effective value should always be less than or equal to the log.retention.ms value. (Default: -2)

  • min: -2
  • max: 9223372036854776000

log.local.retention.bytes

The maximum size local log segments can grow to for a partition before they become eligible for deletion. If set to -2, the value of log.retention.bytes is used. The effective value should always be less than or equal to the log.retention.bytes value. (Default: -2)

  • min: -2
  • max: 9223372036854776000

log.message.downconversion.enable

This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests. (Default: true)

log.message.timestamp.type

Define whether the timestamp in the message is message create time or log append time. (Default: CreateTime)

log.message.timestamp.difference.max.ms

The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message (Default: 9223372036854775807 (Long.MAX_VALUE))

  • max: 9223372036854776000

log.preallocate

Whether to preallocate the file when creating a new segment. (Default: false)

log.retention.bytes

The maximum size of the log before deleting messages (Default: -1)

  • min: -1
  • max: 9223372036854776000

log.retention.hours

The number of hours to keep a log file before deleting it (Default: 168 hours (1 week))

  • min: -1
  • max: 2147483647

log.retention.ms

The number of milliseconds to keep a log file before deleting it. If not set, the value in log.retention.minutes is used. If set to -1, no time limit is applied. (Default: null, log.retention.hours applies)

  • min: -1
  • max: 9223372036854776000
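
The log.retention.* values are cluster-wide defaults; individual topics can override them. A hedged sketch with the kafka-python admin client (topic name and values are arbitrary examples):

```python
# Create a topic whose own retention.ms overrides the cluster default
# set by log.retention.hours / log.retention.ms.
from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(bootstrap_servers="my-kafka.example.com:9092")
admin.create_topics([
    NewTopic(
        name="audit-log",
        num_partitions=3,
        replication_factor=3,
        topic_configs={"retention.ms": "2592000000"},  # 30 days for this topic only
    )
])
```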

log.roll.jitter.ms

The maximum jitter to subtract from logRollTimeMillis (in milliseconds). If not set, the value in log.roll.jitter.hours is used (Default: null)

  • max: 9223372036854776000

log.roll.ms

The maximum time before a new log segment is rolled out (in milliseconds). (Default: null, log.roll.hours applies (Default: 168, 7 days))

  • min: 1
  • max: 9223372036854776000

log.segment.bytes

The maximum size of a single log file (Default: 1073741824 bytes (1 gibibyte))

  • min: 10485760
  • max: 1073741824

log.segment.delete.delay.ms

The amount of time to wait before deleting a file from the filesystem (Default: 60000 ms (1 minute))

  • max: 3600000

auto.create.topics.enable

Enable auto-creation of topics. (Default: true)

min.insync.replicas

When a producer sets acks to 'all' (or '-1'), min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. (Default: 1)

  • min: 1
  • max: 7
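
The setting only has teeth when producers request full acknowledgement. A hedged kafka-python sketch (placeholders throughout):

```python
# With acks='all', a write succeeds only once at least min.insync.replicas
# replicas (leader included) have it; otherwise the send fails with
# NotEnoughReplicasError instead of silently degrading durability.
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="my-kafka.example.com:9092",
    acks="all",  # pairs with the broker/topic min.insync.replicas value
)
future = producer.send("my-topic", b"payload")
metadata = future.get(timeout=10)  # raises if too few in-sync replicas
```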

num.partitions

Number of partitions for auto-created topics (Default: 1)

  • min: 1
  • max: 1000

default.replication.factor

Replication factor for auto-created topics (Default: 3)

  • min: 1
  • max: 10

replica.fetch.max.bytes

The number of bytes of messages to attempt to fetch for each partition. This is not an absolute maximum; if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. (Default: 1048576 bytes (1 mebibyte))

  • min: 1048576
  • max: 104857600

replica.fetch.response.max.bytes

Maximum bytes expected for the entire fetch response. Records are fetched in batches, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. As such, this is not an absolute maximum. (Default: 10485760 bytes (10 mebibytes))

  • min: 10485760
  • max: 1048576000

max.connections.per.ip

The maximum number of connections allowed from each IP address (Default: 2147483647).

  • min: 256
  • max: 2147483647

producer.purgatory.purge.interval.requests

The purge interval (in number of requests) of the producer request purgatory (Default: 1000).

  • min: 10
  • max: 10000

sasl.oauthbearer.expected.audience

The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. (Default: null)

sasl.oauthbearer.expected.issuer

Optional setting for the broker to use to verify that the JWT was created by the expected issuer. (Default: null)

sasl.oauthbearer.jwks.endpoint.url

OIDC JWKS endpoint URL. By setting this the SASL SSL OAuth2/OIDC authentication is enabled. See also other options for SASL OAuth2/OIDC. (Default: null)

sasl.oauthbearer.sub.claim.name

Name of the scope from which to extract the subject claim from the JWT. (Default: sub)

socket.request.max.bytes

The maximum number of bytes in a socket request (Default: 104857600 bytes).

  • min: 10485760
  • max: 209715200

transaction.state.log.segment.bytes

The transaction topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads (Default: 104857600 bytes (100 mebibytes)).

  • min: 1048576
  • max: 2147483647

transaction.remove.expired.transaction.cleanup.interval.ms

The interval at which to remove transactions that have expired due to transactional.id.expiration.ms passing (Default: 3600000 ms (1 hour)).

  • min: 600000
  • max: 3600000

transaction.partition.verification.enable

Enable verification that checks that the partition has been added to the transaction before writing transactional records to the partition. (Default: true)

Kafka authentication methods

Enable certificate/SSL authentication

  • default: true

Enable SASL authentication

Kafka SASL mechanisms

Enable PLAIN mechanism

  • default: true

Enable SCRAM-SHA-256 mechanism

  • default: true

Enable SCRAM-SHA-512 mechanism

  • default: true
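
Once SASL and a SCRAM mechanism are enabled, clients authenticate over TLS roughly like this (a kafka-python sketch; host, port, credentials, and CA file are placeholders):

```python
# SASL SCRAM-SHA-512 over TLS; requires kafka-python 2.0+ for SCRAM support.
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="my-kafka.example.com:9095",  # placeholder SASL port
    security_protocol="SASL_SSL",
    sasl_mechanism="SCRAM-SHA-512",
    sasl_plain_username="avnadmin",         # placeholder credentials
    sasl_plain_password="service-password",
    ssl_cafile="ca.pem",                    # service CA certificate
)
```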

Enable follower fetching

Enabled

Whether to enable the follower fetching functionality

Enable Kafka Connect service

Kafka Connect configuration values

Client config override policy

Defines what client configurations can be overridden by the connector. Default is None

Consumer auto offset reset

What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server. Default is earliest

The maximum amount of data the server should return for a fetch request

Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum.

  • min: 1048576
  • max: 104857600

Consumer isolation level

Transaction read isolation level. read_uncommitted is the default, but read_committed can be used if consume-exactly-once behavior is desired.
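
On the client side this maps to the consumer's isolation level. A hedged kafka-python sketch (placeholders; isolation_level support requires a reasonably recent client version):

```python
# A read_committed consumer only sees messages from committed transactions;
# read_uncommitted (the default) returns everything as soon as it arrives.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "my-topic",
    bootstrap_servers="my-kafka.example.com:9092",
    isolation_level="read_committed",
)
```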

The maximum amount of data per-partition the server will return

Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress.

  • min: 1048576
  • max: 104857600

The maximum delay between polls when using consumer group management

The maximum delay in milliseconds between invocations of poll() when using consumer group management (defaults to 300000).

  • min: 1
  • max: 2147483647

The maximum number of records returned by a single poll

The maximum number of records returned in a single call to poll() (defaults to 500).

  • min: 1
  • max: 10000
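
These two settings are tuned together when record processing is slow. A hedged kafka-python sketch (placeholders throughout):

```python
# If handling a batch can exceed max.poll.interval.ms, either fetch fewer
# records per poll() or raise the interval; otherwise the coordinator
# considers the consumer failed and rebalances the group.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "my-topic",
    bootstrap_servers="my-kafka.example.com:9092",
    group_id="slow-workers",
    max_poll_records=100,         # smaller batches per poll()
    max_poll_interval_ms=600000,  # allow up to 10 minutes between polls
)
```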

The interval at which to try committing offsets for tasks

The interval at which to try committing offsets for tasks (defaults to 60000).

  • min: 1
  • max: 100000000

Offset flush timeout

Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt (defaults to 5000).

  • min: 1
  • max: 2147483647

The batch size in bytes the producer will attempt to collect for the same partition before publishing to broker

This setting gives the upper bound of the batch size to be sent. If there are fewer than this many bytes accumulated for this partition, the producer will 'linger' for the linger.ms time waiting for more records to show up. A batch size of zero will disable batching entirely (defaults to 16384).

  • max: 5242880

The total bytes of memory the producer can use to buffer records waiting to be sent to the broker

The total bytes of memory the producer can use to buffer records waiting to be sent to the broker (defaults to 33554432).

  • min: 5242880
  • max: 134217728

The default compression type for producers

Specify the default compression type for producers. This configuration accepts the standard compression codecs ('gzip', 'snappy', 'lz4', 'zstd'). It additionally accepts 'none' which is the default and equivalent to no compression.

Wait for up to the given delay to allow batching records together

This setting gives the upper bound on the delay for batching: once there is batch.size worth of records for a partition, it will be sent immediately regardless of this setting; however, if there are fewer than this many bytes accumulated for this partition, the producer will 'linger' for the specified time waiting for more records to show up. Defaults to 0.

  • max: 5000
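
batch.size and linger.ms work as a pair: a batch is sent when it fills up or when the linger delay expires, whichever comes first. A hedged kafka-python sketch (placeholders throughout):

```python
# Trade a little latency (linger_ms) for better batching and compression.
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="my-kafka.example.com:9092",
    batch_size=32768,  # bytes collected per partition before sending
    linger_ms=20,      # wait up to 20 ms for a batch to fill
)
```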

The maximum size of a request in bytes

This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests.

  • min: 131072
  • max: 67108864

The maximum delay of rebalancing connector workers

The maximum delay that is scheduled in order to wait for the return of one or more departed workers before rebalancing and reassigning their connectors and tasks to the group. During this period the connectors and tasks of the departed workers remain unassigned. Defaults to 5 minutes.

  • max: 600000

The timeout used to detect failures when using Kafka’s group management facilities

The timeout in milliseconds used to detect failures when using Kafka’s group management facilities (defaults to 10000).

  • min: 1
  • max: 2147483647

Kafka Connect secret providers

Configure external secret providers in order to reference external secrets in connector configuration. Currently Hashicorp Vault (provider: vault, auth_method: token) and AWS Secrets Manager (provider: aws, auth_method: credentials) are supported. Secrets can be referenced in connector config with ${<provider_name>:<secret_path>:<key_name>}
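
For example, a connector configuration might reference a secret resolved by a provider named 'vault' (a hypothetical provider name, path, and key; the connector class is Aiven's JDBC sink, used purely for illustration):

```python
# Illustrative connector configuration: the placeholder is resolved by the
# configured secret provider at runtime, so the secret itself is never
# stored in the connector config.
connector_config = {
    "name": "my-jdbc-sink",  # hypothetical connector name
    "connector.class": "io.aiven.connect.jdbc.JdbcSinkConnector",
    "topics": "my-topic",
    "connection.password": "${vault:secret/data/db:password}",  # resolved by provider
}
```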

kafka_rest

boolean

Enable Kafka-REST service

kafka_version

string,null

Kafka major version

Enable Schema-Registry service

Enable authorization in Kafka-REST service

Kafka REST configuration

producer.acks

The number of acknowledgments the producer requires the leader to have received before considering a request complete. If set to 'all' or '-1', the leader will wait for the full set of in-sync replicas to acknowledge the record.

  • default: 1

producer.compression.type

Specify the default compression type for producers. This configuration accepts the standard compression codecs ('gzip', 'snappy', 'lz4', 'zstd'). It additionally accepts 'none' which is the default and equivalent to no compression.

producer.linger.ms

Wait for up to the given delay to allow batching records together

  • max: 5000

producer.max.request.size

The maximum size of a request in bytes. Note that the Kafka broker can also cap the record batch size.

  • max: 2147483647
  • default: 1048576

consumer.enable.auto.commit

If true, the consumer's offset will be periodically committed to Kafka in the background

  • default: true

consumer.request.max.bytes

Maximum number of bytes in unencoded message keys and values returned by a single request

  • max: 671088640
  • default: 67108864

consumer.request.timeout.ms

The maximum total time to wait for messages for a request if the maximum number of messages has not yet been reached

  • min: 1000
  • max: 30000
  • default: 1000

name.strategy

Name strategy to use when selecting subject for storing schemas

  • default: topic_name

name.strategy.validation

If true, validate that given schema is registered under expected subject name by the used name strategy when producing messages.

  • default: true

simpleconsumer.pool.size.max

Maximum number of SimpleConsumers that can be instantiated per broker

  • min: 10
  • max: 250
  • default: 25

Tiered storage configuration

Enabled

Whether to enable the tiered storage functionality

Schema Registry configuration

topic_name

The durable single partition topic that acts as the durable log for the data. This topic must be compacted to avoid losing data due to retention policy. Please note that changing this configuration in an existing Schema Registry / Karapace setup leads to previous schemas being inaccessible, data encoded with them potentially unreadable and schema ID sequence put out of order. It's only possible to do the switch while Schema Registry / Karapace is disabled. Defaults to _schemas.

leader_eligibility

If true, Karapace / Schema Registry on the service nodes can participate in leader election. It might be needed to disable this when the schemas topic is replicated to a secondary cluster and Karapace / Schema Registry there must not participate in leader election. Defaults to true.

schema_reader_strict_mode

If enabled, causes the Karapace schema-registry service to shutdown when there are invalid schema records in the _schemas topic. Defaults to false.

retriable_errors_silenced

If enabled, Kafka errors which can be retried or custom errors specified for the service will not be raised; instead, a warning log is emitted. This denoises issue tracking systems, e.g. Sentry. Defaults to true.

Allow access to read Kafka topic messages in the Aiven Console and REST API.