Client config override policy: Defines which client configurations can be overridden by the connector. Default is None.
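When the policy permits overrides, a connector can supply its own client settings. Below is a minimal sketch, assuming the standard Kafka Connect override prefixes (consumer.override., producer.override.) and the usual policy values (None, All, Principal); the connector name, class, and topic are hypothetical.

```python
# Sketch of a connector configuration that relies on client config overrides.
# The connector name, class, and topic are hypothetical; the override keys
# assume the standard Kafka Connect consumer.override. prefix. With the
# default policy of None, these override keys would be rejected.
connector_config = {
    "name": "example-sink",
    "connector.class": "org.example.ExampleSinkConnector",
    "topics": "example-topic",
    # Honoured only if the worker's override policy allows it (e.g. All):
    "consumer.override.auto.offset.reset": "latest",
    "consumer.override.max.poll.records": "200",
}
```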
Consumer auto offset reset: What to do when there is no initial offset in Kafka or if the current offset no longer exists on the server. Default is earliest.
The maximum amount of data the server should return for a fetch request (min: 1048576, max: 104857600): Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum.
Consumer isolation level: Transaction read isolation level. read_uncommitted is the default, but read_committed can be used if consume-exactly-once behavior is desired.
The maximum amount of data per-partition the server will return (min: 1048576, max: 104857600): Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress.
The maximum delay between polls when using consumer group management: The maximum delay in milliseconds between invocations of poll() when using consumer group management (defaults to 300000).
The maximum number of records returned by a single poll: The maximum number of records returned in a single call to poll() (defaults to 500).
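Taken together, the consumer settings above tune how much data is fetched per request and how often poll() must be called. The sketch below assumes they correspond to the standard Kafka consumer properties of the same meaning; the fetch sizes are example values within the documented bounds, and the rest are the documented defaults.

```python
# Sketch of the consumer-side settings described above, expressed as the
# standard Kafka consumer properties they are assumed to correspond to.
consumer_overrides = {
    "auto.offset.reset": "earliest",        # default: earliest
    "fetch.max.bytes": 52428800,            # example value within 1048576..104857600
    "isolation.level": "read_uncommitted",  # read_committed for exactly-once consumption
    "max.partition.fetch.bytes": 1048576,   # per-partition cap, within 1048576..104857600
    "max.poll.interval.ms": 300000,         # default: 300000 (5 minutes between polls)
    "max.poll.records": 500,                # default: 500 records per poll()
}
```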
The interval at which to try committing offsets for tasks: The interval in milliseconds at which to try committing offsets for tasks (defaults to 60000).
Offset flush timeout: Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt (defaults to 5000).
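The two offset flush settings work as a pair: the interval schedules commit attempts, and the timeout bounds how long each attempt may take. A minimal sketch, assuming they map to the Kafka Connect worker properties offset.flush.interval.ms and offset.flush.timeout.ms:

```python
# Sketch of the offset flush settings described above; the mapping to these
# Kafka Connect worker property names is an assumption. Values are the
# documented defaults.
offset_flush_settings = {
    "offset.flush.interval.ms": 60000,  # try committing task offsets every 60 s
    "offset.flush.timeout.ms": 5000,    # abandon a flush after 5 s and retry later
}
```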
The batch size in bytes the producer will attempt to collect for the same partition before publishing to the broker: This setting gives the upper bound of the batch size to be sent. If there are fewer than this many bytes accumulated for this partition, the producer will 'linger' for the linger.ms time, waiting for more records to show up. A batch size of zero disables batching entirely (defaults to 16384).
The total bytes of memory the producer can use to buffer records waiting to be sent to the broker (min: 5242880, max: 134217728): Defaults to 33554432.
The default compression type for producers: Specify the default compression type for producers. This configuration accepts the standard compression codecs ('gzip', 'snappy', 'lz4', 'zstd'). It additionally accepts 'none', which is the default and is equivalent to no compression.
Wait for up to the given delay to allow batching records together: This setting gives the upper bound on the delay for batching. Once there is batch.size worth of records for a partition, they will be sent immediately regardless of this setting; if fewer bytes have accumulated for the partition, the producer will 'linger' for the specified time, waiting for more records to show up. Defaults to 0.
The maximum size of a request in bytes: This setting limits the number of record batches the producer will send in a single request, to avoid sending huge requests.
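The producer settings above trade latency for throughput: larger batches, more buffer memory, compression, and a longer linger time all favor throughput. The sketch below assumes they correspond to the standard Kafka producer properties of the same meaning; values are the documented defaults, except max.request.size, which is shown as an illustrative value.

```python
# Sketch of the producer-side settings described above, expressed as the
# standard Kafka producer properties they are assumed to correspond to.
producer_overrides = {
    "batch.size": 16384,          # default: 16384 bytes collected per partition
    "buffer.memory": 33554432,    # default: 33554432; allowed 5242880..134217728
    "compression.type": "none",   # default: none; also accepts gzip, snappy, lz4, zstd
    "linger.ms": 0,               # default: 0, i.e. send as soon as a batch is ready
    "max.request.size": 1048576,  # illustrative cap on a single request, in bytes
}
```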
The maximum delay of rebalancing connector workers: The maximum delay scheduled while waiting for the return of one or more departed workers before rebalancing and reassigning their connectors and tasks to the group. During this period the connectors and tasks of the departed workers remain unassigned. Defaults to 5 minutes.
The timeout used to detect failures when using Kafka’s group management facilities: The timeout in milliseconds used to detect failures when using Kafka’s group management facilities (defaults to 10000).
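The last two settings govern how the worker group reacts to members disappearing: the session timeout decides when a worker is considered failed, and the rebalance delay decides how long to wait for it to return before reassigning its work. A minimal sketch, assuming they map to the Kafka Connect worker properties scheduled.rebalance.max.delay.ms and session.timeout.ms:

```python
# Sketch of the worker group settings described above; the mapping to these
# worker property names is an assumption. Values are the documented defaults.
worker_group_settings = {
    "scheduled.rebalance.max.delay.ms": 300000,  # wait up to 5 minutes for departed workers
    "session.timeout.ms": 10000,                 # consider a worker failed after 10 s of silence
}
```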