Prepare AWS for Amazon S3 sink

Set up AWS to allow the S3 sink connector to write data from Apache Kafka® to Amazon S3.

Create the S3 bucket

  1. Open the AWS S3 console.
  2. Create a bucket.
  3. Enter a bucket name, choose a region, and keep the remaining settings at their defaults.
Note: Keep Block all public access enabled. The connector uses IAM permissions to access the bucket.
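
If you prefer to script this step, the following is a minimal sketch using the AWS SDK for Python (boto3). The bucket name and region are hypothetical placeholders, and your AWS credentials are assumed to be configured locally.

import boto3

# Hypothetical placeholders: substitute your own bucket name and region.
BUCKET_NAME = "my-kafka-sink-bucket"
REGION = "eu-west-1"

s3 = boto3.client("s3", region_name=REGION)

# Regions other than us-east-1 require an explicit LocationConstraint.
s3.create_bucket(
    Bucket=BUCKET_NAME,
    CreateBucketConfiguration={"LocationConstraint": REGION},
)

Newly created buckets keep Block all public access enabled by default, which is the state the connector expects.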

Create an IAM policy

The Apache Kafka Connect S3 sink connector requires these permissions:

  • s3:GetObject
  • s3:PutObject
  • s3:AbortMultipartUpload
  • s3:ListMultipartUploadParts
  • s3:ListBucketMultipartUploads

Create an inline policy in AWS IAM and replace <AWS_S3_BUCKET_NAME> with your bucket name:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts",
        "s3:ListBucketMultipartUploads"
      ],
      "Resource": [
        "arn:aws:s3:::<AWS_S3_BUCKET_NAME>",
        "arn:aws:s3:::<AWS_S3_BUCKET_NAME>/*"
      ]
    }
  ]
}
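
If you are scripting the setup, the same document can be rendered with the bucket name already substituted. The sketch below, with a hypothetical bucket name and output file, writes the policy to disk for use in the next section. Note that the bare bucket ARN covers the bucket-level s3:ListBucketMultipartUploads action, while the /* ARN covers the object-level actions.

import json

BUCKET_NAME = "my-kafka-sink-bucket"  # hypothetical; use your bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:AbortMultipartUpload",
                "s3:ListMultipartUploadParts",
                "s3:ListBucketMultipartUploads",
            ],
            # Bucket ARN for the bucket-level action, /* for object actions.
            "Resource": [
                f"arn:aws:s3:::{BUCKET_NAME}",
                f"arn:aws:s3:::{BUCKET_NAME}/*",
            ],
        }
    ],
}

# Save the rendered document for the next section.
with open("s3-sink-policy.json", "w") as f:
    json.dump(policy, f, indent=2)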

Create the IAM user

  1. Open the IAM console.
  2. Create a user.
  3. Under Select AWS credential type, select Access key - Programmatic access, then copy the Access key ID and Secret access key. You use these values in the connector configuration.
  4. In Permissions, attach the policy created in the previous section. A scripted sketch of steps 2 through 4 follows this list.
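
The following is a minimal boto3 sketch of steps 2 through 4, assuming configured AWS credentials. The user name, policy name, and policy file are hypothetical, and the file is assumed to hold the policy JSON from the previous section.

import boto3

iam = boto3.client("iam")

USER_NAME = "kafka-s3-sink"           # hypothetical user name
POLICY_NAME = "kafka-s3-sink-policy"  # hypothetical inline policy name
POLICY_FILE = "s3-sink-policy.json"   # assumed to hold the policy JSON above

# Step 2: create the user.
iam.create_user(UserName=USER_NAME)

# Step 3: create the programmatic access key. The secret is returned
# only once; copy both values for the connector configuration.
key = iam.create_access_key(UserName=USER_NAME)["AccessKey"]
print("Access key ID:    ", key["AccessKeyId"])
print("Secret access key:", key["SecretAccessKey"])

# Step 4: attach the policy as an inline policy on the user.
with open(POLICY_FILE) as f:
    policy_document = f.read()
iam.put_user_policy(
    UserName=USER_NAME,
    PolicyName=POLICY_NAME,
    PolicyDocument=policy_document,
)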
Note: If you see Access Denied errors when starting the connector, review the AWS guidance for S3 access issues.