Prepare AWS for Amazon S3 sink
Set up AWS to allow the S3 sink connector to write data from Apache Kafka® to Amazon S3.
Create the S3 bucket
- Open the AWS S3 console.
- Create a bucket.
- Enter a bucket name and choose a region. Leave the remaining settings at their defaults. A scripted alternative follows the note below.
note
Keep Block all public access enabled. The connector uses IAM permissions to access the bucket.
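If you prefer to script this step, the console actions above map to a single API call. The following is a minimal sketch using boto3; the bucket name and region are example values, not ones from this guide. New buckets created this way also have Block all public access enabled by default.

```python
import boto3

# Example values; substitute your own bucket name and region.
BUCKET = "my-kafka-s3-sink"
REGION = "eu-west-1"

s3 = boto3.client("s3", region_name=REGION)

# Outside us-east-1, S3 requires an explicit LocationConstraint;
# for us-east-1, omit CreateBucketConfiguration entirely.
s3.create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={"LocationConstraint": REGION},
)
```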
Create an IAM policy
The Apache Kafka Connect S3 sink connector requires these permissions:
- s3:GetObject
- s3:PutObject
- s3:AbortMultipartUpload
- s3:ListMultipartUploadParts
- s3:ListBucketMultipartUploads
Create an inline policy in AWS IAM and replace <AWS_S3_BUCKET_NAME> with your bucket name:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts",
        "s3:ListBucketMultipartUploads"
      ],
      "Resource": [
        "arn:aws:s3:::<AWS_S3_BUCKET_NAME>",
        "arn:aws:s3:::<AWS_S3_BUCKET_NAME>/*"
      ]
    }
  ]
}
```
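The same document can also be applied programmatically. A sketch using boto3's put_user_policy is below; the user and policy names are placeholders, and because inline policies are attached directly to a user, it assumes the IAM user from the next section already exists.

```python
import json

import boto3

iam = boto3.client("iam")

# The policy document shown above, as a Python dict.
# Example bucket name; substitute your own.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:AbortMultipartUpload",
                "s3:ListMultipartUploadParts",
                "s3:ListBucketMultipartUploads",
            ],
            "Resource": [
                "arn:aws:s3:::my-kafka-s3-sink",
                "arn:aws:s3:::my-kafka-s3-sink/*",
            ],
        }
    ],
}

# Attach the document as an inline policy on the user created
# in the next section; the user must exist before this call.
iam.put_user_policy(
    UserName="kafka-s3-sink-user",
    PolicyName="kafka-s3-sink-policy",
    PolicyDocument=json.dumps(policy),
)
```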
Create the IAM user
- Open the IAM console.
- Create a user.
- Under Select AWS credential type, choose Access key - Programmatic access. Copy the Access key ID and Secret access key; you use these values in the connector configuration (see the sketch at the end of this page).
- In Permissions, attach the policy created in the previous section. A scripted version of these steps is sketched after this list.
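Scripted, creating the user and its programmatic credentials takes two boto3 calls. The user name below is a placeholder; the inline-policy call from the previous section's sketch would run after create_user.

```python
import boto3

iam = boto3.client("iam")

# Create the user that the connector authenticates as.
iam.create_user(UserName="kafka-s3-sink-user")

# Generate programmatic credentials. The secret access key is
# returned only once, so store both values securely now.
resp = iam.create_access_key(UserName="kafka-s3-sink-user")
access_key_id = resp["AccessKey"]["AccessKeyId"]
secret_access_key = resp["AccessKey"]["SecretAccessKey"]
```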
note
If you see Access Denied errors when starting the connector, review the AWS guidance for S3 access issues.
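With the bucket, policy, and user in place, the credentials are wired into the connector configuration. The sketch below registers a connector through the Kafka Connect REST API; the connector class and aws.* property names follow Aiven's S3 sink connector and are assumptions here, as are the endpoint, topic, bucket, and region values. Check your connector's documentation for the exact keys.

```python
import requests

# Assumed example values: Connect REST endpoint, topic, bucket, region,
# and the credentials copied when creating the IAM user.
connector = {
    "name": "s3-sink",
    "config": {
        # Connector class for Aiven's S3 sink; other S3 sink connectors
        # use a different class and different aws.* property names.
        "connector.class": "io.aiven.kafka.connect.s3.AivenKafkaConnectS3SinkConnector",
        "topics": "my-topic",
        "aws.access.key.id": "<ACCESS_KEY_ID>",
        "aws.secret.access.key": "<SECRET_ACCESS_KEY>",
        "aws.s3.bucket.name": "my-kafka-s3-sink",
        "aws.s3.region": "eu-west-1",
    },
}

# POST /connectors registers a new connector on a Connect worker.
resp = requests.post("http://localhost:8083/connectors", json=connector)
resp.raise_for_status()
```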