Configure GCP for a Google BigQuery sink connector
To sink data from Apache Kafka® to Google BigQuery with the dedicated Aiven connector, open the GCP console and complete the following tasks:
- Create a Google service account and generate a JSON service key
- Verify that BigQuery API is enabled
- Create a BigQuery dataset, or identify an existing one, where the data will be stored
- Grant dataset access to the service account
Create a Google service account and generate a JSON service key
Follow the instructions to:
- create a Google service account
- create a JSON service key
The JSON service key will be used in the connector configuration.
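If you prefer to script this step, the following is a minimal sketch that uses the IAM API through the google-api-python-client library to create the service account and download a JSON key. The project ID, account ID, display name, and output file name are assumptions to replace with your own values; the console instructions above remain the reference.

```python
# Minimal sketch: create a service account and a JSON key via the IAM API.
# Assumes google-api-python-client is installed and application default
# credentials are configured; PROJECT_ID and ACCOUNT_ID are placeholders.
import base64

from googleapiclient import discovery

PROJECT_ID = "my-gcp-project"           # assumption: replace with your project
ACCOUNT_ID = "bigquery-sink-connector"  # assumption: any valid account ID works

iam = discovery.build("iam", "v1")

# Create the service account.
account = iam.projects().serviceAccounts().create(
    name=f"projects/{PROJECT_ID}",
    body={
        "accountId": ACCOUNT_ID,
        "serviceAccount": {"displayName": "BigQuery sink connector"},
    },
).execute()

# Generate a JSON key for the account; privateKeyData is base64-encoded.
key = iam.projects().serviceAccounts().keys().create(
    name=account["name"], body={}
).execute()

with open("service-key.json", "wb") as f:
    f.write(base64.b64decode(key["privateKeyData"]))
```

The decoded `service-key.json` content is what you later paste into the connector configuration.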
Verify that BigQuery API is enabled
The BigQuery sink connector uses the BigQuery API to push the data. To enable it:
- Go to the GCP APIs & Services dashboard and click BigQuery API.
- Verify that the BigQuery API is already enabled, or follow the provided steps to enable it.
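You can also check the API state programmatically. The sketch below queries the Service Usage API, which is an alternative to the console check described above; the project ID is a placeholder assumption.

```python
# Minimal sketch: check whether the BigQuery API is enabled for a project
# using the Service Usage API. PROJECT_ID is a placeholder.
from googleapiclient import discovery

PROJECT_ID = "my-gcp-project"  # assumption: replace with your project

serviceusage = discovery.build("serviceusage", "v1")
svc = serviceusage.services().get(
    name=f"projects/{PROJECT_ID}/services/bigquery.googleapis.com"
).execute()

print(svc["state"])  # prints "ENABLED" if the BigQuery API is active
```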
Create the Google BigQuery dataset
You can either send the Apache Kafka® data to an existing Google BigQuery dataset or create a new one in the GCP console by following the instructions on the dedicated page.
When creating the dataset, specify a data location in a region close to where your Aiven for Apache Kafka® service is running to minimize latency.
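As an alternative to the console, a dataset can be created with the google-cloud-bigquery Python client, as in the sketch below. The project, dataset name, and region are assumptions to replace with your own values; run it with your own application default credentials, since the service account is only granted dataset access in the next step.

```python
# Minimal sketch: create the destination dataset, pinning the data location
# to a region near your Aiven for Apache Kafka service.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # assumption: your project

dataset = bigquery.Dataset("my-gcp-project.kafka_sink")  # assumption: your dataset
dataset.location = "europe-west1"  # assumption: pick the region of your Kafka service

client.create_dataset(dataset, exists_ok=True)
```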
Grant dataset access to the service account
The newly created service account needs access to the dataset to be able to write data to it. Follow the dedicated instructions to check and modify the dataset permissions. The BigQuery Data Editor role is sufficient for the connector to work.
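The same grant can be scripted as sketched below: at the dataset level, the `WRITER` access entry corresponds to the BigQuery Data Editor role. The project, dataset name, and service account email are placeholder assumptions.

```python
# Minimal sketch: grant the service account WRITER access on the dataset,
# the dataset-level equivalent of the BigQuery Data Editor role.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # assumption: your project

dataset = client.get_dataset("my-gcp-project.kafka_sink")  # assumption: your dataset

# Append a new access entry for the connector's service account.
entries = list(dataset.access_entries)
entries.append(
    bigquery.AccessEntry(
        role="WRITER",
        entity_type="userByEmail",
        entity_id="bigquery-sink-connector@my-gcp-project.iam.gserviceaccount.com",
    )
)
dataset.access_entries = entries
client.update_dataset(dataset, ["access_entries"])
```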