# Oracle Object Storage and Confluent Connector

## Prerequisites
This assumes you already have an Oracle Cloud Infrastructure account. If not, create an Oracle Cloud Infrastructure tenancy; see [Signing Up for Oracle Cloud Infrastructure](https://docs.cloud.oracle.com/iaas/Content/GSG/Tasks/signingup.htm).

Create an [Amazon S3 Compatibility API key](https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/managingcredentials.htm#Working2). An Amazon S3 Compatibility API key consists of an Access Key/Secret Key pair.

Identify your Object Storage namespace, which is typically the same as your tenancy name; you will need it later. You can find it in the OCI console, as shown in the screenshot below.

![](./images/object%20storage/01%20-%20tenant.png)

Identify the Oracle Cloud Infrastructure region you plan to use, e.g., us-phoenix-1 or us-ashburn-1.

The API endpoint (store.url) used in the Connect S3 connector configuration to access Oracle Object Storage depends on the region and namespace values from the prerequisites above.

Examples of API endpoints include:

    https://<object_storage_namespace>.compat.objectstorage.eu-frankfurt-1.oraclecloud.com
    https://<object_storage_namespace>.compat.objectstorage.uk-london-1.oraclecloud.com

Replace <object_storage_namespace> with the value from the prerequisites above.
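
For example, with the namespace intmahesht (the value used later in this walkthrough) and the us-phoenix-1 region, the endpoint becomes:

    https://intmahesht.compat.objectstorage.us-phoenix-1.oraclecloud.com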

Create a bucket in Oracle Object Storage using the OCI console, e.g., **kafka_sink_object_storage_bucket**.

![](./images/object%20storage/02%20-%20create%20bucket.png)
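
If you prefer the command line, the same bucket can presumably also be created with the OCI CLI, assuming the CLI is installed and configured (the compartment OCID below is a placeholder):

    # Create the sink bucket in your Object Storage namespace (placeholder compartment OCID)
    oci os bucket create --namespace <object_storage_namespace> --name kafka_sink_object_storage_bucket --compartment-id <your_compartment_ocid>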

## Configure Confluent to Access Object Storage
This assumes you already have Confluent installed on OCI using this GitHub repo. Create a topic using the Confluent Control Center UI, the command line, or the REST API, e.g., **kafka_oci_object_storage_test**.

![](./images/object%20storage/03%20-%20create%20topic.png)
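
If you would rather use the command line, here is a minimal sketch with the Confluent 5.x tooling (cf-broker-1:2181 is a hypothetical ZooKeeper address; adjust to your deployment):

    # Create the test topic (cf-broker-1:2181 is a hypothetical ZooKeeper address)
    /opt/confluent/bin/kafka-topics --create --zookeeper cf-broker-1:2181 --replication-factor 3 --partitions 1 --topic kafka_oci_object_storage_test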

Produce a few messages with the JSON value '{ "foo": "bar" }' to the topic created above. This example uses the REST API, so you can run it from anywhere, as long as the Confluent worker nodes (cf-worker-1) are reachable.

Example:

    ssh -i ~/.ssh/id_rsa opc@<ip address of cf-worker-1>
    for i in {1..10} ; do echo $i; curl -X POST -H "Content-Type: application/vnd.kafka.json.v1+json" --data '{"records":[{"value":{"foo":"bar"}}]}' http://cf-worker-1:8082/topics/kafka_oci_object_storage_test ; done;

Gracefully stop the connect-distributed daemon using the command below. Run this on all Confluent worker nodes (example: cf-worker-1):

    ssh -i ~/.ssh/id_rsa opc@<ip address of cf-worker-1>
    ps -efw | grep "org.apache.kafka.connect.cli.ConnectDistributed" | grep -v "grep " | gawk '{ print $2 }' | xargs sudo kill -15

Update connect-distributed.properties on all worker nodes to use JsonConverter with schemas.enable set to false. This example uses JSON messages, so the change below is needed; by default, the file is configured with AvroConverter.

On each of the Confluent Worker Nodes (example: cf-worker-<n>):

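A sketch of the relevant connect-distributed.properties entries for schemaless JSON (the property names are standard Kafka Connect settings; the surrounding contents of your file will differ):

    # Use the JSON converter instead of the default AvroConverter
    key.converter=org.apache.kafka.connect.json.JsonConverter
    value.converter=org.apache.kafka.connect.json.JsonConverter
    # Plain JSON payloads such as {"foo":"bar"} carry no embedded schema
    key.converter.schemas.enable=false
    value.converter.schemas.enable=false
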
Or the following, if your JSON messages embed schemas:
    key.converter.schemas.enable=true
    value.converter.schemas.enable=true

Configure the Confluent worker nodes with credentials to access Object Storage and start the Kafka Connect daemon. The keys below are labelled AWS_xxxxx, but their values need to be set to the keys generated in the prerequisites.

Do the steps on each of the Confluent Worker Nodes (example: cf-worker-<n>):

    AWS_SECRET_ACCESS_KEY=<replace with your OCI Object storage secret key>
    /opt/confluent/bin/connect-distributed -daemon /opt/confluent/etc/kafka/connect-distributed.properties
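
The connector picks these credentials up from the daemon's environment, so both variables presumably need to be exported (or set inline) before launch; a minimal sketch:

    # Export the OCI S3-compatibility keys so connect-distributed inherits them
    export AWS_ACCESS_KEY_ID=<replace with your OCI Object Storage access key>
    export AWS_SECRET_ACCESS_KEY=<replace with your OCI Object Storage secret key>
    /opt/confluent/bin/connect-distributed -daemon /opt/confluent/etc/kafka/connect-distributed.properties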

Load the Confluent Connect S3 Sink connector with configuration to access Oracle Object Storage.

Note: we are setting the parameters below to OCI-specific values (not AWS values):

"s3.region": "us-phoenix-1"
"store.url": "intmahesht.compat.objectstorage.us-phoenix-1.oraclecloud.com"

Replace the above with the values from the prerequisites.

Similarly, replace the parameters below with the values that apply to your implementation, as in the sketch that follows:

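A minimal sketch of loading the sink through the Connect REST API (the connector name, bucket, topic, flush.size, and tasks.max are illustrative; port 8083 is the Connect default, and the property names are the standard Confluent S3 sink settings):

    curl -X POST -H "Content-Type: application/json" --data '{
      "name": "oci-object-storage-sink",
      "config": {
        "connector.class": "io.confluent.connect.s3.S3SinkConnector",
        "topics": "kafka_oci_object_storage_test",
        "s3.bucket.name": "kafka_sink_object_storage_bucket",
        "s3.region": "us-phoenix-1",
        "store.url": "https://intmahesht.compat.objectstorage.us-phoenix-1.oraclecloud.com",
        "storage.class": "io.confluent.connect.s3.storage.S3Storage",
        "format.class": "io.confluent.connect.s3.format.json.JsonFormat",
        "flush.size": "3",
        "tasks.max": "1"
      }
    }' http://cf-worker-1:8083/connectors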
