Move data from Database to Kafka and then to Amazon S3 Storage

Contributed by @pkona

There are two pipelines in this pattern. The first pipeline extracts data from a database and publishes it to a Kafka topic. The second pipeline consumes from the Kafka topic and ingests the data into Amazon S3 Storage by calling a third pipeline.

Publish to Kafka

Source: Oracle
Target: Kafka
Snaps used: Oracle Select, Group By N, Confluent Kafka Producer
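Outside SnapLogic, the logic of this first pipeline can be sketched in plain Python: rows selected from the database are bundled into fixed-size batches (as the Group By N Snap does) and each batch is published to Kafka as one JSON message. The topic name, broker address, and batch size below are illustrative assumptions, not values from the pattern; the network call uses the `confluent-kafka` client and is kept in a separate function so the batching logic stands alone.

```python
import json

def group_by_n(rows, n):
    """Mimic the Group By N Snap: bundle input rows into batches of size n."""
    return [rows[i:i + n] for i in range(0, len(rows), n)]

def to_kafka_messages(rows, batch_size=100):
    """Serialize each batch as one JSON message, as the producer Snap would send."""
    return [json.dumps(batch) for batch in group_by_n(rows, batch_size)]

def publish(rows, topic="oracle-events", broker="localhost:9092", batch_size=100):
    """Publish the batches to Kafka. Requires a running broker and the
    confluent-kafka package; topic/broker here are placeholder assumptions."""
    from confluent_kafka import Producer
    producer = Producer({"bootstrap.servers": broker})
    for message in to_kafka_messages(rows, batch_size):
        producer.produce(topic, value=message.encode("utf-8"))
    producer.flush()
```

In the actual pattern the Oracle Select Snap supplies the rows, so no SQL appears here; `publish` would receive the selected rows as dictionaries.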

Consume from Kafka to S3

Source: Kafka
Target: Pipeline Execute
Snaps used: Confluent Kafka Consumer, Mapper, Pipeline Execute (calling Write file to S3 pipeline)
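The consumer side can be sketched the same way: poll the topic, reshape each record (the Mapper Snap's role), and hand the result to the S3-writer pipeline. Here a plain callback stands in for Pipeline Execute, and the field names in `map_record` are illustrative assumptions rather than the pattern's actual mapping.

```python
import json

def map_record(record):
    """Mimic the Mapper Snap: reshape a consumed record for the child pipeline.
    The field names here are assumptions for illustration."""
    return {"id": record.get("ID"), "payload": record}

def consume_and_dispatch(write_to_s3, topic="oracle-events", broker="localhost:9092"):
    """Consume batches from Kafka and pass each one to write_to_s3, standing in
    for the Pipeline Execute Snap. Requires a running broker and confluent-kafka."""
    from confluent_kafka import Consumer
    consumer = Consumer({"bootstrap.servers": broker,
                         "group.id": "s3-writer",
                         "auto.offset.reset": "earliest"})
    consumer.subscribe([topic])
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        batch = json.loads(msg.value())
        write_to_s3([map_record(record) for record in batch])
```

Calling the writer per message mirrors how Pipeline Execute spawns one child pipeline run per incoming document.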

Write file to S3

Source: Documents from the parent pipeline (Kafka messages)
Target: Amazon S3 Storage
Snaps used: Mapper, JSON Formatter, File Writer
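The third pipeline's work reduces to two steps: render the mapped documents as JSON (the JSON Formatter Snap) and write the resulting bytes to S3 (the File Writer Snap). A minimal sketch, assuming `boto3` for the S3 call; the bucket and key are placeholders, not values from the pattern.

```python
import json

def format_json(documents):
    """Mimic the JSON Formatter Snap: render documents as a JSON byte stream."""
    return json.dumps(documents, indent=2).encode("utf-8")

def write_file_to_s3(documents, bucket, key):
    """Mimic the File Writer Snap targeting S3. Requires AWS credentials and
    boto3; bucket and key are caller-supplied assumptions."""
    import boto3
    s3 = boto3.client("s3")
    s3.put_object(Bucket=bucket, Key=key, Body=format_json(documents))
```

Keeping the formatting separate from the upload means the serialized output can be verified locally before any S3 credentials are involved.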


Publish to Kafka.slp (5.4 KB)
Consume from Kafka to S3.slp (5.4 KB)
Write file to S3.slp (4.5 KB)