Generate Avro Schema and DDL (to create Hive Table) based on Oracle Table Metadata
Created by @pkona

Integrations that ingest data into Hive require creating tables in Hive and generating a schema for data formats such as Avro. This pattern generates the Avro schema by querying the table metadata of an Oracle table, and also generates the corresponding Hive create table statement. Both the schema file and the DDL file are written to HDFS and the SnapLogic Database, and are returned as output of the pipeline. A minimal sketch of the equivalent schema-generation logic appears after the downloads list below.

Configuration

Sources: Oracle table
Targets: Hive create table statement and Avro schema
Snaps used: JSON Generator, Oracle Execute, Hive Execute, Mapper, Copy, Join, JSON Formatter, File Writer, Pipeline Execute

Downloads

Pattern 3 - Step 1.1 - Ingest - Generate Hive table Avro based on Oracle Table metadata.slp (22.0 KB)
Pattern 3 - Core - Write schema to HDFS.slp (8.2 KB) (child pipeline called by the ingest pipeline)
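The sketch below is not the pipeline itself; it only illustrates the kind of transformation the pattern performs: read column metadata from the Oracle data dictionary, map it to Avro field types, and emit an Avro schema plus a Hive DDL statement. The connection details, table name, and type mapping are illustrative assumptions.

```python
# Illustrative sketch (assumed names and mappings): derive an Avro schema and a
# Hive DDL statement from Oracle column metadata.
import json
import oracledb  # assumed driver; any Oracle client able to run the query works

# Rough Oracle -> Avro type mapping; extend for the column types in your tables.
ORACLE_TO_AVRO = {
    "VARCHAR2": "string", "CHAR": "string", "CLOB": "string",
    "NUMBER": "double", "FLOAT": "double",
    "DATE": "string", "TIMESTAMP": "string",
}

def table_columns(conn, owner, table):
    """Read column names and data types from the Oracle data dictionary."""
    sql = """
        SELECT column_name, data_type
        FROM all_tab_columns
        WHERE owner = :owner AND table_name = :table_name
        ORDER BY column_id
    """
    with conn.cursor() as cur:
        cur.execute(sql, owner=owner, table_name=table)
        return cur.fetchall()

def build_avro_schema(table, columns):
    """Emit an Avro record schema with every field nullable."""
    fields = [
        {"name": name.lower(),
         "type": ["null", ORACLE_TO_AVRO.get(dtype.split("(")[0], "string")]}
        for name, dtype in columns
    ]
    return {"type": "record", "name": table.lower(), "fields": fields}

def build_hive_ddl(table, schema_hdfs_path):
    """Create an Avro-backed Hive table that reads its schema from HDFS."""
    return (
        f"CREATE EXTERNAL TABLE IF NOT EXISTS {table.lower()} "
        f"STORED AS AVRO "
        f"TBLPROPERTIES ('avro.schema.url'='{schema_hdfs_path}')"
    )

if __name__ == "__main__":
    # Hypothetical connection and table; replace with your own.
    conn = oracledb.connect(user="scott", password="...", dsn="db-host/orclpdb")
    cols = table_columns(conn, "SCOTT", "EMPLOYEES")
    print(json.dumps(build_avro_schema("EMPLOYEES", cols), indent=2))
    print(build_hive_ddl("EMPLOYEES", "hdfs:///schemas/employees.avsc"))
```

In the pattern, the Oracle Execute Snap plays the role of the metadata query and the downstream Mapper/Formatter/File Writer Snaps assemble and persist the schema and DDL; the mapping logic above is only an approximation of that flow.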
Pattern to ingest data from Google Storage into Google BigQuery using AVRO file format

Created by @pkoppishetty

This pipeline transfers data from a local machine to Google Storage and then loads it into Google BigQuery, using Google Storage as the data source. A minimal sketch of the equivalent load step appears after the downloads list below.

Configuration

In the Google BigQuery Bulk Load (Cloud Storage) Snap, make sure the upload type is "Upload existing files from Google Cloud Storage" and the file settings are file format "AVRO" and file path "gs://gcs_cloud_1/data/leads.avro".

Sources: Local machine, Google Storage
Targets: Google BigQuery
Snaps used: File Reader, File Writer, Google BigQuery Bulk Load (Cloud Storage)

Downloads

Pattern to ingest data from Google Storage into Google BigQuery using AVRO file format.slp (5.6 KB)
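For reference, this is a minimal sketch of the load step using the google-cloud-bigquery client library, roughly what the Google BigQuery Bulk Load (Cloud Storage) Snap does. The project, dataset, and table names are assumptions; the gs:// URI is the example path from the pattern.

```python
# Sketch: load an Avro file already in Google Cloud Storage into BigQuery.
from google.cloud import bigquery

client = bigquery.Client()

table_id = "my-project.my_dataset.leads"          # assumed destination table
source_uri = "gs://gcs_cloud_1/data/leads.avro"   # file path from the pattern

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.AVRO,     # matches the "AVRO" file format setting
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

load_job = client.load_table_from_uri(source_uri, table_id, job_config=job_config)
load_job.result()  # wait for the load job to finish

table = client.get_table(table_id)
print(f"Loaded {table.num_rows} rows into {table_id}")
```

Because the schema is embedded in the Avro file, no explicit schema definition is needed for the load; the same holds when the Snap performs the bulk load from Cloud Storage.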