Reference Implementation - Integration Pattern - On-Prem Database to Cloud Data Warehouse
This is a common integration pattern that customers use as they move from on-prem databases to the cloud. The following videos describe the integration pattern, and the attached reference implementation pipelines give you a head start on the implementation.
Part 1
Part 2
Pipelines Attached
The pipelines move all the tables from an Oracle database to Redshift. They are designed around our best practices, such as parameterization of every aspect of the pipeline (e.g. account names, database names, table names); everything is dynamic and controlled via pipeline parameters.
Source: Oracle Tables
Target: Redshift Tables
Snaps used:
Step 1: Oracle - Table List, Filter, Oracle - Execute, Mapper, Pipeline Execute
Step 2: Mapper, Redshift - Execute, Router, Join, Oracle - Select, Shard Offsets*, Redshift - Bulk Load, Pipeline Execute, Exit, Union
Step 3: Mapper, Oracle - Select, Redshift - Bulk Load, Exit
*This Snap was developed by the SnapLogic Professional Services team.
Downloads
Pattern 1 - Step 1.0 - Oracle to Redshift Parent_2018_06_25.slp (9.4 KB)
Pattern 1 - Step 1.1 - Oracle to Redshift No Shredding_2018_06_25.slp (20.9 KB)
Pattern 1 - Step 1.2 - Oracle to Redshift Shred_2018_06_25.slp (7.7 KB)
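To illustrate the Step 1 "parent" idea outside of SnapLogic, here is a minimal Python sketch: query Oracle's catalog for the list of tables, then hand each table name to a parameterized per-table load, analogous to the Pipeline Execute fan-out. This is not the attached pipeline; the schema owner, connection details, and load_table() are placeholders, and it assumes the python-oracledb driver.

```python
# Sketch of the parent logic: list the Oracle tables, then invoke a
# parameterized child load for each one (the Pipeline Execute fan-out).
import oracledb

OWNER = "HR"  # hypothetical schema owner; in the pipeline this is a parameter


def list_tables(conn, owner):
    sql = "SELECT table_name FROM all_tables WHERE owner = :owner"
    with conn.cursor() as cur:
        cur.execute(sql, owner=owner)
        return [row[0] for row in cur]


def load_table(table_name):
    # Placeholder for the child load (Oracle - Select -> Redshift - Bulk Load).
    print(f"would load {OWNER}.{table_name} into Redshift")


if __name__ == "__main__":
    with oracledb.connect(user="scott", password="tiger", dsn="dbhost/orclpdb") as conn:
        for table in list_tables(conn, OWNER):
            load_table(table)
```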
Generate Avro Schema and DDL (to create Hive Table) based on Oracle Table Metadata
Created by @pkona
Integrations that ingest data into Hive require creating tables in Hive and creating the schema for data formats like Avro. This pattern generates the Avro schema by querying the table metadata for an Oracle table, and it also generates the CREATE TABLE statement for Hive. Both the schema file and the DDL file are written to HDFS and the SnapLogic database, and are returned as output of the pipeline.
Configuration
Sources: Oracle table
Targets: Generated Hive CREATE TABLE statement and Avro schema
Snaps used: JSON Generator, Oracle Execute, Hive Execute, Mapper, Copy, Join, JSON Formatter, File Writer, Pipeline Execute
Downloads
Pattern 3 - Step 1.1 - Ingest - Generate Hive table Avro based on Oracle Table metadata.slp (22.0 KB)
Pattern 3 - Core - Write schema to HDFS.slp (8.2 KB) (child pipeline called by the ingest pipeline)
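As a rough illustration of the underlying idea (not the attached pipeline itself), the Python sketch below takes column metadata of the kind returned by ALL_TAB_COLUMNS, maps a few common Oracle types to Avro and Hive types, and emits an Avro schema plus a Hive DDL statement. The sample columns and the type map are assumptions and deliberately incomplete.

```python
# Minimal sketch: build an Avro schema and a Hive CREATE TABLE statement
# from Oracle column metadata. Extend the type maps for real use.
import json

# Rows as returned by a query like:
#   SELECT column_name, data_type FROM all_tab_columns
#   WHERE owner = :owner AND table_name = :table_name ORDER BY column_id
SAMPLE_COLUMNS = [("EMP_ID", "NUMBER"), ("EMP_NAME", "VARCHAR2"), ("HIRE_DATE", "DATE")]

ORACLE_TO_AVRO = {"NUMBER": "double", "VARCHAR2": "string", "DATE": "string", "CHAR": "string"}
ORACLE_TO_HIVE = {"NUMBER": "DOUBLE", "VARCHAR2": "STRING", "DATE": "STRING", "CHAR": "STRING"}


def avro_schema(table, columns):
    fields = [{"name": name.lower(),
               "type": ["null", ORACLE_TO_AVRO.get(dtype, "string")]}
              for name, dtype in columns]
    return {"type": "record", "name": table.lower(), "fields": fields}


def hive_ddl(table, columns):
    cols = ",\n  ".join(f"{name.lower()} {ORACLE_TO_HIVE.get(dtype, 'STRING')}"
                        for name, dtype in columns)
    return f"CREATE EXTERNAL TABLE IF NOT EXISTS {table.lower()} (\n  {cols}\n)\nSTORED AS AVRO"


if __name__ == "__main__":
    print(json.dumps(avro_schema("EMPLOYEES", SAMPLE_COLUMNS), indent=2))
    print(hive_ddl("EMPLOYEES", SAMPLE_COLUMNS))
```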
Employee Journey: Scheduled employee data batch update from Oracle into Workday
Created by @rdill
This pipeline pattern is ideal for organizations that require a complete view of their employees. It runs a scheduled, periodic update of employee data and information from Oracle into Workday. The pipeline compares the lists of employees in Oracle and Workday; any employees found in Oracle but not listed in Workday have their information added to Workday.
Configuration
The table in Oracle needs to contain the data required to execute the Hire object in Workday.
Sources: Oracle database table (employees), Workday Hire Employee
Targets: Workday Staffing service, Hire_Employee object
Snaps used: Mapper, Oracle Execute, Workday Read, Workday Write, Diff
Downloads
ORCL Workday Hire Employee.slp (15.9 KB)
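The heart of this pattern is the comparison step. Below is a minimal Python sketch of that logic only (not the Oracle or Workday calls); "employee_id" is an assumed key field, not necessarily the field name used in the attached pipeline.

```python
# Sketch of the comparison the Diff step performs: find employees present in
# Oracle but missing from Workday, then hand them to the hire step.


def employees_to_hire(oracle_rows, workday_workers):
    workday_ids = {w["employee_id"] for w in workday_workers}
    return [row for row in oracle_rows if row["employee_id"] not in workday_ids]


oracle_rows = [{"employee_id": "1001", "name": "A. Smith"},
               {"employee_id": "1002", "name": "B. Jones"}]
workday_workers = [{"employee_id": "1001", "name": "A. Smith"}]

for emp in employees_to_hire(oracle_rows, workday_workers):
    # In the pipeline this maps to the Workday Write (Hire_Employee) request.
    print("hire:", emp)
```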
Oracle to XML File
Submitted by @stodoroska from Interworks
This pipeline reads flat data stored in an Oracle database table and converts it into XML.
Configuration
The XML Generator Snap lets you edit the XML structure, so different target XML files can be created from the same source data.
Sources: Oracle table
Targets: XML file on the file share system
Snaps used: Oracle - Select, Group By N, XML Generator, Mapper, Document to Binary, File Writer
Downloads
Oracle2XML.slp (8.3 KB)
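For a rough sense of the transformation (this is not the XML Generator Snap's template), here is a Python sketch that turns flat rows into an XML document using the standard library; the element names and sample rows are assumptions.

```python
# Sketch: convert flat rows from a table query into a simple XML document.
import xml.etree.ElementTree as ET

rows = [{"EMP_ID": "1001", "EMP_NAME": "A. Smith"},
        {"EMP_ID": "1002", "EMP_NAME": "B. Jones"}]

root = ET.Element("employees")
for row in rows:
    emp = ET.SubElement(root, "employee")
    for column, value in row.items():
        ET.SubElement(emp, column.lower()).text = str(value)

# Write the file that would land on the file share.
ET.ElementTree(root).write("employees.xml", encoding="utf-8", xml_declaration=True)
```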
Oracle to Snowflake Schema Replication
Created by @vish
This pipeline pattern replicates an Oracle table schema and loads the data into Snowflake.
Configuration
Sources: Oracle table
Targets: Snowflake table
Snaps used: Oracle - Select, Snowflake - Bulk Load
Downloads
Oracle to SnowFlake.slp (4.2 KB)
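For context on what the Snowflake bulk-load stage amounts to under the hood (the Snap manages this for you), here is a hedged Python sketch that stages a local CSV export and runs COPY INTO an existing table. It assumes the snowflake-connector-python package, an already-exported CSV, and placeholder connection details.

```python
# Sketch: bulk-load a local CSV into an existing Snowflake table by staging it
# with PUT and then running COPY INTO. All connection values are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="my_password",
    warehouse="LOAD_WH", database="ANALYTICS", schema="PUBLIC")
try:
    cur = conn.cursor()
    # Stage the exported Oracle data into the table's internal stage.
    cur.execute("PUT file:///tmp/employees.csv @%EMPLOYEES OVERWRITE = TRUE")
    # Load the staged file; assumes the CSV column order matches the table.
    cur.execute("COPY INTO EMPLOYEES FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)")
    cur.close()
finally:
    conn.close()
```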
Salesforce Contacts - Database Replication (SQL Server-MongoDB-Oracle)
Created by @asharifian
This pattern provides an example pipeline that replicates data from Salesforce to several different database types: SQL Server, MongoDB, and Oracle. Data replication is useful for disaster recovery and analytics, and it improves overall system resilience and reliability.
Configuration
Two approaches are covered in this pattern:
Data replication after a successful insert into a dependent database, such as data first being inserted into MongoDB and, upon success, inserted into Oracle.
Data replication in parallel, such as data being inserted into MongoDB and SQL Server at the same time.
Sources: Salesforce Contacts
Targets: Database tables for customer contact information in SQL Server, MongoDB, and Oracle
Snaps used: Salesforce Read, Mapper, Copy, MongoDB - Insert, SQL Server - Insert, Oracle - Insert
Downloads
Salesforce Contacts-Database Replication (SQL Server-MDB-Oracle).slp (14.5 KB)
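The two approaches above differ only in ordering. The Python sketch below contrasts them with the actual database writes stubbed out; the insert_* functions stand in for the MongoDB - Insert, Oracle - Insert, and SQL Server - Insert Snaps and are placeholders, not real driver calls.

```python
# Sketch of the two replication approaches: sequential-on-success vs parallel.
from concurrent.futures import ThreadPoolExecutor


def insert_mongodb(contact):
    print("mongodb   <-", contact["Email"])


def insert_oracle(contact):
    print("oracle    <-", contact["Email"])


def insert_sqlserver(contact):
    print("sqlserver <-", contact["Email"])


def replicate_sequential(contact):
    # Approach 1: the Oracle insert runs only if the MongoDB insert succeeded.
    insert_mongodb(contact)
    insert_oracle(contact)


def replicate_parallel(contact):
    # Approach 2: MongoDB and SQL Server are written at the same time (Copy fan-out).
    with ThreadPoolExecutor(max_workers=2) as pool:
        pool.submit(insert_mongodb, contact)
        pool.submit(insert_sqlserver, contact)


contact = {"FirstName": "Ada", "LastName": "Lovelace", "Email": "ada@example.com"}
replicate_sequential(contact)
replicate_parallel(contact)
```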
Workday to Oracle and BambooHR of Employee Information
Created by @asharifian
This pattern provides an HRIS integration from Workday to Oracle and BambooHR. Oracle stores employee information in a database table for analytics purposes. BambooHR is an HRIS system that stores employee information for human resources and onboarding. The source is the Human_Resources service, Get_Workers object in Workday.
Configuration
You can get a free trial of BambooHR and test out its APIs; the API is relatively easy to configure.
Sources: Workday Get_Workers
Targets: Database table for Oracle, Employees for BambooHR
Snaps used: Workday Read, Mapper, Copy, Oracle - Insert, JSON Splitter, REST Post
Downloads
Workday to Oracle-BambooHR.slp (13.2 KB)
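For orientation on the REST Post side, here is a hedged Python sketch that creates one employee in BambooHR with the requests library. The endpoint path, auth scheme, and field names follow BambooHR's public API conventions but should be verified against current documentation; COMPANY_DOMAIN and API_KEY are placeholders.

```python
# Sketch of the REST Post step: create one employee in BambooHR.
# Endpoint, auth, and field names are assumptions; check BambooHR's API docs.
import requests

COMPANY_DOMAIN = "mycompany"          # placeholder subdomain
API_KEY = "your-bamboohr-api-key"     # placeholder credential


def create_employee(first_name, last_name):
    url = f"https://api.bamboohr.com/api/gateway.php/{COMPANY_DOMAIN}/v1/employees/"
    payload = {"firstName": first_name, "lastName": last_name}
    # BambooHR uses HTTP Basic auth with the API key as the username.
    resp = requests.post(url, json=payload, auth=(API_KEY, "x"), timeout=30)
    resp.raise_for_status()
    # The new employee resource is typically referenced in the Location header.
    return resp.headers.get("Location")


if __name__ == "__main__":
    print(create_employee("Ada", "Lovelace"))
```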
Oracle to Redshift - Dynamic Metadata
Created by Chris Ward, SnapLogic
This pipeline replicates table data from an Oracle database and upserts it into Redshift. First, the parent pipeline reads the source and target schemas & tables, plus a where clause used to filter the source data, from an expression file, "oracle_to_redshift__c.expr" (modify this file to match the tables and schemas you want to target). It then obtains the primary key constraints for each target table in Redshift and constructs an array of documents holding up to 10 key columns, which are used to determine the unicity of the data when it is upserted into Redshift. Each document is passed to the child pipeline, which reads the data from the source Oracle table and upserts it into Redshift. A Router Snap between the Oracle read and the Redshift upsert provides concurrency for large data volumes. The pattern used in this pipeline could be reworked to operate with different source and target databases, e.g. SQL Server to Google BigQuery.
Configuration
To make this pipeline reusable, store the source and target tables & schemas in the expression file.
Sources: Oracle table
Targets: Redshift table
Snaps used: Mapper, JSON Splitter, Redshift Execute, Group By Fields, Pipeline Execute, Oracle Select, Router, Redshift Upsert
Downloads
Oracle to Redshift - Dynamic Metadata.slp (15.8 KB)
Oracle_to_Redshift_02.slp (11.3 KB)
oracle_to_redshift_c.expr (3.2 KB)
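To illustrate the "dynamic metadata" step outside the pipeline, here is a Python sketch that looks up the primary-key columns of a Redshift table so they could drive an upsert. It assumes psycopg2, placeholder connection details, and that the target table actually declares a PRIMARY KEY constraint.

```python
# Sketch: fetch the primary-key column names for a Redshift table from the
# information_schema views, ordered by their position in the key.
import psycopg2

KEY_COLUMN_SQL = """
SELECT kcu.column_name
FROM information_schema.table_constraints tc
JOIN information_schema.key_column_usage kcu
  ON tc.constraint_name = kcu.constraint_name
 AND tc.table_schema = kcu.table_schema
WHERE tc.constraint_type = 'PRIMARY KEY'
  AND tc.table_schema = %s
  AND tc.table_name = %s
ORDER BY kcu.ordinal_position
"""


def primary_key_columns(conn, schema, table):
    with conn.cursor() as cur:
        cur.execute(KEY_COLUMN_SQL, (schema, table))
        return [row[0] for row in cur.fetchall()]


if __name__ == "__main__":
    conn = psycopg2.connect(host="redshift-host", port=5439,
                            dbname="dev", user="awsuser", password="secret")
    print(primary_key_columns(conn, "public", "employees"))
    conn.close()
```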
Integration pattern from Oracle to Salesforce
Created by @pkoppishetty
This pipeline pattern updates data in Salesforce whenever a new record is pushed into the Oracle RDBMS. In this example, for each new record added to the Oracle table "employee_details", a new object record is created in Salesforce.
Configuration
Sources: Oracle employee_details table
Targets: Salesforce account
Snaps used: Oracle Select, Salesforce Read, Join, Mapper, Salesforce Create
Downloads
Integration pattern from Oracle to Salesforce.slp (14.9 KB)
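As a rough sketch of the join-then-create logic (not the attached pipeline), the Python below compares Oracle rows against existing Salesforce Accounts and creates the ones that are missing. It assumes the simple-salesforce package; the matching field (Name), sample rows, and credentials are placeholders.

```python
# Sketch: create Salesforce Account records for Oracle rows not yet present.
from simple_salesforce import Salesforce


def new_records(oracle_rows, existing_names):
    return [r for r in oracle_rows if r["EMP_NAME"] not in existing_names]


if __name__ == "__main__":
    sf = Salesforce(username="user@example.com", password="secret",
                    security_token="token")
    existing = {rec["Name"] for rec in
                sf.query("SELECT Name FROM Account")["records"]}

    # In the pipeline, these rows come from the Oracle Select Snap.
    oracle_rows = [{"EMP_ID": 1001, "EMP_NAME": "Ada Lovelace"}]

    for row in new_records(oracle_rows, existing):
        sf.Account.create({"Name": row["EMP_NAME"]})
```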