Reference Implementation - Integration Pattern - On-Prem Database to Cloud Data Warehouse
This is a common integration pattern that customers use as they move from on-premises to the cloud. The following videos describe this integration pattern: Part 1, Part 2. The attached reference implementation pipelines help you get a head start on the implementation.

Pipelines Attached

The pipelines move all the tables from an Oracle database to Redshift. They are designed around our best practices, such as parameterization of all aspects of the pipeline (e.g., account names, database names, table names). All are dynamic and controlled via pipeline parameters.

Source: Oracle Tables
Target: Redshift Tables

Snaps used:
Step 1: Oracle - Table List, Filter, Oracle - Execute, Mapper, Pipeline Execute
Step 2: Mapper, Redshift - Execute, Router, Join, Oracle - Select, Shard Offsets*, Redshift - Bulk Load, Pipeline Execute, Exit, Union
Step 3: Mapper, Oracle - Select, Redshift - Bulk Load, Exit

*This is a Snap developed by the SnapLogic Professional Services team.

Downloads
Pattern 1 - Step 1.0 - Oracle to Redshift Parent_2018_06_25.slp (9.4 KB)
Pattern 1 - Step 1.1 - Oracle to Redshift No Shredding_2018_06_25.slp (20.9 KB)
Pattern 1 - Step 1.2 - Oracle to Redshift Shred_2018_06_25.slp (7.7 KB)
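The Shard Offsets Snap referenced above is proprietary to the SnapLogic Professional Services team, so as a hedged illustration only, here is a minimal Python sketch of the general idea: split a large table read into contiguous (offset, limit) ranges that parallel child pipelines could each fetch. The table name, row count, shard count, and ORDER BY ROWID paging strategy are all assumptions for the example, not the Snap's actual implementation.

```python
import math

def shard_offsets(row_count, shard_count):
    """Yield (offset, limit) pairs covering row_count rows in order."""
    shard_size = math.ceil(row_count / shard_count)
    for offset in range(0, row_count, shard_size):
        yield offset, min(shard_size, row_count - offset)

table = "EMPLOYEES"  # would come from a pipeline parameter in the pattern
for offset, limit in shard_offsets(row_count=1_000_000, shard_count=8):
    # Oracle 12c+ row-limiting syntax; one query per parallel child pipeline.
    print(f"SELECT * FROM {table} ORDER BY ROWID "
          f"OFFSET {offset} ROWS FETCH NEXT {limit} ROWS ONLY")
```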
Using Oracle Execute to run complicated SQL Select not returning any data

I’m new to SnapLogic and was told by one of our senior SL developers that instead of using the Oracle Select Snap, I could use the Oracle Execute Snap to run a complex SQL SELECT statement. When I try to do that, though, I get no data, and I can’t figure out how to get a Snap, such as a CSV Formatter, to “connect” to it. The pieces are next to each other in the pipeline, but there is nothing in the spot between them showing they have fit together. The only way I have been able to get it to work is by creating a database view representing the complex SQL statement, then using an Oracle Select Snap to pull the data from the view.
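For readers hitting the same issue, here is a minimal sketch of the view-based workaround described above, done outside SnapLogic with the python-oracledb driver. The connection details, view name, and query are placeholders, not part of the original post.

```python
import oracledb

# Placeholder credentials and DSN.
conn = oracledb.connect(user="scott", password="tiger",
                        dsn="dbhost.example.com/orclpdb1")
with conn.cursor() as cur:
    # Wrap the complex SELECT in a view once...
    cur.execute("""
        CREATE OR REPLACE VIEW complex_report_v AS
        SELECT d.dept_name, COUNT(*) AS headcount
        FROM employees e
        JOIN departments d ON e.dept_id = d.dept_id
        GROUP BY d.dept_name
    """)
    # ...then a plain SELECT (what the Oracle Select Snap issues) just works.
    for row in cur.execute("SELECT * FROM complex_report_v"):
        print(row)
```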
Generate Avro Schema and DDL (to create Hive Table) based on Oracle Table Metadata

Created by @pkona

Integrations that ingest data into Hive require creating tables on Hive and creating the schema for data formats like Avro. This pattern generates the Avro schema by querying the table metadata for an Oracle table. It also generates the CREATE TABLE statement for Hive. Both the schema file and the DDL file are written to HDFS and the SnapLogic database, and returned as output of the pipeline.

Configuration

Sources: Oracle table
Targets: Generated Hive CREATE TABLE statement and Avro schema
Snaps used: JSON Generator, Oracle Execute, Hive Execute, Mapper, Copy, Join, JSON Formatter, File Writer, Pipeline Execute

Downloads
Pattern 3 - Step 1.1 - Ingest - Generate Hive table Avro based on Oracle Table metadata.slp (22.0 KB)
Pattern 3 - Core - Write schema to HDFS.slp (8.2 KB) (child pipeline called by the ingest pipeline)
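As a rough, non-authoritative sketch of what the pattern's metadata step produces, the Python below maps a handful of Oracle column types to Avro and Hive types and emits both artifacts. The type maps are deliberately partial, and the hard-coded column list stands in for a real ALL_TAB_COLUMNS query.

```python
import json

ORACLE_TO_AVRO = {"VARCHAR2": "string", "NUMBER": "double", "DATE": "string"}
ORACLE_TO_HIVE = {"VARCHAR2": "STRING", "NUMBER": "DOUBLE", "DATE": "STRING"}

def build_artifacts(table, columns):
    """columns: (column_name, oracle_data_type) pairs, e.g. from
    SELECT column_name, data_type FROM all_tab_columns WHERE table_name = :t
    """
    avro = {
        "type": "record",
        "name": table,
        "fields": [
            {"name": col.lower(),
             "type": ["null", ORACLE_TO_AVRO.get(typ, "string")]}
            for col, typ in columns
        ],
    }
    hive_cols = ",\n  ".join(
        f"{col.lower()} {ORACLE_TO_HIVE.get(typ, 'STRING')}"
        for col, typ in columns
    )
    ddl = (f"CREATE TABLE IF NOT EXISTS {table.lower()} "
           f"(\n  {hive_cols}\n) STORED AS AVRO")
    return json.dumps(avro, indent=2), ddl

schema, ddl = build_artifacts(
    "EMPLOYEES", [("EMP_ID", "NUMBER"), ("EMP_NAME", "VARCHAR2")])
print(schema)
print(ddl)
```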
SOAP Execute Snap with Oracle Cloud HCM Data Loader (HDL)

We are new customers of SnapLogic and Oracle Cloud HCM and are trying to use SnapLogic for integrations with Oracle Cloud HCM (OC). One of our first projects is to update a flexfield on OC with our custom ID number for each person listed in a CSV file that we will receive from another process. We are using the OC HCM Data Loader (HDL). Outside of SnapLogic we transform the info from the CSV to a “.dat” file in a format that HDL uses, and then zip the file as required by HDL. I have loaded and processed the file manually using HDL, so I know that the file and HDL are working properly. Now I want to use SnapLogic to automate a process that runs nightly to update OC.

What I want SnapLogic to do is:
1. Send the zip file to OC.
2. Initiate the process on OC to process the contents of the zip file.
3. Store/log the results of the transactions.

I plan on using the SOAP Execute Snap for this unless I find a better method. The first thing I need to know is how to use SnapLogic to send the zip file to OC. It looks like I need the file to be loaded into the envelope in Base64 format as “Contents”. How do I use SnapLogic to get the file from the server into SnapLogic, transform it to Base64 format, and get it into the envelope?

Thanks in advance,
Scott
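One answer to the encoding part of the question, sketched in plain Python rather than a Snap: zip the HDL .dat file, Base64-encode it, and splice the result into a SOAP envelope as Contents. The envelope shape, namespace, and operation name below are placeholders; the actual element names must come from the HDL service's WSDL.

```python
import base64
import zipfile

# Assumes Worker.dat (the HDL business-object file) already exists locally.
with zipfile.ZipFile("Worker.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("Worker.dat")

with open("Worker.zip", "rb") as f:
    contents_b64 = base64.b64encode(f.read()).decode("ascii")

# Placeholder envelope: element and namespace names are invented here.
envelope = f"""<soapenv:Envelope
    xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:typ="http://example.com/placeholder-hdl-namespace">
  <soapenv:Body>
    <typ:loadData>
      <typ:Contents>{contents_b64}</typ:Contents>
    </typ:loadData>
  </soapenv:Body>
</soapenv:Envelope>"""
print(envelope[:120], "...")
```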
How do you use the Result Set output on the Oracle Stored Procedure snap?

I have an Oracle stored function that returns a ref cursor. When I call it from an Oracle Stored Procedure Snap, I’d like the data returned to the Result Set output (I think), but instead it’s returned to the regular output, with each column in the data as its own array. I haven’t been able to locate any documentation on how to use the Result Set output. Has anyone used the Result Set output successfully?
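For comparison, here is what consuming a ref cursor looks like outside SnapLogic with python-oracledb, which shows the row-per-document shape the poster is after; the function name and connection details are hypothetical.

```python
import oracledb

# Placeholder credentials/DSN; get_employees_rc is a hypothetical stored
# function declared to RETURN SYS_REFCURSOR.
conn = oracledb.connect(user="scott", password="tiger",
                        dsn="dbhost.example.com/orclpdb1")
with conn.cursor() as cur:
    ref_cursor = cur.callfunc("get_employees_rc", oracledb.DB_TYPE_CURSOR, [])
    columns = [d.name for d in ref_cursor.description]
    for row in ref_cursor:
        # One dict per row is the document-per-row shape described above.
        print(dict(zip(columns, row)))
```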
Discard empty rows while writing an Excel file

I am trying to write an Excel file after comparing a database with Salesforce data. I want to ignore the null rows after the comparison is done in a Mapper post-Join. The data I am getting is correct; the only issue is with the way the Excel file looks. Please advise how these empty rows can be ignored.
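Assuming the goal is simply to drop rows whose fields are all null before they reach the Excel output, a pandas sketch of the filtering step looks like this; inside a pipeline the equivalent would be a Filter Snap or Mapper expression ahead of the Excel Formatter.

```python
import pandas as pd  # to_excel also needs openpyxl installed

df = pd.DataFrame({"id": [1, None, 3], "name": ["a", None, "c"]})
df = df.dropna(how="all")  # drop rows where every field is null
df.to_excel("comparison.xlsx", index=False)
```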
Employee Journey: Scheduled employee data batch update from Oracle into Workday

Created by @rdill

This pipeline pattern is ideal for organizations that require a complete view of their employees. The pattern runs a scheduled/periodic update of employee data and information from Oracle into Workday. The pipeline compares the lists of employees in Oracle and Workday; if there are employees found in Oracle but not listed in Workday, their information is added to Workday.

Configuration

The table in Oracle needs to contain the data required to execute the Hire object in Workday.

Sources: Oracle database table (employees), Workday Hire Employee
Targets: Workday Service, Staffing, Object, Hire_Employee
Snaps used: Mapper, Oracle Execute, Workday Read, Workday Write, Diff

Downloads
ORCL Workday Hire Employee.slp (15.9 KB)
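A schematic, non-SnapLogic rendering of what the Diff step contributes: find employees present in the Oracle extract but missing from Workday, and hire only those. Record shapes and field names are assumptions for illustration.

```python
# Employees keyed by ID; shapes and IDs are invented for the example.
oracle_employees = {"E100": {"name": "Ada"}, "E200": {"name": "Lin"}}
workday_ids = {"E100"}

to_hire = {emp_id: rec for emp_id, rec in oracle_employees.items()
           if emp_id not in workday_ids}
for emp_id, rec in to_hire.items():
    print(f"Would call Workday Hire_Employee for {emp_id}: {rec}")
```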
Oracle to XML File

Submitted by @stodoroska from Interworks

This pipeline reads the flat data stored in an Oracle database table, then converts this data into XML.

Configuration

The XML Generator Snap allows you to edit the XML format structure, so that different target XML files can be created from the same source data.

Sources: Oracle table
Targets: XML file on the file share system
Snaps used: Oracle - Select, Group By N, XML Generator, Mapper, Document to Binary, File Writer

Downloads
Oracle2XML.slp (8.3 KB)
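As an illustration of the Group By N plus XML Generator combination, the sketch below renders a batch of flat rows as one XML document using Python's standard library; the element names are invented for the example.

```python
import xml.etree.ElementTree as ET

# A "group" of flat rows, as the Group By N Snap would batch them.
rows = [{"id": "1", "name": "Ada"}, {"id": "2", "name": "Lin"}]

root = ET.Element("employees")
for row in rows:
    emp = ET.SubElement(root, "employee")  # one element per source row
    for field, value in row.items():
        ET.SubElement(emp, field).text = value

ET.ElementTree(root).write("employees.xml", encoding="utf-8",
                           xml_declaration=True)
```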
Oracle to Snowflake Schema Replication

Created by @vish

This pipeline pattern replicates Oracle table schema and loads data into Snowflake.

Configuration

Sources: Oracle table
Targets: Snowflake table
Snaps used: Oracle - Select, Snowflake - Bulk Load

Downloads
Oracle to SnowFlake.slp (4.2 KB)
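Schema replication of this kind hinges on a type mapping between the two catalogs. The sketch below shows one possible, deliberately partial, Oracle-to-Snowflake translation used to build target DDL ahead of a bulk load; it is an assumption about the approach, not the pipeline's actual logic.

```python
# Deliberately partial type map; a real pipeline needs the full catalog.
ORACLE_TO_SNOWFLAKE = {"VARCHAR2": "VARCHAR", "NUMBER": "NUMBER",
                       "DATE": "TIMESTAMP_NTZ", "CLOB": "VARCHAR"}

def snowflake_ddl(table, columns):
    """columns: (name, oracle_type) pairs read from the Oracle catalog."""
    cols = ", ".join(f"{name} {ORACLE_TO_SNOWFLAKE.get(typ, 'VARCHAR')}"
                     for name, typ in columns)
    return f"CREATE TABLE IF NOT EXISTS {table} ({cols})"

print(snowflake_ddl("EMPLOYEES", [("EMP_ID", "NUMBER"), ("HIRED", "DATE")]))
```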
Salesforce Contacts - Database Replication (SQL Server-MongoDB-Oracle)

Created by @asharifian

This pattern provides an example pipeline that replicates data from Salesforce to multiple database types: SQL Server, MongoDB, and Oracle. Data replication is useful for disaster recovery and analytics, and it improves overall system resilience and reliability.

Configuration

Two approaches are covered in this pattern:
1. Data replication after a successful insert into a dependent database, such as data first being inserted into MongoDB and, upon success, inserted into Oracle.
2. Data replication in parallel, such as data being inserted into MongoDB and SQL Server at the same time.

Sources: Salesforce Contacts
Targets: Database tables for customer contact information in SQL Server, MongoDB, Oracle
Snaps used: Salesforce Read, Mapper, Copy, MongoDB - Insert, SQL Server - Insert, Oracle - Insert

Downloads
Salesforce Contacts-Database Replication (SQL Server-MDB-Oracle).slp (14.5 KB)
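A control-flow sketch of the two approaches, with each database insert stubbed out by a print; in the real pattern each stub corresponds to a database Snap, and the parallel branch plays the role of the Copy Snap fan-out. The contact record is invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

# Stubs standing in for the MongoDB / SQL Server / Oracle insert Snaps.
def insert_mongodb(contact): print("MongoDB insert:", contact["Email"])
def insert_sql_server(contact): print("SQL Server insert:", contact["Email"])
def insert_oracle(contact): print("Oracle insert:", contact["Email"])

contact = {"FirstName": "Ada", "LastName": "Lovelace",
           "Email": "ada@example.com"}

# Approach 1: dependent replication - Oracle runs only if MongoDB succeeded
# (an exception raised in insert_mongodb would skip the Oracle insert).
insert_mongodb(contact)
insert_oracle(contact)

# Approach 2: parallel replication, like the Copy Snap fanning out.
with ThreadPoolExecutor() as pool:
    pool.submit(insert_mongodb, contact)
    pool.submit(insert_sql_server, contact)
```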