Flatten JSON files into CSV files
Created by @schelluri
This pipeline pattern flattens a JSON file containing multiple objects and turns it into a CSV file.
Configuration
Sources: JSON Generator
Targets: CSV file
Snaps used: JSON Generator, JSON Formatter, JSON Parser, Script, CSV Formatter, File Writer
Downloads
MS_Flatten_Script.slp (31.4 KB)
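The flattening logic in this pattern lives in the Script Snap; the exact script ships with the downloaded pipeline, but the general idea can be illustrated with a minimal standalone Python sketch (the sample records and output file name below are hypothetical, not taken from the pattern):

```python
import csv

def flatten(obj, prefix="", out=None):
    """Recursively flatten nested dicts and lists into dot-notation keys."""
    if out is None:
        out = {}
    if isinstance(obj, dict):
        for key, value in obj.items():
            flatten(value, f"{prefix}{key}.", out)
    elif isinstance(obj, list):
        for index, value in enumerate(obj):
            flatten(value, f"{prefix}{index}.", out)
    else:
        out[prefix.rstrip(".")] = obj
    return out

# Hypothetical input: one JSON file holding multiple nested objects.
records = [
    {"id": 1, "name": {"first": "Ada", "last": "Lovelace"}, "tags": ["vip", "beta"]},
    {"id": 2, "name": {"first": "Alan", "last": "Turing"}, "tags": ["beta"]},
]

rows = [flatten(record) for record in records]
fieldnames = sorted({key for row in rows for key in row})

# Write one CSV row per flattened object; missing keys are left blank.
with open("flattened.csv", "w", newline="") as handle:
    writer = csv.DictWriter(handle, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
```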
CSV to Workday Tenant
Submitted by @stodoroska from Interworks
This pipeline reads a CSV file, parses the content, and then uses the Workday Write Snap to call the Put_Applicant web service operation to write the data into a Workday tenant.
Configuration
If there is no match in the SQL Server lookup table, MKD is used as the default country code.
Sources: CSV file on the file share system
Targets: Workday tenant
Snaps used: File Reader, CSV Parser, SQL Server - Lookup, Mapper, Union, Workday Write
Downloads
CSV2Workday.slp (17.3 KB)
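The lookup-with-default behaviour described under Configuration can be pictured with a tiny Python sketch; the lookup rows and applicant record below are invented for illustration, while in the pipeline this is handled by the SQL Server - Lookup Snap, with the Union presumably recombining the matched and defaulted branches:

```python
# Stand-in for the SQL Server - Lookup step: map country names to codes,
# falling back to "MKD" when no lookup match exists.
country_codes = {"Germany": "DEU", "France": "FRA"}  # invented lookup rows

def resolve_country_code(country_name, default="MKD"):
    return country_codes.get(country_name, default)

applicant = {"name": "Jane Doe", "country": "Atlantis"}  # hypothetical CSV record
applicant["countryCode"] = resolve_country_code(applicant["country"])
print(applicant)  # countryCode falls back to 'MKD' because 'Atlantis' is unmatched
```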
Monitor the Health and Performance of Your Integrations
Created by @rsramkoski
The following pipeline comes from the "Using SnapLogic's Pipeline Monitoring API" video in the blog post "3 tips for working remotely with SnapLogic" on the SnapLogic blog.
Downloads
SL_PipelineMonitorAPI.slp (26.4 KB)
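For reference, polling the Pipeline Monitoring (runtime) API from outside SnapLogic might look roughly like the sketch below; the base URL, query parameters, and response field names are assumptions to verify against your pod and the current API documentation:

```python
import requests

# Assumed base URL and endpoint for the pipeline runtime (monitoring) API;
# substitute your own pod, Org name, and credentials.
BASE_URL = "https://elastic.snaplogic.com/api/1/rest/public/runtime"
ORG = "MyOrg"                                   # hypothetical Org name
AUTH = ("svc.monitor@example.com", "password")  # use a service account in practice

response = requests.get(
    f"{BASE_URL}/{ORG}",
    params={"state": "Failed", "last_hours": 24},  # assumed query parameters
    auth=AUTH,
    timeout=30,
)
response.raise_for_status()

# Field names below ("response_map", "entries", "pipe_id", "state") are assumptions
# about the payload shape; adjust after inspecting a real response.
for entry in response.json().get("response_map", {}).get("entries", []):
    print(entry.get("pipe_id"), entry.get("state"))
```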
Dynamic Data Pivot
Created by @dwhite
An example of how to perform a dynamic pivot. The traditional Pivot Snap is static and has to be configured in the pipeline for each set of data. This pipeline shows how to pivot data whose field names are supplied at runtime, so the pivot configuration can be built on the fly during the run.
Configuration
Configure the dynamic pivot via parameter values. Enter the number of fields to split in the "nSplitFields" parameter. Enter the field names being split in the "splitFields" parameter as a comma-separated list. Enter the new fields to generate in the "genFields" parameter as a comma-separated list. For actual use, remove the sample data and the traditional Pivot Snap; they are included only for demonstration and comparison.
Sources: Any flat data source that needs pivoting
Targets: Any
Snaps used: CSV Generator, Copy, Pivot, Mapper, Sequence, Join, JSON Splitter, Group By N
Downloads
Dynamic Data Pivot.slp (15.7 KB)
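Outside of SnapLogic, the same dynamic-pivot idea can be sketched in a few lines of Python; the sample rows and parameter values below are hypothetical, and the number of split fields (the "nSplitFields" parameter) is simply implied by the length of the list here:

```python
def dynamic_pivot(rows, split_fields, gen_fields):
    """Unpivot the columns named in split_fields into key/value pairs whose
    output field names are given by gen_fields; all other columns are kept."""
    key_field, value_field = gen_fields
    for row in rows:
        fixed = {k: v for k, v in row.items() if k not in split_fields}
        for field in split_fields:
            yield {**fixed, key_field: field, value_field: row.get(field)}

rows = [{"id": 1, "q1": 10, "q2": 20, "q3": 30}]   # hypothetical flat data
split_fields = "q1,q2,q3".split(",")               # mirrors the "splitFields" parameter
gen_fields = "quarter,amount".split(",")           # mirrors the "genFields" parameter

for output_row in dynamic_pivot(rows, split_fields, gen_fields):
    print(output_row)
# {'id': 1, 'quarter': 'q1', 'amount': 10} ... and so on for q2 and q3
```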
Data Orchestration Process with Asynchronous Response
Created by @dwhite
A sample pattern for triggering a large data orchestration process with several steps (synchronous and parallel) that returns an asynchronous response at the start, so the calling process does not wait. The response contains a SnapLogic runtime UUID (ruuid) that can be used later to check on status or to log.
Configuration
Insert your child data warehousing process pipelines into the Pipeline Execute Snap(s) as needed. Assign the pipeline to a triggered task and call it from an external application or process.
Sources: Any child pipeline process performing data loads / gets
Targets: Any
Snaps used: Pipeline Execute, Mapper, Copy
Downloads
Asynchronous Orchestration Process.slp (16.8 KB)
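From the calling side, invoking the triggered task and capturing the returned ruuid might look like this minimal sketch; the task URL, bearer token, payload, and the name of the field carrying the ruuid are placeholders, not values from the pattern:

```python
import requests

# Placeholder task URL and bearer token; use the values from your own Org's
# triggered task. The payload and the "ruuid" field name are also assumptions.
TASK_URL = "https://elastic.snaplogic.com/api/1/rest/slsched/feed/MyOrg/projects/shared/OrchestratorTask"
TOKEN = "my-task-bearer-token"

kickoff = requests.post(
    TASK_URL,
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"load_date": "2024-01-01"},   # hypothetical input for the child loads
    timeout=30,
)
kickoff.raise_for_status()

# The pattern responds immediately with the run's ruuid so the caller is not blocked.
ruuid = kickoff.json().get("ruuid")
print("orchestration started, ruuid:", ruuid)
```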
Active Directory Get User Data Endpoint
Created by @dwhite
Enter a base DN and an AD username to find a specific user via LDAP search. Run it as a triggered task endpoint or as a child pipeline.
Configuration
Enter the base DN to search for users in via the "baseDn" parameter. Enter the AD username in the "filterUser" parameter. Assign the pipeline to a triggered task and call it via REST, or assign it to a Pipeline Execute Snap and run it as a child of an orchestrator pipeline.
Sources: Active Directory user object class
Targets: REST call return
Snaps used: LDAP Search, Filter, Mapper, Join, Union, Group By N
Downloads
AD Get User Data Endpoint.slp (14.9 KB)
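The equivalent lookup outside of SnapLogic, using the ldap3 library, is sketched below; the host, credentials, and parameter values are placeholders that mirror the "baseDn" and "filterUser" pipeline parameters, and the sAMAccountName filter is an assumption about how the LDAP Search Snap is configured:

```python
from ldap3 import Connection, Server, SUBTREE

base_dn = "OU=Users,DC=example,DC=com"   # value of the "baseDn" parameter
filter_user = "jdoe"                     # value of the "filterUser" parameter

# Placeholder host and service-account credentials.
server = Server("ldaps://ad.example.com")
conn = Connection(server, user="EXAMPLE\\svc-ldap", password="secret", auto_bind=True)

# Assumed filter: match a user object by sAMAccountName within the base DN.
conn.search(
    search_base=base_dn,
    search_filter=f"(&(objectClass=user)(sAMAccountName={filter_user}))",
    search_scope=SUBTREE,
    attributes=["cn", "mail", "memberOf"],
)
for entry in conn.entries:
    print(entry.entry_dn, entry.mail)
```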
RabbitMQ Consumer to MySQL
Created by Chris Ward, SnapLogic
This pipeline consumes messages from a RabbitMQ queue, writes them to a MySQL database, and acknowledges or rejects each message depending on whether delivery succeeds. It suits use cases where messages need to be streamed in real time and their content written to a relational database for further consumption.
Configuration
Sources: RabbitMQ message
Targets: MySQL table
Snaps used: RabbitMQ Consumer, Binary to Document, MySQL Insert, Mapper, RMQ Message Reject, RMQ, JSON Formatter, File Writer
Downloads
RabbitMQ_to_MySQL.slp (17.6 KB)
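The consume/acknowledge/reject flow can be pictured with a short pika-based Python sketch; the queue name, connection details, and the insert_row() stub are placeholders, since in the pipeline the write is performed by the MySQL Insert Snap:

```python
import json
import pika

def insert_row(record):
    """Stand-in for the MySQL Insert Snap; replace with a real database write."""
    print("would insert:", record)

def on_message(channel, method, properties, body):
    try:
        record = json.loads(body)   # roughly what Binary to Document does
        insert_row(record)
        channel.basic_ack(delivery_tag=method.delivery_tag)
    except Exception:
        # Mirrors the reject branch: drop poison messages rather than requeueing.
        channel.basic_nack(delivery_tag=method.delivery_tag, requeue=False)

# Placeholder broker host and queue name.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.basic_consume(queue="orders", on_message_callback=on_message)
channel.start_consuming()
```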
Pipeline Directory Generator
Created by Chris Ward, SnapLogic
This pipeline generates a Google Sheet containing key information about all pipelines within a customer's Org, giving customers a consolidated, up-to-date view of every pipeline in a specific Org alongside its last run status and other important metadata. The pipeline would need to be scheduled to run on a set cadence. The pattern could be adapted to write the output to an alternative destination or even be consumed externally as an API.
Configuration
The following pipeline parameters can be used to configure certain aspects of the pipeline:
OrgName
GoogleSheetName
GoogleWorksheetName
Sources: SnapLogic API & Metadata Snaps
Targets: Google Sheets
Snaps used: SnapLogic Metadata, REST Get, JSON Splitter, Mapper, Join, Sort, Google Sheets Worksheet Writer
Downloads
Pipeline Directory Generator.slp (19.7 KB)
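As a rough illustration of the final write step only, here is a hedged gspread sketch; the spreadsheet and worksheet names stand in for the GoogleSheetName and GoogleWorksheetName parameters, and the rows are invented examples of what the upstream SnapLogic Metadata and REST Get Snaps would supply:

```python
import gspread

# Invented example rows standing in for the pipeline metadata gathered upstream.
rows = [
    ["Pipeline", "Project", "Last run state", "Last run time"],
    ["Orders_Load", "Finance", "Completed", "2024-01-01T02:00:00Z"],
]

# Placeholder credentials file, spreadsheet name ("GoogleSheetName"), and
# worksheet name ("GoogleWorksheetName").
client = gspread.service_account(filename="service_account.json")
worksheet = client.open("Pipeline Directory").worksheet("Pipelines")
worksheet.clear()
worksheet.update(values=rows, range_name="A1")
```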
Pardot Prospect Import
Created by Chris Ward, SnapLogic
This pattern integrates with Pardot's asynchronous batch APIs, allowing customers to import Prospects into their Pardot instance.
Sources: SQL Server. Can be changed to another source by swapping out the SQL Server Select for another Snap
Targets: Salesforce Pardot Prospect
Note: Pipelines in this pattern use the expression library file pardot-config.json.
Pardot_Prospect_Batch_Process Configuration
Snaps used: SQL Server Select, Mapper, Group By N, Pipeline Execute (calling the pipeline Pardot_Prospect_Orchestration)
Pardot_Prospect_Orchestration Configuration
Snaps used: Pipeline Execute, Head
This pipeline calls a series of other pipelines:
Pardot_Prospect_Generate_CSV
Pardot_Prospect_Open_Batch
Pardot_Prospect_Add_Batch_Child
Pardot_Prospect_Update_Batch_Child
Pardot_Prospect_Generate_CSV Configuration
Snaps used: Mapper, JSON Splitter, CSV Formatter, File Writer
Pardot_Prospect_Open_Batch Configuration
Snaps used: JSON Generator, REST POST, Mapper
Pardot_Prospect_Add_Batch_Child Configuration
Snaps used: REST POST, File Delete, Mapper, Union
Pardot_Prospect_Update_Batch_Child Configuration
Snaps used: REST Patch, Mapper, JSON Formatter, File Writer, Union
Pardot_Prospect_Check_Batch_Status Configuration
Snaps used: Directory Browser, Copy, File Reader, JSON Parser, REST Get, Router, Tail, Join, File Delete, Mapper, Document to Binary, File Writer
Downloads
Pardot_Prospect_Batch_Process.slp (7.8 KB)
pardot-config.json (233 Bytes)
Pardot_Prospect_Orchestration.slp (8.8 KB)
Pardot_Prospect_Generate.slp (6.6 KB)
Pardot_Prospect_Open_Batch.slp (7.1 KB)
Pardot_Prospect_Add_Batch_Child.slp (7.9 KB)
Pardot_Prospect_Update_Batch_Child_.slp (9.1 KB)
Pardot_Prospect_Check_Batch_Status.slp (23.8 KB)
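As an illustration of the Pardot_Prospect_Open_Batch step, a hedged Python sketch of creating an import via the Pardot v5 Import API is shown below; the endpoint path, headers, and payload shape are assumptions about that API and should be checked against current Pardot documentation, and the token and business unit id are placeholders:

```python
import requests

# Placeholder Salesforce OAuth token and Pardot business unit id.
TOKEN = "salesforce-oauth-access-token"
BUSINESS_UNIT_ID = "0Uv000000000001"

# Assumed endpoint and payload for creating (opening) a Prospect import;
# confirm against the current Pardot v5 Import API documentation.
response = requests.post(
    "https://pi.pardot.com/api/v5/imports",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Pardot-Business-Unit-Id": BUSINESS_UNIT_ID,
    },
    json={"operation": "Upsert", "object": "Prospect"},
    timeout=30,
)
response.raise_for_status()
print("opened import:", response.json().get("id"))
```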
Oracle to Redshift - Dynamic Metadata
Created by Chris Ward, SnapLogic
This pipeline replicates table data from an Oracle database and upserts it into Redshift. The parent pipeline first reads the source and target schemas & tables, and a where clause used to filter the source data, from the expression file "oracle_to_redshift__c.expr" (the customer would need to modify this file to match the tables and schemas they are targeting), then obtains the primary key constraints for each target table in Redshift. The pipeline then constructs an array of documents containing up to 10 key columns that determine the unicity of the data when it is upserted into Redshift. Each document is passed to the child pipeline, which reads the data from the source Oracle table and upserts it into Redshift. A Router Snap between the Oracle read and the Redshift upsert provides concurrency for large data volumes. The pattern could potentially be reworked to operate with different source and target databases, e.g. SQL Server to Google BigQuery.
Configuration
To make this pipeline reusable, use the expression file to store the source and target tables & schemas.
Sources: Oracle table
Targets: Redshift table
Snaps used: Mapper, JSON Splitter, Redshift Execute, Group By Fields, Pipeline Execute, Oracle Select, Router, Redshift Upsert
Downloads
Oracle to Redshift - Dynamic Metadata.slp (15.8 KB)
Oracle_to_Redshift_02.slp (11.3 KB)
oracle_to_redshift_c.expr (3.2 KB)
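The metadata step (reading each target table's primary-key columns from Redshift and building a key-column document for the child pipeline) might be sketched as follows; the information_schema query, the connection details, and the key_0 through key_9 field names are assumptions modelled on the description above rather than taken from the pipeline itself:

```python
import psycopg2

# Assumed information_schema query for a table's primary-key columns.
PK_QUERY = """
    select kcu.column_name
    from information_schema.table_constraints tc
    join information_schema.key_column_usage kcu
      on tc.constraint_name = kcu.constraint_name
     and tc.table_schema = kcu.table_schema
    where tc.constraint_type = 'PRIMARY KEY'
      and tc.table_schema = %s
      and tc.table_name = %s
    order by kcu.ordinal_position
"""

def key_column_document(conn, schema, table, max_keys=10):
    """Build one document per target table with up to 10 key columns."""
    with conn.cursor() as cursor:
        cursor.execute(PK_QUERY, (schema, table))
        columns = [row[0] for row in cursor.fetchall()][:max_keys]
    document = {"target_schema": schema, "target_table": table}
    document.update({f"key_{i}": column for i, column in enumerate(columns)})
    return document

# Placeholder Redshift connection details and target table.
conn = psycopg2.connect(host="redshift.example.com", port=5439,
                        dbname="analytics", user="etl", password="secret")
print(key_column_document(conn, "public", "orders"))
```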