Convert data into various file formats and write to HDFS
Created by @pkona

This pipeline pattern has a total of two pipelines: Setup and HDFS RW (Standard), which contains the datasets and writes to HDFS, and Hadoop file formats (Standard), which converts the CSV dataset into file formats that are commonly used for Hadoop and Big Data use cases. File formats include:

CSV
JSON
Parquet
ORCFile
Avro
SequenceFile

Configuration
Specify values for the following pipeline parameters:
hdfs_base_uri
hdfs_folder_path

Sources: CSV file on HDFS
Targets: Avro, Parquet, CSV, JSON, ORCFile, SequenceFile files on HDFS
Snaps used:
Downloads
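For readers who want to prototype the same conversions outside SnapLogic, here is a minimal Python sketch using pyarrow. The HDFS host, port, folder, and file names are assumptions standing in for the hdfs_base_uri and hdfs_folder_path parameters; Avro and SequenceFile output would need additional libraries (e.g., fastavro and Hadoop tooling) and is omitted.

```python
# Minimal sketch: convert a CSV dataset to Parquet and ORC on HDFS with pyarrow.
# The host, port, and paths below are assumed placeholders for the
# hdfs_base_uri and hdfs_folder_path pipeline parameters.
import pyarrow.csv as pv
import pyarrow.parquet as pq
import pyarrow.orc as orc
from pyarrow import fs

hdfs = fs.HadoopFileSystem("namenode", port=8020)   # stands in for hdfs_base_uri
folder = "/data/converted"                          # stands in for hdfs_folder_path

table = pv.read_csv("input.csv")  # the source CSV dataset

# Write the same table in two of the target formats
with hdfs.open_output_stream(f"{folder}/data.parquet") as out:
    pq.write_table(table, out)

with hdfs.open_output_stream(f"{folder}/data.orc") as out:
    orc.write_table(table, out)
```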
SnapLogic Loyalty Program

Did you know there is a SnapLogic Loyalty Program? This program recognizes our customers – SnapLogic Ambassadors – for their contributions promoting and sharing their SnapLogic successes with others. If you are already a SnapLogic customer or partner, you may have engaged at events or even helped spread the word about our products and features by speaking and answering questions. The SnapLogic Loyalty Program is our way to ensure customers are properly recognized and thanked for their efforts.

Points are awarded for several activities, including podcasts, blog posts, and sharing social media content [Community Manager note: I’m working with the loyalty program manager to get your community time included as well]. These points can then be redeemed for SnapLogic swag, training & support, or invites to exclusive industry events.

Go to (bad link) to sign up! Have any questions? Contact us at loyalty@snaplogic.com.
How to create a poll?

Create a topic.
Put “Poll:” in the title and type an intro.
Click the gear icon and select Build Poll.
Select the type of poll (single choice, multi select, or number rating) and either make a list of items to vote on or set the number rating range.
To close a poll, change the title to start with “Closed Poll:”. Voting will end, but discussion can continue.

Example polls:
Single Selection: Alpha, Beta, Gamma, Delta
Multiselect Poll: Alpha, Beta, Gamma, Delta
Number Rating: On a scale of 1 to 5, with 1 meaning Not at All and 5 meaning Most Definitely, how likely are you to recommend this product to your team?
Love SnapLogic? Complete a user review and get SnapLogic Loyalty points!

Love using SnapLogic for data and app integrations? Start sharing your SnapLogic experiences with your peers on G2Crowd! As a thank you for your time (the review only takes ~15 minutes!), we’d like to offer you 1,000 points that can be redeemed via our SnapLogic customer loyalty portal. You can redeem points for cool stuff like Eddie Bauer jackets, Callaway golf shirts, or Attivo sport bags!

Complete a review on G2Crowd: https://www.g2crowd.com/products/snaplogic/reviews
Claim your points once you complete your review by sending a quick email with the survey link to loyalty@snaplogic.com!

Note: Offer valid for SnapLogic customers and partners only.
Survey - How are you using the Integration Assistant?

SnapLogic needs feedback from you on the Integration Assistant! Please fill out this 5-minute survey whether you have used the Integration Assistant or not: https://www.surveymonkey.com/r/IrisAI

We want to hear how you are currently using the Integration Assistant, what’s stopping you from using it, and how we can improve it!
CDC Delta Load Oracle

Submitted by @stodoroska from Interworks

Delta load from a staging table to an Oracle DWH table. The pipeline detects whether a record is an update, delete, or insert according to a flag in the Mlog table, an Oracle-native table for storing transaction logs. Based on those flags and the record IDs, the pipeline decides which action to perform: update, delete, or insert.

Screenshot of pipeline

Configuration
Define the appropriate queries and database accounts in each of the Snaps.

Sources: Oracle DB table
Targets: Oracle DB table
Snaps used: Oracle Execute, Router, Mapper, Aggregate, Unique, Join, Copy

Downloads
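As a rough illustration of the routing logic (not the pattern itself), the sketch below reads change records from a materialized-view log and dispatches inserts, updates, and deletes in plain Python. The table and column names, including staging_orders and its amount column, are hypothetical; it assumes Oracle's convention of a DMLTYPE$$ flag ('I', 'U', 'D') in MLOG$ tables.

```python
# Hypothetical sketch of the Router logic using python-oracledb.
# Table and column names are assumed; DMLTYPE$$ is the I/U/D flag Oracle
# keeps in materialized-view log (MLOG$_*) tables.
import oracledb

conn = oracledb.connect(user="etl", password="secret", dsn="dwh-host/orclpdb1")
read_cur = conn.cursor()
write_cur = conn.cursor()

read_cur.execute("SELECT id, dmltype$$ FROM mlog$_staging_orders")
for record_id, dml_type in read_cur:
    if dml_type == "I":    # new record in staging: insert into the DWH table
        write_cur.execute(
            "INSERT INTO dwh_orders (id, amount) "
            "SELECT id, amount FROM staging_orders WHERE id = :id",
            id=record_id)
    elif dml_type == "U":  # changed record: overwrite the DWH row
        write_cur.execute(
            "UPDATE dwh_orders SET amount = "
            "(SELECT amount FROM staging_orders WHERE id = :id) WHERE id = :id",
            id=record_id)
    elif dml_type == "D":  # removed record: delete from the DWH table
        write_cur.execute("DELETE FROM dwh_orders WHERE id = :id", id=record_id)

conn.commit()
```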
Workday to Oracle EBS Employees

Created by @ckonduru

When there is a new or updated employee object in Workday, the pipeline executes the corresponding operation in Oracle E-Business Suite.

Screenshot of pipeline

Configuration
Sources: Workday
Targets: Oracle EBS
Snaps used: JSON Generator, XML Parser, JSON Formatter, JSON Parser, JSON Splitter, Group By N, Mapper, Workday Read

Downloads
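Since the pipeline's core transformation is turning Workday's XML response into JSON documents, here is a hypothetical stand-alone sketch of that step using xmltodict. The Get_Workers_Response and Worker element names are illustrative; the real structure depends on the Workday web service and version used.

```python
# Illustrative sketch of the XML Parser / JSON Formatter steps outside
# SnapLogic: convert a Workday SOAP-style XML response into JSON records.
# Element names are assumed for the example.
import json
import xmltodict

xml_payload = """
<Get_Workers_Response>
  <Worker><Employee_ID>1001</Employee_ID><Name>Ada Lovelace</Name></Worker>
  <Worker><Employee_ID>1002</Employee_ID><Name>Alan Turing</Name></Worker>
</Get_Workers_Response>
"""

doc = xmltodict.parse(xml_payload)
workers = doc["Get_Workers_Response"]["Worker"]  # a list when repeated elements exist
print(json.dumps(workers, indent=2))
```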
Create Box folders for Salesforce accounts

Created by @skatpally

For an account created in Salesforce, SnapLogic creates a folder in Box with the Salesforce account name as the folder name, if it doesn’t already exist. SnapLogic then creates a shared link to the Box folder.

Configuration
Sources: Salesforce account ID
Targets: Box folder
Snaps used: Mapper, Box Add Folder, REST Put

Downloads
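The same check-create-share flow can be sketched against the Box REST API directly (the pattern itself uses the Box Add Folder Snap plus REST Put). The token, parent folder, and function name below are assumptions.

```python
# Hedged sketch of the create-folder-then-share flow via the Box REST API.
# BOX_TOKEN and the parent folder id are placeholders.
import requests

BOX_TOKEN = "..."  # placeholder OAuth token
headers = {"Authorization": f"Bearer {BOX_TOKEN}"}

def ensure_folder_with_shared_link(account_name: str, parent_id: str = "0") -> str:
    """Create a Box folder named after the Salesforce account if needed, then share it."""
    resp = requests.post(
        "https://api.box.com/2.0/folders",
        headers=headers,
        json={"name": account_name, "parent": {"id": parent_id}},
    )
    if resp.status_code == 409:  # conflict: folder already exists
        folder_id = resp.json()["context_info"]["conflicts"][0]["id"]
    else:
        resp.raise_for_status()
        folder_id = resp.json()["id"]

    # Add a shared link (this mirrors the REST Put step in the pipeline)
    shared = requests.put(
        f"https://api.box.com/2.0/folders/{folder_id}",
        headers=headers,
        json={"shared_link": {"access": "open"}},
    )
    shared.raise_for_status()
    return shared.json()["shared_link"]["url"]
```

Calling ensure_folder_with_shared_link("Acme Corp") would return the shared-link URL, which could then be written back to the Salesforce account record.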
EDWH to Azure Blob Storage and Azure Data Lake

Submitted by @stodoroska from Interworks

Selects data warehouse data from an on-premises Oracle table, converts it into CSV, and stores it in both Azure Blob Storage and Azure Data Lake.

Screenshot of pipeline

Configuration
To make this pipeline work, you need to configure the child pipeline (backlog new table), which runs on premises and is therefore invoked through a Pipeline Execute Snap. You also need to configure the Azure accounts and the Azure paths for Blob Storage and Data Lake.

Sources: Oracle table
Targets: Azure Blob Storage and Azure Data Lake
Snaps used: Oracle Select, Pipeline Execute, CSV Formatter, File Writer, File Reader

Downloads
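To give a feel for the Blob Storage leg of the pipeline, here is an illustrative Python sketch with the azure-storage-blob SDK. The connection string, container, and blob path are placeholders, and in the pipeline the rows come from the Oracle Select Snap rather than being hardcoded.

```python
# Illustrative sketch (not the pipeline itself) of writing a CSV payload to
# Azure Blob Storage. Connection string, container, and blob names are assumed.
import csv
import io
from azure.storage.blob import BlobServiceClient

conn_str = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;EndpointSuffix=core.windows.net"
service = BlobServiceClient.from_connection_string(conn_str)
blob = service.get_blob_client(container="edwh-exports", blob="backlog/export.csv")

# Format rows as CSV (hardcoded here; in the pipeline they come from Oracle Select)
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["id", "amount"])
writer.writerow([1, 99.50])

blob.upload_blob(buf.getvalue(), overwrite=True)
```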
Scenario Detection Stage

Submitted by @stodoroska from Interworks

Based on the source data, this pipeline creates action flags that are later used in the Action stage pipeline.

Screenshot of pipeline

Configuration
Because we are working with data at a large scale, the target records are grouped into chunks of 2000 records. This makes the pipeline faster and reduces memory use.

Sources: CSV file
Targets: CSV file and PGP-encrypted file
Snaps used: Mapper, Join, Group by N, Splitter, CSV Parser, File Writer, File Reader, Union, Copy, PGP Encrypt

Downloads
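Here is a tiny Python analogue of the Group By N behavior described above, batching a record stream into chunks of 2000 so downstream steps process groups instead of individual documents. The function name and record shape are hypothetical.

```python
# Hypothetical analogue of the Group By N Snap: yield lists of at most n records.
from itertools import islice
from typing import Iterable, Iterator

def group_by_n(records: Iterable[dict], n: int = 2000) -> Iterator[list]:
    """Yield successive lists of at most n records from the stream."""
    it = iter(records)
    while chunk := list(islice(it, n)):
        yield chunk

# Example: 5000 records yield chunks of 2000, 2000, and 1000
for batch in group_by_n(({"id": i} for i in range(5000))):
    print(len(batch))
```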