[10-Minute Exercise] How to send a list of employees by email using the Email Sender Snap
Difficulty Level: Hard

In this exercise, you will learn how to send a list of employee names by email by:
- Using the Email Sender Snap from the Snap Catalog
- Adding a valid email account to which the list of employees will be sent

The sample pipeline for this exercise is for a mergers & acquisitions use case. When a company is acquired, HR receives a CSV file containing employee information from that company and wants to route a list of employees to department heads for review, including Customer Success, General & Administration (G&A), Sales & Marketing, and Engineering. In this exercise, the list of employees in the Engineering organization will be sent to an email address.

The following 4-minute video walks through this exercise:

Step 1: Make sure you have copied the sample pipeline (Sample_Pipeline) from the Shared folder within the Pipeline Catalog to your own project folder. Upload the shared EmployeeRecords file from the Shared folder as well.

Step 2: In the Engineering branch of the pipeline, delete the Excel Formatter and File Writer.

Step 3: Go to the Snap Catalog, find the Email Sender Snap, and drag it next to the Sort Snap. We will configure the Email Sender Snap before connecting it to the Sort Snap.

Step 4: Open the Snap to configure the following:
- Click Add Account.
- Select your project as the location and click Continue.
- Supply the following information:
  - Label: a name to help you identify the account, such as My Personal Email
  - Email ID: your full email address
  - Password: your password for this account (it will be encrypted)
  - Server Domain: the SMTP server domain name. For Gmail, it's smtp.gmail.com
  - Port: the port number of the email server. This varies depending on your mail provider, account type, and the Secure connection selected. For Gmail, use 465
  - Secure Connection: how the secure connection to the email server should be initiated.
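As a rough illustration only (not the Snap's internals), the account settings above describe a standard SMTP-over-SSL session; the same Gmail values would drive Python's smtplib like this. The address, password, and employee rows are placeholders:

```python
import smtplib
import ssl
from email.mime.text import MIMEText

SERVER_DOMAIN = "smtp.gmail.com"   # Server Domain setting
PORT = 465                         # Port setting (SSL)
EMAIL_ID = "me@example.com"        # Email ID setting (placeholder)
PASSWORD = "app-password"          # Password setting (placeholder)

def build_message(rows):
    """Render employee rows as a simple HTML table in the email body."""
    cells = "".join(
        f"<tr><td>{r['Name']}</td><td>{r['Department']}</td></tr>" for r in rows
    )
    msg = MIMEText(f"<table>{cells}</table>", "html")
    msg["Subject"] = "Engineering employee list"
    msg["From"] = EMAIL_ID
    msg["To"] = EMAIL_ID
    return msg

def send(msg):
    # Requires real credentials; shown for shape only.
    context = ssl.create_default_context()
    with smtplib.SMTP_SSL(SERVER_DOMAIN, PORT, context=context) as smtp:
        smtp.login(EMAIL_ID, PASSWORD)
        smtp.send_message(msg)
```
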
For Gmail, keep it at SSL. Click Apply, then go to the Views tab.

Step 5: To receive information from the Sort Snap, add an input view by clicking the + sign under Input. You don't need to rename it. You can also remove the output view, but it's not required. Then go to the Settings tab.

Step 6: Set the To field to your email address, then add a subject. You don't need to specify a From address, as the Snap will use your account information.

Step 7: Set the Email type to HTML Table. If you configure an Email Sender Snap with an input view and connect it to receive data from upstream Snaps, one email is sent for each document processed through the pipeline unless the Email type is HTML table. The HTML table format embeds the data in the email body up to the Batch size limit, sending as many emails as necessary to batch through the documents.

Step 8: Go to the SnapLogic documentation for the Email Sender Snap (https://docs-snaplogic.atlassian.net/wiki/spaces/SD/pages/1438208/Email+Sender) and copy the HTML sample provided under Template. Paste the HTML into the Template field of your Email Sender Snap. Update the title and body paragraphs of the email to your own messaging, but leave the table within the body as is.

Step 9: Set the Table-data path to $

Step 10: Save the Snap, then close the dialog.

Step 11: Drag the Email Sender Snap until it connects with the Sort Snap, then place the Snap.

Step 12: Save the pipeline and execute it. You should receive one email with all of the users in the Engineering department.

Congratulations! You have completed this pipeline exercise!

Need more help? You can launch the step-by-step exercise and additional resources in the SnapLogic Knowledge Center on the SnapLogic platform, or browse the categories and topics on the SnapLogic Community.

[ The SnapLogic Knowledge Center is only available within the SnapLogic Free Trial. ]

Snaps available in the 30-Day Free Trial
A subset of all Snaps is available in the 30-day Free Trial. Don't have a free trial account yet? Sign up here.

Last updated: August 19, 2021

The Snaps available are (click a Snap Pack name to see its documentation): Active Directory, Amazon SQS, Anaplan, Azure SQL, Binary, Birst, Box, Confluent Kafka, Coupa, Data Catalog, DynamoDB, Eloqua, ELT Snap Pack, Email, Flow, Google Analytics, Google BigQuery, Google Directory, Google Sheets, JDBC, JIRA, JMS, LDAP, Marketo, MongoDB, Microsoft Exchange Online, Microsoft OneDrive, Microsoft SharePoint Online, MS Dynamics 365 for Sales, MySQL, NetSuite, OpenAir, Oracle DB, PostgreSQL, Redshift, Reltio, REST, Salesforce, SAP HANA, Script, ServiceNow, Shopify, Slack, SnapLogic Metadata, Snowflake, SOAP, Splunk, SQL Server, SuccessFactors, Sumo Logic, Transform, Twilio, Workday, Workday Prism Analytics, Xactly, Zuora.

As of July 26, 2019, all trial users also have access to the Machine Learning Snap Packs.

Snaps not included in the trial but available in the full product include: Oracle EBS, Google DFA, Cassandra, Hadoop, Hive, Teradata, Tableau, Microsoft Dynamics AX, Microsoft Dynamics CRM, Reltio, Vertica, Azure Active Directory, Expensify, Microsoft Exchange, Rabbit MQ, Spark Script, MQTT, Essbase.

Delete a trial account: SSO failing
Hello All,

How can I delete a trial account? I created a trial account, then we got added to our company's SSO. I am using my business email for my trial, so when I try to log in using SSO, it gives the error "SSO login cannot be used for users that have different identity provider".

[5-Minute Exercise] How to add fields onto the Mapper Snap
Difficulty Level: Easy

In this exercise, you will learn how to add a 'Hire Date' field to the employee list by:
- Validating and previewing data in the pipeline
- Using the Mapper Snap to add fields
- Running the pipeline to obtain lists with the added field

Watch this 3-minute video to learn how to use the Mapper Snap and execute the sample pipeline:

Step 1: Make sure you have copied the sample pipeline (Sample_Pipeline) from the Shared folder within the Pipeline Catalog to your own project folder.

Step 2: Click the Validate icon to validate and preview the data in the pipeline. Once the pipeline is validated, it will turn green, and you can click the file icon to preview the data.

Step 3: Open the Mapper Snap. In the Input Schema, you should see Hire Date as an unmapped heading.

Step 4: In the Mapping table, click + to add a row. Add $['Hire Date'] to the Expression column, either by dragging it from the Input Schema or by typing it manually. The brackets and single quotes are needed because the field name contains a space.

Step 5: Enter a value for the Target Path. To keep it user-readable, use $['Hire Date'] again.

Step 6: Click Save and close the dialog.

Step 7: Click the Validate icon to validate the pipeline. The sample pipeline is now ready to be executed; click the Execute icon on the toolbar. The pipeline will turn green once it has executed successfully. Then open the Mapper preview to verify that Hire Date is now available to be sent down to the individual files.

Step 8: Go to the SnapLogic Manager to view the updated files.

Congratulations! You have completed this pipeline exercise!

Need more help? You can launch the step-by-step exercise and additional resources in the SnapLogic Knowledge Center on the SnapLogic platform, or browse the categories and topics on the SnapLogic Community.

Not in the trial but want to try this exercise? Upload this pipeline to your project in SnapLogic. Sample_Pipeline.slp (23.3 KB)

[ The SnapLogic Knowledge Center is only available within the SnapLogic Free Trial. ]
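As a rough analogy in plain Python (not SnapLogic's expression language), the bracket-and-quote form used in this exercise behaves like dictionary indexing, which is why it tolerates the space in the field name. The record below is hypothetical:

```python
# Hypothetical employee record shaped like one document in the pipeline.
doc = {"Name": "Ada Lovelace", "Department": "Engineering", "Hire Date": "2021-03-15"}

# Dot-style access ($.Hire Date) cannot express a field name containing a
# space; the bracketed, quoted form ($['Hire Date']) indexes by the exact key.
hire_date = doc["Hire Date"]

# Writing the value to an identically named target path keeps the output
# user-readable, mirroring the Target Path step.
output = dict(doc)
output["Hire Date"] = hire_date
```
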
[5-Minute Exercise] How to execute an integration pipeline
Get started with SnapLogic with this 5-minute exercise.

Difficulty Level: Easy

In this exercise, you will learn how to:
- Use integration pipelines from the Pipeline and Patterns Catalog tabs
- Copy and save a sample pipeline to your folder
- Configure and execute a pipeline
- View files from the sample pipeline in the SnapLogic Manager

Note: This tutorial is for users within the SnapLogic Trial. All assets referenced can only be found there.

Here is a 4-minute video that shows how to execute this sample pipeline:

Step 1: Log into the SnapLogic platform.

Step 2: Go to the Pipeline Catalog and open the Shared folder. You will see a pre-built pipeline called "Sample_Pipeline." The sample pipeline will appear on the Canvas and look like the pipeline below. This sample pipeline is for a mergers & acquisitions use case. When a company is acquired, HR receives a CSV file containing employee information from that company and wants to route a list of employees to department heads for review, including Customer Success, General & Administration (G&A), Sales & Marketing, and Engineering.

Step 3: To configure the pipeline, you will first need to copy and save this sample pipeline into your folder. Expand the Designer toolbar and click the Copy icon to copy and save the pipeline.

Step 4: Once your sample pipeline is saved, configure the pipeline using the Configuration Wizard by uploading the employee record CSV in the Sample folder.

Step 5: The sample pipeline is ready to be executed using the Pipeline Configuration Wizard. Alternatively, you can click the Execute icon on the toolbar. The pipeline will turn green once it has executed successfully.

Step 6: Now go to the SnapLogic Manager to view the files of employee names for each department in your own project folder.

Congratulations! You have completed the sample pipeline exercise!

Need more help?
You can launch the step-by-step exercise and additional resources in the SnapLogic Knowledge Center on the SnapLogic platform, or browse the categories and topics on the SnapLogic Community.

Not in the trial but want to try this exercise? Upload this pipeline to your project in SnapLogic. Sample_Pipeline.slp (23.3 KB)

[ The SnapLogic Knowledge Center is only available within the SnapLogic Free Trial. ]

[5-Minute Exercise] How to add a new output field using the Router Snap
Difficulty Level: Medium

In this exercise, you will learn how to add a new department and route employees from one department to the new one by:
- Adding a new output view (the new department) in the Router Snap
- Adding a new expression (the employees) to the output view in the Router Snap
- Executing the pipeline to get a list of employees for the new department

Here is a 4-minute video that walks through this exercise:

The sample pipeline for this exercise is for a mergers & acquisitions use case. When a company is acquired, HR receives a CSV file containing employee information from that company and wants to route a list of employees to department heads for review, including Customer Success, General & Administration (G&A), Sales & Marketing, and Engineering. The company has decided to create a new department, Support, and to re-assign some employees from Customer Success to Support.

Step 1: Make sure you have copied the sample pipeline (Sample_Pipeline) from the Shared folder within the Pipeline Catalog to your own project folder.

Step 2: Click the Validate icon to validate and preview the data in the pipeline. Once the pipeline is validated, it will turn green, and you can click the file icon to preview the data. Then open the Router Snap.

Step 3: Go to the Views tab. Click + under Output and name the new output view Support.

Step 4: Go back to the Settings tab. In the Routes section, click + to add a new row. Set the Output view name to Support. Copy the expression from the Customer Success row and paste it into Support's expression field.

Step 5: Open the Expression Builder by clicking the arrow, then the icon in the drop-down, to see the expressions for the department.

Step 6: Remove $Department == "Customer Relations" ||, because we want to re-assign employees under Customer Service and Tech Support to the new Support department. Click OK.

Step 7: Go back and open the Expression Builder in the Customer Success row and keep only $Department == "Customer Relations".
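The resulting routes can be sketched in plain Python (hedged: the Router evaluates its own expression language, and the department values below come from this exercise; the fallback behavior is a simplification):

```python
# Sketch of the Router's per-document decision after the steps above: each
# route pairs an output view name with a predicate, and a document goes to
# the view whose predicate matches.
ROUTES = [
    ("Customer Success", lambda d: d["Department"] == "Customer Relations"),
    ("Support", lambda d: d["Department"] in ("Customer Service", "Tech Support")),
]

def route(doc):
    for view, predicate in ROUTES:
        if predicate(doc):
            return view
    return None  # unmatched documents go to the pipeline's other routes
```
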
Save and close the Router dialog.

Step 8: Add Sort > Excel Formatter > File Writer Snaps to complete the pipeline. On the Designer canvas, hold the Shift key down, then click and drag to select one set of Sort > Excel Formatter > File Writer Snaps from the other departments. Right-click the selection and select Copy. Right-click next to the Support output and select Paste. If there is not enough room to paste it there, click the Zoom Out button in the toolbar, then paste the segment under the pipeline and drag it into place. Click the circle between Support and Sort to connect them.

Step 9: Open the File Writer Snap, change the Filename to Support.xls, and click Save.

Step 10: Click the Validate icon to validate the pipeline. The sample pipeline is now ready to be executed; click the Execute icon on the toolbar. The pipeline will turn green once it has executed successfully.

Step 11: Go to the SnapLogic Manager; in your project folder, you should now have 5 files, including the Support list.

Congratulations! You have completed this pipeline exercise!

Need more help? You can launch the step-by-step exercise and additional resources in the SnapLogic Knowledge Center on the SnapLogic platform, or browse the categories and topics on the SnapLogic Community.

Not in the trial but want to try this exercise? Upload this pipeline to your project in SnapLogic. Sample_Pipeline.slp (23.3 KB)

[ The SnapLogic Knowledge Center is only available within the SnapLogic Free Trial. ]

How to Grant Access to SnapLogic's IPs for Amazon Redshift
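The console steps in this article can also be expressed programmatically. A hedged sketch, assuming boto3's `authorize_security_group_ingress` IpPermissions structure (the security group ID would come from your own cluster):

```python
# The six SnapLogic IPs from this article, each opened to the default
# Redshift port 5439 over TCP.
SNAPLOGIC_CIDRS = [
    "52.11.8.103/32",
    "34.208.181.167/32",
    "52.10.35.99/32",
    "52.36.97.11/32",
    "34.208.230.181/32",
    "34.209.24.34/32",
]

ip_permissions = [{
    "IpProtocol": "tcp",
    "FromPort": 5439,
    "ToPort": 5439,
    "IpRanges": [
        {"CidrIp": cidr, "Description": "SnapLogic"} for cidr in SNAPLOGIC_CIDRS
    ],
}]

# With boto3, this list would be passed as:
#   ec2.authorize_security_group_ingress(GroupId="sg-...", IpPermissions=ip_permissions)
```
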
In order for SnapLogic's Snaplex application to read and write data to your Amazon Redshift cluster, you need to allow external connections from the SnapLogic IP addresses for inbound access.

Open the Clusters page within the Amazon Redshift console and choose the cluster you wish to integrate with. Click the "Properties" tab.

In the "Network and security" section, click the "Edit publicly accessible" button and change it to "Enabled" (if it is not already). Click the link to the security group for the cluster.

Under the "Inbound rules" tab, click the "Edit inbound rules" button and add the following entries, clicking the "Add Rule" button after each one (you may customize the Description column value as you see fit):

Type      Protocol  Port  Source
Redshift  TCP       5439  52.11.8.103/32
Redshift  TCP       5439  34.208.181.167/32
Redshift  TCP       5439  52.10.35.99/32
Redshift  TCP       5439  52.36.97.11/32
Redshift  TCP       5439  34.208.230.181/32
Redshift  TCP       5439  34.209.24.34/32

SnapLogic should now be able to connect to your Redshift cluster directly. If you experience any issues, follow the Amazon Redshift documentation for assistance, or contact us at trial-support@snaplogic.com.

[10-Minute Exercise] How to load data from SQLServer to Amazon Redshift
Difficulty Level: Medium

In this exercise, you will learn how to load data from SQL Server to Amazon Redshift.

Watch this 3-minute video to learn how to load data into Amazon Redshift:

Step 1: Go to the Pattern Catalog and select "SQLServer Redshift Loader" to use this pipeline pattern. If you don't see it in your Pattern Catalog, upload this: SQLServer Redshift Loader - 2021-02-23 22_24_18_2021_02_23.slp (3.9 KB)

Step 2: Make sure to whitelist certain IP addresses according to the Amazon Redshift Getting Started Guide. The IP addresses are also listed on the first page of the Pipeline Configuration Wizard when you open the SQLServer Redshift Loader pattern.

Step 3: Open the SQL Server - Select Snap and click the Account tab to provide your database login information. You will need to label the account and provide your account properties, such as the SQL Server hostname, port number, and database name. You will also need to upload a JDBC driver and specify the JDBC driver class.

Step 4: Now open the Amazon Redshift - Bulk Load Snap to add your Redshift account information in the Account tab. You will also need to label the account and provide the account properties, such as the Redshift endpoint, port number, and database name. Now you are ready to load data into Redshift.

Step 5: Open the SQL Server - Select Snap. Provide the table name, the 'where' clause (a query of the data you need), and the order of your data. Click Save and close the dialog.

Step 6: Now open the Redshift - Bulk Load Snap. You will need to provide the table name and set any other settings as needed. Click Save and close the dialog.

Step 7: Click the Validate icon to validate the pipeline. The pipeline is now ready to be executed; click the Execute icon on the toolbar. The pipeline will turn green once it has executed successfully.

Step 8: You can view the data in your Redshift account.

Congratulations! You have completed this pipeline exercise!

Need more help?
You can launch the step-by-step exercise and additional resources in the SnapLogic Knowledge Center on the SnapLogic platform, or browse the categories and topics on the SnapLogic Community.

[ The SnapLogic Knowledge Center is only available within the SnapLogic Free Trial. ]

[10-Minute Exercise] How to load data from Oracle to Amazon Redshift
Difficulty Level: Medium

In this exercise, you will learn how to load data from Oracle to Amazon Redshift.

Watch this 3-minute video to learn how to load Oracle data into Amazon Redshift:

Step 1: Go to the Pattern Catalog and select "Oracle Redshift Loader" to use this pipeline pattern. If you don't see it in your Pattern Catalog, upload this: Oracle Redshift Loader_2021_02_24.slp (3.2 KB)

Step 2: Make sure to whitelist certain IP addresses as per the Amazon Redshift Getting Started Guide. The IP addresses are listed on the first page of the Pipeline Configuration Wizard when you open the Oracle Redshift Loader pattern.

Step 3: Open the Oracle - Select Snap and click the Account tab to provide your database login information. You will need to label the account and provide your account properties, such as the Oracle hostname, port number, and database name. You will also need to upload a JDBC driver and specify the JDBC driver class.

Step 4: Now open the Amazon Redshift - Bulk Load Snap to add your Redshift account information in the Account tab. You will also need to label the account and provide the account properties, such as the Redshift endpoint, port number, and database name. Now you are ready to load data into Redshift.

Step 5: Open the Oracle - Select Snap. Provide the table name, the 'where' clause (a query of the data you need), and the order of your data. Click Save and close the dialog.

Step 6: Now open the Redshift - Bulk Load Snap. You will need to provide the table name and set any other settings as needed. Click Save and close the dialog.

Step 7: Click the Validate icon to validate the pipeline. The pipeline is now ready to be executed; click the Execute icon on the toolbar. The pipeline will turn green once it has executed successfully.

Congratulations! You have completed this pipeline exercise!

Need more help? You can launch the step-by-step exercise and additional resources in the SnapLogic Knowledge Center on the SnapLogic platform, or browse the categories and topics on the SnapLogic Community.
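The Select Snap's table name, 'where' clause, and ordering settings together describe a single query. As a loose sketch of how they combine (the table and column names here are hypothetical):

```python
def build_select(table, where=None, order_by=None):
    """Compose the kind of SELECT that table/where/order settings describe."""
    sql = f"SELECT * FROM {table}"
    if where:
        sql += f" WHERE {where}"
    if order_by:
        sql += f" ORDER BY {order_by}"
    return sql

# Hypothetical example: pull 2021 hires, ordered for review.
query = build_select(
    "EMPLOYEES",
    where="HIRE_DATE >= DATE '2021-01-01'",
    order_by="LAST_NAME",
)
```
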
[ The SnapLogic Knowledge Center is only available within the SnapLogic Free Trial. ]

[10-Minute Exercise] How to load data from MySQL to Amazon Redshift
Difficulty Level: Medium

In this exercise, you will learn how to load data from MySQL to Amazon Redshift.

Watch this 3-minute video to learn how to load data into Amazon Redshift:

Step 1: Go to the Pattern Catalog and select "MySQL Redshift Loader" to use this pipeline pattern. If you don't see it in your Pattern Catalog, upload this: MySQL Redshift Loader - 2021-02-24 21_47_19_2021_02_24.slp (3.9 KB)

Step 2: Make sure to whitelist certain IP addresses as per the Amazon Redshift Getting Started Guide. The IP addresses are listed on the first page of the Pipeline Configuration Wizard when you open the MySQL Redshift Loader pattern for the first time.

Step 3: Open the MySQL - Select Snap and click the Account tab to provide your database login information. You will need to label the account and provide your account properties, such as the MySQL hostname, port number, and database name.

Step 4: Now open the Amazon Redshift - Bulk Load Snap to add your Redshift account information in the Account tab. You will also need to label the account and provide the account properties, such as the Redshift endpoint, port number, and database name. Now you are ready to load data into Redshift.

Step 5: Open the MySQL - Select Snap. Provide the table name, the 'where' clause (a query of the data you need), and the order of your data. Click Save and close the dialog.

Step 6: Now open the Redshift - Bulk Load Snap. You will need to provide the table name and set any other settings as needed. Click Save and close the dialog.

Step 7: Click the Validate icon to validate the pipeline. The pipeline is now ready to be executed; click the Execute icon on the toolbar. The pipeline will turn green once it has executed successfully.

Step 8: You can view the data in your Redshift account.

Congratulations! You have completed this pipeline exercise!

Need more help?
You can launch the step-by-step exercise and additional resources in the SnapLogic Knowledge Center on the SnapLogic platform, or browse the categories and topics on the SnapLogic Community.

[ The SnapLogic Knowledge Center is only available within the SnapLogic Free Trial. ]