[10-Minute Exercise] How to send a list of employees to email using the Email Sender Snap
Difficulty Level: Hard

In this exercise, you will learn how to send a list of employee names to an email address by:
- Using the Email Sender Snap from the Snap Catalog
- Adding a valid email account that you want the list of employees sent to

The sample pipeline for this exercise is for a mergers & acquisitions use case. When a company is acquired, HR receives a CSV file containing employee information from that company and wants to route a list of employees to department heads for review, including Customer Success, General & Administration (G&A), Sales & Marketing, and Engineering. In this exercise, the list of employees in the Engineering organization will be sent to an email address.

The following 4-minute video walks through this exercise:

Step 1: Make sure you have copied the sample pipeline (Sample_Pipeline) from the Shared folder within the Pipeline Catalog to your own project folder. Upload the shared EmployeeRecords file from the Shared folder as well.

Step 2: In the Engineering branch of the pipeline, delete the Excel Formatter and File Writer Snaps.

Step 3: Go to the Snap Catalog, find the Email Sender Snap, and drag it next to the Sort Snap. We will configure the Email Sender Snap before connecting it to the Sort Snap.

Step 4: Open the Snap and configure the account:
- Click Add Account.
- Select your project as the location and click Continue.
- Supply the following information:
  - Label: a name to help you identify the account, such as My Personal Email
  - Email ID: your full email address
  - Password: your password for this account (it will be encrypted)
  - Server Domain: the SMTP server domain name. For Gmail, it is smtp.gmail.com
  - Port: the port number of the email server. This varies depending on your mail provider, account type, and the Secure connection selected. For Gmail, use 465
  - Secure connection: how the secure connection to the email server should be initiated. For Gmail, keep it at SSL
- Click Apply, then go to the Views tab.

Step 5: To receive information from the Sort Snap, add an input view by clicking the + sign under Input. You don't need to rename it. You can also remove the output view, but it's not required. Then go to the Settings tab.

Step 6: Set the To field to your email address, then add a subject. You don't need to specify a From address, because the Snap uses your account information.

Step 7: Set the Email type to HTML Table. If you configure an Email Sender Snap with an input view and connect it to receive data from upstream Snaps, one email is sent for each document processed through the pipeline unless you use HTML Table as the Email type. The HTML Table format embeds the data in the email body up to the Batch size limit, sending as many emails as necessary to batch through the documents. (A sketch of the equivalent SMTP call appears at the end of this exercise.)

Step 8: Go to the SnapLogic documentation for the Email Sender Snap (https://docs-snaplogic.atlassian.net/wiki/spaces/SD/pages/1438208/Email+Sender) and copy the HTML sample provided under Template. Paste the HTML into the Template field of your Email Sender Snap. Update the title and body paragraphs of the email to your own messaging, but leave the table within the body as is.

Step 9: Set the Table-data path to $.

Step 10: Save the Snap, then close the dialog.

Step 11: Drag the Email Sender Snap until it connects with the Sort Snap, then place the Snap.

Step 12: Save the pipeline and execute it. You should receive one email with all of the users for the Engineering department.

Congratulations! You have completed this pipeline exercise!
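As an optional aside, here is a minimal Python sketch (not the Snap's actual implementation) of what the Gmail account settings in Step 4 and the HTML Table email type in Step 7 correspond to: an SMTP-over-SSL connection to smtp.gmail.com on port 465 that sends one HTML-table email. The sender address, app password, recipient, and sample rows are placeholders, not values from the exercise files.

```python
# Minimal sketch of the SMTP call behind the Email Sender Snap's Gmail settings.
# All values below are placeholders.
import smtplib
from email.mime.text import MIMEText

SMTP_HOST = "smtp.gmail.com"       # Server Domain from Step 4
SMTP_PORT = 465                    # Port for SSL connections to Gmail
SENDER = "you@example.com"         # Email ID (placeholder)
PASSWORD = "your-app-password"     # Password (Gmail typically requires an app password)
RECIPIENT = "you@example.com"      # The To field from Step 6 (placeholder)

rows = [                           # stand-ins for the documents coming from the Sort Snap
    {"Name": "Alex Doe", "Department": "Engineering"},
    {"Name": "Sam Lee", "Department": "Engineering"},
]

# Build a simple HTML table, roughly what the HTML Table email type embeds in the body.
cells = "".join(
    f"<tr><td>{r['Name']}</td><td>{r['Department']}</td></tr>" for r in rows
)
html = (
    "<html><body><h3>Engineering employee list</h3>"
    "<table border='1'><tr><th>Name</th><th>Department</th></tr>"
    f"{cells}</table></body></html>"
)

msg = MIMEText(html, "html")
msg["Subject"] = "Engineering employee list"
msg["From"] = SENDER
msg["To"] = RECIPIENT

# "Secure connection: SSL" corresponds to using SMTP_SSL here.
with smtplib.SMTP_SSL(SMTP_HOST, SMTP_PORT) as server:
    server.login(SENDER, PASSWORD)
    server.send_message(msg)
```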
Need more help?
You can launch the step-by-step exercise and additional resources on the SnapLogic Knowledge Center on our SnapLogic platform, or browse the categories and topics on the SnapLogic Community.

[ The SnapLogic Knowledge Center is only available within the SnapLogic Free Trial. ]

[10-Minute Exercise] How to sync data between Oracle and Amazon Redshift
Difficulty Level: Medium

In this exercise, you will learn how to sync data between Oracle and Amazon Redshift.

Watch this 3-minute video to learn how to sync data to Amazon Redshift:

Step 1: Go to the Pattern Catalog and select "Oracle Redshift Sync" to use this pipeline pattern.

Step 2: Download last_modified.json (zipped file), extract it, and upload it to the project you're using to save this pipeline. This file is used by the Last Modified Date Snap. (A sketch of the watermark idea behind this file appears after this exercise.)

Step 3: Open the Oracle Select Snap to add your Oracle account. You will need to provide your login information, including the Oracle hostname, port number, and database name used for the initial loading pipeline.

Step 4: Now open the Amazon Redshift - Bulk Load Snap to add your Redshift account information in the Account tab. You will also need to label the account and provide the account properties, such as the Redshift endpoint, port number, and database name. Now you are ready to load data into Redshift.

Step 5: Click Validate Pipeline on the toolbar to validate the pipeline. Once the pipeline is ready, click Execute Pipeline on the toolbar. The pipeline will turn green once it has successfully executed and synced.

Congratulations! You have completed this pipeline exercise!

Need more help?
You can launch the step-by-step exercise and additional resources on the SnapLogic Knowledge Center on our SnapLogic platform, or browse the categories and topics on the SnapLogic Community.

[ The SnapLogic Knowledge Center is only available within the SnapLogic Free Trial. ]
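For context on Step 2: the last_modified.json file gives the sync pattern a watermark, so each run only selects rows changed since the previous run. The sketch below shows that idea in plain Python; the JSON layout, table name, and LAST_MODIFIED column are illustrative assumptions, not the exact schema shipped with the pattern.

```python
# Illustrative sketch of the incremental-sync watermark idea behind last_modified.json.
# The JSON layout, table name, and LAST_MODIFIED column are assumptions for this example.
import json

WATERMARK_FILE = "last_modified.json"

def load_watermark(path: str = WATERMARK_FILE) -> str:
    """Return the timestamp of the last successful sync, or an old default on first run."""
    try:
        with open(path) as f:
            return json.load(f)["last_modified"]
    except FileNotFoundError:
        return "1970-01-01 00:00:00"

def incremental_select(table: str, watermark: str) -> str:
    """Build the kind of SELECT an incremental Oracle read would issue."""
    return (
        f"SELECT * FROM {table} "
        f"WHERE LAST_MODIFIED > TO_TIMESTAMP('{watermark}', 'YYYY-MM-DD HH24:MI:SS') "
        f"ORDER BY LAST_MODIFIED"
    )

print(incremental_select("EMPLOYEES", load_watermark()))
```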
[10-Minute Exercise] How to load data from Oracle to Amazon Redshift

Difficulty Level: Medium

In this exercise, you will learn how to load data from Oracle to Amazon Redshift.

Watch this 3-minute video to learn how to load Oracle data to Amazon Redshift:

Step 1: Go to the Pattern Catalog and select "Oracle Redshift Loader" to use this pipeline pattern. If you don't see it in your Pattern Catalog, upload this: Oracle Redshift Loader_2021_02_24.slp (3.2 KB)

Step 2: Make sure to whitelist certain IP addresses as described in the Amazon Redshift Getting Started Guide. The IP addresses are listed on the first page of the Pipeline Configuration Wizard when you open the Oracle to Amazon Redshift pattern.

Step 3: Open the Oracle - Select Snap and click the Account tab to provide your database login information. You will need to label the account and provide your account properties, such as the Oracle hostname, port number, and database name. You will also need to upload a JDBC driver and specify the JDBC driver class.

Step 4: Now open the Amazon Redshift - Bulk Load Snap to add your Redshift account information in the Account tab. You will also need to label the account and provide the account properties, such as the Redshift endpoint, port number, and database name. Now you are ready to load data into Redshift.

Step 5: Open the Oracle - Select Snap settings. Provide the table name, the WHERE clause, and the ordering. Click Save and close the dialog. (A sketch of the equivalent SELECT appears after this exercise.)

Step 6: Now open the Redshift - Bulk Load Snap. Provide the table name and set any other settings as needed. Click Save and close the dialog.

Step 7: Click Validate Pipeline on the toolbar to validate the pipeline. Once the pipeline is ready, click Execute Pipeline on the toolbar. The pipeline will turn green once it has successfully executed.

Congratulations! You have completed this pipeline exercise!

Need more help?
You can launch the step-by-step exercise and additional resources on the SnapLogic Knowledge Center on our SnapLogic platform, or browse the categories and topics on the SnapLogic Community.

[ The SnapLogic Knowledge Center is only available within the SnapLogic Free Trial. ]
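Here is a minimal sketch, outside SnapLogic, of what the Oracle account and Select settings in Steps 3 and 5 boil down to: connect with hostname, port, and database/service name, then run a SELECT with a WHERE clause and an ORDER BY. The connection values, table, and columns are placeholders.

```python
# Standalone sketch of the Oracle - Select Snap's query (placeholder connection values).
import oracledb  # pip install oracledb

conn = oracledb.connect(
    user="demo_user",
    password="demo_password",
    dsn="oracle-host.example.com:1521/ORCLPDB1",  # hostname : port / service name
)

with conn.cursor() as cur:
    # Table name, WHERE clause, and ordering, as configured in Step 5.
    cur.execute(
        "SELECT * FROM EMPLOYEES WHERE DEPARTMENT = :dept ORDER BY HIRE_DATE",
        dept="Engineering",
    )
    for row in cur:
        print(row)

conn.close()
```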
[10-Minute Exercise] How to sync data between MySQL and Amazon Redshift

Difficulty Level: Medium

In this exercise, you will learn how to sync data between MySQL and Amazon Redshift.

Watch this 3-minute video to learn how to sync data to Amazon Redshift:

Step 1: Go to the Pattern Catalog and select "MySQL Redshift Sync" to use this pipeline pattern.

Step 2: Download last_modified.json (zipped file), extract it, and upload it to the project you're using to save this pipeline. This file is used by the Last Modified Date Snap.

Step 3: Open the MySQL Read Snap to add your MySQL account. You will need to provide your login information, including the MySQL hostname, port number, and database name used for the initial loading pipeline.

Step 4: Now open the Amazon Redshift - Bulk Load Snap to add your Redshift account information in the Account tab. You will also need to label the account and provide the account properties, such as the Redshift endpoint, port number, and database name. Now you are ready to load data into Redshift.

Step 5: Click Validate Pipeline on the toolbar to validate the pipeline. Once the pipeline is ready, click Execute Pipeline on the toolbar. The pipeline will turn green once it has successfully executed and synced. (A sketch of how the sync watermark is advanced after a successful run appears after this exercise.)

Congratulations! You have completed this pipeline exercise!

Need more help?
You can launch the step-by-step exercise and additional resources on the SnapLogic Knowledge Center on our SnapLogic platform, or browse the categories and topics on the SnapLogic Community.

[ The SnapLogic Knowledge Center is only available within the SnapLogic Free Trial. ]
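The sync pattern also needs to advance its watermark once a run completes, so the next execution only picks up newer rows. The sketch below shows that half of the idea in plain Python; the JSON layout and the timestamp value are illustrative assumptions.

```python
# Illustrative sketch: persist the newest change timestamp after a successful sync run
# so the next run can resume from it. The file layout is an assumption for this example.
import json

WATERMARK_FILE = "last_modified.json"

def save_watermark(timestamp: str, path: str = WATERMARK_FILE) -> None:
    """Record the timestamp of the most recent row that was successfully synced."""
    with open(path, "w") as f:
        json.dump({"last_modified": timestamp}, f)

# After the Redshift bulk load finishes, store the newest last-modified value seen.
newest_timestamp_seen = "2021-02-24 21:00:00"  # placeholder value
save_watermark(newest_timestamp_seen)
print(open(WATERMARK_FILE).read())
```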
[10-Minute Exercise] How to load data from MySQL to Amazon Redshift

Difficulty Level: Medium

In this exercise, you will learn how to load data from MySQL to Amazon Redshift.

Watch this 3-minute video to learn how to load data to Amazon Redshift:

Step 1: Go to the Pattern Catalog and select "MySQL Redshift Loader" to use this pipeline pattern. If you don't see it in your Pattern Catalog, upload this: MySQL Redshift Loader - 2021-02-24 21_47_19_2021_02_24.slp (3.9 KB)

Step 2: Make sure to whitelist certain IP addresses as described in the Amazon Redshift Getting Started Guide. The IP addresses are listed on the first page of the Pipeline Configuration Wizard when you open the MySQL to Amazon Redshift pattern for the first time.

Step 3: Open the MySQL - Select Snap and click the Account tab to provide your database login information. You will need to label the account and provide your account properties, such as the MySQL hostname, port number, and database name.

Step 4: Now open the Amazon Redshift - Bulk Load Snap to add your Redshift account information in the Account tab. You will also need to label the account and provide the account properties, such as the Redshift endpoint, port number, and database name. Now you are ready to load data into Redshift.

Step 5: Open the MySQL - Select Snap settings. Provide the table name, the WHERE clause (a query for the data you need), and the ordering of your data. Click Save and close the dialog.

Step 6: Now open the Redshift - Bulk Load Snap. Provide the table name and set any other settings as needed. Click Save and close the dialog. (A sketch of the bulk-load work this Snap takes care of appears after this exercise.)

Step 7: Click Validate Pipeline on the toolbar to validate the pipeline. Once the pipeline is ready, click Execute Pipeline on the toolbar. The pipeline will turn green once it has successfully executed.

Step 8: You can view the data in your Redshift account.

Congratulations! You have completed this pipeline exercise!

Need more help?
You can launch the step-by-step exercise and additional resources on the SnapLogic Knowledge Center on our SnapLogic platform, or browse the categories and topics on the SnapLogic Community.

[ The SnapLogic Knowledge Center is only available within the SnapLogic Free Trial. ]
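As a rough mental model of Step 6, a Redshift bulk load is typically done by staging the extracted rows as a file in S3 and issuing a Redshift COPY command; the Bulk Load Snap handles this kind of work for you, and its exact mechanics may differ. The sketch below is a hedged, hand-rolled illustration; the cluster endpoint, credentials, bucket, IAM role, and table are placeholders.

```python
# Hedged sketch of a Redshift bulk load done by hand: COPY a staged CSV from S3.
# Endpoint, credentials, bucket, IAM role, and table name are all placeholders.
import psycopg2  # pip install psycopg2-binary

REDSHIFT_DSN = (
    "host=my-cluster.abc123.us-west-2.redshift.amazonaws.com "
    "port=5439 dbname=dev user=awsuser password=secret"
)

COPY_SQL = """
    COPY public.employees
    FROM 's3://my-staging-bucket/employees.csv'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    CSV IGNOREHEADER 1;
"""

with psycopg2.connect(REDSHIFT_DSN) as conn:
    with conn.cursor() as cur:
        cur.execute(COPY_SQL)
# psycopg2 commits the transaction when the connection context exits without error.
```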
[10-Minute Exercise] How to load data from SQLServer to Amazon Redshift

Difficulty Level: Medium

In this exercise, you will learn how to load data from SQL Server to Amazon Redshift.

Watch this 3-minute video to learn how to load data into Amazon Redshift:

Step 1: Go to the Pattern Catalog and select "SQLServer Redshift Loader" to use this pipeline pattern. If you don't see it in your Pattern Catalog, upload this: SQLServer Redshift Loader - 2021-02-23 22_24_18_2021_02_23.slp (3.9 KB)

Step 2: Make sure to whitelist certain IP addresses according to the Amazon Redshift Getting Started Guide. The IP addresses are also listed on the first page of the Pipeline Configuration Wizard when you open the SQLServer to Amazon Redshift pattern.

Step 3: Open the SQL Server - Select Snap and click the Account tab to provide your database login information. You will need to label the account and provide your account properties, such as the SQL Server hostname, port number, and database name. You will also need to upload a JDBC driver and specify the JDBC driver class.

Step 4: Now open the Amazon Redshift - Bulk Load Snap to add your Redshift account information in the Account tab. You will also need to label the account and provide the account properties, such as the Redshift endpoint, port number, and database name. Now you are ready to load data into Redshift.

Step 5: Open the SQL Server - Select Snap settings. Provide the table name, the WHERE clause (a query for the data you need), and the ordering of your data. Click Save and close the dialog. (A sketch of the equivalent SQL Server query appears after this exercise.)

Step 6: Now open the Redshift - Bulk Load Snap. Provide the table name and set any other settings as needed. Click Save and close the dialog.

Step 7: Click Validate Pipeline on the toolbar to validate the pipeline. Once the pipeline is ready, click Execute Pipeline on the toolbar. The pipeline will turn green once it has successfully executed.

Step 8: You can view the data in your Redshift account.

Congratulations! You have completed this pipeline exercise!

Need more help?
You can launch the step-by-step exercise and additional resources on the SnapLogic Knowledge Center on our SnapLogic platform, or browse the categories and topics on the SnapLogic Community.

[ The SnapLogic Knowledge Center is only available within the SnapLogic Free Trial. ]
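For reference, here is a minimal sketch of the query side of Steps 3 and 5 done outside SnapLogic with pyodbc: connect using host, port, and database, then run a SELECT with a WHERE clause and ORDER BY. The connection values, table, and columns are placeholders; inside SnapLogic the account uses a JDBC driver instead (the commonly used driver class for SQL Server is com.microsoft.sqlserver.jdbc.SQLServerDriver).

```python
# Standalone sketch of the SQL Server - Select Snap's query (placeholder values).
import pyodbc  # pip install pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlserver-host.example.com,1433;"   # hostname, port
    "DATABASE=hr;UID=demo_user;PWD=demo_password"
)

cursor = conn.cursor()
# Table name, WHERE clause, and ordering, as configured in Step 5.
cursor.execute(
    "SELECT * FROM dbo.employees WHERE department = ? ORDER BY hire_date",
    "Engineering",
)
for row in cursor.fetchall():
    print(row)

conn.close()
```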
[10-Minute Exercise] How to sync data between SQLServer and Amazon Redshift

Difficulty Level: Medium

In this exercise, you will learn how to sync data between SQL Server and Amazon Redshift.

Watch this 3-minute video to learn how to sync data to Amazon Redshift:

Step 1: Go to the Pattern Catalog within the SnapLogic Free Trial and select "SQLServer Redshift Sync" to use this pipeline pattern.

Step 2: Download last_modified.json (zipped file), extract it, and upload it to the project you're using to save this pipeline. Upload the JSON file to the Last Modified Date Snap.

Step 3: Open the SQLServer Select Snap to add your SQL Server account. You will need to provide your login information, including the SQL Server hostname, port number, and database name used for the initial loading pipeline.

Step 4: Now open the Amazon Redshift - Bulk Load Snap to add your Redshift account information in the Account tab. You will also need to label the account and provide the account properties, such as the Redshift endpoint, port number, and database name. Now you are ready to load data into Redshift.

Step 5: Click Validate Pipeline on the toolbar to validate the pipeline. Once the pipeline is ready, click Execute Pipeline on the toolbar. The pipeline will turn green once it has successfully executed and synced. (A sketch of a quick post-sync row-count check appears after this exercise.)

Congratulations! You have completed this pipeline exercise!

Need more help?
You can launch the step-by-step exercise and additional resources on the SnapLogic Knowledge Center on our SnapLogic platform, or browse the categories and topics on the SnapLogic Community.

[ The SnapLogic Knowledge Center is only available within the SnapLogic Free Trial. ]
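As an optional sanity check after the sync runs (not part of the pattern itself), you can compare row counts between the SQL Server source and the Redshift target. The sketch below is illustrative only; connection strings and table names are placeholders.

```python
# Illustrative post-sync check: compare source and target row counts.
# Connection strings and table names are placeholders.
import pyodbc    # pip install pyodbc
import psycopg2  # pip install psycopg2-binary

def count_rows(connection, table: str) -> int:
    """Return the number of rows in the given table."""
    cur = connection.cursor()
    cur.execute(f"SELECT COUNT(*) FROM {table}")
    return cur.fetchone()[0]

source = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlserver-host.example.com,1433;DATABASE=hr;UID=demo_user;PWD=demo_password"
)
target = psycopg2.connect(
    "host=my-cluster.abc123.us-west-2.redshift.amazonaws.com "
    "port=5439 dbname=dev user=awsuser password=secret"
)

src_count = count_rows(source, "dbo.employees")
tgt_count = count_rows(target, "public.employees")
print(f"source={src_count} target={tgt_count} in_sync={src_count == tgt_count}")
```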
[5-Minute Exercise] How to reassign fields using the Router Snap

Difficulty Level: Easy

In this exercise, you will learn how to reassign employees to another team by:
- Using the Router Snap in a sample pipeline
- Changing fields in the Expression builder within the Router Snap

Watch this 3-minute video that walks through this exercise:

The sample pipeline for this exercise is for a mergers & acquisitions use case. When a company is acquired, HR receives a CSV file containing employee information from that company and wants to route a list of employees to department heads for review, including Customer Success, General & Administration (G&A), Sales & Marketing, and Engineering. During the first pass of reallocating departments, the Asset Management team was assigned to General & Administration (G&A). Upon further inspection, it was determined that the team should actually be part of Engineering. In this exercise, we will reassign the Asset Management team to Engineering.

Step 1: Make sure you have copied the sample pipeline (Sample_Pipeline) from the Shared folder within the Pipeline Catalog to your own project folder.

Step 2: Click Validate Pipeline to validate and preview the data in the pipeline. Once the pipeline is validated, it will turn green. Open the Router Snap. The expressions here follow the logic of "if a Department is either A or B or C..., send it to the new department AA; if D or E or F, send it to BB."

Step 3: In the Router Snap, go to the Expressions field for the G&A row and click the down arrow. This opens a drop-down menu.

Step 4: Within the drop-down, click the icon in the upper right corner to open the Expression builder.

Step 5: Within the expression, remove || $Department == "Asset Management". Then click OK to close the dialog.

Step 6: In the Expressions field for the Engineering row, click the arrow, then open the Expression builder. Between "Research and Development" and the ending parenthesis, add || $Department == "Asset Management", so you end up with $Department == "Quality Assurance" || $Department == "Research and Development" || $Department == "Asset Management". Click OK, then Save. (A plain-Python sketch of this routing logic appears after this exercise.)

Step 7: Click Validate Pipeline on the toolbar to validate the pipeline. Once the sample pipeline is ready, click Execute Pipeline on the toolbar. The sample pipeline will turn green once it has successfully executed.

Step 8: Go to SnapLogic Manager; in your project folder, you can view and download the updated Engineering list.

Congratulations! You have completed this pipeline exercise!

Need more help?
You can launch the step-by-step exercise and additional resources on the SnapLogic Knowledge Center on our SnapLogic platform, or browse the categories and topics on the SnapLogic Community.

Not in the trial but want to try this exercise? Upload this pipeline to your project in SnapLogic. Sample_Pipeline.slp (23.3 KB)

[ The SnapLogic Knowledge Center is only available within the SnapLogic Free Trial. ]
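To make the Router logic concrete, here is the routing decision from Step 6 expressed as plain Python. The Engineering departments match the expression in the exercise; the G&A department names are placeholders, since the sample file's full department list isn't shown here.

```python
# Plain-Python sketch of the Router Snap logic after Step 6: Asset Management now
# routes to Engineering instead of G&A. The G&A department names are placeholders.
ENGINEERING = {"Quality Assurance", "Research and Development", "Asset Management"}
G_AND_A = {"Finance", "Human Resources", "Legal"}   # placeholder names

def route(employee: dict) -> str:
    """Return the Router output view an employee document would be sent to."""
    dept = employee["Department"]
    if dept in ENGINEERING:   # $Department == "Quality Assurance" || ... || "Asset Management"
        return "Engineering"
    if dept in G_AND_A:
        return "G&A"
    return "Other"

print(route({"Name": "Alex Doe", "Department": "Asset Management"}))  # -> Engineering
```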
[5-Minute Exercise] How to add a new output field using the Router Snap

Difficulty Level: Medium

In this exercise, you will learn how to add a new department and route employees from one department to the new one by:
- Adding a new output view (the new department) in the Router Snap
- Adding a new expression (the employees) to the output view in the Router Snap
- Executing the pipeline to get a list of employees for the new department

Here is a 4-minute video that walks through this exercise:

The sample pipeline for this exercise is for a mergers & acquisitions use case. When a company is acquired, HR receives a CSV file containing employee information from that company and wants to route a list of employees to department heads for review, including Customer Success, General & Administration (G&A), Sales & Marketing, and Engineering. The company has decided to create a new department, Support, and re-assign some employees from Customer Success to Support.

Step 1: Make sure you have copied the sample pipeline (Sample_Pipeline) from the Shared folder within the Pipeline Catalog to your own project folder.

Step 2: Click Validate Pipeline to validate and preview the data in the pipeline. Once the pipeline is validated, it will turn green, and you can click the file icon to preview the data. Then open the Router Snap.

Step 3: Go to the Views tab. Click the + under Output and name the new output view Support.

Step 4: Go back to the Settings tab. In the Routes section, click + to add a new row. Set the Output view name to Support. Copy the expression from the Customer Success row and paste it into the Support row's expression field.

Step 5: Open the Expression builder by clicking the arrow, then the icon in the drop-down, to see the expressions for the department.

Step 6: Remove $Department == "Customer Relations" ||, because we want to re-assign employees under Customer Service and Tech Support to the new Support department. Click OK.

Step 7: Go back and open the Expression builder for Customer Success and keep only $Department == "Customer Relations". Save and close the Router dialog. (A sketch of the resulting route table appears after this exercise.)

Step 8: Add Sort > Excel Formatter > File Writer Snaps to complete the pipeline. On the Designer canvas, hold the Shift key down, then click and drag to select one set of Sort > Excel Formatter > File Writer Snaps from the other departments. Right-click the selection and select Copy. Right-click next to the Support output and select Paste. If there is not enough room to paste it there, click the Zoom Out button in the toolbar, then paste the segment under the pipeline and drag it into place. Click the circle between Support and Sort to connect them.

Step 9: Open the File Writer Snap, change the Filename to Support.xls, and click Save.

Step 10: Click Validate Pipeline on the toolbar to validate the pipeline. Once the sample pipeline is ready, click Execute Pipeline on the toolbar. The sample pipeline will turn green once it has successfully executed.

Step 11: Go to SnapLogic Manager; in your project folder, you should now have 5 files, including the Support list.

Congratulations! You have completed this pipeline exercise!

Need more help?
You can launch the step-by-step exercise and additional resources on the SnapLogic Knowledge Center on our SnapLogic platform, or browse the categories and topics on the SnapLogic Community.

Not in the trial but want to try this exercise? Upload this pipeline to your project in SnapLogic. Sample_Pipeline.slp (23.3 KB)

[ The SnapLogic Knowledge Center is only available within the SnapLogic Free Trial. ]
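Here is the route table from Steps 3 through 7 sketched in plain Python: the new Support output takes Customer Service and Tech Support, while Customer Success keeps only Customer Relations. The Router Snap evaluates an expression per output view; this is just an illustration of that logic, not the Snap itself.

```python
# Plain-Python sketch of the routes after adding the Support output view.
ROUTES = {
    "Support": lambda dept: dept in ("Customer Service", "Tech Support"),
    "Customer Success": lambda dept: dept == "Customer Relations",
}

def route(employee: dict) -> str:
    """Return the first output view whose expression matches the employee's department."""
    for view, matches in ROUTES.items():
        if matches(employee["Department"]):
            return view
    return "Other"

for emp in ({"Department": "Tech Support"}, {"Department": "Customer Relations"}):
    print(emp["Department"], "->", route(emp))
```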
[5-Minute Exercise] How to add fields onto the Mapper Snap
Difficulty Level: Easy

In this exercise, you will learn how to add a Hire Date field to the employee lists by:
- Validating and previewing data in the pipeline
- Using the Mapper Snap to add fields
- Running the pipeline to obtain lists with the added field

Watch this 3-minute video to learn how to use the Mapper Snap and execute the sample pipeline:

Step 1: Make sure you have copied the sample pipeline (Sample_Pipeline) from the Shared folder within the Pipeline Catalog to your own project folder.

Step 2: Click Validate Pipeline to validate and preview the data in the pipeline. Once the pipeline is validated, it will turn green, and you can click the file icon to preview the data.

Step 3: Open the Mapper Snap. In the Input Schema, you should see Hire Date as an unmapped heading.

Step 4: In the Mapping table, click the + to add a row. Add $['Hire Date'] to the field in the Expression column, either by dragging it from the Input Schema or by typing it manually. The brackets and single quotes are needed because the field name contains a space. (A short sketch of why this syntax is needed appears after this exercise.)

Step 5: Enter a value for the Target path. To keep it user-readable, use $['Hire Date'] again.

Step 6: Click Save and close the dialog.

Step 7: Click Validate Pipeline on the toolbar to validate the pipeline. Once the sample pipeline is ready, click Execute Pipeline on the toolbar. The sample pipeline will turn green once it has successfully executed. Then open the Mapper preview to verify that Hire Date is now available to be sent down to the individual files.

Step 8: Go to SnapLogic Manager to view the updated files.

Congratulations! You have completed this pipeline exercise!

Need more help?
You can launch the step-by-step exercise and additional resources on the SnapLogic Knowledge Center on our SnapLogic platform, or browse the categories and topics on the SnapLogic Community.

Not in the trial but want to try this exercise? Upload this pipeline to your project in SnapLogic. Sample_Pipeline.slp (23.3 KB)

[ The SnapLogic Knowledge Center is only available within the SnapLogic Free Trial. ]
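A quick note on the $['Hire Date'] syntax used in Steps 4 and 5: because the field name contains a space, it has to be addressed with brackets and quotes rather than dot notation, much like a dictionary key with a space in Python. The record below is made-up sample data for illustration; the real data comes from the exercise's employee CSV.

```python
# Illustrative record standing in for one row of the employee data.
record = {"Name": "Alex Doe", "Department": "Engineering", "Hire Date": "2020-06-15"}

# Equivalent of mapping the expression $['Hire Date'] to the target path $['Hire Date']:
# dot-style access (record.Hire Date) would not parse; bracket access handles the space.
mapped = {"Hire Date": record["Hire Date"]}
print(mapped)   # {'Hire Date': '2020-06-15'}
```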