Performing an Action when there is no data
A common integration pattern is to do something when no data is received. For example, we might read a file, parse it, and find that no records meet some filter criteria. As a result, we might want to send an email or insert a ticket into a ticket management system like ServiceNow. In SnapLogic, however, this can be more difficult than it first seems because of the streaming architecture. Many Snaps will not execute without input documents, which is rather hard to accomplish when there is no document to flow through.

So, how can we take an action even though there is no document to run the Snap? The trick is to make a document and force it into the stream with a Join Snap set to Merge:

Note in this figure that even though nothing flows into the top view of the Join Snap, it still produces an output. This enables us to use the Router Snap to check whether a field that we know will exist in the source does in fact exist. If it does, we know that data passed through the Filter Snap and was then merged with the Sequence data. If it does not, we know that only the Sequence data passed through, and therefore nothing made it through the Filter. Only one of these views will have anything pass through it. (A sketch of the Router conditions follows at the end of this post.)

The magic here is the Merge option in the Join Snap: as long as it is receiving data on a view, it will produce output, even if it has nothing to join to. Meanwhile, the Sequence Snap will always output the number of documents that you set it to. In this case, only one is required.

(Pipeline: Do something after no doc filter_2017_04_26.slp (11.1 KB))
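For reference, the Router conditions in this pattern boil down to an existence check. A minimal sketch, assuming the real source documents contain a field named record_id (the field name here is hypothetical; use one you know exists in your source):

```
// First output view: real data made it through the Filter
$.hasOwnProperty('record_id')

// Second output view: only the Sequence document arrived, so nothing passed the Filter
!$.hasOwnProperty('record_id')
```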
Checking for optional properties and returning defaults in the expression language

With the Summer 2017 release, there have been a couple of enhancements to the expression language that make it easier to check for the existence of properties in an object and to return a default value. Previously, you could use the hasOwnProperty() method to check if a property is in an object. Now, objects have a get() method that returns the value of the property with the given name or a default value. For example, to get the value of the property named 'optional', you can do:

```
$.get('optional') // Returns the value of $optional or null
```

To get the property or a default value if the property is not found, you can do:

```
$.get('optional', 5)
```

Note that there is some subtlety here if the property can be in the object with a value of null. If you pass a default value to the get() method and the property is in the object with a value of null, you will get a null result. So, if you wish to get a default when the property is not in the object or when it is null, you should use a logical OR with the default value (a worked example follows at the end of this post). For example:

```
$.get('nullable') || 5
```

In addition to the get() method, we have added the 'in' operator as a shorthand for checking if a property is in an object:

```
'optional' in $
```

Error messages for undefined references have also been improved to suggest other properties in the object with names similar to the one that was referenced. For example, in the following screenshot, there is no 'ms' field, but the object does have a 'msg' field.

Here is an exported pipeline that demonstrates some of these features: optional-properties_2017_08_13.slp (4.8 KB)
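To make the null subtlety concrete, here is a worked example assuming an input document of {"nullable": null}:

```
$.get('nullable', 5)    // null - the property exists, so the default is ignored
$.get('nullable') || 5  // 5 - null is falsy, so the logical OR supplies the default
'nullable' in $         // true - the property is present even though its value is null
```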
Expression library for condensing Reltio REST API response

Before the 2017 Winter Release, transforming a Reltio REST API response JSON object into a simple structure could be a tedious task. The expression library feature from the 2017 Winter Release makes it much easier. Encouraged by @dmiller, I am sharing a pipeline to showcase this technique. In this fictional scenario, the ask is to output only the Mobile phone fields.

Attached:

- Community posting expression library for condensing Reltio REST API response_2017_04_12.slp (7.6 KB)
- reltio.expr.txt (136 Bytes)
- Fake_Reltio API response.json.txt (3.1 KB)
- Phone.json.txt (288 Bytes)

How to test the example pipeline:

1. Download the pipeline file and reltio.expr.txt
2. Rename reltio.expr.txt to reltio.expr
3. Upload the pipeline and reltio.expr to a project
4. Generate the previews

Content of the expression library file:

```
{
  ov: y => y.find(x => x.ov == true).value,
  findByType: (x, type) => x.find(y => this.ov(y.value.Type) == type && y.ov == true)
}
```

Pipeline property tab:
Pipeline:
First mapper:
Second mapper:

(The screenshots for these did not survive; a hedged sketch of the mapper expressions appears at the end of this post.)

Condensed data from the output:

```json
[
  {
    "uri": "entities/geFfGTn",
    "updatedTime": 1492024546326,
    "attributes": {
      "FirstName": "Gr",
      "LastName": "Moun",
      "Gender": "Male",
      "Phone": {
        "CountryCode": "+1",
        "Number": "9999990003",
        "Type": "Mobile"
      }
    }
  }
]
```

The input Reltio API response JSON looked like this:

```json
[
  {
    "uri": "entities/geFfGTn",
    "updatedTime": 1492024546326,
    "attributes": {
      "FirstName": [
        {
          "type": "configuration/entityTypes/Individuals/attributes/FirstName",
          "ov": true,
          "value": "Gr",
          "uri": "entities/geFfGTn/attributes/FirstName/sv4pKXS0"
        }
      ],
      "LastName": [
        {
          "type": "configuration/entityTypes/Individuals/attributes/LastName",
          "ov": true,
          "value": "Moun",
          "uri": "entities/geFfGTn/attributes/LastName/sv4pKfyW"
        }
      ],
      "Gender": [
        {
          "type": "configuration/entityTypes/Individuals/attributes/Gender",
          "ov": true,
          "value": "Male",
          "lookupCode": "M",
          "lookupRawValue": "M",
          "uri": "entities/geFfGTn/attributes/Gender/19BWhlRa3"
        }
      ],
      "Phone": [
        {
          "label": "Mobile 9999990003",
          "value": {
            "Type": [
              {
                "type": "configuration/entityTypes/Individuals/attributes/Phone/attributes/Type",
                "ov": true,
                "value": "Mobile",
                "uri": "entities/geFfGTn/attributes/Phone/19BWhm8Cd/Type/19BWhmKzP"
              }
            ],
            "Number": [
              {
                "type": "configuration/entityTypes/Individuals/attributes/Phone/attributes/Number",
                "ov": true,
                "value": "9999990003",
                "uri": "entities/geFfGTn/attributes/Phone/19BWhm8Cd/Number/19BWhmCSt"
              }
            ],
            "CountryCode": [
              {
                "type": "configuration/entityTypes/Individuals/attributes/Phone/attributes/CountryCode",
                "ov": true,
                "value": "+1",
                "uri": "entities/geFfGTn/attributes/Phone/19BWhm8Cd/CountryCode/19BWhmGj9"
              }
            ]
          },
          "ov": true,
          "uri": "entities/geFfGTn/attributes/Phone/19BWhm8Cd"
        },
        {
          "label": "Home 5193217654",
          "value": {
            "Type": [
              {
                "type": "configuration/entityTypes/Individuals/attributes/Phone/attributes/Type",
                "ov": true,
                "value": "Home",
                "uri": "entities/geFfGTn/attributes/Phone/zUqfhOhP/Type/zUqfhSxf"
              }
            ],
            "Number": [
              {
                "type": "configuration/entityTypes/Individuals/attributes/Phone/attributes/Number",
                "ov": true,
                "value": "5193217654",
                "uri": "entities/geFfGTn/attributes/Phone/zUqfhOhP/Number/zUqfhXDv"
              }
            ],
            "CountryCode": [
              {
                "type": "configuration/entityTypes/Individuals/attributes/Phone/attributes/CountryCode",
                "ov": true,
                "value": "+1",
                "uri": "entities/geFfGTn/attributes/Phone/zUqfhOhP/CountryCode/zUqfhbUB"
              }
            ]
          },
          "ov": true,
          "uri": "entities/geFfGTn/attributes/Phone/zUqfhOhP"
        }
      ]
    }
  }
]
```
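Since the mapper screenshots did not survive, here is a minimal sketch of how the library functions might be applied in the Mapper Snaps, assuming the library is imported under the name reltio (the exact source paths in the real pipeline may differ):

```
// Condense a single-valued attribute by picking the entry where ov == true
lib.reltio.ov($attributes.FirstName)                 // => "Gr"

// Pick the Phone entry whose Type resolves to "Mobile", then condense its fields
lib.reltio.findByType($attributes.Phone, 'Mobile')
```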
Migrate projects, pipelines, accounts, tasks across environments

One of the common requests from customers is how to easily move SnapLogic assets (pipelines, accounts, and tasks) in a project from one environment to another as part of configuration management or promotion processes. The standard project export/import function will move the pipelines, tasks, files, etc., but the customer will need to re-create the account objects and re-link all the account references in the target environment. This extra step can be a hassle if you have a big project with many pipelines and Snaps requiring account references.

The sample pipelines here use the SnapLogic Metadata Snap Pack to read the asset definitions from a project and write them to the target location (whether in the same org or not). They will move the accounts, pipelines, and tasks from source to target. More importantly, they will maintain the account references in the pipelines and the pipeline references in the tasks. Users just need to re-enter the account passwords and re-select the task Snaplex in the target environment.

The attached project export contains 4 pipelines:

1. 01 Main - Migrate Project: The main pipeline, which calls the following in sequence to move the assets.
2. 02 Upsert Account: Moves the Account objects in a project to the target location.
3. 03 Upsert Pipeline: Moves the Pipelines in a project to the target location.
4. 04 Upsert Task: Moves the Task objects in a project to the target location.

You can specify the source and target location (org + space + project) in the pipeline parameters. To run the pipeline to move a project across orgs, the user account will need read/write permission to both the source and target locations.

The project export of these pipelines can be downloaded here: Migrate Project v2.zip (10.7 KB)
Migration Patterns

The following patterns migrate assets from one project to another in the same org. These patterns make use of the SnapLogic Metadata Snaps.

Source: Existing accounts, files, and pipelines within SnapLogic
Target: A second project within SnapLogic
Snaps used: SnapLogic Metadata Snaps, Mapper

Requirements

You must have access to both projects. You will need to define the following pipeline parameters:

- source_path, in the form of /orgname/projectspace/project
- target_path, in the form of /orgname/projectspace/project

Migrate Accounts

- The SnapLogic List Snap gathers the list of accounts in the specified source_path parameter.
- The SnapLogic Read Snap reads the incoming $path for the accounts.
- The Mapper Snap maps the target path (see the sketch after the downloads).
- The SnapLogic Create Snap writes the accounts to the target location.

Migrate Files

- The SnapLogic List Snap gathers the list of files in the specified source_path parameter.
- The Mapper Snap maps the source path to a $source_path field for use in the Read Snap.
- The SnapLogic Read Snap reads the incoming $path for the files.
- The SnapLogic Create Snap writes the files to the target location.

Migrate Pipelines

- The SnapLogic List Snap gathers the list of pipelines in the specified source_path parameter.
- The SnapLogic Read Snap reads the incoming $path for the pipelines.
- The Mapper Snap maps the target path.
- The SnapLogic Create Snap writes the pipelines to the target location.

Pipeline Downloads

- Migrate Accounts.slp (5.7 KB)
- Migrate Files.slp (6.0 KB)
- Migrate Pipelines.slp (5.7 KB)
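The Mapper in each of these patterns essentially rewrites the asset path from the source project to the target project. A minimal sketch of such an expression, assuming the List Snap emits each asset's full path in a $path field (as the steps above suggest):

```
// Swap the source project prefix for the target project prefix
$path.replace(_source_path, _target_path)
```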
CSV to Workday Tenant

Submitted by @stodoroska from Interworks

This pipeline reads a CSV file, parses the content, and then the Workday Write Snap calls the web service operation Put_Applicant to write the data into a Workday tenant.

Configuration

If there is no lookup match in the SQL Server lookup table, we use MKD as the default country code (a sketch of this default appears at the end of this post).

Sources: CSV file on the file share system
Targets: Workday tenant
Snaps used: File Reader, CSV Parser, SQL Server - Lookup, Mapper, Union, Workday Write

Downloads

CSV2Workday.slp (17.3 KB)
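A default like this is typically applied in a Mapper after the lookup; a minimal sketch, assuming the looked-up field is named $CountryCode (the field name is hypothetical):

```
// Fall back to MKD when the SQL Server lookup produced no match
$CountryCode != null ? $CountryCode : 'MKD'
```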
Change Data Capture from Workday

This is often a requirement from a variety of Workday customers: extract data within a time range, i.e., data that changed within the last 7 days, the last 2 days, or even within the last day. Classic examples are:

- Terminations within the last 90 or 30 days
- New hires for the next 7 days who were entered within the last 7 days
- All active workers, and only the terminations within the last pay period or 15 days

It is very easy to accomplish this. Attached is a sample pipeline that extracts worker data whose preferred name changed since July 2016. Obviously, you can parameterize these date ranges (a sketch follows at the end of this post) and make this a batch job that runs every night or morning.

The most important thing is to understand that Workday provides something called the Transaction Log, which keeps track of all transaction changes within Workday. Note that Workday stores changes as part of transactions, and every change has an underlying transaction, which is identified by a Transaction Type.

Listed below are some references for easy development of integrations:

- WKD_TRANSACTION_LOG.SLP - a simple pipeline that gives changes in preferred names since Jul-2016
- Workday Community Documentation Reference: Workday Resource Center - Sign In

Listed below is the mapper which tells the Workday Read Snap what changes to extract.

Attached is a sample pipeline: WKD_Transaction_log.slp (6.6 KB)
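As a sketch of the parameterization mentioned above, the hard-coded date in the mapper could be replaced with a computed expression. The exact date format Workday expects is an assumption here (ISO-8601 date-time), so verify against your tenant:

```
// Start of the change window: 90 days before the run, formatted as ISO-8601
Date.now().minusDays(90).toLocaleDateTimeString({"format": "yyyy-MM-dd'T'HH:mm:ss"})
```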
Connecting to Marketo with the REST Snap Pack

Although SnapLogic's REST OAuth account supports OAuth 2.0, it does not work with Marketo's OAuth implementation. To work with Marketo, you must authenticate manually using the REST Get Snap. In this pipeline, we pass the credentials in as pipeline parameters. Note: This method does expose your credentials in the pipeline.

Authorization

To simplify the process, define the following pipeline parameters:

- url: the REST API URL for your Marketo instance, like: https://xxx-xxx-xxx.mktorest.com
- clientID: The client ID for API access.
- clientKey: The client secret for API access.

Add a REST Get Snap (labeled Marketo Login here) and configure as follows. For Service URL, toggle on the Expression button ( = ) and set the field to:

```
_url + '/identity/oauth/token?grant_type=client_credentials&client_id=' + _clientID + '&client_secret=' + _clientKey
```

Remove the input view. Validate the Snap and it will return a response that contains an access_token and scope. In this example, we follow the REST Get with a Mapper Snap to map the token outside of the array.

Using the Access Token

In subsequent Snaps, we pass this token as a header rather than a query parameter, because it simplifies paged operations such as Get Lead Changes. Here's an example of a simple call which does this. For Service URL, toggle on the Expression button ( = ) and set the field to:

```
_url + '/rest/v1/activities/types.json'
```

Under HTTP Header, set Key to Authorization and set Value, with the Expression button ( = ) toggled on, to:

```
'Bearer ' + $accessToken
```

Paged Operations

When you get to more complex operations, such as getting lead changes, you need to make two API calls: the first creates a paging token, and the second uses the paging token, typically with the paging mechanism enabled in our REST Get Snap.

Get Paging Token

This REST Get Snap (renamed Get Paging Token for clarity) is where you specify the query parameters. For instance, if you want to get lead changes since a particular date, you pass that in via "sinceDateTime". The example provided uses a literal string, but it could be a pipeline parameter or, ideally, a Date object formatted to match what Marketo expects (see the sketch at the end of this post).

```
_url + '/rest/v1/activities/pagingtoken.json'
```

Configure Paging Mechanism

When calling Get Leads (via a REST Get Snap), a few things to bear in mind:

- You need to pass "nextPageToken" as a query parameter, along with the fields you want back. Ideally, the list of fields should be in a pipeline parameter because they appear twice in this configuration.
- The leads will be returned in $entity.result, which is an array. This field will not exist if there are no results, so you need to enable "Null safe" on a Splitter Snap after this REST Get.

Paging expressions for the REST Get Snap are:

Has next:

```
$entity.moreResult == true
```

Next URL:

```
'%s/rest/v1/activities/leadchanges.json?nextPageToken=%s&fields=firstName,lastName'.sprintf(_url, $entity.nextPageToken)
```

API Throttling

Marketo throttles API calls. Their documentation says "100 API calls in a 20 second window". Since our REST Snap paging now includes an option to wait for X seconds or milliseconds between requests, use it whenever you are retrieving paginated results.

Downloads

Marketo REST.slp (14.7 KB)
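As noted under Get Paging Token, the sinceDateTime value can be computed rather than hard-coded. A minimal sketch, assuming a rolling 24-hour window (verify the exact format Marketo expects):

```
// sinceDateTime query parameter value: 24 hours before the run
Date.now().minusHours(24).toLocaleDateTimeString({"format": "yyyy-MM-dd'T'HH:mm:ss"})
```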
How to make a Pipeline wait during execution

This can be achieved by using a Python script to make a pipeline wait during execution. Change the value on line 22 of the script:

```
time.sleep(120)
```

Here, 120 is the number of seconds for which the pipeline waits. Note: The value needs to be provided in seconds. (A sketch of the script's likely shape follows below.)

Python script: Script_to_Wait_pipline.zip (723 Bytes)
Pipeline: Make Pipeline to wait_2017_03_06.slp (4.0 KB)
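The script itself is only available in the attachment, but a script that pauses the stream would presumably follow SnapLogic's standard Jython Script Snap template, roughly like this sketch (the class name and the pass-through logic are assumptions; only the time.sleep() call is confirmed by the post):

```python
from com.snaplogic.scripting.language import ScriptHook
import time

class WaitScript(ScriptHook):
    def __init__(self, input, output, error, log):
        self.input = input
        self.output = output
        self.error = error
        self.log = log

    def execute(self):
        # Pass each incoming document through after pausing
        while self.input.hasNext():
            doc = self.input.next()
            time.sleep(120)  # wait time in seconds; change this value as needed
            self.output.write(doc, doc)

# The Script Snap looks for the hook in this variable
hook = WaitScript(input, output, error, log)
```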
JIRA Search to Email Pattern

This pattern queries JIRA for items submitted within the last day (at the time the pipeline is run) and within the specified projects, then sends out an email. It also uses the routing trick described in Performing an Action when there is no data to send an alternate email if nothing was reported.

Source: JIRA issue
Target: email
Snaps used: JIRA Search, Mapper, Sort, Join, Sequence, Router, Email Sender

You will need accounts for JIRA and the Email Senders. Set the following pipeline parameters:

- emailTo: who will receive the email
- emailFrom: who is sending the email
- JIRAurl: the URL for your instance of JIRA, for example "https://company.atlassian.net"
- projects: the JIRA projects you want to query as part of the JQL query in the JIRA Search Snap. The query is set up to search multiple projects (for example, "project in (ABC,DEF,GHI)"). Refer to JIRA's Advanced Searching documentation if you wish to change this query. A hedged example of such a query follows at the end of this post.

The information sent in the email includes: Key (sent as a link to the issue in JIRA), Issue Type, Title, Priority, Submitter, Status, and Assignee.

Download

Pattern - JIRA All Recent Items.slp (17.1 KB)
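For reference, a JQL query matching this pattern's description ("items submitted within the last day in the specified projects") might look like the following, with placeholder project keys:

```
project in (ABC, DEF, GHI) AND created >= -1d ORDER BY created DESC
```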