Connecting to Marketo with the REST Snap Pack
Although SnapLogic's REST OAuth account supports OAuth 2.0, it does not work with Marketo's OAuth implementation. To work with Marketo, you must authenticate manually using the REST Get Snap. In this pipeline, we pass the credentials in as pipeline parameters. Note: this method exposes your credentials in the pipeline.

Authorization

To simplify the process, define the following pipeline parameters:

url: the REST API URL for your Marketo instance, for example https://xxx-xxx-xxx.mktorest.com
clientID: the client ID for API access
clientKey: the client secret for API access

Add a REST Get Snap (labeled Marketo Login here) and configure it as follows. For Service URL, toggle on the Expression button ( = ) and set the field to:

_url + '/identity/oauth/token?grant_type=client_credentials&client_id=' + _clientID + '&client_secret=' + _clientKey

Remove the input view. Validate the Snap and it will return a response that contains an access_token and scope. In this example, we follow the REST Get with a Mapper Snap to map the token out of the array.

Using the Access Token

In subsequent Snaps, we pass this token as a header rather than as a query parameter, because that simplifies paged operations such as Get Lead Changes. Here is an example of a simple call that does this. For Service URL, toggle on the Expression button ( = ) and set the field to:

_url + '/rest/v1/activities/types.json'

Under HTTP Header, set Key to Authorization and set Value, with the Expression button ( = ) toggled on, to:

'Bearer ' + $accessToken

Paged Operations

For more complex operations, such as getting lead changes, you need to make two API calls: the first creates a paging token, and the second uses that paging token, typically with the paging mechanism enabled in the REST Get Snap.

Get Paging Token

This REST Get Snap (renamed Get Paging Token for clarity) is where you specify the query parameters. For instance, if you want to get lead changes since a particular date, you pass that date in via sinceDateTime. The example provided uses a literal string, but it could be a pipeline parameter or, ideally, a Date object formatted to match what Marketo expects. The Service URL is:

_url + '/rest/v1/activities/pagingtoken.json'

Configure Paging Mechanism

When calling Get Leads (via a REST Get Snap), keep a few things in mind:

You need to pass nextPageToken as a query parameter, along with the fields you want back. Ideally, the list of fields should be a pipeline parameter, because it appears twice in this configuration.
The leads are returned in $entity.result, which is an array. This field will not exist if there are no results, so you need to enable "Null safe" on a Splitter Snap placed after this REST Get.

The paging expressions for the REST Get Snap are:

Has next: $entity.moreResult == true
Next URL: '%s/rest/v1/activities/leadchanges.json?nextPageToken=%s&fields=firstName,lastName'.sprintf(_url, $entity.nextPageToken)

API Throttling

Marketo throttles API calls. Their documentation says "100 API calls in a 20 second window". Since the REST Snap's paging now includes an option to wait a number of seconds or milliseconds between requests, use it whenever you are retrieving paginated results.

Downloads

Marketo REST.slp (14.7 KB)
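For reference, the REST sequence configured in the Snaps above can be sketched outside SnapLogic roughly as follows. This is only an illustration: the base URL, client ID, and client secret are placeholders, and the exact query parameter names should be verified against Marketo's REST API documentation.

    # Illustrative sketch of the calls the Marketo Login, Get Paging Token, and
    # Get Leads Snaps make. Base URL and credentials are placeholders.
    import requests

    base_url = "https://xxx-xxx-xxx.mktorest.com"   # _url pipeline parameter
    client_id = "YOUR_CLIENT_ID"                    # _clientID
    client_secret = "YOUR_CLIENT_SECRET"            # _clientKey

    # 1. Authenticate (the "Marketo Login" REST Get)
    token = requests.get(
        base_url + "/identity/oauth/token",
        params={"grant_type": "client_credentials",
                "client_id": client_id,
                "client_secret": client_secret},
    ).json()["access_token"]
    headers = {"Authorization": "Bearer " + token}

    # 2. Get a paging token for a start date (the "Get Paging Token" REST Get);
    #    check the exact parameter name/casing in the Marketo docs
    paging = requests.get(
        base_url + "/rest/v1/activities/pagingtoken.json",
        params={"sinceDatetime": "2018-01-01T00:00:00Z"},
        headers=headers,
    ).json()
    next_page_token = paging["nextPageToken"]

    # 3. Page through lead changes until moreResult is false, mirroring the
    #    Has next / Next URL paging expressions above
    while True:
        page = requests.get(
            base_url + "/rest/v1/activities/leadchanges.json",
            params={"nextPageToken": next_page_token, "fields": "firstName,lastName"},
            headers=headers,
        ).json()
        for change in page.get("result", []):   # "result" is absent when there are no results
            print(change)
        if not page.get("moreResult"):
            break
        next_page_token = page["nextPageToken"]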
Fetch data whose date is less than 10hrs from the existing date!

Hi Team, I'm trying to achieve a filter condition but haven't had any luck so far. My data has a field named Last_Updated, which stores dates in the format 2022-06-23 03:54:45. I want to consider only those records whose date is less than 10 hours older than the current date. How can I achieve this? Should I use a Mapper or a Filter Snap? I would really appreciate it if the logic behind this could be shared. If the format of the date stored in Last_Updated needs to be transformed as well, please let me know. Thanks!

Regards,
Darsh
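One common way to approach this (an illustrative suggestion, not necessarily the solution marked on the original thread) is a Filter Snap whose expression parses Last_Updated and compares it to the current time. The Python sketch below only shows the equivalent 10-hour cutoff logic, assuming the timestamp format given in the question:

    # Sketch of the 10-hour cutoff, assuming Last_Updated strings like "2022-06-23 03:54:45"
    from datetime import datetime, timedelta

    def updated_within_last_10_hours(record):
        last_updated = datetime.strptime(record["Last_Updated"], "%Y-%m-%d %H:%M:%S")
        return datetime.now() - last_updated < timedelta(hours=10)

    records = [
        {"id": 1, "Last_Updated": "2022-06-23 03:54:45"},
        {"id": 2, "Last_Updated": "2022-06-22 01:00:00"},
    ]
    recent = [r for r in records if updated_within_last_10_hours(r)]  # keeps only fresh rows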
CI/CD Solution with Bitbucket

Submitted by @Linus and @uchohan

The pipelines in this solution are for a proposed CI/CD process. The implementation and documentation enable the following capabilities:

Ability to source control any SnapLogic Pipeline, Task, and Account
Commit entire project
Commit individual asset
Specify commit message
Specify branch name
Automatic Bitbucket project, repository, and branch creation
Automatic Bitbucket CI/CD file upload and Pipeline enablement
Automatic SnapLogic project space, project, and asset creation
Ability to pull assets from Bitbucket to a SnapLogic project
Revert changes based on a specific branch
Revert entire project or specific asset
SnapLogic Compare Pipeline review
Bitbucket Pull Request creation, approval, and merge
Automatic promotion of assets from development to production

Terminology

A SnapLogic project space (belongs to a SnapLogic organization) is mapped to a Bitbucket project.
A SnapLogic project (belongs to a SnapLogic project space) is mapped to a Bitbucket repository (belongs to a Bitbucket project).
Each repository has one or more Bitbucket branches. By default, the master branch reflects the state of assets in the SnapLogic production organization. Additional branches (feature branches) inherit the master branch and reflect new development efforts in the SnapLogic development organization.

Developer assets

Each SnapLogic user involved in committing or pulling assets to the Bitbucket space can have their own individual assets. It is recommended that each user duplicate the User_Bitbucket project and replace User with their unique name. Although covered in greater detail in the attached PDF, the User_Bitbucket project holds these four Pipelines, each containing a single Snap:

Commit Project - Commits any Pipelines, Accounts, and Tasks within the specified SnapLogic project to the specified branch in Bitbucket
Commit Asset - Commits the specified asset within the specified SnapLogic project to the specified branch in Bitbucket
Pull Project - Reads any Pipelines, Accounts, and Tasks from the specified branch in the specified Bitbucket repository into the specified project and organization in SnapLogic
Pull Asset - Reads the specified asset from the specified branch in the specified Bitbucket repository into the specified project and organization in SnapLogic

For each Pipeline, each user needs to update the bitbucket_account Pipeline Parameter in the respective Snaps to match the path to their own Bitbucket Account.

Downloads

Documentation
CI_CD Documentation.pdf (1.3 MB)

For User_Bitbucket project:
Commit Asset.slp (3.5 KB)
Commit Project.slp (3.4 KB)
Pull Asset.slp (3.6 KB)
Pull Project.slp (3.6 KB)

Note: These pipelines all rely on shared pipelines located in a CICD-BitBucket project. Make sure to update the mappings to the pipelines within the CICD-BitBucket project to your location.

For CICD-BitBucket project:
1.0 Main - SL Project to Bitbucket.slp (17.5 KB)
1.1 Create Project and Repo.slp (19.2 KB)
1.2 SL Asset to Bitbucket.slp (14.8 KB)
2.0 Main - Migrate Assets To SL.slp (23.1 KB)
2.1 Upsert Space And Project.slp (16.4 KB)
2.2 Read Assets.slp (29.3 KB)
2.2.1 Upsert Pipeline To SL.slp (12.8 KB)
2.2.2 Upsert Account To SL.slp (17.9 KB)
2.2.3 Upsert Task To SL.slp (21.2 KB)
PromotionRequest.slp (26.0 KB)
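The committing pipelines drive Bitbucket through its REST API. As a rough illustration of the kind of calls involved, the sketch below assumes Bitbucket Cloud's 2.0 API; the workspace, repository, branch, credentials, and file names are placeholders, and the attached pipelines may use different endpoints or a Bitbucket Server deployment.

    # Rough illustration of Bitbucket Cloud 2.0 API calls for creating a repository
    # and committing an exported pipeline file. All names and credentials are placeholders.
    import requests

    BASE = "https://api.bitbucket.org/2.0"
    auth = ("bitbucket_user", "app_password")
    workspace, repo_slug, branch = "my-workspace", "my-snaplogic-project", "feature/my-change"

    # Create the repository that maps to a SnapLogic project
    requests.post(
        f"{BASE}/repositories/{workspace}/{repo_slug}",
        json={"scm": "git", "is_private": True, "project": {"key": "SNAP"}},
        auth=auth,
    )

    # Commit an exported pipeline definition to the feature branch
    with open("My Pipeline.slp", "rb") as f:
        requests.post(
            f"{BASE}/repositories/{workspace}/{repo_slug}/src",
            data={"branch": branch, "message": "Commit My Pipeline from the dev org"},
            files={"My Pipeline.slp": f},
            auth=auth,
        )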
Need to convert yyyymmdd to yyyy-mm-dd

Hi, I have an element, jsonPath($, "$Z_ORD.IDOC.DK02[].DATE"), and its value comes in as 20210521 (yyyymmdd). I need to convert it to 2021-05-21 and map it to the target ml element. I am using a Mapper. I tried the following:

Date.parse(jsonPath($, "$Z_ORD.IDOC.DK02[].DATE","yyyMMdd")).toLocaleDateTimeString({"format":"yyyy-MM-dd"}) || null

but I get this error:

Not-a-number (NaN) does not have a method named: toLocaleDateTimeString, found in: ...:"yyyy-MM-dd"}). Perhaps you meant: toString, toExponential, toPrecision, toFixed
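The NaN error suggests Date.parse never receives the format string: in the attempted expression it sits inside the jsonPath call, and the year pattern has only three y characters where four are needed. As an illustration of the intended transformation (not the exact expression from the solved thread), a fixed-width value like this can simply be parsed with a yyyyMMdd pattern and reformatted, as in this Python sketch:

    # Sketch of the yyyymmdd -> yyyy-MM-dd conversion for a value like "20210521"
    from datetime import datetime

    def reformat_date(raw):
        # Parse with a four-character year pattern, then format with dashes
        return datetime.strptime(raw, "%Y%m%d").strftime("%Y-%m-%d")

    print(reformat_date("20210521"))  # -> "2021-05-21"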
Archiving Files

Submitted by @stodoroska from Interworks

This pipeline reads a file from the source location, writes it to the archive location, and then deletes the file from the source location.

Configuration

Source and archive locations are configured using pipeline parameters.

Sources: Files on a file-sharing system
Targets: Files on a file-sharing system
Snaps used: File Reader, File Writer, and File Delete

Downloads

Generic.Archive.slp (5.9 KB)
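As a plain illustration of what the three Snaps do (not part of the attached pipeline), the archive-then-delete sequence looks roughly like this, with placeholder paths standing in for the source and archive pipeline parameters:

    # Minimal sketch of the archive-then-delete sequence; paths are placeholders
    # standing in for the source and archive pipeline parameters.
    import os
    import shutil

    source_dir = "/mnt/share/inbound"     # source location parameter
    archive_dir = "/mnt/share/archive"    # archive location parameter

    for name in os.listdir(source_dir):
        src_path = os.path.join(source_dir, name)
        if not os.path.isfile(src_path):
            continue
        shutil.copy2(src_path, os.path.join(archive_dir, name))  # File Reader + File Writer
        os.remove(src_path)                                       # File Delete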
Reference Implementation - Integration Pattern - On-Prem Database to Cloud Datawarehouse

This is a common integration pattern that customers tend to use as they move from on-prem to the cloud. The following videos describe the integration pattern, and the attached reference implementation pipelines help you get a head start on the implementation.

Part 1
Part 2

Pipelines Attached

The pipelines move all the tables from an Oracle database to Redshift. They are designed to follow our best practices, such as parameterization of all aspects of the pipeline (for example, account names, database names, and table names). All are dynamic and controlled via pipeline parameters.

Source: Oracle tables
Target: Redshift tables

Snaps used:
Step 1: Oracle - Table List, Filter, Oracle - Execute, Mapper, Pipeline Execute
Step 2: Mapper, Redshift - Execute, Router, Join, Oracle - Select, Shard Offsets*, Redshift - Bulk Load, Pipeline Execute, Exit, Union
Step 3: Mapper, Oracle - Select, Redshift - Bulk Load, Exit

*This is a Snap developed by the SnapLogic Professional Services team.

Downloads

Pattern 1 - Step 1.0 - Oracle to Redshift Parent_2018_06_25.slp (9.4 KB)
Pattern 1 - Step 1.1 - Oracle to Redshift No Shredding_2018_06_25.slp (20.9 KB)
Pattern 1 - Step 1.2 - Oracle to Redshift Shred_2018_06_25.slp (7.7 KB)
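Conceptually, the parent/child pattern in these pipelines boils down to listing the source tables and launching one load per table. The sketch below is only a stand-in for that control flow: the table list and function bodies are placeholders, and in the attached pipelines the actual work is done by the Oracle and Redshift Snaps listed above.

    # Conceptual stand-in for the parent/child pattern: list the source tables,
    # then run one load per table. The function bodies are placeholders only.
    def list_source_tables(schema):
        # Stand-in for "Oracle - Table List"
        return ["CUSTOMERS", "ORDERS", "ORDER_ITEMS"]

    def copy_table(schema, table):
        # Stand-in for the child pipeline: Oracle - Select feeding Redshift - Bulk Load
        print(f"Loading {schema}.{table} from Oracle into Redshift")

    def run(schema):
        # Parent pipeline: fan out one child execution per table (Pipeline Execute)
        for table in list_source_tables(schema):
            copy_table(schema, table)

    run("SALES")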
Write expression for conditional check

Hi, I want to write this XSLT logic as a SnapLogic expression. I need to map TestID.

<xsl:variable name="TestID">
  <xsl:if test="exists(node()/IDOC/A1[VW='F']/NO)">
    <xsl:copy-of select="node()/IDOC/A1[VW='F']/NO"></xsl:copy-of>
  </xsl:if>
  <xsl:if test="not(exists(node()/IDOC/A1[VW='F']/NO))">
    <xsl:copy-of select="node()/IDOC/A1[VW='F']/NP"></xsl:copy-of>
  </xsl:if>
</xsl:variable>

How do I do it?
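The XSLT selects A1/NO from segments where VW = 'F' and falls back to A1/NP when NO is absent; in SnapLogic this kind of check is usually expressed as a ternary (condition ? valueA : valueB) in a Mapper. The following Python sketch only illustrates that fallback logic; the field names mirror the XSLT and the sample document is hypothetical.

    # Sketch of the fallback: use A1/NO from segments where VW == 'F' when present,
    # otherwise fall back to A1/NP. Field names mirror the XSLT; the document is made up.
    def resolve_test_id(idoc):
        segments = [a1 for a1 in idoc.get("A1", []) if a1.get("VW") == "F"]
        no_values = [a1["NO"] for a1 in segments if "NO" in a1]
        np_values = [a1["NP"] for a1 in segments if "NP" in a1]
        return no_values if no_values else np_values

    doc = {"A1": [{"VW": "F", "NP": "12345"}, {"VW": "X", "NO": "99999"}]}
    print(resolve_test_id(doc))  # -> ['12345'] because no VW == 'F' segment carries NO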
Ingest data from SQL Server (RDBMS) to AWS Cloud Storage (S3)

Contributed by @SriramGopal from Agilisium Consulting

The pipeline is designed to fetch records on an incremental basis from any RDBMS system and load them to cloud storage (Amazon S3 in this case) with partitioning logic. This use case is applicable to cloud data lake initiatives. The pipeline also includes date-based data partitioning at the storage layer and a data validation trail between source and target.

Parent Pipeline

Control Table check: Gets the last run details from the control table.
ETL Process: Fetches the incremental source data based on the control table and loads the data to S3.
Control Table write: Updates the latest run data to the control table for tracking.

S3 Writer Child Pipeline

Audit Update Child Pipeline

Control Table - Tracking

The control table holds the source load type (RDBMS, FTP, API, etc.) and the corresponding object name. Each object load records the load start/end times and the records/documents processed. The source record fetch count and target table load count are calculated for every run. Based on the status of the load (S for success, F for failure), automated notifications can be triggered to the technical team.

Control Table Attributes:
UID - Primary key
SOURCE_TYPE - Type of source: RDBMS, API, social media, FTP, etc.
TABLE_NAME - Table name or object name
START_DATE - Load start time
ENDDATE - Load end time
SRC_REC_COUNT - Source record count
RGT_REC_COUNT - Target record count
STATUS - 'S' (success) or 'F' (failed), based on the source/target load

Partitioned Load

For every load, the data is partitioned automatically based on the transaction timestamp in the storage layer (S3).

Configuration

Sources: RDBMS database, SQL Server table
Targets: AWS storage

Snaps used:
Parent Pipeline: Sort, File Writer, Mapper, Router, Copy, JSON Formatter, Redshift Insert, Redshift Select, Redshift - Multi Execute, S3 File Writer, S3 File Reader, Aggregate, Pipeline Execute
S3 Writer Child Pipeline: Mapper, JSON Formatter, S3 File Writer
Audit Update Child Pipeline: File Reader, JSON Parser, Mapper, Router, Aggregate, Redshift - Multi Execute

Downloads

IM_RDBMS_S3_Inc_load.slp (43.6 KB)
IM_RDBMS_S3_Inc_load_S3writer.slp (12.2 KB)
IM_RDBMS_S3_Inc_load_Audit_update.slp (18.3 KB)
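For orientation, the incremental, date-partitioned load can be sketched outside SnapLogic roughly as follows. The DSN, bucket, table, and column names are placeholders, and the real pipelines track run state in the Redshift control table rather than a hard-coded timestamp.

    # Sketch of an incremental, date-partitioned load. DSN, bucket, table, and column
    # names are placeholders; the pipelines keep run state in the Redshift control table.
    import json
    from datetime import datetime, timezone

    import boto3
    import pyodbc

    last_run = datetime(2022, 1, 1)                # last successful run, from the control table
    conn = pyodbc.connect("DSN=sqlserver_source")  # placeholder DSN
    s3 = boto3.client("s3")

    rows = conn.cursor().execute(
        "SELECT * FROM dbo.Orders WHERE LastModified > ?", last_run
    ).fetchall()

    # Date-based partitioning of the S3 key, as described under "Partitioned Load"
    now = datetime.now(timezone.utc)
    key = f"datalake/orders/year={now:%Y}/month={now:%m}/day={now:%d}/orders_{now:%H%M%S}.json"
    s3.put_object(Bucket="my-data-lake", Key=key,
                  Body=json.dumps([list(r) for r in rows], default=str).encode("utf-8"))

    # A final step would update the control table with counts and status ('S' or 'F')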
Employee Journey: Insert new employee into Workday

Created by @rdill

This pipeline pattern allows users to respond to a REST POST event triggered by a third-party HR solution (e.g., Jobvite, Glassdoor, LinkedIn) and insert new employee data into Workday.

Configuration

This pipeline requires the proper structure to insert a new employee in Workday. Two kinds of data are needed to create a new employee: "static" data, such as the country reference, organization reference, and so on, and dynamic data, such as the user name, hire date, and so on. The pipeline is configured to provide the static data; all the user needs to do is provide the dynamic data.

Sources: Any application that can invoke a REST POST to create an employee in Workday and contains the necessary attributes to create a new employee in Workday. (In this case, a JSON document is used.)
Targets: Workday: Service: Staffing, Object: Hire Employee
Snaps used: File Reader, JSON Parser, Mapper, Workday Write

Downloads

POST Hire Employee.slp (9.4 KB)
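As an illustration of how a third-party HR system might hand a new hire to this pattern, it could POST a JSON document containing the dynamic fields to the pipeline's triggered-task URL. Everything below, including the URL, token, and field names, is a hypothetical placeholder rather than this pattern's actual contract.

    # Hypothetical example of an HR system POSTing the dynamic new-hire fields to a
    # SnapLogic triggered-task URL. URL, token, and field names are placeholders.
    import requests

    task_url = "https://elastic.snaplogic.com/api/1/rest/slsched/feed/MyOrg/HR/POST%20Hire%20Employee"
    new_hire = {
        "firstName": "Jane",
        "lastName": "Doe",
        "hireDate": "2021-07-01",
        "positionID": "P-00123",
    }
    resp = requests.post(task_url, json=new_hire,
                         headers={"Authorization": "Bearer <triggered-task-token>"})
    resp.raise_for_status()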