Remove obsolete fields from an endpoint response using the Mapper snap
You have a response from an API endpoint that returns a huge number of fields. There are a few obsolete fields that you want to remove in the pipeline. What is the best way to handle this in the Mapper snap?

• Add only the fields to be removed from the Input schema and leave the Target Schema blank, with Pass through.
• Add all the required fields from the Input Schema and map them to the Target Schema.
• Add all the required fields from the Input Schema and map them to the Target Schema, with Pass through.
• Add the fields to be removed to the Target Schema and pass null values in the Input Schema, with Pass through.
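For intuition, here is a minimal sketch in plain TypeScript (not SnapLogic's actual implementation) of what the first option does: with Pass through enabled, every input field flows to the output unchanged, and mapping a field to an empty target path drops it. The field names are illustrative.

```ts
type Doc = Record<string, unknown>;

// Pass-through semantics: copy every input field, then drop the ones
// explicitly mapped to an empty target path.
function mapWithPassThrough(doc: Doc, fieldsToRemove: string[]): Doc {
  const out: Doc = { ...doc };          // Pass through: keep everything
  for (const field of fieldsToRemove) { // Empty target path: delete the field
    delete out[field];
  }
  return out;
}

// Example: strip two obsolete fields from an endpoint response.
const response = { id: 1, name: "Acme", legacyCode: "X1", oldFlag: true };
console.log(mapWithPassThrough(response, ["legacyCode", "oldFlag"]));
// => { id: 1, name: "Acme" }
```

This is why the first option scales: you list only the handful of fields to remove, rather than re-mapping the huge number of fields you want to keep.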
How to avoid a double-encoding issue for a URI in SnapLogic?
While processing an Azure Blob Storage file in SnapLogic, SnapLogic is not able to read it because it runs into a double-encoding issue. I have already tried the method below to handle the double encoding; it worked, but it introduces other issues.

Example URI: "https://myaccount.blob.core.windows.net/sascontainer/sasblob.txt?sv=2015-04-05&st=2015-04-29T22%3A18%3A26Z&se=2017-04-30T02%3A23%3A26Z&sr=b&sp=rw&sip=168.1.5.60-168.1.5.70&spr=https&sig=Z%2FRHIX5Xcg0Mq2rqI3OlWTjEg2tYkboXr1P9ZUXDtkk%3D"

endIndex: $data.indexOf("?")
startIndex: $data.search("sascontainer/") + 11
Head: $data.slice(0, $startIndex)
Middle: encodeURIComponent(decodeURIComponent($data.slice($startIndex, $endIndex)))
Tail: $data.slice($endIndex)
REST GET service URL: Head + Middle + Tail

The logic above is used in a REST GET snap to read the service URL by decoding and re-encoding only the middle part. However, this method also fails: even though it handles the double encoding, it decodes everything instead of only the hexadecimal escape sequences in the URL. Is there another method to decode only the hexadecimal values present in the URI and then encode the whole URI on top of that?
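One targeted alternative, sketched in plain TypeScript (the same string methods exist in SnapLogic's expression language, but treat the port as an assumption to validate): instead of decoding the whole segment, collapse only the double-encoded escape sequences. A double-encoded escape always looks like %25 followed by two hex digits (e.g. %253A for a %3A that was encoded twice), so a single regex replace fixes those without touching anything else.

```ts
// Collapse double-encoded escapes: "%253A" -> "%3A", leaving single-encoded
// sequences ("%2F") and literal characters untouched.
function collapseDoubleEncoding(uri: string): string {
  return uri.replace(/%25([0-9A-Fa-f]{2})/g, "%$1");
}

// Hypothetical double-encoded variant of the SAS URL above.
const doubleEncoded =
  "https://myaccount.blob.core.windows.net/sascontainer/sasblob.txt" +
  "?se=2017-04-30T02%253A23%253A26Z&sig=Z%252FRHIX5Xcg0Mq2rqI3OlWTjEg2tYkboXr1P9ZUXDtkk%253D";
console.log(collapseDoubleEncoding(doubleEncoded));
// => ...02%3A23%3A26Z&sig=Z%2FRHIX5Xcg0Mq2rqI3OlWTjEg2tYkboXr1P9ZUXDtkk%3D
```

One caveat: if the source URI can legitimately contain a once-encoded literal percent sign (%25) followed by two hex digits, this replacement would wrongly collapse it, so it is only safe when you know the upstream system always encodes twice.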
Issue with Arrays in Downstream Mappers after a Custom Snap Transformation
We built a custom snap that performs a transformation. You will notice in the custom snap's output (screenshot below) that "medicalEnrollment" is a JSON array. In a downstream Mapper, we attempted a transformation on this array. We have tried (1) using the sl.ensureArray() method and (2) another downstream Mapper performing a transformation on the array. In both attempts, the Mappers do not detect the array, which breaks the transformation. Below is a snippet of the expected preview vs. the actual preview.

We also notice that chaining a JSON Formatter and a JSON Parser right after the custom snap does make the Mappers work correctly, using the pattern shown below.

So my question is: is there something hidden in the JSON Parser snap that allows strong typing of the array that we are missing? Anything in between? @robin, tagging you since I was told you might be able to help out. Thanks!
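For reference, the intent of sl.ensureArray() is to wrap a non-array value in an array; a plain-TypeScript illustration of those semantics (not SnapLogic's source, and the null handling here is an assumption) looks like this. It only helps when the expression engine actually sees a value it recognizes; if a custom snap emits a host-language collection type the engine does not treat as a JSON array, serializing and re-parsing would normalize it, which would be consistent with the Formatter/Parser workaround observed above.

```ts
// Illustration of ensureArray semantics: wrap anything that is not
// already an array, so downstream code can always iterate.
function ensureArray<T>(value: T | T[] | null | undefined): T[] {
  if (value === null || value === undefined) return []; // assumption: empty for missing
  return Array.isArray(value) ? value : [value];
}

console.log(ensureArray([1, 2, 3])); // => [1, 2, 3]  (already an array)
console.log(ensureArray("x"));       // => ["x"]      (wrapped)
```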
Editing JSON for CSVParser to add hundreds of headers
Continuing the discussion from "CSV Parser, no headers, hundreds of columns":

I edited the .slp file to add the column names to the CSV Parser snap in a SnapLogic pipeline. When I uploaded it as a new file, that number of fields was added to the snap, but all of the headers were empty. Below is how I made the changes, by opening the .slp file in a text editor:

"columnList": {
  "value": [
    { "column": { "value": " Number" } },
    { "column": { "value": " Suffix" } }
  ]
}

This only gives empty headers. Please suggest how I could make these changes to the JSON file, and let me know if there is a better way to write the column names without having to add them all manually. Thanks!
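If the goal is to avoid typing hundreds of entries by hand, one option is to generate the columnList fragment from a flat list of names and paste the result into the .slp file. A short sketch (the key names match the snippet above; everything else is illustrative, and note the stray leading spaces in " Number" and " Suffix" in the hand-edited snippet, which are worth trimming first):

```ts
// Generate the CSV Parser "columnList" fragment from a flat list of headers.
const headers: string[] = ["Number", "Suffix", "FirstName", "LastName"]; // your hundreds of names

const columnList = {
  value: headers.map((name) => ({ column: { value: name } })),
};

// Paste this output into the snap's settings inside the .slp file.
console.log(JSON.stringify({ columnList }, null, 2));
```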
Zip Writer S3 - base directory uses full S3 folder structure
I'm using a Zip Writer snap to add two files to a single .zip file. The files are located in an S3 bucket, several directories deep in the bucket's folder structure (S3:///folder1/folder2/folder3/folder4/file.txt). I would like to add the two files to the base directory of the unzipped .zip file (/home/file1.txt, /home/file2.txt). The problem is that no matter what I put into the Base Directory option, I always get my complete S3 bucket folder structure when the .zip is unzipped (/folder1/folder2/folder3/folder4/file1.txt). Is there a way to put my files in the root of the unzipped file without the full S3 bucket folder structure?
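Since the zip entry name is evidently being derived from the full file path, one workaround pattern is to strip the S3 key down to its base name before it reaches the Zip Writer, in whatever field your pipeline carries the path (which field that is depends on your setup and is an assumption here). The string logic itself is trivial; a TypeScript sketch:

```ts
// Reduce a full S3 key to just the file name, so the zip entry
// lands in the root of the archive instead of a nested folder tree.
function baseName(key: string): string {
  return key.substring(key.lastIndexOf("/") + 1);
}

console.log(baseName("folder1/folder2/folder3/folder4/file1.txt")); // => "file1.txt"
```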
XML parser is not able to handle the UTF-8 character set
I am giving input to the XML Parser after transcoding it into UTF-8 format, but the special characters are getting converted into garbage values.

Input, before the XML Parser:
<?xml version="1.0" encoding="UTF-8"?>
<UTF_encoded>Š</UTF_encoded>
<UTF_encoded1>Montréal,</UTF_encoded1>
<street_address></street_address>

Output, after the XML Parser:
"UTF_encoded": "Å "
"UTF_encoded1": "Montréal,"
"zipcode": "11767"
"street_address": "ç¥å¥å·çå·å´å¸ä¸ååºæ¨æ4-34-1, ï½³ï¾ï½¨ï¾ï¾ï½°ï¾ï¾405, "
"full_home_address": "ç¥å¥å·çå·å´å¸ä¸ååºæ¨æ4-34-1"
"national_identifier_type": "ç¥å¥å·çå·å´å¸ä¸ååºæ¨æ4-34-1"

Can someone please help?
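The pattern in the output ("Š" becoming "Å ") is classic mojibake: UTF-8 bytes decoded as Latin-1/Windows-1252 somewhere in the chain, so the input encoding configured on the parser side is the first thing to check. If you need to repair already-garbled strings, mapping the garbled code points back to bytes and decoding them as UTF-8 reverses the damage; a Node/TypeScript sketch (an illustration of the repair, not a SnapLogic feature):

```ts
import { Buffer } from "node:buffer";

// "Å " is what "Š" looks like when its UTF-8 bytes (0xC5 0xA0) are decoded
// as Latin-1. Re-interpreting the garbled code points as UTF-8 bytes
// recovers the original text.
function repairMojibake(garbled: string): string {
  return Buffer.from(garbled, "latin1").toString("utf8");
}

console.log(repairMojibake("MontrÃ©al")); // => "Montréal"
```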
Parser issue with size
I am trying to parse 600 MB files using the JSON Parser. The File Reader reads the file, but the JSON Parser runs into the following error:

Failure: Cannot parse JSON data, Reason: Exception while reading document from the stream, SLDB does not support data larger than 100 MB, Resolution: Please check for correct input JSON data.

What is the best practice here for parsing a large JSON file? What is the role of SLDB here, and what is being stored in SLDB? Even in the architecture document there is no mention of SLDB. Can someone clarify the details here?
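SLDB is SnapLogic's built-in project file store, and the error message indicates it caps files at 100 MB, so the wording suggests the File Reader is pointing at an SLDB path rather than an external file system; reading the file from external storage instead would sidestep the limit. More generally, files this size call for incremental parsing rather than materializing the whole document. For illustration, here is what streaming a large top-level JSON array looks like in Node with the stream-json npm package (a real library, but its use here is an assumption about your environment, not a SnapLogic feature):

```ts
import { createReadStream } from "node:fs";
import StreamArray from "stream-json/streamers/StreamArray";

// Stream a large top-level JSON array one element at a time instead of
// loading all 600 MB into memory at once.
const pipeline = createReadStream("big.json").pipe(StreamArray.withParser());

let count = 0;
pipeline.on("data", ({ value }: { value: unknown }) => {
  count += 1; // process each array element here
});
pipeline.on("end", () => console.log(`parsed ${count} records`));
```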
Aggregate Snap example
This is an example illustrating the Aggregate snap with functions like Sum, Count, Min, Max, and Average. The first snap generates the required input data (Name, Age, Dept, etc.). The next snap, Aggregate, is configured with the following functions:

• Sum of Age for Total Age
• Count of Name (a unique column) for Total Employees
• Min of Age for Minimum Age
• Max of Age for Maximum Age
• Avg of Age for Average Age

Below is a quick screenshot of the configuration of this snap's functions. Attached is the .slp for pipeline reference: Aggregate_2017_03_11.slp (6.0 KB)
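For readers who want the semantics without opening the pipeline, here is the equivalent computation in plain TypeScript over made-up sample documents (the field names match the post; the data and result values are illustrative only):

```ts
interface Employee { name: string; age: number; dept: string; }

const docs: Employee[] = [
  { name: "Ann", age: 34, dept: "HR" },
  { name: "Bob", age: 28, dept: "IT" },
  { name: "Cam", age: 45, dept: "IT" },
];

const totalAge = docs.reduce((sum, d) => sum + d.age, 0); // Sum of Age
const totalEmployees = docs.length;                       // Count of Name
const minAge = Math.min(...docs.map((d) => d.age));       // Min of Age
const maxAge = Math.max(...docs.map((d) => d.age));       // Max of Age
const avgAge = totalAge / totalEmployees;                 // Avg of Age

console.log({ totalAge, totalEmployees, minAge, maxAge, avgAge });
// => { totalAge: 107, totalEmployees: 3, minAge: 28, maxAge: 45, avgAge: 35.67 }
```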
An XML Generator & XML Formatter Snaps example
Here's an example illustrating the use of the XML Generator, XML Formatter, and File Writer snaps. The first snap of the pipeline, XML Generator, produces document-format data from the XML input configured under "Edit XML *" in the snap settings. The second snap, XML Formatter, accepts the document format, encapsulates the data as XML, and produces binary output, which can then feed a File Writer snap (which accepts binary input). Attached is the .slp for pipeline reference: XMLGenerator_Formatter_2017_03_1.slp (4.1 KB)
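To make the document-to-binary step concrete, here is a toy serializer in TypeScript that turns a flat document into an XML string, roughly what the Formatter stage does conceptually (an illustration, not the snap's actual implementation; it handles only the five predefined entities and no attributes or nesting):

```ts
// Escape the five predefined XML entities.
const entities: Record<string, string> = {
  "<": "&lt;", ">": "&gt;", "&": "&amp;", "'": "&apos;", '"': "&quot;",
};
const escapeXml = (s: string): string => s.replace(/[<>&'"]/g, (c) => entities[c]);

// Serialize a flat document into a simple XML element tree.
function toXml(root: string, doc: Record<string, string>): string {
  const body = Object.entries(doc)
    .map(([tag, text]) => `  <${tag}>${escapeXml(text)}</${tag}>`)
    .join("\n");
  return `<?xml version="1.0" encoding="UTF-8"?>\n<${root}>\n${body}\n</${root}>`;
}

console.log(toXml("employee", { name: "Ann", dept: "R&D" }));
// <?xml version="1.0" encoding="UTF-8"?>
// <employee>
//   <name>Ann</name>
//   <dept>R&amp;D</dept>
// </employee>
```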
How to generate a unique sequence number for every document
It's very simple to generate a unique sequence number on each document being processed through a snap. Below is the screenshot, and attached is the sample pipeline: Sequence_2017_02_18.slp (2.8 KB)
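Since the screenshot did not survive, here is the idea in plain TypeScript: a counter that hands each passing document the next number. This is only an illustration of the semantics, not the expression used in the attached pipeline.

```ts
// A closure-based counter: each call returns the next sequence number.
function makeSequence(start = 1): () => number {
  let next = start;
  return () => next++;
}

const nextSeq = makeSequence();
const docs = [{ name: "Ann" }, { name: "Bob" }, { name: "Cam" }];
const numbered = docs.map((doc) => ({ ...doc, seq: nextSeq() }));
console.log(numbered);
// => [ { name: "Ann", seq: 1 }, { name: "Bob", seq: 2 }, { name: "Cam", seq: 3 } ]
```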