Contributions

Re: SnapLogic Execution Mode Confusion: LOCAL_SNAPLEX vs SNAPLEX_WITH_PATH with pipe.plexPath

Ranjith - I have verified with our development team that you are correct: if the plex referenced in Execute On is the same plex that the calling pipeline is running on, then SNAPLEX_WITH_PATH acts the same as LOCAL_SNAPLEX. Analyze Pipeline is based on best practices established by the Professional Services team, and this particular check appears to be based on an outdated understanding of the behavior of SNAPLEX_WITH_PATH. At one time, Pipeline Execute would go back to the Control Plane for load balancing whenever SNAPLEX_WITH_PATH was used, even when Execute On evaluated to the same plex as the calling pipeline. This check should be downgraded to a lesser warning or removed altogether.

Re: Inserting large data in servicenow

deepanshu_1 - You can take some of the memory pressure off your Snaplex nodes by removing the Group By N and JSON Splitter snaps and using the Batch Size option in the Pipeline Execute snap instead. This accomplishes exactly the same result without the memory cost of combining large sets of records into a single document as an array. If you are still experiencing slowness, you can follow up with your ServiceNow admins to see if anything can be done on that side. ServiceNow is not really meant for batch operations, so inserting millions of records into ServiceNow is probably your bottleneck.

Re: How can I pop the last element from one array and append it to another?

adam_gataev - You have the right terminology. Just use a Mapper with two Array methods, concat() and pop(), to accomplish your goal in one expression. Array.pop() removes the last element and returns it, and Array.concat() creates a new array with that element added in. Note that I'm also using the "pass-through" option in the Mapper settings, which allows any other elements in the input document to flow through to the target path without specifying them. Hope this helps!

Re: XML namespace prefix removal from XML document

vpalan - Another solution is to use an expression library that recursively traverses the object tree, updating the keys as it goes. See the attached example pipeline. Hope this helps!

Re: Combine CSV document into a new file

marenas - Here is an updated version of the pipeline that preserves the order of the files. Basically, I've added a Mapper to each path to add a file number and a record number to the documents of both paths before the Union, then sorted the data to ensure proper record ordering, and finally removed the sorting fields from the documents. A couple of new concepts here: snap.in.totalCount is a built-in value that gives us the number of records seen by the input view of the snap. I enabled Pass-Through in the Mapper settings of the "Add fileN sortKey" snaps so the rest of the data flows through without having to specify it. In the "Remove sortKey" Mapper, I use Pass-Through again to let all the fields pass through except the ones listed in the Expression column without a Target Path - this effectively deletes just those two fields. Hope this helps!
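For reference, here is a rough sketch of the snap settings described above. The snap names and the fileN/recordN field names are placeholders of my own and may not match the attached pipeline exactly; the empty-Target-Path deletion behavior is the Pass-Through trick described in the post.

    Add file1 sortKey (Mapper, Pass-Through enabled)
        Expression: 1                     Target path: $fileN
        Expression: snap.in.totalCount    Target path: $recordN

    Add file2 sortKey (Mapper, Pass-Through enabled)
        Expression: 2                     Target path: $fileN
        Expression: snap.in.totalCount    Target path: $recordN

    Sort
        Sort by: $fileN (ascending), then $recordN (ascending)

    Remove sortKey (Mapper, Pass-Through enabled)
        Expression: $fileN                Target path: (left empty - removes the field)
        Expression: $recordN              Target path: (left empty - removes the field)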
Re: Combine CSV document into a new file

marenas - I would not recommend using Gate to combine the data files - it can cause excessive memory consumption for very large files, since the data has to be held completely in memory. I recommend the attached approach instead.

The trick here is in the Mapper on the bottom path and the second input view added to the CSV Parser. If you review the documentation, you will see that the second input view lets you specify a header and, if you choose, datatypes; I simply added the header in the Mapper. The Union then combines the data the way you are looking for. One thing to note is that Union takes documents as they arrive from each input view - meaning that if both CSV Parsers are sending a large volume of records, you will see them intermixed; it does not wait for all of the documents on the first path before consuming the documents from the second path. There are easy fixes for this, but I thought I would mention it in case preserving the data ordering between the input files is a requirement. Hope this helps!

Re: Create a date range by hour

Coyote - Sorry that our SnapGPT currently isn't working well for this case. I have submitted an internal message to our SnapGPT support group and they will look into it. In the meantime, I've created a sample pipeline to do what you're looking for. It's basically just two snaps: a Mapper to generate an array with the start/end timeframes, then a JSON Splitter to convert the array of objects into individual JSON documents. Please download the attached ZIP file, decompress it, then import the SLP to your Designer.

Let me take a moment to explain the expression in the Mapper, as it contains a few pieces you may not be familiar with:

    sl.range(0, Math.ceil(($end - $start) / (1000 * 60 * 60)))
      .map(x => {
          "start" : $start.plusHours(x),
          "end"   : ($start.plusHours(x+1) > $end ? $end : $start.plusHours(x+1))
      })

- sl.range() is a built-in function that creates an array of incrementing values from start to stop.
- Math.ceil() is another built-in function that rounds a float value up to the next integer. Note that you can perform date-difference logic directly, with the result given as the number of milliseconds between the two dates.
- Array.map() lets you update the entries in an array; the syntax used here creates an object with the desired start/end date-time values for each entry.
- Date.plusHours() adds the number of hours generated by the sl.range() function.
- Finally, note the ternary operator used to ensure the final "end" timestamp doesn't go past the current time (given to us in the input document).

Hope this helps!

Re: Trying to 'flatten' array using a mapper and pull out values

Auroth - I believe what you are looking for is:

    jsonPath($, "$tdline_array..TDLINE")

jsonPath is a tool that lets you perform powerful scans through JSON. In this case, I'm using the descent operator (..) to search for TDLINE regardless of where it exists in the JSON document. If you're already comfortable with the Object and Array functions, jsonPath is the natural complement to those expressions. Hope this helps!
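To make the descent operator concrete, here is a small illustration with made-up data (the sample document below is mine, not from the original thread):

    Input document (illustrative):
        { "tdline_array": [ { "TDLINE": "first line" }, { "item": { "TDLINE": "second line" } } ] }

    Expression:
        jsonPath($, "$tdline_array..TDLINE")

    Result:
        ["first line", "second line"]

The descent operator collects every TDLINE it finds under tdline_array, however deeply it is nested, and returns the values as a flat array.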
Re: Transpose columns(Dynamic) into rows.

Sorry - here is the attachment.

Re: Transpose columns(Dynamic) into rows.

kumar25 - Here is another solution that does not use the Script snap. I typically avoid the Script snap, as it is notoriously difficult to debug and requires a different skill set that some developers may not yet possess. This solution uses a Gate snap to gather all the data into a single array. Please be conscious of how large the file you're consuming is, as this loads the entire dataset into memory - which can't be avoided here, since you want to pivot the entire dataset.

Then, in the Mapper, we're using the Object.keys() method to find the key names that were read in - basically just so we can loop through and grab all the values you want to pivot. We're then using the Array.map() method to change the value of each array element - this is where the records are pivoted, by re-creating each record from its set of column values. Within map(), we're also using the Object.get() method to retrieve the value associated with the current field name. After the Mapper, a JSON Splitter snap pulls the array elements out into individual documents, completing the pivot. Hope this helps!
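To give a feel for how those three methods combine, here is a minimal sketch of the kind of Mapper expression involved, assuming the Gate snap places the gathered documents in an input0 array and every record shares the same keys; the { column, values } output shape and the sample data are illustrative only, and the expression in the attached pipeline may differ:

    Gate output (illustrative):
        { "input0": [ { "ID": 1, "A": 10 }, { "ID": 2, "A": 30 } ] }

    Mapper expression (sketch):
        $input0[0].keys().map(k => { "column": k, "values": $input0.map(row => row.get(k)) })

    Result:
        [ { "column": "ID", "values": [1, 2] }, { "column": "A", "values": [10, 30] } ]

A JSON Splitter pointed at the resulting array then turns each column object into its own document, as described above.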