How to make a Pipeline wait during execution
This can be achieved with a Python script that makes a pipeline wait in the flow during execution. Change the value on line 22 of the script: time.sleep(120). Here 120 is the number of seconds the pipeline should wait. Note: the value must be provided in seconds.

Python script: Script_to_Wait_pipline.zip (723 Bytes)
Pipeline: Make Pipeline to wait_2017_03_06.slp (4.0 KB)
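For reference, the attached script is not reproduced above, but a Script Snap that pauses a pipeline typically follows the standard SnapLogic Python (Jython) script template. The sketch below is only an illustration of that approach, not the attachment itself; 120 is the value the post tells you to change.

    from com.snaplogic.scripting.language import ScriptHook
    import time

    class TransformScript(ScriptHook):
        def __init__(self, input, output, error, log):
            self.input = input
            self.output = output
            self.error = error
            self.log = log

        def execute(self):
            while self.input.hasNext():
                doc = self.input.next()
                time.sleep(120)  # wait time in seconds; change this value as needed
                self.output.write(doc, doc)  # pass the document through unchanged

    hook = TransformScript(input, output, error, log)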
Flatten JSON files into CSV files
Created by @schelluri

The pipeline pattern flattens a JSON file, which has multiple objects, and turns it into a CSV file.

Configuration
Sources: JSON Generator
Targets: CSV file
Snaps used: JSON Generator, JSON Formatter, JSON Parser, Script, CSV Formatter, File Writer

Downloads
MS_Flatten_Script.slp (31.4 KB)
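The Script Snap in the pattern does the flattening. The attachment itself is not shown above, so the following is only a sketch of how such a flattening step is commonly written; the function name and the dotted-key convention are illustrative assumptions, not taken from the attachment.

    def flatten(obj, prefix="", out=None):
        # Recursively flatten nested dicts/lists into dotted keys,
        # e.g. {"a": {"b": 1}} becomes {"a.b": 1}.
        if out is None:
            out = {}
        if isinstance(obj, dict):
            for key, value in obj.items():
                flatten(value, prefix + key + ".", out)
        elif isinstance(obj, list):
            for index, value in enumerate(obj):
                flatten(value, prefix + str(index) + ".", out)
        else:
            out[prefix[:-1]] = obj
        return out

    print(flatten({"name": "Ada", "address": {"city": "NYC", "zip": "10001"}}))
    # {'name': 'Ada', 'address.city': 'NYC', 'address.zip': '10001'}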
Remote file renaming/moving
Hi, I have a requirement to push a file to an SFTP server. Since the file is quite large, and to avoid a race condition (us writing while a remote process reads at the same time), we would like to write the file with a different extension, or to a temp folder on the remote server, and then rename or move the file on the server. None of the File Snaps support this feature. Could you please help me identify a workaround for the above problem?
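For context, the write-then-rename pattern the poster is asking for looks like the following outside of SnapLogic. This is a minimal sketch assuming a paramiko SFTP client; the host, credentials, and paths are placeholders.

    import paramiko

    # Placeholder connection details for illustration only.
    transport = paramiko.Transport(("sftp.example.com", 22))
    transport.connect(username="user", password="secret")
    sftp = paramiko.SFTPClient.from_transport(transport)

    # Write under a temporary name so readers never see a partial file...
    sftp.put("local/data.csv", "/inbound/data.csv.tmp")
    # ...then rename in one step once the upload has finished.
    sftp.rename("/inbound/data.csv.tmp", "/inbound/data.csv")

    sftp.close()
    transport.close()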
Count number of records fetched/processed from a flat file/upstream systems (snowflake, salesforce, oracle)/file writer without using a pipeline execute
Hi Team,

I’m looking to count records in the scenarios listed below:

(1) Records fetched from a flat file (e.g. Excel, CSV), writing the total record count into a new column, e.g. File Reader --> Mapper (transformation rules here, with a new column added to hold the total record count) --> Excel/CSV Formatter --> File Writer. I’ve tried using snap.in.totalCount and snap.outputViews inside a Mapper but didn’t get the expected results.

(2) Records fetched from a source system like Snowflake, Salesforce, Oracle, etc., without using a count command in the query itself. I’m thinking of using a Group By or an Aggregate Snap to get the counts; would that be the right approach?

(3) Counting the number of records processed after the operation has completed. For instance, I’m writing a flat file (Excel/CSV) but want a new column ingested into that file dynamically that states the total number of docs processed, AND to send an email to the team that states the total number of docs processed. e.g. File Reader/Salesforce Read --> Mapper --> Excel/CSV Formatter --> File Writer --> Mapper (anticipating this should have some rules) --> Email Sender (sends count ONLY)

Thanking you in advance for your time and help on this one.

Best Regards,
Darsh
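As a reference point for scenario (3), one way to produce a total is a Script Snap that counts documents as they pass through and emits the total at the end. The sketch below uses the standard SnapLogic Jython script template; the total_records field name is an illustrative assumption, and the count arrives as a trailing summary document rather than a column on every row.

    from com.snaplogic.scripting.language import ScriptHook

    class TransformScript(ScriptHook):
        def __init__(self, input, output, error, log):
            self.input = input
            self.output = output
            self.error = error
            self.log = log

        def execute(self):
            count = 0
            while self.input.hasNext():
                doc = self.input.next()
                count += 1
                self.output.write(doc, doc)  # pass each record through unchanged
            # One extra document carrying the total, e.g. for an Email Sender.
            self.output.write({"total_records": count})

    hook = TransformScript(input, output, error, log)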
Powershell command execute Snap
Hi, is there any way to execute command-line scripts (e.g. PowerShell)? Our scenario is to execute a set of commands (located on a server that we can connect to through FTP/SFTP) at the end of load completion for a UI refresh. Please advise if there is a direct way, or let us know any workaround!!

Thanks,
Aravind
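One common workaround, sketched here rather than taken from this thread, is to run the remote commands over SSH from a Script Snap. The paramiko client, host name, credentials, and script path below are all illustrative assumptions.

    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    # Placeholder host and credentials.
    client.connect("winserver.example.com", username="svc_user", password="secret")

    # Invoke the PowerShell script remotely and collect its output.
    stdin, stdout, stderr = client.exec_command(
        'powershell -File "C:\\scripts\\refresh_ui.ps1"')
    print(stdout.read())

    client.close()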
Elastic Mapping with a simple Script Snap
An Elastic Mapper which outputs JSON that only contains keys that have non-null values.

Background: The use case came up while trying to map fields from a Salesforce Outbound Message. This message would send a varying number of fields from each record, each having non-null values. For example, the SFDC object had 50 fields in total; one record would return a document with 35 of the 50 possible fields populated, the next record would send 25, and so on. The data is then moved to another database that is pre-populated with data for each record. The problem was that, using a Mapper, all 50 fields would need to be mapped in advance, and therefore the fields with null values would potentially overwrite data when updating the database. Our Mapper is static: target schemas need to be defined up front. The problem to solve was how to create a Mapper that keeps only the non-null fields from each input document, so that the update can occur without overwriting data with null values.

The Magic: All you need is ONE line of code added to the ‘try’ block of the Script Snap (JavaScript in this case) that removes from the input map any elements that have null values:

    ...
    this.log.info("Executing Transform Script");
    while (this.input.hasNext()) {
        try {
            // Read the next document, wrap it in a map and write out the wrapper
            var doc = this.input.next();
            while (doc.values().remove(null));
            var wrapper = new java.util.HashMap();
            wrapper.put("original", doc);
            this.output.write(doc, doc); // <-- Optional note: I modified this to flatten the output
            this.log.info("Transform Script finished");
        }
    ...

Because values() is a live view of the document map, each call to remove(null) deletes one entry whose value is null and returns true; looping until it returns false strips them all.

In Python you can use the same construct in two lines (please note the indentation required in Python):

    while (in_doc.values().remove(None)):
        None

See the attached test case pipeline.

Limitations: This was created to work only with flattened, non-hierarchical structures.

Elastic Mapping_2017_03_01.slp (9.6 KB)
Debugging Javascript - General Guidance
I’m looking for general guidance, not a solution to a specific problem! I have a lot of experience programming in server-side languages like C++, C#, T-SQL, PL/SQL, and others, but am very much a newbie at JavaScript. (When my employer’s pipelines use a Script Snap, we use only JavaScript, not Python or Ruby.)

How do other SnapLogic users debug the JavaScript in their Script Snaps? I’ve largely been writing the equivalent of “print” statements by creating a debug object or a set of “trace” properties in the document stream, but that’s a rather ham-fisted and slow-going way to debug.

I seem to recall that SnapLogic uses (used?) the Nashorn engine, but now that Snaplexes are on Java 11, is that still true? There’s the “NCDbg” project on GitHub, which provides a debugger for Nashorn that can connect to Visual Studio Code (VSCode) as a front end, but it can only handle JS atop an underlying Java 8 or 9 JVM. I’m wondering if there is something similar that would match up more closely with the SnapLogic engines and environment.

What do the rest of you do when you want to debug a Script Snap? Thanks!
Call a command line utility on the Snaplex node
Created by @pkona

This pipeline pattern uses the Script Snap (here labeled Execute Script Snap) to call a command line utility on the Snaplex node. The pipeline calls any allowed shell command on the Snaplex node, and the command is executed as the user that is running the JCC process. In this sample the command can be configured in the Mapper Snap.

Configuration
Sources: Python Script
Targets: JSON File
Snaps used: Mapper, Script, JSON Formatter, File Writer

Downloads
Pattern to call-shell-command using Script Snap on Snaplex node.slp (6.8 KB)
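The attached pattern is not reproduced above, but a Script Snap of this kind typically shells out with subprocess, as in the sketch below; the "command" field name is an assumption standing in for whatever the upstream Mapper supplies.

    import subprocess
    from com.snaplogic.scripting.language import ScriptHook

    class TransformScript(ScriptHook):
        def __init__(self, input, output, error, log):
            self.input = input
            self.output = output
            self.error = error
            self.log = log

        def execute(self):
            while self.input.hasNext():
                doc = self.input.next()
                cmd = doc.get("command")  # set upstream by the Mapper (assumed field name)
                # Runs on the Snaplex node as the user that owns the JCC process.
                result = subprocess.check_output(cmd, shell=True)
                self.output.write(doc, {"command": cmd, "output": result})

    hook = TransformScript(input, output, error, log)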