How to make a Pipeline wait during execution
This can be achieved with a Python script that makes the pipeline wait at a given point in the flow during execution. Change the value on line number 22 of the script, time.sleep(120). Here, 120 is the number of seconds the pipeline will wait. Note: the value must be provided in seconds.

Python script: Script_to_Wait_pipline.zip (723 Bytes)
Pipeline: Make Pipeline to wait_2017_03_06.slp (4.0 KB)

Count number of records fetched/processed from a flat file/upstream systems (Snowflake, Salesforce, Oracle)/file writer without using a pipeline execute
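The wait technique described above boils down to a single time.sleep call. A minimal standalone sketch in plain Python (outside any Script snap; the function name is illustrative):

```python
import time

def pipeline_wait(seconds):
    """Block for the given number of seconds and report how long we slept.

    Mirrors the attached script's time.sleep(120): the argument is always
    in seconds, so 120 means a two-minute wait.
    """
    start = time.monotonic()
    time.sleep(seconds)
    return time.monotonic() - start

# Use a fraction of a second here; the thread's script uses 120 (2 minutes).
elapsed = pipeline_wait(0.1)
```

In a real Script snap the sleep would simply sit between reading an input document and writing it to the output view, delaying everything downstream.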
Hi Team,

I'm looking to count records in a couple of scenarios, listed below:

(1) Records fetched from a flat file (e.g. Excel, CSV), writing the total record count into a new column, e.g. File Reader --> Mapper (transformation rules here, with a new column added to hold the total record count) --> Excel/CSV Formatter --> File Writer. I've tried using snap.in.totalCount and snap.outputViews inside a Mapper but didn't get the expected results.

(2) Records fetched from a source system like Snowflake, Salesforce, Oracle, etc., without using a count command in the query itself. I'm thinking of using a Group By or an Aggregate snap to get the counts; would that be the right approach?

(3) Counting the number of records processed after the operation has completed. For instance, I'm writing a flat file (Excel/CSV) but want a new column ingested into that file dynamically that states the total number of documents processed, AND an email sent to the team stating the total number of documents processed, e.g. File Reader/Salesforce Read --> Mapper --> Excel/CSV Formatter --> File Writer --> Mapper (anticipating this should have some rules) --> Email Sender (sends count only).

Thanking you in advance for your time and help on this one.

Best Regards,
Darsh

PowerShell command execute Snap
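For scenarios (1) and (3) above, the usual pattern is to buffer the records, count them once, and merge the count back onto each record before formatting and writing. A plain-Python sketch of that count-then-annotate step (outside SnapLogic; the function and column names are illustrative):

```python
def add_total_count(rows, column="total_count"):
    """Annotate every record with the total number of records read.

    Mirrors the Group By / Aggregate idea from the thread: count the
    buffered documents once, then merge the count back onto each row
    before writing the file (or emailing just the count).
    """
    total = len(rows)
    return [dict(row, **{column: total}) for row in rows]

rows = [{"id": 1}, {"id": 2}, {"id": 3}]
annotated = add_total_count(rows)
```

Note this requires holding (or at least counting) all records before any row is written, which is exactly why a Group By/Aggregate snap, rather than a per-document Mapper expression, is the natural fit.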
Hi,

Is there any way to execute command-line scripts (e.g. PowerShell)? Our scenario is to execute a set of commands (located on a server we can reach over FTP/SFTP) at the end of load completion, for a UI refresh. Please advise if there is a direct way, or let us know any workaround!

Thanks,
Aravind

Elastic Mapping with a simple Script Snap
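A common workaround for questions like this is to call the external command from a Script snap via subprocess (the proxy article later in this page uses the same approach for curl). A hedged plain-Python sketch; the ssh/PowerShell argument list mentioned in the comment is hypothetical, and the runnable example uses a portable echo instead:

```python
import subprocess

def run_command(args):
    """Run an external command and return (exit_code, stdout_text).

    From a SnapLogic Script snap, the same call can invoke any CLI
    available on the Snaplex host -- e.g. something like
    ['ssh', 'user@host', 'pwsh', '-File', 'refresh.ps1'] to run a remote
    PowerShell script (that target and script name are hypothetical).
    """
    proc = subprocess.run(args, capture_output=True, text=True)
    return proc.returncode, proc.stdout

code, out = run_command(["echo", "hello"])
```

Checking the returned exit code lets the pipeline route failures to an error view instead of silently continuing.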
An Elastic Mapper which outputs JSON that only contains keys with non-null values.

Background: The use case came up while trying to map fields from a Salesforce Outbound Message. This message would send a varying number of fields from each record, each having non-null values. For example, the SFDC object had 50 fields in total; one record would return a document with 35 of the 50 possible fields populated, the next record would send 25, and so on. The data is then moved to another database that is pre-populated with data for each record. The problem was that, using a Mapper, all 50 fields would need to be mapped in advance, and therefore the fields with null values would potentially overwrite data when updating the database. Our Mapper is static: target schemas need to be defined up front. The problem to solve was how to create a mapper that filters out the null-valued fields from each input document so that the update can occur without overwriting data with nulls.

The Magic: All you need is ONE line of code added to the 'try' block of the Script snap (JavaScript in this case) that removes from the input map any elements that have null values:

```javascript
// …
this.log.info("Executing Transform Script");
while (this.input.hasNext()) {
    try {
        // Read the next document, wrap it in a map and write out the wrapper
        var doc = this.input.next();
        while (doc.values().remove(null));  // the one magic line
        var wrapper = new java.util.HashMap();
        wrapper.put("original", doc);
        this.output.write(doc, doc);  // <-- Optional note: I modified this to flatten the output
        this.log.info("Transform Script finished");
    }
// …
```

In Python you can use this construct in two lines (please note the indentation required in Python):

```python
while (in_doc.values().remove(None)):
    None
```

See the attached test-case pipeline: Elastic Mapping_2017_03_01.slp (9.6 KB)

Limitations: This was created to work only with flattened, non-hierarchical structures.

Debugging JavaScript - General Guidance
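The same null-stripping trick in plain Python dicts, for comparison (the Jython/JavaScript versions above rely on the document being a Java-backed map, where values().remove(null) mutates in place). Like the original, this assumes flat, non-nested documents:

```python
def strip_nulls(doc):
    """Return a copy of a flat document without null-valued keys.

    Pure-Python equivalent of the thread's
    `while (doc.values().remove(null));` trick: keys whose value is
    None are dropped, so a downstream update cannot overwrite existing
    database values with nulls.
    """
    return {k: v for k, v in doc.items() if v is not None}

record = {"Id": "001", "Name": "Acme", "Phone": None, "Fax": None}
mapped = strip_nulls(record)
```

Building a new dict (rather than mutating in place) is the idiomatic Python form; nested structures would need a recursive variant.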
I'm looking for general guidance, not a solution to a specific problem! I have a lot of experience programming in server-side languages like C++, C#, T-SQL, PL/SQL, and others, but am very much a newbie at JavaScript. (When my employer's pipelines use a Script snap, we use JavaScript only, not Python or Ruby.)

How do other SnapLogic users debug the JS in their Script snaps? I've largely been writing the equivalent of "print" statements by creating a debug object or a set of "trace" properties in the document stream, but that's a rather ham-fisted and slow-going method for debugging.

I seem to recall that SnapLogic uses (used?) the Nashorn engine, but now that Snaplexes are on Java 11, is that still true? There's the "NCDbg" project on GitHub, which provides a debugger for Nashorn that can connect to Visual Studio Code (VS Code) as a front end, but it can only handle JS atop an underlying Java 8 or 9 JVM. I'm wondering if there is something similar that would match up more closely with the SnapLogic engines and environment.

What do the rest of you do when you want to debug a Script snap? Thanks!

Configuring the Script Snap to use a configured HTTP proxy environment variable
HTTP-compatible Snap Packs can leverage an HTTP proxy configured in the Snaplex's Network Proxies configuration tab within the SnapLogic Manager web application. However, the Script Snap is different: you can write scripts that call external processes (e.g. curl), and these will not be aware of any proxy configuration set within the SnapLogic application.

curl can be configured to use a proxy directly via the --proxy argument, but if you wish to enforce proxy usage across all uses of the Script Snap, you can set the http_proxy and/or https_proxy environment variables within a special file, /etc/sysconfig/jcc. Environment variables declared within this file will be visible to the Snaplex application (OS-level environment variables will not be). This file (and directory) may not exist on your Snaplex, so you may have to create them (similar to the instructions on the Configuring a custom JRE version page):

```shell
sudo mkdir -p /etc/sysconfig; sudo sh -c "echo 'export http_proxy=username:password@proxy-ip-address:port' >> /etc/sysconfig/jcc"
```

substituting the equivalent values for username/password (if authentication is required), proxy-ip-address, and port (you may also want to add https_proxy). Once this file is created, restart the Snaplex application (/opt/snaplogic/bin/jcc.sh restart or c:\opt\snaplogic\bin\jcc.bat restart) and the http_proxy/https_proxy environment variables will be active within the SnapLogic product.

Assuming your proxy is correctly configured, you can then run your script that calls the external process and, if the process supports using a proxy, it will respect the setting. For example, the following Script Snap (Python) uses the subprocess library to execute curl and adds the response body to the output document.

```python
# Import the interface required by the Script snap.
from com.snaplogic.scripting.language import ScriptHook
import subprocess

class TransformScript(ScriptHook):
    def __init__(self, input, output, error, log):
        self.input = input
        self.output = output
        self.error = error
        self.log = log

    # The "execute()" method is called once when the pipeline is started
    # and allowed to process its inputs or just send data to its outputs.
    def execute(self):
        self.log.info("Executing Transform script")
        while self.input.hasNext():
            try:
                # Read the next input document, store it in a new dictionary,
                # and write this as an output document.
                inDoc = self.input.next()
                proc = subprocess.Popen(['curl', 'https://www.snaplogic.com'], stdout=subprocess.PIPE)
                (out, err) = proc.communicate()
                outDoc = {'original': out}
                self.output.write(inDoc, outDoc)
            except Exception as e:
                errDoc = {'error': str(e)}
                self.log.error("Error in python script")
                self.error.write(errDoc)
        self.log.info("Script executed")

    # The "cleanup()" method is called after the snap has exited the execute() method.
    def cleanup(self):
        self.log.info("Cleaning up")

# The Script Snap will look for a ScriptHook object in the "hook"
# variable. The snap will then call the hook's "execute" method.
hook = TransformScript(input, output, error, log)
```

On execution, the proxy access log should show the request being routed through the proxy.

Python Script Snap still running sub process on stop
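Why the /etc/sysconfig/jcc approach above works: environment variables set for the Snaplex process are inherited by any child process a Script snap spawns, and tools like curl read http_proxy from their environment. A plain-Python demonstration of that inheritance (the proxy URL is a placeholder, not a real proxy, and the parent environment is simulated with env=):

```python
import os
import subprocess
import sys

def child_sees_proxy(proxy_url):
    """Launch a child process and report the http_proxy value it sees.

    Environment variables present in the parent process are inherited by
    children -- which is why exporting http_proxy in the Snaplex process
    environment is enough for external tools spawned from a Script snap.
    """
    env = dict(os.environ, http_proxy=proxy_url)
    result = subprocess.run(
        [sys.executable, "-c", "import os; print(os.environ['http_proxy'])"],
        env=env, capture_output=True, text=True,
    )
    return result.stdout.strip()

seen = child_sees_proxy("http://user:pass@203.0.113.10:3128")
```

If the variable were set only in a shell that is not an ancestor of the Snaplex process, the child would not see it, which is the failure mode the sysconfig file avoids.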
Hi,

I'm using the Script snap to execute some external code on a server, and running subprocess.check_output to execute it leaves the external script running if I stop the pipeline. Is there a way to guarantee the script gets halted when I stop my pipeline? I'm not sure what's going on under the hood.

Thanks,
-James

Generate dynamic HTML file in SnapLogic
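One defensive pattern for the problem above (sketched here in plain Python, outside the Script snap; SnapLogic's own stop behavior is not documented in this thread) is to avoid check_output, keep the Popen handle, and terminate the child explicitly, e.g. from the script's cleanup() hook or via a timeout:

```python
import subprocess
import sys

def run_with_timeout(args, timeout):
    """Run a child process but make sure it cannot outlive the caller.

    subprocess.check_output alone gives you no handle to kill the child;
    holding the Popen object lets a cleanup step terminate it explicitly
    when the caller is stopped or the deadline passes.
    """
    proc = subprocess.Popen(args, stdout=subprocess.PIPE, text=True)
    try:
        out, _ = proc.communicate(timeout=timeout)
        return out
    except subprocess.TimeoutExpired:
        proc.terminate()  # ask the child to exit (SIGTERM on POSIX)
        proc.wait()
        return None

# A long-running child that gets terminated after 0.2 s:
result = run_with_timeout([sys.executable, "-c", "import time; time.sleep(60)"], 0.2)
```

The same proc.terminate() call placed in a Script snap's cleanup() method is a plausible way to tidy up when the pipeline stops, assuming cleanup() runs in that situation.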
Hi All,

I have a question: I have a script file that generates a dynamic HTML file, and this dynamic HTML file is fed into an Email Sender snap.

Problem: I want to place my script file inside SnapLogic directly, but have been unable to achieve this. I am pasting my script here: script.txt (10.4 KB). Can anyone help me place the script code into SnapLogic? I tried using a Script snap but was unable to make it work. Any help would be much appreciated.
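As a sketch of the kind of logic involved: generating an HTML body in plain Python so it can be handed to an Email Sender snap. The attached script.txt is not reproduced in the thread, so the structure below (a simple table report) is an assumption, and the function name is illustrative:

```python
from html import escape

def render_html_table(rows):
    """Render a list of flat documents as a small HTML table.

    Column order follows the first row's keys; values are HTML-escaped
    so record data cannot break the markup.
    """
    if not rows:
        return "<p>No records.</p>"
    headers = list(rows[0])
    head_cells = "".join(f"<th>{escape(str(h))}</th>" for h in headers)
    body_rows = "".join(
        "<tr>" + "".join(f"<td>{escape(str(r.get(h, '')))}</td>" for h in headers) + "</tr>"
        for r in rows
    )
    return f"<table><tr>{head_cells}</tr>{body_rows}</table>"

html_body = render_html_table([{"name": "job1", "status": "OK"}])
```

Inside a Script snap, the returned string would be written into a field of the output document (e.g. an assumed key such as "htmlBody") that the Email Sender snap then references for its message body.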