Re: SSH account for windows sftp

SFTP (the SSH File Transfer Protocol) is just a protocol, so it doesn't really matter what kind of system (Windows, Linux, etc.) is on the other end. You can authenticate via user/password (basic auth) or an SSH key.

Re: Is there a better way to calculate a value and use it throughout a pipeline?

Sometimes it's useful to have a parent pipeline that executes a child pipeline and passes values like these as parameters to the child. The child can then reference them at any time (_parm) without having to keep passing the value along from one snap to the next. This is especially useful when the pipeline is doing things like document-to-binary conversion, where you can't easily pass the values along.

Re: snap.out.totalCount variable

Documents (records) are generally processed through snaps (including the Router) in a streaming fashion, independent of each other. To accomplish what you're looking for, perhaps you could group all the documents together using one of the Group By snaps to get a "before" count to copy down a side path. Then split them back apart into separate documents and do whatever processing you need. Afterwards, group them together again to get an "after" count, which you can merge back with the "before" count to compare and take action on. It's best to keep things streaming if you can, though, especially if there might be a lot of documents.

Re: Creating and testing error pipelines

An "error pipeline" might not be right for you if you want specific processing for the various types of errors that various snaps can raise. In my opinion it is better suited to generic error handling, and there are fairly standard fields ($error, $reason, $error_entity) that you can throw to logs or wherever for troubleshooting. For specific error handling on different snaps within your pipeline, you might want to put the error processing on the snap's error path itself.
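The parent/child parameter idea from the "calculate a value and use it throughout a pipeline" reply can be sketched in plain Python. The pipelines are simulated as functions here; the `_parm` dict and `batch_id` name are illustrative, not SnapLogic syntax:

```python
def child_pipeline(docs, _parm):
    """Child pipeline: every step can read the parameter, like _parm in SnapLogic."""
    return [{**d, "batch_id": _parm["batch_id"]} for d in docs]

def parent_pipeline(docs):
    """Parent pipeline: compute the value once, then hand it down as a parameter."""
    params = {"batch_id": "run-42"}  # value calculated once, up front
    return child_pipeline(docs, params)

out = parent_pipeline([{"x": 1}, {"x": 2}])
# every document downstream of the child can see batch_id without it being
# threaded from snap to snap
```

The design point is the same as in the reply: the parameter rides alongside the stream instead of inside it, so binary or document-to-binary steps don't lose it.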
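The before/after count pattern from the snap.out.totalCount reply can be sketched as follows; the Group By, Splitter, and Merge snaps are simulated with plain Python list operations, and the document shapes are assumptions:

```python
def group_all(docs):
    """Like a Group By snap gathering every document into a single group."""
    return {"group": list(docs)}

def split(grouped):
    """Like a Splitter: turn the group back into individual documents."""
    return list(grouped["group"])

def process(docs):
    """Placeholder for the real per-document processing (here: a simple filter)."""
    return [d for d in docs if d.get("keep")]

incoming = [{"id": 1, "keep": True}, {"id": 2, "keep": False}, {"id": 3, "keep": True}]

before = group_all(incoming)
before_count = len(before["group"])                 # "before" count, copied down a side path

after_docs = process(split(before))
after_count = len(group_all(after_docs)["group"])   # "after" count

dropped = before_count - after_count                # merge the two counts and compare
```

Note that, exactly as the reply warns, `group_all` holds every document in memory at once, which is why leaving things streaming is preferable for large volumes.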
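The standard error fields mentioned in the error-pipelines reply can be handled generically; this is a hedged sketch of a log-oriented handler, where the incoming document shape and the logging destination are assumptions:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("error-pipeline")

def handle_error(error_doc):
    """Generic handler for an error-view document carrying the standard fields."""
    summary = {
        "error": error_doc.get("error"),          # $error
        "reason": error_doc.get("reason"),        # $reason
        "entity": error_doc.get("error_entity"),  # $error_entity
    }
    log.info("pipeline error: %s", summary)
    return summary

# Hypothetical error document, shaped like the fields named in the reply
result = handle_error({
    "error": "Connection refused",
    "reason": "endpoint unreachable",
    "error_entity": "REST Get",
})
```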
For example, for "pull the plug" (can't connect) type errors, you should be able to test your sad/error paths by pointing at something else (a bad URL, etc.) rather than asking your vendor to kill their site; I'm sure you can figure out how to test your code without the participation of your partners.

Re: Creating and testing error pipelines

If you have an error pipeline defined for your base pipeline, then I believe it will always run (and wait for error data) whenever your base pipeline runs, not only when there is actually an error. If you have an error path on a snap, then data should only be passed that way when errors occur.

Re: Table name in SQL select snap

"Failed to retrieve metadata" sounds more related to preview, where the snap tries to pull and suggest fields for the WHERE clause and so on. Do you get this error when you actually run/execute the pipeline, or only when previewing/configuring?

Re: File exists or not

If the problem is doing something else when there is no data to trigger your downstream snaps (Router, etc.), you could look at patterns in some of the "when no data" threads, like this one:

Performing an Action when there is no data (Designing Pipelines): "A common integration pattern is to do something when no data is received. For example, we might read a file, parse it, and find that no records meet some filter criteria. As a result, we might send an email, or insert a ticket into a ticket management system like ServiceNow. However, in SnapLogic, this can be somewhat more difficult than it seems initially because of the streaming architecture. In fact, many snaps will not execute without input documents - rather hard to accomplish when there i…"

Basically, create some fake data, join it with your actual data (or missing data) from your Directory Browser, then check whether you actually had data in a downstream Router snap (if/else).
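The fake-data pattern for the "File exists or not" question can be sketched like this: union a sentinel document with the real stream, then route on whether any real document arrived. The `_sentinel` field and function names are illustrative, not SnapLogic syntax:

```python
def with_sentinel(real_docs):
    """Union one fake marker document with the real data, like joining fake data
    with the Directory Browser output."""
    return list(real_docs) + [{"_sentinel": True}]

def route(docs):
    """Router if/else: did any real (non-sentinel) document arrive?"""
    real = [d for d in docs if not d.get("_sentinel")]
    if real:
        return ("data", real)   # normal downstream branch
    return ("no_data", [])      # e.g. send an email or open a ticket

branch_empty, payload_empty = route(with_sentinel([]))            # nothing found
branch_found, payload_found = route(with_sentinel([{"file": "a"}]))  # file exists
```

The sentinel guarantees the router always receives at least one document, which is the whole point given that many snaps will not execute on an empty stream.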
Re: Scheduled Tasks don't run on schedule

If you need precise execution times, then yes, you should switch to triggered tasks with an external scheduler.

Re: Is there way to create global variable?

Unfortunately there is no such thing as a global variable. There are a few ways to accomplish what I think you're looking for, though:

- Pass the value as a pipeline parameter. Have a first pipeline that gets your token and then executes a second pipeline, passing the token as a pipeline parameter. That second pipeline will then have access to the parameter (like a global variable) for all of its steps (calling other pipelines, etc.).
- Keep passing the token along from snap to snap. In your example, the Pipeline Execute snaps should produce output documents with an "original" array containing the data that was passed into the snap. If your token was part of that, you should be able to use a Mapper to put the value back where you want it for the next downstream snaps.

Re: CSV to IDOC Conversion

The CSV Parser should output a separate document to the stream for each row.
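The "original" array trick from the global-variable reply can be sketched in Python; the document shapes here are assumptions about what a Pipeline Execute output looks like, not the exact snap schema:

```python
def pipeline_execute(input_doc, child_result):
    """Simulated Pipeline Execute output: the child's result plus an 'original'
    array holding the document that was passed in."""
    return {**child_result, "original": [input_doc]}

def restore_token(doc):
    """Mapper-style step: copy the token out of original[0] back to the top
    level for the next downstream snaps."""
    return {**doc, "token": doc["original"][0]["token"]}

out = pipeline_execute({"token": "abc123", "payload": 1}, {"status": "ok"})
restored = restore_token(out)
```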
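The per-row behavior described in the CSV to IDOC reply can be illustrated with Python's csv module, standing in for the CSV Parser: each data row becomes its own document (dict) on the stream:

```python
import csv
import io

def parse_csv(text):
    """Like a CSV Parser snap: emit one document (dict) per data row,
    keyed by the header row."""
    return list(csv.DictReader(io.StringIO(text)))

docs = parse_csv("id,name\n1,alpha\n2,beta\n")
# docs -> [{"id": "1", "name": "alpha"}, {"id": "2", "name": "beta"}]
```

Downstream steps (such as an IDOC mapping) then see two independent documents, not one blob, which matches the streaming model described in the other replies.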