Contributions

Re: Delaying a process

I am using a Script snap with a 15-second sleep. The snap takes slightly more than 15 seconds overall because of script load, execution, and input/output time. Try this one; it may work for you.

Scripting language: Python

```python
# Import the interface required by the Script snap.
from com.snaplogic.scripting.language import ScriptHook
import time

class TransformScript(ScriptHook):
    def __init__(self, input, output, error, log):
        self.input = input
        self.output = output
        self.error = error
        self.log = log

    # The "execute()" method is called once when the pipeline is started
    # and allowed to process its inputs or just send data to its outputs.
    def execute(self):
        self.log.info("Executing Transform script")
        while self.input.hasNext():
            # Read the next input document, pause for 15 seconds,
            # then pass it through unchanged.
            inDoc = self.input.next()
            outDoc = inDoc
            time.sleep(15)
            self.output.write(inDoc, outDoc)
        self.log.info("Script executed")

# The Script snap will look for a ScriptHook object in the "hook"
# variable. The snap will then call the hook's "execute" method.
hook = TransformScript(input, output, error, log)
```

Re: Comparing the schema and discard the Object

mohit_jain, the solution is straightforward:
1. Get the header from your initial file.
2. Merge it with your incremental file.
3. Filter matched and unmatched records into two different output streams.

Download the solution pipeline Compare_Schema_Discard_load; the screenshot was attached to the original post. (A standalone Python sketch of the same logic appears at the end of this page.)

Re: Convert array into different format.

A JSON object cannot repeat the same key multiple times, so the expected output structure is not valid JSON. (See the duplicate-key sketch at the end of this page.)

Re: Storing Error Data

You can route errors to the snap's error view and save them in a separate CSV file. If the returned error is a custom error, route the response to a separate downstream flow. Also, in the CSV Formatter, check the "Ignore empty stream" option to avoid creating a file when no documents arrive from upstream.

Re: Multiple Postgres DB

Create a Postgres Dynamic Account and parameterize your credentials as needed. Also ensure that every parameter you reference in the account is defined in your pipeline with its credential value; during execution the pipeline passes these credentials to the connection. (A parameterized-connection sketch appears at the end of this page.)

Re: How to check the parameters passed from parent to child pipeline

You can see the parameter value in the Dashboard under pipeline properties if the capture option is enabled in the child pipeline's properties. Alternatively, you can check the parameter inside the child pipeline using pipe.args. Example:

pipe.args.hasPath('your_parameter') ? 'present' : 'not present'

Re: How to delete extra columns in csv file which have no data in it

It seems your input file is not a properly formatted CSV. Please provide a sample input file so the use case can be understood.

Re: Extract data from oracle table

If you are talking about a pipeline parameter, you can use it in almost all of the snaps. If you want to use a local variable generated as the output of a snap, use the pass-through option and map the variable accordingly.

Re: How to add MFT account in snaplogic

I created a Basic Auth account earlier to connect to an MFT server. To read the file, I used an SFTP file path with the Basic Auth credentials. (A minimal SFTP sketch appears at the end of this page.)

Re: Extract data from oracle table

Make sure that you are executing the Oracle Select only after the stored procedure and the commit statement have completed. (See the ordering sketch at the end of this page.)
Also, can you share a snapshot of your pipeline? The details above are not enough to identify the issue.
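
For "Comparing the schema and discard the Object" above: a minimal standalone sketch of the same header-compare-and-route idea in plain Python, outside SnapLogic. The file names and the exact-match rule are assumptions for illustration, not part of the original pipeline.

```python
import csv

def read_header(path):
    """Return the header row of a CSV file as a list of column names."""
    with open(path, newline="") as f:
        return next(csv.reader(f))

# "initial.csv" and "incremental.csv" are hypothetical file names.
initial_header = read_header("initial.csv")

with open("incremental.csv", newline="") as f:
    reader = csv.DictReader(f)
    # Route the incremental data based on whether its header matches the
    # initial header, mirroring the two output streams of the pipeline.
    if reader.fieldnames == initial_header:
        matched, unmatched = list(reader), []
    else:
        matched, unmatched = [], list(reader)

print("matched:", len(matched), "| unmatched (discarded):", len(unmatched))
```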
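For "Convert array into different format." above: a quick demonstration of why repeated keys cannot work, using Python's json module, which keeps only the last value when a key is repeated.

```python
import json

# A parser that accepts this input still loses data: only the last
# value of the repeated key survives.
print(json.loads('{"item": "a", "item": "b"}'))  # {'item': 'b'}

# The conventional fix is to collect the repeated values into an array.
print(json.loads('{"item": ["a", "b"]}'))        # {'item': ['a', 'b']}
```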
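For "Multiple Postgres DB" above: a rough sketch of the same parameterization idea in plain Python with psycopg2. Resolving the connection fields at run time mirrors how a Dynamic Account resolves its fields from pipeline parameters; the environment variable names are hypothetical.

```python
import os
import psycopg2

# Each field is resolved at execution time from a parameter, so the
# same code can target any of the Postgres databases.
conn = psycopg2.connect(
    host=os.environ["PG_HOST"],
    port=int(os.environ.get("PG_PORT", "5432")),
    dbname=os.environ["PG_DB"],
    user=os.environ["PG_USER"],
    password=os.environ["PG_PASSWORD"],
)
conn.close()
```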
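For "How to add MFT account in snaplogic" above: a minimal sketch of reading a file over SFTP with basic (username/password) authentication, here using paramiko. The host, credentials, and paths are placeholders.

```python
import paramiko

transport = paramiko.Transport(("mft.example.com", 22))
transport.connect(username="mft_user", password="mft_password")
sftp = paramiko.SFTPClient.from_transport(transport)
try:
    # Download the remote file to the local working directory.
    sftp.get("/outbound/data.csv", "data.csv")
finally:
    sftp.close()
    transport.close()
```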
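For "Extract data from oracle table" above: a sketch of the execution order being described (stored procedure first, then commit, then the SELECT), using the python-oracledb driver. Connection details and object names are hypothetical.

```python
import oracledb

conn = oracledb.connect(user="app_user", password="app_password",
                        dsn="db-host/ORCLPDB1")
cur = conn.cursor()

cur.callproc("load_staging_table")          # 1. run the stored procedure
conn.commit()                               # 2. commit its work
cur.execute("SELECT * FROM staging_table")  # 3. only then read the data
print(cur.fetchall())
```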