Re: CLI for BCP (SQL SVR) and dynamic file names
I have to execute the BCP command on Linux nodes. How do I configure the credentials to run BCP on a Groundplex node?

Re: How to Unzip tar.gz file
I was unable to unzip the .tar file, but I hope the solution here works for unzipping zip files.

Decompressing and reading WINDOWS zip files (Designing Pipelines)
Here is a pipeline that reads a .zip file, decompresses (unzips) it, and parses its contents. Please note that our current "Decompress" Snap only supports BZIP2, GZIP, and DEFLATE.
[image]
Binary to Document Snap: the Encode or Decode property is set to NONE
[image]
Mapper configuration
[image]
Zip File Read Snap configuration
[image]
This will return a binary stream, and based on the content type (in this case an Excel .xls file) we snap in an Excel Parser. The pipeline is attached; you need to … Hope that works for you.

SQL server to Amazon S3
I am trying to load large volumes of data from SQL Server to Amazon S3. If I use 'SQL Server Select', extracting a single big table takes far too long. Is there a faster way to extract data from SQL Server to Amazon S3? I am using a Groundplex.

Is there any way to set variables before executing a Snowflake statement
I would like to execute two SET statements before executing "select * from viewname". This view internally uses the VAR1 and VAR2 variables:

set VAR1 = 'ABC';
set VAR2 = 'XYZ';
select * from dbname.schemaname.viewname;

Is there any way to set these variables before executing the SELECT statement?

Re: Getting Error streaming data while writing to a temporary file, reason=No space left on device
I raised a ticket with the SnapLogic support team and the issue went away on its own; they mentioned that a hotfix could have fixed the problem.

Getting Error streaming data while writing to a temporary file, reason=No space left on device
Using Snowflake Bulk Load to load a big table.
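One thing worth checking for a "No space left on device" failure is how much free space the node's temp directory actually has, since the Bulk Load Snap streams data to a temporary file on the Snaplex node before uploading it. Below is a minimal Python sketch of that check; it assumes the default system temp directory, which is only an assumption — the actual staging location depends on your Snaplex configuration, and the 10 GiB threshold is a placeholder you should size to your batch.

```python
import shutil
import tempfile

# Assumption: the Snap stages to the default system temp directory.
# Adjust tmp_dir if your Snaplex is configured to stage elsewhere.
tmp_dir = tempfile.gettempdir()
usage = shutil.disk_usage(tmp_dir)  # returns (total, used, free) in bytes

free_gb = usage.free / 1024 ** 3
print(f"{tmp_dir}: {free_gb:.1f} GiB free of {usage.total / 1024 ** 3:.1f} GiB")

if free_gb < 10:  # hypothetical threshold; size it to your staged batch
    print("Warning: temp space may be too small for a large bulk load")
```

If the free space is smaller than the data the Snap needs to stage, enlarging the buffer size will not help; the node needs more disk, or the staging directory needs to point at a larger volume.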
Got the error after extracting 43+ million rows from the source, with no rows inserted into the target table. I changed the buffer size from 10 MB to 100 MB, but no luck. I am not sure what other settings need to change to run this pipeline. Here is the detailed error message:

Snap errors: {ruuid=79122f75-f6ed-4af3-867b-012f2c8e03fa, label=Snowflake - Bulk Load, failure=Error streaming data while writing to a temporary file, reason=No space left on device, resolution=Address the reported issue.}

Re: Snowflake Bulk Load is giving error =Table does not exist in the database
Thanks for the reply. I have tried all the options: there was nothing in the marked fields, but I cleared them anyway, and I also deleted and re-added the Snap, but no luck. I am using the Chrome browser.

Snowflake Bulk Load is giving error =Table does not exist in the database
Snowflake Bulk Load is giving the error: Cannot find table "schemaname"."tablename", reason=Table does not exist in the database, resolution=Ensure the provided table exists in the database. If I use the Snowflake Insert Snap in place of the Snowflake Bulk Load Snap, it works fine. I am not sure why Snowflake Bulk Load is giving this error; these pipelines were working fine a couple of days back.

Calling a pipeline/task from curl command
I have created a pipeline/task and called it using a curl command, and it returns the document available in the last Snap. I then added an error pipeline to it and called the main pipeline/task using curl again. When there is no error, it returns the document from the last Snap of the main pipeline as expected, but when there is a failure, it simply returns nothing instead of the last Snap's output from the error pipeline. How can I return some value to the curl command when the pipeline fails and enters the error pipeline?

How to Unzip tar.gz file
How do I unzip a tar.gz file?
The Zip File Read Snap does not work for tar.gz files. I am trying the Decompress Snap, but I am not sure how to get the individual files out of the output of Decompress.
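For context on what the two layers of a tar.gz involve: the Decompress Snap can undo the outer gzip compression, but the result is still a single .tar stream containing many files, which is why the individual files are not directly visible in its output. Outside SnapLogic (for example, in a script run on the Groundplex node), Python's standard tarfile module handles both layers in one pass. A minimal sketch — the sample archive and file names below are made up so the example is runnable end to end; in practice you would point `archive` at your real .tar.gz file:

```python
import io
import os
import tarfile
import tempfile

# Build a tiny sample archive so the sketch runs end to end;
# replace `archive` with the path to your real .tar.gz file.
tmp_dir = tempfile.mkdtemp()
archive = os.path.join(tmp_dir, "sample.tar.gz")
with tarfile.open(archive, "w:gz") as tar:
    payload = b"col1,col2\n1,2\n"
    info = tarfile.TarInfo(name="data/report.csv")
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))

# Open with the "r:gz" mode (gzip layer), then walk the tar members
# to get each file out individually.
with tarfile.open(archive, "r:gz") as tar:
    for member in tar.getmembers():
        if not member.isfile():
            continue  # directories and links carry no file content
        data = tar.extractfile(member).read()
        print(member.name, len(data), "bytes")  # prints: data/report.csv 14 bytes
```

Reading members with `extractfile` instead of `tar.extractall()` keeps everything in memory as streams, which is closer to how a pipeline would hand each file to a downstream parser.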