Forum Discussion
The fetch size should not affect the load; it is used when reading from the database with the SELECT Snap.
The batch size only comes into play when you use INSERT or UPDATE. If you are using the Bulk Snaps, you should not see any such difference, since they use SQL*Loader (SQLLDR) under the covers.
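For context, ROWS is the SQL*Loader command-line parameter that controls how many rows go into each array insert on a conventional-path load (the documented default is 64) and how many rows are loaded between data saves on a direct-path load. Below is a minimal sketch of an invocation, purely to illustrate the parameter; the connect string, file names, and value are placeholders, and nothing here confirms what the Snap actually passes:

```sh
# Illustration of the SQL*Loader ROWS parameter only; the credentials,
# control file, and value are placeholders, not what the Snap generates.
sqlldr userid=scott/tiger@orcl \
       control=load_target.ctl \
       log=load_target.log \
       rows=5000   # conventional path: rows per array insert
```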
Does the Oracle Bulk Load Snap make use of the ROWS SQL*Loader command-line parameter, or, if the parameter isn't passed, does the default value apply?
A few observations and suggestions:
- Without the Aggregate snap or another snap like it in the pipeline, the Multi Execute will execute once for every input document. Adding the Aggregate snap was probably sufficient to make it execute only once, since the Aggregate snap emits only one output document. Other snaps could serve the same purpose, such as Tail or Gate.
- If you're using S3 for staging, consider using the S3 Upload snap. It's much faster than the File Writer snap for writing multiple files.
- Try using Snowflake Bulk Load to load the staged files instead of Snowflake Multi Execute. It should be much simpler; see the sketch after this list.
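For reference, the staged-load pattern behind that last suggestion boils down to Snowflake's COPY INTO. A hedged sketch, assuming a hypothetical external stage named my_s3_stage and target table my_table; the pattern and file-format options are placeholders too, not what the Snap is known to emit:

```sql
-- Hedged sketch of a staged bulk load; @my_s3_stage, my_table, and the
-- format options are hypothetical placeholders.
COPY INTO my_table
  FROM @my_s3_stage
  PATTERN = '.*data_.*[.]csv'
  FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1 FIELD_OPTIONALLY_ENCLOSED_BY = '"');
```

One set-based COPY over the whole stage is typically simpler than issuing per-file statements through Multi Execute.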
- marjan_karafilo (Contributor), 2 years ago
Hi salishrodinger,
You can create a child pipeline in order to write the files into the external stage.
After all files are processed successfully, you can continue with the process.
- salishrodinger (New Contributor II), 2 years ago
Thank you marjan_karafilo for your reply. I did try back then to create a child pipeline right after the File Writer so it could process all my Snowflake executions. I parameterized the child pipeline to run in batches of 30, but I still hit some errors.