Too much time executing a pipeline

darshthakkar
Valued Contributor

Hi Team,

The pipeline is taking too much time to execute. Isn't it strange that it spends 40+ minutes on a single File Reader snap? Snaps below for reference:

[Screenshots of the pipeline and snap statistics]

CC: @dmiller @ptaylor @siwadon @koryknick

15 REPLIES

If you just want the Snowflake Execute to run once after the previous snaps finish, and it doesn't actually use the input documents, consider using the Gate snap and note its options.

Noted, thank you both, @ptaylor and @koryknick, for your comments.
Yes, I do need the input documents prior to the Snowflake Execute snap, not just to wait for it to finish. That's why I mentioned using an "Inner Join".

Let me shed some more light on what I was trying to achieve. I have a flat file with ~6k records that I'm interested in. Those ~6k records are IDs whose data I need from the Snowflake DB, which is accomplished by joining multiple tables in Snowflake. So I wrote a query in the Snowflake Execute snap with multiple joins, and since there were ~6k records, I couldn't put a WHERE clause listing all 6k values inside the query. Instead, I enabled the input view of the Snowflake Execute snap and added a clause like AND a1.NAME = $rawKnowledgeArticles so that only those ~6k records would be considered. You guessed it right: NAME in Snowflake = rawKnowledgeArticles from the flat file.
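To picture the pattern described above (one query execution per input document when the Snowflake Execute snap's input view is enabled), here is a minimal sketch using Python's sqlite3 in place of Snowflake; the table name, columns, and ID values are hypothetical stand-ins for the real schema:

```python
import sqlite3

# In-memory stand-in for the Snowflake tables (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (name TEXT, body TEXT)")
conn.executemany(
    "INSERT INTO articles VALUES (?, ?)",
    [("KA-1", "first"), ("KA-2", "second"), ("KA-3", "third")],
)

# IDs read from the flat file (the $rawKnowledgeArticles field per document).
flat_file_ids = ["KA-1", "KA-3"]

# One query execution per input document -- the pattern the snap follows
# when its input view feeds a parameterized query.  With ~6k IDs this
# means ~6k round trips, which is where the slowdown comes from.
results = []
for raw_id in flat_file_ids:
    row = conn.execute(
        "SELECT name, body FROM articles WHERE name = ?", (raw_id,)
    ).fetchone()
    if row:
        results.append(row)

print(results)  # [('KA-1', 'first'), ('KA-3', 'third')]
```

The per-record cost is negligible against a local in-memory database, but against a remote warehouse each iteration pays network and query-compilation overhead, which is what scales badly at ~6k records.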

I could have used an Inner Join (which I'm currently using), but the Inner Join doesn't display an output preview due to the nature of the data I currently have; it doesn't generate any preview even if I change my settings to 2000 documents. So I went ahead with development starting from the Snowflake Execute snap (thank you @ptaylor for your suggestion on the other thread I opened about Joins), and it was pretty fast: the entire pipeline executed in 40s max. Once development was finished, I plugged the flat file output into the Snowflake Execute input, and that did run, to be honest, but took ~1.5 hrs.

The workaround was to switch back to an Inner Join once development was complete, since the output preview is no longer required, and using the Inner Join reduced the time to 13 min instead of ~1.5 hrs. 13 min is still long, but the pipeline execution generates ~6k files in different formats, which justifies the 13 min, so I'm fine with it. For testing, I used Head/Tail with 50 records, and that took about 15 s to run.
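The Inner Join route can be pictured the same way: fetch the Snowflake side once, then join it against the flat-file documents in memory, which is roughly what a Join snap does. A sketch with hypothetical field names (rawKnowledgeArticles from the thread; NAME/BODY invented for illustration):

```python
# Flat-file documents (hypothetical sample of the ~6k records).
flat_docs = [{"rawKnowledgeArticles": "KA-1"}, {"rawKnowledgeArticles": "KA-3"}]

# Rows returned by a single Snowflake Execute run (no per-record filter).
db_rows = [
    {"NAME": "KA-1", "BODY": "first"},
    {"NAME": "KA-2", "BODY": "second"},
    {"NAME": "KA-3", "BODY": "third"},
]

# Index the DB side once, then stream the flat-file side through it:
# an inner join on NAME == rawKnowledgeArticles.  The query runs once,
# so total cost is one round trip plus a cheap in-memory lookup per doc.
by_name = {row["NAME"]: row for row in db_rows}
joined = [
    {**doc, **by_name[doc["rawKnowledgeArticles"]]}
    for doc in flat_docs
    if doc["rawKnowledgeArticles"] in by_name
]
```

This is why the join route dropped the runtime from ~1.5 hrs to 13 min: the expensive multi-join query executes once instead of ~6k times.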

koryknick
Employee

If using Gate, make sure you change the "Output Document Content" setting from "All Input Documents" to one of the other options, to prevent everything from being held in memory. You could also use the Tail snap if you don't need to send the input documents to the Snowflake Execute snap.

I don’t know what you’re trying to do in this pipeline, but removing the input view from the Snowflake Execute would have it execute as soon as the pipeline starts, which might not be the behavior you want.

Yes, good points, Kory, especially about disabling the input view.

Tail would probably make more sense than Gate if there’s only one input view. I couldn’t recall the name of that option in the Gate snap (even though I added it!) so just said “and note its options”. 🙂

koryknick
Employee

If you really want performance out of this, I would recommend that you load the ID values into another Snowflake table and run your complex query using that table of IDs as your filtering condition in the query itself. The load of IDs to that table can be done using the Snowflake Bulk Load snap and should be pretty quick - then the query using the Snowflake Execute only has to run once to return all data.
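The staging-table approach above can be sketched with the same sqlite3 stand-in; the table names are hypothetical, and in the real pipeline the Snowflake Bulk Load snap would populate the ID table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (name TEXT, body TEXT)")
conn.executemany(
    "INSERT INTO articles VALUES (?, ?)",
    [("KA-1", "first"), ("KA-2", "second"), ("KA-3", "third")],
)

# Step 1: bulk-load the flat-file IDs into a staging table
# (the Snowflake Bulk Load snap's job in the real pipeline).
conn.execute("CREATE TABLE id_stage (name TEXT)")
conn.executemany("INSERT INTO id_stage VALUES (?)", [("KA-1",), ("KA-3",)])

# Step 2: a single set-based query joins against the staging table,
# so the complex query runs exactly once instead of once per ID, and
# the filtering happens inside the database where it is cheapest.
rows = conn.execute(
    """
    SELECT a.name, a.body
    FROM articles a
    JOIN id_stage s ON s.name = a.name
    ORDER BY a.name
    """
).fetchall()

print(rows)  # [('KA-1', 'first'), ('KA-3', 'third')]
```

Compared with the in-pipeline Inner Join, this also avoids pulling the full unfiltered result set out of the warehouse, which is usually the fastest option of the three.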