
Fetching 1k records from a table using the "Snowflake Select" snap

darshthakkar
Valued Contributor

I'm planning to fetch data from a table using the "Snowflake Select" snap, with a condition in the WHERE clause that should return only 1k records; however, the snap is not returning any results.

I could put the entire query into the "Snowflake Execute" snap instead, but I'd like to do it with Snowflake Select for now. Any help is highly appreciated.

Thanks!

1 ACCEPTED SOLUTION

darshthakkar
Valued Contributor

Wanted to share the solution here. The query was right, and there was no need to change the "Fetch size" either.
The only thing missing was single quotes (' ') around the IDs we were passing in the query. Thus, the query would look like:
WHERE NAME IN ('102345', '1701', '5604878')
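
For context, here is a minimal sketch of the full statement the snap would effectively run once the IDs are quoted (the table name CUSTOMERS is illustrative, not from the original post):

SELECT *
FROM CUSTOMERS
WHERE NAME IN ('102345', '1701', '5604878');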


27 REPLIES

Snowflake Lookup is definitely helpful, and in fact Snowflake Execute does wonders too if joins are used in the query itself. However, with both Snowflake Lookup and Execute, I have seen the performance of the pipeline degrade.

A pipeline that used to complete within a minute now takes around 35-40 minutes just to get through the Lookup/Execute snap.

@robin @bojanvelevski: Would you recommend using the Join snap instead of Snowflake Lookup/Execute for faster pipeline execution?

Please feel free to provide any other suggestions; they would expose me to something new (I'm still new to SnapLogic and learning my way around).

The database lookup pattern works well with an OLTP database, which has low query startup costs and is optimized for single-record operations. With a data warehouse like Snowflake, which is optimized for analytical queries, the per-query startup cost means ID-based lookups will not be as performant as they are on an OLTP database.
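
To make that concrete, here is a hedged sketch of the two query shapes (the CUSTOMERS table and ID column are illustrative). The lookup pattern pays Snowflake's per-query startup cost once for every incoming record, while a single set-based query pays it only once for the whole batch:

-- Lookup pattern: one small query per input document
SELECT * FROM CUSTOMERS WHERE ID = '102345';
SELECT * FROM CUSTOMERS WHERE ID = '1701';

-- Set-based alternative: one query for the whole batch
SELECT * FROM CUSTOMERS WHERE ID IN ('102345', '1701', '5604878');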

You could fetch the required columns of the data set from Snowflake with a select query and then use the In-Memory Lookup snap. If the lookup table fits in memory, that would be more performant. If the data set is too large for that, an OLTP database would be a better choice for such a use case.
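
A rough sketch of that approach, again with an illustrative CUSTOMERS table: load the lookup data once with a single select, feed that output to the In-Memory Lookup snap as the lookup table, and let the matching happen inside the pipeline rather than as one Snowflake query per record:

-- One bulk query whose output feeds the In-Memory Lookup snap
SELECT ID, NAME
FROM CUSTOMERS;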