Snowflake Bulk Load

Hi,

I am trying to use the Snowflake Bulk Load snap to read data from an external S3 location and load it into a Snowflake table.
The challenge is this: under the Snowflake account, I have entered the external bucket name and provided the access keys, but when I run the pipeline, I see that 0 records are loaded.

Note: Snowflake is on one AWS account, and the data is on another AWS account. If I use the S3 File Reader snap, I am able to see the data.

Can anyone help me with this?

Error:

Bulk Load snap settings:

@venkat
I would be interested to see whether any commands were executed on the Snowflake History console after the Bulk Load snap ran. I understand that 0 records are loaded, but from your screenshot the pipeline did not fail; let me know if that is the case.

I would also be interested to see whether you can use the S3 File Reader as an upstream snap to the Snowflake Bulk Load, with the Data Source set to “Input view”. This second option will be very slow, but it would be good to know whether there are data issues involved.

Yes, the pipeline ran successfully, but the data is not loaded. I am able to load data into the table using an S3 read followed by a Snowflake Insert snap, which is what I am trying to avoid.

As per the Snowflake History:

After reading the data with the S3 read, instead of the Snowflake Insert snap, you can use the Snowflake Bulk Load snap and select the Data Source as “Input view”. I would like to know whether the Bulk Load snap is able to process that data.

It seems the History doesn’t have any bulk API executions (“COPY INTO”). Can you attach your pipelines after removing the sensitive data?
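For reference, when the Bulk Load snap does run against an external location, the History should show a COPY INTO command roughly along these lines (a sketch only; the database, table, bucket, and folder names are placeholders, and the exact file format options depend on your snap settings):

```sql
-- Hypothetical shape of the bulk API command that should appear in History;
-- all object names and credentials below are placeholders.
COPY INTO MY_DB.MY_SCHEMA.MY_TABLE
  FROM 's3://my-external-bucket/test/'
  CREDENTIALS = (AWS_KEY_ID = '...' AWS_SECRET_KEY = '...')
  FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1 FIELD_OPTIONALLY_ENCLOSED_BY = '"');
```

If nothing like this shows up in the History, the snap never got far enough to issue the load.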

I took a look at the pipeline. The root cause of the issue seems to be the case sensitivity of the column headers. Can you sync up with the support contact, Eric, on this?

Thank you, I was able to fix it. What if I want to use an external staging file (S3) instead of the S3 File Reader?

As long as you make sure the file contains the column headers in the same case as in the Snowflake database table, it should not be a problem in most cases.
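For context on why the case matters: Snowflake folds unquoted column identifiers to upper case, so a header row like id,name will not match columns the table stores as ID and NAME. A quick way to check the exact case Snowflake expects (the table name here is a placeholder):

```sql
-- Shows the exact column-name case stored for the table (placeholder name):
DESCRIBE TABLE MY_DB.MY_SCHEMA.MY_TABLE;

-- Unquoted identifiers fold to upper case; quoted ones keep their case:
CREATE TABLE T_UNQUOTED (id INT);   -- column is stored as ID
CREATE TABLE T_QUOTED ("id" INT);   -- column is stored as id, case-sensitive
```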

In your case, you have said that the staged data and Snowflake are on different AWS accounts. We have to figure out whether there are any gaps there.
In the Snowflake account, did you give the Amazon S3 credentials of the staged data?
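One way to rule out a cross-account credentials gap is to verify, directly in Snowflake, that those keys can actually list the staged files. A sketch, assuming key-based access; the stage name, URL, and keys are placeholders:

```sql
-- Placeholder stage pointing at the bucket in the other AWS account:
CREATE OR REPLACE STAGE MY_DB.MY_SCHEMA.MY_S3_STAGE
  URL = 's3://my-external-bucket/test/'
  CREDENTIALS = (AWS_KEY_ID = '...' AWS_SECRET_KEY = '...');

-- If this lists your files, Snowflake itself can reach the staged data:
LIST @MY_DB.MY_SCHEMA.MY_S3_STAGE;
```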

Yes, I am using the same credentials while loading data from S3 into the Snowflake table using “Input view” in the Bulk Load snap.

Note: I have changed the headers to all caps in the input file.

@venkat
The reason the pipeline looked successful even though no commands were executed on Snowflake is that the input view of the Bulk Load snap was open but did not contain any input documents. If you add a dummy JSON Generator with some values (e.g. `{"dummy": 1}`), the pipeline will execute. Unfortunately, the minimum number of input views for the Snowflake Bulk Load snap is 1, which is why this workaround is needed. There is an internal ticket to make the minimum number of input views 0.

Also, in the Snowflake account, the S3 folder name should contain just the name of the folder, not the full path with “http”. For example, if test is your folder, just provide the value “test” in the folder field.

It worked, thank you