Re: Loading Null values into Snowflake DB via Bulk Load Snap

@amubeen Were you able to resolve your issue with the above information?

Re: Loading Null values into Snowflake DB via Bulk Load Snap

@amubeen There are two ways you can handle this use case:

1. Leave the fields that you want loaded as null empty in the CSV file and un-select the property "Load empty strings" in the Snowflake Bulk Load Snap. This makes the Snowflake Bulk Load Snap load the empty values as null in the table.
2. The null values coming from a CSV Parser will be strings. A condition in the Mapper to convert the "null" string values to a real null will work: $NULLABLECHAR == "null" ? null : $NULLABLECHAR

cc: @robin

Re: Change datatypes dynamically of every column

@vsunilbabu If the "Create new table if not present" option is selected without providing the schema in a secondary input view, varchar will be used for all the columns' data types. In your use case, you can provide the schema of the table that you want to create in a second input view to get exactly the data types that you want in the Snowflake table. The first example in the Snowflake Bulk Load Snap's documentation covers a similar use case: https://docs-snaplogic.atlassian.net/wiki/spaces/SD/pages/1438549/Snowflake+-+Bulk+Load

cc: @dmiller

Re: Salesforce Bulk Upsert Snap

@apoorva_mithal Can you provide a screenshot of the pipeline and, preferably, the downloaded slb file if it does not contain sensitive data? Were the examples provided for the Salesforce Bulk Upsert Snap helpful to you: https://docs-snaplogic.atlassian.net/wiki/spaces/SD/pages/855146980/Salesforce+Bulk+Upsert

Re: Loading JSON to Snowflake

@davidm You don't have to write files to external S3 or Azure storage in order to use the Snowflake Bulk Load Snap. What is the complete failure message and stack trace that you got from the Snowflake Bulk Load Snap?

Re: Snowflake Bulk Load

@venkat The reason the pipeline looked successful and no commands were executed on Snowflake is that the input view of the Bulk Load Snap was open but did not contain any input documents. If you add a dummy JSON Generator with some values, the pipeline will execute. Unfortunately, the minimum number of input views for the Snowflake Bulk Load Snap is 1, which is the reason for this workaround. There is an internal ticket to make the minimum number of input views 0. Also, in the Snowflake Account, the S3 folder name should contain just the name of the folder, not the full path with "http". For example, if test is your folder, just provide the value "test" as the folder.

Re: Snowflake Bulk Load

As long as you make sure that the file contains the column headers in the same case as present in the Snowflake database table, it should not be a problem in most cases. In your case, you have said that the staged data and Snowflake are on different AWS accounts; we have to figure out if there are any gaps there. In the Snowflake account, did you give the Amazon S3 credentials of the staged data?

Re: Snowflake Bulk Load

I took a look at the pipeline. The root cause of the issue seems to be the case sensitivity of the column headers. Can you sync up with the support contact, Eric, on this?
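To illustrate the header-case point above: Snowflake stores unquoted column identifiers in upper case, so a staged CSV with lower-case headers may not match the table's columns. Below is a minimal sketch in plain Python, outside SnapLogic, that upper-cases only the header row before the file is staged; the file names are hypothetical.

```python
import csv

def uppercase_csv_headers(src_path: str, dst_path: str) -> None:
    """Copy a CSV, upper-casing only the header row so the column names
    match Snowflake's default handling of unquoted identifiers."""
    with open(src_path, newline="") as src, open(dst_path, "w", newline="") as dst:
        reader = csv.reader(src)
        writer = csv.writer(dst)
        header = next(reader)                                  # first row = column names
        writer.writerow([name.strip().upper() for name in header])
        writer.writerows(reader)                               # data rows pass through unchanged

# Hypothetical usage:
# uppercase_csv_headers("staged_data.csv", "staged_data_upper.csv")
```

If the table was created with quoted, mixed-case identifiers, the headers would instead have to match that exact case.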
Re: Snowflake Bulk Load

After reading the data with the S3 File Reader, instead of the Snowflake Insert Snap you can use the Snowflake Bulk Load Snap and select the Data source as Input view. I would like to know whether the Bulk Load Snap is able to process that data.

It seems the History does not show any bulk API executions ("COPY INTO"). Can you attach your pipelines after removing the sensitive data?

Re: Snowflake Bulk Load

@venkat I would be interested to see whether any commands were executed on the Snowflake History console after executing the Bulk Load Snap. I understand that 0 records were loaded but, from your screenshot, the pipeline did not fail; let me know if that is the case. I would also be interested to see if you can use the S3 File Reader as an upstream Snap to the Snowflake Bulk Load, with the data source set to Input view. The second option will be very slow, but it will be good to know if there are data issues involved.
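If it helps to confirm from outside SnapLogic whether any COPY INTO statements actually reached Snowflake, here is a minimal sketch using the Snowflake Python connector and the INFORMATION_SCHEMA.QUERY_HISTORY table function; the connection parameters are placeholders for your own account.

```python
import snowflake.connector

# Placeholder credentials; point these at the same account, warehouse, and
# database that the Snowflake Bulk Load Snap is configured to use.
conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    password="my_password",
    warehouse="my_wh",
    database="my_db",
    schema="public",
)

try:
    cur = conn.cursor()
    # Recent statements whose text starts with COPY INTO, i.e. bulk loads.
    cur.execute(
        """
        SELECT query_text, start_time, execution_status
        FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY(RESULT_LIMIT => 200))
        WHERE query_text ILIKE 'COPY INTO%'
        ORDER BY start_time DESC
        """
    )
    for query_text, start_time, status in cur:
        print(start_time, status, query_text[:120])
finally:
    conn.close()
```

If this returns no rows for the time window of the pipeline execution, it would confirm that no COPY INTO was issued, which matches what the History console is showing.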