SQL Bulk Load truncating data

Hi Community,

I am trying to use SQL Bulk Load with the Create Table option. When the pipeline runs, it errors on string truncation. I do not understand how to use the Mapper with only one side as a second input into the pipeline, and I do not want to have to map all the fields/objects by hand; I have 1900+ fields across 167+ tables.
SQL Insert inserts columns as VARCHAR(8000), and that doesn't work because VARCHAR does not support Unicode characters.
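
For what it's worth, the Unicode loss is easy to reproduce directly in SQL Server. This is just an illustration (the temp table is hypothetical): non-Latin text stored in a VARCHAR column gets converted to the column's code page and mangled, while NVARCHAR round-trips it.

```sql
-- Hypothetical demo table: VARCHAR is single-byte/code-page text,
-- NVARCHAR is UTF-16, so only the latter preserves Unicode.
CREATE TABLE #demo (
    v  VARCHAR(10),
    nv NVARCHAR(10)
);

INSERT INTO #demo (v, nv) VALUES (N'こんにちは', N'こんにちは');

SELECT v, nv FROM #demo;  -- v typically comes back as '?????', nv is intact

DROP TABLE #demo;
```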

Thanks for any help.

@Garrett I am not 100% sure about your problem, but it seems you want to transform only a few fields and leave the rest untouched. If that's the case, you can use the pass-through functionality of the Mapper: do the transformation only for the required fields, keep the target name the same as the source name, and check the Pass through checkbox.

That sounds like something I could do… but I need to define those fields programmatically.
I don't know how to use a one-sided Mapper, e.g. as the second input.

Use nvarchar or text in SQL DDL
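
Something like this, as a minimal sketch (the table and column names are made up; note that TEXT is deprecated in SQL Server, so NVARCHAR(MAX) is usually the better choice):

```sql
-- Hypothetical target table: NVARCHAR columns instead of VARCHAR,
-- so Unicode data survives the load.
CREATE TABLE dbo.MyTarget (
    Id          INT,
    Name        NVARCHAR(255),
    Description NVARCHAR(MAX)   -- prefer this over the deprecated TEXT type
);
```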

Hi Mohit. You cannot use NVARCHAR unless you define the schema with the Mapper, and I do not want to define all of the fields. I want SnapLogic to send them over as NVARCHAR instead of VARCHAR, but it doesn't have that option. The documentation says you can use a Mapper as an input to provide a mapping, but I don't know how to define or set that up. In other words, I may be able to define the mapping programmatically, but I do not see how to use the second input view to do that.

Hi @Garrett

I had a similar requirement, and in that case I used the SQL Server Execute snap before all the other snaps in the pipeline to create the table with the desired data types. After that, I loaded the data.
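
For example, the statement configured in the SQL Server Execute snap could look roughly like this (a sketch only; the table and columns are placeholders for whatever your load needs):

```sql
-- Run by the SQL Server Execute snap before the load:
-- create the target table with NVARCHAR columns if it does not exist yet.
IF OBJECT_ID('dbo.TargetTable', 'U') IS NULL
    CREATE TABLE dbo.TargetTable (
        Id       INT,
        Title    NVARCHAR(500),
        Comments NVARCHAR(MAX)
    );
```

With the table created up front, the load just inserts into the existing table instead of relying on the snap's Create Table option.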

Do you have a sample pipeline?
How do you determine the fields dynamically? E.g. how do you create the table if you don't know ahead of time what the fields are?