11-25-2019 04:50 PM
I am trying to create a pipeline that takes a configuration file as input. The file will contain a table name and the column names I need from that table. You can put any number of tables in it, and the number of columns selected can be different for each table. I want to write the data to output files, one per table, with the naming pattern out_.csv.

I used a SQL Execute snap to get the data and pass it to the target, but I am not able to create separate files: all the data is getting appended to one file, and the header row only appears for the first table. Any suggestions on how this can be done? Please feel free to ask questions about the requirement. This will help with one-time loads from a DB to a file system, and I will be able to control the table names and column names on the fly.
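For what it's worth, here is a sketch of what such a configuration file could look like, assuming a simple JSON format (the table and column names below are just examples, not from your actual schema):

```json
{
  "customers": ["id", "name", "email"],
  "orders": ["order_id", "amount"]
}
```

Each key is a table name, and each value is the list of columns to select for that table, so the number of columns can vary per table.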
12-03-2019 07:26 AM
Once you “go binary” (triangle-shaped output) in the CSV Formatter, no data is retained other than the binary data itself, so you no longer have the TableName. You can still reference values that were passed into the pipeline as parameters, though (_TableName). So if you call a child pipeline, passing the table name as a pipeline parameter, the child pipeline can use that parameter in the downstream File Writer.
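To make the parent/child pattern concrete, here is a rough Python analogue of the suggested design, assuming a config dict of table names to columns (all names here are illustrative). The "child" receives the table name as a parameter, so the final write step can still build the per-table filename and header even after the rows have been formatted:

```python
import csv

def child_pipeline(conn, table_name, columns):
    """Analogue of a SnapLogic child pipeline: table_name arrives as a
    parameter, so the write step can use it in the output filename."""
    # Analogue of the SQL Execute snap: select only the configured columns.
    rows = conn.execute(
        f"SELECT {', '.join(columns)} FROM {table_name}"
    ).fetchall()
    out_file = f"out_{table_name}.csv"
    with open(out_file, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(columns)   # each file gets its own header row
        writer.writerows(rows)
    return out_file

def parent_pipeline(conn, config):
    # Parent iterates the config and invokes the child once per table,
    # like a Pipeline Execute snap passing _TableName for each entry.
    return [child_pipeline(conn, table, cols) for table, cols in config.items()]
```

Because each child call opens its own file, you get one output file per table, each with its own header, instead of everything appended into a single file.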
12-03-2019 07:32 AM
Ok. Is there any way to assign a parameter in the same pipeline and access it from the File Writer snap?
12-03-2019 07:39 AM
You can’t change a pipeline parameter from within the pipeline. You might be able to pass a value through as a binary header in the CSV Formatter and then reference it downstream, but that might also make your file a little wonky, not sure. Calling child pipelines is pretty straightforward, though, and this is a typical reason to do so.
12-03-2019 08:06 AM
Thanks for the detailed inputs. Certainly appreciate it, everyone.