Tuesday - last edited Tuesday
Hello,
My input is S3, where we receive files for different objects. Our target is to read these files and insert the data into a Postgres DB, where we have created target tables for all objects. How can we dynamically map all the objects using one control table that holds the input column name and the target column name? That way, if we add a new column to the control table and to the target table in the future, the pipeline should be able to map and insert the data into the Postgres DB without manual changes.
Control_table columns:
- object_name — the target table name
- Input_column_name — the column name from the file
- Target_column_name — the column name in the Postgres target table
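To make the idea concrete, here is a minimal sketch (outside of SnapLogic) of how a control table like the one above could drive the mapping at runtime. The table contents, object names, and column names below are illustrative assumptions, not values from the original post:

```python
# Example control-table rows: (object_name, input_column_name, target_column_name).
# In practice these would be read from the control table in Postgres.
CONTROL_TABLE = [
    ("customer", "Cust_ID",   "customer_id"),
    ("customer", "Cust_Name", "customer_name"),
    ("orders",   "Ord_No",    "order_number"),
]

def mapping_for(object_name):
    """Return {input_column: target_column} for one target table."""
    return {inp: tgt for obj, inp, tgt in CONTROL_TABLE if obj == object_name}

def rename_record(record, object_name):
    """Rename a parsed file record's keys to the target column names,
    dropping any columns that are not listed in the control table."""
    m = mapping_for(object_name)
    return {m[k]: v for k, v in record.items() if k in m}

row = rename_record({"Cust_ID": 1, "Cust_Name": "Acme"}, "customer")
# row == {"customer_id": 1, "customer_name": "Acme"}
```

Adding a new column then only requires a new control-table row (plus the column in the target table); the lookup code itself does not change.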
Thanks in advance.
Wednesday
@userg - Take a look at the documentation for the input views of the CSV Parser snap. There is a second input view that you can use to define the columns coming from the input file. I would simply name the fields based on the target column names so that no real translation is required: use the column ordering of the input, but the column naming of the target.
Hope this helps!
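Conceptually, the suggestion above amounts to parsing the file positionally while supplying the target column names as the headers, so no renaming step is needed afterwards. A rough sketch of that idea using Python's `csv` module (the file content and column names here are assumptions):

```python
import csv
import io

# Sample headerless file content; in the real pipeline this comes from S3.
raw = "1,Acme\n2,Globex\n"

# Column ordering matches the input file; column NAMES match the target table.
target_names = ["customer_id", "customer_name"]

reader = csv.DictReader(io.StringIO(raw), fieldnames=target_names)
rows = list(reader)
# rows[0] == {"customer_id": "1", "customer_name": "Acme"}
```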
Wednesday
Hi @koryknick ,
We cannot map it dynamically that way, right? We are going to get new files (for new objects) and new column names, so every time we get a new field or file we would need to map it manually in the pipeline. Also, we have different target tables to insert into. Can we do it using a control table? Is real-time translation possible?
Thank you for your response.
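For the multiple-target-tables part of the question, one common pattern is to generate the INSERT statement from the control-table mapping at runtime, so the same pipeline logic serves every object. A hedged sketch, assuming psycopg2-style `%s` placeholders and illustrative names (this is one possible approach, not the SnapLogic-specific answer):

```python
def build_insert(target_table, mapping, record):
    """Build a parameterized INSERT for one parsed file row.

    mapping: {input_column: target_column} from the control table.
    record:  one row parsed from the input file.
    Columns absent from the mapping are ignored, so new columns only
    require new control-table rows, not pipeline changes.
    """
    cols = [mapping[k] for k in record if k in mapping]
    vals = [record[k] for k in record if k in mapping]
    placeholders = ", ".join(["%s"] * len(cols))
    sql = f'INSERT INTO {target_table} ({", ".join(cols)}) VALUES ({placeholders})'
    return sql, vals

sql, vals = build_insert(
    "customer",
    {"Cust_ID": "customer_id", "Cust_Name": "customer_name"},
    {"Cust_ID": 1, "Cust_Name": "Acme"},
)
# sql  == 'INSERT INTO customer (customer_id, customer_name) VALUES (%s, %s)'
# vals == [1, "Acme"]
```

Note that table and column names come from your own control table here; if that table can be edited by untrusted users, they should be validated before being interpolated into SQL, since only the values go through placeholders.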