Forum Discussion
Hi, @Akshita! Welcome to the SnapLogic pipeline developer community.
To answer your question, we’ll need some clarification from you. As it stands, there are several apparent contradictions, so it’s not clear what you want to do.
First, are the source and target really both inputs? Or is the source the input and the target the output? The Mapper snap takes only one input view (a document stream) and produces one output view (also a document stream); it transforms the input documents into output documents, thus “mapping” them.
Second, what format are the files? CSV? Something else?
Third, are there column headings or names associated with the columns? In the source file? In the target file?
In the meantime, I’ll try to give you a few things to think about:
Generally, when you read data in SnapLogic, it ends up in a JSON document stream as attribute/value pairs, which are not order-dependent. So, first you’ll read the data and get it into a document stream, then transform the document stream into a set of names and values appropriate for the output. Only when you go to write the output will you actually put the columns in some kind of order.
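To make the read/transform/write flow concrete, here’s a rough Python analogy (not SnapLogic itself; the column names `first_name`, `last_name`, `surname`, and `given` are made up for illustration). Each row becomes an order-independent dict of attribute/value pairs, the “mapping” step renames attributes, and column order is only fixed at write time:

```python
import csv
import io

# Hypothetical source CSV (column names invented for this sketch)
source_csv = "first_name,last_name\nAda,Lovelace\nAlan,Turing\n"

# Step 1: read -- each row becomes an attribute/value dict (order-independent)
rows = list(csv.DictReader(io.StringIO(source_csv)))

# Step 2: transform -- rename attributes to what the target expects
mapped = [{"surname": r["last_name"], "given": r["first_name"]} for r in rows]

# Step 3: write -- only here does column order get decided, via fieldnames
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["surname", "given"])
writer.writeheader()
writer.writerows(mapped)
print(out.getvalue())
# surname,given
# Lovelace,Ada
# Turing,Alan
```

Notice that until step 3, nothing cares which “column” comes first; the data is just named values, which is the same mental model a Mapper uses.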
When the intermediate form of the data really does require ordering, you’ll use arrays and/or ordered lists to manage that. But that is more for handling “row 2” vs “row 20,” not for columns within the rows. Think of it just like columns in a table in an RDBMS – those are generally order-independent as well (even though the physical storage underneath does have an ordering).
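The row-vs-column distinction can be shown in a couple of lines of Python (again just an analogy, with invented field names): rows live in a list because their position matters, while each row is a dict whose key order is irrelevant.

```python
# Column order inside a row does not matter: two dicts with the same
# attribute/value pairs are equal regardless of key order.
row_a = {"id": 2, "name": "Turing"}
row_b = {"name": "Turing", "id": 2}
assert row_a == row_b

# Row order does matter, so rows are kept in a list; "row 2" vs "row 20"
# is managed by list position, which you can sort or rearrange.
rows = [{"id": 2, "name": "Turing"}, {"id": 1, "name": "Lovelace"}]
rows.sort(key=lambda r: r["id"])
print([r["name"] for r in rows])
# ['Lovelace', 'Turing']
```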
Looking forward to your reply with the additional information!
– JB, aka “Forbin”