New element in mapper

I have a pipeline in which I am reading a file and parsing it with a CSV parser.

David| 18| M

I am able to parse the file, but I would like to append, at the end of each record, the name of the file the record came from.

So I am reading the file with a File Reader, copying the file name with a Copy snap, and merging with a Join, but somehow I only see the file name on the first record.

I tried a merge join, but it only merges the output onto the first record. How do I merge the file name onto each mapped element?

So I am expecting output something like this:

Name Age Gender Filename
David 18 M names.csv
Julie 20 F names.csv
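To make the desired transformation concrete, here is a rough Python sketch of the same logic (parse a pipe-delimited file and tag each record with its source file name). This is only an illustration of the data flow, not SnapLogic configuration; the function name `parse_with_filename` is made up for the example.

```python
import csv
import os

def parse_with_filename(path):
    """Parse a pipe-delimited file and append the source file name to each record."""
    filename = os.path.basename(path)
    records = []
    with open(path, newline="") as f:
        for row in csv.reader(f, delimiter="|"):
            # Fields arrive like "David", " 18", " M"; strip the padding.
            name, age, gender = (field.strip() for field in row)
            records.append({"Name": name, "Age": age,
                            "Gender": gender, "Filename": filename})
    return records
```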


Has anyone done something like this? Any help is much appreciated.

If the file name is the same throughout, you can just add a row in the Mapper with the file name as a static value, mapped to a target path such as $Filename.

I don’t think there’s a way to configure the CSV Parser to include the binary header from the input file (which would contain the file name) in the output documents. Instead, you’ll need to move the file read and parse into a child pipeline that can be called from PipelineExecute. The file name can then be passed as a pipeline parameter that can be referenced in any snap in the child.

Here’s a pair of pipelines to demonstrate:

FileParserParent_2018_09_07.slp (5.4 KB)
FileParserChild_2018_09_07.slp (5.4 KB)

The parent has a DirectoryBrowser that you should point at the directory you’re traversing. The child reads the file, parses it, and has a mapper to add the $Filename. Right now, the output of the child is sent out of the PipeExec output view in the parent. You might want to send it somewhere else in the child. If you want to process the files in parallel, you can change the “Pool Size” property in the PipeExec to allow more than one child to run at a time.
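The parent/child pattern above can be pictured as a worker pool: the parent lists the files and hands each path to a child routine, with the Pool Size capping how many children run at once. A rough Python analogy (the function `run_children` is invented for illustration; `child` stands in for the child pipeline):

```python
from concurrent.futures import ThreadPoolExecutor

def run_children(paths, child, pool_size=4):
    """Run the 'child' callable once per file path, up to pool_size at a time,
    loosely mirroring PipeExec's 'Pool Size' property."""
    with ThreadPoolExecutor(max_workers=pool_size) as pool:
        results = pool.map(child, paths)  # one invocation per file, order preserved
    # Flatten the per-file record lists into one output stream.
    return [record for records in results for record in records]
```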

Thanks, I was able to get this done with the child pipeline approach.

I am wondering why the parser loses headers such as $['content-location'].

If the parser had a flag to preserve those headers, we would be able to accomplish all of this in one pipeline.

You can also add a Binary to Document snap after the File Reader, then a Copy snap, and on one branch a Document to Binary snap followed by the CSV Parser. After the CSV Parser, add a Join snap to join the output of the Copy snap (which still carries $['content-location']) with the output of the parser.

But a join is an expensive operation, and I am not sure how many records you are dealing with in this pipeline. This is just a suggestion for one possible way to implement it in a single pipeline.
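To make the copy-and-join idea concrete, here is a rough Python sketch: one branch keeps only the binary header (with the content-location), the other parses the CSV body, and the "join" attaches that single header document to every parsed record. The input dict layout and the function name `parse_and_join` are assumptions made for the example, not an actual SnapLogic document shape.

```python
import csv
import io

def parse_and_join(binary_doc):
    """Simulate Copy -> (header branch, parse branch) -> Join."""
    # Branch 1 (Copy): keep only the binary header with the file location.
    header = {"content-location": binary_doc["content-location"]}
    # Branch 2 (Document to Binary -> CSV Parser): parse the pipe-delimited body.
    rows = csv.reader(io.StringIO(binary_doc["content"]), delimiter="|")
    parsed = [dict(zip(("Name", "Age", "Gender"),
                       (field.strip() for field in row))) for row in rows]
    # Join: merge the single header document onto each parsed record.
    return [{**record, "Filename": header["content-location"]} for record in parsed]
```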