Forum Discussion
That’s clever! It feels odd to start a pipeline by getting good data out of an error; my mind isn’t used to that kind of thing. But it is definitely working.
Thank you very much, Dimitri.
As an addition (or rather, a subtraction 🙂), I tried removing the File Writer and File Reader snaps and connecting the binary output of the Mapper snap directly to the binary input of the CSV Parser snap, and it still works 🙂
Cheers,
Dimitri
- philliperamos · 6 years ago · Contributor
That sounds good.
I pivoted the data as JSON, then used a combination of the JSON Splitter, Mapper, and Data Validator snaps to validate it. I still had to hard-code some JSON-generation scripts, but that was the easy part.
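Outside SnapLogic, the split-then-validate step described above can be sketched in plain Python. This is only an illustrative analogy, not the actual snap behavior: `split_records` plays the role of a JSON Splitter (one record per array element) and `validate` plays the role of a Data Validator with simple per-field rules. All names and rules here are hypothetical.

```python
import json

# Hypothetical field rules, analogous to Data Validator constraints.
RULES = {
    "id": lambda v: isinstance(v, int),
    "name": lambda v: isinstance(v, str) and len(v) > 0,
}

def split_records(doc):
    """Split the top-level 'rows' array into individual records,
    like a JSON Splitter snap would."""
    return list(doc["rows"])

def validate(record, rules=RULES):
    """Return the names of fields that are missing or fail their rule."""
    return [field for field, check in rules.items()
            if field not in record or not check(record[field])]

doc = json.loads('{"rows": [{"id": 1, "name": "a"}, {"id": "x", "name": ""}]}')
results = [validate(r) for r in split_records(doc)]
print(results)  # first record passes, second fails both rules
```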
Thanks!
- Ksivagurunathan · 6 years ago · Contributor
The way we developed it, we don’t need to change the SnapLogic pipeline to validate a different set of files, whether that means a whole new set of validations or a few new column validations for the same files. All we need to do is add the validation rule and the corresponding file name to the table. It’s a more dynamic way of validating the data and column names in a file.
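The table-driven idea above can be sketched outside SnapLogic as well. In this hypothetical Python sketch, validation rules live in a table keyed by file name, so supporting a new file or adding a column check means adding rows to the table rather than changing the pipeline. The table layout, rule names, and file names are all illustrative assumptions.

```python
import csv
import io

# Hypothetical rule table: (file_name, column, rule_name).
# Adding a row here enables a new validation without code changes.
RULE_TABLE = [
    ("orders.csv", "order_id", "not_empty"),
    ("orders.csv", "qty", "numeric"),
]

# Reusable named checks the table can refer to.
CHECKS = {
    "not_empty": lambda v: v.strip() != "",
    "numeric": lambda v: v.replace(".", "", 1).isdigit(),
}

def validate_file(file_name, text):
    """Apply only the rules registered for this file name.
    Returns (row_index, column, failed_rule) tuples."""
    rules = [(col, rule) for f, col, rule in RULE_TABLE if f == file_name]
    errors = []
    for i, row in enumerate(csv.DictReader(io.StringIO(text))):
        for column, rule in rules:
            if not CHECKS[rule](row.get(column, "")):
                errors.append((i, column, rule))
    return errors

print(validate_file("orders.csv", "order_id,qty\nA1,3\n,abc\n"))
```

A file with no entries in the table simply passes, which matches the "one pipeline, many file types" goal described above.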
- philliperamos · 6 years ago · Contributor
Thanks. We’re thinking of doing something similar with the dynamic validation approach, as the approach I described above is leading to huge memory problems.