Flatten JSON files into CSV files
Created by @schelluri

This pipeline pattern flattens a JSON file containing multiple objects and turns it into a CSV file.

Configuration
Sources: JSON Generator
Targets: CSV file
Snaps used: JSON Generator, JSON Formatter, JSON Parser, Script, CSV Formatter, File Writer

Downloads: MS_Flatten_Script.slp (31.4 KB)
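The pattern's Script snap does the flattening; the .slp itself isn't shown here, but the core idea can be sketched in plain Python. This is a minimal sketch, not the pattern's actual script: nested objects become dotted-path column names, and the union of all keys becomes the CSV header.

```python
import csv
import io
import json

def flatten(obj, prefix=""):
    """Recursively flatten nested dicts/lists into dotted-path keys."""
    flat = {}
    items = obj.items() if isinstance(obj, dict) else enumerate(obj)
    for key, value in items:
        path = f"{prefix}.{key}" if prefix else str(key)
        if isinstance(value, (dict, list)):
            flat.update(flatten(value, path))
        else:
            flat[path] = value
    return flat

def json_to_csv(json_text):
    """Flatten a JSON array of objects and render it as CSV text."""
    rows = [flatten(rec) for rec in json.loads(json_text)]
    fieldnames = sorted({k for row in rows for k in row})
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()
```

Records missing a key simply get an empty cell, so objects with differing shapes still land in one consistent CSV.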
Dynamically change the Delimiter in CSV parser

I'm designing a pipeline that will handle multiple delimiters in a single CSV Parser. I'm using an expression library that holds the delimiter details for the respective files:

[
  { "src_file": "aab", "Delimit": "PIPE",  "tgt_table": "T_STG_AAB" },
  { "src_file": "abc", "Delimit": "TAB",   "tgt_table": "T_STG_ABC" },
  { "src_file": "efg", "Delimit": "COMMA", "tgt_table": "T_STG_EFG" }
]

In a Mapper snap I can write an expression that refers to this library: lib.sample.find(x => x.src_file == "incoming_filename").get('Delimit'). But when I try to use the same expression in the CSV Parser, it is not accepted. Is this really possible?
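Whether the CSV Parser's delimiter field accepts an expression depends on the snap, but one common workaround is to resolve the symbolic name ("PIPE", "TAB", "COMMA") to a literal character first, for instance in a Mapper or Script snap. A Python sketch of that lookup step, using the entries from the library above (the mapping table itself is an assumption):

```python
# Map the symbolic delimiter names used in the expression library
# to the literal characters a CSV parser expects.
DELIMITERS = {"PIPE": "|", "TAB": "\t", "COMMA": ","}

FILE_CONFIG = [
    {"src_file": "aab", "Delimit": "PIPE", "tgt_table": "T_STG_AAB"},
    {"src_file": "abc", "Delimit": "TAB", "tgt_table": "T_STG_ABC"},
    {"src_file": "efg", "Delimit": "COMMA", "tgt_table": "T_STG_EFG"},
]

def delimiter_for(src_file):
    """Look up the literal delimiter character for a source file name."""
    entry = next((x for x in FILE_CONFIG if x["src_file"] == src_file), None)
    if entry is None:
        raise KeyError(f"no delimiter configured for {src_file!r}")
    return DELIMITERS[entry["Delimit"]]
```

Passing the resolved character (rather than the symbolic name) downstream sidesteps the question of what the parser's property will evaluate.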
CSV to Workday Tenant

Submitted by @stodoroska from Interworks

This pipeline reads a CSV file, parses the content, and then the Workday Write Snap calls the web service operation Put_Applicant to write the data into a Workday tenant.

Configuration
If there is no match in the SQL Server lookup table, MKD is used as the default country code.
Sources: CSV file on the file share system
Targets: Workday tenant
Snaps used: File Reader, CSV Parser, SQL Server - Lookup, Mapper, Union, Workday Write

Downloads: CSV2Workday.slp (17.3 KB)
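The "default code on no lookup match" piece of the configuration is a common pattern worth spelling out. A minimal Python sketch, with a hypothetical lookup table standing in for the SQL Server table:

```python
# Hypothetical country lookup table; in the pipeline this is a
# SQL Server - Lookup snap. MKD is returned when no match is found,
# mirroring the default described in the configuration.
COUNTRY_CODES = {"Germany": "DEU", "France": "FRA"}

def country_code(name, default="MKD"):
    """Return the mapped country code, falling back to a default."""
    return COUNTRY_CODES.get(name, default)
```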
Exporting text file with CSV formatting, but with different numbers of fields

I'm working on a pipeline that needs to create and send a text file to a third party. The text file contains rows formatted in the same way as a CSV file: a header row, then the data rows, then a footer row. The header and footer rows have only 6 columns, and the data rows have 33 columns; in both cases I cannot send more or fewer columns, because their system will reject the file.

Here is my pipeline: the first section has 3 SQL Server Execute snaps that get the 3 types of rows, which I then union together. The select contains 2 fields that cannot appear in the resulting text file; I only need them for the Sort snap, to get the rows in the correct order. The Mapper after the Sort snap removes the 2 sort columns.

The problem: if I leave "Null safe access" unchecked, the Mapper fails because Col6 to Col33 do not exist in 2 of the rows; if I check it, it creates Col6 to Col33 as nulls in those 2 rows and adds too many fields to them in the text file.

Is there any way to:
A) Remove the 2 fields without using a Mapper,
B) Remove the resulting null-valued fields after the Mapper, or
C) Tell the CSV Formatter not to create a field when its value is null?

Thanks
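Option C is easy to express outside the formatter: drop null-valued fields per row just before serializing, so a 6-column header/footer and 33-column data rows coexist in one file. A Python sketch (e.g. for a Script snap); it assumes a null field can be dropped wherever it occurs, which is safe here because only the padded trailing columns are null:

```python
import csv
import io

def write_ragged_csv(rows):
    """Write rows with differing field counts, omitting None-valued
    fields so header/footer rows keep 6 columns and data rows keep 33."""
    out = io.StringIO()
    writer = csv.writer(out)
    for row in rows:
        writer.writerow([v for v in row if v is not None])
    return out.getvalue()
```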
Repeat Target Path/Header Name in a csv file

Hi, my CSV has headers that repeat themselves: BLANK2, BLANK2, BLANK2, NAME, BLANK2. The Mapper gives me an error saying that I can't have identical target path names. I tried adding the header fields to the CSV Formatter and Binary Header, which didn't work. I also tried adding .string() after the field name, which didn't work either. I can think of a very convoluted way of doing this, but I'm wondering if there is an easy fix that I am unaware of.
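One approach (a sketch, not a built-in snap feature): de-duplicate the header row before mapping by suffixing repeats with a counter, map against the unique names, then restore the original header on output if the target really requires duplicates.

```python
def dedupe_headers(headers):
    """Make duplicate header names unique by appending a counter
    to the second and later occurrences of each name."""
    seen = {}
    unique = []
    for name in headers:
        if name in seen:
            seen[name] += 1
            unique.append(f"{name}_{seen[name]}")
        else:
            seen[name] = 0
            unique.append(name)
    return unique
```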
Skipping faulty records in tab delimited file

Greetings! Is there a way to skip faulty records that contain special symbols? One such record fails the whole file when using the CSV Parser. The highlighted record is the one that needs to be removed, as should any record with special symbols that the CSV Parser does not accept. I appreciate all your help!

Thanks, F.
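One workaround is to pre-filter the raw lines before they reach the CSV Parser, dropping any record with unexpected characters. A Python sketch; "faulty" is assumed here to mean "contains characters outside printable ASCII," so adjust the predicate to whatever your parser actually rejects:

```python
import csv
import io

def parse_skipping_faulty(text, delimiter="\t"):
    """Parse tab-delimited text, skipping lines that contain characters
    outside printable ASCII (one possible definition of 'faulty')."""
    good, skipped = [], []
    for line in text.splitlines():
        if all(32 <= ord(ch) < 127 or ch == "\t" for ch in line):
            good.append(line)
        else:
            skipped.append(line)
    rows = list(csv.reader(io.StringIO("\n".join(good)), delimiter=delimiter))
    return rows, skipped
```

Keeping the skipped lines around (rather than silently discarding them) makes it easy to route them to an error view for review.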
CSV Parser Troubling Error (Cannot complete CSV data parsing)

Hi, I am trying to parse a CSV that is auto-generated by a legacy system. The CSV has no headers. The problem is that the system conditionally adds columns to individual records, so the CSV sometimes has records with more columns than others. This causes SnapLogic's CSV Parser to fail (it doesn't parse the whole file): the parser only parses up to the row where the record with the additional columns is. Below is the error message:

This sounds far-fetched, but is there a way to automate a solution to this problem, perhaps by conditionally controlling the columns before the document hits the CSV Parser?
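One way to "control the columns before the document hits the parser" is a pre-processing step (e.g. a Script snap) that pads every record out to the widest row, so the column count is uniform. A minimal Python sketch; the comma delimiter and empty-string padding are assumptions:

```python
import csv
import io

def parse_ragged(text, delimiter=","):
    """Parse headerless CSV where some records carry extra columns,
    padding shorter rows so every record has the same width."""
    rows = list(csv.reader(io.StringIO(text), delimiter=delimiter))
    width = max(len(r) for r in rows)
    return [r + [""] * (width - len(r)) for r in rows]
```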
Append data to CSV file without repeating the header row

Hi there, I am trying to append each day's records to that day's CSV file, which has the date in the file name. It all works fine: if it doesn't find the file for the day, it creates the file; if it finds the file, it appends the data. But it keeps adding the column names on every append as well. How can I have only the data appended, and not the column headers?

Thanks, Manohar
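The usual fix is to emit the header only when the file is being created, not on every append. A Python sketch of that check (the dated-filename convention is from the post; the function name is mine):

```python
import csv
import os

def append_rows(path, fieldnames, rows):
    """Append rows to a dated CSV file, writing the header only when
    the file is being created (or is still empty)."""
    write_header = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        if write_header:
            writer.writeheader()
        writer.writerows(rows)
```

In a pipeline, the equivalent is to branch on whether the target file already exists and only include the header row on the create branch.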
CSV Format incorrectly naming files input101 and input0 without file extension

I am having a production issue. I have a pipeline that, up until this point, had been naming 2 file inputs correctly and placing them in a zip file to be sent to a destination. The files were assigned the names RaInterfaceDistributionsAll.csv and RaInterfaceLinesAll.csv, and the zip file was named ArAutoinvoiceImport.zip. Recently, the 2 .csv files inside the zip are being named Input101 and Input0, without the .csv file extension, but the zip file itself is still named correctly with the correct extension. A screenshot of the ZIP File Writer snap is attached. Any help is greatly appreciated, as this is a production issue that just started happening.
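Names like Input101/Input0 suggest the zip entries are falling back to the writer's input-view defaults instead of the intended file names. Whatever the fix is on the snap side, the underlying requirement is that each zip entry gets an explicit archive name. A Python sketch of that idea (the file names match the post; the contents are made up):

```python
import io
import zipfile

def build_zip(files):
    """Build a zip in memory, giving each entry an explicit name
    instead of relying on upstream defaults like 'input0'."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for arcname, content in files.items():
            zf.writestr(arcname, content)
    buf.seek(0)
    return buf

archive = build_zip({
    "RaInterfaceDistributionsAll.csv": "col1,col2\n1,2\n",
    "RaInterfaceLinesAll.csv": "col1,col2\n3,4\n",
})
```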
CSV Parsing issue

Per the documentation, the CSV Parser's Escape character setting says: "Leave this property empty if no escape character is used in the input CSV data." However, when the character '\' is present in the data, parsing fails. Only if I set \ as the escape character does it run successfully, but then it removes the \ from within the data. Has anyone faced this, and how can I get rid of this issue?
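This is standard escape-character semantics, and Python's csv module shows the same round-trip behavior: when \ is the escape character, a reader consumes it, so literal backslashes must be doubled in the file for the data to survive. A small sketch illustrating that (not SnapLogic's parser, just the general mechanism):

```python
import csv
import io

# Writing with escapechar="\\" doubles literal backslashes in the output...
out = io.StringIO()
writer = csv.writer(out, escapechar="\\", quoting=csv.QUOTE_NONE)
writer.writerow(["a\\b", "c"])
written = out.getvalue()

# ...so reading back with the same escapechar restores the original data.
reader = csv.reader(io.StringIO(written), escapechar="\\")
restored = next(reader)
```

So if the upstream system emits single, unescaped backslashes, the fix is on the producing side (double them, or quote the field) rather than in the parser.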