My source data has more than 1M records in CSV format. All of those records had errors, so they were routed to the error view. Now all of those records need to be logged to an S3 folder, and I also send an email to the team containing the file name and location.
The data is loaded into S3 in JSON format, which is still fine, but the JSON file takes a long time to open (which is expected for a file that size), and sometimes the log file does not load at all. Can we do this in a more efficient manner?
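To show what I mean by "more efficient", here is a minimal plain-Python sketch of a line-per-record (JSON Lines) layout, which a reader can consume one record at a time instead of parsing the whole file. The file name and record contents are made up, and the record count is a stand-in for the real 1M+:

```python
import json

# Stand-in for the real 1M+ error records; contents are hypothetical.
records = ({"id": i, "error": "bad value"} for i in range(10_000))

# One JSON document per line (JSON Lines) instead of one big array.
with open("errors.jsonl", "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")

# Reading back touches one line at a time, never the whole file.
with open("errors.jsonl") as f:
    first = json.loads(next(f))
print(first)
```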
The data must go to an S3 folder, but how we store it is open for discussion: the records could be written in CSV, TXT, or JSON format.
I had the idea of splitting those records and saving them as 2-3 JSON files, but I'm not sure if that is even appealing.
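Roughly this, as a plain-Python sketch of the split (the chunk count and file names are hypothetical; the real version would live in the pipeline):

```python
import json

# Stand-in for the real 1M+ error records; contents are hypothetical.
records = [{"id": i, "error": "bad value"} for i in range(10_000)]

num_files = 3
chunk_size = -(-len(records) // num_files)  # ceiling division

# Write each chunk out as its own smaller JSON file.
for n in range(num_files):
    chunk = records[n * chunk_size:(n + 1) * chunk_size]
    with open(f"errors_part{n + 1}.json", "w") as f:
        json.dump(chunk, f)
```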
I assume you are using the JSON Formatter to write the file, yes? But how are you formatting it? And what application are you using to try to open the file?
By default, the JSON Formatter will use a very compressed format with no line breaks. Some editors don’t deal well with a file where all the data is on one very long line.
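For example, the difference looks like this (a plain-Python illustration, not the JSON Formatter's own output):

```python
import json

data = [{"id": 1, "error": "bad value"}, {"id": 2, "error": "missing field"}]

compact = json.dumps(data, separators=(",", ":"))  # everything on one long line
pretty = json.dumps(data, indent=2)                # line breaks, editor-friendly

print(compact)
print(pretty)
```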
I had issues while trying to zip the file. I could see a zipped file being created, but when I went inside the zip file, I could not see any files…
Do you have any samples so that I can try, please?
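Here is a minimal plain-Python sample of a zip step that produces a non-empty archive, assuming Python's zipfile module; the file names are made up, and the pipeline's own zip step may behave differently:

```python
import json
import zipfile

# First write a small sample file to compress; name and contents are hypothetical.
with open("errors.jsonl", "w") as f:
    f.write(json.dumps({"id": 1, "error": "bad value"}) + "\n")

# ZIP_DEFLATED actually compresses; arcname sets the entry's name inside
# the archive, which is what a viewer lists when you open the zip.
with zipfile.ZipFile("errors.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("errors.jsonl", arcname="errors.jsonl")

# Confirm the archive is not empty by listing its entries.
with zipfile.ZipFile("errors.zip") as zf:
    print(zf.namelist())  # expect: ['errors.jsonl']
```

Listing the entries right after writing, as above, is a quick way to confirm the archive really contains files before uploading it.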