08-01-2021 11:29 PM
Hi Team,
We have a requirement to parse an XML file (1.5 GB), transform/group the content based on one of the field values, and write multiple output files, one per group.
=== Sample Input ===
<?xml version="1.0" encoding="UTF-8" ?>

=== Output File 1 ===
<?xml version="1.0" encoding="UTF-8" ?>

=== Output File 2 ===
<?xml version="1.0" encoding="UTF-8" ?>

I have tried using the XML Parser, splitting on the child element, and adding the headers back. The problem is that, because the data is so large, CPU and memory usage climb and I get a "Connection lost" error.
I have also tried XSLT, but ran into the same issue.
Can you please help me design a memory-optimized solution?
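For reference, this is roughly the kind of streaming split I have in mind (a plain-Python sketch only, outside SnapLogic; the element names Record and Region are placeholders for the actual structure):

import xml.etree.ElementTree as ET

XML_HEADER = '<?xml version="1.0" encoding="UTF-8" ?>\n'

def split_by_field(source_path, record_tag="Record", group_tag="Region"):
    """Stream through the source file and append each record to the
    output file of its group, so only one record is held in memory."""
    writers = {}  # group value -> open output file
    context = ET.iterparse(source_path, events=("start", "end"))
    _, root = next(context)           # grab the root so processed records can be dropped
    for event, elem in context:
        if event != "end" or elem.tag != record_tag:
            continue
        group = elem.findtext(group_tag, default="unknown")
        if group not in writers:
            out = open("output_" + group + ".xml", "w", encoding="utf-8")
            out.write(XML_HEADER)
            out.write("<Records>\n")  # placeholder wrapper; the real header is added back here
            writers[group] = out
        writers[group].write(ET.tostring(elem, encoding="unicode") + "\n")
        root.clear()                  # drop records already written so memory stays flat
    for out in writers.values():
        out.write("</Records>\n")
        out.close()

split_by_field("input.xml")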
Thanks in advance.
08-02-2021 10:33 AM
08-02-2021 11:42 AM
@viktor_n The maximum file size of 100 MB is only for files uploaded to SLDB (Files tab in Manager when looking at a project or project space). If you’re using a File Reader, S3 File Reader, or something similar, you can read larger files.
@acmohan023 Can you share screenshots of the pipeline and pipeline statistics? An example of the files or a clearer example of what you’re trying to map would also be helpful. If I understand your initial description correctly, you may be doing a Group By N or Group By Field operation and copying to multiple targets, both of which will impact memory on the node.
08-03-2021 11:18 PM
Hi @viktor_n ,
Yes, I am aware of the file size limit in SLDB. The files are read from SFTP, not from SnapLogic Manager.
08-03-2021 11:53 PM
@rsramkoski - I have attached the sample input and output structures for your reference. (I had pasted the request and response structure earlier, but it is not visible.)
InputAndOutpultSample.txt (1.3 KB)
As the file size is huge, I tried to split the flow into multiple pipelines, to process the data in chunks and release memory.
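Conceptually, each chunked stage is meant to work like the rough sketch below (plain Python with placeholder names, not the actual pipeline configuration): read a fixed number of records, hand them downstream, and free them before parsing the next batch.

import xml.etree.ElementTree as ET

def iter_chunks(source_path, record_tag="Record", chunk_size=10000):
    """Yield records in fixed-size batches so each batch can be
    grouped and written out before the next one is parsed."""
    batch = []
    context = ET.iterparse(source_path, events=("start", "end"))
    _, root = next(context)      # keep a handle on the root so parsed records can be released
    for event, elem in context:
        if event == "end" and elem.tag == record_tag:
            batch.append(ET.tostring(elem, encoding="unicode"))
            root.clear()         # free records that are already copied into the batch
            if len(batch) >= chunk_size:
                yield batch
                batch = []
    if batch:
        yield batch

# Example usage: each chunk is grouped/written, then released before the next is read.
for chunk in iter_chunks("input.xml"):
    pass  # group this chunk by the field value and append to the per-group files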
Please suggest if there is any other approach to resolve this issue.