Parsing XML Data and formatting it
08-01-2021 11:29 PM
Hi Team,
We have a requirement to parse an XML file (1.5 GB), transform and group the content based on one of the field values, and write multiple files, one per group.
=== Sample Input ===
<?xml version="1.0" encoding="UTF-8" ?>
(records whose grouping field is Test1, Test2, Test1; the element markup was stripped by the forum editor, see the attachment in my reply below)
=== Output File 1 ===
<?xml version="1.0" encoding="UTF-8" ?>
(the two Test1 records)
=== Output File 2 ===
<?xml version="1.0" encoding="UTF-8" ?>
(the Test2 record)

I have tried using the XML Parser, splitting on the child element, and adding the headers back. The problem is that, because the data is huge, CPU and memory usage go very high and I get a "Connection lost" error.
I have also tried XSLT, but ran into the same issue.
Can you please help me design the solution with memory optimization?
Thanks in advance.
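For reference, the memory-friendly pattern outside SnapLogic is to stream the parse and append each record to its group's output file as it is read, rather than holding whole groups in memory. Below is a minimal Python sketch, assuming a hypothetical <records>/<record> layout with a <group> child as the grouping field (the real element names are in the sample attached later in this thread):

import xml.etree.ElementTree as ET

# Hypothetical layout: <records><record><group>Test1</group>...</record>...</records>
HEADER = '<?xml version="1.0" encoding="UTF-8" ?>\n<records>\n'
FOOTER = '</records>\n'

def split_by_group(source_path):
    """Stream the large input and append each record to its group's file."""
    open_files = {}
    for _, elem in ET.iterparse(source_path, events=("end",)):
        if elem.tag != "record":
            continue
        group = elem.findtext("group")            # grouping field value, e.g. Test1
        out = open_files.get(group)
        if out is None:                           # first record seen for this group
            out = open(f"output_{group}.xml", "w", encoding="utf-8")
            out.write(HEADER)
            open_files[group] = out
        out.write(ET.tostring(elem, encoding="unicode"))
        elem.clear()                              # drop the parsed record so memory stays flat
    for out in open_files.values():
        out.write(FOOTER)
        out.close()

split_by_group("input.xml")

The same principle applies inside a pipeline: the cost comes from holding an entire group before writing it, not from the size of the file itself.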

08-02-2021 11:42 AM
@viktor_n The maximum file size of 100 MB applies only to files uploaded to SLDB (the Files tab in Manager when looking at a project or project space). If you're using a File Reader, S3 File Reader, or something similar, you can read larger files.
@acmohan023 Can you share screenshots of the pipeline and of the pipeline statistics? An example of the files, or a clearer example of what you're trying to map, would also be helpful. If I understand your initial description correctly, you may be doing a Group By N or Group By Field operation and copying to multiple targets, both of which will drive memory up on the node.
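To make the memory point concrete, here is a toy Python comparison (the field name "group" and the file naming are assumptions purely for illustration): accumulating complete groups before writing peaks at roughly the size of the largest group, while routing each record to its group's file as it arrives keeps only one record in memory at a time.

import json
from collections import defaultdict

# Costly pattern: build every group in memory, then write
# (roughly what accumulating full groups before the writers does).
def group_then_write(records):
    groups = defaultdict(list)
    for rec in records:
        groups[rec["group"]].append(rec)       # whole input held in memory
    for value, recs in groups.items():
        with open(f"group_{value}.ndjson", "w", encoding="utf-8") as out:
            for rec in recs:
                out.write(json.dumps(rec) + "\n")

# Bounded pattern: append each record to its group's file as it arrives.
def stream_and_append(records):
    handles = {}
    try:
        for rec in records:
            out = handles.get(rec["group"])
            if out is None:
                out = open(f"group_{rec['group']}.ndjson", "w", encoding="utf-8")
                handles[rec["group"]] = out
            out.write(json.dumps(rec) + "\n")  # only one record in memory at a time
    finally:
        for out in handles.values():
            out.close()

stream_and_append([{"group": "Test1"}, {"group": "Test2"}, {"group": "Test1"}])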
08-03-2021 11:18 PM
Hi @viktor_n,
Yes, I am aware of the file size limit in SLDB. The files are read from SFTP, not from SnapLogic Manager.
08-03-2021 11:53 PM
@rsramkoski - I have attached the sample input and output structures for your reference (I had pasted the request and response structures in my first post, but they are not visible).
InputAndOutpultSample.txt (1.3 KB)
As the file size is huge, I tried to split the flow into multiple pipelines (to process the data in chunks and release memory):
- Read the file from SFTP, use XML Parse to split the data, Group By N, and call the 2nd pipeline
- Split the groups, sort by the field (the input field in the attached file), and call the 3rd pipeline
- Write the files as smaller chunks to a local SFTP
- On completion of the three pipelines above, read the small chunk files based on the file name and write a consolidated file per group-by field (the input field in the attached file)
Please suggest if there is any other approach to resolve this issue.
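For the final consolidation step, the merge itself can also be done as a stream, copying each chunk file into its group's consolidated file line by line so memory does not depend on the group size. A sketch in Python, assuming (purely for illustration) that the chunk files are named <groupValue>_<sequence>.xml and that each chunk carries its own XML declaration that needs to be dropped; the real root-element handling depends on the structure in the attached sample:

import glob
import os
from collections import defaultdict

def consolidate(chunk_dir, out_dir):
    """Merge chunk files into one consolidated file per group value."""
    os.makedirs(out_dir, exist_ok=True)

    # Assumed naming convention: <groupValue>_<sequence>.xml
    chunks_by_group = defaultdict(list)
    for path in sorted(glob.glob(os.path.join(chunk_dir, "*_*.xml"))):
        group = os.path.basename(path).split("_", 1)[0]
        chunks_by_group[group].append(path)

    for group, paths in chunks_by_group.items():
        out_path = os.path.join(out_dir, f"{group}.xml")
        with open(out_path, "w", encoding="utf-8") as out:
            out.write('<?xml version="1.0" encoding="UTF-8" ?>\n')
            for path in paths:
                with open(path, encoding="utf-8") as chunk:
                    for line in chunk:             # stream line by line, never the whole chunk
                        if line.startswith("<?xml"):
                            continue               # keep a single XML declaration in the output
                        out.write(line)

consolidate("chunks", "consolidated")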
