03-06-2022 05:30 AM
Hello,
The use case for my solution is to read data from an Oracle DB and group it by four fields (FLOWTYPE_ID$, PRODAT_ORIGINE, SACC_ID$, BAL); all records sharing the same combination of these four fields are written to the same file and uploaded to SFTP.
The pipeline fails for large volumes (around 2 million / 20 lakh records) at the Group By Fields snap with this reason:
"The available heap memory on the Snaplex node is exhausted, the pipeline is being terminated to recover from the memory pressure." Could someone please help by suggesting an alternative approach or a solution to this?
Regards,
Rashmi
03-06-2022 09:52 AM
Change the Memory Sensitivity setting from None to Dynamic. That allows the Group By Fields snap to break each group of documents with the same field values across multiple output documents ("parts") when memory is constrained. The part size varies dynamically as memory conditions change.
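To make the behavior concrete, here is a minimal Python sketch of the part-splitting idea (not SnapLogic's actual implementation): instead of buffering an entire group in memory, each group is flushed to the consumer in bounded-size "parts". The field names come from the question; `part_size` and the `sink` callback are illustrative assumptions.

```python
from collections import defaultdict

def write_in_parts(records, part_size, sink):
    """Emit each group's records in bounded-size 'parts' rather than one
    document per group, so memory use stays proportional to part_size
    times the number of distinct keys, not to the total record count.

    `sink` is a callback receiving (group_key, list_of_records); in a
    real pipeline it would append the part to that group's output file.
    """
    buffers = defaultdict(list)
    for rec in records:
        # Group key is the combination of the four fields from the question.
        key = (rec["FLOWTYPE_ID$"], rec["PRODAT_ORIGINE"],
               rec["SACC_ID$"], rec["BAL"])
        buffers[key].append(rec)
        if len(buffers[key]) >= part_size:
            sink(key, buffers.pop(key))   # flush a full part early
    for key, part in buffers.items():     # flush remaining partial parts
        sink(key, part)
```

One consequence worth noting: because a single group can now arrive as several parts, the downstream file writer must append parts for the same key to the same file (rather than overwrite), so the final per-combination files are still complete.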