Forum Discussion
9 Replies
- nganapathiraju (Former Employee)
While it is not possible to break the document messages into separate individual files on the same pipeline, I don't understand why you want to increase the I/O resources for, let's say, 10,000 messages.
It is plain and clear that writing that many messages into that many files requires that many I/O streams to be opened on the network. It is very process intensive.
I am not getting your concept here. What is the reasoning?
Another way of implementing the queue is to write the whole message to a database table:
id, message
1, JSON.stringify(jms.message1)
2, JSON.stringify(jms.message2)
3, JSON.stringify(jms.message3)

When you want to process them, you can read each message individually, parse it, and do whatever further processing you want. Hope it makes sense.
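Purely to illustrate the pattern (outside of SnapLogic), here is a minimal sketch assuming a local SQLite database; the table name `jms_queue` and the sample messages are hypothetical placeholders, not anything from the actual pipeline.

```typescript
// Minimal sketch of the "queue in a database table" idea, assuming a local
// SQLite database via better-sqlite3 purely for illustration; the table name
// jms_queue and the sample messages are hypothetical.
import Database from 'better-sqlite3';

const db = new Database('queue.db');
db.exec(
  'CREATE TABLE IF NOT EXISTS jms_queue (id INTEGER PRIMARY KEY AUTOINCREMENT, message TEXT NOT NULL)'
);

// Enqueue: store each JMS message body as a JSON string, one row per message.
const insert = db.prepare('INSERT INTO jms_queue (message) VALUES (?)');
insert.run(JSON.stringify({ poNumber: '0001', type: 'EDI 850' }));
insert.run(JSON.stringify({ poNumber: '0002', type: 'EDI 850' }));

// Dequeue: read the rows back, parse each message, and process it
// individually, which is the per-message handling being asked about.
const rows = db
  .prepare('SELECT id, message FROM jms_queue ORDER BY id')
  .all() as Array<{ id: number; message: string }>;

for (const row of rows) {
  const doc = JSON.parse(row.message);
  console.log(`processing row ${row.id}`, doc);
}
```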
- mohamadelmardin (New Contributor III)
@nganapathiraju
The reasoning is that each incoming message on the JMS queue is a large XML Sterling file containing an EDI 850 Purchase Order. When you use the JMS Consumer, it reads all the messages on the queue and puts them all into one binary file. I had to break that apart because I have business requirements to operate on each message separately, since each one contains an 850 PO that needs to be sent to EDW and to vendors. The problem is that if I use the ForEach execute pipeline to operate on each of them, then I won't be able to use the JMS Consumer and JMS Acknowledge for message assurance in case a message is lost. This is according to the reply received from support@snaplogic.com when we ran into the issue of the JMS Acknowledge configuration. You can refer to it for further reading: (#18557) [GameStop] JMS Acknowledge Snap issue:
https://snaplogic.zendesk.com/hc/en-us/restricted?return_to=https%3A%2F%2Fsnaplogic.zendesk.com%2Fhc%2Fen-us%2Frequests%2F18557

Therefore, the only way to use JMS Acknowledge is with the JMS Consumer in the same pipeline, not via a ForEach child pipeline; however, I need the ForEach in order to operate on each document separately. To sum it up, I need a solution where I can break out and write each JMS message to its own file while still using JMS Acknowledge for message assurance in the same pipeline.
Does that make sense?
- nganapathiraju (Former Employee)
Ok I get it.
Did you look at the workaround I suggested? You can use DB snaps to write the messages, and the output of the DB snap will include the original content, which can be used to acknowledge back to JMS.
You can achieve that in one pipeline too.
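In case it helps to see the ordering this workaround depends on, here is a rough sketch in plain TypeScript; `writeToDb` and `acknowledge` are hypothetical stand-ins for the DB snap and the JMS Acknowledge snap, not SnapLogic APIs.

```typescript
// Rough sketch of the ordering the workaround relies on: persist the message
// first, and acknowledge to JMS only after the write succeeds. writeToDb and
// acknowledge are hypothetical stand-ins for the DB snap and the JMS
// Acknowledge snap; they are not SnapLogic APIs.
type JmsMessage = { messageId: string; body: string };

async function persistThenAck(
  msg: JmsMessage,
  writeToDb: (m: JmsMessage) => Promise<void>,
  acknowledge: (messageId: string) => Promise<void>
): Promise<void> {
  // If the write throws, we never acknowledge, so the message stays on the queue.
  await writeToDb(msg);
  // The write output still carries the original message ID, so it can be
  // passed along to acknowledge the message, all within one pipeline.
  await acknowledge(msg.messageId);
}
```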
- tstack (Former Employee)
Getting back to your original question, you can write separate documents using the JSON Formatter, XML Formatter, and Document to Binary snaps. You can then configure the File Writer snap with an expression for the file name, for example:
'out_' + Date.now() + '.json'
Or, if you just want a sequence number, you can use the ‘snap.in.totalCount’ variable:
'out_' + snap.in.totalCount + '.json'
So, every document generated by the formatter snap will be written to its own file with the name computed by the expression.
To configure the formatters:
- JSON Formatter - Select the ‘Format each document’ option.
- XML Formatter - Clear the ‘Root element’ field. Note that the input document should only have a single field that represents the root element.
- Document To Binary - Set ‘Encode or Decode’ to DOCUMENT.
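To make the naming concrete outside of SnapLogic, here is a small Node.js-style sketch of the same idea; the `documents` array and the `out_` prefix are placeholders mirroring the expressions above, not actual pipeline data.

```typescript
// Small Node.js-style sketch of the per-document file naming computed by the
// File Writer expression above; the documents array and out_ prefix are
// placeholders, not actual pipeline data.
import { writeFileSync } from 'fs';

const documents = [
  { purchaseOrder: '850-0001' },
  { purchaseOrder: '850-0002' },
  { purchaseOrder: '850-0003' },
];

// Equivalent of 'out_' + snap.in.totalCount + '.json': each document gets its
// own file, named by its position in the stream.
documents.forEach((doc, index) => {
  const fileName = `out_${index + 1}.json`;
  writeFileSync(fileName, JSON.stringify(doc));
  console.log(`wrote ${fileName}`);
});
```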
- sandeepkasaram (New Contributor)
@mohamadelmardini For breaking up each document from the JMS queue, we are following the same method suggested above. Are we still having issues breaking up the documents? Also, for JMS Acknowledge, we were able to acknowledge the messages based on the JMS Message ID as suggested above; the only thing we didn't do is the reject acknowledge.
- mohamadelmardin (New Contributor III)
Sandeep, the original solution wasn't working when I tested it out. I updated the approach and it is working now. You can check it out in the DEV environment.
All is good now.