Forum Discussion
Hi @sriram,
Thanks for the information.
I need to combine all messages into a single consolidated file and ensure that the consolidated file is written successfully to Azure. In this setup, will the Consumer Snap read all messages for that day, or will it read a single message and wait for the notification from the Acknowledge Snap?
If it reads a single message at a time and the notification is sent to the Consumer Snap, that message will get committed. In that case, if there is a connection error to Azure while writing the consolidated file, I cannot read today's messages again.
Is there any way to read from today's starting offset?
If you have a pipeline configured with auto-commit unchecked (on the Consumer Snap) along with the Acknowledge Snap, then messages are acknowledged one at a time, as they are successfully consumed.
The “Seek Type” field can be set to “Specify Offset” along with a value assigned to the “Offset” field if you want to start from a particular known offset.
Documentation reference: Confluent Kafka Consumer
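For reference, here is a minimal sketch of the two behaviors described above in plain Java with the standard kafka-clients consumer, not SnapLogic Snaps: disabling auto-commit and committing only after processing succeeds (the "acknowledge" step), and seeking to a specific known offset. The broker address, group id, topic name, partition, and offset are made up for illustration.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class ManualCommitSeekExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        props.put("group.id", "daily-consolidation");        // hypothetical group id
        props.put("enable.auto.commit", "false");             // analogous to unchecking auto-commit
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Equivalent of "Seek Type" = "Specify Offset": assign the partition
            // explicitly and seek to a known starting offset.
            TopicPartition partition = new TopicPartition("my-topic", 0); // hypothetical topic/partition
            consumer.assign(Collections.singletonList(partition));
            consumer.seek(partition, 12345L);                             // known starting offset

            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Process the message (e.g. append it to the consolidated file).
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
                // Commit only after processing succeeds; this is the manual
                // "acknowledge" step. If processing (e.g. the Azure write) fails,
                // skip the commit so the messages can be re-read later.
                if (!records.isEmpty()) {
                    consumer.commitSync();
                }
            }
        }
    }
}
```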
- GBekkanti (New Contributor III), 7 years ago
That offset is not known, because we don't know how many messages arrive in one day. There are 8 partitions, and messages are distributed to the partitions in a random fashion. I think we shouldn't go with storing offsets.