Kafka SSL account validation to Confluent Cloud
Hi,

So I am trying to connect to my environment in Confluent Cloud using the Kafka SSL account configuration. I followed the documentation step by step: creating the keystore and truststore files, creating an API key and secret in Confluent Cloud, and setting the right passwords for the truststore/keystore files. https://docs-snaplogic.atlassian.net/wiki/spaces/SD/pages/1802240621/Kafka+SSL+Account

My question: has anyone had a use case that required configuring a Kafka SSL account?

My configuration:

Regards,
Jens
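For readers hitting the same validation errors: the keystore/truststore settings in a Kafka SSL account correspond to the standard Apache Kafka client SSL properties, so one way to sanity-check the files outside SnapLogic is a minimal plain-Java producer. This is only a sketch; the bootstrap server, topic, file paths, and passwords below are placeholders, and note that Confluent Cloud clusters typically authenticate with SASL_SSL and an API key/secret (see the next thread) rather than a client keystore alone.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SslSmokeTest {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder endpoint; use the bootstrap server from your cluster settings.
        props.put("bootstrap.servers", "pkc-xxxxx.europe-west1.gcp.confluent.cloud:9092");
        // TLS transport; these are the same files referenced by the Kafka SSL account.
        props.put("security.protocol", "SSL");
        props.put("ssl.truststore.location", "/path/to/kafka.client.truststore.jks");
        props.put("ssl.truststore.password", "truststore-password");
        props.put("ssl.keystore.location", "/path/to/kafka.client.keystore.jks");
        props.put("ssl.keystore.password", "keystore-password");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test-topic", "key", "hello"));
            producer.flush();
        }
    }
}
```

If this client fails the TLS handshake or times out with the same files, the problem is in the stores or the endpoint rather than the Snap configuration.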
Kafka configuration account

Hi,

I am trying to configure a Kafka account in SnapLogic, but it always times out. I configured the connection in Postman and also in a REST Post Snap, and that works with an Authorization header set to Basic xxxx. But in the Kafka Producer Snap, the account configuration only asks for a bootstrap server (I have this one, let's say test:9092). I also have a key/value pair Key: 123 and a key/value pair Secret: XYZ.

How do I configure these two pairs? I have already tried lots of things, such as:

- The same as the REST Post Snap: Authorization: Basic xxx, also adding Content-Type: application/json
- Key: 123 and Secret: XYZ
- kafka.api.key: 123 and kafka.api.secret: XYZ

They all keep timing out. Has someone had a similar case where they needed to configure a Kafka account with a key and a secret, and has some more insight into this?

Regards,
Jens
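For what it's worth: a Confluent Cloud API key and secret are not HTTP credentials, so Authorization-header style settings don't apply to the Kafka wire protocol. They are normally passed as a SASL/PLAIN username and password over TLS, and a timeout often means the client is not speaking SASL_SSL to the broker at all. A minimal sketch with the plain Java client, reusing the placeholder values from the post (the topic name is assumed):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ApiKeySmokeTest {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "test:9092"); // placeholder from the post
        // The API key/secret ride in as SASL/PLAIN credentials, not HTTP headers:
        // key ("123") becomes the username, secret ("XYZ") becomes the password.
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
            + "username=\"123\" password=\"XYZ\";");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "hello from Java"));
        }
    }
}
```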
Kafka Consumer skips messages when the pipeline fails

Hi,

We pull data from Kafka and put it into a database, but we realized the Kafka Consumer skips the data/offsets if the pipeline fails. For example: in one run the Kafka Consumer is supposed to read offsets 3, 4, and 5, but the pipeline fails, so it skips these offsets in the next run. I tried using the Kafka Acknowledge Snap after the data is inserted into the database, but it always times out. Does anybody have a solution?
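The underlying Kafka pattern that avoids skipped offsets is to turn off automatic offset commits and commit only after the database write succeeds, which is what the Consumer Snap's Auto Commit and Acknowledge settings map to (see the release notes further down). For reference, a minimal sketch of that pattern with the plain Java client; the broker address, group id, topic, and insertIntoDatabase are placeholders:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class CommitAfterWrite {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "db-loader");
        props.put("enable.auto.commit", "false"); // equivalent of disabling Auto Commit
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    insertIntoDatabase(record); // hypothetical database write
                }
                // Commit only after the batch is safely in the database. If the
                // process dies before this line, the records are re-delivered.
                consumer.commitSync();
            }
        }
    }

    private static void insertIntoDatabase(ConsumerRecord<String, String> record) {
        // stand-in for the real database insert
        System.out.printf("offset %d -> db%n", record.offset());
    }
}
```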
Reliable, High-Throughput Batching with the Kafka Consumer Snap

In this article just published on Medium, we take a closer look at the (Confluent) Kafka Consumer Snap's new Output Mode setting, and how it can be used to achieve reliable, high-throughput performance for some common use cases where it's important to process records in batches. Here are the release notes for the 423patches7900 version where this feature was introduced.
Release Notes for Confluent Kafka Snap Pack, version 423patches7900

We’re pleased to announce an interim version of our Confluent Kafka Snap Pack, 423patches7900, released today, January 11. This update contains a set of enhancements and changes which will be included and documented more fully in our forthcoming February 2021 GA release (4.24). Below is a summary of the changes in this interim release.

- Removed the Confluent prefix from the label for all Snaps and accounts in this Snap Pack. (The pack itself is still named Confluent Kafka.)
- Added a Wait For Full Count checkbox setting to the Kafka Consumer to determine how a positive value for the Message Count setting should be interpreted.
  - Enabled (the default): the Snap continues polling for messages until the specified count is reached.
  - Disabled: if fewer messages are currently available than the specified count, the Snap consumes the available messages and terminates.
  - Known issue: the Wait For Full Count checkbox is activated only when you provide a positive integer value in the Message Count field. It is not activated when you use an expression for Message Count, even if it evaluates to a positive number. Workaround: temporarily replace the Message Count expression with a positive integer, select the desired state for Wait For Full Count, and then restore the original value in the Message Count field. This has been fixed in the 4.24 release.
- Added support for writing and reading record headers (a client-level sketch covering headers and timestamps appears at the end of this post).
  - The Kafka Producer Snap has a new Headers table to configure the Key, Value, and Serializer for each header to be written.
  - The Kafka Consumer Snap reads any headers present on the records it consumes. It provides two new settings to configure how header values should be deserialized: Default Header Deserializer, and Header Deserializers for any headers which require a deserializer other than the default.
- Added support for writing and reading each record’s timestamp.
  - The Kafka Producer Snap has a new Timestamp setting which can be configured to set each record’s timestamp: the number of milliseconds since the epoch (00:00:00 UTC on January 1, 1970). This can be set to an expression that evaluates to a long integer, a string that can be parsed as a long integer, or a date. If no expression is specified, or its value is empty, the timestamp is set to the current time. Note that this setting is only relevant if the Kafka topic is configured with message.timestamp.type = CreateTime (which is the default).
  - The Kafka Consumer Snap has a new checkbox setting, Include Timestamp, which defaults to disabled for backward compatibility. If enabled, the output for each record includes its timestamp in its metadata.
- The Kafka Producer Snap has a new checkbox setting, Output Records, to determine the format of each output document when the Snap is configured with an output view.
  - Disabled (the default): the Snap’s output includes only the basic metadata (topic, partition, offset) for each record, plus the original input document.
  - Enabled: each output document contains a more complete representation of the record produced, including its key, value, headers, and timestamp.
- The Kafka Consumer Snap has a new setting, Output Mode, with two selections:
  - One output document per record (the default): every record received from Kafka has a corresponding output document.
  - One output document per batch: use this selection to preserve the batching of records as received from Kafka. Every poll which returns a non-empty set of records results in a single output document containing this list of records as batch, plus batch_size and batch_index. This mode is especially useful when Auto Commit is disabled and Acknowledge Mode is Wait after each batch of records, depending on the nature of the processing between the Kafka Consumer and the Kafka Acknowledge Snaps. For an in-depth look at this new feature, see this article.
- Removed the Account reference from the Kafka Acknowledge Snap, as this Snap does not need an account.
- Removed the Add 1 to Offsets setting from the Kafka Consumer.

Please respond to this post with any questions about this release.

Patrick Taylor
Principal Software Engineer
ptaylor@snaplogic.com
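To make the new header and timestamp settings concrete: they surface standard Kafka record features that the plain Java client exposes directly. Below is a minimal sketch of writing and then reading headers and an explicit timestamp; the broker address, topic, group id, and header name are placeholders, not values from this release.

```java
import java.nio.charset.StandardCharsets;
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.header.Header;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class HeadersAndTimestamps {
    public static void main(String[] args) {
        Properties pp = new Properties();
        pp.put("bootstrap.servers", "localhost:9092"); // placeholder
        pp.put("key.serializer", StringSerializer.class.getName());
        pp.put("value.serializer", StringSerializer.class.getName());

        long explicitTimestamp = 1610000000000L; // ms since epoch, like the Snap's Timestamp setting
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(pp)) {
            // topic, partition (null = let the partitioner choose), timestamp, key, value
            ProducerRecord<String, String> rec =
                new ProducerRecord<>("my-topic", null, explicitTimestamp, "k1", "v1");
            rec.headers().add("trace-id", "abc-123".getBytes(StandardCharsets.UTF_8));
            producer.send(rec);
        }

        Properties cp = new Properties();
        cp.put("bootstrap.servers", "localhost:9092");
        cp.put("group.id", "header-demo");
        cp.put("auto.offset.reset", "earliest");
        cp.put("key.deserializer", StringDeserializer.class.getName());
        cp.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cp)) {
            consumer.subscribe(List.of("my-topic"));
            for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(5))) {
                System.out.println("timestamp: " + record.timestamp()); // what Include Timestamp exposes
                for (Header h : record.headers()) {
                    // Header values arrive as raw bytes; how to decode them is up to the
                    // reader, which is what the Header Deserializers settings control.
                    System.out.println(h.key() + " = " + new String(h.value(), StandardCharsets.UTF_8));
                }
            }
        }
    }
}
```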