Tagged with: Best practices
0 votes
7 replies
48 views

I am using Kafka Streams, and trying to figure out the best practice for handling multiple stores that are populated using the same data without having to read it twice. My Kafka Streams service is ...
Andreas10001
0 votes
1 answer
96 views

In a Spring Boot app, I have a message listener (@KafkaListener) with a concurrency of 28 listening on a topic which has 28 partitions. I have default configurations for all consumer properties. ...
Sathish • 245
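For context, the setup described usually comes down to a single Spring Boot property; a minimal sketch (not the asker's actual config — the group id is a hypothetical placeholder):

```properties
# With 28 partitions, a concurrency of 28 assigns one partition per listener
# thread, assuming a single application instance.
spring.kafka.listener.concurrency=28
# hypothetical group id
spring.kafka.consumer.group-id=orders-service
```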
-1 votes
1 answer
33 views

I'm experimenting with Kafka consumers and want to understand the behavior of consumer.subscribe() in the following scenario: I have a single consumer group with multiple consumers (all alive, none ...
Oac Roland
0 votes
0 answers
93 views

We are using the Spring Boot @KafkaListener annotation to listen to a Kafka topic. In the consumer config, I have set the ack mode to COUNT_TIME with ackCount at 2000 and ackTime at 1 minute, and the listener ...
sg2000 • 325
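As a reference point, the acknowledgment setup described maps onto these Spring Boot properties (a sketch using the values from the question; COUNT_TIME commits when either threshold is reached first):

```properties
# Commit offsets after 2000 records or 1 minute, whichever comes first.
spring.kafka.listener.ack-mode=count_time
spring.kafka.listener.ack-count=2000
spring.kafka.listener.ack-time=60s
```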
0 votes
1 answer
154 views

I have a topic with 2 partitions which has historic messages. I just made a new consumer group and I'm running 1 flaky consumer. I want reliability (no lost messages) from the point of my first ...
Phil • 2,341
0 votes
1 answer
26 views

We have a Kafka topic and we are planning to introduce a canary release to limit our blast radius during prod deployments. One of the ideas we are considering is to have some specific partition (say ...
Rahul Dobriyal
1 vote
0 answers
47 views

Assume my Kafka log contains messages as below. Offset-1 = non-transactional message Offset-2 = non-transactional message Offset-3 = transactional message1 for transaction T1 . . . offset-13 = ...
manjunath • 326
0 votes
1 answer
134 views

I am trying to generalize MDC logging for a Correlation-ID. Right now, I am putting the MDC context in the consume method of the listener class itself, after getting the consumer message. Then I clear the MDC before consume ...
Bandita Pradhan
1 vote
1 answer
223 views

I am running a Kafka cluster and I wanted to migrate to Kafka 4.0; however, I am not sure if all connected producers and consumers are compatible with it. I've already looked at kafka-consumer-list ...
Verthais • 487
0 votes
0 answers
37 views

I have a Kafka DLQ (dead-letter queue) topic where I'm sending the failed records, but I don't want to keep reprocessing them until a patch is deployed. Thus, the idea would be to have the consumer ...
scripty
0 votes
0 answers
26 views

We have Kafka consumers running on two WebSphere servers under the same cluster. The Kafka consumers are configured with auto offset commit, use the same consumer group, and poll every 5 seconds. The ...
Jmanroe
0 votes
0 answers
49 views

The code is not working with the latest Spring Boot version. My Spring Boot version is 3.4.3 and my Kafka version is 3.3.3. It is giving me an exception: java.lang.ClassCastException: class org....
Bilbo • 91
1 vote
0 answers
124 views

I have a Spring Boot consumer that consumes Avro messages, and I have configured it as a retry topic with a DLT. When a SerializationException occurs, it causes an infinite loop and produces a huge amount of logs....
Dorian Pavetić
0 votes
0 answers
257 views

To explain with an example, I am setting spring.kafka.consumer.max-poll-records=1000, but I want the listener to process only 100 records at a time. That is, break the 1000 polled records into 10 sub-...
cbot • 163
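Splitting a polled batch into fixed-size sub-batches is plain list arithmetic; a minimal sketch in Java (a hypothetical helper, not a Spring Kafka API — in a batch listener you would loop over the chunks and acknowledge per chunk):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: split one polled batch (up to max-poll-records) into fixed-size
// sub-batches so the processing logic only ever sees `chunkSize` records at once.
public class BatchSplitter {
    static <T> List<List<T>> chunk(List<T> records, int chunkSize) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < records.size(); i += chunkSize) {
            int end = Math.min(i + chunkSize, records.size());
            chunks.add(new ArrayList<>(records.subList(i, end)));
        }
        return chunks;
    }

    public static void main(String[] args) {
        List<Integer> polled = new ArrayList<>();
        for (int i = 0; i < 1000; i++) polled.add(i);
        List<List<Integer>> chunks = chunk(polled, 100);
        System.out.println(chunks.size());        // 10
        System.out.println(chunks.get(9).size()); // 100
    }
}
```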
0 votes
0 answers
75 views

I am working on a distributed event-driven system where multiple components interact via Kafka topics in an asynchronous manner. The flow is as follows: An event is triggered from the backend API and ...
Maddy • 715
0 votes
0 answers
43 views

I need to modify the current offset for a Kafka topic at runtime from the message-receiving method, marked with the @KafkaListener annotation. The idea is to move the offset at runtime to the ...
V.Lorz • 395
1 vote
0 answers
54 views

I'm verifying the Protobuf schema using the Kafka schema registry. The problem is that even though I put in the correct schema, I still get the error Broker: Broker failed to verify record. The schema ...
Sudip Sikdar
3 votes
2 answers
867 views

I have a FastAPI application that subscribes to a Kafka topic using asynchronous code (i.e., async/await). I have to create a unit test for my application. My code: def create_consumer() -> ...
mascai • 1,718
0 votes
1 answer
73 views

I am creating a new schema in KafkIO and I am unable to create new schemas sometimes. Simplest example, let's say I create a simple schema, version 1: Create simple schema Then I create a new schema ...
c burke
1 vote
0 answers
28 views

As I am relatively new to Kafka: as I understand it, we have a hard limit of at most 1 consumer per partition in a Kafka setup. Also, Kafka tries to maintain order within the same partition at the expense of ...
Dhruv • 11
1 vote
0 answers
50 views

I have to consume messages from a topic on a Kafka cluster with 10 partitions where messages are being evenly distributed across them. I have an ASP.NET application with 10 background services where ...
Boris Marković
0 votes
0 answers
62 views

My application polls records from an MSK Kafka cluster. The application maintains the offset of each partition and hence has disabled auto-commit. It actually never commits offsets, as it persists the offset to ...
Ravi Gupta
0 votes
0 answers
11 views

I have the following setup. My Kafka consumer is subscribed to 2 topics. Each topic has 3 partitions. I was expecting that I could create up to 6 consumers with the same consumer group id (one per ...
user2300369
1 vote
0 answers
50 views

The scenario is: the user will give an offset as input, and based on that offset we need to return 1000 messages from the Kafka topic along with the next offset. The Kafka topic contains only one partition. We are trying to ...
World of Titans
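The requested contract (offset in, up to 1000 messages plus the next offset out) can be sketched with a list standing in for the single-partition topic, where the list index plays the role of the offset; a real implementation would seek() the consumer to the start offset and poll instead (all names here are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: given a start offset, return up to `pageSize` messages and the
// offset the client should pass on the next call.
public class OffsetPage {
    final List<String> messages;
    final long nextOffset;

    OffsetPage(List<String> messages, long nextOffset) {
        this.messages = messages;
        this.nextOffset = nextOffset;
    }

    static OffsetPage fetch(List<String> log, long startOffset, int pageSize) {
        int from = (int) Math.min(startOffset, log.size());
        int to = (int) Math.min(startOffset + pageSize, log.size());
        return new OffsetPage(log.subList(from, to), to);
    }

    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        for (int i = 0; i < 2500; i++) log.add("msg-" + i);
        OffsetPage page = OffsetPage.fetch(log, 2000, 1000);
        System.out.println(page.messages.size()); // 500 (log ends before a full page)
        System.out.println(page.nextOffset);      // 2500
    }
}
```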
2 votes
0 answers
807 views

I'm using Docker Desktop running Kubernetes. I'm setting up my environment using the following configuration: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: kafka-network spec: ...
Ernesto Limon
