Piotr Minkowski's new article on Spring Kafka offset behavior ("Deep Dive into Kafka Offset Commit with Spring Boot") is worth your time, but the actionable point arrives after some setup. Here it is up front: the consumer offset in Kafka advances when Spring's listener thread finishes processing a batch - which may or may not mirror what your business logic does.
That one rule generates the three failure modes the article walks through.
The first is single-threaded consumption with the default batch commit behavior (AckMode.BATCH). Spring Kafka polls a batch of messages and hands them to a single listener thread one at a time. The offset isn't committed until the thread has worked through the entire batch. Interrupt that thread - say, with a graceful shutdown that times out - and none of the batch's offsets are committed. On restart, you reprocess everything from the last committed point. That's at-least-once delivery, which is correct behavior, but only if you're prepared for it.
This makes sense: the entire batch is read as if in a transaction, and the read offset is written at the end of the transaction.
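That commit-at-the-end semantics can be sketched in plain Java - a hypothetical simulation, not Spring Kafka's actual API, with `BatchCommitSimulation` and `failAfter` invented for illustration:

```java
import java.util.List;

// Simulation of batch offset-commit semantics: the offset is committed only
// after the whole batch is processed, so an interruption mid-batch leaves the
// committed offset exactly where it was.
class BatchCommitSimulation {
    long committedOffset = 0; // last offset committed to the "broker"

    // Returns true if the whole batch was processed and the offset committed.
    // failAfter simulates a shutdown after that many records; -1 means no failure.
    boolean consumeBatch(List<String> batch, int failAfter) {
        for (int i = 0; i < batch.size(); i++) {
            if (i == failAfter) {
                return false;            // interrupted mid-batch: nothing committed
            }
            process(batch.get(i));
        }
        committedOffset += batch.size(); // commit once, at the end of the batch
        return true;
    }

    void process(String record) { /* business logic */ }
}
```

A consumer restarted after the interrupted run re-reads from `committedOffset`, so the records it already processed replay along with the rest - the at-least-once behavior the article describes.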
The second scenario uses concurrent listeners: concurrency is set to the partition count and each thread owns one partition. Now offset commits are per-partition. Two threads can finish and commit; a third can be mid-batch when you shut down. On restart, only the uncommitted partition replays. This is strictly better than scenario one for throughput, but the replay exposure is the same in principle - it's just scoped to one partition rather than all of them.
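In Spring Kafka terms, that per-partition setup is typically wired on the listener container factory. A minimal configuration sketch - class name and generic types are illustrative, and it assumes a 3-partition topic:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;

// One consumer thread per partition for a 3-partition topic; each thread
// commits offsets for its own partition independently.
@Configuration
class ListenerConfig {
    @Bean
    ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {
        var factory = new ConcurrentKafkaListenerContainerFactory<String, String>();
        factory.setConsumerFactory(consumerFactory);
        factory.setConcurrency(3); // match the partition count
        return factory;
    }
}
```

Setting concurrency higher than the partition count buys nothing - the extra threads sit idle with no partition assigned.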
The third scenario is the silent-loss case, and it's the one that bites people. If your listener method hands work off to a pool of handlers and returns immediately, Spring Kafka sees a completed listener invocation and commits the offset - even though your processing is still in flight. The messages have been read and the read offset has advanced, but the messages themselves haven't finished processing. Kill the application now and those in-flight messages are gone. The broker thinks they were handled; your thread pool never finished them. This is the async handoff anti-pattern, and it silently converts Kafka's at-least-once guarantee into at-most-once without you explicitly choosing that.
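The anti-pattern is easy to see in a self-contained simulation - again plain Java standing in for the framework, with `AsyncHandoffSimulation` and its members invented for illustration. The latch stands in for slow business logic that never gets to finish:

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// The async-handoff anti-pattern: the "listener" submits work to a pool and
// returns, the offset is committed immediately, and a hard shutdown discards
// the work that was still queued or running.
class AsyncHandoffSimulation {
    final ExecutorService pool = Executors.newSingleThreadExecutor();
    final AtomicInteger processed = new AtomicInteger();
    final CountDownLatch gate = new CountDownLatch(1); // simulates slow handlers
    long committedOffset = 0;

    // Returns as soon as the handoff is done, so the offset advances even
    // though no record has actually been processed yet.
    void onBatch(List<String> batch) {
        for (String record : batch) {
            pool.submit(() -> {
                try {
                    gate.await(); // "slow" processing, never completes here
                } catch (InterruptedException e) {
                    return;       // interrupted by shutdown: work abandoned
                }
                processed.incrementAndGet();
            });
        }
        committedOffset += batch.size(); // committed before any work finished
    }

    // Hard shutdown: queued tasks are dropped, the in-flight one is interrupted.
    void crash() {
        pool.shutdownNow();
    }
}
```

After `onBatch` plus `crash`, the committed offset claims the whole batch was handled while the processed count is still zero - exactly the broker-thinks-done, pool-never-finished gap the article demonstrates.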
The fix isn't exotic: don't let your listener return until you're willing to have the offset committed. If you need async processing, you need to manage offset commits manually (using AckMode.MANUAL and explicit acknowledgment) or structure your async handoff so the listener blocks until the work is done. Minkowski links to two earlier articles on the mechanics of both approaches.
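The blocking variant of the fix is the simpler of the two to sketch - a hypothetical plain-Java version, not code from the article: fan the batch out to a pool, but join on every future before returning, so the commit only ever reflects completed work:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// The "block until the work is done" fix: parallel processing inside the
// listener, but the listener doesn't return (and the offset isn't committed)
// until every record in the batch has completed.
class BlockingHandoff {
    final ExecutorService pool = Executors.newFixedThreadPool(4);
    final AtomicInteger processed = new AtomicInteger();
    long committedOffset = 0;

    void onBatch(List<String> batch) {
        CompletableFuture<?>[] futures = batch.stream()
                .map(record -> CompletableFuture.runAsync(
                        () -> processed.incrementAndGet(), pool)) // business logic
                .toArray(CompletableFuture[]::new);
        CompletableFuture.allOf(futures).join(); // block until all work is done
        committedOffset += batch.size();         // now the commit is truthful
    }
}
```

The manual-ack route looks different on the surface - an Acknowledgment parameter and an explicit acknowledge() call under AckMode.MANUAL - but it enforces the same invariant: the offset moves only after the work is done.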
The article includes working Spring Boot code and log traces that make each scenario concrete - worth reading in full if you work with Spring Kafka in production.