Introducing Kafka into a Legacy .NET System
Event Forwarders, the Outbox Pattern, and Scheduled Notifications
Series: Modernizing a 15-Year-Old .NET System Without Breaking Production
Part 4 of 7
When teams talk about modernizing a legacy system, the conversation often jumps straight to microservices.
In a 15-year-old .NET system, that jump is usually a mistake.
Before you break a monolith apart, you need a safer step:
> Introduce events first.
This article explains how we introduced Kafka into a large, long-running .NET system using:
- an event forwarder
- the outbox pattern
- scheduled emails and notifications
All without rewriting the core system.
1. Why Kafka Before Microservices
The problem we faced wasn’t deployment size.
It was coupling.
The legacy system:
- handled core business workflows
- directly triggered emails
- synchronously called external systems
- mixed business logic with side effects
Any failure propagated everywhere.
Why Not Start with Microservices?
Because:
- business logic was deeply intertwined
- behavior was not fully understood
- breaking it apart too early would multiply risk
Kafka gave us:
- asynchronous boundaries
- decoupling without refactoring
- a way to observe system behavior before changing it
Key idea:
Events are a wrapper, not a replacement.
2. The Event Forwarder Pattern (Core Concept)
We did not refactor the monolith to publish Kafka events directly.
Instead, we introduced an event forwarder.
What Is an Event Forwarder?
A small, separate component that:
- listens to legacy system events (or DB changes)
- transforms them into domain events
- publishes them to Kafka
The legacy system stays largely unchanged.
Legacy System
      |
      | (DB / internal events)
      v
Event Forwarder
      |
      | (Kafka)
      v
Downstream Consumers
Why This Pattern Works
- No invasive refactoring
- Clear integration boundary
- Easy rollback (disable forwarder)
- Enables parallel evolution
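The forwarder can be sketched as a small background worker. This is a minimal illustration, not the original implementation: `ILegacyEventReader`, the `LegacyEvents` polling source, and the topic name are all assumed names, and the broker address is a placeholder.

```csharp
using System;
using System.Text.Json;
using System.Threading;
using System.Threading.Tasks;
using Confluent.Kafka;
using Microsoft.Extensions.Hosting;

public sealed class EventForwarder : BackgroundService
{
    private readonly ILegacyEventReader _reader; // hypothetical abstraction over the legacy DB
    private readonly IProducer<string, string> _producer;

    public EventForwarder(ILegacyEventReader reader)
    {
        _reader = reader;
        _producer = new ProducerBuilder<string, string>(
            new ProducerConfig { BootstrapServers = "localhost:9092" }).Build();
    }

    protected override async Task ExecuteAsync(CancellationToken ct)
    {
        while (!ct.IsCancellationRequested)
        {
            // Poll the legacy side, translate, publish as domain events.
            foreach (var legacy in await _reader.ReadNewAsync(ct))
            {
                await _producer.ProduceAsync("billing.invoice.v1",
                    new Message<string, string>
                    {
                        Key = legacy.AggregateId, // keeps per-entity ordering
                        Value = JsonSerializer.Serialize(legacy.ToDomainEvent())
                    }, ct);
            }
            await Task.Delay(TimeSpan.FromSeconds(5), ct);
        }
    }
}
```

Disabling this one worker is the rollback story: the legacy system never knows Kafka exists.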
3. What Counts as an Event?
One of the first mistakes teams make is publishing commands instead of facts.
Bad Event
{
  "type": "SendInvoiceEmail",
  "invoiceId": 123
}
Good Event
{
  "type": "InvoiceIssued",
  "invoiceId": 123,
  "issuedAt": "2025-01-10T09:30:00Z"
}
Events describe what happened, not what should happen.
This distinction:
- keeps producers simple
- prevents tight coupling
- allows multiple consumers with different behaviors
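In C#, the same distinction can be expressed as a record type. A minimal sketch mirroring the JSON above (the type names are illustrative):

```csharp
using System;

// A fact: something that already happened. Each consumer decides
// independently how to react (send an email, update a report, etc.).
public sealed record InvoiceIssued(long InvoiceId, DateTimeOffset IssuedAt);

// A command like "SendInvoiceEmail" would instead bake one consumer's
// behavior into the producer, recreating the coupling events remove.
```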
4. Topic Design and Versioning
We used a domain-based naming strategy:
billing.invoice.v1
logistics.stockmove.v1
notifications.email.v1
Rules We Followed
- Version topics explicitly
- Avoid breaking changes
- Prefer additive evolution
- Never reuse topic names for new semantics
This made consumers resilient and upgrades predictable.
5. The Outbox Pattern (Non-Negotiable)
If you publish events directly after a database write, you will lose events.
It’s not “if”.
It’s when.
The Problem
- DB transaction succeeds
- Kafka publish fails
- System state is inconsistent
The Solution: Outbox Pattern
Inside the same DB transaction:
- Write business data
- Write an outbox record
Later, asynchronously:
- Publish outbox records to Kafka
- Mark them as sent
6. Outbox Schema Example
CREATE TABLE OutboxMessages (
    Id            BIGINT IDENTITY PRIMARY KEY,
    AggregateType NVARCHAR(100),
    AggregateId   NVARCHAR(100),
    EventType     NVARCHAR(100),
    Payload       NVARCHAR(MAX),
    CreatedAt     DATETIME2,
    PublishedAt   DATETIME2 NULL
);
This table is boring — and that’s good.
7. Writing to the Outbox (EF or Dapper)
EF Example
_db.OutboxMessages.Add(new OutboxMessage
{
    AggregateType = "Invoice",
    AggregateId = invoice.Id.ToString(),
    EventType = "InvoiceIssued",
    Payload = JsonSerializer.Serialize(evt),
    CreatedAt = DateTime.UtcNow
});
This happens inside the same transaction as business logic.
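The section title also mentions Dapper; a hedged sketch of the same write follows. It assumes the connection and transaction passed in are the ones used for the business write, which is the whole point of the pattern.

```csharp
using System;
using System.Data;
using System.Text.Json;
using System.Threading.Tasks;
using Dapper;

public static class Outbox
{
    // Must run on the same connection/transaction as the business insert.
    public static Task WriteAsync(
        IDbConnection conn, IDbTransaction tx, object evt, string aggregateId)
    {
        return conn.ExecuteAsync(
            @"INSERT INTO OutboxMessages
                  (AggregateType, AggregateId, EventType, Payload, CreatedAt)
              VALUES
                  (@AggregateType, @AggregateId, @EventType, @Payload, @CreatedAt)",
            new
            {
                AggregateType = "Invoice",
                AggregateId = aggregateId,
                EventType = "InvoiceIssued",
                Payload = JsonSerializer.Serialize(evt),
                CreatedAt = DateTime.UtcNow
            },
            tx);
    }
}
```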
8. Publishing Outbox Messages (Worker)
// _producer here is assumed to be a thin wrapper over the Kafka client.
var messages = await _db.OutboxMessages
    .Where(x => x.PublishedAt == null)
    .OrderBy(x => x.Id)
    .Take(100)
    .ToListAsync();

foreach (var message in messages)
{
    await _producer.ProduceAsync(
        message.EventType,
        message.Payload);

    // A crash between produce and save re-publishes on the next run:
    // at-least-once delivery, which idempotent consumers absorb.
    message.PublishedAt = DateTime.UtcNow;
}

await _db.SaveChangesAsync();
Important Details
- Small batches
- Ordered by ID
- Idempotent consumers (always assume duplicates)
9. “Exactly Once” Is a Myth (Design for Reality)
Kafka guarantees at-least-once delivery.
You must design consumers accordingly.
Consumer Rules
- Use idempotency keys
- Ignore duplicates
- Make handlers retry-safe
Example:
if (_processedEventStore.Exists(eventId))
{
    return; // duplicate delivery, safe to skip
}
ProcessEvent(evt);
_processedEventStore.MarkProcessed(eventId);
This is not optional.
It is foundational.
10. Ordering: Partitioning Matters More Than You Think
Kafka guarantees ordering per partition, not per topic.
We partitioned by:
- aggregate ID (e.g., InvoiceId, VisitId)
This ensured:
- events for the same entity were ordered
- concurrency without chaos
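Concretely, partitioning by aggregate ID just means using it as the message key. A sketch assuming the Confluent.Kafka client and a string-keyed producer:

```csharp
using System.Threading.Tasks;
using Confluent.Kafka;

public static class InvoiceEvents
{
    // Kafka's default partitioner hashes the message key, so every event
    // for one invoice lands on the same partition, where order is kept.
    public static Task PublishAsync(
        IProducer<string, string> producer, string invoiceId, string payload) =>
        producer.ProduceAsync("billing.invoice.v1",
            new Message<string, string>
            {
                Key = invoiceId,
                Value = payload
            });
}
```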
11. Scheduling Emails and Notifications
Once events existed, notifications became consumers, not side effects.
We had two categories.
A. Immediate Notifications
Examples:
- Invoice issued → email sent
- Stock level breached → alert
Flow:
Kafka Event → Notification Consumer → Email/SMS
Simple, reactive, fast.
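The reactive flow can be sketched as a plain consume loop. `IEmailSender` and `InvoiceIssued` are illustrative names, not the original system's types; the broker address and group ID are placeholders.

```csharp
using System.Text.Json;
using System.Threading;
using System.Threading.Tasks;
using Confluent.Kafka;

public sealed class InvoiceEmailConsumer
{
    private readonly IEmailSender _emailSender; // hypothetical mail abstraction

    public InvoiceEmailConsumer(IEmailSender emailSender) =>
        _emailSender = emailSender;

    public async Task RunAsync(CancellationToken ct)
    {
        var config = new ConsumerConfig
        {
            BootstrapServers = "localhost:9092",
            GroupId = "notifications",
            EnableAutoCommit = false // commit only after the email went out
        };

        using var consumer = new ConsumerBuilder<Ignore, string>(config).Build();
        consumer.Subscribe("billing.invoice.v1");

        while (!ct.IsCancellationRequested)
        {
            var result = consumer.Consume(ct);
            var evt = JsonSerializer.Deserialize<InvoiceIssued>(result.Message.Value);
            await _emailSender.SendInvoiceEmailAsync(evt);
            consumer.Commit(result); // unsent emails are redelivered on restart
        }
    }
}
```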
B. Scheduled / Delayed Notifications
Examples:
- Daily summary emails
- SLA breach reminders
- “No activity for 24h” alerts
Kafka is not a scheduler.
So we introduced a notification job table.
12. Notification Job Table
CREATE TABLE NotificationJobs (
    Id      BIGINT IDENTITY PRIMARY KEY,
    EventId NVARCHAR(100),
    DueAt   DATETIME2,
    Type    NVARCHAR(100),
    Payload NVARCHAR(MAX),
    SentAt  DATETIME2 NULL
);
Flow
- Consumer receives event
- Writes a job with DueAt
- Scheduler worker polls due jobs
- Sends notification
- Marks as sent
This gave us:
- retries
- visibility
- idempotency
- control
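The polling side of that flow, sketched in the same style as the outbox worker (EF context and a hypothetical `_sender` gateway; details are illustrative):

```csharp
// Pick up jobs that are due and not yet sent, oldest first.
var due = await _db.NotificationJobs
    .Where(j => j.SentAt == null && j.DueAt <= DateTime.UtcNow)
    .OrderBy(j => j.DueAt)
    .Take(50)
    .ToListAsync();

foreach (var job in due)
{
    await _sender.SendAsync(job.Type, job.Payload); // hypothetical gateway
    job.SentAt = DateTime.UtcNow;
    await _db.SaveChangesAsync(); // persist per job, so a crash can only
                                  // re-send the single job in flight
}
```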
13. Avoiding Duplicate Emails (Critical)
Duplicate emails destroy trust.
We enforced:
- unique constraints on (EventId, Type)
- idempotency checks before sending
- safe retries
CREATE UNIQUE INDEX UX_Notification_Event
ON NotificationJobs (EventId, Type);
14. Observability in Event-Driven Systems
Async systems fail silently unless you invest in visibility.
What We Added
- Correlation IDs propagated through events
- Structured logs
- Dead-letter topics
- Metrics on business outcomes (emails sent, alerts triggered)
Logs alone were not enough.
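Correlation-ID propagation can ride on Kafka message headers, which the Confluent.Kafka client supports on `Message<TKey, TValue>`. The header name here is a convention of ours, not a standard:

```csharp
using System.Text;
using Confluent.Kafka;

public static class Correlation
{
    // Attach the correlation ID as a header so every hop can log it.
    public static Message<string, string> Tag(
        string key, string payload, string correlationId)
    {
        return new Message<string, string>
        {
            Key = key,
            Value = payload,
            Headers = new Headers
            {
                { "correlation-id", Encoding.UTF8.GetBytes(correlationId) }
            }
        };
    }
}

// On the consumer side:
// var bytes = result.Message.Headers.GetLastBytes("correlation-id");
```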
15. What This Enabled (The Real Win)
Once events existed:
- new services could evolve independently
- notifications moved out of the monolith
- integrations became safer
- behavior became observable
We didn’t break the monolith.
We surrounded it.
16. Common Mistakes to Avoid
❌ Publishing events directly from controllers
❌ Treating Kafka like a message queue
❌ Putting business logic in consumers
❌ Ignoring failure paths
❌ Assuming “exactly once”
Final Thoughts
Kafka was not a modernization goal.
It was a stabilization tool.
By introducing:
- event forwarders
- the outbox pattern
- explicit scheduling
We reduced coupling before refactoring.
That sequencing matters.
📘 Series navigation
⬅️ Previous:
Part 3 – Making Legacy .NET Code Testable
➡️ Next:
Part 5 – Modernizing a Legacy Frontend Incrementally
