How I Reduced Kafka Boilerplate by 90% with Curve - A Declarative Event Library for Spring Boot
Source: Dev.to
The Problem: Too Much Boilerplate
In microservices, publishing events to Kafka is essential but repetitive. A typical event‑publishing method looks like this:
@Service
public class UserService {

    @Autowired private KafkaTemplate<String, String> kafka;
    @Autowired private ObjectMapper objectMapper;
    @Autowired private UserRepository userRepository;

    public User createUser(UserRequest request) {
        User user = userRepository.save(new User(request));
        EventEnvelope event = null;
        try {
            // Manual event creation
            event = EventEnvelope.builder()
                .eventId(UUID.randomUUID().toString())
                .eventType("USER_CREATED")
                .occurredAt(Instant.now())
                .publishedAt(Instant.now())
                .metadata(/* extract actor, trace, source... */)
                .payload(/* map to DTO... */)
                .build();
            // Manual PII masking
            String json = maskPii(objectMapper.writeValueAsString(event));
            // Manual Kafka send with retry
            kafka.send("user-events", json).get(30, TimeUnit.SECONDS);
        } catch (Exception e) {
            log.error("Failed to publish event", e);
            sendToDlq(event);
        }
        return user;
    }
}
30+ lines of boilerplate – and you have to repeat this for every event type.
The Solution: Just Add One Annotation
With Curve the same logic becomes:
@Service
public class UserService {

    @PublishEvent(eventType = "USER_CREATED")
    public User createUser(UserRequest request) {
        return userRepository.save(new User(request));
    }
}
That’s it. Everything else is handled automatically:
- ✅ Event‑ID generation (Snowflake algorithm)
- ✅ Metadata extraction (actor, trace, source)
- ✅ PII masking/encryption
- ✅ Kafka publishing with retry
- ✅ DLQ on failure
- ✅ Metrics collection
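Conceptually, the envelope Curve publishes for the method above might serialize along these lines. The field names follow the builder shown earlier, but the concrete values and the nested metadata/payload shapes here are illustrative, not the library's exact wire format:

```json
{
  "eventId": "201326592034405376",
  "eventType": "USER_CREATED",
  "occurredAt": "2025-01-15T09:30:00Z",
  "publishedAt": "2025-01-15T09:30:00.042Z",
  "metadata": {
    "actor": "api-gateway",
    "traceId": "4bf92f3577b34da6",
    "source": "user-service"
  },
  "payload": {
    "userId": "42",
    "email": "user@***.com"
  }
}
```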
Key Features That Make It Production‑Ready
Automatic PII Protection
Sensitive data is automatically protected with @PiiField:
public class UserEventPayload implements DomainEventPayload {

    @PiiField(type = PiiType.EMAIL, strategy = PiiStrategy.MASK)
    private String email; // "user@example.com" → "user@***.com"

    @PiiField(type = PiiType.PHONE, strategy = PiiStrategy.ENCRYPT)
    private String phone; // AES‑256‑GCM encrypted

    @PiiField(type = PiiType.ID_NO, strategy = PiiStrategy.HASH)
    private String id; // HMAC‑SHA256 hashed
}
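The MASK strategy's behavior for emails can be illustrated with a small stand-alone function. This is a sketch of the transformation shown in the comment above, not Curve's actual implementation:

```java
public class EmailMasker {

    /**
     * Masks the domain of an email while keeping the local part and the TLD:
     * "user@example.com" -> "user@***.com".
     */
    public static String maskEmail(String email) {
        int at = email.indexOf('@');
        int lastDot = email.lastIndexOf('.');
        if (at < 0 || lastDot <= at) {
            return "***"; // not a well-formed address; mask everything
        }
        return email.substring(0, at + 1) + "***" + email.substring(lastDot);
    }
}
```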
Supports AWS KMS and HashiCorp Vault for key management with envelope encryption.
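Envelope encryption means each value is encrypted with a fresh data key, and only that data key is encrypted ("wrapped") by the master key held in KMS or Vault, so the master key never touches the payload directly. A minimal stand-alone sketch of the idea using the JDK's AES-GCM, with an in-memory master key standing in for the KMS call (this is the general technique, not Curve's implementation):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

public class EnvelopeEncryptionSketch {

    record Encrypted(byte[] ciphertext, byte[] iv, byte[] encryptedDataKey, byte[] dataKeyIv) {}

    static final SecureRandom RANDOM = new SecureRandom();

    static byte[] aesGcm(int mode, SecretKey key, byte[] iv, byte[] data) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(mode, key, new GCMParameterSpec(128, iv));
        return cipher.doFinal(data);
    }

    static byte[] newIv() {
        byte[] iv = new byte[12]; // 96-bit nonce, the recommended size for GCM
        RANDOM.nextBytes(iv);
        return iv;
    }

    static SecretKey newMasterKey() throws Exception {
        KeyGenerator gen = KeyGenerator.getInstance("AES");
        gen.init(256);
        return gen.generateKey();
    }

    // Encrypt the field with a one-off data key, then wrap the data key with the master key.
    static Encrypted encrypt(SecretKey masterKey, String plaintext) throws Exception {
        SecretKey dataKey = newMasterKey(); // fresh 256-bit AES data key
        byte[] iv = newIv();
        byte[] ciphertext = aesGcm(Cipher.ENCRYPT_MODE, dataKey, iv,
                plaintext.getBytes(StandardCharsets.UTF_8));
        byte[] dataKeyIv = newIv();
        byte[] wrappedKey = aesGcm(Cipher.ENCRYPT_MODE, masterKey, dataKeyIv, dataKey.getEncoded());
        return new Encrypted(ciphertext, iv, wrappedKey, dataKeyIv);
    }

    // Unwrap the data key with the master key, then decrypt the field.
    static String decrypt(SecretKey masterKey, Encrypted e) throws Exception {
        byte[] rawDataKey = aesGcm(Cipher.DECRYPT_MODE, masterKey, e.dataKeyIv(), e.encryptedDataKey());
        SecretKey dataKey = new SecretKeySpec(rawDataKey, "AES");
        return new String(aesGcm(Cipher.DECRYPT_MODE, dataKey, e.iv(), e.ciphertext()),
                StandardCharsets.UTF_8);
    }

    static String roundTrip(String plaintext) throws Exception {
        SecretKey master = newMasterKey();
        return decrypt(master, encrypt(master, plaintext));
    }
}
```

With a real KMS backend, `encrypt`'s key-wrapping step becomes a remote call, and the wrapped key travels alongside the ciphertext so any holder of KMS decrypt permissions can recover the value.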
3‑Tier Failure Recovery
Events never get lost, even when Kafka is completely down:
Main Topic → DLQ → Local File Backup → S3 Backup (optional)
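The recovery chain above can be pictured as a list of sinks tried in order until one accepts the event. A simplified stand-alone sketch of that pattern (Curve's real adapters speak Kafka, write files, and upload to S3; the names here are illustrative):

```java
import java.util.List;

public class FallbackChainSketch {

    interface EventSink {
        void publish(String event) throws Exception;
        String name();
    }

    /** Tries each tier in order; returns the name of the tier that accepted the event. */
    static String publishWithFallback(List<EventSink> tiers, String event) {
        for (EventSink sink : tiers) {
            try {
                sink.publish(event);
                return sink.name();
            } catch (Exception e) {
                // log and fall through to the next tier
            }
        }
        throw new IllegalStateException("All tiers failed; event lost: " + event);
    }

    static EventSink failing(String name) {
        return new EventSink() {
            public void publish(String e) throws Exception { throw new Exception(name + " unavailable"); }
            public String name() { return name; }
        };
    }

    static EventSink accepting(String name) {
        return new EventSink() {
            public void publish(String e) { /* e.g. append to a local backup file */ }
            public String name() { return name; }
        };
    }

    /** Demo: Kafka and the DLQ are both down, so the local file tier catches the event. */
    static String demo() {
        List<EventSink> tiers = List.of(
                failing("kafka-main"), failing("kafka-dlq"), accepting("local-file"));
        return publishWithFallback(tiers, "{\"eventType\":\"USER_CREATED\"}");
    }
}
```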
Transactional Outbox Pattern
Guarantees atomicity between DB transactions and event publishing:
@PublishEvent(
    eventType = "ORDER_CREATED",
    outbox = true,
    aggregateType = "Order",
    aggregateId = "#result.orderId"
)
@Transactional
public Order createOrder(OrderRequest req) {
    return orderRepo.save(new Order(req));
}
Uses exponential back‑off and SKIP LOCKED to prevent duplicate processing in multi‑instance environments.
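A relay that drains such an outbox typically claims rows with `SELECT ... FOR UPDATE SKIP LOCKED`, so concurrent instances never grab the same row, and reschedules failed rows with exponential back-off. A stand-alone sketch of both pieces; the SQL, table, and column names are illustrative, not Curve's actual schema:

```java
public class OutboxRelaySketch {

    // Illustrative claim query: concurrent relay instances skip rows another
    // instance has already locked, so no outbox row is processed twice.
    static final String CLAIM_SQL = """
        SELECT id, aggregate_type, aggregate_id, payload
        FROM outbox_events
        WHERE status = 'PENDING' AND next_attempt_at <= now()
        ORDER BY id
        LIMIT 100
        FOR UPDATE SKIP LOCKED
        """;

    /** Exponential back-off: base * 2^attempt, capped at a maximum delay. */
    static long backoffMillis(int attempt, long baseMillis, long maxMillis) {
        long millis = baseMillis << Math.min(attempt, 20); // clamp the shift to avoid overflow
        return (millis >= maxMillis || millis <= 0) ? maxMillis : millis;
    }
}
```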
Built‑in Observability
Health check and metrics out of the box:
# Health check
curl http://localhost:8080/actuator/health/curve

{
  "status": "UP",
  "details": {
    "kafkaProducerInitialized": true,
    "clusterId": "lkc-abc123",
    "nodeCount": 3,
    "topic": "event.audit.v1",
    "dlqTopic": "event.audit.dlq.v1"
  }
}

# Custom metrics
curl http://localhost:8080/actuator/curve-metrics

{
  "summary": {
    "totalEventsPublished": 1523,
    "successRate": "99.80%"
  }
}
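The summary figures boil down to two counters; a stand-alone sketch of the computation (the counter names are illustrative, not Curve's internals):

```java
import java.util.Locale;
import java.util.concurrent.atomic.AtomicLong;

public class CurveMetricsSketch {

    final AtomicLong published = new AtomicLong(); // successfully delivered events
    final AtomicLong failed = new AtomicLong();    // events that ended up in the DLQ path

    /** Success rate as a percentage string, e.g. "99.80%". */
    String successRate() {
        long total = published.get() + failed.get();
        if (total == 0) return "100.00%";
        return String.format(Locale.ROOT, "%.2f%%", 100.0 * published.get() / total);
    }
}
```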
Architecture: Hexagonal Design
Curve follows Hexagonal Architecture (Ports & Adapters) to keep the core domain framework‑independent:
curve/
├── core/ # Pure domain (no Spring/Kafka)
│ ├── envelope/ # EventEnvelope, Metadata
│ ├── port/ # EventProducer interface
│ └── validation/ # Domain validators
│
├── spring/ # Spring adapter
│ ├── aop/ # @PublishEvent aspect
│ └── context/ # Context providers
│
├── kafka/ # Kafka adapter
│ └── producer/ # KafkaEventProducer
│
├── kms/ # AWS KMS / Vault adapter
└── spring-boot-autoconfigure/ # Auto‑configuration
This makes the library testable (no framework needed) and extensible (swap Kafka for RabbitMQ, etc.).
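In practice the port is just a plain interface in `core/`, the Kafka producer is one adapter behind it, and tests can substitute an in-memory one. A sketch modeled on the tree above (the names are assumptions, not taken from Curve's source):

```java
import java.util.ArrayList;
import java.util.List;

public class HexagonalSketch {

    /** The outbound port in core/: pure Java, no Spring or Kafka types. */
    interface EventProducer {
        void send(String topic, String envelopeJson);
    }

    /** An in-memory adapter: enough to unit-test the core without any broker. */
    static class InMemoryEventProducer implements EventProducer {
        final List<String> sent = new ArrayList<>();

        public void send(String topic, String envelopeJson) {
            sent.add(topic + ":" + envelopeJson);
        }
    }
}
```

Swapping Kafka for RabbitMQ then means writing one new adapter against the same interface; the core never changes.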
Performance
Benchmarked with JMH on AWS EC2 t3.medium (Kafka 3.8, 3‑node cluster):
| Mode | Throughput |
|---|---|
| Sync | ~500 TPS |
| Async | ~10,000+ TPS |

Thanks to MDC context propagation, trace IDs are preserved even in the async publishing threads.
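MDC values live in a ThreadLocal, so they vanish when work hops to an async publisher thread unless the context is copied across: capture it on the caller thread, restore it inside the task. A generic sketch of that technique, with a plain ThreadLocal standing in for SLF4J's MDC:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class ContextPropagationSketch {

    // Stand-in for org.slf4j.MDC: one trace id per thread.
    static final ThreadLocal<String> TRACE_ID = new ThreadLocal<>();

    /** Captures the caller's trace id and restores it inside the async task. */
    static Runnable withContext(Runnable task) {
        String captured = TRACE_ID.get(); // runs on the caller thread
        return () -> {
            TRACE_ID.set(captured);       // runs on the worker thread
            try {
                task.run();
            } finally {
                TRACE_ID.remove();        // don't leak context into pooled threads
            }
        };
    }

    /** Demo: publish "asynchronously" and report the trace id the worker saw. */
    static String demo() throws InterruptedException {
        TRACE_ID.set("trace-4bf92f35");
        AtomicReference<String> seen = new AtomicReference<>();
        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.submit(withContext(() -> seen.set(TRACE_ID.get())));
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return seen.get();
    }
}
```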
Quick Start
Add Dependency
dependencies {
    implementation 'io.github.closeup1202:curve:0.1.1'
}
Enable Auto‑Configuration (Spring Boot)
@SpringBootApplication
@EnableCurve // optional if you prefer manual configuration
public class Application { }
Publish an Event
@Service
public class OrderService {

    @PublishEvent(eventType = "ORDER_PLACED")
    public Order placeOrder(OrderRequest request) {
        return orderRepository.save(new Order(request));
    }
}
Configure
spring:
  kafka:
    bootstrap-servers: localhost:9092

curve:
  enabled: true
  kafka:
    topic: event.audit.v1
    dlq-topic: event.audit.dlq.v1
Use
@PublishEvent(eventType = "ORDER_CREATED", severity = EventSeverity.INFO)
public Order createOrder(OrderRequest request) {
    return orderRepository.save(new Order(request));
}
That’s all – Curve will generate the envelope, mask PII, send the message to Kafka, handle retries, push to DLQ on failure, and expose health/metrics endpoints automatically.
License
Curve is released under the Apache License 2.0. Feel free to contribute, open issues, or suggest enhancements!
Lessons Learned
Hexagonal Architecture Was Worth It
Initially, I considered coupling directly to Spring. But isolating the core domain made:
- Testing 10× easier – no Spring context needed.
- Evolution safer – frameworks can be swapped without breaking core logic.
- Reusability possible – the core can be used in non‑Spring projects.
Security Defaults Matter
I started with a simple StandardEvaluationContext for SpEL but switched to SimpleEvaluationContext to block dangerous operations (constructor calls, type references). A small change, huge security impact.
Documentation Is Critical for Adoption
I spent 30 % of development time on docs:
- 30+ markdown files (Getting Started, Operations, Troubleshooting)
- English + Korean versions
- MkDocs Material for beautiful GitHub Pages
Result:
📦 Maven Central:
📖 Docs:
What do you think? Have you built similar abstraction layers in your projects? I’d love to hear your experiences in the comments! 💬