# Database-per-Service: The Microservices Data Boundary
*Why each microservice should own its data store, and the real costs of enforcing that boundary.*
The database-per-service pattern is one of the harder constraints to enforce when migrating from a monolith. It’s also one of the most important.
## Why the Boundary Exists
When two services share a database, you’ve created a hidden coupling point. Schema changes in a shared table require coordinating deployments across every service that touches it. You’ve lost independent deployability — one of the core promises of microservices.
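A toy illustration of that hidden coupling (hypothetical services and an in-memory "table"): both services read the shared `users` schema directly, so a schema change to that table breaks both at once.

```python
# Shared table: the schema is an implicit contract between two services.
users = [{"id": 1, "email": "a@example.com", "opted_in": True}]

def billing_contact(user_id):
    # Billing service's query against the shared table.
    return next(u["email"] for u in users if u["id"] == user_id)

def marketing_list():
    # Marketing service's query against the same table.
    return [u["email"] for u in users if u["opted_in"]]

# If the team owning `users` renamed "email" to "email_address", both
# functions would raise KeyError -- neither service can change or deploy
# independently while they share the schema.
```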
## Choosing the Right Store per Service
One benefit of this pattern: you’re free to pick the right tool for each service’s access patterns.
| Service | Access Pattern | Good Fit |
|---|---|---|
| User profiles | Key-value lookup | Redis / DynamoDB |
| Orders | Relational, joins | PostgreSQL |
| Product catalog | Full-text search | Elasticsearch |
| Event log | Append-only, time-series | Kafka / TimescaleDB |
| Recommendations | Graph traversal | Neo4j |
## The Cross-Service Query Problem
The hardest consequence: you lose the JOIN. Queries that were a single SQL statement in the monolith now require coordination across service APIs.
### Option 1: API Composition
The API gateway or a dedicated composer service fetches data from each owning service and assembles the response. This works for simple cases, but latency compounds as more services join the call chain.
### Option 2: CQRS + Read Models
Build a dedicated read model that aggregates data from multiple services via events. Higher complexity, better query performance.
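A sketch of an event-driven read model (the event names are hypothetical): each service publishes domain events, and a projector folds them into a denormalized view that answers cross-service queries locally, with no runtime calls to the owning services.

```python
def project(view, event):
    """Fold one domain event into the denormalized read model."""
    kind = event["type"]
    if kind == "UserRegistered":
        view[event["user_id"]] = {"name": event["name"], "orders": 0}
    elif kind == "OrderPlaced":
        view[event["user_id"]]["orders"] += 1
    return view

# Replay events from two different services into one view.
view = {}
for e in [
    {"type": "UserRegistered", "user_id": 7, "name": "Ada"},
    {"type": "OrderPlaced", "user_id": 7, "order_id": 1},
]:
    project(view, e)

# `view` now answers "orders per user" without querying either service.
```

The trade-off the section names shows up directly: the projector is extra moving machinery (event delivery, replay, eventual consistency), but the resulting query is a local lookup.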
## The Migration Path
Going from a shared database to database-per-service is a journey, not a switch:
1. Identify service boundaries (bounded contexts)
2. Create service-specific schemas within the shared DB
3. Route all access through the owning service's API
4. Migrate data to separate physical stores
5. Remove old shared tables
The Strangler Fig pattern works well here — incrementally extract services while the monolith still handles the rest.
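The routing half of the Strangler Fig pattern can be sketched in a few lines (service names and paths are hypothetical): extracted services take over route prefixes one at a time, and everything not yet claimed still falls through to the monolith.

```python
# Route prefixes claimed by extracted services so far; grows over the
# course of the migration.
EXTRACTED = {
    "/orders": "orders-service",
    "/catalog": "catalog-service",
}

def route(path):
    """Send claimed prefixes to extracted services, the rest to the monolith."""
    for prefix, service in EXTRACTED.items():
        if path.startswith(prefix):
            return service
    return "monolith"
```

Adding an entry to `EXTRACTED` is the "strangling" step: traffic shifts to the new service without touching callers, and the monolith shrinks by one responsibility.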
This is a real cost. Budget for it accordingly.