The database-per-service versus shared database debate remains one of microservices architecture's most contentious decisions. Teams using database-per-service report 85% faster deployment cycles and 3x better team autonomy. However, 43% face challenges with distributed transactions, data consistency, and operational complexity. Meanwhile, shared database architectures maintain ACID guarantees and simpler operations but create tight coupling that slows innovation. With 2025 bringing mature patterns (Saga orchestration, CDC with Debezium 3.0, distributed query engines), and new managed services (AWS Aurora DSQL, CockroachDB Serverless), the decision requires nuanced analysis beyond dogmatic "best practices."
This comprehensive guide provides a decision matrix, migration strategies, and real-world patterns to help you choose the right approach for your specific context.
Understanding the Patterns
Shared Database Pattern
All services access a single database:
```typescript
// Services access the same database directly

// Order Service
class OrderService {
  async createOrder(data: CreateOrderDTO): Promise<Order> {
    return this.db.query(
      `INSERT INTO orders (customer_id, total) VALUES ($1, $2) RETURNING *`,
      [data.customerId, data.total]
    );
  }
}

// Customer Service
class CustomerService {
  async getCustomer(id: string): Promise<Customer> {
    return this.db.query(
      `SELECT * FROM customers WHERE id = $1`,
      [id]
    );
  }

  // Can directly query the orders table
  async getCustomerWithOrders(id: string): Promise<CustomerWithOrders> {
    return this.db.query(
      `SELECT c.*,
              json_agg(o.*) AS orders
       FROM customers c
       LEFT JOIN orders o ON o.customer_id = c.id
       WHERE c.id = $1
       GROUP BY c.id`,
      [id]
    );
  }
}

// Payment Service
class PaymentService {
  async processPayment(orderId: string, amount: number): Promise<void> {
    await this.db.transaction(async (tx) => {
      // Direct transaction across entities
      await tx.query(
        `UPDATE orders SET status = 'paid' WHERE id = $1`,
        [orderId]
      );
      await tx.query(
        `INSERT INTO payments (order_id, amount) VALUES ($1, $2)`,
        [orderId, amount]
      );
    });
    // ACID guarantees maintained
  }
}
```
Benefits:
- Simple transactions (ACID)
- Easy joins and queries
- No data duplication
- Simpler operations
Drawbacks:
- Tight coupling
- Schema coordination required
- Deployment dependencies
- Scaling limitations
- Team bottlenecks
Database Per Service Pattern
Each service owns its database:
```typescript
// Each service has its own database

// Order Service (orders_db)
class OrderService {
  async createOrder(data: CreateOrderDTO): Promise<Order> {
    // Only touches the orders database
    return this.ordersDb.query(
      `INSERT INTO orders (customer_id, total) VALUES ($1, $2) RETURNING *`,
      [data.customerId, data.total]
    );
  }
}

// Customer Service (customers_db)
class CustomerService {
  async getCustomer(id: string): Promise<Customer> {
    // Only touches the customers database
    return this.customersDb.query(
      `SELECT * FROM customers WHERE id = $1`,
      [id]
    );
  }

  // Cannot join with orders directly - must call the Order Service API
  async getCustomerWithOrders(id: string): Promise<CustomerWithOrders> {
    const customer = await this.getCustomer(id);
    // API call to Order Service
    const orders = await this.orderServiceClient.getOrdersByCustomer(id);
    return {
      ...customer,
      orders
    };
  }
}

// Payment Service (payments_db)
class PaymentService {
  async processPayment(orderId: string, amount: number): Promise<void> {
    // No cross-database transaction - track compensations saga-style
    const compensations: Array<() => Promise<void>> = [];
    try {
      // Step 1: Create payment in the local database
      const payment = await this.paymentsDb.query(
        `INSERT INTO payments (order_id, amount, status)
         VALUES ($1, $2, 'pending') RETURNING *`,
        [orderId, amount]
      );
      compensations.push(() => this.cancelPayment(payment.id));

      // Step 2: Call Order Service to update the order
      await this.orderServiceClient.markOrderAsPaid(orderId);
      compensations.push(() => this.orderServiceClient.markOrderAsPending(orderId));

      // Step 3: Complete the payment
      await this.paymentsDb.query(
        `UPDATE payments SET status = 'completed' WHERE id = $1`,
        [payment.id]
      );
    } catch (error) {
      // Undo completed steps in reverse order
      for (const compensate of compensations.reverse()) {
        await compensate();
      }
      throw error;
    }
  }
}
```
Benefits:
- Full autonomy
- Independent scaling
- Technology flexibility
- Faster deployments
- Clear boundaries
Drawbacks:
- Complex transactions
- Data duplication
- Query complexity
- Operational overhead
- Eventual consistency
The 2025 Decision Matrix
| Factor | Shared Database | Database Per Service | Winning Pattern |
|---|---|---|---|
| Team Size | < 5 developers | > 10 developers | DB per service at scale |
| Service Count | < 5 services | > 10 services | DB per service at scale |
| Transaction Complexity | High (multi-entity ACID required) | Low (mostly single-entity) | Shared for complex transactions |
| Query Complexity | High (complex joins, analytics) | Low (simple lookups) | Shared for complex queries |
| Deployment Frequency | < 1/week | > 1/day | DB per service for fast deploys |
| Team Autonomy Need | Low (coordinated releases OK) | High (independent teams) | DB per service for autonomy |
| Scalability Requirements | Uniform (all entities scale same) | Varied (some hot, some cold) | DB per service for varied scale |
| Consistency Requirements | Strong (ACID mandatory) | Eventual OK | Shared for strong consistency |
| Technology Diversity | Single stack preferred | Polyglot needed | DB per service for polyglot |
| Operational Maturity | Low (small ops team) | High (mature DevOps) | Shared for simpler ops |
| Data Duplication Tolerance | Low (single source of truth) | High (eventual consistency OK) | Depends on domain |
| Compliance/Audit | Complex (cross-entity audits) | Simple (entity-level audits) | Depends on requirements |
Decision Framework
Choose Shared Database if:
- You have < 10 developers
- You need strong ACID transactions across entities
- You have complex analytical queries
- Your team lacks distributed systems expertise
- All services scale together
- You're early in the product lifecycle
Choose Database Per Service if:
- You have > 10 developers in multiple teams
- Services have different scaling needs
- You deploy services independently multiple times per day
- You need technology diversity (SQL + NoSQL + Graph)
- You can tolerate eventual consistency
- You have mature DevOps practices
Hybrid Approach (Best of Both):
- Start with shared database
- Extract high-change or high-scale services to their own databases
- Keep tightly coupled entities in shared database
- Use bounded contexts from DDD to identify boundaries
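The matrix above can be turned into a rough scoring aid. The sketch below is illustrative only: the factor names, weights, and thresholds are assumptions for demonstration, not a validated model, so calibrate them against your own context.

```typescript
// Rough scoring aid for the decision matrix above.
// Weights and thresholds are illustrative; adjust for your context.
interface Factors {
  developers: number;            // team size
  services: number;              // service count
  deploysPerWeek: number;        // deployment frequency
  needsCrossEntityAcid: boolean; // multi-entity ACID transactions required
  needsComplexJoins: boolean;    // heavy cross-entity queries/analytics
  matureDevOps: boolean;         // operational maturity
}

function recommendPattern(f: Factors): 'shared' | 'per-service' | 'hybrid' {
  let perService = 0;
  if (f.developers > 10) perService += 2;
  if (f.services > 10) perService += 2;
  if (f.deploysPerWeek > 7) perService += 2;   // more than once per day
  if (f.matureDevOps) perService += 1;
  if (f.needsCrossEntityAcid) perService -= 3; // strong pull toward shared
  if (f.needsComplexJoins) perService -= 2;

  if (perService >= 4) return 'per-service';
  if (perService <= 0) return 'shared';
  return 'hybrid';
}

// Example: 20 devs, 12 services, daily deploys, mature DevOps, no ACID need
console.log(recommendPattern({
  developers: 20, services: 12, deploysPerWeek: 10,
  needsCrossEntityAcid: false, needsComplexJoins: false, matureDevOps: true,
})); // 'per-service'
```

The point is not the exact numbers but making the trade-offs explicit: a single hard requirement like cross-entity ACID can outweigh several softer factors.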
Implementing Database Per Service
Saga Pattern for Distributed Transactions
```typescript
// Saga orchestrator pattern
export class Saga {
  private steps: SagaStep[] = [];
  private completedSteps: SagaStep[] = [];

  async execute(): Promise<void> {
    for (const step of this.steps) {
      try {
        await step.execute();
        this.completedSteps.push(step);
      } catch (error) {
        // Undo completed steps, then surface the failure
        await this.compensate();
        throw error;
      }
    }
  }

  private async compensate(): Promise<void> {
    // Compensate in reverse order
    for (const step of this.completedSteps.reverse()) {
      try {
        await step.compensate();
      } catch (error) {
        console.error('Compensation failed:', error);
        // Log and alert - manual intervention may be needed
      }
    }
  }

  addStep(step: SagaStep): void {
    this.steps.push(step);
  }
}

interface SagaStep {
  execute(): Promise<void>;
  compensate(): Promise<void>;
}

// Order creation saga
export class CreateOrderSaga {
  constructor(
    private orderService: OrderService,
    private inventoryService: InventoryService,
    private paymentService: PaymentService
  ) {}

  async execute(data: CreateOrderDTO): Promise<string> {
    const saga = new Saga();
    let orderId: string;
    let reservationId: string;

    // Step 1: Create order
    saga.addStep({
      execute: async () => {
        orderId = await this.orderService.createOrder(data);
      },
      compensate: async () => {
        await this.orderService.cancelOrder(orderId);
      }
    });

    // Step 2: Reserve inventory
    saga.addStep({
      execute: async () => {
        reservationId = await this.inventoryService.reserveItems(
          data.items,
          orderId
        );
      },
      compensate: async () => {
        await this.inventoryService.releaseReservation(reservationId);
      }
    });

    // Step 3: Process payment
    saga.addStep({
      execute: async () => {
        await this.paymentService.authorizePayment(
          orderId,
          data.paymentDetails
        );
      },
      compensate: async () => {
        await this.paymentService.refundPayment(orderId);
      }
    });

    // Execute all steps
    await saga.execute();
    return orderId;
  }
}

// Usage
const saga = new CreateOrderSaga(orderService, inventoryService, paymentService);
try {
  const orderId = await saga.execute(orderData);
  console.log('Order created:', orderId);
} catch (error) {
  // All completed steps are compensated automatically
  console.error('Order creation failed:', error);
}
```
Event-Driven Data Synchronization
```typescript
// Maintain local copies of data via events

// Customer Service publishes events
export class CustomerService {
  async updateCustomer(id: string, data: UpdateCustomerDTO): Promise<void> {
    await this.customersDb.query(
      `UPDATE customers SET name = $1, email = $2 WHERE id = $3`,
      [data.name, data.email, id]
    );

    // Publish event
    await this.eventBus.publish(new CustomerUpdatedEvent({
      customerId: id,
      name: data.name,
      email: data.email,
      timestamp: new Date()
    }));
  }
}

// Order Service subscribes and maintains a local copy
export class OrderService {
  async onCustomerUpdated(event: CustomerUpdatedEvent): Promise<void> {
    // Update local denormalized customer data
    await this.ordersDb.query(
      `UPDATE order_customer_cache
       SET name = $1, email = $2, updated_at = $3
       WHERE customer_id = $4`,
      [event.name, event.email, event.timestamp, event.customerId]
    );

    // Also update recent orders for reporting
    await this.ordersDb.query(
      `UPDATE orders
       SET customer_name = $1, customer_email = $2
       WHERE customer_id = $3 AND created_at > NOW() - INTERVAL '30 days'`,
      [event.name, event.email, event.customerId]
    );
  }
}

// Order Service schema includes denormalized customer data
/*
CREATE TABLE orders (
  id UUID PRIMARY KEY,
  customer_id UUID NOT NULL,
  -- Denormalized customer data (from events)
  customer_name VARCHAR(255),
  customer_email VARCHAR(255),
  total_amount DECIMAL(10, 2),
  status VARCHAR(50),
  created_at TIMESTAMP
);

CREATE TABLE order_customer_cache (
  customer_id UUID PRIMARY KEY,
  name VARCHAR(255),
  email VARCHAR(255),
  updated_at TIMESTAMP
);
*/
```
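One caveat about the publish-after-update approach above: the database write and the event publish are not atomic, so a crash between the two can silently drop an event. A common mitigation is the transactional outbox pattern: write the event to an outbox table in the same transaction as the business data, and let a relay (or CDC, as in the next section) ship it to the bus. The sketch below uses in-memory stand-ins so the idea is self-contained; all names are illustrative.

```typescript
// Transactional outbox, sketched with in-memory stand-ins.
// In a real system, `customers` and `outbox` are tables written
// in ONE database transaction; a relay process drains the outbox.
interface OutboxRecord { id: number; type: string; payload: unknown; }

class CustomerStore {
  customers = new Map<string, { name: string; email: string }>();
  outbox: OutboxRecord[] = [];
  private seq = 0;

  // Both writes happen together (same transaction in a real database)
  updateCustomer(id: string, name: string, email: string): void {
    this.customers.set(id, { name, email });
    this.outbox.push({
      id: ++this.seq,
      type: 'CustomerUpdated',
      payload: { customerId: id, name, email },
    });
  }
}

// Relay: drain the outbox to the event bus, removing only after success
function drainOutbox(store: CustomerStore, publish: (r: OutboxRecord) => void): void {
  while (store.outbox.length > 0) {
    const record = store.outbox[0];
    publish(record);       // at-least-once: may repeat if the relay crashes here
    store.outbox.shift();  // remove only after the publish succeeds
  }
}

const store = new CustomerStore();
store.updateCustomer('c1', 'Ada', 'ada@example.com');
const published: OutboxRecord[] = [];
drainOutbox(store, (r) => published.push(r));
console.log(published.length); // 1
```

Because the relay delivers at least once, consumers must be idempotent (see the best practices section below).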
Change Data Capture (CDC) for Data Sync
```typescript
// Use Debezium to capture changes and sync data

// Docker Compose configuration
/*
version: '3.8'
services:
  postgres-customers:
    image: postgres:16
    environment:
      POSTGRES_DB: customers
      # Enable logical replication (initdb -c requires PostgreSQL 16+)
      POSTGRES_INITDB_ARGS: "-c wal_level=logical"

  debezium:
    image: debezium/connect:3.0
    environment:
      BOOTSTRAP_SERVERS: kafka:9092
      CONFIG_STORAGE_TOPIC: debezium_configs
      OFFSET_STORAGE_TOPIC: debezium_offsets

  kafka:
    image: confluentinc/cp-kafka:7.6
*/

// Debezium connector configuration
const debeziumConfig = {
  name: 'customers-connector',
  config: {
    'connector.class': 'io.debezium.connector.postgresql.PostgresConnector',
    'database.hostname': 'postgres-customers',
    'database.port': '5432',
    'database.user': 'debezium',
    'database.password': 'secret',
    'database.dbname': 'customers',
    'table.include.list': 'public.customers',
    'topic.prefix': 'customers',
    'plugin.name': 'pgoutput'
  }
};

// Order Service consumes CDC events
export class OrderServiceCDCConsumer {
  async consumeCustomerChanges(): Promise<void> {
    const consumer = kafka.consumer({ groupId: 'order-service-cdc' });
    await consumer.subscribe({ topic: 'customers.public.customers' });

    await consumer.run({
      eachMessage: async ({ message }) => {
        if (!message.value) return; // tombstone messages carry no payload
        const change = JSON.parse(message.value.toString());

        if (change.op === 'u') {
          // Update operation
          await this.updateCustomerCache({
            customerId: change.after.id,
            name: change.after.name,
            email: change.after.email
          });
        } else if (change.op === 'd') {
          // Delete operation
          await this.deleteCustomerCache(change.before.id);
        }
      }
    });
  }

  private async updateCustomerCache(data: any): Promise<void> {
    await this.ordersDb.query(
      `INSERT INTO order_customer_cache (customer_id, name, email, updated_at)
       VALUES ($1, $2, $3, NOW())
       ON CONFLICT (customer_id) DO UPDATE SET
         name = $2, email = $3, updated_at = NOW()`,
      [data.customerId, data.name, data.email]
    );
  }
}
```
API Composition Pattern for Queries
```typescript
// BFF (Backend for Frontend) composes data from multiple services
export class OrderDetailsComposer {
  constructor(
    private orderService: OrderServiceClient,
    private customerService: CustomerServiceClient,
    private inventoryService: InventoryServiceClient,
    private paymentService: PaymentServiceClient
  ) {}

  async getOrderDetails(orderId: string): Promise<OrderDetailsDTO> {
    // Fetch the order first - its customerId is needed for the customer lookup
    const order = await this.orderService.getOrder(orderId);

    // Then fetch the remaining data in parallel
    const [customer, items, payment] = await Promise.all([
      this.customerService.getCustomer(order.customerId),
      this.inventoryService.getOrderItems(orderId),
      this.paymentService.getPaymentStatus(orderId)
    ]);

    // Compose into a single response
    return {
      order: {
        id: order.id,
        status: order.status,
        createdAt: order.createdAt
      },
      customer: {
        name: customer.name,
        email: customer.email,
        phone: customer.phone
      },
      items: items.map(item => ({
        productId: item.productId,
        name: item.name,
        quantity: item.quantity,
        price: item.price
      })),
      payment: {
        status: payment.status,
        method: payment.method,
        amount: payment.amount
      }
    };
  }
}

// GraphQL alternative - DataLoader prevents N+1 queries
export class OrderGraphQLResolver {
  private customerLoader = new DataLoader(async (customerIds: readonly string[]) => {
    // Batch fetch customers
    return this.customerService.getCustomersBatch(customerIds);
  });

  @Query(() => Order)
  async order(@Arg('id') id: string): Promise<Order> {
    return this.orderService.getOrder(id);
  }

  @ResolveField(() => Customer)
  async customer(@Root() order: Order): Promise<Customer> {
    // The loader batches lookups made within a single tick
    return this.customerLoader.load(order.customerId);
  }
}
```
Migration Strategies
Strategy 1: Extract Service Gradually
```typescript
// Phase 1: Identify a bounded context
// Analyze dependencies and coupling

// Before: everything in one database
/*
┌─────────────────────────────┐
│      Monolith Database      │
├─────────────────────────────┤
│ customers                   │
│ orders                      │
│ order_items                 │
│ products                    │
│ inventory                   │
│ payments                    │
└─────────────────────────────┘
*/

// Phase 2: Create a read-only replica for the new service
// The new service reads from the replica, writes to the old database
export class InventoryService {
  async getStock(productId: string): Promise<number> {
    // Read from the replica
    return this.replicaDb.query(
      `SELECT quantity FROM inventory WHERE product_id = $1`,
      [productId]
    );
  }

  async reserveStock(productId: string, quantity: number): Promise<void> {
    // Still write to the primary database
    await this.primaryDb.query(
      `UPDATE inventory
       SET quantity = quantity - $1
       WHERE product_id = $2`,
      [quantity, productId]
    );
  }
}

// Phase 3: Implement CDC to sync data
// Set up Debezium to sync the inventory table to the new database

// Phase 4: Switch writes to the new database
export class InventoryServiceV2 {
  async reserveStock(productId: string, quantity: number): Promise<void> {
    // Now write to the new database
    await this.inventoryDb.query(
      `UPDATE inventory
       SET quantity = quantity - $1
       WHERE product_id = $2`,
      [quantity, productId]
    );

    // Publish an event for other services
    await this.eventBus.publish(new StockReservedEvent({
      productId,
      quantity,
      timestamp: new Date()
    }));
  }
}

// Phase 5: Update dependent services to consume events
export class OrderService {
  async onStockReserved(event: StockReservedEvent): Promise<void> {
    // Update the local cache
    await this.updateProductStockCache(
      event.productId,
      event.quantity
    );
  }
}

// Phase 6: Decommission old tables
// Once all services are migrated, drop the inventory tables from the monolith
```
Strategy 2: Strangler Fig Pattern
```typescript
// Gradually route traffic to the new service
export class DatabaseRouter {
  constructor(
    private oldDb: Database,
    private newDb: Database,
    private migrationPercentage: number = 0
  ) {}

  async query(sql: string, params: any[]): Promise<any> {
    // Route based on rollout percentage
    if (this.shouldUseNewDatabase()) {
      try {
        // Try the new database
        const result = await this.newDb.query(sql, params);

        // Double-write for consistency verification
        // (not atomic - reconcile the two databases offline)
        if (this.isWriteOperation(sql)) {
          await this.oldDb.query(sql, params);
        }
        return result;
      } catch (error) {
        // Fall back to the old database
        console.error('New database failed, falling back:', error);
        return this.oldDb.query(sql, params);
      }
    }

    // Use the old database
    return this.oldDb.query(sql, params);
  }

  private shouldUseNewDatabase(): boolean {
    // Gradual rollout: 0% -> 10% -> 50% -> 100%
    return Math.random() * 100 < this.migrationPercentage;
  }

  private isWriteOperation(sql: string): boolean {
    return /^(INSERT|UPDATE|DELETE)/i.test(sql.trim());
  }
}

// Increase the percentage gradually:
// Day 1: 0%   (new database deployed, not used)
// Day 2: 10%  (monitor errors)
// Day 3: 50%  (if no errors)
// Day 4: 100% (full migration)
```
Handling Cross-Service Queries
Pattern 1: CQRS with Materialized Views
```typescript
// Maintain optimized read models

// Order Service publishes events
export class OrderService {
  async createOrder(data: CreateOrderDTO): Promise<void> {
    const order = await this.ordersDb.query(
      `INSERT INTO orders (...) VALUES (...) RETURNING *`,
      [...]
    );
    await this.eventBus.publish(new OrderCreatedEvent(order));
  }
}

// Analytics Service subscribes and builds a materialized view
export class AnalyticsService {
  async onOrderCreated(event: OrderCreatedEvent): Promise<void> {
    // Fetch related data
    const customer = await this.customerCache.get(event.customerId);
    const items = await this.productCache.getMany(event.productIds);

    // Insert into a denormalized view optimized for analytics
    await this.analyticsDb.query(
      `INSERT INTO order_analytics_view (
         order_id, customer_id, customer_name, customer_segment,
         total_amount, product_categories, created_at
       ) VALUES ($1, $2, $3, $4, $5, $6, $7)`,
      [
        event.orderId,
        customer.id,
        customer.name,
        customer.segment,
        event.totalAmount,
        items.map(i => i.category),
        event.createdAt
      ]
    );
  }

  // The complex analytical query is now simple
  async getRevenueBySegment(startDate: Date, endDate: Date): Promise<any> {
    return this.analyticsDb.query(
      `SELECT
         customer_segment,
         SUM(total_amount) AS revenue,
         COUNT(*) AS orders
       FROM order_analytics_view
       WHERE created_at BETWEEN $1 AND $2
       GROUP BY customer_segment`,
      [startDate, endDate]
    );
  }
}
```
Pattern 2: Distributed Query Engine
```typescript
// Use Presto/Trino to query multiple databases

// Presto catalog configuration
/*
# customer-db.properties
connector.name=postgresql
connection-url=jdbc:postgresql://customer-db:5432/customers
connection-user=presto
connection-password=secret

# order-db.properties
connector.name=postgresql
connection-url=jdbc:postgresql://order-db:5432/orders
connection-user=presto
connection-password=secret
*/

// Query across databases with Presto SQL
const query = `
  SELECT
    c.name AS customer_name,
    COUNT(o.id) AS order_count,
    SUM(o.total_amount) AS total_revenue
  FROM "customer-db".public.customers c
  LEFT JOIN "order-db".public.orders o ON o.customer_id = c.id
  WHERE o.created_at >= DATE '2025-01-01'
  GROUP BY c.name
  ORDER BY total_revenue DESC
  LIMIT 100
`;

// Execute via a Presto client
export class DistributedQueryService {
  async executeQuery(sql: string): Promise<any[]> {
    const result = await this.prestoClient.execute(sql);
    return result.rows;
  }
}

// Note: use this for analytics/reporting, not real-time request paths
```
Real-World Cost Analysis
| Aspect | Shared Database | Database Per Service | Winner |
|---|---|---|---|
| Infrastructure | $500/month (1 RDS instance) | $2000/month (4 RDS instances) | Shared (4x cheaper) |
| Development Speed | Initial: Fast, Later: Slow | Initial: Slow, Later: Fast | Depends on phase |
| Deployment Time | 45 min (coordinated) | 10 min (independent) | DB per service |
| Incident Resolution | Easy (single database) | Complex (distributed trace) | Shared |
| Team Velocity | Slows at 10+ devs | Scales linearly | DB per service |
| Data Inconsistency Risk | Low (ACID) | Medium (eventual) | Shared |
| Scaling Cost | High (vertical only) | Low (horizontal) | DB per service |
| Operational Overhead | 1 DBA | 1 DBA + DevOps | Shared |
Illustrative 3-year total cost (team of 20 developers):
- Shared Database: $250K infrastructure + $600K in lost velocity = $850K
- Database Per Service: $350K infrastructure + $200K initial migration = $550K
In this model, database per service wins at scale: the team-velocity gains outweigh the higher infrastructure bill.
Best Practices
For Shared Database:
- Define clear ownership boundaries for tables
- Use database views to hide complexity
- Implement row-level security for isolation
- Plan coordinated schema migrations
- Monitor query performance across services
For Database Per Service:
- Start with minimal data duplication
- Implement idempotent event handlers
- Use correlation IDs for distributed tracing
- Set up comprehensive monitoring (projection lag, saga failures)
- Plan for data evolution (event versioning, schema migration)
- Test saga compensation logic thoroughly
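The "implement idempotent event handlers" point above deserves a concrete shape: with at-least-once delivery, every handler must tolerate redelivery. One common approach is a processed-events ledger keyed by event ID, checked before applying and updated in the same transaction as the handler's own writes. The sketch below stands the ledger in with an in-memory set; all identifiers are illustrative.

```typescript
// Idempotent event handler via a processed-event ledger.
// In production the ledger is a table updated in the same
// transaction as the handler's own writes.
interface CustomerUpdatedEvent { eventId: string; customerId: string; name: string; }

class IdempotentHandler {
  private processed = new Set<string>(); // stands in for a ledger table
  applied = 0;                           // how many events actually took effect

  handle(event: CustomerUpdatedEvent): void {
    if (this.processed.has(event.eventId)) return; // duplicate: no-op
    // ...apply the update to the local cache here...
    this.applied++;
    this.processed.add(event.eventId);   // same transaction in a real database
  }
}

const handler = new IdempotentHandler();
const event = { eventId: 'evt-1', customerId: 'c1', name: 'Ada' };
handler.handle(event);
handler.handle(event); // redelivered duplicate is ignored
console.log(handler.applied); // 1
```

Keying on a producer-assigned event ID (rather than payload contents) is what makes this safe even when the same logical change is legitimately emitted twice.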
Hybrid Approach:
- Group tightly coupled entities in shared database
- Extract high-scale or high-change services
- Use bounded contexts from DDD
- Start shared, extract gradually
Conclusion
The database-per-service versus shared database decision depends on team size, deployment frequency, transaction complexity, and operational maturity. Small teams (< 10 developers) with complex transactions benefit from shared databases' simplicity and ACID guarantees. Large teams (> 20 developers) deploying multiple times daily gain 85% faster deployment cycles with database-per-service, despite eventual consistency challenges.
The key is matching pattern to context: don't prematurely extract databases before team size justifies the complexity, but don't delay extraction until tight coupling cripples velocity. Use the decision matrix to evaluate your specific factors, start with a hybrid approach (shared database with gradual extraction), and measure the impact on deployment frequency and team autonomy.
Success comes not from religious adherence to either pattern, but from pragmatic application based on your organization's maturity, scale, and constraints.
Next Steps
- Assess current state: Team size, deployment frequency, transaction patterns
- Identify pain points: Deployment bottlenecks, scaling issues, team conflicts
- Start small: Extract one service to its own database
- Implement patterns: Saga for transactions, events for sync, API composition for queries
- Measure impact: Deployment time, team velocity, incident resolution time
- Iterate: Extract more services if benefits outweigh costs