NenDB Architecture
NenDB is built on Data-Oriented Design (DOD) principles to maximize performance through cache efficiency, static memory allocation, and batch processing. Every architectural decision prioritizes predictable performance and minimal runtime overhead.
Core Architectural Principles
Data-Oriented Design
Structure data for optimal cache performance and memory access patterns rather than object-oriented abstractions.
- Structure-of-Arrays (SoA) layout
- Cache-friendly data traversal
- Minimal memory fragmentation
Batch Processing
Group operations to amortize costs and maximize throughput across network, disk, and system calls.
- Vectorized operations
- Reduced per-operation overhead
- Optimized memory bandwidth usage
Static Allocation
All memory is allocated at startup, which eliminates runtime allocation overhead and garbage-collection pauses.
- Zero runtime allocations
- Predictable memory usage
- No GC pauses or fragmentation
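The static-allocation principle can be sketched as a fixed-capacity pool. The pool below is a hypothetical illustration (the names `NodePool` and `MAX_NODES` are not from NenDB's actual source): every slot is reserved at comptime, so inserts after startup never touch the heap.

```zig
const std = @import("std");

// Hypothetical fixed-capacity node pool: all storage is reserved up
// front, so inserting never allocates at runtime.
const MAX_NODES = 1024;

const NodePool = struct {
    ids: [MAX_NODES]u64 = undefined,
    len: usize = 0,

    fn insert(self: *NodePool, id: u64) !void {
        if (self.len == MAX_NODES) return error.PoolFull;
        self.ids[self.len] = id;
        self.len += 1;
    }
};

test "insert never allocates" {
    var pool = NodePool{};
    try pool.insert(42);
    try std.testing.expectEqual(@as(usize, 1), pool.len);
}
```

A full database would hold many such pools (nodes, edges, payload arenas), all sized at startup; running out of a pool surfaces as an explicit error rather than an allocation stall.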
System Architecture
Client Layer
Applications connect over TCP with binary serialization for minimal overhead.
TCP Client → Binary Protocol → Batch Requests → NenDB Server
Server Layer
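The actual NenDB wire format is not specified here, but length-prefixed binary framing of the kind implied by "Binary Protocol → Batch Requests" might look like the following sketch (the `RequestHeader` layout and field names are assumptions for illustration):

```zig
const std = @import("std");

// Hypothetical request framing: a fixed-size header followed by a
// batch payload. The real NenDB header may differ.
const RequestHeader = packed struct {
    magic: u32, // protocol identifier
    op_count: u32, // operations in this batch
    payload_len: u64, // bytes that follow the header
};

fn encodeHeader(header: RequestHeader, buf: *[16]u8) void {
    std.mem.writeInt(u32, buf[0..4], header.magic, .little);
    std.mem.writeInt(u32, buf[4..8], header.op_count, .little);
    std.mem.writeInt(u64, buf[8..16], header.payload_len, .little);
}
```

Fixed-size headers let the server read exactly 16 bytes, learn the batch size, and then issue a single read for the whole payload, which keeps per-operation syscall overhead low.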
High-performance TCP server with connection pooling and concurrent batch processing.
Connection Pool → Request Parser → Batch Processor → Response Serializer
Processing Layer
Graph algorithms and data operations optimized for cache performance and inline execution.
Query Engine → DOD Algorithms → Inline Functions → Vectorized Operations
Storage Layer
Write-ahead logging with static memory pools and crash-safe batch commits.
Memory Pools → WAL Buffer → Atomic Commits → Disk Persistence
Data-Oriented Data Structures
Node Storage (Structure-of-Arrays)
// Traditional object-oriented layout (AoS - Array of Structures)
const Node = struct {
    id: u64,
    node_type: u32, // `type` is a reserved word in Zig, so the field is renamed
    data: []u8,
    edges: []u64,
};
nodes: []Node

// NenDB data-oriented layout (SoA - Structure of Arrays)
const NodeStorage = struct {
    ids: []u64, // All node IDs together
    node_types: []u32, // All node types together
    data: [][]u8, // All node data together
    edges: [][]u64, // All edge lists together
};

Benefits: better cache locality, more vectorization opportunities, and higher memory bandwidth utilization when processing many nodes at once.
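The cache-locality benefit can be seen in a scan that touches only one field. With the SoA layout, counting nodes of a given type streams through a contiguous `u32` array instead of striding over full node records; the helper below is an illustrative sketch, not NenDB's actual query code:

```zig
const std = @import("std");

// SoA scan: only the contiguous `types` column is read, so the
// traversal is cache-friendly and trivially vectorizable.
fn countByType(types: []const u32, wanted: u32) usize {
    var count: usize = 0;
    for (types) |t| {
        if (t == wanted) count += 1;
    }
    return count;
}

test "soa scan" {
    const types = [_]u32{ 1, 2, 1, 3, 1 };
    try std.testing.expectEqual(@as(usize, 3), countByType(&types, 1));
}
```

In an AoS layout the same scan would pull each node's unused `id`, `data`, and `edges` fields through the cache alongside the one field it needs.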
Batch Operation Queue
const BatchQueue = struct {
    operation_types: []OpType, // CREATE, UPDATE, DELETE, QUERY
    node_ids: []u64, // Target node IDs
    payloads: [][]u8, // Operation payloads
    batch_size: usize, // Current batch size
    capacity: usize, // Maximum batch capacity
};

Benefits: processes multiple operations in one pass, reduces system-call overhead, and keeps memory access patterns predictable.
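A minimal fill-and-drain cycle for such a queue might look like the sketch below. The `enqueue`/`flush` helpers and the comptime capacity are assumptions for illustration; NenDB's actual batching API is not shown in this document:

```zig
const std = @import("std");

const OpType = enum { create, update, delete, query };

// Illustrative batch queue with a fixed, statically allocated capacity.
const CAP = 64;
var operation_types: [CAP]OpType = undefined;
var node_ids: [CAP]u64 = undefined;
var batch_size: usize = 0;

fn enqueue(op: OpType, id: u64) !void {
    if (batch_size == CAP) return error.BatchFull;
    operation_types[batch_size] = op;
    node_ids[batch_size] = id;
    batch_size += 1;
}

// Drain the batch in one pass; returns how many ops were processed.
fn flush() usize {
    const n = batch_size;
    batch_size = 0;
    return n;
}

test "batch fill and drain" {
    try enqueue(.create, 1);
    try enqueue(.update, 1);
    try std.testing.expectEqual(@as(usize, 2), flush());
}
```

Because the three columns are separate arrays, the flush loop can dispatch on `operation_types` alone and only touch `node_ids` and payloads for the ops that need them.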
Memory Management Strategy
Static Allocation Phase
Runtime Constraints
No malloc/free calls during operation: all memory is managed through pre-allocated pools.
This eliminates allocation pauses, fragmentation, and unpredictable memory overhead.
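One standard-library way to honor this constraint for transient scratch space is `std.heap.FixedBufferAllocator`, which hands out slices from a pre-reserved buffer and never calls into the OS allocator. Whether NenDB uses this exact type is an assumption; the sketch shows the pattern:

```zig
const std = @import("std");

// Scratch allocations are carved out of a fixed backing buffer, so no
// malloc/free is issued at runtime.
test "fixed buffer scratch" {
    var backing: [256]u8 = undefined;
    var fba = std.heap.FixedBufferAllocator.init(&backing);
    const alloc = fba.allocator();

    const scratch = try alloc.alloc(u64, 8); // comes from `backing`
    scratch[0] = 123;
    try std.testing.expectEqual(@as(u64, 123), scratch[0]);
}
```

If the buffer is exhausted the allocator returns `error.OutOfMemory` immediately, which turns a would-be allocation stall into an explicit, handleable condition.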
Performance Characteristics
- Batch processing with DOD data structures
- Cache-optimized data access patterns
- Structure-of-arrays memory layout
Key Implementation Details
Inline Functions
Extensive use of inline functions to enable compiler optimizations and eliminate function call overhead in hot paths.
inline fn processNodeBatch(nodes: []Node, operation: Operation) void
Zig 0.15.1 Optimizations
Leverages Zig's compile-time features and new I/O interfaces for zero-overhead abstractions.
- Compile-time memory layout optimization
- Zero-cost error handling
- Vectorization hints and SIMD instructions
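Zig exposes SIMD through the `@Vector` builtin, and the SoA layout makes it directly applicable: with all node types contiguous, several type tags can be compared per step. The function below is a sketch of that idea, not NenDB's actual code:

```zig
const std = @import("std");

// Compare four contiguous type tags per iteration via @Vector, with a
// scalar loop for the tail that doesn't fill a full lane.
fn anyOfTypeSimd(types: []const u32, wanted: u32) bool {
    const Lane = @Vector(4, u32);
    const needle: Lane = @splat(wanted);
    var i: usize = 0;
    while (i + 4 <= types.len) : (i += 4) {
        const chunk: Lane = types[i..][0..4].*;
        if (@reduce(.Or, chunk == needle)) return true;
    }
    while (i < types.len) : (i += 1) { // scalar tail
        if (types[i] == wanted) return true;
    }
    return false;
}

test "simd scan" {
    const types = [_]u32{ 7, 8, 9, 10, 3 };
    try std.testing.expect(anyOfTypeSimd(&types, 3));
    try std.testing.expect(!anyOfTypeSimd(&types, 4));
}
```

Because `@Vector` lowers to the target's native SIMD instructions where available, the same source stays portable while still exploiting the contiguous SoA columns.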