In-Memory Cache

ZenoCache

Ultrafast Caching Reimagined

Hyper-modern, ultrafast in-memory cache written in Rust for Kubernetes environments. Drop-in Redis replacement with sub-millisecond latency, zero GC pauses, and millions of operations per second.

  • 19μs P99 Latency
  • 1.8M Ops/Second
  • 2.7x Faster than Redis
Philosophy

Speed Through Simplicity

Rust gives ZenoCache memory safety and fearless concurrency without a garbage collector, so there are no stop-the-world pauses to blow out tail latency.

  • <1ms P99 Latency
  • 0 GC Pauses
  • 256 Shards
  • K8s Native Integration
Features

Enterprise-Grade Performance

Everything you need for high-performance caching at scale. Built for organizations that demand the best.

Rust-Powered Performance

Written in Rust for zero garbage collection pauses, memory safety, and fearless concurrency. No stop-the-world events.

Redis Protocol Compatible

Full RESP2 protocol support with 37 commands. A drop-in replacement that works with any Redis client library.

Smart Eviction Policies

Choose between LRU, TinyLFU, or NoOp eviction. TinyLFU provides superior hit rates for real-world workloads.

Kubernetes Native

Production-ready with health checks, graceful shutdown, HPA support, and ServiceMonitor for Prometheus Operator.

Prometheus Metrics

10+ metrics including hit/miss rates, operation latency histograms, connection counts, and memory usage gauges.

gRPC Protocol

High-performance binary protocol with 12 RPC endpoints for new applications seeking maximum efficiency.

Architecture

Sharded Design for Scale

256 configurable shards deliver near-lock-free concurrency without lock-free complexity: parallel access across cores with minimal contention.

Layer 1

Request Router

AHash-based key distribution across shards. High-quality, non-cryptographic hashing for even distribution.
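As a sketch of how hash-based shard routing works. Note the stand-in: ZenoCache uses AHash, but std's `DefaultHasher` is used here so the example compiles with no external crates; the routing idea is identical.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

const SHARD_COUNT: usize = 256; // a power of two, so we can mask instead of mod

/// Map a key to one of SHARD_COUNT shards. Same key, same shard, every time.
fn shard_for(key: &str) -> usize {
    let mut h = DefaultHasher::new();
    key.hash(&mut h);
    (h.finish() as usize) & (SHARD_COUNT - 1)
}

fn main() {
    let s = shard_for("user:42");
    assert!(s < SHARD_COUNT);
    // Routing is deterministic: the same key always lands on the same shard.
    assert_eq!(s, shard_for("user:42"));
    println!("user:42 -> shard {s}");
}
```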

Layer 2

Shard Manager

256 independent shards with parking_lot RwLock. Per-shard memory limits with automatic eviction.
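A minimal sharded map along these lines, with `std::sync::RwLock` standing in for `parking_lot::RwLock` and a smaller shard count for brevity (both are assumptions made so the sketch runs with std alone). Because each shard owns its own lock, writers to different shards never contend.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};
use std::sync::RwLock;

const SHARD_COUNT: usize = 16; // ZenoCache defaults to 256; smaller here for brevity

/// A minimal sharded map: one RwLock per shard, keys routed by hash.
struct ShardedMap {
    shards: Vec<RwLock<HashMap<String, Vec<u8>>>>,
}

impl ShardedMap {
    fn new() -> Self {
        Self {
            shards: (0..SHARD_COUNT).map(|_| RwLock::new(HashMap::new())).collect(),
        }
    }

    /// Pick the shard responsible for `key`.
    fn shard(&self, key: &str) -> &RwLock<HashMap<String, Vec<u8>>> {
        let mut h = DefaultHasher::new();
        key.hash(&mut h);
        &self.shards[(h.finish() as usize) & (SHARD_COUNT - 1)]
    }

    fn set(&self, key: &str, value: &[u8]) {
        // Write lock is held only on this one shard, not the whole cache.
        self.shard(key).write().unwrap().insert(key.to_string(), value.to_vec());
    }

    fn get(&self, key: &str) -> Option<Vec<u8>> {
        self.shard(key).read().unwrap().get(key).cloned()
    }
}

fn main() {
    let map = ShardedMap::new();
    map.set("user:42", b"alice");
    assert_eq!(map.get("user:42").as_deref(), Some(&b"alice"[..]));
    assert_eq!(map.get("missing"), None);
}
```

Per-shard memory accounting and eviction would hang off each shard's map in the real design; they are omitted here.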

Layer 3

Eviction Engine

LRU with O(1) operations or TinyLFU with Count-Min Sketch frequency tracking and SLRU segments.

Layer 4

TTL Management

Hierarchical timing wheel with overflow handling. Lazy expiration on read plus background cleanup task.
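The core idea of a timing wheel can be sketched in a few lines: keys whose TTLs land in the same tick share a slot, so scheduling and expiry are O(1) amortized. This is a simplified single-level illustration, not ZenoCache's actual implementation, which is hierarchical with overflow handling.

```rust
const WHEEL_SLOTS: u64 = 60; // one slot per second, one minute of range

struct TimingWheel {
    slots: Vec<Vec<String>>, // keys scheduled to expire at each tick
    current_tick: u64,
}

impl TimingWheel {
    fn new() -> Self {
        Self {
            slots: (0..WHEEL_SLOTS).map(|_| Vec::new()).collect(),
            current_tick: 0,
        }
    }

    /// Schedule `key` to expire `ttl_secs` from now. In the real design,
    /// TTLs beyond the wheel's range go to an overflow structure.
    fn schedule(&mut self, key: &str, ttl_secs: u64) {
        assert!(ttl_secs < WHEEL_SLOTS, "would go to the overflow wheel");
        let slot = ((self.current_tick + ttl_secs) % WHEEL_SLOTS) as usize;
        self.slots[slot].push(key.to_string());
    }

    /// Advance one tick and drain the keys that just expired.
    fn tick(&mut self) -> Vec<String> {
        self.current_tick += 1;
        let slot = (self.current_tick % WHEEL_SLOTS) as usize;
        std::mem::take(&mut self.slots[slot])
    }
}

fn main() {
    let mut wheel = TimingWheel::new();
    wheel.schedule("session:1", 1);
    wheel.schedule("session:2", 2);
    assert_eq!(wheel.tick(), vec!["session:1".to_string()]);
    assert_eq!(wheel.tick(), vec!["session:2".to_string()]);
}
```

Lazy expiration complements this: a read checks the entry's deadline directly, so a key can be reported expired even before the background tick reaches its slot.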

Performance

Built for Speed

ZenoCache is engineered from the ground up for maximum throughput and minimum latency.

Throughput

  • ~500K ops/sec single-threaded
  • ~3-4M ops/sec with 8 cores
  • Request pipelining support
  • Response batching enabled

Latency

  • P99 latency under 1ms
  • Average lock hold time <1 microsecond
  • Zero garbage collection pauses
  • TCP_NODELAY for minimal network delay
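The TCP_NODELAY point is worth a concrete look: disabling Nagle's algorithm sends small request and response packets immediately instead of coalescing them, which is what keeps per-operation network latency low for a chatty cache protocol. A std-only sketch (the loopback listener exists only so the example can connect to something):

```rust
use std::net::{TcpListener, TcpStream};

fn main() -> std::io::Result<()> {
    // A loopback listener just so the example has something to connect to.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;

    let stream = TcpStream::connect(addr)?;
    // Disable Nagle's algorithm: small writes go out immediately.
    stream.set_nodelay(true)?;
    assert!(stream.nodelay()?);
    Ok(())
}
```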

Efficiency

  • Lock contention rate <1%
  • O(1) LRU operations
  • Minimal eviction overhead
  • ~100 bytes per entry overhead

Benchmarks

ZenoCache vs Redis

Real benchmark results comparing ZenoCache against Redis 7. Tested on AMD Ryzen AI MAX+ 395.

Workload                       | ZenoCache       | Redis         | Speedup
Single-threaded (100B values)  | 82,549 ops/s    | 47,959 ops/s  | 1.72x
8 concurrent clients           | 379,113 ops/s   | 191,830 ops/s | 1.98x
16 concurrent clients          | 587,917 ops/s   | 416,300 ops/s | 1.41x
Pipeline, depth=100            | 1,360,055 ops/s | 538,049 ops/s | 2.53x
Batch size=100 (MGET/MSET)     | 1,799,258 ops/s | 674,581 ops/s | 2.67x

Latency (Single-threaded)

  • P99: 19μs vs 34μs (Redis) - 44% lower
  • Mean: 11μs vs 19μs (Redis) - 42% lower
  • P50: 11μs vs 18μs (Redis) - 39% lower

Peak Performance

  • 1.8M ops/sec with batching
  • 1.36M ops/sec with pipelining
  • Up to 2.67x faster than Redis (batched MGET/MSET)

Test Environment

  • AMD Ryzen AI MAX+ 395
  • 62 GB RAM
  • Redis 7 comparison baseline

Protocol

Full Redis Compatibility

37 commands across string, key, and server operations. Works with any Redis client library.

String Commands (16)

GET, SET, SETEX, PSETEX, SETNX, GETEX, GETDEL, MGET, MSET, APPEND, STRLEN, INCR, INCRBY, DECR, DECRBY, GETSET
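Compatibility falls out of the wire format. Every RESP2 command is an array of bulk strings: `*<n>\r\n`, then `$<len>\r\n<bytes>\r\n` per argument. Since this is what every Redis client library emits, a server that speaks it needs no client-side changes. A minimal encoder sketch:

```rust
/// Encode a command as a RESP2 array of bulk strings.
fn encode_resp2(args: &[&str]) -> String {
    let mut out = format!("*{}\r\n", args.len());
    for a in args {
        out.push_str(&format!("${}\r\n{}\r\n", a.len(), a));
    }
    out
}

fn main() {
    let wire = encode_resp2(&["SET", "greeting", "hello"]);
    assert_eq!(wire, "*3\r\n$3\r\nSET\r\n$8\r\ngreeting\r\n$5\r\nhello\r\n");
    print!("{wire}");
}
```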

Key Commands (11)

DEL, EXISTS, EXPIRE, PEXPIRE, EXPIREAT, TTL, PTTL, PERSIST, TYPE, KEYS, SCAN

Server Commands (10)

PING, ECHO, INFO, DBSIZE, FLUSHDB, FLUSHALL, TIME, QUIT, COMMAND, CLIENT

gRPC Endpoints (12)

Get, Set, Delete, MultiGet, MultiSet, Exists, Expire, TTL, Ping, Stats, Flush, and more

Eviction

Intelligent Cache Management

Choose the eviction policy that best fits your workload characteristics.

LRU (Least Recently Used)

  • O(1) get, set, and eviction operations
  • Index-based doubly-linked list
  • Best for recency-based access patterns
  • Simple and predictable behavior
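A toy version of the index-based list shows why every operation is O(1): nodes live in a `Vec` and link by index, so touching or evicting an entry is a couple of index swaps, with no per-entry pointer allocations. This is a simplified illustration (fixed capacity, `String` keys and values, no TTL), not ZenoCache's actual code.

```rust
use std::collections::HashMap;

const NIL: usize = usize::MAX; // sentinel for "no node"

struct Node { key: String, val: String, prev: usize, next: usize }

struct Lru {
    cap: usize,
    nodes: Vec<Node>,
    map: HashMap<String, usize>, // key -> node index
    head: usize, // most recently used
    tail: usize, // least recently used
}

impl Lru {
    fn new(cap: usize) -> Self {
        Self { cap, nodes: Vec::new(), map: HashMap::new(), head: NIL, tail: NIL }
    }

    /// Unlink node `i` from the list. O(1).
    fn unlink(&mut self, i: usize) {
        let (p, n) = (self.nodes[i].prev, self.nodes[i].next);
        if p != NIL { self.nodes[p].next = n } else { self.head = n }
        if n != NIL { self.nodes[n].prev = p } else { self.tail = p }
    }

    /// Move node `i` to the front (most recently used). O(1).
    fn push_front(&mut self, i: usize) {
        self.nodes[i].prev = NIL;
        self.nodes[i].next = self.head;
        if self.head != NIL { self.nodes[self.head].prev = i }
        self.head = i;
        if self.tail == NIL { self.tail = i }
    }

    fn get(&mut self, key: &str) -> Option<String> {
        let i = *self.map.get(key)?;
        self.unlink(i);
        self.push_front(i); // a hit refreshes recency
        Some(self.nodes[i].val.clone())
    }

    fn set(&mut self, key: &str, val: &str) {
        if let Some(&i) = self.map.get(key) {
            self.nodes[i].val = val.to_string();
            self.unlink(i);
            self.push_front(i);
            return;
        }
        if self.map.len() == self.cap {
            // Evict the least recently used entry and reuse its slot.
            let t = self.tail;
            self.unlink(t);
            let old_key = self.nodes[t].key.clone();
            self.map.remove(&old_key);
            self.nodes[t].key = key.to_string();
            self.nodes[t].val = val.to_string();
            self.map.insert(key.to_string(), t);
            self.push_front(t);
        } else {
            let i = self.nodes.len();
            self.nodes.push(Node { key: key.to_string(), val: val.to_string(), prev: NIL, next: NIL });
            self.map.insert(key.to_string(), i);
            self.push_front(i);
        }
    }
}

fn main() {
    let mut lru = Lru::new(2);
    lru.set("a", "1");
    lru.set("b", "2");
    lru.get("a");      // touch "a", so "b" is now least recently used
    lru.set("c", "3"); // evicts "b"
    assert_eq!(lru.get("b"), None);
    assert_eq!(lru.get("a").as_deref(), Some("1"));
    assert_eq!(lru.get("c").as_deref(), Some("3"));
}
```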

TinyLFU

  • Count-Min Sketch frequency tracking
  • Window cache (1%) for burst traffic
  • SLRU segments (protected + probationary)
  • Superior hit rates for real workloads
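TinyLFU's admission decisions rest on the Count-Min Sketch: a small fixed-size grid of counters that estimates how often each key has been seen, and can overcount on collisions but never undercount. A minimal sketch of the idea, where the row/column sizes and the per-row hash seeding are illustrative assumptions, not ZenoCache's actual parameters:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

const ROWS: usize = 4;
const COLS: usize = 256;

struct CountMin {
    counts: [[u32; COLS]; ROWS],
}

impl CountMin {
    fn new() -> Self {
        Self { counts: [[0; COLS]; ROWS] }
    }

    /// Per-row column for `key`: seeding with the row index gives ROWS
    /// roughly independent hash functions from one hasher.
    fn index(row: usize, key: &str) -> usize {
        let mut h = DefaultHasher::new();
        (row as u64).hash(&mut h);
        key.hash(&mut h);
        (h.finish() as usize) % COLS
    }

    /// Bump the key's counter in every row.
    fn record(&mut self, key: &str) {
        for row in 0..ROWS {
            self.counts[row][Self::index(row, key)] += 1;
        }
    }

    /// Estimated frequency: the minimum across rows, so collisions can
    /// only inflate the estimate, never deflate it.
    fn estimate(&self, key: &str) -> u32 {
        (0..ROWS).map(|row| self.counts[row][Self::index(row, key)]).min().unwrap()
    }
}

fn main() {
    let mut cm = CountMin::new();
    for _ in 0..5 { cm.record("hot-key"); }
    cm.record("cold-key");
    assert!(cm.estimate("hot-key") >= 5);
    assert!(cm.estimate("cold-key") >= 1);
}
```

On eviction, TinyLFU compares the candidate's estimated frequency against the victim's and keeps whichever is hotter, which is why it beats plain recency on skewed real-world workloads.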

NoOp (TTL-Only)

  • No capacity-based eviction
  • Keys expire only via TTL
  • Ideal for testing scenarios
  • Specialized use cases

Kubernetes

Cloud-Native Ready

Production-ready Kubernetes manifests with best practices for deployment, scaling, and monitoring.

Health Endpoints

/health, /healthz for liveness probes. /ready, /readyz for readiness probes. Kubernetes-native health checking.

Graceful Shutdown

Configurable shutdown timeout (default 30s). Connection draining on SIGTERM/SIGINT. Zero-downtime deployments.

Complete Manifests

Deployment, Service, ConfigMap, PodDisruptionBudget, HorizontalPodAutoscaler, and Kustomization included.

Prometheus Operator

ServiceMonitor for automatic metric discovery. 18-bucket latency histogram. Hit/miss counters and memory gauges.

Use Cases

Built For

ZenoCache is designed for demanding applications that require predictable, high-performance caching.

Microservices Caching

Session data, API responses, and computed results, all served with sub-millisecond latency and no GC pauses rippling into your services.

Kubernetes Deployments

Native integration with health checks, graceful shutdown, HPA scaling, and Prometheus metrics out of the box.

Redis Replacement

Drop-in replacement with full RESP2 protocol support. Works with existing Redis clients without code changes.

Latency-Sensitive Applications

Real-time systems, gaming backends, financial applications where consistent sub-millisecond response times matter.

Ready to Accelerate Your Applications?

Experience ultrafast, Rust-powered caching with Redis compatibility. Contact us to discuss your requirements and see ZenoCache in action.