
Roadmap

Senren's vision is to be a complete ML platform. Here's what's available now and what's coming.

Current Status (MVP)

✅ Multi-cloud Infrastructure Management

Available now:

  • Redis - In-memory data structures and caching
  • Redis Cluster - Distributed Redis with sharding
  • Dragonfly DB - High-performance Redis-compatible database

Multi-cloud support:

  • AWS (Kubernetes clusters in any region)
  • GCP (Kubernetes clusters in any region)

Multi-cluster support: multiple clusters per region (for isolation, shadow traffic, and experimentation)

✅ State-based Synchronization

Core architecture implemented:

  • Control plane with PostgreSQL state store
  • Regional planes for each cloud/region/cluster
  • Kafka messaging backbone
  • Outbox pattern for reliable delivery
  • Kubernetes operators for infrastructure provisioning
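The outbox pattern deserves a closer look: the desired-state change and the outgoing event are written in the same database transaction, so a crash can never update state without emitting the event (or vice versa). Below is a minimal, self-contained sketch of the idea — sqlite3 stands in for PostgreSQL and a plain callback stands in for Kafka; table and column names are illustrative, not Senren's actual schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE databases (name TEXT PRIMARY KEY, spec TEXT)")
conn.execute(
    "CREATE TABLE outbox (id INTEGER PRIMARY KEY, payload TEXT, sent INTEGER DEFAULT 0)"
)

def apply_state(name: str, spec: str) -> None:
    """Update desired state AND enqueue the event in one atomic transaction."""
    with conn:  # both writes commit together or not at all
        conn.execute("INSERT OR REPLACE INTO databases VALUES (?, ?)", (name, spec))
        conn.execute("INSERT INTO outbox (payload) VALUES (?)", (f"{name}:{spec}",))

def relay(publish) -> None:
    """Deliver unsent outbox rows; a row is marked sent only after publishing."""
    rows = conn.execute("SELECT id, payload FROM outbox WHERE sent = 0").fetchall()
    for row_id, payload in rows:
        publish(payload)  # at-least-once: re-delivered if the mark below fails
        with conn:
            conn.execute("UPDATE outbox SET sent = 1 WHERE id = ?", (row_id,))

delivered = []
apply_state("user-cache", "redis-cluster/8192MB")
relay(delivered.append)
print(delivered)  # ['user-cache:redis-cluster/8192MB']
```

Because the relay marks rows as sent only after publishing, delivery is at-least-once; downstream consumers are expected to deduplicate.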

✅ Python SDK

Current API:

```python
from senren import SenrenClient, Database

client = SenrenClient(endpoint="control.senren.dev")

client.apply_state(
    databases=[
        Database(
            name="user-cache",
            type="redis-cluster",
            memory_mb=8192,
            regions=["aws:us-east-1", "gcp:us-central1"],
        )
    ]
)

# Query status
status = client.get_status("user-cache")
print(f"Ready: {status.ready}, Host: {status.host}")
```

Roadmap

Phase 1: Feature Stores (Q2 2025)

Goal: Support online and offline feature stores for ML serving.

Planned additions:

  • Feature store schema management
  • Feature versioning and lineage
  • Point-in-time feature retrieval
  • Feature freshness monitoring

Infrastructure:

  • Redis for online features (already supported)
  • Parquet/Delta Lake for offline features
  • Feature metadata store
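Point-in-time retrieval is the key correctness property here: when building a training example with timestamp t, you must return the latest feature value written at or before t — never a later one, which would leak future information into training. This feature is planned, not shipped, so the sketch below is a conceptual stand-in, not the Senren API.

```python
import bisect
from datetime import datetime

def point_in_time(history: list[tuple[datetime, float]], t: datetime):
    """Return the feature value as of time t from a timestamp-sorted history."""
    timestamps = [ts for ts, _ in history]
    i = bisect.bisect_right(timestamps, t)  # index of first entry strictly after t
    return history[i - 1][1] if i else None  # latest entry at or before t

history = [
    (datetime(2025, 1, 1), 0.2),
    (datetime(2025, 1, 5), 0.7),
    (datetime(2025, 1, 9), 0.4),
]
print(point_in_time(history, datetime(2025, 1, 6)))    # 0.7 (value as of Jan 5)
print(point_in_time(history, datetime(2024, 12, 31)))  # None (no earlier write)
```

An offline store would run this join across millions of rows; the online store (Redis) only ever serves the latest value.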

Phase 2: Search & Ranking (Q3 2025)

Goal: Support document stores and search indexes for ranking pipelines.

Planned additions:

  • Document stores: MongoDB, DocumentDB
  • Search indexes: Elasticsearch, OpenSearch
  • Vector search: Qdrant, Weaviate (for semantic search)

Use cases:

  • Product search and ranking
  • Semantic search with embeddings
  • Hybrid search (keyword + vector)
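Hybrid search blends a lexical score with an embedding similarity score. The toy example below illustrates the idea with Jaccard term overlap and cosine similarity; the corpus, the `alpha` weight, and all names are illustrative assumptions, not part of Senren or any of the planned backends.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def hybrid_score(q_terms: set[str], q_vec, d_terms: set[str], d_vec,
                 alpha: float = 0.5) -> float:
    """alpha blends keyword overlap (Jaccard) with embedding similarity."""
    keyword = len(q_terms & d_terms) / len(q_terms | d_terms)
    return alpha * keyword + (1 - alpha) * cosine(q_vec, d_vec)

docs = [
    ("red running shoes", {"red", "running", "shoes"}, [1.0, 0.0]),
    ("blue sandals",      {"blue", "sandals"},         [0.0, 1.0]),
]
q_terms, q_vec = {"running", "shoes"}, [0.9, 0.1]
ranked = sorted(docs, key=lambda d: hybrid_score(q_terms, q_vec, d[1], d[2]),
                reverse=True)
print(ranked[0][0])  # red running shoes
```

Production systems typically replace Jaccard with BM25 and fuse ranks rather than raw scores, but the blending structure is the same.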

Phase 3: Model Serving (Q4 2025)

Goal: Manage model inference infrastructure.

Planned additions:

  • Model server provisioning (TorchServe, TensorFlow Serving)
  • Model version management
  • A/B testing infrastructure with automatic traffic routing
  • Shadow traffic testing

Features:

  • Canary deployments for new models
  • Traffic splitting (90/10, 50/50, etc.)
  • Automatic rollback on latency/error spikes
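Traffic splitting for canaries is usually done by hashing a stable request key (e.g. a user id), so each user deterministically lands on the same model version across requests. This is a conceptual sketch of that routing logic under assumed version names and a 90/10 split — not Senren's planned implementation.

```python
import hashlib

def route(user_id: str, canary_fraction: float = 0.10) -> str:
    """Deterministically route ~canary_fraction of users to the canary model."""
    digest = hashlib.sha256(user_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return "canary" if bucket < canary_fraction else "stable"

counts = {"stable": 0, "canary": 0}
for i in range(10_000):
    counts[route(f"user-{i}")] += 1
print(counts)  # roughly a 90/10 split, stable per user
```

Because routing is a pure function of the user id, shifting the split (say 10% → 50%) only moves users in one direction, which keeps per-user experience consistent during a rollout.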

Phase 4: Data Pipelines (2026)

Goal: Connect production data → warehouses → training datasets.

Planned additions:

  • CDC (Change Data Capture) from production databases
  • Stream processing with Kafka Streams/Flink
  • Data warehouse integration (Snowflake, BigQuery, Redshift)
  • Dataset versioning for training

Use cases:

  • Continuous training pipelines
  • Feature engineering at scale
  • Production-to-training feedback loops
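The core of CDC is that replaying an ordered stream of change events reconstructs the source table downstream. The sketch below shows that replay for a toy event schema; the `op`/`key`/`row` fields are assumptions for illustration, not a format Senren has defined.

```python
def apply_cdc(events: list[dict]) -> dict:
    """Replay insert/update/delete events in order to rebuild a replica table."""
    table: dict = {}
    for e in events:
        if e["op"] in ("insert", "update"):
            table[e["key"]] = e["row"]   # upsert the full row image
        elif e["op"] == "delete":
            table.pop(e["key"], None)    # tolerate deletes of unknown keys
    return table

events = [
    {"op": "insert", "key": 1, "row": {"email": "a@example.com"}},
    {"op": "update", "key": 1, "row": {"email": "a2@example.com"}},
    {"op": "insert", "key": 2, "row": {"email": "b@example.com"}},
    {"op": "delete", "key": 2, "row": None},
]
print(apply_cdc(events))  # {1: {'email': 'a2@example.com'}}
```

In a real pipeline the events come from the database's write-ahead log and land in Kafka, and the "replica" is a warehouse table or a versioned training dataset.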

Infrastructure Roadmap

Additional Cloud Providers

  • Azure (2025)
  • On-premise Kubernetes (2025)

Additional Databases

  • PostgreSQL - Managed relational databases
  • Cassandra - Wide-column distributed database
  • ClickHouse - OLAP analytics database

Additional Messaging

  • RabbitMQ - Alternative to Kafka for certain use cases
  • Pulsar - Multi-tenant messaging

How to Influence the Roadmap

Senren's roadmap is driven by user needs. If you have specific requirements:

  1. Open an issue: Describe your use case
  2. Join the community: Discord/Slack (links coming soon)
  3. Contribute: We welcome PRs for new infrastructure types

Current Limitations

What Senren is NOT (yet):

  • Not a full Kubernetes management platform (use k8s-native tools)
  • Not a data processing engine (use Spark/Flink)
  • Not a model training platform (use SageMaker/Vertex AI)

What Senren IS:

  • Infrastructure provisioning and state management
  • Multi-cloud, multi-region coordination
  • State synchronization across distributed clusters

Next Steps