A reliability layer for Solana RPC. Bring your own provider keys — RPC Plane handles intelligent routing, slot-aware health scoring, cross-provider validation, and automatic failover.
# Linux / macOS
curl -sSf https://rpcplane.dev/install.sh | sh
Solana RPC has fragmented into proprietary stacks — different caches, proxies, validator clients, and patches. Two providers can return different answers to the same request, and both report perfect health.
Provider returns data 14+ slots behind tip.
~400ms slot times mean staleness is measured in fractions of a second. Standard monitoring won't catch it.
Some getAccountInfo calls return stale balances.
Partial cache rot inside provider middleware — silent and selective.
4xx/5xx for 30 minutes during a config push.
Provider rolls a bad deploy. Your trading desk eats it until rollout completes.
Transactions accepted but never land.
sendTransaction returns OK; the leader never sees it. Legacy single-provider setups have no recourse.
Historical queries silently return incomplete data.
Slow block replay on the backend. The query succeeds. The data is incomplete.
Every getTransaction is now a line item.
Providers moved to per-call billing. Teams want to cut cost without cutting reliability — provider dashboards aren't built for that.
3+ weeks of degraded Solana performance — intermittent 503s, stale data across an entire region.
Two outages in six months affecting a major wallet's primary RPC pool.
Indexer lagged 20,000+ blocks behind tip due to a single RPC provider failure.
RPC Plane runs locally alongside your application — speaking directly to provider endpoints with your credentials. Routing decisions are made at the application layer, so providers see normal client traffic.
# rpc-plane.toml — minimal config

[[providers]]
name = "provider-a"
url  = "https://rpc.provider-a.example/${PROVIDER_A_KEY}"

[[providers]]
name = "provider-b"
url  = "https://rpc.provider-b.example/${PROVIDER_B_KEY}"

[[providers]]
name = "provider-c"
url  = "https://rpc.provider-c.example/${PROVIDER_C_KEY}"

[routing]
strategy = "best_score"
retry_on = [429, 503]
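Your application then talks to the proxy like any other Solana RPC endpoint. A minimal sketch in TypeScript with @solana/web3.js, assuming the proxy's default listen address of :9400 (see the quickstart below); nothing else in the application changes:

import { Connection, PublicKey } from "@solana/web3.js";

// Point the client at the local RPC Plane proxy instead of a provider URL.
const connection = new Connection("http://127.0.0.1:9400", "confirmed");

async function main() {
  // Routing, scoring, and failover happen transparently inside the proxy.
  const slot = await connection.getSlot();
  const lamports = await connection.getBalance(
    new PublicKey("11111111111111111111111111111111"), // system program, as an example
  );
  console.log({ slot, lamports });
}

main().catch(console.error);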
Route per request based on real-time latency, error rate, slot freshness, and response consistency. Not round-robin. Not random.
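For intuition, here is what a per-request score can look like. The signal names, weights, and cutoffs below are assumptions for exposition, not RPC Plane's actual formula:

// Illustrative scoring sketch. Field names, weights, and cutoffs are
// hypothetical, not RPC Plane internals.
interface ProviderStats {
  p50LatencyMs: number;   // rolling median request latency
  errorRate: number;      // 0..1 over a recent window
  slotDrift: number;      // slots behind the network tip
  disagreements: number;  // recent cross-provider consistency failures
}

function score(s: ProviderStats): number {
  const latency = Math.max(0, 1 - s.p50LatencyMs / 500); // 0 ms -> 1, 500+ ms -> 0
  const errors = 1 - s.errorRate;
  const freshness = Math.max(0, 1 - s.slotDrift / 16);   // 16+ slots behind -> 0
  const consistency = Math.max(0, 1 - s.disagreements / 10);
  return 0.3 * latency + 0.25 * errors + 0.3 * freshness + 0.15 * consistency;
}

// Each request goes to the highest-scoring provider, not the next in a ring.
function pick(providers: Map<string, ProviderStats>): string {
  return [...providers.entries()].sort(([, a], [, b]) => score(b) - score(a))[0][0];
}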
The novel differentiator. Detect when providers disagree on state — same request, divergent answers. Route to the freshest. Flag the stale one. No existing product does this.
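The idea, sketched client-side with @solana/web3.js (endpoint URLs are placeholders; the proxy performs the equivalent check internally rather than in application code):

import { Connection, PublicKey } from "@solana/web3.js";

// Issue the same read against several providers and compare the answers.
async function probe(account: PublicKey, urls: string[]) {
  const results = await Promise.all(
    urls.map(async (url) => {
      const conn = new Connection(url, "confirmed");
      const { context, value } = await conn.getAccountInfoAndContext(account);
      return { url, slot: context.slot, lamports: value?.lamports ?? null };
    }),
  );
  // Same commitment, divergent answers: flag whoever trails the freshest slot.
  const freshest = results.reduce((a, b) => (a.slot >= b.slot ? a : b));
  for (const r of results) {
    if (r.lamports !== freshest.lamports) {
      console.warn(`${r.url} disagrees at slot ${r.slot} (freshest: ${freshest.slot})`);
    }
  }
  return freshest;
}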
Per-provider circuit breaker — opens on failure, probes for recovery, resumes traffic automatically. No engineer wake-up needed.
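A minimal sketch of that lifecycle; the failure threshold and probe interval are illustrative, not RPC Plane's defaults:

// closed: traffic flows; open: traffic blocked; half-open: one probe allowed.
type CircuitState = "closed" | "open" | "half-open";

class Circuit {
  private state: CircuitState = "closed";
  private failures = 0;
  private openedAt = 0;

  constructor(private failureThreshold = 5, private probeAfterMs = 30_000) {}

  allow(): boolean {
    if (this.state === "open" && Date.now() - this.openedAt > this.probeAfterMs) {
      this.state = "half-open"; // let a single probe request through
    }
    return this.state !== "open";
  }

  onSuccess(): void {
    this.failures = 0;
    this.state = "closed"; // probe succeeded: resume traffic automatically
  }

  onFailure(): void {
    if (++this.failures >= this.failureThreshold || this.state === "half-open") {
      this.state = "open";
      this.openedAt = Date.now();
    }
  }
}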
Continuously track each provider's slot height against network tip. Deprioritize drifting nodes before applications notice.
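Conceptually, the drift probe reduces to this (a client-side sketch with @solana/web3.js; URLs and polling cadence are assumptions):

import { Connection } from "@solana/web3.js";

// Poll getSlot on each provider and measure how far each one sits behind
// the highest slot observed across the fleet.
async function slotDrift(urls: string[]): Promise<Map<string, number>> {
  const slots = await Promise.all(
    urls.map(async (url) =>
      [url, await new Connection(url).getSlot("processed")] as const,
    ),
  );
  const tip = Math.max(...slots.map(([, s]) => s));
  return new Map(slots.map(([url, s]) => [url, tip - s]));
}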
Understands processed / confirmed / finalized semantics. Validates that providers actually respect the commitment level requested.
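One cheap sanity check this enables, sketched in TypeScript: a well-behaved provider reports slot heights that satisfy processed >= confirmed >= finalized:

import { Connection } from "@solana/web3.js";

// Ask for the current slot at each commitment level and verify the ordering.
async function checkCommitments(url: string): Promise<void> {
  const conn = new Connection(url);
  const [processed, confirmed, finalized] = await Promise.all([
    conn.getSlot("processed"),
    conn.getSlot("confirmed"),
    conn.getSlot("finalized"),
  ]);
  if (!(processed >= confirmed && confirmed >= finalized)) {
    console.warn(`${url} violates commitment ordering`, { processed, confirmed, finalized });
  }
}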
Knows each provider's pricing model per method. Routes reads to the cheapest healthy provider. Tracks credit burn — alerts before budgets are exhausted.
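Sketched below with invented per-call prices (real pricing tables differ by provider and change over time):

// Pick the cheapest healthy provider for a given read method.
// The prices here are made-up examples, not real provider rates.
const pricePerCall: Record<string, Record<string, number>> = {
  "provider-a": { getTransaction: 0.00005, getAccountInfo: 0.00001 },
  "provider-b": { getTransaction: 0.00002, getAccountInfo: 0.00002 },
};

function cheapestHealthy(method: string, healthy: string[]): string | undefined {
  return healthy
    .filter((p) => pricePerCall[p]?.[method] !== undefined)
    .sort((a, b) => pricePerCall[a][method] - pricePerCall[b][method])[0];
}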
sendTransaction and simulateTransaction broadcast to every healthy provider in parallel, maximizing the probability that a send lands.
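A client-side sketch of the same fan-out idea with @solana/web3.js (the proxy does this internally; URLs are placeholders):

import { Connection } from "@solana/web3.js";

// Broadcast a signed transaction to every healthy provider in parallel and
// resolve on the first accepted signature. Promise.any rejects with an
// AggregateError only if every provider refuses the transaction.
async function broadcast(rawTx: Buffer, urls: string[]): Promise<string> {
  return Promise.any(
    urls.map((url) =>
      new Connection(url).sendRawTransaction(rawTx, { skipPreflight: true }),
    ),
  );
}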
Exporter on :9401/metrics — health scores, slot drift, circuit state, failover counts, request durations.
Single binary. Single config. No databases, no Redis, no queues. Drop in alongside your service.
Writes always broadcast to every healthy provider regardless of strategy.
# Linux / macOS
curl -sSf https://rpcplane.dev/install.sh | sh
rpc-plane init
# writes rpc-plane.toml with
# every option and its default

rpc-plane run
# proxy listening on :9400
# metrics on :9401/metrics
NAME        SCORE  SLOT       DRIFT  LATENCY  CIRCUIT
----------  -----  ---------  -----  -------  -------
provider-a  0.912  341892471  0      23.4ms   closed
provider-b  0.841  341892469  2      31.1ms   closed
provider-c  0.000  —          —      —        open
The proxy stays free. Paid tiers add visibility, control, and zero-ops on top of the same routing engine.
The proxy. Bring your own provider keys. Run anywhere.
Unified observability and cost analytics across every provider. We host the dashboard; you run the binary.
We run the proxy too. Configure providers in the UI. Zero ops.
Providers come and go. Pricing changes. Outages happen. Architectures diverge. The reliability layer persists.
The proxy is open and free. The dashboard is in active development. Drop your email to get early access and incident-analysis content as we publish it.