RPC Plane
Solana · Mainnet · Devnet · Testnet

Your provider is up.
Your data is wrong.

A reliability layer for Solana RPC. Bring your own provider keys — RPC Plane handles intelligent routing, slot-aware health scoring, cross-provider validation, and automatic failover.

$ install — single binary · single config · no databases
# Linux / macOS
curl -sSf https://rpcplane.dev/install.sh | sh
No signup. No API keys. No telemetry by default.
Live routing decision (diagram) — localhost:9400

Your App (solana-web3.js · Anchor · CLI · trader) → RPC Plane routing engine (health · slot · cost) → Provider A (0.91 · 23ms) | Provider B (0.84 · 31ms) | Provider C (circuit open)

BYOP — Bring Your Own Provider. Customer credentials. No data custody.
The problem

"99% uptime" measures HTTP 200.
It doesn't measure data correctness.

Solana RPC has fragmented into proprietary stacks — different caches, proxies, validator clients, and patches. Two providers can return different answers to the same request, and both report perfect health.

Stale slot

Provider returns data 14+ slots behind tip.

~400ms slot times mean staleness is measured in fractions of a second. Standard monitoring won't catch it.
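To make that concrete, here is a trivial sketch (assuming the ~400 ms average slot time cited above; real slot times vary) of how slot drift converts to wall-clock staleness:

```python
# Convert Solana slot drift to approximate wall-clock staleness.
# Assumes the ~400 ms average slot time mentioned above.
SLOT_TIME_SECONDS = 0.4

def staleness_seconds(slots_behind: int) -> float:
    """Approximate how stale a provider's view of the chain is, in seconds."""
    return slots_behind * SLOT_TIME_SECONDS

# A provider 14 slots behind tip is serving a view of the chain
# that is roughly 5.6 seconds old — an eternity for a trading desk.
```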

Cache corruption

Some getAccountInfo calls return stale balances.

Partial cache rot inside provider middleware — silent and selective.

Bad rollout

4xx/5xx for 30 minutes during a config push.

Provider rolls a bad deploy. Your trading desk eats it until rollout completes.

Write degradation

Transactions accepted but never land.

sendTransaction returns OK; the leader never sees it. Legacy single-provider setups have no recourse.

Replay lag

Historical queries silently return incomplete data.

Slow block replay on the backend. The query succeeds. The data is incomplete.

Metered billing

Every getTransaction is now a line item.

Providers moved to per-call billing. Teams want to cut cost without cutting reliability — provider dashboards aren't built for that.

Recent public incidents
Jan – Feb 2025
Provider X

3+ weeks of degraded Solana performance — intermittent 503s, stale data across an entire region.

Nov 2024 · Apr 2025
Wallet Y

Two outages in six months affecting a major wallet's primary RPC pool.

Feb 2025
Analytics Z

Indexer lagged 20,000+ blocks behind tip due to a single RPC provider failure.

Each of these would have been mitigated by automatic cross-provider failover.
How it works

Sidecar binary.
Not a network proxy.

RPC Plane runs locally alongside your application — speaking directly to provider endpoints with your credentials. Routing decisions are made at the application layer, so providers see normal client traffic.

  • No man-in-the-middle, no SSL/TLS interception, no data custody.
  • Doesn't break GeoDNS, BGP Anycast, or provider-side failover.
  • Single binary. Single config file. No Postgres, no Redis, no orchestration.
  • Hot-reload config — changes apply without restart.
# rpc-plane.toml — minimal config

[[providers]]
name = "provider-a"
url  = "https://rpc.provider-a.example/${PROVIDER_A_KEY}"

[[providers]]
name = "provider-b"
url  = "https://rpc.provider-b.example/${PROVIDER_B_KEY}"

[[providers]]
name = "provider-c"
url  = "https://rpc.provider-c.example/${PROVIDER_C_KEY}"

[routing]
strategy = "best_score"
retry_on = [429, 503]
Run rpc-plane init to generate a full config with all options and defaults.
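For a sense of shape only, a fuller config might look like the sketch below. Everything outside [[providers]] and [routing] is a hypothetical illustration, not documented options — rpc-plane init emits the authoritative list.

```toml
# Hypothetical sketch — run `rpc-plane init` for the real option names.
[[providers]]
name   = "provider-a"
url    = "https://rpc.provider-a.example/${PROVIDER_A_KEY}"
weight = 2                # hypothetical: a weight for weighted_random

[routing]
strategy = "best_score"
retry_on = [429, 503]

# Sections below are assumed shapes, not documented config:
[health]
probe_interval = "2s"     # how often to poll provider slot height
max_slot_drift = 8        # deprioritize providers this far behind tip

[circuit]
open_after  = 5           # consecutive failures before opening
probe_every = "10s"       # recovery probe cadence
```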
Core capabilities

Built for Solana's
consensus, commitment, and write-path semantics.

Intelligent routing

Route per request based on real-time latency, error rate, slot freshness, and response consistency. Not round-robin. Not random.
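RPC Plane's actual scoring formula isn't shown here; as an illustrative sketch (weights and normalization constants are made up), a per-provider score could blend the signals listed above:

```python
def provider_score(latency_ms: float, error_rate: float, slots_behind: int) -> float:
    """Illustrative health score in [0, 1] — not RPC Plane's actual formula.

    Blends latency (normalized against a 200 ms budget), the recent
    error rate, and slot drift behind network tip.
    """
    latency_term = max(0.0, 1.0 - latency_ms / 200.0)
    error_term = 1.0 - min(error_rate, 1.0)
    freshness_term = max(0.0, 1.0 - slots_behind / 16.0)
    # Weighted blend; the weights are arbitrary for illustration.
    return 0.3 * latency_term + 0.4 * error_term + 0.3 * freshness_term

# A fast, clean, fresh provider scores near 1.0; a drifting,
# erroring one decays toward 0 and stops receiving reads.
```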

Cross-provider validation

The novel differentiator. Detect when providers disagree on state — same request, divergent answers. Route to the freshest. Flag the stale one. No existing product does this.
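A minimal sketch of the idea (not the shipped implementation): Solana JSON-RPC responses carry the slot they were evaluated at in result["context"]["slot"], so two providers answering the same request from different slots have visibly diverged.

```python
# Illustrative cross-provider validation — not RPC Plane's actual code.
def pick_freshest(responses: dict) -> tuple:
    """Given provider-name -> JSON-RPC response, return
    (freshest provider, providers flagged as stale)."""
    slots = {name: r["result"]["context"]["slot"] for name, r in responses.items()}
    freshest = max(slots, key=slots.get)
    # The 4-slot tolerance is an arbitrary threshold for illustration.
    stale = [name for name, slot in slots.items() if slots[freshest] - slot > 4]
    return freshest, stale
```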

Automatic failover

Per-provider circuit breaker — opens on failure, probes for recovery, resumes traffic automatically. No engineer wake-up needed.
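The open → half-open → closed cycle can be sketched like this (an illustrative state machine with made-up thresholds, not the shipped implementation):

```python
import time
from typing import Optional

class Circuit:
    """Illustrative per-provider circuit breaker."""

    def __init__(self, open_after: int = 5, probe_after: float = 10.0):
        self.open_after = open_after      # consecutive failures before opening
        self.probe_after = probe_after    # seconds before a recovery probe
        self.failures = 0
        self.opened_at: Optional[float] = None

    def allow(self) -> bool:
        if self.opened_at is None:
            return True                   # closed: traffic flows
        # Half-open: let one probe through after the cooldown.
        return time.monotonic() - self.opened_at >= self.probe_after

    def record(self, ok: bool) -> None:
        if ok:
            self.failures, self.opened_at = 0, None   # close on success
        else:
            self.failures += 1
            if self.failures >= self.open_after:
                self.opened_at = time.monotonic()     # open the circuit
```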

Slot drift detection

Continuously track each provider's slot height against network tip. Deprioritize drifting nodes before applications notice.
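One simple way to sketch this (an illustration, not the shipped code): treat the highest slot any provider reports as the working estimate of network tip, and measure everyone against it.

```python
# Illustrative slot-drift computation.
def slot_drift(reported: dict) -> dict:
    """Map provider name -> slots behind the best-observed tip."""
    tip = max(reported.values())
    return {name: tip - slot for name, slot in reported.items()}

# A provider at tip gets drift 0; one two slots back gets drift 2
# and is deprioritized before applications notice.
```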

Commitment-aware

Understands processed / confirmed / finalized semantics. Validates that providers actually respect the commitment level requested.
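One concrete sanity check this enables (a sketch — the shipped checks aren't documented here): the slot heights a single node reports per commitment level must be ordered, since finalization lags confirmation, which lags processing.

```python
# Sketch of a commitment-ordering check, not RPC Plane's actual logic.
# On Solana, finalized <= confirmed <= processed must hold for the slot
# heights one node reports; a violation signals a broken backend.
def commitment_consistent(processed: int, confirmed: int, finalized: int) -> bool:
    return finalized <= confirmed <= processed
```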

Cost-aware routing

Knows each provider's pricing model per method. Routes reads to the cheapest healthy provider. Tracks credit burn — alerts before budgets are exhausted.
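The selection logic reduces to something like the sketch below (prices, the health threshold, and the record shape are made-up illustrations):

```python
# Illustrative cost-aware read routing: cheapest healthy provider wins.
def route_read(method: str, providers: list) -> str:
    """Pick the cheapest provider for `method` among healthy candidates."""
    healthy = [p for p in providers
               if p["score"] >= 0.5 and not p["circuit_open"]]
    return min(healthy, key=lambda p: p["price_per_call"].get(method, 0.0))["name"]
```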

Write broadcast

sendTransaction and simulateTransaction broadcast to every healthy provider in parallel — maximizes landing probability.
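The fan-out pattern looks roughly like this (an illustrative sketch, not the shipped implementation — `senders` stands in for per-provider send functions):

```python
# Illustrative parallel write broadcast: fan a signed transaction out to
# every healthy provider and return the first successful signature.
from concurrent.futures import ThreadPoolExecutor, as_completed

def broadcast(senders: dict, tx: bytes) -> str:
    """senders maps provider name -> callable(tx) returning a signature."""
    with ThreadPoolExecutor(max_workers=len(senders)) as pool:
        futures = {pool.submit(send, tx): name for name, send in senders.items()}
        first_error = None
        for fut in as_completed(futures):
            try:
                return fut.result()        # first landed signature wins
            except Exception as exc:       # keep waiting on the others
                first_error = exc
    if first_error is not None:
        raise first_error
    raise RuntimeError("no healthy providers")
```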

Prometheus metrics

Exporter on :9401/metrics — health scores, slot drift, circuit state, failover counts, request durations.
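Scraping it is a standard Prometheus job — the :9401 port comes from the line above; the job name is arbitrary:

```yaml
# prometheus.yml fragment
scrape_configs:
  - job_name: rpc-plane
    static_configs:
      - targets: ["localhost:9401"]
```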

Zero infrastructure

Single binary. Single config. No databases, no Redis, no queues. Drop in alongside your service.

Routing strategies

Pick how your reads
are dispatched.

Writes always broadcast to every healthy provider regardless of strategy.

best_score — default
Always route reads to the highest-scoring healthy provider.

weighted_random — probabilistic
Probabilistic selection by configured weight × current health score.

failover_ordered — ordered
Try providers in config order. Skip open circuits.

parallel_race — parallel
Send to every healthy provider, return the fastest valid success.
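For instance, failover_ordered reduces to a few lines (an illustrative sketch, not the shipped dispatcher):

```python
# Illustrative failover_ordered dispatch: walk providers in config
# order, skipping any whose circuit breaker is open.
def failover_ordered(providers: list) -> str:
    for p in providers:                  # config order is priority order
        if not p["circuit_open"]:
            return p["name"]
    raise RuntimeError("all circuits open")
```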
Quick start

Three commands.
No signup. No telemetry. Just a binary.

1 Install
# Linux / macOS
curl -sSf https://rpcplane.dev/install.sh | sh
Or pull from GitHub Releases · cargo install · Docker.
2 Configure
rpc-plane init
# writes rpc-plane.toml with
# every option and its default
Add your provider URLs. Reference env vars with ${VAR}.
3 Run
rpc-plane run
# proxy listening on :9400
# metrics on :9401/metrics
Point your app at http://localhost:9400 — done.
$ rpc-plane status    # live provider health
  NAME        SCORE  SLOT       DRIFT  LATENCY  CIRCUIT
  ----------  -----  ---------  -----  -------  -------
  provider-a  0.912  341892471      0  23.4ms   closed
  provider-b  0.841  341892469      2  31.1ms   closed
  provider-c  0.000          —      —       —   open
  • Linux — x86_64 · aarch64
  • macOS — x86_64 · arm64
  • Docker — ghcr.io/rpcplane
  • Source — cargo install
Roadmap

Free binary today.
Dashboard and managed cloud next.

The proxy stays free. Paid tiers add visibility, control, and zero-ops on top of the same routing engine.

Available now · free

Self-hosted binary

The proxy. Bring your own provider keys. Run anywhere.

  • Multi-provider routing
  • Health scoring & slot drift
  • Circuit breakers & failover
  • Cost-aware routing
  • Prometheus metrics
install →
Coming soon · paid

Cloud dashboard

Unified observability and cost analytics across every provider. We host the dashboard; you run the binary.

  • Provider health history
  • Routing decision audit log
  • Cost analytics by method & provider
  • Budget pacing & alerts
  • Slack · Discord · PagerDuty
join waitlist →
On the roadmap · managed

Full cloud

We run the proxy too. Configure providers in the UI. Zero ops.

  • {tenant}.rpcplane.dev endpoint
  • Encrypted key storage, instant rotation
  • Server-side credit accounting
  • Team access & SSO
  • Usage-based pricing
notify me →
Positioning

We are not another RPC provider.
We are the neutral layer above them.

Providers come and go. Pricing changes. Outages happen. Architectures diverge. The reliability layer persists.

  • Not an RPC provider
  • Not a token or marketplace
  • Not a network proxy
  • Just software that makes existing providers work better, together

Run the binary today.
Get the dashboard when it ships.

The proxy is open and free. The dashboard is in active development. Drop your email to get early access and incident-analysis content as we publish it.