Docs & runbooks

Documentation for Validator Tools.

Setup, operations, and runbooks for EIP-7002 exits, EIP-7251 consolidations, payout routing, credential changes, MEV-Boost and relay policy — plus signing lanes and troubleshooting.

Last updated: Feb 2025 · Covers desktop builds for Linux, macOS, and Windows

  • Setup → rehearse → execute flows
  • Safe / HSM-friendly signing lanes
  • Queue-aware 7002 / 7251 planning

Quick start

The docs are organized around repeatable runs: prepare, check, rehearse, sign, execute, and record. Use this page as a map: follow setup, connect endpoints, then jump to the operations you need.

Install

Get the desktop build

Choose the Linux AppImage or the macOS or Windows installer. Verify checksums/signatures from the releases page before running.

Connect

Add endpoints and profiles

Point the app at your beacon/execution endpoints (local or authenticated providers). Create profiles per environment to avoid mix-ups.

Rehearse

Dry-run before mainnet

Build runs on Holesky or a local net first. Record the exact sequence, then replay on mainnet with diffs and history preserved.

Setup

Install the app, connect endpoints, and keep environments isolated so runs stay predictable.

Install

Desktop builds

  • Download the installer for your OS from the Releases page. Validate hash/signature.
  • No accounts required; works without a backend. Updates follow the same flow.
  • Keep the data directory backed up if you rely on local history and exports.

Network

Endpoints and access

  • Use read-only beacon/EL endpoints where possible; the app supports bearer tokens, basic auth, and custom headers.
  • Label endpoints clearly by environment (mainnet, Holesky, staging) to avoid cross-use.
  • Apply rate limits/backoff for hosted providers; errors stay visible in run previews.
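
A minimal preflight sketch for an authenticated endpoint, assuming a bearer token and the standard beacon node API (the URL and token are placeholders; swap in basic auth or custom headers as your provider requires):

```python
# Sketch: preflight an authenticated beacon endpoint before using it in a run.
# The URL and token are placeholders; adapt the headers to your provider.
import requests

BEACON_URL = "https://beacon.example.com"
HEADERS = {"Authorization": "Bearer <token>"}  # or basic auth / custom headers

# /eth/v1/node/syncing reports whether the node is synced and its sync distance.
resp = requests.get(f"{BEACON_URL}/eth/v1/node/syncing", headers=HEADERS, timeout=10)
resp.raise_for_status()
status = resp.json()["data"]

if status.get("is_syncing"):
    print(f"Endpoint still syncing (distance {status.get('sync_distance')}); hold off.")
else:
    print("Endpoint is synced and reachable.")
```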

Profiles

Environment isolation

  • Create profiles per cluster or customer. Each keeps endpoints, limits, and run history separate.
  • Show the active profile on every run summary to avoid wrong-network mistakes.
  • Include profile and network identifiers in exported filenames/manifests.

Operations and runs

Use run templates to plan exits, partial withdrawals, consolidations, credential changes, and MEV/relay policy in one place.

EIP-7002

Exits and partial withdrawals

Turn large-scale exits (dozens to tens of thousands of validators) into a predictable runbook with queue-aware 7002 flows.

Preconditions and safety model

  • Clarify scope and intent: which validators exit, why, and whether partial withdrawals happen first.
  • Separate roles: planners approve campaigns; ops run nodes; signing can stay offline/HSM.
  • Make node health and data quality non-issues before you start: synced beacon/EL, stable withdrawal mapping, config freeze.

Typical failure modes (ad-hoc)

  • Uneven offboarding: subsets stuck in queues without visibility or control.
  • Opaque tx behaviour: gas/fee handling is “whatever the script did” and failures go unnoticed.
  • No clean history: hard to reconstruct which validators were targeted when and with which params.
  • Poor comms: manual status collection for clients/governance.

Validator Tools flow (campaign-based)

  • Connect beacon/EL endpoints; import validator set and labels.
  • Define exit campaign: group by client/pool/region; choose partial-withdraw-first vs direct exit.
  • Generate EIP-7002 requests; configure gas/fee limits; export unsigned payloads for offline signing.
  • Sign externally; import signed payloads; schedule submits with rate limits and maintenance windows.
  • Queue-aware scheduler: max new exits per interval, pause/resume, monitor withdrawal queue conditions.
  • Track states: planned → sent → included → fully exited; campaign progress by group.
  • Export CSV/JSON evidence: what exited, when included, parameters used.
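
For orientation, here is a minimal sketch of what a 7002 request looks like at the protocol level: 56 bytes of calldata (a 48-byte validator pubkey plus an 8-byte big-endian amount in gwei, where 0 means full exit) sent to the withdrawal-request predeploy, with the current fee read by calling the predeploy with empty calldata. The RPC URL and pubkey are placeholders, and the predeploy address should be verified against EIP-7002 for your network:

```python
# Sketch: assemble EIP-7002 withdrawal-request calldata and read the current fee.
# URL and pubkey are placeholders; verify the predeploy address against EIP-7002.
import requests

EL_RPC = "https://execution.example.com"
WITHDRAWAL_PREDEPLOY = "0x00000961Ef480Eb55e80D19ad83579A64c007002"  # verify per EIP-7002

def current_request_fee() -> int:
    """Per EIP-7002, calling the predeploy with empty calldata returns the current fee (wei)."""
    call = {"jsonrpc": "2.0", "id": 1, "method": "eth_call",
            "params": [{"to": WITHDRAWAL_PREDEPLOY, "data": "0x"}, "latest"]}
    result = requests.post(EL_RPC, json=call, timeout=10).json()["result"]
    return int(result, 16)

def withdrawal_request_calldata(validator_pubkey: bytes, amount_gwei: int) -> bytes:
    """48-byte BLS pubkey + 8-byte big-endian amount in gwei; amount 0 requests a full exit."""
    assert len(validator_pubkey) == 48
    return validator_pubkey + amount_gwei.to_bytes(8, "big")

pubkey = bytes.fromhex("ab" * 48)                   # placeholder validator pubkey
calldata = withdrawal_request_calldata(pubkey, 0)   # 0 gwei => full exit
print(f"fee={current_request_fee()} wei, calldata=0x{calldata.hex()}")
```

The resulting transaction carries this calldata to the predeploy with at least the current fee as its value; signing and submission stay in your offline/HSM lane.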

Operational recommendations

  • Rehearse full campaigns on Holesky with synthetic sets before mainnet.
  • Keep campaigns focused (don’t mix migrations and risk exits in one plan).
  • Document policy: fleet fraction allowed “in exit”, pause conditions, approval rules.
  • Use maintenance windows and clear rollback/pause triggers when external conditions change.

Quick sequence (runbook)

  • Scope validators and intent → create campaign → group by client/pool.
  • Build 7002 payloads with fee caps → export unsigned → sign offline/HSM → re-import.
  • Schedule with rate limits and windows → monitor progress/queues → pause if needed.
  • Export final state for governance/client evidence; archive alongside run history.

EIP-7251

Consolidations (MaxEB)

Move a validator fleet between providers without “exit everything, re-deposit” chaos. Use MaxEB to consolidate and migrate 20–200+ validators from provider A to B with predictable payouts and queues.

Why migrations hurt without consolidations

  • Exit + re-deposit churn causes reward gaps, key/deposit overhead, and hard cutovers.
  • Unstructured spreadsheets: no single source of truth for what moved vs pending.
  • Payout confusion: rewards split between providers; manual reconciliation for clients.
  • Queue timing ignored: exits land in bad windows, completion becomes unpredictable.

How Validator Tools helps (consolidation-aware planner)

  • Label by provider/client/pool; see “Provider A vs B” at a glance.
  • Plan scope: pick which validators to consolidate, which to exit, which stay.
  • Propose MaxEB consolidations to shrink validator count for the destination.
  • Simulate target MaxEB layouts and capacity; avoid unnecessary new validators.
  • Generate 7251 consolidations and 7002 exits together with fee/timeout policies.
  • Schedule in waves to control queue pressure; adjust if network conditions change.
  • Export before/after and client-facing reports (CSV/JSON) for reconciliation.

Suggested migration flow (A → B)

  • Connect endpoints; import validators at provider A; tag with “Provider: A”, “Client/Pool”.
  • Define objectives: target validator count/MaxEB at B, timeframe, constraints.
  • Create a migration plan (e.g., “A → B, Client X”): mark validators as consolidate/exit/stay.
  • Let the tool propose consolidations; tweak which indices merge vs exit outright.
  • Model rewards/capacity impact; ensure payout routing stays consistent.
  • Generate ops: 7251 consolidations + any 7002 exits; apply gas/fee windows and risk limits.
  • Execute in waves with scheduling; monitor progress and relay/queue conditions.
  • Export logs/reports for stakeholders; document what moved, when, and resulting balances.
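
The 7251 request itself is equally small: 96 bytes of calldata (a 48-byte source pubkey followed by a 48-byte target pubkey) sent to the consolidation predeploy, with the fee read the same way as for 7002. A hedged sketch; the pubkeys are placeholders and the predeploy address should be verified against EIP-7251 for your network:

```python
# Sketch: assemble an EIP-7251 consolidation request (source -> target).
# Verify the predeploy address against EIP-7251 before relying on it.
CONSOLIDATION_PREDEPLOY = "0x0000BBdDC7CE488642fb579F8B00f3A590007251"  # verify per EIP-7251

def consolidation_request_calldata(source_pubkey: bytes, target_pubkey: bytes) -> bytes:
    """EIP-7251 request payload: 48-byte source pubkey + 48-byte target pubkey."""
    assert len(source_pubkey) == 48 and len(target_pubkey) == 48
    return source_pubkey + target_pubkey

# The fee is read like 7002: an eth_call to the predeploy with empty calldata returns
# the current fee, which must be attached as the transaction value.
source = bytes.fromhex("aa" * 48)   # placeholder source validator pubkey
target = bytes.fromhex("bb" * 48)   # placeholder target validator pubkey
print("0x" + consolidation_request_calldata(source, target).hex())
```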

Operational recommendations

  • Migrate client-by-client or pool-by-pool; finish and document each slice.
  • Communicate MaxEB effects: validator count may change but exposure remains aligned.
  • Keep roles separated: business decides scope, ops executes, security controls signing.
  • Rehearse on Holesky/non-critical envs before mainnet moves; reuse templates.
  • Align any “exit-only” validators with the exit guide so they’re tracked alongside consolidations.

Routing

Withdrawals and payouts

Get deposits and withdrawal credentials right the first time: keep owner mapping, 0x00 → 0x01 upgrades, and payout destinations explicit.

Why it matters

  • Deposits set withdrawal credentials; rushing this creates long-lived ownership ambiguity.
  • 0x00 credentials left un-upgraded turn into “forgotten debt” years later.
  • A missing “who owns what” map makes exits and payouts risky under pressure.
  • Every later action (partial withdrawals, exits, consolidations) depends on getting this mapping right.

Common mistakes

  • Wrong/unclear withdrawal address at deposit time.
  • Forgotten 0x00 → 0x01 upgrades; unclear which validators still use 0x00.
  • No canonical mapping of validator index → owner/account → withdrawal address.
  • Owner data split across spreadsheets, emails, and operators’ notes; nothing authoritative.
  • Upgrades rushed without a maintenance window, rollback plan, or verification step.

Using Validator Tools

  • Import validators, current withdrawal credentials, and any existing owner map (CSV/notes).
  • Mark owners/accounts per validator group, attach withdrawal addresses, and store contract/ID references as notes.
  • Filter for 0x00 credentials; stage 0x00 → 0x01 upgrades with verified execution addresses and a scheduled window.
  • Run preflight checks, send upgrades via your signing flow, then confirm on-chain state; log timestamp and operator.
  • Standardize onboarding checklist: requester, owner, withdrawal address confirmation, and proof of control.
  • Export canonical “who owns what” reports (CSV/JSON) for finance/legal, client comms, and exit planning.
  • Keep the map live: whenever validators are added or removed, update the owner/withdrawal mapping in-app.
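
To support the 0x00 filtering step, withdrawal credentials can be read straight from the standard beacon API; a minimal sketch (the endpoint URL and validator indices are placeholders):

```python
# Sketch: list validators whose withdrawal credentials still start with 0x00 (BLS).
# Endpoint URL and indices are placeholders; uses the standard beacon node API.
import requests

BEACON_URL = "https://beacon.example.com"
INDICES = ["123456", "123457"]  # validator indices you own

resp = requests.get(
    f"{BEACON_URL}/eth/v1/beacon/states/head/validators",
    params={"id": INDICES},
    timeout=30,
)
resp.raise_for_status()

for item in resp.json()["data"]:
    creds = item["validator"]["withdrawal_credentials"]
    if creds.startswith("0x00"):
        print(f"validator {item['index']} still on BLS (0x00) credentials: {creds}")
```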

Operational tips

  • Make withdrawal configuration explicit at deposit time; don’t accept “tool default” addresses.
  • Rehearse onboarding (including 0x00 → 0x01) on testnets for new teams and processes.
  • Keep owner/withdrawal mapping visible next to run history and payouts; avoid side spreadsheets.
  • Plan 0x00 → 0x01 upgrades like any change: window, verification, rollback, and clear approvals.
  • Align onboarding and upgrades with signing guardrails so custody and slashing protection stay consistent.
  • Set a cadence to review for drift: spot-check owners, addresses, and remaining 0x00 credentials.

Suggested flow (quick checklist)

  • Install app on an ops workstation → connect beacon/validator endpoints.
  • Import validator list + current withdrawal credentials → load any owner CSV.
  • Tag owners/accounts and confirm withdrawal addresses per group → attach notes/IDs.
  • Filter 0x00 set → pick target 0x01 addresses → schedule and execute upgrades → verify on-chain.
  • Lock an onboarding template: requester, owner, withdrawal address proof, signing path, support docs.
  • Export “validator → owner → withdrawal address” regularly and store with finance/legal records.

MEV

MEV-Boost and relay policy

Turn MEV-Boost from a black box into an explicit policy that balances profit, liveness, and neutrality.

What MEV-Boost changes

  • Profit: higher expected rewards from builders competing for your blockspace.
  • Liveness: extra moving parts — relays, builders, sidecar — add failure modes.
  • Neutrality: relay choices shape censorship behaviour; make that stance intentional.
  • Roles: builders bid, relays broker, sidecar (mev-boost/commit-boost) runs the auction.

Policy choices to make

  • Relay set composition: censoring vs non-censoring, reliability, fee profile.
  • How many relays to use; timeouts and fallbacks to local blocks when relays fail.
  • What to do during partial outages: prefer neutral-only mode or accept compliant relays.

Using Validator Tools

  • Create named relay sets (e.g., “Neutral default”, “Dual policy”, “Holesky test”) with rationale and constraints.
  • Attach relay policies to validator groups to avoid drift; log who approved each change.
  • Configure fallbacks: wait window for relay bids, when to build locally, and per-group behaviour.
  • Run preflight checks and test changes on Holesky/non-critical envs before mainnet rollout.
  • Monitor MEV share, added reward vs local blocks, and missed/late proposals tied to relay issues.
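
For the monitoring step, most MEV-Boost relays also expose a Data API for delivered payloads; a hedged spot-check sketch (the path follows the common relay Data API spec and the relay URL is a placeholder, so confirm support with each relay):

```python
# Sketch: spot-check recently delivered payloads from a relay's Data API.
# The relay URL is a placeholder; the path follows the common relay Data API spec.
import requests

RELAY_URL = "https://relay.example.com"

resp = requests.get(
    f"{RELAY_URL}/relay/v1/data/bidtraces/proposer_payload_delivered",
    params={"limit": 10},
    timeout=10,
)
resp.raise_for_status()

for trace in resp.json():
    # Each trace carries slot, proposer pubkey, and the block value (wei, as a string).
    print(trace["slot"], trace["proposer_pubkey"], trace["value"])
```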

Relay strategy & operations

  • Mix non-censoring relays with a minimal set of compliant relays if required; diversify operators.
  • Review relay performance and fees regularly; switch sets quickly with an audit trail.
  • Highlight relays that time out or underperform; keep “neutral-only” switch handy for incidents.

Stakeholder answers

  • Yield: MEV monitoring view shows reward delta vs local blocks.
  • Censorship stance: visible in relay policy definitions (censoring/neutral mix).
  • Liveness: missed/late proposals correlated to relay issues and fallback settings.

Signing and safety

Keep signing separate. Use Safe, HSMs, or offline signers with clear prepare → sign → execute stages.

Separation

Prepare locally, sign elsewhere

  • Validator keys, keystores, and seed phrases stay outside the app.
  • Export unsigned payloads and summaries for Safe/HSM/offline signing.
  • Redact secrets from exports; keep UI lock enabled on shared workstations.

Controls

Guardrails against mistakes

Keep validator keys safe, unique, and boring. Most slashing comes from operational mistakes, not adversaries.

Principles

  • One validator key → one active signer. Never two, never “just for a minute”.
  • Slashing DBs are critical state, not cache; migrations must preserve them.
  • Every move leaves an audit trail: who moved what, where, when, and why.

Common failure modes

  • Duplicate validators across environments (rushed migrations, failovers, “bring it up before shutting down”).
  • Broken/missing slashing protection DBs (bad exports, truncation, multiple clients without a shared backend).
  • Over-privileged keys and lax signer policy; remote signers with broad access and weak approvals.

Why this needs its own runbook

  • Slashing is targeted: double/surround votes or conflicting block proposals; usually operator error, not an attacker.
  • Serious key mistakes undo years of uptime; restaking multiplies blast radius across protocols.
  • “Keys in a folder” or “copy the validator dir over SSH” does not scale past a handful of validators.

Different keys, different risk surfaces

  • Validator signing keys (BLS) — slashable if mishandled; treat as high-risk online secrets.
  • Withdrawal/execution keys — not slashable, but control funds directly; tighter custody and approvals.
  • Session/remote signer keys — scoped, but still require policy and monitoring.

Scale and restaking amplify mistakes

  • A single double-sign at scale or across restaked positions can propagate losses across protocols.
  • “Copy the validator directory over SSH” stops being acceptable when hundreds of validators are involved.
  • You need a repeatable lifecycle for keys and protection data, not one-off fixes.

Using Validator Tools

  • Install the desktop app on an ops workstation with access to validator clients or remote signers.
  • Build a live inventory: validators, signers, key IDs/labels, attached slashing DBs, environment tags.
  • Mark a single authoritative signer per validator; flag any apparent duplicates immediately.
  • Link protection DBs to validators; track location, last backup, and whether they’re shared.
  • Use migration wizards: source → target, include slashing DB export/import, switch authority only after verification.
  • Enforce rules: no validator without protection DB; no key live in more than one environment; warnings/blocks on violations.
  • Integrate remote signers/session keys: register endpoints, associate validator sets, and scope permissions.
  • Keep an audit log of key changes, protection moves, and authority flips; export for security reviews.
  • Use dry-run migrations on non-critical validators to validate the flow end-to-end.
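
Slashing protection exports and imports commonly use the EIP-3076 interchange format; a minimal sanity-check sketch before importing one into a target signer (the file path is a placeholder and the checks are only an example policy):

```python
# Sketch: sanity-check an EIP-3076 slashing-protection interchange file before import.
# The path is a placeholder; extend the checks to match your own policy.
import json

with open("slashing_protection.json") as fh:
    interchange = json.load(fh)

meta = interchange["metadata"]
print(f"interchange version {meta['interchange_format_version']}, "
      f"genesis_validators_root {meta['genesis_validators_root']}")

for entry in interchange["data"]:
    blocks = entry.get("signed_blocks", [])
    atts = entry.get("signed_attestations", [])
    if not blocks and not atts:
        # Legal, but worth flagging: the target signer has no history for this key
        # until it signs again, so treat the cutover with extra care.
        print(f"WARNING: {entry['pubkey']} has no recorded blocks or attestations")
```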

Example anti-slashing policy (human-readable)

  • Each validator key MUST have exactly one authoritative signer.
  • Slashing protection DB MUST be present/healthy before a key goes live.
  • All migrations MUST use the migration flow (with DB export/import and verification).
  • Any exception MUST be documented and approved by the on-call lead.

Step by step (safer keys & slashing)

  • Install the app on an ops workstation; connect validator clients/remote signers.
  • Import validator indices, key labels/IDs, and protection DB locations; tag by environment/DC.
  • Select an authoritative signer per validator; mark others as non-authoritative to surface conflicts.
  • Link each validator/signing group to its slashing DB; record last backup and whether it’s shared.
  • When migrating: run the wizard (source → target), include DB export/import, verify, then switch authority.
  • Define anti-slashing rules (no multi-env live keys, no missing DB) and let the app warn/block.
  • Review and export the key-change log regularly for security/governance checks.

Operational tips

  • Rehearse migrations with a non-critical validator end-to-end before scaling up.
  • Back up and monitor slashing DBs; never “clean them up” under disk pressure.
  • Review failover designs to ensure they don’t accidentally double-sign.
  • Enforce “no duplicate keys” as a hard rule; tooling + runbooks should back it, not just habit.
  • Treat slashing DBs as critical infra: backups, health checks, documented move procedures.
  • Start with a small, fully-documented subset; once solid, roll out to the whole fleet.
  • Pair redundancy/failover plans with slashing guardrails so backups never double-sign.

Process

Review and approvals

  • Use multi-person review for large batches or MaxEB moves.
  • Store run history and signed artifacts together for audits and post-mortems.
  • Lock signing behind maintenance windows if your operations require it.

Observability and exports

Keep stakeholders and tools in sync with stable outputs and lightweight embeds.

Redundancy

Uptime without double-signing

Design for high uptime with zero double-sign tolerance. Redundancy belongs to infra; signing authority stays singular.

Goals

  • Stay live enough: validators keep duties during hardware/network issues.
  • Never double-sign: keys must have exactly one active signing authority.

Patterns

  • Single signer + standby infra (cold/attachable standby, no key copies).
  • Remote signer + multiple CL/EL nodes (one signer, resilient infra).
  • DVT/threshold cluster as the sole signer — no extra full-key backups.
  • Anti-pattern: two validator clients with the same keys “for redundancy”.

Risk patterns to avoid

  • “Backup node” with a full key copy; failover kicks in before the primary is stopped.
  • Uncoordinated multi-region setups that drift and run keys concurrently.
  • Mixing DVT with ad-hoc key copies; unclear where signing actually lives.

Using Validator Tools

  • Register CL/EL/VC nodes and remote signers (HSM, signer clusters, DVT) in the infra view.
  • Select a redundancy pattern per validator group (active-standby, remote signer + multi-CL/EL, DVT, single node).
  • Attach validators to exactly one signing authority; treat other nodes as infra-only.
  • Model failover triggers/runbooks (manual, health check, orchestrator) and link to components.
  • Run a double-sign risk check to flag multiple signers per key or unsafe mixes.
  • Track changes over time; keep topology and runbooks in sync.
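
The double-sign risk check reduces to one rule: no key with more than one active signing authority. A minimal sketch over an inventory export (the row format and field names are assumptions about your own data):

```python
# Sketch: flag validator keys that appear under more than one active signer.
# The inventory rows and field names are assumptions about your own export.
from collections import defaultdict

inventory = [
    {"pubkey": "0xaa...", "signer": "signer-eu-1", "active": True},
    {"pubkey": "0xaa...", "signer": "signer-us-1", "active": True},   # conflict
    {"pubkey": "0xbb...", "signer": "dvt-cluster-1", "active": True},
]

active_signers = defaultdict(set)
for row in inventory:
    if row["active"]:
        active_signers[row["pubkey"]].add(row["signer"])

for pubkey, signers in active_signers.items():
    if len(signers) > 1:
        print(f"DOUBLE-SIGN RISK: {pubkey} is active on {sorted(signers)}")
```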

Operational recommendations

  • Separate infra redundancy from signing authority; one signer per validator key.
  • Treat failover as design, not a script hack; document triggers and approvals.
  • Model before deploying; revisit after major changes (DVT, new regions, signer shifts).
  • Pair with monitoring/alerting so failover conditions are observable and safe.

Monitoring

Monitoring & alerting at scale

Make alerts rare, actionable, and predictable — from “is the node up?” to “which validators are at risk right now?”.

Layers to watch

  • Node health: CL/EL/VC up, synced, peers, RPC responsiveness, CPU/RAM/disk.
  • Validator duties: on-time attestations/proposals; missed/late duties, inclusion delays.
  • Business impact: how many validators/clients affected; fraction of rewards at risk.

Use SLI/SLO framing (e.g., 99.5% on-time duties over 30 days; at most 1% of validators impacted by a single node).
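
A worked sketch of the error-budget arithmetic behind that framing (duty counts are illustrative; roughly 225 attestation duties per validator per day):

```python
# Sketch: error-budget math for a 99.5% on-time-duties SLO over 30 days.
# Duty counts are illustrative; plug in your own numbers per domain.
SLO_TARGET = 0.995
duties_expected = 225 * 30   # ~225 attestation duties per validator per day
duties_on_time = 6_720       # observed on-time duties in the window

sli = duties_on_time / duties_expected
error_budget = (1 - SLO_TARGET) * duties_expected   # duties you may miss and still meet SLO
budget_spent = duties_expected - duties_on_time

print(f"SLI={sli:.3%}, budget={error_budget:.1f} missed duties, spent={budget_spent}")
if budget_spent > 0.8 * error_budget:
    print("WARNING: over 80% of the error budget is spent; escalate before a full breach.")
```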

Failure modes in monitoring

  • Too noisy: transient CPU/peer blips spam channels → real incidents get muted.
  • Too quiet: only host-down alerts; no view of missed duties or validator-level risk.
  • Too fragmented: CL/EL/VC/MEV monitored separately, no unified validator view.

Using Validator Tools

  • Register data sources: beacon/validator APIs, EL RPC, Prometheus/metrics endpoints.
  • Group validators into alert domains (home stakers, client A, pool X) with domain-specific SLOs.
  • Define SLIs/SLOs per domain (on-time duties %, missed proposals frequency, CL/EL uptime).
  • Map SLOs to alert rules: warning vs critical based on breach projection/trend windows.
  • Configure channels/severity per domain (paging vs chat vs email reports).
  • Test with controlled scenarios; review alert history in-app; tune and freeze a baseline policy.

Good alert payloads include

  • Severity + type + domain + affected validators.
  • Symptom window, SLO target, breach projection.
  • Linked services/nodes and the next step/runbook.

Channels & ownership

  • Paging: critical SLO threats (beacon/VC down for key domain, sustained missed duties).
  • Ops chat: warnings (near-capacity node, mild missed attestations, relay issues).
  • Email/reports: monthly SLO summaries, long-term drift, alert tuning history.

Practical recommendations

  • Start from SLOs, not dashboards; design alerts backwards from “what good looks like”.
  • Use domains to avoid one-size-fits-all sensitivity; reflect policy in the tool.
  • Treat monitoring config as policy with change tracking, not casual edits.
  • Review alert history regularly; prune noise; align with capacity/resilience plans.

Reporting

Rewards, taxes & reporting

Turn validator rewards into clean, auditable reports: per-validator rewards/penalties → CSV/JSON exports → accountants and tax workflows.

Why it’s tricky

  • Fragmented data (explorers, CL/EL sources); no canonical CSV for accountants.
  • Components: CL rewards, EL tips/MEV, penalties/slashing — all time-stamped per validator.
  • Tax periods/FX: need valuation rules (periods, base currency, timing of valuation).
  • Entity mapping: indices must tie to entities/clients/pools for statements.

Using Validator Tools

  • Import validators (index/withdrawal) and label Entity/Client/Pool/Strategy.
  • Enable reward/penalty tracking; backfill history; include EL tips/MEV when present.
  • Set reporting periods (monthly/quarterly/annual) and base currency/valuation rules.
  • Review per-validator history and aggregates by labels; fix labeling gaps early.
  • Export CSVs per validator and aggregated per entity/client with timestamps/type/amount/IDs; JSON for pipelines.
  • Generate summary reports for tax/accounting discussions; include assumptions.
  • Lock/mark periods as final; later corrections show as adjustments, not silent changes.
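
A minimal sketch of the aggregation step behind those exports: per-validator reward rows rolled up into per-entity period totals with stable column names (the input rows and column names are assumptions about your own export format):

```python
# Sketch: roll per-validator reward rows up into per-entity totals for a period.
# Row and column names are assumptions about your own export format.
import csv
from collections import defaultdict
from decimal import Decimal

rows = [
    {"validator_index": "1001", "entity": "Client X", "period": "2025-Q1",
     "type": "cl_reward", "amount_gwei": "512345"},
    {"validator_index": "1001", "entity": "Client X", "period": "2025-Q1",
     "type": "penalty", "amount_gwei": "-1200"},
]

totals = defaultdict(Decimal)
for row in rows:
    totals[(row["entity"], row["period"], row["type"])] += Decimal(row["amount_gwei"])

with open("entity_period_totals.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["entity", "period", "type", "amount_gwei"])  # stable column names
    for (entity, period, kind), amount in sorted(totals.items()):
        writer.writerow([entity, period, kind, amount])
```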

Support for penalties and edge cases

  • Penalties and slashes are separate lines; apply your accounting policy explicitly.
  • Stable column names/IDs make downstream automation repeatable.
  • Easy answers to “how much did Client X earn in Q1?” or “which validators were penalized last year?”.

Practical recommendations

  • Separate data prep (Validator Tools) from tax interpretation (accountant’s policy).
  • Make reporting cadence explicit; align exports with those periods.
  • Pilot on a subset/one period in parallel with your current process; compare totals and format feedback.

Teams

DVT / BYOV & team operations

Give distributed validator teams a shared control panel, not more scripts. Standardise exits and ops for DVT/BYOV and multi-operator setups.

Why it’s different

  • Multiple operators govern a cluster; actions must respect shared approvals/multisig.
  • Scripts don’t scale across organisations: hard to audit, coordinate versions, or onboard new operators.
  • The chain-facing ops are the same (7002 exits, 7251 consolidations, credential changes), but roles are split.

Using Validator Tools

  • Shared inventory: register validators/clusters with labels (DVT group, provider, client, region).
  • Roles & permissions: define who proposes, reviews, executes; use RBAC and session keys to enforce.
  • Plans instead of ad-hoc: create cluster plans (exit, consolidate, credential change) with constraints and visibility for all operators.
  • Approvals/multisig: require reviewer sign-off before execution; map prepared txs to Safe/multisig flows when needed.
  • Execute in-app: build, sign/export, and submit on-chain ops with live status visible to the whole team.
  • Audit history: timeline of proposed/approved/executed actions, per-cluster summaries, exportable logs for governance/incident threads.

Operational recommendations

  • Agree on the common toolset early; avoid per-operator script divergence.
  • Make roles explicit and visible in the GUI, not just in docs.
  • Pilot on a test/non-critical cluster; exercise approvals and visibility before mainnet use.
  • Keep signing authority aligned with the key management guide; DVT/threshold signer = single authority, no extra key copies.

History

Run history

Each run stores intent, checks, payloads, approvals, and execution results. Use it for handoffs, audits, or incident timelines.

Exports

Reports and payloads

  • JSON/CSV exports for accounting, monitoring, and treasury workflows.
  • Include network/profile identifiers and hashes for verification.
  • Keep formats stable so dashboards and automations do not break.
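
One way to keep exports verifiable downstream is a small manifest beside each file that records the profile, network, and a content hash; a hedged sketch (paths and field names are illustrative, not a fixed format):

```python
# Sketch: write a manifest next to an export with profile, network, and content hash.
# Paths and field names are illustrative, not a fixed format.
import hashlib, json, pathlib

export_path = pathlib.Path("exports/mainnet_exit_campaign_42.csv")  # placeholder
digest = hashlib.sha256(export_path.read_bytes()).hexdigest()

manifest = {
    "file": export_path.name,
    "sha256": digest,
    "network": "mainnet",
    "profile": "prod-cluster-a",  # assumed profile label
}
manifest_path = export_path.parent / (export_path.stem + ".manifest.json")
manifest_path.write_text(json.dumps(manifest, indent=2))
print(f"sha256 {digest}  {export_path.name}")
```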

Integrations

Hooks and widgets

  • REST-style hooks and embeddable widgets expose 7002/6110/7251 request and payout state.
  • Use them in internal dashboards or staking explorers for stakeholder visibility.
  • Track relay health, queue depth, and signer status on the live ops board.

Troubleshooting

Common recovery paths for connectivity, queue estimation, and release verification issues.

Connectivity

Endpoint errors

  • Check tokens/headers for authenticated providers; retry with lower concurrency.
  • Validate that the chain ID and slot/epoch reported by the endpoint match the selected profile.
  • Swap to a backup provider if responses are throttled or inconsistent.

Queues

ETA modeling issues

  • Ensure the 7002/7251 predeploy parameters are reachable; refresh them after network forks/upgrades.
  • Rebuild ETAs after long gaps or when queue depth changes materially.
  • Use rehearsal mode on Holesky to confirm ordering before mainnet execution.

Releases

Download verification

  • Match hashes/signatures against the GitHub release page before installing.
  • Verify exported artifacts (manifests + hash) before passing them to a signer.
  • If verification fails, discard the artifact and re-download from the official release.
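
A minimal verification sketch, assuming the release page publishes a SHA-256 checksum for each artifact (the filename and expected digest are placeholders):

```python
# Sketch: verify a downloaded installer against the published SHA-256 checksum.
# Filename and expected digest are placeholders copied from the release page.
import hashlib

EXPECTED = "0123abcd..."                      # checksum from the release page
PATH = "validator-tools-x86_64.AppImage"      # downloaded artifact

h = hashlib.sha256()
with open(PATH, "rb") as fh:
    for chunk in iter(lambda: fh.read(1 << 20), b""):
        h.update(chunk)

if h.hexdigest() == EXPECTED:
    print("Checksum matches; the artifact is intact.")
else:
    print("Checksum mismatch; discard and re-download from the official release.")
```
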
Need help? Send logs or run summaries (with secrets redacted) to support. Include OS, app version, network, and the endpoints involved.

Resources and references

Download builds, read changelogs, and reach support for operators and teams.

  • Downloads: Linux (AppImage), macOS, and Windows installers with hashes and signatures.
  • GitHub releases: release notes, artifacts, and verification materials.
  • Changelog: feature updates, fixes, and telemetry changes.
  • Support: support@eth2tools.org for operators, teams, and security reports.

Verify before you trust. Check installer hashes/signatures and export manifests before they reach any signer or production environment.