Methodology

How we score Solana staking.

VALSYG produces decision-grade scores and risk signals for two parallel surfaces of Solana staking: native validator delegation and liquid staking tokens (LSTs). This page explains how the scoring works at a level any user can verify and any operator can contest — without disclosing the exact internals that would invite gaming.

Public methodology · Last updated 2026-05-14

Versioned and replayable. Every score VALSYG produces is stamped with the methodology version that produced it, so historical scores remain meaningful when calibration evolves. We disclose the principles, the dimensions, and the data sources publicly. We do not publish the exact weights, thresholds, and fallbacks — those are the surface that would invite gaming if exposed, and they sit behind a methodology that any user can challenge on its merits.

What this methodology does and why

A delegator chooses where to stake. An LST holder chooses which protocol to hold. Both decisions are routinely made on incomplete information: headline APYs hide commission drag, peg dynamics, and the validator set underneath the wrapper. VALSYG’s job is to compress the relevant public information into a defensible answer to one question: should I be where I am?

Methodology matters because every score the product shows must be reproducible from public inputs. If the answer cannot be reconstructed from the chain and from the validator-metrics providers we name, it is not a score — it is an opinion. We ship opinions only as recommendations, never as scores.

Data sources

All inputs are public.

  • Solana mainnet RPC. Vote accounts, leader schedule, slot outcomes, stake account state, epoch credits — every validator-side input is read directly from the chain.
  • Validator metrics provider. Per-validator commission, skip rate, performance score, and operator metadata from a public validator-stats provider. The product cross-validates against on-chain reads on the fields that overlap.
  • LST protocol data. Per-protocol state — total value locked, current peg, fees, redemption mechanism, underlying validator-set composition — read from on-chain sources where available. Protocol-disclosed APIs are used only for fields the chain does not surface.

We do not use private data feeds, scraped opinion data, or off-chain reputation overlays. If a number on a VALSYG screen cannot be traced back to a public input on this list, that is a bug.
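
To make "every input is public" concrete, the sketch below reads the validator-side fields named above straight from a mainnet RPC endpoint with @solana/web3.js. This is not VALSYG's ingestion code; it only shows that each input is one standard RPC call away.

```ts
// Illustrative only: the public validator-side inputs, read via standard RPC.
import { Connection } from "@solana/web3.js";

async function fetchValidatorInputs(rpcUrl: string) {
  const connection = new Connection(rpcUrl, "confirmed");

  // Vote accounts: activated stake, commission, epoch credits, delinquency.
  const { current, delinquent } = await connection.getVoteAccounts();

  // Leader-slot outcomes for the current epoch: per-identity
  // [leaderSlots, blocksProduced] pairs, the raw material for skip rate.
  const production = await connection.getBlockProduction();

  // Current epoch boundaries, used to window everything else.
  const epochInfo = await connection.getEpochInfo();

  return { current, delinquent, production: production.value, epochInfo };
}

// Usage: fetchValidatorInputs("https://api.mainnet-beta.solana.com")
```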

Validator scoring dimensions

A validator’s composite score is a weighted blend of five dimensions. Each one captures a category of behavior that materially affects delegator outcomes. The dimension set is stable and disclosed publicly. The weights and thresholds behind each dimension are recalibrated transparently as the network evolves.

Reliability

How dependably the validator participates in consensus. Captures uptime and skipped-slot behavior over a rolling window. A validator that misses leader slots is either offline or under-resourced; both warrant a lower score regardless of other strengths.
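
A minimal sketch of the skipped-slot half of this dimension, assuming skip rate is computed as skipped leader slots over assigned leader slots across a rolling window. The window length shown is an illustrative placeholder, not VALSYG's calibration.

```ts
// Per-epoch leader-slot outcomes, as surfaced by getBlockProduction.
type EpochProduction = { leaderSlots: number; blocksProduced: number };

// Skip rate over a rolling window of epochs; windowEpochs is an assumption.
function skipRate(history: EpochProduction[], windowEpochs = 10): number | null {
  const window = history.slice(-windowEpochs);
  const leader = window.reduce((sum, e) => sum + e.leaderSlots, 0);
  if (leader === 0) return null; // no leader slots: skip rate is undefined
  const produced = window.reduce((sum, e) => sum + e.blocksProduced, 0);
  return (leader - produced) / leader;
}
```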

Effective yield

What actually reaches the delegator, after commission, skipped slots, and MEV-related flow effects. Distinct from headline APY: it answers “how much SOL lands in my wallet per epoch?” rather than “what does the marketing number say?”
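
As a first-order illustration of the gap between headline APY and what lands in the wallet, the toy arithmetic below discounts a gross per-epoch reward rate by commission and skip rate. It deliberately omits the MEV-related flow effects the real dimension accounts for, and treats skipped slots as a linear drag, which is a simplification.

```ts
// Toy approximation only: effective yield after commission and skipped slots.
function effectiveEpochYield(
  grossRewardRate: number, // e.g. SOL earned per staked SOL per epoch, pre-cost
  commission: number,      // 0..1, from the vote account
  skipRate: number,        // 0..1, from block production
): number {
  return grossRewardRate * (1 - commission) * (1 - skipRate);
}
```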

Concentration

The validator’s share of network stake. Validators with outsized stake share receive lower scores; the dimension exists to surface the contribution your marginal stake would make to network centralization. The intent is to inform, not to coerce.
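
The publicly derivable quantity behind this dimension is simple to compute; the sketch below shows a validator's stake share and the marginal share your delegation would create. How share maps to a score penalty is the undisclosed part.

```ts
// Current share of total activated network stake.
function stakeShare(validatorStake: number, totalStake: number): number {
  return validatorStake / totalStake;
}

// Share after adding your delegation: the marginal-centralization view.
function marginalShare(
  validatorStake: number,
  totalStake: number,
  delegation: number,
): number {
  return (validatorStake + delegation) / (totalStake + delegation);
}
```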

Commission fairness

Current commission level combined with the validator’s recent history of changing it. A stable rate scores better than the same rate reached by a recent increase. A validator that raises commission right before reward distribution is penalized harder than one with a higher but stable rate.
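
A hedged sketch of the timing intuition: scan the recent commission history for the largest upward step. The window length here is an assumption for illustration; the actual penalty curve is internal.

```ts
// One commission observation per epoch; history assumed sorted by epoch.
type CommissionSample = { epoch: number; commission: number }; // 0..100

// Largest upward commission change inside the recent window, in points.
// Zero for a stable rate; positive for a recent hike.
function recentIncrease(
  history: CommissionSample[],
  currentEpoch: number,
  recentEpochs = 5, // illustrative window, not VALSYG's threshold
): number {
  const recent = history.filter((s) => s.epoch >= currentEpoch - recentEpochs);
  let increase = 0;
  for (let i = 1; i < recent.length; i++) {
    increase = Math.max(increase, recent[i].commission - recent[i - 1].commission);
  }
  return increase;
}
```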

Stability trend

The direction the underlying signals have been moving. A validator whose metrics are quietly degrading scores lower before any absolute number crosses a threshold. Requires accumulated history to be meaningful; on a fresh validator the dimension returns a neutral mid-band value until enough history exists.
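
One plausible reading of "the direction the signals have been moving" is an ordinary-least-squares slope over a metric's per-epoch history, abstaining when history is thin. The sketch below implements that reading; VALSYG's actual trend estimator is not disclosed.

```ts
// Least-squares slope of a per-epoch metric series. Returns null when there
// is too little history; the caller maps null to a neutral mid-band value,
// mirroring the fresh-validator behavior described above.
function trendSlope(series: number[], minSamples = 8): number | null {
  const n = series.length;
  if (n < minSamples) return null;
  const xMean = (n - 1) / 2;
  const yMean = series.reduce((s, y) => s + y, 0) / n;
  let num = 0;
  let den = 0;
  for (let x = 0; x < n; x++) {
    num += (x - xMean) * (series[x] - yMean);
    den += (x - xMean) ** 2;
  }
  return num / den; // negative slope on a quality metric = quiet degradation
}
```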

LST protocol scoring

Liquid staking tokens wrap a set of underlying validators. The LST scoring methodology runs in parallel to validator scoring — never folded into one composite — because LST protocols carry categories of risk single validators do not: smart-contract risk, redemption-mechanism risk, peg deviation, governance and pause authority, and audit recency.

The protocol-level dimensions we evaluate cover:

  • Effective yield net of fees — what a holder receives after protocol fees.
  • Peg behavior — the LST/SOL ratio and its historical stability.
  • Redemption robustness — how the protocol’s redemption mechanism behaves under load.
  • Underlying validator quality — the aggregate of the validator scores under the protocol. This is where the two domains connect (see the sketch after this list).
  • Validator-set concentration — how the protocol’s stake is distributed across the underlying set.
  • Protocol maturity — deployment age, audit posture, incident history.
  • Governance transparency — disclosure quality on governance, pause authority, and the fee-change process.
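
The sketch below illustrates the connection point named in the underlying-validator-quality item: a stake-weighted aggregate of the composite scores beneath an LST. Whether the production aggregate is a weighted mean or something more robust is not disclosed.

```ts
// A validator inside an LST's delegation set: its composite score and the
// stake the protocol has placed on it.
type UnderlyingValidator = { score: number; stake: number };

// Stake-weighted mean of underlying validator scores; one plausible
// aggregation, shown for illustration only.
function underlyingQuality(set: UnderlyingValidator[]): number | null {
  const totalStake = set.reduce((s, v) => s + v.stake, 0);
  if (totalStake === 0) return null;
  return set.reduce((s, v) => s + v.score * (v.stake / totalStake), 0);
}
```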

Cross-plane comparisons happen at the recommendation layer, not by mixing components into a single number. A user is never asked to read a validator score and an LST score on the same scale.

Risk signals

Score is a continuous metric; risk signals are discrete events. A signal fires when a specific condition is met and shows up on the decision card with the time of first occurrence. Signals do not directly lower the composite score — they inform the recommendation and alert layers. Their job is to name a change, not to hide inside the number.

Categories of validator-side signals:

  • Commission events — level outside the actionable range, or material recent increases.
  • Performance events — uptime or skip-rate behavior outside acceptable bands.
  • Trend events — sustained negative drift in the underlying signals.
  • Concentration events — outsized stake share.
  • Lifecycle events — delinquent or decommissioned status.

Categories of LST-side signals (when the LST surface is active):

  • Peg deviation outside the protocol’s historical band.
  • Redemption queue stretching beyond expected behavior.
  • Protocol pause or active incident.
  • Audit-status concerns.
  • Material change in underlying validator-set composition or quality.
  • Protocol-fee changes.
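
The signal contract described above can be made concrete with one hedged example, an LST-side peg-deviation check. The type and field names are illustrative assumptions; only the shape (a discrete, typed event with a condition and a time of first occurrence) comes from the text.

```ts
// A risk signal: a discrete event, never a score adjustment.
type RiskSignal = {
  kind: string;    // e.g. "lst.peg_deviation" (illustrative naming)
  firedAt: Date;   // first occurrence, shown on the decision card
  detail: string;
};

// Fires when the peg leaves the protocol's historical band. How the band
// is derived is an internal calibration question; it arrives as input here.
function pegDeviationSignal(
  peg: number,      // current LST/SOL ratio
  bandLow: number,
  bandHigh: number,
  now: Date,
): RiskSignal | null {
  if (peg >= bandLow && peg <= bandHigh) return null; // no event, no signal
  return {
    kind: "lst.peg_deviation",
    firedAt: now,
    detail: `peg ${peg.toFixed(4)} outside [${bandLow}, ${bandHigh}]`,
  };
}
```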

How recommendations work

Scores are inputs to recommendations; recommendations are outputs to users.

The recommendation engine compares your current exposure — a specific validator or a specific LST — against candidates and emits one of: stay, switch validators natively, swap LSTs, allocate to or from liquid exposure, unwind to native, or no action. Every recommendation carries:

  • An explicit action path with the friction it implies (epoch-bound for native moves; typically immediate for LST swaps).
  • A typed list of reasons drawn from the same dimensions visible on the score panel, so the user sees what drove the verdict.
  • Evidence — snapshot references, version stamps, on-chain anchors — sufficient to reproduce the recommendation from the same inputs.
  • A confidence band reflecting data freshness, sample maturity, and signal stability on both sides of the comparison.

A user can render any recommendation as “we recommend X because A, B, and C” without a second query. That is the bar.
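
Put together, the list above implies a recommendation payload roughly like the sketch below. Every field name here is an assumption for illustration, not a published schema.

```ts
// Illustrative shape of a recommendation record; not VALSYG's actual schema.
type Recommendation = {
  action:
    | "stay"
    | "switch_validator"
    | "swap_lst"
    | "reallocate"
    | "unwind_to_native"
    | "no_action";
  friction: "epoch_bound" | "immediate"; // native moves wait for epoch boundary
  reasons: { dimension: string; summary: string }[]; // same dimensions as the score panel
  evidence: {
    methodologyVersion: string; // version stamp
    snapshotRef: string;        // snapshot reference / on-chain anchor
    asOfSlot: number;
  };
  confidence: "low" | "medium" | "high"; // freshness, maturity, signal stability
};
```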

When the system stays quiet

“No action” is a first-class output, not a fallback. The engine declines to recommend when one or more of the following holds:

  • Data freshness on either side is below an acceptable bound.
  • The protocol or validator involved is young enough that confidence is structurally low.
  • An LST is currently outside its normal peg band, where a recommendation risks being arbitrage-shaped rather than user-shaped.
  • Material risk signals are active on either the current or candidate side.
  • The improvement available from acting is smaller than the friction cost.
  • Current exposure is already well-positioned given visible information.

When the system declines, the user sees why. Silence with no reason is itself a trust failure.
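
Those conditions suggest a gate that returns named reasons rather than a bare boolean, so silence always carries its "why". A minimal sketch follows, with assumed threshold inputs; the real bounds are internal.

```ts
// One named reason per condition in the list above.
type QuietReason =
  | "stale_data"
  | "low_maturity"
  | "peg_out_of_band"
  | "active_risk_signals"
  | "improvement_below_friction"
  | "already_well_positioned";

// Inputs are assumed to be pre-computed upstream; names are illustrative.
function quietReasons(ctx: {
  dataFreshnessOk: boolean;
  matureEnough: boolean;
  pegInBand: boolean;
  activeSignals: number;
  expectedGain: number; // same units as frictionCost
  frictionCost: number;
  alreadyOptimal: boolean;
}): QuietReason[] {
  const reasons: QuietReason[] = [];
  if (!ctx.dataFreshnessOk) reasons.push("stale_data");
  if (!ctx.matureEnough) reasons.push("low_maturity");
  if (!ctx.pegInBand) reasons.push("peg_out_of_band");
  if (ctx.activeSignals > 0) reasons.push("active_risk_signals");
  if (ctx.expectedGain <= ctx.frictionCost) reasons.push("improvement_below_friction");
  if (ctx.alreadyOptimal) reasons.push("already_well_positioned");
  return reasons; // non-empty => the engine declines, and the user sees why
}
```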

Limitations

We are honest about what this methodology cannot do.

  • We score behavior we can read. Off-chain operator conduct, custodial activity, and information that lives only inside protocol teams are not scored.
  • A score is a current-state summary. It is not a forecast and should not be read as one.
  • LST risks compound: a holder is exposed to validator-side risk and protocol-side risk simultaneously. Our scores describe each plane; combining them into a personal decision remains your call.
  • Calibration drifts as the network does. The methodology is recalibrated openly; historical scores remain valid under the version that produced them.

Trust commitment

  • Every input is public. Every score is reproducible from those inputs.
  • The methodology is versioned. Old scores remain valid under the version that produced them.
  • We disclose data sources, dimensions, and signal categories on this page.
  • We do not certify any validator or LST as “safe.” Audits and incidents are facts; safety is not a label we apply.
  • We do not use LLM-generated narrative for scores or recommendations. Reasoning copy is deterministic and grounded in the same data the user can inspect.

Found a gap? Disagree with how a dimension is described? Email methodology@valsyg.com — substantive corrections ship alongside their rationale.
