
This section defines how stealth efficacy is measured, aggregated, and scaled. Metrics provide not only monitoring but also regression detection, validation of design assumptions, and provisioning for growth across concurrent domains. All metrics are reported as global baselines. When available, additional slices are reported without altering definitions or formulas. Optional stratifications:
– By operator mode (throughput, stealth, exploratory)
– By adversary pressure tier (clean, challenged, hostile)
– By domain cohort (ASN, CDN, fingerprint family)
– By pipeline stage (timing, concurrency, resolution, fingerprint, transport)

9.1 Objectives

  • Quantify resilience against bans, stability of session reuse, and distinguishability relative to population baselines.
  • Detect regressions through rolling windows and statistically valid confidence intervals.
  • Scale collection and aggregation with bounded compute cost as the number of active domains grows.

9.2 Event Model

Every request contributes structured observations including:
  • Domain, timestamp, status, latency
  • Transport policy chosen
  • TLS path profile
  • Fingerprint and session identifiers
  • Cooldown state
Events are append-only, normalized, and grouped with low-cardinality keys for efficient aggregation.
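The event shape above can be sketched as an immutable record; the field names below are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StealthEvent:
    # Low-cardinality grouping keys plus per-request measurements.
    domain: str
    timestamp: float        # Unix seconds
    status: int             # HTTP status of the response
    latency_ms: float
    transport_policy: str   # transport policy chosen, e.g. "h2-proxy"
    tls_profile: str        # TLS path profile label
    fingerprint_id: str
    session_id: str
    cooldown_active: bool   # cooldown state at request time
```

Freezing the dataclass keeps events append-only by construction; downstream aggregators group on the string keys without mutating records.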

9.3 Core Stealth Metrics

9.3.1 Ban Rate and Reduction

For a window of size n with k ban-class responses: \hat{r} = \tfrac{k}{n}. Relative reduction from baseline r_{\mathrm{base}}: \mathrm{BRR} = 100 \cdot \frac{r_{\mathrm{base}} - r_{\mathrm{curr}}}{\max(\epsilon, r_{\mathrm{base}})}. Confidence intervals use Wilson bounds: \hat{p}_\pm = \frac{\hat{r} + \tfrac{z^2}{2n} \pm z\sqrt{\tfrac{\hat{r}(1-\hat{r})}{n} + \tfrac{z^2}{4n^2}}}{1 + \tfrac{z^2}{n}}, \quad z = 1.96.
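The ban-rate estimate, Wilson interval, and BRR above translate directly to code; this is a minimal sketch with assumed function names:

```python
import math

def wilson_interval(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for the proportion k/n."""
    if n == 0:
        return (0.0, 1.0)
    r = k / n
    denom = 1 + z**2 / n
    center = r + z**2 / (2 * n)
    margin = z * math.sqrt(r * (1 - r) / n + z**2 / (4 * n**2))
    return ((center - margin) / denom, (center + margin) / denom)

def ban_rate_reduction(r_base: float, r_curr: float, eps: float = 1e-9) -> float:
    """Relative reduction (percent) of the current ban rate vs. baseline."""
    return 100.0 * (r_base - r_curr) / max(eps, r_base)
```

The epsilon guard mirrors the max(ε, r_base) term in the formula, so a zero baseline never divides by zero.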

9.3.2 Session Reuse Lifespan

If U_j is the number of requests served by session j, then with m sessions: \bar{U} = \tfrac{1}{m}\sum_{j=1}^{m} U_j. Distributions are reported via percentiles (P50, P90, P99), partitioned by termination trigger.
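The mean and percentile summary can be computed per termination trigger with a small helper; nearest-rank percentiles are an assumed implementation choice here:

```python
import math

def session_reuse_stats(reuse_counts: list[int]) -> dict[str, float]:
    """Mean and P50/P90/P99 of per-session request counts U_j."""
    if not reuse_counts:
        return {}
    s = sorted(reuse_counts)
    m = len(s)

    def pct(p: float) -> float:
        # Nearest-rank percentile: simple, monotone, no interpolation.
        idx = min(m - 1, max(0, math.ceil(p / 100 * m) - 1))
        return float(s[idx])

    return {"mean": sum(s) / m, "p50": pct(50), "p90": pct(90), "p99": pct(99)}
```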

9.3.3 Distinguishability

Let P = \{p_i\} denote the observed fingerprint distribution and Q = \{q_i\} the population baseline. H(P) = -\sum_i p_i \log p_i, \quad H(Q) = -\sum_i q_i \log q_i. Gap: \Delta H = H(Q) - H(P). Positive \Delta H implies concentration relative to the baseline, making clustering easier. Optional comparison: Jensen–Shannon divergence \mathrm{JSD}(P \Vert Q) = \tfrac{1}{2} D_{\mathrm{KL}}(P \Vert M) + \tfrac{1}{2} D_{\mathrm{KL}}(Q \Vert M), \quad M = \tfrac{1}{2}(P + Q).
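Entropy, the entropy gap, and JSD over discrete fingerprint distributions are a few lines each; natural log is assumed, so JSD is bounded by ln 2:

```python
import math

def entropy(dist: dict[str, float]) -> float:
    """Shannon entropy H = -sum p log p (natural log), skipping zero mass."""
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

def js_divergence(p: dict[str, float], q: dict[str, float]) -> float:
    """Jensen-Shannon divergence: mean KL of each distribution to M = (P+Q)/2."""
    keys = set(p) | set(q)
    m = {k: 0.5 * (p.get(k, 0.0) + q.get(k, 0.0)) for k in keys}

    def kl(a: dict[str, float]) -> float:
        return sum(a[k] * math.log(a[k] / m[k]) for k in keys if a.get(k, 0.0) > 0)

    return 0.5 * kl(p) + 0.5 * kl(q)
```

The gap is then simply `entropy(baseline) - entropy(observed)`; a positive value flags concentration of the observed fingerprint mix.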

9.4 Windows and Aggregation

  • Sliding windows of length W with step \delta.
  • Exponential moving averages:
\mathrm{EMA}_t = \alpha x_t + (1-\alpha)\mathrm{EMA}_{t-1}, \quad \alpha = \tfrac{2}{w+1}.
  • Proportions use Wilson intervals; means use bootstrap when n is small.
  • Aggregates reported both per-domain and globally, weighted by request volume.
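The EMA recurrence above, with span-based smoothing α = 2/(w+1) and the series seeded at its first value (a seeding assumption, not specified in the text), can be sketched as:

```python
def ema_series(xs: list[float], w: int) -> list[float]:
    """EMA_t = alpha*x_t + (1-alpha)*EMA_{t-1}, alpha = 2/(w+1), seeded at x_0."""
    alpha = 2 / (w + 1)
    out: list[float] = []
    for x in xs:
        prev = out[-1] if out else x  # seed: first EMA equals first sample
        out.append(alpha * x + (1 - alpha) * prev)
    return out
```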

9.5 Supporting Diagnostics

  • Cooldown duty cycle D = \tfrac{A}{A+C} per domain, where A is active time and C is cooldown time.
  • Header–TLS mismatch rate.
  • Transport timeout rates across proxy classes.
  • DNS cache hit ratio and resolution latency.
  • Dispersion of inter-arrival times.
  • Session churn (sessions per fixed number of requests).
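Two of these diagnostics reduce to one-line ratios; the helper names and the per-1000-requests churn normalization are assumptions for illustration:

```python
def cooldown_duty_cycle(active_s: float, cooldown_s: float) -> float:
    """D = A / (A + C): fraction of wall time the domain is usable."""
    total = active_s + cooldown_s
    return active_s / total if total > 0 else 0.0

def session_churn(sessions_opened: int, requests: int, per: int = 1000) -> float:
    """Sessions consumed per fixed block of requests (default: per 1000)."""
    return sessions_opened * per / requests if requests else 0.0
```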

9.6 Scaling with Domain Count

Let \lambda_d be the request rate for domain d, and \Lambda = \sum_d \lambda_d. Event volume over an interval of length T: V \approx \Lambda \cdot T.
  • State cost: linear in domain count NN for counters, histograms, and buffers.
  • Computation cost: linear in event volume VV for aggregation, with entropy and divergence terms scaling with fingerprint pool size.
  • Memory: proportional to NN until buffer saturation.

9.7 Output and Cadence

Aggregates are emitted at regular intervals. Each snapshot includes:
  • Window length and generation time
  • Global aggregates (requests, ban rate with confidence bounds, cooldown duty cycle, entropy gap)
  • Per-domain aggregates (ban rate, cooldown minutes, session reuse distributions, transport/timeouts, TLS profile mix, DNS cache ratios)
Cadence balances timeliness (minute-level alerts) against retention (hourly to daily persistence).
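A snapshot like the one described can be serialized as a flat JSON document; the schema and key names below are illustrative, not a required wire format:

```python
import json
import time

def build_snapshot(window_s: int, global_agg: dict, per_domain: dict) -> str:
    """Serialize one metrics snapshot: window length, generation time,
    global aggregates, and per-domain aggregates."""
    snap = {
        "window_s": window_s,
        "generated_at": time.time(),
        "global": global_agg,      # e.g. requests, ban rate + bounds, entropy gap
        "domains": per_domain,     # per-domain aggregates keyed by domain
    }
    return json.dumps(snap, sort_keys=True)
```

Sorted keys keep successive snapshots diff-friendly when persisted at the hourly-to-daily retention tier.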

9.8 Coupling Hazards in Metrics

Metrics are interdependent:
  • Cooldowns shorten A, biasing observed session reuse.
  • Fingerprint rotation frequency alters entropy, exaggerating distinguishability when pool size is small.
  • Session churn feeds back into ban rate estimates when failures trigger premature retirements.
Mitigation: report conditional views (e.g., entropy conditional on fixed reuse cap) to decouple effects.

9.9 Operational Outcome

The metrics framework provides a verifiable evidence base for stealth efficacy. It quantifies resilience, validates design invariants, and detects regressions under drift. By scaling linearly with domains and events, it ensures stealth can be monitored at production scale without introducing new signals.