This section defines how stealth efficacy is measured, aggregated, and scaled. Metrics provide not only monitoring but also regression detection, validation of design assumptions, and provisioning for growth across concurrent domains. All metrics are reported as global baselines. When available, additional slices are reported without altering definitions or formulas. Optional stratifications:
- By operator mode (throughput, stealth, exploratory)
- By adversary pressure tier (clean, challenged, hostile)
- By domain cohort (ASN, CDN, fingerprint family)
- By pipeline stage (timing, concurrency, resolution, fingerprint, transport)
9.1 Objectives
- Quantify resilience against bans, stability of session reuse, and distinguishability relative to population baselines.
- Detect regressions through rolling windows and statistically valid confidence intervals.
- Scale collection and aggregation with bounded compute cost as the number of active domains grows.
9.2 Event Model
Every request contributes structured observations, including:
- Domain, timestamp, status, latency
- Transport policy chosen
- TLS path profile
- Fingerprint and session identifiers
- Cooldown state
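A minimal event record capturing these observations might look like the following sketch; the field names and cooldown states are assumptions, not a fixed schema:

```python
from dataclasses import dataclass
from enum import Enum


class CooldownState(Enum):
    """Hypothetical cooldown states; the real set may differ."""
    ACTIVE = "active"
    COOLING = "cooling"


@dataclass(frozen=True)
class RequestEvent:
    """One structured observation per request (illustrative fields)."""
    domain: str
    timestamp: float          # Unix epoch seconds
    status: int               # HTTP status code
    latency_ms: float
    transport_policy: str     # transport policy chosen
    tls_profile: str          # TLS path profile
    fingerprint_id: str
    session_id: str
    cooldown: CooldownState


event = RequestEvent(
    domain="example.com",
    timestamp=1700000000.0,
    status=200,
    latency_ms=142.5,
    transport_policy="direct",
    tls_profile="chrome-120",
    fingerprint_id="fp-01",
    session_id="sess-42",
    cooldown=CooldownState.ACTIVE,
)
```

A frozen dataclass keeps events immutable, which makes them safe to buffer and aggregate concurrently.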
9.3 Core Stealth Metrics
9.3.1 Ban Rate and Reduction
For a window of $N$ requests containing $b$ ban-class responses, the ban rate is $r = b/N$. Relative reduction from a baseline rate $r_0$ is $\Delta = (r_0 - r)/r_0$. Confidence intervals use Wilson bounds:

$$r_{\pm} = \frac{r + \frac{z^2}{2N} \pm z\sqrt{\frac{r(1-r)}{N} + \frac{z^2}{4N^2}}}{1 + \frac{z^2}{N}}$$

9.3.2 Session Reuse Lifespan
If $n_i$ is the number of requests served by session $i$, then with $S$ sessions the mean lifespan is $\bar{n} = \frac{1}{S}\sum_{i=1}^{S} n_i$. Distributions are reported via percentiles, partitioned by termination trigger.

9.3.3 Distinguishability
Let $P$ denote the observed fingerprint distribution and $Q$ the population baseline. Entropy gap: $\Delta H = H(Q) - H(P)$. A positive gap implies concentration relative to the baseline, making clustering easier. Optional comparison: Jensen–Shannon divergence $\mathrm{JSD}(P \,\|\, Q)$.

9.4 Windows and Aggregation
- Sliding windows of length $W$ with step $s$.
- Exponential moving averages: $\mathrm{EMA}_t = \alpha x_t + (1-\alpha)\,\mathrm{EMA}_{t-1}$.
- Proportions use Wilson intervals; means use bootstrap when the sample size is small.
- Aggregates reported both per-domain and globally, weighted by request volume.
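The proportion machinery above can be sketched as follows; the two-sided normal quantile `z` and smoothing factor `alpha` defaults are assumptions:

```python
import math


def wilson_interval(b: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a proportion b/n (e.g., ban rate)."""
    if n == 0:
        return (0.0, 1.0)
    r = b / n
    denom = 1 + z * z / n
    center = (r + z * z / (2 * n)) / denom
    half = (z * math.sqrt(r * (1 - r) / n + z * z / (4 * n * n))) / denom
    return (center - half, center + half)


def ema(values: list[float], alpha: float = 0.2) -> float:
    """Exponential moving average: s_t = alpha*x_t + (1-alpha)*s_{t-1}."""
    s = values[0]
    for x in values[1:]:
        s = alpha * x + (1 - alpha) * s
    return s


lo, hi = wilson_interval(b=7, n=200)       # observed ban rate 3.5%
smoothed = ema([0.04, 0.03, 0.05, 0.02])   # smoothed ban-rate series
```

Unlike the normal approximation, the Wilson interval stays inside [0, 1] and behaves sensibly at the small counts typical of per-domain windows.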
9.5 Supporting Diagnostics
- Cooldown duty cycle per domain.
- Header–TLS mismatch rate.
- Transport timeout rates across proxy classes.
- DNS cache hit ratio and resolution latency.
- Dispersion of inter-arrival times.
- Session churn (sessions per fixed number of requests).
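Two of these diagnostics can be computed directly from timestamps and session identifiers; using the coefficient of variation as the dispersion measure is an assumption:

```python
import statistics


def interarrival_dispersion(timestamps: list[float]) -> float:
    """Coefficient of variation of inter-arrival times (dispersion proxy)."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean = statistics.mean(gaps)
    return statistics.pstdev(gaps) / mean if mean > 0 else 0.0


def session_churn(session_ids: list[str], per: int = 100) -> float:
    """Distinct sessions per fixed number of requests."""
    if not session_ids:
        return 0.0
    return len(set(session_ids)) * per / len(session_ids)


cv = interarrival_dispersion([0.0, 1.0, 2.1, 2.9, 4.0])
churn = session_churn(["s1"] * 40 + ["s2"] * 60)  # 2 sessions per 100 requests
```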
9.6 Scaling with Domain Count
Let $\lambda_d$ be the request rate for domain $d$, and $\Lambda = \sum_d \lambda_d$. Event volume per unit time is $\Lambda$, or $\Lambda T$ for interval length $T$.
- State cost: linear in domain count for counters, histograms, and buffers.
- Computation cost: linear in event volume for aggregation, with entropy and divergence terms scaling with fingerprint pool size.
- Memory: proportional to the number of active domains until buffer saturation.
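A worked instance of the volume arithmetic, under assumed per-domain rates:

```python
# Assumed per-domain request rates (events/sec); domain names are illustrative.
rates = {"a.example": 2.0, "b.example": 0.5, "c.example": 1.5}

total_rate = sum(rates.values())                  # total rate = 4.0 events/sec
window_seconds = 300                              # interval length
events_per_window = total_rate * window_seconds   # 1200 events per window

# State cost grows linearly with the number of active domains:
domain_count = len(rates)                         # 3 counter/histogram sets
```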
9.7 Output and Cadence
Aggregates are emitted at regular intervals. Each snapshot includes:
- Window length and generation time
- Global aggregates (requests, ban rate with confidence bounds, cooldown duty cycle, entropy gap)
- Per-domain aggregates (ban rate, cooldown minutes, session reuse distributions, transport/timeouts, TLS profile mix, DNS cache ratios)
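A snapshot payload with the entropy-gap term can be sketched as follows; Shannon entropy in bits, the dict layout, and the example distributions are all assumptions:

```python
import math
import time


def entropy(dist: dict[str, float]) -> float:
    """Shannon entropy in bits of a normalized distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)


observed = {"fp-a": 0.7, "fp-b": 0.2, "fp-c": 0.1}     # concentrated usage
baseline = {"fp-a": 0.34, "fp-b": 0.33, "fp-c": 0.33}  # near-uniform population

snapshot = {
    "window_seconds": 300,
    "generated_at": time.time(),
    "requests": 1200,
    "ban_rate": 0.035,
    # Positive gap: observed fingerprints are more concentrated than baseline.
    "entropy_gap": entropy(baseline) - entropy(observed),
}
```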
9.8 Coupling Hazards in Metrics
Metrics are interdependent:
- Cooldowns shorten session lifespans, biasing observed session reuse.
- Fingerprint rotation frequency alters entropy, exaggerating distinguishability when pool size is small.
- Session churn feeds back into ban rate estimates when failures trigger premature retirements.