
The Concurrency Layer mitigates detection risk by bounding parallelism and enforcing structured cooldowns. Concurrency patterns and clustered error bursts are inexpensive for a target to measure yet highly discriminative: even small deviations, such as several simultaneous requests to the same host or repeated denial responses within a narrow window, are sufficient to fingerprint automation. This layer enforces discipline at the request edge. It constrains per-domain in-flight requests and translates bursts of hostile responses into deterministic cooldown intervals, while remaining orthogonal to Timing Layer effects, so concurrency pressure does not escalate into error cascades. Two complementary mechanisms enforce these invariants:
  1. Domain-level concurrency limiting – bounds instantaneous parallelism.
  2. Cooldown management – converts error bursts into tiered cooldowns with deterministic reentry.

4.1 Design Philosophy

Concurrency is treated as a scarce, high-risk resource. Human users rarely sustain multiple simultaneous connections to the same origin, whereas throughput-driven automation often pushes concurrency until bans appear, then retries aggressively, thereby amplifying detection. The design inverts this pattern. Concurrency is held at low, human-plausible levels. When signs of hostility appear, domains are withdrawn from rotation for computed intervals rather than retried. Reentry follows a tiered cooldown ladder with stochastic variation, approximating human abandonment and later return.

Common pitfalls:
  • Exceeding per-domain concurrency caps under aggregate load
  • Retrying immediately on error bursts, amplifying detection
  • Allowing simultaneous cooldown expirations across many domains
  • Treating concurrency as throughput resource rather than detection vector
  • Failing to separate global throughput goals from per-domain plausibility

4.2 Domain Concurrency Limiting

Concurrency limiting enforces the invariant $0 \le \text{inflight}_d(t) \le M$ for each domain $d$ at time $t$, where $M$ is a small cap consistent with human plausibility. This guarantees that instantaneous parallelism never exceeds bounds, regardless of upstream scheduling. The result is concurrency histograms that align with observed human baselines even under high aggregate load.
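The per-domain invariant above maps naturally onto one semaphore per domain. The sketch below is illustrative, not the layer's actual interface: the cap of 2 and all names (`DomainLimiter`, `fake_request`) are assumptions.

```python
import asyncio
from collections import defaultdict


class DomainLimiter:
    """Bounds instantaneous parallelism per domain with one semaphore each."""

    def __init__(self, cap: int = 2):
        # One semaphore per domain; cap is the small, human-plausible M.
        self._sems = defaultdict(lambda: asyncio.Semaphore(cap))

    async def run(self, domain: str, coro_factory):
        async with self._sems[domain]:  # inflight_d(t) never exceeds cap
            return await coro_factory()


async def demo():
    limiter = DomainLimiter(cap=2)
    peak = inflight = 0

    async def fake_request():
        nonlocal peak, inflight
        inflight += 1
        peak = max(peak, inflight)
        await asyncio.sleep(0.01)  # simulated service time
        inflight -= 1

    # Ten requests to the same domain; the cap holds parallelism at <= 2.
    await asyncio.gather(*(limiter.run("example.com", fake_request) for _ in range(10)))
    return peak


print(asyncio.run(demo()))  # peak in-flight count, never above 2
```

Because the semaphore is acquired at the request edge, the bound holds regardless of how many tasks the upstream scheduler launches at once.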

4.3 Cooldown Management

Cooldown management translates clusters of denial-class responses into structured pauses. Instead of retrying immediately, the system enforces tiered cooldown intervals that escalate with recurrence. Each cooldown is applied deterministically but with small random variation to prevent synchronous reactivation. Formally, for a domain $d$ with error sequence $E_d$, once a threshold condition is met, a cooldown interval $\tau_d$ is applied from a predefined tier ladder: $$\tau_d = \tau_k \cdot \eta,$$ where $k$ indexes the current tier and $\eta$ is a bounded random factor. Successful interactions bias domains back toward shorter intervals, ensuring recovery over time.
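A tier ladder with bounded jitter can be sketched as below. The tier durations, the 20% jitter bound, and the class interface are assumptions for illustration; the real parameters are omitted by design.

```python
import random
import time

# Assumed tier ladder tau_k in seconds; escalates with recurrence.
TIERS = [30.0, 120.0, 600.0, 3600.0]


class CooldownManager:
    def __init__(self, tiers=TIERS, jitter=0.2):
        self.tiers = tiers
        self.jitter = jitter
        self.tier = {}   # domain -> current tier index k
        self.until = {}  # domain -> monotonic reentry time

    def record_errors(self, domain: str) -> float:
        """Threshold met: escalate the tier and apply tau_d = tau_k * eta."""
        k = min(self.tier.get(domain, -1) + 1, len(self.tiers) - 1)
        self.tier[domain] = k
        eta = random.uniform(1 - self.jitter, 1 + self.jitter)  # bounded eta
        tau = self.tiers[k] * eta
        self.until[domain] = time.monotonic() + tau
        return tau

    def record_success(self, domain: str):
        """Successful interactions bias the domain back toward shorter tiers."""
        if self.tier.get(domain, 0) > 0:
            self.tier[domain] -= 1
        else:
            self.tier.pop(domain, None)

    def available(self, domain: str) -> bool:
        return time.monotonic() >= self.until.get(domain, 0.0)
```

The jitter factor $\eta$ keeps reentry times deterministic in expectation while preventing many domains from reactivating in the same instant.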

4.4 Formal Model

Concurrency limiting controls the instantaneous parallelism vector; cooldowns regulate the burst error vector. The effective request rate for a domain $d$ is bounded: $$E[\text{rate}_d] \le \min\!\Big( \tfrac{M}{E[S]},\, \tfrac{1}{E[\Delta t]} \Big),$$ where $M$ is the concurrency cap, $E[S]$ the mean service time, and $E[\Delta t]$ the mean inter-arrival delay from the Timing Layer. With cooldowns, request activity follows a duty cycle: $$D = \frac{A}{A+C}, \qquad \lambda_d \le D \cdot \min\!\Big( \tfrac{M}{E[S]},\, \tfrac{1}{E[\Delta t]} \Big),$$ where $A$ is active time, $C$ is cooldown time, and $D$ the effective duty fraction. As cooldown tiers escalate, $C$ increases, reducing effective rates for hostile domains without a global slowdown. Parameters omitted by design.
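The bounds can be made concrete with assumed numbers (the actual parameters are intentionally unspecified): a cap of $M = 2$, mean service time $E[S] = 4\,\text{s}$, and mean timing-layer delay $E[\Delta t] = 8\,\text{s}$.

```python
# Assumed illustrative values, not the layer's real parameters.
M, ES, Edt = 2, 4.0, 8.0

# min(M/E[S], 1/E[dt]) -> min(0.5, 0.125): the timing layer binds here.
rate_bound = min(M / ES, 1 / Edt)
print(rate_bound)  # 0.125 requests/second

# A 10-minute active window followed by a 30-minute cooldown:
A, C = 600.0, 1800.0
D = A / (A + C)           # duty fraction 0.25
print(D * rate_bound)     # effective rate 0.03125 requests/second
```

Note that escalating a hostile domain's cooldown tier grows $C$ and shrinks $D$ for that domain alone; the bound for every other domain is unchanged.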

4.5 Request Flow

The concurrency layer enforces its contract through the sequence below.

[Diagram: Request Flow]
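The flow can be sketched end to end: cooldown gate, timing-layer delay, bounded slot acquisition, then outcome classification. Everything here is a self-contained illustration under assumed names; the denial codes, cap, and 30-second cooldown are placeholders.

```python
import asyncio
import time
from collections import defaultdict

DENIAL_CODES = {403, 429}                        # assumed denial-class statuses
SEMS = defaultdict(lambda: asyncio.Semaphore(2))  # per-domain cap M = 2
COOLDOWN_UNTIL = {}                               # domain -> monotonic reentry time


async def fetch_with_discipline(domain, do_request, delay=0.0):
    # 1. Domains in cooldown stay withdrawn from rotation.
    if time.monotonic() < COOLDOWN_UNTIL.get(domain, 0.0):
        return None
    # 2. Timing-layer inter-arrival gap.
    await asyncio.sleep(delay)
    # 3. Bounded parallelism at the request edge.
    async with SEMS[domain]:
        status = await do_request()
    # 4. Hostile responses trigger a cooldown instead of a retry.
    if status in DENIAL_CODES:
        COOLDOWN_UNTIL[domain] = time.monotonic() + 30.0
    return status


async def demo():
    async def denied():
        return 429

    first = await fetch_with_discipline("example.com", denied)
    second = await fetch_with_discipline("example.com", denied)
    return first, second


print(asyncio.run(demo()))  # (429, None): the second call is gated by the cooldown
```

The ordering matters: the cooldown gate runs before any delay or slot acquisition, so a withdrawn domain consumes no concurrency budget at all.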

4.6 Operational Outcome

The Concurrency Layer ensures that:
  • Parallelism remains within human-like thresholds.
  • Error bursts translate into deterministic cooldowns rather than retries.
  • Domains reenter request flows only after controlled recovery windows.
Concurrency is thereby transformed from a detection handle into a self-regulating stealth mechanism, aligned with human plausibility and resilient against sustained adversarial pressure.