Practical guidance for implementing APTS Scope Enforcement requirements. Each section provides a brief implementation approach, key considerations, and common pitfalls.
Note: This guide is informative, not normative. Recommended defaults and example values are suggested starting points; the Scope Enforcement README contains the authoritative requirements. Where this guide and the README differ, the README governs.
Implementation: Ingest RoE documents in standardized machine-parseable format (JSON/YAML/XML) before test initialization. Validate schema, required fields, and logical consistency before proceeding.
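The ingest step above can be sketched as a fail-closed parser. This is an illustrative sketch only: the field names in `REQUIRED_FIELDS` are hypothetical examples, not the authoritative RoE schema (which lives in the Scope Enforcement README), and a real implementation would use full JSON Schema validation.

```python
import json

# Hypothetical required top-level fields for illustration only;
# the authoritative field list is defined by the RoE schema.
REQUIRED_FIELDS = {"engagement_id", "ip_ranges", "domains", "start_time", "end_time"}

def validate_roe(raw: str) -> dict:
    """Parse an RoE document (JSON here) and fail closed on any defect."""
    doc = json.loads(raw)  # raises on malformed input
    missing = REQUIRED_FIELDS - doc.keys()
    if missing:
        raise ValueError(f"RoE missing required fields: {sorted(missing)}")
    # Basic logical-consistency check: the engagement window must be non-empty.
    # ISO-8601 UTC strings compare correctly as text.
    if doc["start_time"] >= doc["end_time"]:
        raise ValueError("RoE start_time must precede end_time")
    return doc
```

The point of the sketch is the ordering: nothing downstream runs until parsing, required-field, and consistency checks have all passed.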
Key Considerations:
Common Pitfalls:
Implementation Aid: See the Rules of Engagement Template appendix for an illustrative JSON/YAML/XML starter structure that customers, operators, and reviewers can use to validate required RoE fields consistently.
Implementation: Validate all IP ranges using CIDR notation, detect overlaps using standard IP libraries, and explicitly prevent testing against reserved ranges (RFC 1918, 169.254.0.0/16, cloud metadata endpoints).
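A minimal version of these checks using the standard `ipaddress` library might look like the following; the reserved list shown is a starting point drawn from the ranges named above, not an exhaustive deny set.

```python
import ipaddress

# Ranges that must never be tested regardless of customer-supplied scope.
# 169.254.169.254 (the common cloud metadata endpoint) falls inside the
# link-local block, so it is covered by the last entry.
RESERVED = [
    ipaddress.ip_network("10.0.0.0/8"),      # RFC 1918
    ipaddress.ip_network("172.16.0.0/12"),   # RFC 1918
    ipaddress.ip_network("192.168.0.0/16"),  # RFC 1918
    ipaddress.ip_network("169.254.0.0/16"),  # link-local, incl. cloud metadata
]

def check_scope_ranges(cidrs):
    """Parse CIDRs strictly, reject reserved overlaps, report mutual overlaps."""
    nets = [ipaddress.ip_network(c, strict=True) for c in cidrs]  # raises on bad CIDR
    for net in nets:
        if any(net.overlaps(r) for r in RESERVED):
            raise ValueError(f"{net} overlaps a reserved range")
    # Mutual overlaps are returned rather than raised so the operator can review them.
    overlaps = [(a, b) for i, a in enumerate(nets) for b in nets[i + 1:] if a.overlaps(b)]
    return nets, overlaps
```

Note that engagements that legitimately target internal (RFC 1918) networks would need an explicit, RoE-driven carve-out from this list rather than a code change.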
Key Considerations:
Common Pitfalls:
Implementation: Validate domains against authoritative registrars, implement explicit wildcard inclusion/exclusion policies, and explicitly exclude third-party infrastructure domains.
Key Considerations:
Common Pitfalls:
Implementation: Enforce start/end times with microsecond precision, handle daylight saving time transitions, and honor excluded testing windows. Store all timestamps in UTC and convert to local time for display only.
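The store-in-UTC, convert-for-display rule can be sketched as follows; `excluded` here is an assumed representation of blackout windows as (start, end) pairs.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def in_window(now_utc, start_utc, end_utc, excluded):
    """All comparisons happen in UTC; 'excluded' is a list of (start, end)
    UTC blackout windows during which testing must pause."""
    if not (start_utc <= now_utc < end_utc):
        return False
    return not any(s <= now_utc < e for s, e in excluded)

def display_local(ts_utc, tz_name):
    """Convert for display only; storage stays UTC. zoneinfo handles DST
    transitions, so local-time math never enters the enforcement path."""
    return ts_utc.astimezone(ZoneInfo(tz_name)).isoformat()
```

Keeping enforcement purely in UTC sidesteps the classic DST failure mode where a local-time comparison silently extends or shortens the window by an hour.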
Key Considerations:
Common Pitfalls:
Implementation: Implement the four canonical criticality levels (Critical, Production, Non-Production, Unknown), map corresponding testing restrictions per level, and enforce restrictions at runtime based on asset classification.
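A sketch of the level-to-restriction mapping follows. The restriction flags and action names here are illustrative assumptions; the actual per-level restrictions come from the RoE. The one non-negotiable behavior shown is that Unknown fails safe to the most restrictive treatment.

```python
from enum import Enum

class Criticality(Enum):
    CRITICAL = "critical"
    PRODUCTION = "production"
    NON_PRODUCTION = "non_production"
    UNKNOWN = "unknown"

# Illustrative restriction map; real restrictions are engagement-specific.
RESTRICTIONS = {
    Criticality.CRITICAL:       {"passive_only": True,  "exploit": False},
    Criticality.PRODUCTION:     {"passive_only": False, "exploit": False},
    Criticality.NON_PRODUCTION: {"passive_only": False, "exploit": True},
    # Fail safe: unclassified assets get the most restrictive treatment.
    Criticality.UNKNOWN:        {"passive_only": True,  "exploit": False},
}

def allowed(asset_class: Criticality, action: str) -> bool:
    """Runtime gate consulted before each action against a classified asset."""
    rules = RESTRICTIONS[asset_class]
    if action == "exploit":
        return rules["exploit"]
    if action == "active_scan":
        return not rules["passive_only"]
    return True  # passive observation permitted at every level
```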
Key Considerations:
Common Pitfalls:
Implementation: Before every network action (HTTP request, port scan, API call), perform atomic scope validation: verify target against RoE, IP ranges, domain scope, and temporal boundaries.
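One way to express the single pre-action decision point is a function that every tool call must pass through; the check order and reason strings below are illustrative.

```python
import ipaddress
from datetime import datetime, timezone

def validate_action(target_ip, now, scope_nets, deny_nets, start, end):
    """Single decision point evaluated atomically before every network action.
    Returns (allowed, reason); callers treat anything but (True, ...) as a hard stop."""
    ip = ipaddress.ip_address(target_ip)
    if any(ip in n for n in deny_nets):
        return False, "target on deny list"
    if not any(ip in n for n in scope_nets):
        return False, "target outside IP scope"
    if not (start <= now < end):
        return False, "outside engagement window"
    return True, "ok"
```

The important property is that the checks run together at action time, not once at startup, so a scope or deny-list decision can never be stale relative to the action it gates.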
Key Considerations:
Common Pitfalls:
Cloud-Native Scope Validation:
For cloud environments, scope validation must account for:
Implementation: Continuously monitor DNS resolution, cloud infrastructure changes, and scope boundary drift. Alert on unexpected IP-to-domain mappings or infrastructure migrations outside scope.
Key Considerations:
Common Pitfalls:
Implementation: Enforce temporal boundaries continuously during testing, with real-time monitoring of time remaining. Implement countdown alerts at T-60 minutes, T-30 minutes, and T-5 minutes before engagement end. Begin graceful shutdown procedures to ensure all testing halts by the engagement end time.
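The countdown-alert and shutdown logic might be polled on a timer like this; the five-minute shutdown grace period is an assumed default, chosen here to match the final alert threshold.

```python
from datetime import datetime, timedelta, timezone

# Alert thresholds before engagement end, per the T-60/T-30/T-5 guidance above.
THRESHOLDS = [timedelta(minutes=60), timedelta(minutes=30), timedelta(minutes=5)]

def due_alerts(now, end, already_sent):
    """Return threshold alerts that are due but not yet sent.
    Passing the sent set back in keeps repeated polls idempotent."""
    remaining = end - now
    return [t for t in THRESHOLDS if remaining <= t and t not in already_sent]

def should_shutdown(now, end, grace=timedelta(minutes=5)):
    """Begin graceful shutdown early enough that all activity halts by 'end'.
    The 5-minute grace default is an illustrative assumption."""
    return end - now <= grace
```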
Key Considerations:
Common Pitfalls:
Implementation: Maintain immutable deny lists for critical assets (primary databases, admin interfaces, payment systems). The deny list cannot be modified mid-engagement and takes precedence over all other scope rules.
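Immutability and precedence can both be made structural rather than procedural, as in this sketch: the class simply exposes no mutator, and the deny check runs before any inclusion logic.

```python
import ipaddress

class DenyList:
    """Deny entries are frozen at construction; there is deliberately no
    method that adds, removes, or replaces entries mid-engagement."""
    def __init__(self, cidrs):
        self._nets = tuple(ipaddress.ip_network(c) for c in cidrs)

    def blocks(self, target_ip) -> bool:
        ip = ipaddress.ip_address(target_ip)
        return any(ip in n for n in self._nets)

def final_decision(deny: DenyList, target_ip, in_scope: bool) -> bool:
    # Deny list wins over every other scope rule, including explicit inclusions.
    return (not deny.blocks(target_ip)) and in_scope
```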
Key Considerations:
Common Pitfalls:
Implementation: Implement multi-layer protections: prevent direct SQL injection testing, restrict connection attempts to production databases, enforce read-only credential usage, and log all database access attempts.
Key Considerations:
Common Pitfalls:
Implementation: Implement strict tenant isolation checks before all actions. Map tenants to scope and prevent cross-tenant testing. Validate tenant context in every request.
Key Considerations:
Common Pitfalls:
Implementation: Validate IP address against scope before using it, monitor DNS responses for changes during request execution, and reject responses that resolve to out-of-scope IPs.
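The resolve-validate-pin sequence can be sketched as below. The injected `resolver` callable is an assumption made for testability; in practice it would wrap the platform's actual DNS lookup (e.g. `socket.getaddrinfo`).

```python
import ipaddress

def resolve_and_pin(hostname, scope_nets, resolver):
    """Resolve once, validate, then pin: subsequent requests must reuse the
    returned IP literal instead of re-resolving, so a mid-engagement DNS
    change (rebinding) cannot silently redirect traffic out of scope."""
    ip = ipaddress.ip_address(resolver(hostname))
    if not any(ip in n for n in scope_nets):
        raise PermissionError(f"{hostname} resolves to out-of-scope {ip}")
    return str(ip)  # connect to this literal IP; send hostname via Host header / SNI
```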
Key Considerations:
Common Pitfalls:
Implementation: Identify network boundaries (subnet, VLAN, DMZ) from initial scope. Restrict lateral movement to in-scope networks. Implement host-based firewalling rules on test platform.
Key Considerations:
Common Pitfalls:
Implementation: Limit network discovery with configurable host limits (for example, 1000 hosts per subnet), per-host port scan limits (for example, 20 ports per host), and time limits (for example, max 8 hours per discovery phase).
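A budget object is one way to make these limits enforceable at the point of each scan action. The defaults mirror the example values above; elapsed-time accounting is left to the caller.

```python
class DiscoveryBudget:
    """Tracks discovery consumption; defaults mirror the guide's example values."""
    def __init__(self, max_hosts=1000, max_ports_per_host=20, max_seconds=8 * 3600):
        self.max_hosts = max_hosts
        self.max_ports_per_host = max_ports_per_host
        self.max_seconds = max_seconds
        self.hosts_seen = set()
        self.ports_scanned = {}
        self.elapsed = 0.0  # caller updates this from its own clock

    def permit_port_scan(self, host, port) -> bool:
        """Check every limit before permitting; re-scans of a seen port are free."""
        if self.elapsed >= self.max_seconds:
            return False
        if host not in self.hosts_seen and len(self.hosts_seen) >= self.max_hosts:
            return False
        scanned = self.ports_scanned.setdefault(host, set())
        if port not in scanned and len(scanned) >= self.max_ports_per_host:
            return False
        self.hosts_seen.add(host)
        scanned.add(port)
        return True
```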
Key Considerations:
Common Pitfalls:
Implementation: Maintain immutable audit logs of all scope decisions, validation failures, and boundary violations. Include timestamp, action, target, decision, and reasoning.
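One lightweight way to approximate immutability in an application-level log is a hash chain, sketched below; production systems would typically also ship entries to external write-once storage.

```python
import hashlib, json, time

class AuditLog:
    """Append-only hash chain: each entry commits to its predecessor's hash,
    so any after-the-fact edit breaks verification from that point on."""
    def __init__(self):
        self.entries = []
        self._prev = "0" * 64  # genesis value

    def record(self, action, target, decision, reasoning):
        entry = {
            "timestamp": time.time(),
            "action": action, "target": target,
            "decision": decision, "reasoning": reasoning,
            "prev": self._prev,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self._prev = entry["hash"]
        self.entries.append(entry)

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev"] != prev or e["hash"] != hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest():
                return False
            prev = e["hash"]
        return True
```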
Key Considerations:
Common Pitfalls:
Implementation: For continuous-mode engagements, revalidate scope before each test cycle or at least every 24 hours, whichever comes first.
Key Considerations:
Common Pitfalls:
Implementation: Assign unique engagement IDs for recurring tests. Track each test cycle separately with independent scope, temporal boundaries, and authorization validity periods.
Key Considerations:
Common Pitfalls:
Implementation: Fingerprint findings using stable identifiers (asset, vulnerability type, location). Assign lifecycle states (new, regressed, resolved, mitigated) and track across cycles.
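A sketch of fingerprinting and a minimal lifecycle transition follows; the exact state machine is an illustrative assumption ("mitigated" is assumed to be an operator-set state, so it is not produced automatically here).

```python
import hashlib
from typing import Optional

def fingerprint(asset: str, vuln_type: str, location: str) -> str:
    """Stable identifier: the same asset/type/location always hashes identically,
    so the same finding can be matched across test cycles."""
    key = "|".join((asset.lower(), vuln_type.lower(), location.lower()))
    return hashlib.sha256(key.encode()).hexdigest()[:16]

def next_state(previous: Optional[str], present_now: bool) -> str:
    """Minimal lifecycle sketch: new -> resolved -> regressed.
    'mitigated' is assumed to be set manually by an operator."""
    if previous is None:
        return "new" if present_now else "absent"
    if present_now:
        return "regressed" if previous in ("resolved", "mitigated") else previous
    return "resolved"
```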
Key Considerations:
Common Pitfalls:
Implementation: Define configurable testing windows (hours/days), off-peak testing enforcement, and automatic intensity reduction based on production metrics. Implement circuit-breaker pattern.
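The circuit-breaker pattern mentioned above might be sketched like this; the threshold and sample-count defaults are placeholders, and the half-open recovery state is omitted for brevity.

```python
class CircuitBreaker:
    """Trips open when the observed production error rate exceeds a threshold;
    testing resumes only after an explicit operator reset (half-open state
    omitted for brevity)."""
    def __init__(self, error_threshold=0.05, min_samples=20):
        self.error_threshold = error_threshold
        self.min_samples = min_samples  # avoid tripping on tiny samples
        self.errors = 0
        self.total = 0
        self.open = False

    def observe(self, ok: bool):
        """Feed in the outcome of each probe against the production target."""
        self.total += 1
        self.errors += 0 if ok else 1
        if self.total >= self.min_samples and self.errors / self.total > self.error_threshold:
            self.open = True

    def allow(self) -> bool:
        return not self.open
```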
Key Considerations:
Common Pitfalls:
Implementation: Authenticate deployment triggers using cryptographic signatures. Validate scope and authorization before test start. Enforce hard timeout (24h typical) to prevent runaway testing.
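Trigger authentication might look like the following HMAC sketch (a shared-secret stand-in; deployments that need non-repudiation would use asymmetric signatures instead). The payload fields and the 5-minute freshness window are illustrative assumptions.

```python
import hashlib, hmac, json

MAX_RUNTIME_SECONDS = 24 * 3600  # hard ceiling enforced by the scheduler; 24h typical

def sign_trigger(secret: bytes, payload: dict) -> str:
    """Canonicalize the payload and sign it with a shared secret."""
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

def accept_trigger(secret: bytes, payload: dict, signature: str, now: float) -> bool:
    """Reject forged or unsigned triggers, then reject stale ones so a
    captured trigger cannot be replayed later."""
    expected = sign_trigger(secret, payload)
    if not hmac.compare_digest(expected, signature):  # constant-time comparison
        return False
    return now - payload.get("issued_at", 0) < 300  # assumed 5-minute freshness window
```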
Key Considerations:
Common Pitfalls:
Implementation: Detect overlapping engagements for same assets. Apply most restrictive constraints (earliest end time, lowest criticality level) when conflicts exist. Alert stakeholders.
Key Considerations:
Common Pitfalls:
Implementation: Include agents in RoE documents. Agents validate scope independently. Implement agent kill switch mechanism and time-bomb to prevent runaway agents.
Key Considerations:
Common Pitfalls:
Implementation: Maintain real-time inventory of all testing credentials scoped to engagement. Automatically rotate credentials at engagement end. Log all credential access and usage.
Architecture Pattern - Credential Indirection for LLM-Based Agents:
For platforms using LLM-based agents, implement a credential manager that enforces a strict separation between credential references (which the agent sees) and credential values (which only the tool execution layer sees):
The agent receives only CredentialReference objects containing the credential identifier, type, username, and role. This pattern ensures that credentials never appear in inference requests sent to model providers, agent reasoning traces, message history, or step-level logs.
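The reference/value separation can be sketched as below; class and method names are illustrative, and a production credential manager would back `resolve()` with a real vault rather than an in-memory dict.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CredentialReference:
    """What the agent sees: metadata only, never the secret value."""
    credential_id: str
    type: str
    username: str
    role: str

class CredentialManager:
    """Only the tool-execution layer calls resolve(); references are safe
    to place in prompts, traces, and logs."""
    def __init__(self):
        self._vault = {}  # stand-in for a real secrets backend

    def register(self, ref: CredentialReference, secret: str):
        self._vault[ref.credential_id] = secret

    def resolve(self, credential_id: str) -> str:
        return self._vault[credential_id]

def agent_view(ref: CredentialReference) -> str:
    # Anything derived from the reference alone is guaranteed secret-free.
    return f"use credential {ref.credential_id} ({ref.role})"
```

Because the agent's entire view is derived from `CredentialReference`, no inference request, reasoning trace, or step log can contain the secret unless the tool layer leaks it explicitly.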
Key Considerations:
Common Pitfalls:
Implementation: Support cloud APIs for infrastructure enumeration. Track ephemeral resources (containers, serverless functions) with short lifecycles. Implement scope validation for dynamic IPs.
Key Considerations:
Common Pitfalls:
Implementation: Track API tokens and sessions throughout engagement. Monitor token refresh and expiration. Implement schema-drift detection. Log all business logic traversal.
Key Considerations:
Common Pitfalls:
Implementation: Start with a small, honest metric set rather than a dashboard of everything you can measure. Three or four action-distribution metrics that the operator actually understands beat fifteen that nobody looks at. Derive the baseline from historical engagements of the same class where possible; where the platform is too new, declare the expected distribution explicitly in the engagement configuration and mark the baseline as "declared, not observed." Set thresholds in terms the operator can defend (for example, "two standard deviations above the prior-quarter average for this engagement class") rather than as a default constant copied from a blog post. Route detections to a review queue that a human actually works, not an email alias. Record every detection, decision, and baseline refresh to the audit trail so that a reviewer can later reconstruct how the platform handled unusual behavior.
Key Considerations:
Common Pitfalls:
Phase 1 (implement before any autonomous pentesting begins): SE-001 through SE-006 (RoE validation, IP/domain/temporal scope, asset classification, pre-action validation), SE-008 (temporal compliance), SE-009 (deny lists), SE-015 (scope enforcement audit). These are all APTS Tier 1 requirements and form the minimum viable scope enforcement layer.
Start with SE-001 through SE-006 as the foundation. Nothing should execute without validated scope. Layer deny lists (SE-009) and temporal controls (SE-008) next.
Phase 2 (implement within first 3 engagements): SE-007 (dynamic scope monitoring), SE-010 (production DB safeguards), SE-011 (multi-tenant awareness), SE-012 (DNS rebinding prevention), SE-013 (lateral movement enforcement), SE-016 (scope revalidation), SE-017 (recurring engagement boundaries), SE-019 (rate limiting and production impact controls), SE-020 (deployment-triggered testing), SE-022 (client-side agent scope), SE-023 (credential lifecycle governance), SE-024 (cloud/ephemeral infrastructure), SE-025 (API/business logic governance), SE-026 (out-of-distribution action monitoring, SHOULD). These are primarily APTS Tier 2 requirements.
Prioritize SE-013 (lateral movement) and SE-023 (credential governance) first since they prevent the most damaging scope violations. Add SE-010, SE-012, and SE-024 based on target environment complexity.
Phase 3 (implement for maximum assurance): SE-014 (topology discovery limits, SHOULD), SE-018 (cross-cycle finding correlation, SHOULD), SE-021 (overlapping engagement conflict resolution, Tier 3). These provide advanced capabilities for platforms operating at scale or in multi-customer environments.