What This Post Actually Covers
If you run an OWASP Top 10 scanner against a target and get a clean report, does that mean your application is secure? Obviously not. But the OWASP Top 10 (2021 edition) gives us a shared vocabulary for the vulnerability classes that show up most often in real-world breaches, and any web vulnerability scanner worth using should cover them thoroughly.
This post breaks down how ReconX’s scanner modules map to each OWASP category, where automated OWASP testing works well, and where it falls short. We grouped related categories together because that reflects how these vulnerabilities actually cluster in production applications.
Injection-Class Vulnerabilities (A03)
A03:2021 — Injection dropped from first to third place in the 2021 update, partly because frameworks with parameterized queries have reduced the prevalence of classic SQL injection. But injection as a class — spanning CWE-79 (XSS), CWE-89 (SQLi), CWE-78 (OS command injection), and dozens of others — still accounts for a huge portion of critical findings in penetration tests.
This is where ReconX has the deepest coverage, with five dedicated scanner modules.
SQL Injection Scanner — The SQLi module doesn’t just throw `' OR 1=1--` at parameters and call it a day. It runs four distinct detection strategies in sequence:
- Error-based: Injects malformed syntax and fingerprints the database engine from error messages. Works fast when verbose errors are enabled (more common than you’d think, especially on staging environments that accidentally go public).
- Union-based: Determines column count through `ORDER BY` enumeration, then attempts `UNION SELECT` extraction. The module handles type mismatches by testing NULL casts across column positions.
- Boolean-based blind: Sends pairs of payloads that produce true/false conditions (`AND 1=1` vs `AND 1=2`) and measures response-body diffs. A content-length delta threshold filters out false positives from dynamic page elements.
- Time-based blind: Falls back to `SLEEP()`/`WAITFOR DELAY`/`pg_sleep()` when the application returns identical responses for true and false conditions. Baseline response times are sampled three times before injection to reduce network-jitter false positives.
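The time-based strategy's baseline logic can be sketched roughly like this (a simplified illustration, not ReconX's actual code — the function name and tolerance value are hypothetical):

```python
import statistics

def is_time_based_sqli(baseline_samples, injected_delay,
                       sleep_seconds=5.0, tolerance=0.5):
    """Flag a likely time-based blind SQLi when the injected request takes
    roughly `sleep_seconds` longer than the sampled baseline."""
    # Median of the pre-injection samples is robust to one slow outlier
    baseline = statistics.median(baseline_samples)
    return (injected_delay - baseline) >= (sleep_seconds - tolerance)
```

With three baseline samples around 0.14 s, a 5.02 s response to a `SLEEP(5)` payload clears the threshold, while normal jitter does not.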
The module tests GET and POST parameters, cookie values, and HTTP headers (Referer and X-Forwarded-For are common overlooked injection points). It covers MySQL, PostgreSQL, MSSQL, and Oracle syntax variants.
Here’s what a real finding looks like:
```
CRITICAL: Time-based blind SQL injection in /search
Parameter: q
Payload: ' OR SLEEP(5)--
Response delay: 5.02 seconds (baseline: 0.14s)
Database: MySQL 8.0
CWE: CWE-89
```
Command Injection Scanner — Tests for OS command injection using multiple payload encodings. It tries semicolon chaining, backtick substitution, $(...) subshells, and pipe-based injection. The module also attempts common filter bypasses: URL encoding, double encoding, newline injection, and null byte insertion. When a target uses blocklists instead of allowlists for input validation, this scanner tends to find a way through.
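The payload-plus-encoding approach can be sketched as follows (an illustrative generator, not ReconX's real payload set, which is larger):

```python
from urllib.parse import quote

def command_injection_payloads(cmd: str = "id"):
    """Build command-injection probes with common filter-bypass encodings."""
    base = [f";{cmd}", f"`{cmd}`", f"$({cmd})", f"|{cmd}"]
    url_encoded = [quote(p, safe="") for p in base]            # URL encoding
    double_encoded = [quote(p, safe="") for p in url_encoded]  # double encoding
    newline = [f"%0a{cmd}"]                                    # newline injection
    return base + url_encoded + double_encoded + newline
```

A blocklist that strips a literal `;` often misses `%3Bid`, and one that decodes once still misses the double-encoded `%253Bid`.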
XXE Scanner — XML External Entity injection testing (CWE-611) against any endpoint that accepts XML input. The module tests standard file-read payloads (file:///etc/passwd), parameter entity expansion, and blind XXE via out-of-band HTTP callbacks. This matters because XXE is trivial to exploit but easy to miss if you only test JSON endpoints — and many APIs accept XML even when they document JSON.
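The classic file-read probe looks roughly like this (an illustrative payload and marker check, not ReconX's exact implementation):

```python
# Classic file-read XXE probe: the external entity resolves to /etc/passwd
XXE_FILE_READ = (
    '<?xml version="1.0"?>\n'
    '<!DOCTYPE data [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>\n'
    '<data>&xxe;</data>'
)

def xxe_reflected(response_body: str) -> bool:
    """The root account line is a reliable marker that the entity expanded."""
    return "root:x:0:0" in response_body
```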
SSTI Detection — Server-side template injection is specific to the template engine in use, so the module sends engine-specific probe expressions: {{7*7}} for Jinja2/Twig, ${7*7} for Freemarker, #set($x=7*7)$x for Velocity, and others. When it gets 49 back in the response, it confirms the engine and escalates to code execution payloads.
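The probe-and-confirm loop can be sketched like this (the probes come from the text; the detection helper and its names are illustrative):

```python
# Engine-specific SSTI probes, as described above
SSTI_PROBES = {
    "jinja2/twig": "{{7*7}}",
    "freemarker": "${7*7}",
    "velocity": "#set($x=7*7)$x",
}

def detect_ssti(send):
    """`send` submits a probe and returns the response body. An engine is
    suspected when the evaluated result (49) appears instead of the raw probe."""
    return [engine for engine, probe in SSTI_PROBES.items()
            if "49" in send(probe) and probe not in send(probe)]
```

The `probe not in body` check matters: an application that merely echoes `{{7*7}}` back verbatim is not evaluating templates.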
CRLF Injection — Tests for HTTP response splitting by injecting \r\n sequences into parameters and headers, checking whether the injected content appears in response headers. A successful CRLF injection can lead to cache poisoning, XSS through injected headers, or session fixation.
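A minimal version of that probe-and-check cycle (hypothetical helper names; the marker header is an assumption for illustration):

```python
from urllib.parse import quote

def crlf_probe(value: str, marker: str = "x-crlf-probe") -> str:
    """Append an encoded CRLF plus a marker header to a parameter value."""
    return value + quote(f"\r\n{marker}: 1")

def header_injected(resp_headers: dict, marker: str = "x-crlf-probe") -> bool:
    """If the marker came back as a real response header, splitting worked."""
    return any(k.lower() == marker for k in resp_headers)
```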
Access Control and Authentication (A01, A07)
These two categories are closely related in practice. A01 — Broken Access Control is about whether users can act outside their intended permissions. A07 — Identification and Authentication Failures is about whether the application correctly identifies who a user is in the first place. A failure in either one leads to the same outcome: someone accesses data or functionality they shouldn’t.
A01 jumped from fifth place to first in the 2021 update, and the real-world consequences back that up. The 2023 MOVEit Transfer breach — where the Clop ransomware group exploited an access control flaw (CVE-2023-34362) to steal data from hundreds of organizations including government agencies and Fortune 500 companies — was fundamentally a broken access control issue. An unauthenticated user could reach a SQL injection endpoint that should never have been exposed without authentication.
Between A01 and A07, OWASP references CWE-200, CWE-284, CWE-285, CWE-352, and CWE-306 among others. ReconX covers them with three modules:
IDOR Scanner — Tests for insecure direct object references by manipulating resource identifiers: user IDs, document references, order numbers, file paths. The scanner sends requests with modified identifiers and compares the responses. If changing /api/v2/users/1042/documents to /api/v2/users/1043/documents returns a 200 with different user data, that’s a confirmed IDOR.
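The differential check can be sketched as (a simplified heuristic, assuming a `fetch` callable and a minimal response type — not ReconX's actual interface):

```python
from dataclasses import dataclass

@dataclass
class Response:      # minimal stand-in for an HTTP response
    status: int
    body: str

def idor_suspected(fetch, own_id: int, other_id: int) -> bool:
    """Suspect IDOR when a neighboring identifier returns 200 with
    different content than our own resource."""
    own, other = fetch(own_id), fetch(other_id)
    return own.status == 200 and other.status == 200 and own.body != other.body
```

A 403/404 on the neighboring ID, or identical bodies (shared public content), should not be flagged — which is exactly what the comparison encodes.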
Auth Bypass — Probes authentication mechanisms across both categories. For A01, it tests privilege escalation (can a regular user hit admin endpoints?), forced browsing to unprotected routes, and parameter-based access control bypasses. For A07, it checks session fixation, predictable session tokens, default credentials, brute force resistance, and insecure “remember me” implementations. It also tests password reset flows for predictable tokens.
JWT Analysis — JWT issues deserve their own module because they’re so common and the failure modes are specific. The scanner tests for:
- The `"alg": "none"` bypass (still works against poorly configured libraries)
- Algorithm confusion: accepting HS256 tokens signed with the RS256 public key as the HMAC secret
- Missing `exp` claims and expired token acceptance
- Weak signing keys (dictionary attack against the HMAC secret)
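Forging a token for the `"alg": "none"` check takes only a few lines — there is no cryptography involved, which is why the bypass is so cheap to test (a sketch with hypothetical function names, not ReconX's code):

```python
import base64
import json

def b64url(raw: bytes) -> str:
    """Base64url without padding, as JWT requires."""
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def forge_alg_none(claims: dict) -> str:
    """Build an unsigned token; a vulnerable verifier accepts it despite
    the empty signature segment after the final dot."""
    header = b64url(json.dumps({"alg": "none", "typ": "JWT"}).encode())
    return f"{header}.{b64url(json.dumps(claims).encode())}."
```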
```
CRITICAL: JWT algorithm confusion vulnerability
Server accepts tokens signed with HS256 using the
RS256 public key as the HMAC secret.
Impact: Attacker can forge valid authentication tokens.
CWE: CWE-287
```
API Security — Spans both categories. Tests REST and GraphQL endpoints for missing authorization checks on individual fields and mutations, API key exposure in client-side code, OAuth misconfiguration (open redirects in callback URLs, token leakage), and bearer token handling weaknesses.
Configuration and Infrastructure (A02, A05, A06)
These three categories share a theme: the application code might be fine, but the environment it runs in introduces vulnerabilities. A02 — Cryptographic Failures, A05 — Security Misconfiguration, and A06 — Vulnerable and Outdated Components all come down to how the application is deployed and maintained rather than how it was designed.
A05 in particular is the category with the widest attack surface — it covers everything from default credentials to missing security headers to exposed admin panels — and ReconX dedicates six scanner modules to it.
SSL/TLS Analysis — Evaluates certificate validity, cipher suite strength, and protocol versions. Flags TLS 1.0/1.1 (which should have been disabled years ago but persists on a surprising number of production servers), weak key exchange algorithms, expired or self-signed certificates, and missing certificate chain intermediates. This covers A02’s cryptographic requirements and contributes to A06 by identifying outdated TLS library versions. Related CWEs: CWE-326, CWE-327.
Header Security — Analyzes the full set of security headers. Missing Content-Security-Policy is the most common finding, followed by absent Strict-Transport-Security, X-Content-Type-Options, X-Frame-Options, Referrer-Policy, and Permissions-Policy. The module doesn’t just check presence — it evaluates policy strength. A CSP that includes unsafe-inline and unsafe-eval is barely better than no CSP at all.
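A presence-plus-strength check along those lines might look like this (illustrative only; the token list and function names are assumptions, not ReconX's policy engine):

```python
# Directives that undermine a CSP even when the header is present
WEAK_CSP_TOKENS = ("'unsafe-inline'", "'unsafe-eval'")

def csp_findings(headers: dict):
    """Return findings for a missing or weakened Content-Security-Policy."""
    csp = headers.get("Content-Security-Policy")
    if csp is None:
        return ["missing Content-Security-Policy"]
    return [f"CSP allows {t}" for t in WEAK_CSP_TOKENS if t in csp]
```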
CORS Misconfiguration — Dedicated scanner for cross-origin resource sharing policies. Tests for wildcard origins with credentials (the classic Access-Control-Allow-Origin: * combined with Access-Control-Allow-Credentials: true), origin reflection (server echoes back whatever Origin header it receives), and null origin acceptance. CORS misconfigurations are frequently overlooked because they don’t show up in standard functional testing.
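The three checks reduce to one comparison against the response headers (a simplified sketch; helper name is hypothetical):

```python
def cors_risky(origin: str, headers: dict) -> bool:
    """Flag reflected, wildcard, or null ACAO combined with credentials."""
    acao = headers.get("Access-Control-Allow-Origin", "")
    creds = headers.get("Access-Control-Allow-Credentials", "").lower() == "true"
    return creds and (acao == origin or acao in ("*", "null"))
```

In practice the scanner sends an attacker-controlled `Origin` header and checks whether the server echoes it back alongside the credentials flag.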
```
MEDIUM: Permissive CORS policy detected
Access-Control-Allow-Origin reflects any origin
Access-Control-Allow-Credentials: true
Impact: Attacker-controlled sites can make authenticated
cross-origin requests and read responses.
CWE: CWE-942
```
CMS Detection — Identifies WordPress, Joomla, Drupal, and other CMS platforms, then checks for default configurations, exposed admin panels (/wp-admin, /administrator), and version-specific known vulnerabilities. For A06 specifically, it detects outdated plugin versions against known CVE databases.
Directory Brute — Discovers exposed configuration files (.env, web.config, wp-config.php.bak), backup files, development artifacts (/.git, /debug, /phpinfo.php), and administrative interfaces. These are A05 misconfiguration findings that show up in practically every engagement.
WAF Detection and Tech Fingerprint — Two modules that work together. WAF Detection identifies whether a web application firewall is present and what product it is (Cloudflare, AWS WAF, ModSecurity, etc.), which informs how other scanners adjust their payloads. Tech Fingerprint identifies specific versions of web servers, frameworks, and JavaScript libraries, flagging anything with known CVEs for A06 coverage.
Categories Where Automated Testing Has Limits (A04, A08, A09)
We should be upfront about this: A04 — Insecure Design, A08 — Software and Data Integrity Failures, and A09 — Security Logging and Monitoring Failures are categories where no automated OWASP testing tool — ReconX included — can provide the same depth of coverage as a skilled human reviewer.
A04 (Insecure Design) is about fundamental architectural flaws: an application that was designed without threat modeling, that lacks rate limiting on sensitive operations, or that trusts client-side validation. ReconX’s API Security module can flag missing rate limits and overly permissive API designs, and the Auth Bypass module catches predictable reset tokens. The AI engine also correlates findings across modules to surface patterns — like an app that has IDOR flaws, missing CSRF tokens, and no rate limiting all at once, which suggests a systemic design problem rather than isolated bugs. But a scanner can’t tell you that your password reset flow sends the new password in the email body instead of a one-time link. That’s a design review finding.
A08 (Software and Data Integrity) covers supply chain attacks and CI/CD pipeline security — think SolarWinds. What ReconX can check externally: the Header Security module verifies Subresource Integrity (SRI) attributes on externally loaded scripts and stylesheets, and Tech Fingerprint flags CDN-loaded resources without integrity hashes. What it can’t check: whether your build pipeline verifies dependency signatures, or whether your auto-update mechanism validates packages. Those need internal tooling.
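An external SRI check of that kind can be approximated with a scan of the rendered HTML (a rough regex-based sketch under the assumption that script tags are well-formed; a real implementation would use an HTML parser):

```python
import re

# Opening <script> tags with an absolute src URL
SCRIPT_TAG = re.compile(r"<script\b[^>]*\bsrc=[\"'][^\"']+[\"'][^>]*>", re.I)

def scripts_missing_sri(html: str):
    """Return external script tags that lack an integrity attribute."""
    return [tag for tag in SCRIPT_TAG.findall(html)
            if "integrity=" not in tag.lower()]
```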
A09 (Logging and Monitoring) is inherently difficult to assess from the outside. ReconX’s AI engine looks for indirect signals — verbose error messages that suggest exceptions aren’t being caught and logged properly, inconsistent error response formats that hint at missing centralized error handling — but there’s no way for an external scanner to confirm whether failed login attempts are being logged or whether alerts fire on suspicious activity. This category needs internal audit.
Server-Side Request Forgery — A10
A10:2021 — SSRF was new to the OWASP Top 10 in 2021, and its inclusion reflects how cloud-native architectures have changed the threat landscape. SSRF was always dangerous, but when your application runs on AWS, GCP, or Azure, a successful SSRF means the attacker can potentially reach the instance metadata endpoint at http://169.254.169.254/ — and from there, extract IAM credentials, API tokens, and configuration data. The 2019 Capital One breach was an SSRF-to-metadata-endpoint chain that exposed over 100 million customer records. Related CWE: CWE-918.
SSRF Scanner — Dedicated module that tests URL parameters, file upload functionality, webhook configurations, and PDF generators (a common overlooked vector because PDF rendering libraries often fetch external resources). The scanner tests for direct SSRF (response content returned to the user) and blind SSRF (out-of-band detection via callback to a ReconX-controlled endpoint). It specifically tests for access to:
- Cloud metadata endpoints (`169.254.169.254`, `metadata.google.internal`)
- Internal network ranges (`10.x.x.x`, `172.16.x.x`, `192.168.x.x`)
- Localhost services (`127.0.0.1` with common ports like 6379 for Redis, 9200 for Elasticsearch)
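Classifying whether a candidate URL points at one of those sensitive targets is straightforward with the standard library (an illustrative helper; the hostname set is an assumption):

```python
import ipaddress
from urllib.parse import urlparse

# Non-IP hostnames that resolve to sensitive internal services
SENSITIVE_HOSTS = {"metadata.google.internal", "localhost"}

def is_ssrf_sensitive(url: str) -> bool:
    """True when the URL targets cloud metadata, private ranges, or loopback."""
    host = urlparse(url).hostname or ""
    if host in SENSITIVE_HOSTS:
        return True
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        return False                       # not an IP literal
    # 169.254.169.254 is link-local; RFC 1918 ranges are private
    return ip.is_private or ip.is_loopback or ip.is_link_local
```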
Open Redirect Scanner — Included here because open redirects are frequently chained with SSRF. If a server-side component follows a redirect (which many HTTP libraries do by default), an attacker can bypass SSRF allowlists by redirecting through a trusted domain.
```
HIGH: Blind SSRF in /api/webhooks/test
Parameter: callback_url
Internal service accessible: http://169.254.169.254/
Cloud metadata endpoint reachable from application.
Impact: AWS credentials and instance metadata exposure.
CWE: CWE-918
```
Beyond the Top 10
The OWASP Top 10 is a starting point, not a finish line. ReconX includes scanners for vulnerability classes outside the Top 10: clickjacking, WebSocket security issues, DNS misconfigurations, subdomain takeover, and XSS (which OWASP folded into A03 but is common enough to warrant dedicated testing). The AI engine correlates findings across all modules — an SSRF combined with an open redirect combined with cloud metadata access is a different severity than any of those findings alone.
Coverage Summary
| OWASP Category | ReconX Scanner(s) | Detection Approach |
|---|---|---|
| A01 — Broken Access Control | IDOR, Auth Bypass, API Security | Parameter manipulation, privilege escalation testing, authorization checks on endpoints |
| A02 — Cryptographic Failures | SSL/TLS Analysis, Header Security | Protocol/cipher enumeration, transport security header validation |
| A03 — Injection | SQL Injection, Command Injection, XXE, SSTI, CRLF | Multi-strategy payload injection with database-specific and engine-specific variants |
| A04 — Insecure Design | API Security, Auth Bypass, AI Analysis | Rate limit detection, design pattern correlation (partial coverage) |
| A05 — Security Misconfiguration | Header Security, CORS, CMS Detection, Directory Brute, WAF Detection, Tech Fingerprint | Header analysis, policy evaluation, exposed file/path discovery |
| A06 — Vulnerable Components | Tech Fingerprint, CMS Detection, SSL/TLS Analysis | Version fingerprinting against CVE databases |
| A07 — Auth Failures | Auth Bypass, JWT Analysis, API Security | Session testing, JWT cryptographic analysis, OAuth flow validation |
| A08 — Integrity Failures | Header Security, Tech Fingerprint | SRI verification, CDN resource integrity checks (limited external coverage) |
| A09 — Logging Failures | AI Analysis | Indirect signal analysis from error handling patterns (limited external coverage) |
| A10 — SSRF | SSRF Scanner, Open Redirect | URL parameter injection, blind out-of-band detection, metadata endpoint probing |
For a deeper look at how the AI engine powers cross-module analysis, read AI in Penetration Testing: Beyond Automated Summaries. To see how ReconX’s OWASP Top 10 coverage compares to Burp Suite, Nuclei, and ZAP, check out our head-to-head comparison.