What We Scan
AccessiGuard runs 22 automated accessibility checks against WCAG 2.1 Level AA guidelines. Here's exactly what we test — and what automated tools can't catch.
⚠️ Important: No automated tool catches everything
Automated scanning typically catches 30-40% of WCAG issues. The remaining 60-70% require manual testing, including keyboard navigation, screen reader testing, cognitive evaluation, and real user testing. Our scans are a strong starting point — not certification. We recommend combining automated scans with periodic manual audits for full compliance.
Checks by severity: 3 Critical · 10 Serious · 8 Moderate · 1 Minor
- **Image alt text:** Verifies all images have descriptive `alt` attributes so screen reader users understand image content.
- **Form field labels:** Ensures every form field (`input`, `select`, `textarea`) has an associated label via `for`, `aria-label`, or a wrapping `<label>`.
- **Page language:** Checks that the `<html>` element has a `lang` attribute so screen readers use the correct pronunciation.
- **Empty links:** Detects links with no discernible text (no text content, `aria-label`, or images with alt text).
- **Button names:** Ensures all buttons have accessible names through text content or `aria-label`.
- **Heading order:** Validates that heading levels aren't skipped (h1→h3 without h2), which confuses screen reader navigation.
- **Main landmark:** Checks for a `<main>` element or `role="main"` so screen reader users can skip directly to content.
- **Duplicate IDs:** Finds duplicate `id` attributes that break ARIA references and form label associations.
- **Page title:** Verifies the page has a `<title>` element for browser tabs and screen reader announcements.
- **Table headers:** Checks that data tables have `<th>` elements so screen readers can announce column and row context.
- **Valid ARIA roles:** Validates that all `role` attributes use valid ARIA role values from the specification.
- **Positive tabindex:** Flags positive `tabindex` values that disrupt the natural keyboard navigation order.
- **Zoom restrictions:** Detects `user-scalable=no` or `maximum-scale=1` in the viewport meta tag, which prevents zooming.
- **Skip link:** Checks for a 'skip to content' link so keyboard users can bypass repetitive navigation.
- **Iframe titles:** Ensures all `<iframe>` elements have a descriptive `title` attribute explaining their content.
- **Generic link text:** Flags link text like 'click here', 'read more', or 'here' that lacks context.
- **Single h1:** Checks for exactly one `<h1>` per page; flags missing or multiple h1 elements.
- **Orphaned list items:** Detects `<li>` elements outside `<ul>` or `<ol>` containers, which breaks semantic structure.
- **Definition list structure:** Validates that `<dl>` elements contain only `<dt>`, `<dd>`, or `<div>` children.
- **Autocomplete attributes:** Checks that common fields (email, phone, name) have `autocomplete` attributes for assistive technology.
- **ARIA reference integrity:** Validates that `aria-labelledby` and `aria-describedby` reference existing element IDs.
- **Required field indication:** Ensures required fields have `aria-required="true"` or visual indicators for assistive technology.
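Most of these checks come down to static HTML analysis. As a rough sketch of the idea (not AccessiGuard's actual implementation; the `audit` helper and its messages are illustrative), here are two of the checks above built on Python's standard-library `html.parser`: duplicate `id` detection and untitled `<iframe>` detection.

```python
from html.parser import HTMLParser

class SimpleAudit(HTMLParser):
    """Toy static checker: duplicate ids and untitled iframes."""

    def __init__(self):
        super().__init__()
        self.seen_ids = set()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # Duplicate `id` attributes break ARIA references and label associations.
        el_id = attrs.get("id")
        if el_id is not None:
            if el_id in self.seen_ids:
                self.issues.append(f"duplicate id: {el_id!r}")
            self.seen_ids.add(el_id)
        # Every <iframe> needs a descriptive, non-empty title attribute.
        if tag == "iframe" and not (attrs.get("title") or "").strip():
            self.issues.append("iframe missing title")

def audit(html: str) -> list[str]:
    checker = SimpleAudit()
    checker.feed(html)
    return checker.issues

print(audit('<div id="a"></div><span id="a"></span><iframe src="/x"></iframe>'))
# → ["duplicate id: 'a'", 'iframe missing title']
```

A production scanner would also track source positions for reporting and handle malformed markup, but the core of a static check is usually this small.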
What We Can't Catch
These areas require manual testing or browser-based analysis. We're working on expanding coverage, but being honest about limitations is part of our commitment to genuine accessibility.
- **Color contrast:** Requires visual rendering and computed-style analysis. We plan to add this with browser-based scanning.
- **Keyboard navigation:** Requires interactive testing; it can't be verified from static HTML alone.
- **Focus management:** Focus trapping, focus order in dynamic content, and modal focus management need runtime testing.
- **Media captions:** Verifying that captions exist and are accurate requires media analysis beyond HTML parsing.
- **Dynamic content:** JavaScript-rendered content, ARIA live regions, and route changes need browser-based scanning.
- **Target size:** WCAG 2.5.5 requires computed layout analysis, not just HTML inspection.
- **Cognitive accessibility:** Content readability and cognitive load are subjective and require human evaluation.
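Color contrast is a good example of this split: finding the *actual* foreground and background colors requires a rendered page, but once you have two colors, the WCAG 2.1 ratio itself is a short formula. A minimal sketch (function names are ours; hex parsing and computed-style extraction are omitted):

```python
def _linearize(channel: int) -> float:
    # sRGB channel (0-255) to linear light, per the WCAG relative-luminance definition.
    c = channel / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    r, g, b = (_linearize(ch) for ch in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    # Ratio is (L1 + 0.05) / (L2 + 0.05), with the lighter luminance on top.
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black on white is the maximum possible ratio, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 2))  # → 21.0
```

For reference, WCAG 2.1 Level AA requires at least 4.5:1 for normal text and 3:1 for large text.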
🚀 On the Roadmap
- Color contrast checking: automated WCAG 2.1 contrast-ratio analysis
- Browser-based scanning: catch issues in JavaScript-rendered content
- Weekly monitoring: get alerted when new issues appear