Thursday, February 19, 2026

AI Prompt - Usability testing

AI Prompt

"Provide accessibility/usability test cases for [page/flow]. Include keyboard-only navigation, screen reader announcements, focus management, color contrast, error messaging clarity, and timeouts."

Applying Critical Thinking

·         Diverse users: Test as a keyboard-only user and as a screen reader user; avoid the “we use a mouse, so it’s fine” mindset.

·         Focus and order: Focus order should match visual order and task flow; focus must not be lost in modals, dropdowns, or dynamic content.

·         Meaning, not just presence: ARIA and semantics must convey meaning (e.g., “button”, “alert”, “current step”), not just visible labels.

·         Errors and timeouts: Messages must be clear, associated with fields, and not rely on color alone; timeouts should warn and allow extension where possible.
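To make “associated with fields” concrete: assistive technology builds a field’s accessible description from the ids listed in its aria-describedby attribute. Below is a minimal, hypothetical model of that resolution step, using plain objects instead of a real DOM (the element ids and messages are invented; real browsers follow the full accessible-name computation spec):

```typescript
// Minimal model of aria-describedby resolution: the attribute holds a
// space-separated list of element ids, and the accessible description
// is the concatenated text of those elements, in order.

interface FakeElement {
  id: string;
  text: string;
}

function accessibleDescription(
  describedBy: string | null,
  byId: Map<string, FakeElement>
): string {
  if (!describedBy) return "";
  return describedBy
    .split(/\s+/)
    .filter(Boolean)
    .map((id) => byId.get(id)?.text ?? "") // missing ids contribute nothing
    .filter(Boolean)
    .join(" ");
}

// A field whose hint and error message are both referenced:
const nodes = new Map<string, FakeElement>([
  ["email-hint", { id: "email-hint", text: "We never share your email." }],
  ["email-error", { id: "email-error", text: "Error: enter a valid email address." }],
]);

console.log(accessibleDescription("email-hint email-error", nodes));
// -> "We never share your email. Error: enter a valid email address."
```

A test case that only checks the error text is visible can pass while this association is broken, which is why screen reader announcement of errors deserves its own check.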

Generate Test Cases for Each Feature

·         Keyboard-only: Tab through all interactive elements with no keyboard traps; Enter/Space activates buttons and links; Escape closes modals; arrow keys work in menus/listboxes; the skip link works.

·         Screen reader: Landmarks and headings; button/link names and roles; form labels and errors; live regions for dynamic updates; table headers and scope; no redundant announcements.

·         Focus management: Focus moves to modal when opened and returns on close; focus visible (outline/ring); focus not lost after AJAX or route change; first focusable in view on load.

·         Color contrast: Text and UI components meet the required contrast ratios (e.g., 4.5:1 for normal text, 3:1 for large text and UI components); focus indicators are visible; don’t rely on color alone to convey required/error/state.

·         Error messaging: Errors are announced (live region or aria-describedby); message text clear and actionable; associated with field; success/error distinguishable without color only.

·         Timeouts: Session timeouts warn before expiry and offer an option to extend; long operations announce progress or status; no silent failures.

·         Usability: Labels and instructions are clear; destructive actions require confirmation; patterns are consistent (e.g., the submit button is always in the same place).
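The contrast thresholds above can be checked programmatically rather than by eye. A sketch of the WCAG 2.1 relative-luminance and contrast-ratio formulas in TypeScript, with colors as 0–255 RGB triples (the sample colors are arbitrary examples):

```typescript
// WCAG 2.1 contrast ratio between two sRGB colors.
// Thresholds: 4.5:1 for normal text (AA), 3:1 for large text / UI components.

type RGB = [number, number, number];

// Linearize one sRGB channel (WCAG relative-luminance definition).
function channelLuminance(c: number): number {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

function relativeLuminance([r, g, b]: RGB): number {
  return (
    0.2126 * channelLuminance(r) +
    0.7152 * channelLuminance(g) +
    0.0722 * channelLuminance(b)
  );
}

function contrastRatio(a: RGB, b: RGB): number {
  const [hi, lo] = [relativeLuminance(a), relativeLuminance(b)].sort(
    (x, y) => y - x // lighter luminance goes on top of the ratio
  );
  return (hi + 0.05) / (lo + 0.05);
}

console.log(contrastRatio([0, 0, 0], [255, 255, 255])); // 21:1, the maximum
console.log(contrastRatio([119, 119, 119], [255, 255, 255]).toFixed(2));
// #777 on white: ~4.48, just under the 4.5:1 AA threshold for normal text
```

Borderline grays like #777 on white are a useful regression fixture: they look acceptable but fail AA, so they catch testers who judge contrast visually.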

Questions on Ambiguities

·         What level are we targeting (WCAG 2.1 A, AA, AAA) and for which pages/flows?

·         Which screen readers and browsers are in scope (e.g. NVDA + Firefox, VoiceOver + Safari, JAWS)?

·         How should session timeout behave: warning at N minutes, extend button, and what happens to in-progress form data?

·         Are error messages written in plain language and reviewed by support/copy?

·         Do we support reduced motion and prefers-color-scheme (dark/light), and are they part of this test set?

·         Who is responsible for remediation (dev vs design) when contrast or focus order fails?
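The session-timeout question above can be prototyped independently of any UI, which makes the expected behavior easier to pin down before testing it. A minimal bookkeeping sketch, assuming made-up values (a 20-minute session with a 2-minute warning lead):

```typescript
// Sketch of session-timeout bookkeeping: warn the user a fixed lead time
// before expiry, and let "extend" restart the full session window.
// Times are plain millisecond timestamps; the durations are examples only.

class SessionTimer {
  private expiresAt: number;

  constructor(
    now: number,
    private readonly sessionMs: number,
    private readonly warnLeadMs: number
  ) {
    this.expiresAt = now + sessionMs;
  }

  /** True once we are inside the warning window but not yet expired. */
  shouldWarn(now: number): boolean {
    return now >= this.expiresAt - this.warnLeadMs && now < this.expiresAt;
  }

  isExpired(now: number): boolean {
    return now >= this.expiresAt;
  }

  /** User chose "stay signed in": restart the full session window. */
  extend(now: number): void {
    this.expiresAt = now + this.sessionMs;
  }
}

const MIN = 60_000;
const t = new SessionTimer(0, 20 * MIN, 2 * MIN);
console.log(t.shouldWarn(17 * MIN)); // false -- outside the warning window
console.log(t.shouldWarn(19 * MIN)); // true  -- warning should be visible
t.extend(19 * MIN);
console.log(t.shouldWarn(19 * MIN)); // false -- extension reset the clock
```

Test cases fall straight out of the boundaries: just before the warning window, inside it, at expiry, and immediately after an extension.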

Areas Where Test Ideas Might Be Missed

·         Dynamic content (injected lists, infinite scroll, SPA route changes): verify focus and announcements after content loads.

·         Third-party widgets (chat, video, payment iframes): keyboard access and screen reader support inside the iframe.

·         CAPTCHA / auth challenges: provide an alternative (e.g., audio CAPTCHA) or an exemption path for assistive technology users.

·         Complex widgets (custom combo boxes, date pickers, tree views): full keyboard support and the correct ARIA pattern (e.g., roving tabindex).

·         Mobile screen readers (VoiceOver on iOS, TalkBack on Android): gestures and focus behavior differ from desktop.

·         RTL and localization: focus order in RTL layouts; translated labels and errors; font size scaling.

·         Timeout during data entry: the user is typing in a form when the session expires mid-field; ensure data loss is communicated and a recovery path exists.
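The roving-tabindex pattern mentioned above can be sketched without a DOM: exactly one item keeps tabindex="0" while the rest get -1, and arrow keys move that “roving” tab stop. A minimal model for a vertical widget (menu or listbox), assuming only ArrowDown/ArrowUp/Home/End keys; this is an illustration, not the full ARIA Authoring Practices pattern:

```typescript
// Roving-tabindex sketch: the widget is modeled as just an item count
// plus the index of the currently active (tab-reachable) item.

function nextActiveIndex(
  active: number,
  count: number,
  key: "ArrowDown" | "ArrowUp" | "Home" | "End"
): number {
  switch (key) {
    case "ArrowDown":
      return (active + 1) % count; // wrap from last item back to first
    case "ArrowUp":
      return (active - 1 + count) % count; // wrap from first item to last
    case "Home":
      return 0;
    case "End":
      return count - 1;
  }
}

/** tabindex values: only the active item participates in the tab order. */
function tabIndexes(active: number, count: number): number[] {
  return Array.from({ length: count }, (_, i) => (i === active ? 0 : -1));
}

let active = 0;
active = nextActiveIndex(active, 3, "ArrowUp"); // wraps to the last item
console.log(active); // 2
console.log(tabIndexes(active, 3)); // [-1, -1, 0]
```

The wrap-around cases (ArrowUp on the first item, ArrowDown on the last) are exactly the keyboard edge cases that custom widgets most often get wrong, so they belong in the test set explicitly.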

Output Template

Context: [system/feature under test, dependencies, environment]

Assumptions: [e.g., auth method, data availability, feature flags]

Test Types: [usability, accessibility, UI]

Test Cases:

ID: [TC-001]

Type: [usability/accessibility]

Title: [short name]

Preconditions/Setup: [data, env, mocks, flags]

Steps: [ordered steps or request details]

Variations: [inputs/edges/negative cases]

Expected Results: [responses/UI states/metrics]

Cleanup: [teardown/reset]

Coverage notes: [gaps, out-of-scope items, risk areas]

Non-functionals: [perf targets, security considerations, accessibility notes]

Data/fixtures: [test users, payloads, seeds]

Environments: [dev/stage/prod-parity requirements]

Ambiguity Questions:

- [Question 1 about unclear behavior]

- [Question 2 about edge case]

Potential Missed Ideas:

- [Suspicious area where tests might still be thin]

AI in Software Testing: How Artificial Intelligence Is Transforming QA

For years, software testing has lived under pressure: more features, faster releases, fewer bugs, smaller teams. Traditional QA has done her...