Friday, February 13, 2026

AI Prompt - Compatibility Testing / Mobile Testing

AI Prompt

"Propose compatibility test cases for [feature] across [browsers/devices/OS versions]. Include viewport/resolution, touch vs mouse, offline/poor network, and platform-specific quirks."

Applying Critical Thinking

·         Define the support matrix first: Which browser/device/OS combinations are “must work” vs “best effort”? Document and prioritize (a parametrization sketch follows this list).

·         Identify platform-specific behavior: Safe area insets, viewport units, touch events, back/forward cache, autoplay policies, file upload on mobile.

·         Touch vs mouse: Touch has no hover; multi-touch and gestures differ; click delay and scroll behavior vary.

·         Network as a variable: Offline, slow 3G, and flaky WiFi change loading, caching, and error handling; test these explicitly.
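
The support-matrix idea above can be captured as plain data and used to drive parametrized checks. A minimal sketch with pytest; the combinations, tiers, and the placeholder test body are illustrative assumptions, not an agreed matrix.

# Sketch: support matrix as data driving parametrized tests (pytest).
# The combinations and tiers below are placeholders, not an agreed matrix.
import pytest

SUPPORT_MATRIX = [
    # (browser, viewport width, OS, tier)
    ("chromium", 1440, "Windows 11", "must-work"),
    ("webkit", 390, "iOS 17", "must-work"),
    ("firefox", 1024, "Ubuntu 22.04", "best-effort"),
]

@pytest.mark.parametrize("browser,width,os_name,tier", SUPPORT_MATRIX)
def test_smoke_flow(browser, width, os_name, tier):
    # Placeholder body: replace with a real smoke flow per combination.
    # "best-effort" rows could be marked xfail rather than failing the build.
    assert tier in ("must-work", "best-effort")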

Generate Test Cases for Each Feature

·         Viewport/resolution: Key breakpoints (e.g. 320, 768, 1024, 1440); orientation change; foldables; notch/safe area; zoom 100% vs 200%.

·         Touch vs mouse: Tap = click; no hover states on touch; swipe/carousel; long-press; pinch-zoom; double-tap zoom; scroll momentum.

·         Offline/poor network: app loads then goes offline (cached content, error message, retry); slow 3G (timeouts, loading states, partial content). See the sketch after this list.

·         Browser-specific: Safari: date input, flexbox gaps, position:fixed; Chrome: autofill, download behavior; Firefox: scrollbar width; Edge: PDF/legacy.

·         OS-specific: iOS: keyboard resizing, rubber-band scroll, input zoom; Android: back button, share sheet, keyboard variants; system dark/light mode.

·         Forms and inputs: Date/time pickers, file upload (camera vs gallery), autocomplete, virtual keyboard overlap, paste behavior.

·         Media and playback: Video autoplay policies, fullscreen API, picture-in-picture; audio focus; codec support.
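
Two of the ideas above (viewport/touch emulation and offline behaviour) sketched with Playwright for Python. The URL, selectors, and expected copy are assumptions, and the offline check assumes the app serves an offline fallback via a service worker; without one, reload() raises a navigation error.

# Sketch: device emulation + offline behaviour with Playwright (Python).
# URL, selectors and expected messages are assumptions for illustration.
from playwright.sync_api import sync_playwright, expect

APP_URL = "https://example.test/app"  # hypothetical app under test

with sync_playwright() as p:
    browser = p.webkit.launch()

    # Viewport/touch: emulate a small touch device at a key breakpoint.
    iphone = p.devices["iPhone 13"]
    context = browser.new_context(**iphone)
    page = context.new_page()
    page.goto(APP_URL)
    expect(page.get_by_role("navigation")).to_be_visible()  # layout survives ~390px width

    # Offline: load once, go offline, then check cached content / error + retry.
    # Assumes a service worker serves an offline fallback page.
    context.set_offline(True)
    page.reload()
    expect(page.get_by_text("You appear to be offline")).to_be_visible()  # assumed copy
    context.set_offline(False)

    browser.close()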

Questions on Ambiguities

·         What is the official support matrix (browsers, versions, devices)? Is it documented and agreed with product?

·         How should the app behave when offline (full PWA, read-only cache, or “please connect” only)?

·         Are touch and mouse both primary on hybrid devices (e.g. Surface), or is one secondary?

·         What minimum viewport is supported (e.g. 320px width), and is there a maximum (e.g. ultra-wide)?

·         Are old OS versions (e.g. iOS 14) out of scope, and how are users informed?

·         Do we support reduced motion or high contrast at the compatibility level, or is that accessibility-only?

Areas Where Test Ideas Might Be Missed

·         Back/forward cache (bfcache): pages restored from cache can show stale state or fire unexpected events; test navigate away and back.

·         Third-party cookies / ITP: Safari and others restrict cookies; login or tracking may behave differently.

·         Fonts and assets: system vs web fonts; missing or slow CDN; different font rendering (e.g. Safari vs Chrome).

·         Copy/paste and share: Clipboard API, share target; behavior differs on mobile.

·         Deep links and app links: opening app from link; fallback to browser when app not installed.

·         Updates mid-session: user upgrades OS or browser during a long session; optional test for critical flows.

·         Multiple tabs/windows: same app in two tabs; sync, locking, or messaging between tabs.
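
The multi-tab idea lends itself to a direct check: open the same app in two pages of one browser context (shared cookies and storage) and verify that state stays consistent. A sketch under the assumption of a hypothetical cart badge; selectors are illustrative.

# Sketch: same app in two tabs of one context (shared cookies/storage).
# The cart flow and selectors are assumptions for illustration.
from playwright.sync_api import sync_playwright, expect

APP_URL = "https://example.test/app"  # hypothetical app under test

with sync_playwright() as p:
    browser = p.chromium.launch()
    context = browser.new_context()

    tab_a = context.new_page()
    tab_b = context.new_page()
    tab_a.goto(APP_URL)
    tab_b.goto(APP_URL)

    # Change state in tab A...
    tab_a.get_by_role("button", name="Add to cart").first.click()

    # ...then check tab B after a reload (or without one, if the app syncs tabs
    # live via BroadcastChannel or storage events).
    tab_b.reload()
    expect(tab_b.get_by_test_id("cart-count")).to_have_text("1")

    browser.close()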

Output Template  

Context: [system/feature under test, dependencies, environment]

Assumptions: [e.g., auth method, data availability, feature flags]

Test Types: [compatibility, cross-browser, mobile]

Test Cases:

ID: [TC-001]

Type: [compatibility]

Title: [short name]

Preconditions/Setup: [data, env, mocks, flags]

Steps: [ordered steps or request details]

Variations: [inputs/edges/negative cases]

Expected Results: [responses/UI states/metrics]

Cleanup: [teardown/reset]

Coverage notes: [gaps, out-of-scope items, risk areas]

Non-functionals: [perf targets, security considerations, accessibility notes]

Data/fixtures: [test users, payloads, seeds]

Environments: [dev/stage/prod-parity requirements]

Ambiguity Questions:

- [Question 1 about unclear behavior]

- [Question 2 about edge case]

Potential Missed Ideas:

- [Suspicious area where tests might still be thin]

AI Prompt - Security Testing

AI Prompt

"List security test cases for [app/endpoint]. Cover authZ/authN, input validation, common vulns (XSS, SQLi, SSRF, CSRF, IDOR), transport security, logging, and misconfig checks. Include both positive and negative cases."

Applying Critical Thinking

·         Define the trust boundary: What is inside vs outside the system? Every entry point (HTTP, WebSocket, file upload, webhook) is an attack surface.

·         Assume breach for dependencies: Treat third-party libs, APIs, and configs as potentially compromised; test that failures don’t escalate.

·         Positive vs negative: Positive cases prove legitimate users get access and valid inputs succeed; negative cases prove attackers or invalid inputs are rejected or sanitized (see the parametrized sketch after this list).

·         Chain thinking: One vuln (e.g. IDOR) can lead to another (e.g. data exfil); consider multi-step attack scenarios.
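
The positive-vs-negative split often fits into one parametrized test. A minimal sketch using pytest and requests; the endpoint, token placeholders, and expected status codes are assumptions about the API contract.

# Sketch: positive and negative authN cases as one parametrized test.
# Endpoint and tokens are placeholders; expected codes depend on the API contract.
import pytest
import requests

API = "https://api.example.test"  # hypothetical API under test

CASES = [
    ("valid token", {"Authorization": "Bearer <valid>"}, 200),       # positive
    ("missing token", {}, 401),                                      # negative
    ("expired token", {"Authorization": "Bearer <expired>"}, 401),   # negative
    ("malformed token", {"Authorization": "Bearer not-a-jwt"}, 401), # negative
]

@pytest.mark.parametrize("name,headers,expected", CASES)
def test_authn(name, headers, expected):
    resp = requests.get(f"{API}/orders", headers=headers, timeout=10)
    assert resp.status_code == expected, f"{name}: got {resp.status_code}"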

Generate Test Cases for Each Feature

·         AuthN: Valid login, token refresh, MFA success, session validity. Negative: invalid creds, expired/missing/revoked token, token reuse after logout.

·         AuthZ: Role X can access resource A; tenant isolation. Negative: role Y cannot access A; cross-tenant access; privilege escalation.

·         Input validation: Valid payloads accepted. Negative: oversized input, special chars, null/type confusion, boundary values.

·         XSS: Sanitized output in HTML/JS context. Negative: reflected, stored, and DOM-based XSS via user-controlled input.

·         SQLi: Parameterized queries with normal input. Negative: union-based, error-based, blind SQLi payloads.

·         SSRF: Allowed URLs only. Negative: internal IPs, cloud metadata, file://, redirects to internal.

·         CSRF: Request with valid origin/cookie + CSRF token. Negative: request without token, wrong origin, SameSite cookie behavior.

·         IDOR: User A can only access A’s resources. Negative: user A accesses B’s resource by changing the ID in the path/body (see the sketch after this list).

·         Transport: HTTPS only, HSTS, secure cookies. Negative: HTTP downgrade, mixed content, cookie flags.

·         Logging: No PII/secrets in logs; errors don’t leak stack traces to the client. Negative: logs contain passwords, tokens, full card numbers; verbose errors shown to the user.

·         Misconfig: Secure defaults, no debug in prod. Negative: default creds, debug endpoints exposed, permissive CORS.
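
The IDOR and tenant-isolation cases above can be automated with two real users and a swapped resource ID. A sketch with requests; the URLs, IDs, tokens, and field name are assumptions.

# Sketch: IDOR / tenant isolation check with two real users.
# URLs, resource IDs, tokens and the leaked-field check are assumptions.
import requests

API = "https://api.example.test"     # hypothetical API under test
TOKEN_USER_A = "<token for user A>"  # placeholder credentials
TOKEN_USER_B = "<token for user B>"

def get_order(order_id, token):
    return requests.get(
        f"{API}/orders/{order_id}",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )

# Positive: each user reads their own order.
assert get_order("A-1001", TOKEN_USER_A).status_code == 200
assert get_order("B-2002", TOKEN_USER_B).status_code == 200

# Negative (IDOR): user A swaps in user B's order ID and must be refused.
resp = get_order("B-2002", TOKEN_USER_A)
assert resp.status_code in (403, 404), f"possible IDOR: got {resp.status_code}"
assert "total_amount" not in resp.text  # assumed field name; body must not leak order data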

Questions on Ambiguities

·         What roles/tenants exist, and what is the exact access matrix (who can read/write what)?

·         Are error messages intentionally generic for unauthenticated/unauthorized, or is some detail allowed for debugging?

·         What input length/type limits are enforced (and where: gateway, app, DB)?

·         Is CSRF protection applied to all state-changing operations, and is it token-based or SameSite-only?

·         Which headers (e.g. X-Forwarded-For, Host) are trusted for routing or logging, and could they be spoofed?

·         Are file uploads restricted by type/size and scanned; where are they stored and how are they served?

·         What secrets (API keys, DB URLs) are in config/env, and are they ever logged or exposed in errors?

Areas Where Test Ideas Might Be Missed

·         Business logic abuse: rate limits, coupon reuse, price manipulation, workflow bypass (e.g. skipping the payment step).

·         Second-order injection: data stored from one request and executed in another context (e.g. stored XSS, stored SQLi in report generation).

·         Subdomain/redirect issues: open redirects, subdomain takeover, overly broad cookie scope.

·         Dependency vulns: outdated libs with known CVEs; tests don’t cover “what if this dependency is malicious”.

·         Auth bypass via alternate paths: GraphQL introspection, internal APIs, webhooks without signature verification.

·         Time-based and conditional attacks: timing side channels, race conditions on balance/eligibility checks (see the race-condition sketch after this list).

·         Client-side only checks: relying on front-end validation without server-side enforcement.
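
Race conditions on business rules (e.g. a single-use coupon) can be probed by firing concurrent requests and counting successes. A sketch; the endpoint, payload, and the "at most one success" expectation are assumptions.

# Sketch: race condition on single-use coupon redemption (business logic + timing).
# Endpoint, payload and the "only one success" expectation are assumptions.
from concurrent.futures import ThreadPoolExecutor
import requests

API = "https://api.example.test"  # hypothetical API under test
TOKEN = "<valid user token>"      # placeholder

def redeem(_):
    return requests.post(
        f"{API}/coupons/redeem",
        json={"code": "WELCOME10"},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    ).status_code

# Fire 10 redemptions at (nearly) the same time.
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(redeem, range(10)))

# If the check-then-update is not atomic, more than one request may succeed.
assert results.count(200) <= 1, f"coupon redeemed {results.count(200)} times: {results}"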

Output Template

Context: [app/endpoint under test, entry points, dependencies, environment]

Assumptions: [auth method (JWT/OAuth/session), tenant model, WAF/gateway in front]

Test Types: [security, authN/authZ]

Test Cases:

ID: [TC-SEC-001]

Type: [security]

Title: [short name]

Preconditions/Setup: [data, env, mocks, flags]

Steps: [ordered steps or request details]

Variations: [inputs/edges/negative cases]

Expected Results: [responses/UI states/metrics]

Cleanup: [teardown/reset]

Coverage notes: [gaps, out-of-scope items, risk areas]

Non-functionals: [perf targets, security considerations, accessibility notes]

Data/fixtures: [test users, payloads, seeds]

Environments: [dev/stage/prod-parity requirements]

Ambiguity Questions:

- [Question 1 about unclear behavior]

- [Question 2 about edge case]

Potential Missed Ideas:

- [Suspicious area where tests might still be thin]

AI Prompt - Performance or Load Testing

AI Prompt

"Draft performance and load test scenarios for [service/endpoint]. Specify SLAs/SLOs, target RPS, concurrency patterns, payload sizes, ramp-up, soak, spike, and metrics to collect (latency percentiles, errors, saturation)."

Applying Critical Thinking

·         Clarify performance requirements: SLAs (e.g., p95 < 300 ms) and SLOs (e.g., 99.9% of requests under 500 ms); expected peak vs normal traffic and growth projections (see the percentile check sketched below).
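
To make such an SLO executable, percentiles can be computed from the latencies the load tool exports and asserted directly. A minimal sketch using the nearest-rank method; the sample values are made up.

# Sketch: turning "p95 < 300 ms" into an assertion over measured latencies.
# The latencies list stands in for the load tool's results export.
def percentile(samples_ms, pct):
    """Nearest-rank percentile: smallest value covering at least pct% of samples."""
    ordered = sorted(samples_ms)
    rank = (pct * len(ordered) + 99) // 100  # integer ceiling of pct*n/100
    return ordered[rank - 1]

latencies_ms = [112, 118, 125, 131, 140, 148, 156, 163, 171, 184,
                196, 203, 215, 228, 241, 256, 262, 278, 291, 612]  # sample data

p95 = percentile(latencies_ms, 95)
assert p95 < 300, f"p95 was {p95} ms, SLO is < 300 ms"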

Generate Performance Scenarios

·         Baseline: light load to validate harness and get reference metrics.

·         Load: sustained target RPS with realistic traffic mix.

·         Stress/spike: sudden surges above normal to find breaking points.

·         Soak: long-running tests to detect leaks, slow degradation.

·         Concurrency: many simultaneous users, with realistic think times.
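
The scenario list above maps naturally onto a load tool. A sketch with Locust: the task weights approximate a traffic mix, and a LoadTestShape drives ramp-up, a short spike, and a soak; paths, user counts, and durations are illustrative assumptions.

# Sketch: traffic mix + ramp/spike/soak profile with Locust.
# Paths, weights, user counts and stage durations are assumptions.
# Run with:  locust -f loadtest.py --host https://api.example.test
from locust import HttpUser, LoadTestShape, task, between

class ShopUser(HttpUser):
    wait_time = between(1, 3)  # realistic think time between requests

    @task(8)                   # browsing dominates the mix
    def browse_catalog(self):
        self.client.get("/products?page=1")

    @task(1)
    def view_cart(self):
        self.client.get("/cart")

    @task(1)
    def checkout(self):
        self.client.post("/checkout", json={"payment": "test-card"})

class RampSpikeSoak(LoadTestShape):
    # (end time in seconds, concurrent users, spawn rate)
    stages = [
        (120, 50, 10),    # ramp-up / baseline
        (300, 200, 20),   # steady target load
        (360, 600, 50),   # spike to 3x target
        (3960, 200, 20),  # soak at target for an hour
    ]

    def tick(self):
        run_time = self.get_run_time()
        for end, users, rate in self.stages:
            if run_time < end:
                return (users, rate)
        return None  # stop the test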

Questions on Ambiguities

·         What’s the exact definition of “failure” for this service?

·         Which resources are most constrained (CPU, memory, DB connections, network)?

·         Are there autoscaling policies or rate limits that influence scenarios?

Areas Where Test Ideas Might Be Missed

·         Multi-tenant contention: one tenant starving others.

·         Thundering herd effects after outages or cache flushes.

·         Performance in non-happy paths (errors, retries, timeouts).

Output Template (Performance/Load Testing)

Context: [service/endpoint, architecture (monolith/microservice), criticality level]

Assumptions: [SLA/SLO targets, expected traffic patterns, caching, autoscaling]

Test Types: [performance, load, stress, soak]

Test Cases:

ID: [TC-PERF-001]

Type: [performance/load]

Title: [e.g., "Sustained load at target RPS"]

Preconditions/Setup:

  - [Prod-like environment, data size, configs]

  - [Monitoring/observability in place]

Steps:

  1. [Configure load tool with target RPS, concurrency, payload mix]

  2. [Run test for defined duration]

Variations:

  - [Baseline low load]

  - [Target steady-state load]

  - [Spike from low to 2x/3x target]

  - [Soak at target for N hours]

  - [Different payload sizes and request mixes]

Expected Results:

  - [Latency percentiles within SLO]

  - [Error rate within acceptable limits]

  - [Resource utilization in safe bounds, no leaks]

Cleanup:

  - [Reset environment, clear queues/caches, roll back configs if changed]

Coverage notes:

  - [Which scenarios/traffic patterns are validated; untested extremes]

Non-functionals:

  - [Capacity headroom, resiliency under stress, autoscaling behavior]

Data/fixtures:

  - [Realistic datasets, representative payload samples]

Environments:

  - [Dedicated perf env, or carefully controlled prod-like setup]

Ambiguity Questions:

- [Exact SLOs, business impact of degradation, allowed mitigation strategies]

Potential Missed Ideas:

- [Downstream dependency limits, batch jobs overlapping with peak traffic, cold-start behavior]

AI Prompt - End-to-End Testing

AI Prompt

"Outline E2E test flows for user goal [describe]. Include start state, navigation steps, data setup/cleanup, cross-browser/device considerations, and success/failure assertions."

Applying Critical Thinking

·         Think in user journeys, not screens: What represents complete success for this goal from the user’s point of view? What critical checkpoints must be true across multiple systems?

Generate E2E Flows

·         Primary journey: start → navigate → perform actions → see confirmation.

·         Alternative flows: different entry points (deep links, notifications).

·         Failure flows: invalid input, backend failure, partial completion, recovery.

·         Cross-system checks: UI, API, DB, emails/notifications, logs.
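
The primary journey above, scripted once with Playwright for Python and finished with a backend check that stands in for the cross-system assertions. The URL, selectors, promo code, confirmation copy, and orders endpoint are all assumptions to be replaced with the real system's.

# Sketch: primary checkout journey end to end (Playwright for Python).
# URL, selectors, test data and the orders API check are assumptions.
import requests
from playwright.sync_api import sync_playwright, expect

APP_URL = "https://shop.example.test"  # hypothetical app under test
API_URL = "https://api.example.test"   # hypothetical backend API

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()

    # Start state: a seeded test user logs in.
    page.goto(f"{APP_URL}/login")
    page.fill("#email", "e2e-user@example.test")
    page.fill("#password", "not-a-real-password")
    page.get_by_role("button", name="Log in").click()

    # Navigate and act: add item, apply promo, pay.
    page.goto(f"{APP_URL}/products/sku-123")
    page.get_by_role("button", name="Add to cart").click()
    page.goto(f"{APP_URL}/checkout")
    page.fill("#promo-code", "WELCOME10")
    page.get_by_role("button", name="Apply").click()
    page.get_by_role("button", name="Pay now").click()

    # Success assertion on the UI...
    expect(page.get_by_text("Order confirmed")).to_be_visible()
    order_id = page.get_by_test_id("order-id").inner_text()
    browser.close()

# ...and a cross-system check against the backend: order exists and is paid.
order = requests.get(f"{API_URL}/orders/{order_id}", timeout=10).json()
assert order["status"] == "paid"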

Questions on Ambiguities

·         Which parts are okay to mock vs must be real?

·         What’s the acceptable flakiness risk vs coverage (how many browsers/regions)?

·         Which 2–3 E2E flows are truly business-critical?

Areas Where Test Ideas Might Be Missed

·         Long-running flows (multi-session, multi-day tasks).

·         E2E behavior across releases (backwards compatibility).

·         User switching devices mid-flow.
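
The device-switch idea can be approximated by carrying the session's storage state from an emulated phone context into a desktop context. A sketch; the flow, device name, and selectors are assumptions.

# Sketch: start a flow on an emulated phone, resume it on desktop by reusing
# the stored session state. Selectors and expected values are assumptions.
from playwright.sync_api import sync_playwright, expect

APP_URL = "https://shop.example.test"  # hypothetical app under test

with sync_playwright() as p:
    browser = p.chromium.launch()

    # Step 1: begin the journey on a phone-sized, touch-enabled context.
    phone = browser.new_context(**p.devices["Pixel 5"])
    page = phone.new_page()
    page.goto(APP_URL)
    page.get_by_role("button", name="Add to cart").first.click()
    state = phone.storage_state()  # cookies + local storage snapshot
    phone.close()

    # Step 2: resume on a desktop context with the same session state.
    desktop = browser.new_context(
        viewport={"width": 1440, "height": 900},
        storage_state=state,
    )
    page2 = desktop.new_page()
    page2.goto(f"{APP_URL}/cart")
    expect(page2.get_by_test_id("cart-count")).to_have_text("1")

    browser.close()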

Output Template (E2E Testing)

Context: [user goal, e.g., "complete checkout with discount"], systems involved

Assumptions: [data seeding approach, real vs mocked dependencies, browsers/devices]

Test Types: [E2E, regression]

Test Cases:

ID: [TC-E2E-001]

Type: [E2E]

Title: [e.g., "User completes checkout with valid card"]

Preconditions/Setup:

  - [User account state, inventory, promo codes]

  - [Email/SMS delivery system configured]

Steps:

  1. [User logs in/registers]

  2. [Adds item to cart]

  3. [Applies promo]

  4. [Completes payment]

  5. [Views confirmation page/email]

Variations:

  - [Different payment methods, shipping options]

  - [Abandoned cart then resume]

  - [Failed payment then retry]

  - [Different browser/device combos]

Expected Results:

  - [UI confirmations, order in DB, payment captured, notifications sent]

Cleanup:

  - [Cancel/refund orders, restock inventory, clean test data]

Coverage notes:

  - [Critical path coverage, dependencies involved, known gaps and reasons]

Non-functionals:

  - [Latency of end-to-end flow, robustness under typical load]

Data/fixtures:

  - [Seed scripts, test accounts, cards, promo codes]

Environments:

  - [Staging/prod-parity with all integrated services]

Ambiguity Questions:

- [What is in-scope E2E vs already covered by integration tests?]

Potential Missed Ideas:

- [Cross-region flows, multi-tenant behavior, rollback/compensation flows]
