AI Prompt
"List security test cases for [app/endpoint].
Cover authZ/authN, input validation, common vulns (XSS, SQLi, SSRF, CSRF,
IDOR), transport security, logging, and misconfig checks. Include both positive
and negative cases."
Applying Critical Thinking
· Define the trust boundary: What is inside vs. outside the system? Every entry point (HTTP, WebSocket, file upload, webhook) is an attack surface.
· Assume breach for dependencies: Treat third-party libs, APIs, and configs as potentially compromised; test that failures don’t escalate.
· Positive vs. negative: Positive cases prove legitimate users get access and valid inputs succeed; negative cases prove attackers or invalid inputs are rejected or sanitized (see the sketch after this list).
· Chain thinking: One vuln (e.g. IDOR) can lead to another (e.g. data exfiltration); consider multi-step attack scenarios.
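To make the positive/negative pairing concrete, here is a minimal pytest sketch against a hypothetical /login endpoint. BASE_URL, the field names, and the expected status codes are assumptions for illustration, not a real API.

    # Positive/negative pairing for a hypothetical /login endpoint.
    import pytest
    import requests

    BASE_URL = "https://app.example.test"  # assumed test environment

    @pytest.mark.parametrize(
        "username, password, expected_status",
        [
            ("alice", "correct-horse",  200),  # positive: valid creds accepted
            ("alice", "wrong-password", 401),  # negative: bad password rejected
            ("alice", "",               400),  # negative: empty input rejected
            ("nosuchuser", "anything",  401),  # negative: unknown user rejected
        ],
    )
    def test_login_positive_and_negative(username, password, expected_status):
        resp = requests.post(
            f"{BASE_URL}/login",
            json={"username": username, "password": password},
            timeout=5,
        )
        assert resp.status_code == expected_status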
Generate Test Cases for Each Feature
· AuthN: Valid login, token refresh, MFA success, session validity. Negative: invalid credentials, expired/missing/revoked token, token reuse after logout (see the token sketch after this list).
· AuthZ: Role X can access resource A; tenant isolation. Negative: role Y cannot access A; cross-tenant access; privilege escalation.
· Input validation: Valid payloads accepted. Negative: oversized input, special characters, null/type confusion, boundary values.
· XSS: Sanitized output in HTML/JS contexts. Negative: reflected, stored, and DOM XSS via user-controlled input.
· SQLi: Parameterized queries with normal input. Negative: union-based, error-based, and blind SQLi payloads.
· SSRF: Allowed URLs only. Negative: internal IPs, cloud metadata endpoints, file:// URLs, redirects to internal hosts (see the allowlist sketch after this list).
· CSRF: Request with valid origin/cookie + CSRF token. Negative: request without token, wrong origin, SameSite cookie behavior.
· IDOR: User A can only access A’s resources. Negative: user A accesses B’s resource by changing an ID in the path/body (see the cross-user sketch after this list).
· Transport: HTTPS only, HSTS, secure cookies. Negative: HTTP downgrade, mixed content, missing cookie flags (Secure, HttpOnly).
· Logging: No PII/secrets in logs; errors don’t leak stack traces to the client. Negative: logs contain passwords, tokens, or full card numbers; verbose errors shown to the user.
· Misconfig: Secure defaults, no debug in prod. Negative: default credentials, exposed debug endpoints, permissive CORS.
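The token sketch referenced in the AuthN bullet: a minimal probe, assuming a hypothetical bearer-token API where /login issues a token, /logout revokes it, and /me requires it. All endpoint names and the seeded test user are invented for illustration.

    import requests

    BASE_URL = "https://app.example.test"  # assumed test environment

    def login(username, password):
        # Hypothetical helper: obtain a bearer token for a seeded test user.
        resp = requests.post(f"{BASE_URL}/login",
                             json={"username": username, "password": password},
                             timeout=5)
        resp.raise_for_status()
        return resp.json()["token"]

    def test_missing_token_rejected():
        # Negative: protected endpoint without a token must fail.
        assert requests.get(f"{BASE_URL}/me", timeout=5).status_code == 401

    def test_token_unusable_after_logout():
        token = login("alice", "correct-horse")
        headers = {"Authorization": f"Bearer {token}"}
        assert requests.get(f"{BASE_URL}/me", headers=headers,
                            timeout=5).status_code == 200  # positive case
        requests.post(f"{BASE_URL}/logout", headers=headers, timeout=5)
        # Negative: a revoked token must not work after logout.
        assert requests.get(f"{BASE_URL}/me", headers=headers,
                            timeout=5).status_code == 401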
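The allowlist sketch referenced in the SSRF bullet: a unit-level check of an outbound-URL validator. validate_outbound_url and its module are hypothetical names for whatever guard the app uses; the payload list mirrors the negative cases above.

    import pytest
    from myapp.urlcheck import validate_outbound_url  # hypothetical module under test

    SSRF_PAYLOADS = [
        "http://127.0.0.1/admin",                        # loopback
        "http://169.254.169.254/latest/meta-data/",      # cloud metadata
        "http://10.0.0.5/internal",                      # private range
        "file:///etc/passwd",                            # non-HTTP scheme
        "http://attacker.example/redirect-to-internal",  # off-allowlist host
    ]

    def test_allowed_url_accepted():
        # Positive: a host on the allowlist passes.
        assert validate_outbound_url("https://api.partner.example/v1/ping")

    @pytest.mark.parametrize("url", SSRF_PAYLOADS)
    def test_ssrf_payloads_rejected(url):
        # Negative: internal, metadata, and non-HTTP targets are refused.
        assert not validate_outbound_url(url)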
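The cross-user sketch referenced in the IDOR bullet: user A presents a valid token but requests user B’s resource by ID. The /orders/{id} path and the two pytest fixtures (user_a_token, user_b_order_id) are hypothetical; the fixtures would seed two distinct users.

    import requests

    BASE_URL = "https://app.example.test"  # assumed test environment

    def test_user_a_cannot_read_user_b_order(user_a_token, user_b_order_id):
        # Both arguments are hypothetical fixtures seeding two separate users.
        resp = requests.get(
            f"{BASE_URL}/orders/{user_b_order_id}",
            headers={"Authorization": f"Bearer {user_a_token}"},
            timeout=5,
        )
        # Expect 403 (or 404 if existence is hidden), never 200 with B's data.
        assert resp.status_code in (403, 404)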
Questions on Ambiguities
· What roles/tenants exist, and what is the exact access matrix (who can read/write what)?
· Are error messages intentionally generic for unauthenticated/unauthorized requests, or is some detail allowed for debugging?
· What input length/type limits are enforced (and where: gateway, app, DB)?
· Is CSRF protection applied to all state-changing operations, and is it token-based or SameSite-only?
· Which headers (e.g. X-Forwarded-For, Host) are trusted for routing or logging, and could they be spoofed? (See the spoofing sketch after this list.)
· Are file uploads restricted by type/size and scanned; where are they stored, and how are they served?
· What secrets (API keys, DB URLs) are in config/env, and are they ever logged or exposed in errors?
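One way to turn the header-trust question into a test: spoof X-Forwarded-For and confirm it does not reset per-IP rate limiting. The /login path, the attempt count, and the 429 response are assumptions about a hypothetical gateway configuration.

    import requests

    BASE_URL = "https://app.example.test"  # assumed test environment

    def test_spoofed_xff_does_not_reset_rate_limit():
        # Exhaust the (assumed) per-IP login rate limit.
        for _ in range(20):
            requests.post(f"{BASE_URL}/login",
                          json={"username": "alice", "password": "wrong"},
                          timeout=5)
        # Spoof a fresh client IP; a gateway that trusts the header blindly
        # would give the attacker a new rate-limit bucket.
        resp = requests.post(
            f"{BASE_URL}/login",
            json={"username": "alice", "password": "wrong"},
            headers={"X-Forwarded-For": "203.0.113.99"},
            timeout=5,
        )
        assert resp.status_code == 429  # still throttled despite the spoof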
Areas Where Test Ideas Might Be Missed
· Business logic: rate limits, coupon reuse, price manipulation, workflow bypass (e.g. skipping the payment step).
· Second-order injection: data stored from one request and executed in another context (e.g. stored XSS, stored SQLi in report generation).
· Subdomain/redirect issues: open redirects, subdomain takeover, cookie scope set too broad.
· Dependency vulns: outdated libs with known CVEs; tests don’t cover “what if this dependency is malicious”.
· Auth bypass via alternate paths: GraphQL introspection, internal APIs, webhooks without signature verification.
· Time-based and conditional attacks: timing side channels, race conditions on balance/eligibility checks (see the race sketch after this list).
· Client-side-only checks: relying on front-end validation without server-side enforcement.
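The race sketch referenced above: fire concurrent redemptions of a single-use coupon and count how many succeed. The /coupons/redeem endpoint, the coupon code, and the user_token fixture are hypothetical.

    from concurrent.futures import ThreadPoolExecutor
    import requests

    BASE_URL = "https://app.example.test"  # assumed test environment

    def redeem(token):
        return requests.post(
            f"{BASE_URL}/coupons/redeem",
            json={"code": "ONETIME50"},  # hypothetical single-use coupon
            headers={"Authorization": f"Bearer {token}"},
            timeout=5,
        )

    def test_single_use_coupon_under_concurrency(user_token):
        # user_token is a hypothetical fixture for a seeded test user.
        with ThreadPoolExecutor(max_workers=10) as pool:
            responses = list(pool.map(lambda _: redeem(user_token), range(10)))
        successes = [r for r in responses if r.status_code == 200]
        # At most one redemption may succeed; more indicates a race window.
        assert len(successes) <= 1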
Output Template
Context: [app/endpoint under test, entry points, dependencies, environment]
Assumptions: [auth method (JWT/OAuth/session), tenant model, WAF/gateway in front]
Test Types: [security, authN/authZ]
Test Cases:
  ID: [TC-SEC-001]
  Type: [security]
  Title: [short name]
  Preconditions/Setup: [data, env, mocks, flags]
  Steps: [ordered steps or request details]
  Variations: [inputs/edges/negative cases]
  Expected Results: [responses/UI states/metrics]
  Cleanup: [teardown/reset]
Coverage notes: [gaps, out-of-scope items, risk areas]
Non-functionals: [perf targets, security considerations, accessibility notes]
Data/fixtures: [test users, payloads, seeds]
Environments: [dev/stage/prod-parity requirements]
Ambiguity Questions:
- [Question 1 about unclear behavior]
- [Question 2 about edge case]
Potential Missed Ideas:
- [Suspicious area where tests might still be thin]