
Friday, February 13, 2026

AI Prompt - API Testing

AI Prompt (API level)

Prompt text

“Create API test cases for endpoint [METHOD /path]. List preconditions, request variations (valid/invalid/missing/edge), auth cases, rate limits, pagination, idempotency, and expected status codes/payloads.”

How to apply critical thinking

- Understand the API contract
  - Request/response schemas, required vs optional fields, business rules.
  - Authentication/authorization rules per role/tenant.
- Generate comprehensive test cases (see the sketch after this list)
  - Valid requests: minimal valid, full valid, typical real-world payloads.
  - Invalid/malformed: missing required fields, wrong types, extra fields, oversized payloads.
  - Auth: no token, expired token, wrong roles, different tenants.
  - Rate/pagination/idempotency: hitting rate limits, page boundaries, repeating the same request.
- Questions on ambiguities
  - What is considered a client error vs server error?
  - How precisely does pagination behave when data changes between calls?
  - Are error codes standardized across APIs?
- What test ideas might be missed
  - Security aspects (injection, sensitive data leakage, error message detail).
  - Backwards compatibility when fields are added/removed.
  - Localization of messages in error payloads (if applicable).
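
To turn the variation buckets above into something executable, here is a minimal pytest sketch that parameterizes valid, invalid, missing-field, and auth variations against a hypothetical POST /orders endpoint. The base URL, payload fields, and token values are assumptions for illustration, not a real contract.

import pytest
import requests

BASE_URL = "https://api.example.test"   # assumed test-environment host
ORDERS = f"{BASE_URL}/orders"           # hypothetical endpoint, as in the template below

VALID_MINIMAL = {"customer_id": "c-1", "items": [{"sku": "A1", "qty": 1}]}

@pytest.mark.parametrize("payload, headers, expected_status", [
    (VALID_MINIMAL, {"Authorization": "Bearer VALID"}, 201),                      # valid minimal
    ({**VALID_MINIMAL, "note": "gift"}, {"Authorization": "Bearer VALID"}, 201),   # valid plus optional field
    ({"items": []}, {"Authorization": "Bearer VALID"}, 400),                       # missing required field
    ({**VALID_MINIMAL, "items": "oops"}, {"Authorization": "Bearer VALID"}, 400),  # wrong type
    (VALID_MINIMAL, {}, 401),                                                      # no token
    (VALID_MINIMAL, {"Authorization": "Bearer EXPIRED"}, 401),                     # expired token (assumed behavior)
])
def test_create_order_variations(payload, headers, expected_status):
    resp = requests.post(ORDERS, json=payload, headers=headers, timeout=5)
    assert resp.status_code == expected_status
    if expected_status == 201:
        assert "id" in resp.json()   # assumed contract: a created order returns its id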

Output template (API testing)

Context: [HTTP method, path, domain area (e.g., "POST /orders")]

Assumptions: [auth mechanism, versioning strategy, content type, rate limits]

Test Types: [API, security (as applicable)]

Test Cases:

ID: [TC-API-001]

Type: [API]

Title: [e.g., "Create order with minimal valid payload"]

Preconditions/Setup:

  - [Existing users, auth token, seed data]

  - [Feature flags on/off]

Steps:

  1. [Send request with defined payload/headers/query params]

Variations:

  - [Valid minimal]

  - [Valid full]

  - [Invalid type]

  - [Missing required field]

  - [Oversized payload]

  - [Unauthenticated / unauthorized]

  - [High-rate sequence to trigger rate limit]

Expected Results:

  - [Status codes per variation]

  - [Response payload, headers]

  - [Side effects, e.g., DB records, events]

Cleanup:

  - [Delete created entities, reset quotas if needed]

Coverage notes:

  - [Covered status codes, auth paths, pagination, idempotency]

Non-functionals:

  - [Latency SLAs, payload size constraints, security checks]

Data/fixtures:

  - [Sample JWTs, user roles, example payloads]

Environments:

  - [API test env with prod-like gateway, WAF rules if possible]

Ambiguity Questions:

- [Unclear edge behaviors on pagination, updates during reads, error code mapping]

Potential Missed Ideas:

- [Negative security tests, content-negotiation, multi-tenant isolation]
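
Rate limiting and idempotency are the variations most often left unautomated, so here is a minimal sketch of both. It assumes a hypothetical /orders endpoint, a gateway quota of roughly 100 requests per minute, and an Idempotency-Key request header; adjust all three to your actual contract.

import requests

BASE_URL = "https://api.example.test"   # assumed test-environment host
ORDERS = f"{BASE_URL}/orders"           # hypothetical endpoint
HEADERS = {"Authorization": "Bearer VALID"}

def test_rate_limit_returns_429():
    # Fire a burst well above the assumed 100-per-minute gateway quota.
    statuses = [requests.get(ORDERS, headers=HEADERS, timeout=5).status_code for _ in range(150)]
    assert statuses[0] == 200   # first request is within the quota
    assert 429 in statuses      # at least one later request should be throttled

def test_idempotent_create():
    payload = {"customer_id": "c-1", "items": [{"sku": "A1", "qty": 1}]}
    key_headers = {"Idempotency-Key": "fixed-key-123", **HEADERS}   # assumed idempotency header
    first = requests.post(ORDERS, json=payload, headers=key_headers, timeout=5)
    second = requests.post(ORDERS, json=payload, headers=key_headers, timeout=5)
    assert first.status_code == 201
    assert second.status_code in (200, 201)                 # exact code depends on the contract
    assert first.json()["id"] == second.json()["id"]        # assumed field: repeat must not create a duplicate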

AI Prompt - Integration Testing

AI Prompt (integration level)

Prompt text

“Propose integration test cases for how [component A] interacts with [component B/service]. Include setup data, API contracts, dependency assumptions, and validation points. Cover success, failure, and timeout scenarios.”

How to apply critical thinking

- Understand the integration surface
  - What exact APIs/events/interfaces are used between A and B?
  - What are the guarantees on order, idempotency, and eventual consistency?
- Generate test cases for each interaction (see the sketch after this list)
  - Happy path: component A calls B with valid data; B responds as expected.
  - Contract mismatch: missing fields, versioned schemas, optional vs required fields.
  - Failure modes: B returns 4xx/5xx, malformed response, partial data.
  - Timeouts/retries: B is slow or unreachable; how does A behave?
  - Data consistency: verify that side effects (DB writes, messages) are consistent across both components.
- Questions on ambiguities
  - Who is the source of truth for shared fields?
  - What is the expected behavior when one side is upgraded and the other is not?
  - How long should A wait before giving up on B?
- What test ideas might be missed
  - Race conditions when multiple A instances talk to B concurrently.
  - Integration behavior under partial outages (e.g., one region down).
  - Back-pressure or throttling scenarios across components.
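
Before a full A+B environment exists, the failure and timeout bullets can be rehearsed by pointing A's client code at a local stand-in for B that can confirm, fail, or stall on demand. The sketch below uses Python's built-in HTTP server for that stand-in; call_b is a placeholder for A's real client, and the paths and payloads are invented for illustration.

import threading
import time
import requests
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubB(BaseHTTPRequestHandler):
    # Plays component B: /ok confirms, /fail answers 503, anything else stalls.
    def do_POST(self):
        if self.path == "/ok":
            body = b'{"status": "confirmed"}'
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        elif self.path == "/fail":
            self.send_response(503)
            self.end_headers()
        else:
            time.sleep(2)   # stall past A's timeout to simulate an unresponsive B

    def log_message(self, *args):   # keep test output quiet
        pass

def call_b(base_url, path):
    # Stand-in for component A's client code; real tests would drive A itself.
    try:
        resp = requests.post(f"{base_url}{path}", json={"order_id": "o-1"}, timeout=0.5)
        return resp.status_code
    except requests.exceptions.Timeout:
        return "timeout"

def test_a_handles_b_success_failure_and_timeout():
    server = HTTPServer(("127.0.0.1", 0), StubB)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    base = f"http://127.0.0.1:{server.server_address[1]}"
    assert call_b(base, "/ok") == 200          # happy path
    assert call_b(base, "/fail") == 503        # A should surface or retry the 5xx
    assert call_b(base, "/slow") == "timeout"  # A must give up instead of hanging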

Output template (integration testing)

Context: [component A, component B/service, integration style (REST, queue, db, event bus)]

Assumptions: [API version, auth mechanism, network reliability, contract stability]

Test Types: [integration]

Test Cases:

ID: [TC-IT-001]

Type: [integration]

Title: [e.g., "A saves order when B confirms payment"]

Preconditions/Setup: 

  - [Deploy A+B in test env]

  - [Seed DBs or queues]

  - [Configure stubs only for external systems, not between A and B]

Steps:

  1. [Trigger A via API/event]

  2. [A calls B as part of flow]

Variations:

  - [B returns success]

  - [B returns 4xx, 5xx]

  - [B times out]

Expected Results:

  - [A’s behavior: commit/rollback, messages emitted, logs, metrics]

Cleanup:

  - [Clear DB, queues, reset configs]

Coverage notes:

  - [Which integration paths and contracts are validated; untested paths]

Non-functionals:

  - [Latency expectations across A↔B, retry limits]

Data/fixtures:

  - [Shared IDs, domain objects, schema versions]

Environments:

  - [Integration env with A+B only; prod-like configs]

Ambiguity Questions:

- [Ownership of fields, error-handling strategy, retry/backoff policy]

Potential Missed Ideas:

- [Multi-instance interactions, failover, schema evolution tests]
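
Contract drift between A and B can also be asserted mechanically. Below is a minimal sketch assuming B exposes a POST /payments call and that the jsonschema package is installed; the host and field names are placeholders, not B's real contract.

import pytest
import requests
from jsonschema import validate, ValidationError

# The response shape component A expects from B's payment-confirmation call.
# Field names are assumptions for illustration only.
PAYMENT_CONFIRMED_SCHEMA = {
    "type": "object",
    "required": ["payment_id", "status", "amount"],
    "properties": {
        "payment_id": {"type": "string"},
        "status": {"enum": ["confirmed", "declined", "pending"]},
        "amount": {"type": "number"},
    },
    "additionalProperties": True,   # tolerate new optional fields B may add
}

B_BASE_URL = "https://b.example.test"   # assumed integration-env host for component B

def test_b_payment_response_matches_contract():
    resp = requests.post(f"{B_BASE_URL}/payments", json={"order_id": "o-1", "amount": 10.0}, timeout=5)
    assert resp.status_code == 200
    try:
        validate(instance=resp.json(), schema=PAYMENT_CONFIRMED_SCHEMA)
    except ValidationError as err:
        pytest.fail(f"B's response drifted from the contract A relies on: {err.message}")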

AI Prompt - Unit Test Case Generation

AI Prompt (unit level)

Prompt text

“Generate unit test cases for [function/module] in [language/framework]. Include inputs, expected outputs, edge cases, and mocks/stubs needed. Cover error handling and boundary conditions.”

How to apply critical thinking

- Clarify the contract
  - What are the exact inputs, outputs, and side effects?
  - What invariants must always hold true?
  - What are explicit vs. implicit preconditions (e.g., non-null, sorted arrays, valid IDs)?
- Generate strong test cases for each feature (see the sketch after this list)
  - Nominal cases: typical valid inputs that represent common usage.
  - Boundaries: min/max values, empty collections, zero/one/many elements, off-by-one.
  - Error handling: invalid types, null/undefined, out-of-range, malformed data.
  - State-related: same input called multiple times (idempotency), caching, internal state changes.
  - Mocked dependencies: happy/failure paths from downstream functions, repositories, external services.
- Questions on ambiguities
  - What should happen on unexpected input types or missing parameters?
  - Are there performance constraints (e.g., max list size) that should influence tests?
  - How should the function behave when its dependency returns partial/empty data?
- What test ideas might be missed
  - Rare but realistic values (e.g., leap-year dates, large numbers, special characters).
  - Concurrency or re-entrancy (same function called from multiple threads).
  - Backwards compatibility behavior if this function replaces an older one.
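
To make the nominal, boundary, and dependency-failure buckets concrete, here is a small pytest sketch. The order_total function is invented purely for illustration; in practice you would import your own module and keep the same shape of parameterized boundaries plus a mocked dependency failure.

import pytest
from unittest.mock import Mock

# Hypothetical function under test; substitute your own module in real tests.
def order_total(items, tax_service):
    # items is a list of {"price": float, "qty": int}; tax_service adds tax on the subtotal.
    if not items:
        return 0.0
    subtotal = sum(i["price"] * i["qty"] for i in items)
    return round(subtotal + tax_service.tax_for(subtotal), 2)

@pytest.mark.parametrize("items, expected", [
    ([], 0.0),                                                       # empty-collection boundary
    ([{"price": 10.0, "qty": 1}], 10.0),                             # single element
    ([{"price": 10.0, "qty": 2}, {"price": 0.5, "qty": 1}], 20.5),   # many elements
])
def test_order_total_nominal_and_boundaries(items, expected):
    tax = Mock()
    tax.tax_for.return_value = 0.0          # isolate the unit from the tax dependency
    assert order_total(items, tax) == expected

def test_order_total_propagates_dependency_failure():
    tax = Mock()
    tax.tax_for.side_effect = RuntimeError("tax service down")   # mocked failure path
    with pytest.raises(RuntimeError):
        order_total([{"price": 10.0, "qty": 1}], tax)

def test_order_total_rejects_malformed_item():
    tax = Mock()
    tax.tax_for.return_value = 0.0
    with pytest.raises(KeyError):           # assumed behavior: a missing 'price' key raises
        order_total([{"qty": 1}], tax)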

Output template (unit testing)

Context: [function/module name, language/framework, key responsibility]

Assumptions: [input types, error-handling strategy, side-effect rules]

Test Types: [unit]

Test Cases:

ID: [TC-UT-001]

Type: [unit]

Title: [short name, e.g., "returns total for valid items"]

Preconditions/Setup: [instantiate class, configure dependency mocks/stubs, sample data]

Steps: 

  1. [Call function with specific inputs]

Variations: 

  - [Variation 1: boundary]

  - [Variation 2: invalid input]

  - [Variation 3: dependency failure via mock]

Expected Results: 

  - [Return value/assertions]

  - [Side-effects, e.g., "no DB call", "log written"]

Cleanup: [reset mocks, static state]

Coverage notes: [what paths are covered, branches not covered, known gaps]

Non-functionals: [micro-performance expectations if any, e.g., "completes < 5 ms"]

Data/fixtures: [inline literals, factory methods, builders]

Environments: [local dev, unit test runner, CI]

Ambiguity Questions:

- [Question 1 about unclear behavior]

- [Question 2 about edge case]

Potential Missed Ideas:

- [Suspicious area where tests might still be thin]
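
As a worked instance of the template's side-effect assertion and cleanup steps, the sketch below verifies that a second lookup is served from cache with no extra repository call, and resets the mock during fixture teardown. PriceCache is a hypothetical class included only to keep the example self-contained.

import pytest
from unittest.mock import Mock

# Hypothetical unit under test: caches prices so repeat lookups avoid the repository.
class PriceCache:
    def __init__(self, repository):
        self._repo = repository
        self._cache = {}

    def get_price(self, sku):
        if sku not in self._cache:
            self._cache[sku] = self._repo.load_price(sku)
        return self._cache[sku]

@pytest.fixture
def repo():
    repository = Mock()
    repository.load_price.return_value = 9.99
    yield repository
    repository.reset_mock()   # cleanup step from the template: reset mocks

def test_second_lookup_hits_cache_not_repository(repo):
    cache = PriceCache(repo)
    assert cache.get_price("A1") == 9.99
    assert cache.get_price("A1") == 9.99
    repo.load_price.assert_called_once_with("A1")   # side-effect assertion: only one repository call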

