Friday, February 13, 2026

AI Prompt - Unit Test Case Generation

AI Prompt (unit level)

Prompt text

“Generate unit test cases for [function/module] in [language/framework]. Include inputs, expected outputs, edge cases, and mocks/stubs needed. Cover error handling and boundary conditions.”
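
For example, the prompt instantiated for a hypothetical Python function (the function name and test stack here are illustrative assumptions, reused in the sketches below):

“Generate unit test cases for calculate_total in Python/pytest. Include inputs, expected outputs, edge cases, and mocks/stubs needed. Cover error handling and boundary conditions.”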

How to apply critical thinking

·         Clarify the contract (a contract sketch follows this list)

    - What are the exact inputs, outputs, and side effects?

    - What invariants must always hold true?

    - What are explicit vs. implicit preconditions (e.g., non-null, sorted arrays, valid IDs)?
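
To make the contract concrete, here is a minimal sketch of a hypothetical calculate_total function whose contract is spelled out in its docstring (the name, signature, and rules are assumptions for illustration, not part of the prompt):

```python
class DiscountService:
    """Stand-in dependency; assumed to expose a single get_rate() method."""

    def get_rate(self) -> float:
        raise NotImplementedError


def calculate_total(items: list[dict], discount_service: DiscountService) -> float:
    """Return the discounted total for a list of {"price": float} items.

    Explicit preconditions: items is a list; every price is >= 0.
    Implicit precondition made explicit: the discount rate is in [0, 1].
    Invariant: the result is never negative.
    Side effect: exactly one call to discount_service.get_rate().
    Raises ValueError when a precondition is violated.
    """
    if not isinstance(items, list):
        raise ValueError("items must be a list")
    if any(item["price"] < 0 for item in items):
        raise ValueError("prices must be non-negative")
    rate = discount_service.get_rate()
    if not 0 <= rate <= 1:
        raise ValueError("discount rate must be between 0 and 1")
    return sum(item["price"] for item in items) * (1 - rate)
```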

·         Generate strong test cases for each feature (a pytest sketch follows this list)

    - Nominal cases: typical valid inputs that represent common usage.

    - Boundaries: min/max values, empty collections, zero/one/many elements, off-by-one.

    - Error handling: invalid types, null/undefined, out-of-range, malformed data.

    - State-related: same input called multiple times (idempotency), caching, internal state changes.

    - Mocked dependencies: happy/failure paths from downstream functions, repositories, external services.
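
A sketch of what strong unit tests for the hypothetical calculate_total above could look like, assuming pytest and unittest.mock as the test stack:

```python
import pytest
from unittest.mock import Mock

# Assumes calculate_total from the contract sketch above.


def make_service(rate=0.1):
    # Mocked dependency: happy path returns a fixed discount rate.
    service = Mock()
    service.get_rate.return_value = rate
    return service


def test_nominal_total_for_valid_items():
    items = [{"price": 10.0}, {"price": 5.0}]
    assert calculate_total(items, make_service(rate=0.1)) == pytest.approx(13.5)


def test_boundary_empty_list_returns_zero():
    assert calculate_total([], make_service()) == 0.0


def test_error_negative_price_raises():
    with pytest.raises(ValueError):
        calculate_total([{"price": -1.0}], make_service())


def test_error_non_list_input_raises():
    with pytest.raises(ValueError):
        calculate_total("not a list", make_service())


def test_state_same_input_is_idempotent():
    items = [{"price": 2.0}]
    service = make_service(rate=0.5)
    assert calculate_total(items, service) == calculate_total(items, service)


def test_dependency_failure_propagates():
    # Failure path from the mocked downstream service.
    service = Mock()
    service.get_rate.side_effect = TimeoutError("discount service down")
    with pytest.raises(TimeoutError):
        calculate_total([{"price": 1.0}], service)
```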

·         Questions on ambiguities

    - What should happen on unexpected input types or missing parameters?

    - Are there performance constraints (e.g., max list size) that should influence tests?

    - How should the function behave when its dependency returns partial/empty data?

·         What test ideas might be missed (a concurrency sketch follows this list)

    - Rare but realistic values (e.g., leap-year dates, large numbers, special characters).

    - Concurrency or re-entrancy (same function called from multiple threads).

    - Backwards compatibility behavior if this function replaces an older one.
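
One way to probe the concurrency idea above, sketched with Python's threading module (illustrative only; whether such a test is meaningful depends on the function's actual shared state):

```python
import threading

# Assumes calculate_total and make_service from the sketches above.


def test_concurrent_calls_return_consistent_results():
    # Run the same call from several threads; any result that differs from
    # the single-threaded answer hints at unsafe shared mutable state.
    items = [{"price": 10.0}]
    service = make_service(rate=0.1)
    expected = calculate_total(items, service)
    results = []

    def worker():
        results.append(calculate_total(items, service))

    threads = [threading.Thread(target=worker) for _ in range(8)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    assert results == [expected] * len(threads)
```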

Output template (unit testing)

Context: [function/module name, language/framework, key responsibility]

Assumptions: [input types, error-handling strategy, side-effect rules]

Test Types: [unit]

Test Cases:

ID: [TC-UT-001]

Type: [unit]

Title: [short name, e.g., "returns total for valid items"]

Preconditions/Setup: [instantiate class, configure dependency mocks/stubs, sample data]

Steps: 

  1. [Call function with specific inputs]

Variations: 

  - [Variation 1: boundary]

  - [Variation 2: invalid input]

  - [Variation 3: dependency failure via mock]

Expected Results: 

  - [Return value/assertions]

  - [Side-effects, e.g., "no DB call", "log written"]

Cleanup: [reset mocks, static state]

Coverage notes: [what paths are covered, branches not covered, known gaps]

Non-functionals: [micro-performance expectations if any, e.g., "completes < 5 ms"]

Data/fixtures: [inline literals, factory methods, builders]

Environments: [local dev, unit test runner, CI]

Ambiguity Questions:

- [Question 1 about unclear behavior]

- [Question 2 about edge case]

Potential Missed Ideas:

- [Suspicious area where tests might still be thin]
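
For illustration, a partially filled instance of the template for the hypothetical calculate_total example (all values are assumed, not prescribed):

Context: calculate_total, Python/pytest, computes a discounted order total
Assumptions: items is a list of {"price": float}; precondition violations raise ValueError
Test Types: unit
Test Cases:
ID: TC-UT-001
Type: unit
Title: returns discounted total for valid items
Preconditions/Setup: mock DiscountService.get_rate to return 0.1; items = [{"price": 10.0}, {"price": 5.0}]
Steps:
  1. Call calculate_total(items, mock_service)
Variations:
  - Empty list (boundary)
  - Negative price (invalid input)
  - get_rate raises TimeoutError (dependency failure via mock)
Expected Results:
  - Returns 13.5
  - get_rate called exactly once
Cleanup: reset mocks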


AI Prompt - API Testing

Prompt (API level)

Prompt text

“Create API test cases for endpoint [METHOD /path]. List preconditions, request variations (valid/invalid/...