
Friday, February 13, 2026

AI Prompt - Performance or Load Testing

Prompt (performance level)

Prompt text

“Draft performance and load test scenarios for [service/endpoint]. Specify SLAs/SLOs, target RPS, concurrency patterns, payload sizes, ramp-up, soak, spike, and metrics to collect (latency percentiles, errors, saturation).”

How to apply critical thinking

Clarify performance requirements

- SLAs (e.g., p95 < 300 ms) and SLOs (e.g., 99.9% of requests under 500 ms).
- Expected peak vs. normal traffic and growth projections (a back-of-envelope RPS sketch follows below).
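If peak traffic is only known as a daily volume, a quick back-of-envelope calculation turns it into a target RPS. A minimal sketch in Python; every number here is a hypothetical assumption to be replaced with real traffic data:

# Back-of-envelope target RPS from traffic assumptions (all numbers hypothetical).
daily_requests  = 20_000_000                 # expected requests per day
avg_rps         = daily_requests / 86_400    # seconds per day -> ~231 RPS average
peak_factor     = 4.0                        # observed peak-to-average ratio
growth_headroom = 1.5                        # 50% growth allowance

steady_target  = avg_rps * peak_factor                # ~926 RPS at today's peak
stretch_target = steady_target * growth_headroom      # ~1389 RPS with headroom
print(f"steady-state target: {steady_target:.0f} RPS, "
      f"with growth headroom: {stretch_target:.0f} RPS")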

Generate performance scenarios

- Baseline: light load to validate the harness and capture reference metrics.
- Load: sustained target RPS with a realistic traffic mix.
- Stress/spike: sudden surges above normal load to find breaking points.
- Soak: long-running tests to detect leaks and slow degradation.
- Concurrency: many simultaneous users with realistic think times (see the Locust sketch after this list).
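One way to realize these scenario types in a single script is a staged load shape. A minimal sketch using Locust, a Python load-testing tool; the endpoints, user counts, and stage durations are illustrative placeholders, not recommendations:

from locust import HttpUser, LoadTestShape, task, between

class ApiUser(HttpUser):
    wait_time = between(1, 3)          # realistic think time between requests

    @task(3)                           # weight 3: read-heavy traffic mix
    def read_item(self):
        self.client.get("/items/42")   # hypothetical read endpoint

    @task(1)                           # weight 1: occasional writes
    def create_item(self):
        self.client.post("/items", json={"name": "load-test"})  # hypothetical write

class StagedShape(LoadTestShape):
    # Cumulative end times in seconds: ramp/baseline, steady load, spike, soak.
    stages = [
        {"end": 120,  "users": 50,  "spawn_rate": 5},    # baseline / ramp-up
        {"end": 720,  "users": 200, "spawn_rate": 10},   # steady target load
        {"end": 840,  "users": 600, "spawn_rate": 50},   # spike to ~3x target
        {"end": 4440, "users": 200, "spawn_rate": 10},   # soak back at target
    ]

    def tick(self):
        run_time = self.get_run_time()
        for stage in self.stages:
            if run_time < stage["end"]:
                return stage["users"], stage["spawn_rate"]
        return None                    # end the test after the last stage

Running locust -f perf_scenarios.py --host https://staging.example.com (file name and host are placeholders) picks up the shape class automatically and reports latency percentiles and failure counts as the run progresses.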

Questions on ambiguities

- What’s the exact definition of “failure” for this service?
- Which resources are most constrained (CPU, memory, DB connections, network)?
- Are there autoscaling policies or rate limits that influence scenarios?

What test ideas might be missed

- Multi-tenant contention: one tenant starving others.
- Thundering herd effects after outages or cache flushes.
- Performance in non-happy paths (errors, retries, timeouts); a sketch follows this list.
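For the non-happy-path idea, the same tooling can assert that error paths stay fast under load. A sketch in the same Locust style; the endpoint and the 500 ms error-path budget are hypothetical:

from locust import HttpUser, task, between

class ErrorPathUser(HttpUser):
    wait_time = between(1, 2)

    @task
    def lookup_missing_item(self):
        # A non-happy path should fail fast and cleanly: expect a quick 404,
        # not a timeout or a 5xx, even while the service is under load.
        with self.client.get("/items/does-not-exist", catch_response=True) as resp:
            if resp.status_code == 404 and resp.elapsed.total_seconds() < 0.5:
                resp.success()     # fast, well-formed error is the expected result
            else:
                resp.failure(
                    f"status {resp.status_code}, "
                    f"error path took {resp.elapsed.total_seconds():.2f}s"
                )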

Output template (performance/load testing)

Context: [service/endpoint, architecture (monolith/microservice), criticality level]

Assumptions: [SLA/SLO targets, expected traffic patterns, caching, autoscaling]

Test Types: [performance, load, stress, soak]

Test Cases:

ID: [TC-PERF-001]

Type: [performance/load]

Title: [e.g., "Sustained load at target RPS"]

Preconditions/Setup:

  - [Prod-like environment, data size, configs]

  - [Monitoring/observability in place]

Steps:

  1. [Configure load tool with target RPS, concurrency, payload mix]

  2. [Run test for defined duration]

Variations:

  - [Baseline low load]

  - [Target steady-state load]

  - [Spike from low to 2x/3x target]

  - [Soak at target for N hours]

  - [Different payload sizes and request mixes]

Expected Results:

  - [Latency percentiles within SLO] (a percentile-check sketch follows this template)

  - [Error rate within acceptable limits]

  - [Resource utilization in safe bounds, no leaks]

Cleanup:

  - [Reset environment, clear queues/caches, roll back configs if changed]

Coverage notes:

  - [Which scenarios/traffic patterns are validated; untested extremes]

Non-functionals:

  - [Capacity headroom, resiliency under stress, autoscaling behavior]

Data/fixtures:

  - [Realistic datasets, representative payload samples]

Environments:

  - [Dedicated perf env, or carefully controlled prod-like setup]

Ambiguity Questions:

- [Exact SLOs, business impact of degradation, allowed mitigation strategies]

Potential Missed Ideas:

- [Downstream dependency limits, batch jobs overlapping with peak traffic, cold-start behavior]
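To close the loop on the Expected Results above, percentile checks can be automated against raw run output. A minimal sketch assuming Python with NumPy and a CSV of per-request results with latency_ms and status columns; the file layout and thresholds are assumptions:

import csv
import numpy as np

# Placeholder SLO thresholds -- substitute the targets agreed for the service.
SLO = {"p95_ms": 300, "p99_ms": 500, "max_error_rate": 0.001}

def check_run(results_csv):
    """Return True if the run met the SLO; print each check either way."""
    latencies, errors, total = [], 0, 0
    with open(results_csv, newline="") as f:
        for row in csv.DictReader(f):            # columns: latency_ms, status
            total += 1
            if int(row["status"]) >= 500:
                errors += 1
            else:
                latencies.append(float(row["latency_ms"]))

    checks = [
        ("p95 latency", float(np.percentile(latencies, 95)), SLO["p95_ms"]),
        ("p99 latency", float(np.percentile(latencies, 99)), SLO["p99_ms"]),
        ("error rate", errors / total, SLO["max_error_rate"]),
    ]
    ok = True
    for name, measured, limit in checks:
        passed = measured <= limit
        ok = ok and passed
        print(f"{name}: {measured:.3f} (limit {limit}) -> "
              f"{'PASS' if passed else 'FAIL'}")
    return ok

Wired into CI, a check like this turns the SLO from a one-off report into a regression gate.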
