
Thursday, February 19, 2026

AI Prompt - Data Integrity / Migration Testing

AI Prompt

"Generate test cases for migrating data from [source] to [target]. Cover mapping rules, null/invalid handling, rounding, duplicates, referential integrity, rollback, and reconciliation checks."

Applying Critical Thinking

·         Define the golden source: Which system is authoritative for each entity? What is “correct” when source and target disagree?

·         Map every field: Document type, range, nullability, and transformation; explicit handling for unknown or invalid values.

·         Relationships first: Parent-child and foreign keys; order of migration and dependency graph; what happens to orphans.

·         Rollback is part of the design: Test rollback as a first-class scenario; ensure idempotency or clear “migrated” markers so re-runs are safe.
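The idempotency point above can be sketched in a few lines of Python. This is a minimal illustration, not a real migration runner; the `migrated_at` marker and the row shape are hypothetical stand-ins for whatever your schema uses:

```python
# Sketch: a re-run-safe migration step using a "migrated" marker.
# Rows already carrying migrated_at are skipped, so running twice is safe.

def migrate_rows(source_rows, target, now="2026-02-19T00:00:00Z"):
    """Copy unmigrated rows into target and mark them, returning the count."""
    migrated = 0
    for row in source_rows:
        if row.get("migrated_at") is not None:
            continue  # already migrated: a re-run becomes a no-op for this row
        target[row["id"]] = {k: v for k, v in row.items() if k != "migrated_at"}
        row["migrated_at"] = now  # marker that makes subsequent runs idempotent
        migrated += 1
    return migrated

rows = [{"id": 1, "name": "a", "migrated_at": None},
        {"id": 2, "name": "b", "migrated_at": None}]
target = {}
assert migrate_rows(rows, target) == 2   # first run migrates both rows
assert migrate_rows(rows, target) == 0   # second run changes nothing
```

A test for this scenario would run the migration twice and assert that the second pass migrates zero rows and leaves the target byte-identical.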

Generate Test Cases for Each Feature

·         Mapping rules: Each field: source → target mapping; default values; derived fields; conditional logic (e.g. if status=X then target flag Y).

·         Null/invalid: Source null → target null or default; invalid enum/date/number → reject, default, or error row; empty string vs null.

·         Rounding/type: Decimals: precision and rounding mode; dates: timezone and truncation; strings: length limits and encoding; integers: overflow.

·         Duplicates: Same business key in source multiple times: first wins, last wins, merge, or reject; how duplicates are logged.

·         Referential integrity: Parent migrated before child; FKs valid; orphans: fail, skip, or default parent; circular refs handled.

·         Rollback: After partial/full run: rollback script leaves target in consistent state; re-run after rollback behaves as defined.

·         Reconciliation: Record counts (per table/type); checksums or hashes for critical fields; spot checks: sample IDs in source vs target; delta report.

·         Idempotency: Running migration twice: no duplicate rows, no double application of side effects; safe resume from checkpoint.
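The reconciliation bullet above (counts plus checksums of critical fields) can be sketched as a small comparison helper. The `id` key and `amount` field are hypothetical; substitute your business key and critical columns:

```python
import hashlib

def reconcile(source_rows, target_rows, key="id", fields=("amount",)):
    """Compare row counts and per-row hashes of critical fields.

    Returns a report with both counts and the list of keys whose
    critical-field hash differs between source and target.
    """
    def digest(row):
        payload = "|".join(str(row[f]) for f in fields)
        return hashlib.sha256(payload.encode("utf-8")).hexdigest()

    target_digests = {r[key]: digest(r) for r in target_rows}
    mismatches = [r[key] for r in source_rows
                  if target_digests.get(r[key]) != digest(r)]
    return {"source_count": len(source_rows),
            "target_count": len(target_rows),
            "mismatches": mismatches}

report = reconcile(
    [{"id": 1, "amount": "10.00"}, {"id": 2, "amount": "5.50"}],
    [{"id": 1, "amount": "10.00"}, {"id": 2, "amount": "5.05"}],
)
assert report["mismatches"] == [2]  # id 2 was garbled in transit
```

In practice the same idea scales to SQL (GROUP BY counts, hashed concatenations of critical columns); the point is that the test asserts on a delta report, not just on "the job finished".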

Questions on Ambiguities

·         What is the authoritative definition of each entity (source vs target) after go-live?

·         How should invalid or legacy-bad data be handled: reject row, write to quarantine table, or apply default and log?

·         What precision and rounding apply to money and percentages (e.g. half-up, banker's rounding)?

·         Duplicate key strategy: which duplicate wins, and are others logged for manual review?

·         What is the order of migration (tables/entities) and the rollback order; are there cross-system dependencies (e.g. message queue)?

·         Who signs off on reconciliation (counts vs full checksum vs sampling) and what is the go/no-go criterion?

Areas Where Test Ideas Might Be Missed

·         Concurrent writes: source or target updated during migration; locking or snapshot strategy.

·         Large objects / blobs: size limits, streaming, and checksum for binaries; timeouts.

·         Soft deletes and history: migrate only active rows vs full history; deleted parent and child handling.

·         Encoded/encrypted fields: decrypt in source, transform, re-encrypt in target; key rotation during migration.

·         Audit and metadata: created_at, updated_at, migrated_at; preserving vs overwriting.

·         Feature flags or tenant scope: migrate only certain tenants or segments; rest stay on source until phased.

·         Downstream consumers: after cutover, do downstream systems see consistent data (e.g. cache invalidation, event replay)?
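For the large-object bullet above, the usual test idea is a streamed checksum: hash source and target binaries chunk by chunk rather than loading them whole, and assert the digests match even when chunk boundaries differ. A minimal sketch:

```python
import hashlib

def stream_checksum(chunks):
    """Hash a large binary incrementally from an iterable of byte chunks."""
    h = hashlib.sha256()
    for chunk in chunks:
        h.update(chunk)
    return h.hexdigest()

# Source and target may deliver the same bytes in different chunk sizes;
# the checksum depends only on the content, not the chunking.
source_chunks = [b"abc", b"def"]
target_chunks = [b"abcdef"]
assert stream_checksum(source_chunks) == stream_checksum(target_chunks)
```

The same helper doubles as a timeout probe: if hashing a blob in the target stalls, the migration's streaming path, not just its metadata, is broken.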

Output Template

Context: [system/feature under test, dependencies, environment]

Assumptions: [e.g., auth method, data availability, feature flags]

Test Types: [data integrity, migration]

Test Cases:

ID: [TC-001]

Type: [data integrity/migration]

Title: [short name]

Preconditions/Setup: [data, env, mocks, flags]

Steps: [ordered steps or request details]

Variations: [inputs/edges/negative cases]

Expected Results: [responses/UI states/metrics]

Cleanup: [teardown/reset]

Coverage notes: [gaps, out-of-scope items, risk areas]

Non-functionals: [perf targets, security considerations, accessibility notes]

Data/fixtures: [test users, payloads, seeds]

Environments: [dev/stage/prod-parity requirements]

Ambiguity Questions:

- [Question 1 about unclear behavior]

- [Question 2 about edge case]

Potential Missed Ideas:

- [Suspicious area where tests might still be thin]
