Writing Unit Tests with AI

Let AI generate your test suite. A practical workflow for writing tests that actually catch bugs, not just boost coverage numbers.

3 steps
Step 1: List all test cases before writing code

Ask AI to list every test case for a function before you write a single line of test code. This reveals edge cases you would have missed.

Example prompt
List all test cases I should write for this function. Include: happy path scenarios, edge cases (empty input, null, zero, negative numbers, max values), error cases (invalid input, exceptions), and boundary conditions. Do not write test code yet — just list the cases. Function:

[PASTE FUNCTION]
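As an illustration, here is a small hypothetical function (`apply_discount` is my own example, not part of the playbook) together with the kind of case list step 1 should produce. Note that no test code is written yet, just the cases:

```python
# Hypothetical function used to illustrate step 1.
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, rounded to 2 decimal places."""
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# The kind of case list a model might return (cases only, no test code):
#   happy path: apply_discount(100, 25) -> 75.0
#   edge:       percent = 0 (price unchanged)
#   edge:       percent = 100 (result is 0.0)
#   edge:       price = 0
#   error:      percent = -1 or percent = 101 -> ValueError
#   boundary:   rounding behavior, e.g. apply_discount(10, 33.333)
```

A list like this is easy to scan for gaps before any code exists, which is the point of doing it as a separate step.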
Step 2: Generate test code from the case list

Now pass the case list back to AI and ask it to write the actual test code. Review each generated test for correctness; AI sometimes inverts assertions.

Example prompt
Write [JEST/PYTEST/VITEST/etc.] unit tests for the following function, covering all these test cases: [PASTE CASE LIST FROM PREVIOUS STEP]. Use descriptive test names. Mock external dependencies. Include setup/teardown if needed. Function:

[PASTE FUNCTION]
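Here is a sketch of what step 2 might produce in pytest, assuming the hypothetical `apply_discount` function from step 1 and its case list. The descriptive test names and parametrized cases are the pattern to look for when reviewing the output:

```python
import pytest


# Hypothetical function under test, redefined so this example is
# self-contained.
def apply_discount(price: float, percent: float) -> float:
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


@pytest.mark.parametrize(
    "price, percent, expected",
    [
        (100.0, 25.0, 75.0),    # happy path
        (100.0, 0.0, 100.0),    # boundary: no discount
        (100.0, 100.0, 0.0),    # boundary: full discount
        (0.0, 50.0, 0.0),       # edge: zero price
        (10.0, 33.333, 6.67),   # rounding behavior
    ],
)
def test_apply_discount_returns_expected(price, percent, expected):
    assert apply_discount(price, percent) == expected


@pytest.mark.parametrize("bad_percent", [-1.0, 101.0])
def test_apply_discount_rejects_out_of_range_percent(bad_percent):
    with pytest.raises(ValueError):
        apply_discount(100.0, bad_percent)
```

When reviewing, check each `expected` value by hand against the function; an inverted or wrong-value assertion in a parametrized table is easy to miss.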
Step 3: Ask AI to find gaps in your existing tests

Paste your existing test file and the source code. Ask AI what scenarios are not covered and what could still break.

Example prompt
Review my existing tests against the source code and identify: 1) Test cases that are missing, 2) Tests that test the wrong thing or have incorrect assertions, 3) Code paths that are not covered, 4) Scenarios that could break in production but are not tested. Source:

[PASTE SOURCE]

Existing tests:

[PASTE TESTS]
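To make the review concrete, here are two kinds of findings this step typically surfaces, again using the hypothetical `apply_discount` function from the earlier examples:

```python
# Hypothetical function under test (same sketch as in earlier steps).
def apply_discount(price: float, percent: float) -> float:
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


# Finding 1: an inverted assertion. The original test compared with the
# wrong operator, so it could only pass while the code was broken.
#   before: assert apply_discount(100, 25) != 75.0
assert apply_discount(100, 25) == 75.0  # after: corrected assertion

# Finding 2: an uncovered code path. The ValueError branch had no test
# at all; this exercises it directly.
raised = False
try:
    apply_discount(100, 150)
except ValueError:
    raised = True
assert raised, "out-of-range percent should raise ValueError"
```

Both categories map directly onto points 2 and 3 of the prompt above: incorrect assertions and uncovered code paths.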