Vitest Unit Testing in Practice: TDD Workflow and Coverage Reports
3 AM. Staring at Test Suites: 1 failed, 47 passed in the terminal. Every code change meant waiting 28 seconds for tests to finish. Then another 28 seconds. My coffee had gone cold hours ago.
That was my reality last year before migrating from Jest to Vitest.
Back then, the project had nearly 500 test cases. Every npm test run gave me time to scroll through two pages of Hacker News. After switching to Vitest, the same tests finished in just over 3 seconds. Honestly, I nearly cried.
So today I want to talk about two things: how to get that fast testing experience with Vitest, and more interestingly—how to use TDD (Test-Driven Development) to make writing tests less painful. I’ll walk you through a complete price formatting function example, go through the Red-Green-Refactor cycle, then cover coverage configuration, mocking techniques, and Vitest UI for debugging.
Ready?
Why Vitest + TDD
Let’s start with some numbers.
SitePoint ran a comparison in 2026: 50,000 test cases, Vitest finished in 3 seconds. Jest? 28 to 34 seconds. That’s not a small difference—it’s an order of magnitude.
Speed is just one reason. If you’ve used Jest with ESM modules, you probably hit that wall—install babel, configure transformers, pray that the magic strings in jest.config.js actually work. Vitest is different. It supports ESM natively, no transpilation config needed. Your code runs exactly as written, simple and clean.
Another thing Vite users will appreciate: Vitest reuses your Vite config. Aliases, environment variables, plugins configured in vite.config.ts—the test environment inherits them automatically. No need to write another moduleNameMapper like you would for Jest. When I first discovered this, I sat there stunned for a few seconds—testing config could actually be this painless?
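As a sketch of what that reuse looks like in practice (the `@` alias here is just an example, not from the original project):

```typescript
// vite.config.ts: the one config both the app and the tests use
import { defineConfig } from 'vite'
import { fileURLToPath, URL } from 'node:url'

export default defineConfig({
  resolve: {
    // This alias works in application code AND in test files:
    // Vitest picks it up automatically, no moduleNameMapper needed.
    alias: { '@': fileURLToPath(new URL('./src', import.meta.url)) },
  },
})
```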
Now let’s talk about method. TDD—many have heard of it, few stick with it. The core is a cycle called Red-Green-Refactor: write a failing test first (red), then write just enough code to pass (green), finally refactor and clean up. Sounds counterintuitive, right? Write tests before code?
But here’s the benefit: every line of code you write exists to make a test pass. No extra logic, no “just in case” code. And because tests come first, you’re forced to think through what the function should do, what it returns, where the boundaries are. This constraint actually makes the design clearer.
Vitest’s watch mode makes this cycle incredibly smooth. Save a file, tests run instantly, results show right in the terminal. No window switching, no manual commands—like having a co-pilot constantly checking: “Hey, that change broke a test” or “All green now.” That instant feedback puts you into a flow state without even noticing.
TDD Practice: Building a Function from Scratch
Let’s stop talking and actually do it. We’ll develop a formatPrice() function using TDD, converting numbers to currency display. Like turning 1234.5 into ¥1,234.50.
Red Phase: Write a Failing Test
Open your project, create formatPrice.test.ts:
```typescript
// formatPrice.test.ts
import { describe, it, expect } from 'vitest'
import { formatPrice } from './formatPrice'

describe('formatPrice', () => {
  it('should format number as Chinese yuan currency', () => {
    expect(formatPrice(1234.5)).toBe('¥1,234.50')
  })
})
```
Run npx vitest now. You’ll get a big red error: Cannot find module './formatPrice'. Because the function doesn’t exist yet.
That’s exactly right. This is the Red phase—a failing test means you’ve defined a requirement that hasn’t been implemented. Some people think writing tests first is weird, but think about it: if you write code first then tests, how do you know the test actually checks what you intended?
Green Phase: Write Minimal Passing Code
Now create formatPrice.ts with just enough code to pass:
```typescript
// formatPrice.ts
export function formatPrice(value: number): string {
  return '¥1,234.50' // Hard-code the return value first
}
```
Run tests again. Green!
Wait, you might say: “Isn’t that cheating?” No, it’s not. TDD emphasizes writing “just enough” code to pass tests, nothing more. Hard-coded values, minimal logic—as long as tests pass, you have a verifiable foundation. Then add more tests, modify code, step by step.
Let’s add another test case:
```typescript
it('should handle different values correctly', () => {
  expect(formatPrice(0)).toBe('¥0.00')
  expect(formatPrice(99.99)).toBe('¥99.99')
})
```
Tests turn red again. Now you can’t hard-code anymore—real logic needed:
```typescript
export function formatPrice(value: number): string {
  return `¥${value.toFixed(2).replace(/\B(?=(\d{3})+(?!\d))/g, ',')}`
}
```
Run tests. All green. That regex is ugly, but it works for now.
Refactor Phase: Clean Up the Code
Tests pass, but the code could be cleaner. Now you can refactor safely—tests will guard against mistakes.
```typescript
// Refactored version
export function formatPrice(value: number): string {
  // Use Intl.NumberFormat for robustness
  return new Intl.NumberFormat('zh-CN', {
    style: 'currency',
    currency: 'CNY',
    minimumFractionDigits: 2,
  }).format(value)
}
```
Run tests. Still green. Refactoring complete.
See, that’s a full Red-Green-Refactor cycle. Start with failure, write minimal code, then improve structure. Tests protect you throughout—no fear of breaking things. Each step is small, so your mental load stays light. No need to figure out all edge cases at once—tests will remind you.
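If you want to sanity-check the regex version outside the test runner, here's a standalone sketch in plain TypeScript (no Vitest needed; the values are the same ones from the tests above, plus a larger one):

```typescript
// Standalone sanity check of the regex-based formatter.
// The regex inserts a comma before every group of three digits
// that sits to the left of the decimal point.
function formatPrice(value: number): string {
  return `¥${value.toFixed(2).replace(/\B(?=(\d{3})+(?!\d))/g, ',')}`
}

console.log(formatPrice(0))          // ¥0.00
console.log(formatPrice(99.99))      // ¥99.99
console.log(formatPrice(1234.5))     // ¥1,234.50
console.log(formatPrice(1234567.89)) // ¥1,234,567.89
```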
In real projects, I usually run this cycle in watch mode. Save file → tests auto-run → see results → modify code → save → run again. The whole thing takes seconds, never leaving the editor. That feeling of “change something and instantly know if it’s right”—really satisfying.
Coverage Configuration and CI Integration
Writing tests is one thing; knowing how much of your code they actually cover is another. Coverage reports handle that.
Basic Configuration
Add coverage config to vitest.config.ts:
```typescript
// vitest.config.ts
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    coverage: {
      provider: 'v8', // 'v8' (the default, faster) or 'istanbul'
      reporter: ['text', 'html', 'json-summary'],
      reportsDirectory: './coverage',
      include: ['src/**/*.ts'],
      exclude: ['src/**/*.test.ts', 'src/types/**'],
      thresholds: {
        statements: 80,
        branches: 75,
        functions: 80,
        lines: 80,
      },
    },
  },
})
```
Provider has two options: v8 and istanbul. v8 uses V8 engine’s native coverage API, faster; istanbul is the older solution with better compatibility. For pure Vite/Node environments, v8 is enough.
Reporter specifies output formats: text prints to terminal, html generates visual reports, json-summary for CI tools.
Threshold Settings
Let me explain thresholds. It has four dimensions:
- statements: Statement coverage, how many code lines were executed
- branches: Branch coverage, whether each if/else branch was tested
- functions: Function coverage, how many functions were called
- lines: Line coverage, similar to statements but calculated differently
I usually set thresholds between 75%-85%. Too low is pointless, too high exhausts the team—some code (boundary checks, error handling) really can’t reach 100%.
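If you want thresholds enforced per file rather than only on the project-wide average, Vitest supports that too. A hedged sketch (the perFile option exists in recent Vitest versions; check the docs for yours):

```typescript
// vitest.config.ts (fragment)
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    coverage: {
      thresholds: {
        statements: 80,
        branches: 75,
        // Apply the thresholds to every file individually,
        // so one well-tested file can't mask an untested one.
        perFile: true,
      },
    },
  },
})
```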
GitHub Actions Integration
Coverage’s real value: automatically blocking PRs that don’t meet thresholds. Add to .github/workflows/test.yml:
```yaml
- name: Run tests with coverage
  run: npm run test -- --coverage

- name: Check coverage threshold
  run: |
    COVERAGE=$(jq '.total.lines.pct' coverage/coverage-summary.json)
    if (( $(echo "$COVERAGE < 80" | bc -l) )); then
      echo "Coverage $COVERAGE% is below threshold 80%"
      exit 1
    fi
```
If coverage drops below 80%, the PR can't merge. Team members must ensure tests are sufficient before submitting. (If you've configured thresholds in vitest.config.ts as shown earlier, Vitest itself already exits non-zero when they aren't met, so this shell check is an extra belt-and-suspenders step.)
Reading Reports
After running npx vitest --coverage, the terminal shows something like:
```text
 % Stmts | % Branch | % Funcs | % Lines | Uncovered Lines
---------|----------|---------|---------|-----------------
   82.45 |    76.32 |   85.71 |   82.45 | 23-25, 67
```
Uncovered Line lists lines not tested. Open coverage/index.html for detailed visual reports—green means tested, red means missed.
Honestly, when I first started chasing coverage, I obsessed over testing every single line. Later I realized it wasn't necessary. 80% coverage usually covers the core logic and main branches; the remaining 20% is often extreme edge cases, and forcing tests there wastes time.
Mock Trio: vi.fn, vi.spyOn, vi.mock
Testing’s biggest headache: handling external dependencies—API requests, timers, third-party libraries. Vitest provides three mocking approaches, each with different uses.
vi.fn(): Create Fake Functions
When you need a “fake” function—don’t care what it originally was, only care how it’s called—use vi.fn().
```typescript
import { test, expect, vi } from 'vitest'
import { callMeMaybe } from './callMeMaybe'

test('callback should be called once', () => {
  const callback = vi.fn()
  callMeMaybe(callback)
  expect(callback).toHaveBeenCalledTimes(1)
  expect(callback).toHaveBeenCalledWith('hello')
})
```
Here callback is a brand-new function recording call count, arguments, return values. Use mockReturnValue to specify returns, mockImplementation for behavior.
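To demystify what vi.fn() gives you, here's a toy sketch of the idea in plain TypeScript (illustrative only, nothing like Vitest's real implementation): a wrapper function that records its calls.

```typescript
// Toy version of a mock function: call it like a normal function,
// and it records every call's arguments for later inspection.
type ToyMock = ((...args: unknown[]) => unknown) & {
  calls: unknown[][]
}

function toyFn(returnValue?: unknown): ToyMock {
  const mock = ((...args: unknown[]) => {
    mock.calls.push(args) // record this call's arguments
    return returnValue
  }) as ToyMock
  mock.calls = []
  return mock
}

const callback = toyFn()
callback('hello')
console.log(callback.calls.length) // how many times it was called
console.log(callback.calls[0][0])  // first argument of the first call
```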
vi.spyOn(): Monitor Real Functions
Sometimes you don’t want to replace a function—just want to see if it was called, what arguments it got. Use vi.spyOn.
```typescript
import { test, expect, vi } from 'vitest'
import { greet } from './greet'

test('should call console.log', () => {
  const logSpy = vi.spyOn(console, 'log')
  greet('World')
  expect(logSpy).toHaveBeenCalledWith('Hello, World!')
  logSpy.mockRestore() // Don't forget to restore
})
```
spy keeps the original function’s behavior, just monitors alongside. Remember mockRestore() after use, otherwise other tests get affected.
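Again, a toy plain-TypeScript sketch of the idea (not Vitest's actual mechanism): wrap an object's method so calls are recorded but still delegated to the original, with a restore function to undo the wrapping.

```typescript
// Toy spy: replace obj[key] with a wrapper that records arguments,
// calls the original, and can restore the original afterwards.
function toySpyOn<T extends object, K extends keyof T>(obj: T, key: K) {
  const original = obj[key] as unknown as (...args: unknown[]) => unknown
  const calls: unknown[][] = []
  const wrapped = (...args: unknown[]) => {
    calls.push(args)
    return original.apply(obj, args) // keep the original behavior
  }
  ;(obj as any)[key] = wrapped
  return {
    calls,
    restore: () => {
      ;(obj as any)[key] = original
    },
  }
}

const logger = {
  lines: [] as string[],
  log(msg: string) {
    this.lines.push(msg)
  },
}

const spy = toySpyOn(logger, 'log')
logger.log('Hello, World!')
console.log(logger.lines.length, spy.calls.length) // original ran AND call recorded
spy.restore()
```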
vi.mock(): Replace Entire Modules
When you need to simulate API responses, replace third-party libraries—use vi.mock. It replaces the whole module.
```typescript
import { test, expect, vi } from 'vitest'
import axios from 'axios'
import { getUser } from './getUser'

// Mock axios: the factory below replaces the real module
vi.mock('axios', () => ({
  default: {
    get: vi.fn(() => Promise.resolve({ data: { name: 'test' } })),
  },
}))

test('getUser should return user data', async () => {
  const user = await getUser(1)
  expect(user.name).toBe('test')
  expect(axios.get).toHaveBeenCalledWith('/users/1')
})
```
vi.mock has a pitfall: calls to it are hoisted to the top of the file, no matter whether you wrote them inside a function or an if statement. So the mock factory can't reference variables declared alongside it; if the factory needs shared values, define them with vi.hoisted().
Which One to Use?
Simple answer:
- Need a fake function? Use vi.fn()
- Want to monitor a real function? Use vi.spyOn()
- Replacing an entire module? Use vi.mock()
I used to confuse these three. Later found a mnemonic: fn is “make fake”, spy is “peek at”, mock is “swap out”. Kind of catchy.
Cleanup Matters
Tests shouldn't affect each other; that's the foundation of a reliable suite. After each test, clean up mocks:

```typescript
import { afterEach, vi } from 'vitest'

afterEach(() => {
  vi.restoreAllMocks()
})
```
Or enable globally in vitest config:
```typescript
// vitest.config.ts (fragment)
test: {
  restoreMocks: true
}
```
Vitest UI and Debugging Tips
Terminal test results work fine, but if you want a more visual experience, try Vitest UI.
Launch Visual Interface
```bash
npx vitest --ui
```

The browser opens automatically with the test list on the left and details on the right (the first run prompts you to install the @vitest/ui package if it isn't there yet). Click any test to see full output, error stack, and execution time. There's also a coverage button that opens the HTML report we mentioned.
This thing’s perfect for debugging. Test fails? No need to dig through terminal logs—see error info directly in UI, code right beside it. Modify, save, interface auto-refreshes.
Watch Mode: Run Only Affected Tests
During normal development, I run npx vitest in watch mode. Its incremental testing is smart—change utils/formatPrice.ts, it only runs tests related to that file, not everything.
With many tests, this differential run saves serious time. One project I have with 800+ tests: full run takes 4 seconds, incremental usually just hundreds of milliseconds.
Debugging Tips
Test fails? Here’s what I use:
Run single test: Add .only after it
```typescript
it.only('this test has issues, run it alone', () => {
  // ...
})
```
Skip a test: Add .skip
```typescript
it.skip('skip this for now', () => {
  // ...
})
```
Update snapshots: Component structure changed, snapshot test fails
```bash
npx vitest -u   # -u is short for --update
```
console.log debugging: Yes, the old way works best. Vitest shows console output completely in test results.
Common Errors
| Error | Cause | Solution |
|---|---|---|
| Cannot find module | Path alias not configured | Check the alias setting in vitest.config |
| vi.mock is not a function | Wrong import | Use import { vi } from 'vitest' |
| Wrong timezone in tests | CI default is UTC | Set process.env.TZ = 'Asia/Shanghai' in setup |
I’ve hit all these pitfalls. Especially the timezone one—CI tests failing locally, spent half a day debugging before realizing it was a timezone issue.
Vitest TDD Practice Workflow
Build a function from scratch using TDD, then configure coverage reports.
⏱️ Estimated time: 30 min

Step 1: Install Vitest
Add Vitest to your Vite project:
```bash
npm install -D vitest
```
No extra config needed: Vitest reuses Vite settings automatically.

Step 2: Red Phase: Write a Failing Test
Create a test file with a guaranteed-failing test:
```typescript
import { describe, it, expect } from 'vitest'
import { formatPrice } from './formatPrice'

describe('formatPrice', () => {
  it('should format currency', () => {
    expect(formatPrice(1234.5)).toBe('¥1,234.50')
  })
})
```
Run npx vitest, confirm the test fails (red).

Step 3: Green Phase: Write Minimal Code
Create the implementation with just enough to pass:
```typescript
export function formatPrice(value: number): string {
  return '¥1,234.50' // Hard-code first
}
```
Run tests, confirm passing (green).

Step 4: Refactor Phase: Improve the Code
Refactor using Intl.NumberFormat:
```typescript
export function formatPrice(value: number): string {
  return new Intl.NumberFormat('zh-CN', {
    style: 'currency',
    currency: 'CNY',
  }).format(value)
}
```
Confirm tests still pass.

Step 5: Configure Coverage
Add to vitest.config.ts:
```typescript
test: {
  coverage: {
    provider: 'v8',
    thresholds: { statements: 80, branches: 75 }
  }
}
```
Run npx vitest --coverage to see the report.
Conclusion
After all this, it boils down to one thing: Vitest + TDD makes writing tests less painful.
Speed-wise, dropping from Jest’s tens of seconds to just a few seconds—that change isn’t just numbers, it’s experience. No more switching between coding and waiting for tests, no more “change one line, wait forever” torture. Native ESM support, Vite config reuse—real convenience.
TDD’s Red-Green-Refactor cycle sounds counterintuitive, but try it a few times and you’ll see the benefits: each step is tiny, each step verified, mental load stays light. No need to design the perfect solution upfront—tests help discover issues, iterate and improve.
Coverage configuration and mocking techniques are tool-level stuff—mastering them helps write more robust tests. But the real goal is building the habit of writing tests—not chasing numbers, but gaining confidence in your code.
If your project already uses Vite, migrating from Jest to Vitest costs almost nothing. Run npm install -D vitest, change the Jest imports to Vitest's (the API is basically identical), and it runs. Still hesitant? Try it on a small module first, experience that watch mode instant feedback.
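One optional step that eases the migration (a sketch, check the docs for your Vitest version): enable globals so existing Jest-style files that use describe/it/expect without imports keep working.

```typescript
// vitest.config.ts
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    // Expose describe/it/expect as globals, like Jest does,
    // so migrated test files don't need new import lines.
    globals: true,
  },
})
```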
Now go run your first test with Vitest, try the TDD cycle. Feel that “change something and instantly know if it’s right” confidence. Maybe you’ll end up like me—actually enjoying writing tests.
FAQ
What's the main difference between Vitest and Jest?
Speed and configuration. Vitest runs the same suites several times faster, supports ESM natively with no transform setup, and reuses your existing Vite config, aliases and plugins included.
How does TDD's Red-Green-Refactor cycle work?
• Red: Write a failing test first, defining the requirement
• Green: Write minimal code to pass—hard-coding is fine
• Refactor: Improve code structure with tests protecting you
Each step is small, mental load stays light, tests guard the whole way.
What coverage threshold should I set?
Somewhere between 75% and 85%. Lower is pointless; higher exhausts the team chasing edge cases that rarely matter.
How do I choose between vi.fn, vi.spyOn, and vi.mock?
• vi.fn(): Create brand-new fake function, tracks calls and args
• vi.spyOn(): Monitor an existing function, keeps original behavior, needs mockRestore() after use
• vi.mock(): Replace entire module, for simulating APIs or third-party libs
Mnemonic: fn makes fake, spy peeks, mock swaps out.
What's the advantage of Vitest watch mode?
It re-runs only the tests affected by the files you just changed, so feedback typically drops from a full run of several seconds to a few hundred milliseconds, without leaving your editor.
How do I fix timezone issues in tests?
CI environments usually default to UTC. Set process.env.TZ (e.g. 'Asia/Shanghai') in your test setup file so local and CI runs agree.
10 min read · Published on: Apr 29, 2026 · Modified on: Apr 29, 2026