testing-anti-patterns
About
This Claude Skill helps developers avoid common testing anti-patterns when writing or modifying tests. It prevents testing mock behavior, adding test-only methods to production code, and mocking without understanding dependencies. Use it to ensure tests verify real behavior rather than mock implementations and to keep production code clean.
Documentation
Testing Anti-Patterns
Overview
Tests must verify real behavior, not mock behavior. Mocks are a means to isolate, not the thing being tested.
Core principle: Test what the code does, not what the mocks do.
Following strict TDD prevents these anti-patterns.
The Iron Laws
1. NEVER test mock behavior
2. NEVER add test-only methods to production classes
3. NEVER mock without understanding dependencies
Anti-Pattern 1: Testing Mock Behavior
The violation:
// ❌ BAD: Testing that the mock exists
test('renders sidebar', () => {
  render(<Page />);
  expect(screen.getByTestId('sidebar-mock')).toBeInTheDocument();
});
Why this is wrong:
- You're verifying the mock works, not that the component works
- Test passes when mock is present, fails when it's not
- Tells you nothing about real behavior
Your human partner's correction: "Are we testing the behavior of a mock?"
The fix:
// ✅ GOOD: Test real component or don't mock it
test('renders sidebar', () => {
  render(<Page />); // Don't mock sidebar
  expect(screen.getByRole('navigation')).toBeInTheDocument();
});
// OR if sidebar must be mocked for isolation:
// Don't assert on the mock - test Page's behavior with sidebar present
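If the sidebar genuinely has to be mocked for isolation (say it hits the network), keep every assertion on Page's own output. A minimal sketch with Vitest and React Testing Library; the module paths and the "dashboard" heading are assumptions, not part of the original example:
```tsx
import { render, screen } from '@testing-library/react';
import { expect, test, vi } from 'vitest';
import Page from './Page';

// Stub the sidebar purely for isolation; no assertion ever targets this stub.
vi.mock('./Sidebar', () => ({ default: () => null }));

test('renders the page heading with the sidebar stubbed out', () => {
  render(<Page />);
  // Verify Page's real behavior, not the mock's existence.
  expect(screen.getByRole('heading', { name: /dashboard/i })).toBeInTheDocument();
});
```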
Gate Function
BEFORE asserting on any mock element:
Ask: "Am I testing real component behavior or just mock existence?"
IF testing mock existence:
STOP - Delete the assertion or unmock the component
Test real behavior instead
Anti-Pattern 2: Test-Only Methods in Production
The violation:
// ❌ BAD: destroy() only used in tests
class Session {
  async destroy() { // Looks like production API!
    await this._workspaceManager?.destroyWorkspace(this.id);
    // ... cleanup
  }
}
// In tests
afterEach(() => session.destroy());
Why this is wrong:
- Production class polluted with test-only code
- Dangerous if accidentally called in production
- Violates YAGNI and separation of concerns
- Confuses object lifecycle with entity lifecycle
The fix:
// ✅ GOOD: Test utilities handle test cleanup
// Session has no destroy() - it's stateless in production
// In test-utils/
export async function cleanupSession(session: Session) {
  const workspace = session.getWorkspaceInfo();
  if (workspace) {
    await workspaceManager.destroyWorkspace(workspace.id);
  }
}
// In tests
afterEach(() => cleanupSession(session));
Gate Function
BEFORE adding any method to production class:
Ask: "Is this only used by tests?"
IF yes:
STOP - Don't add it
Put it in test utilities instead
Ask: "Does this class own this resource's lifecycle?"
IF no:
STOP - Wrong class for this method
Anti-Pattern 3: Mocking Without Understanding
The violation:
// ❌ BAD: Mock breaks test logic
test('detects duplicate server', async () => {
  // Mock prevents config write that test depends on!
  vi.mock('ToolCatalog', () => ({
    discoverAndCacheTools: vi.fn().mockResolvedValue(undefined)
  }));

  await addServer(config);
  await addServer(config); // Should throw - but won't!
});
Why this is wrong:
- Mocked method had side effect test depended on (writing config)
- Over-mocking to "be safe" breaks actual behavior
- Test passes for wrong reason or fails mysteriously
The fix:
// ✅ GOOD: Mock at correct level
test('detects duplicate server', async () => {
  // Mock the slow part, preserve behavior test needs
  vi.mock('MCPServerManager'); // Just mock slow server startup

  await addServer(config); // Config written
  await expect(addServer(config)).rejects.toThrow(); // Duplicate detected ✓
});
Gate Function
BEFORE mocking any method:
STOP - Don't mock yet
1. Ask: "What side effects does the real method have?"
2. Ask: "Does this test depend on any of those side effects?"
3. Ask: "Do I fully understand what this test needs?"
IF depends on side effects:
Mock at lower level (the actual slow/external operation)
OR use test doubles that preserve necessary behavior
NOT the high-level method the test depends on
IF unsure what test depends on:
Run test with real implementation FIRST
Observe what actually needs to happen
THEN add minimal mocking at the right level
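As a concrete sketch of "mock at a lower level": stub only the slow server startup and let the config-writing path run for real, so the duplicate check still has something to detect. Module names follow the example above; addServer's signature and the error message are assumptions:
```typescript
import { expect, test, vi } from 'vitest';
import { addServer } from './server-config'; // hypothetical module under test

// Stub only the slow external operation (spawning the MCP server process).
// Config reads and writes still happen for real, so the behavior the test
// depends on is preserved.
vi.mock('./MCPServerManager', () => ({
  startServer: vi.fn().mockResolvedValue({ pid: 1234 }),
}));

test('detects duplicate server', async () => {
  await addServer({ name: 'docs', command: 'docs-mcp' }); // config written for real
  await expect(addServer({ name: 'docs', command: 'docs-mcp' }))
    .rejects.toThrow(/already exists/); // duplicate detected against real config
});
```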
Red flags:
- "I'll mock this to be safe"
- "This might be slow, better mock it"
- Mocking without understanding the dependency chain
Anti-Pattern 4: Incomplete Mocks
The violation:
// ❌ BAD: Partial mock - only fields you think you need
const mockResponse = {
  status: 'success',
  data: { userId: '123', name: 'Alice' }
  // Missing: metadata that downstream code uses
};

// Later: breaks when code accesses response.metadata.requestId
Why this is wrong:
- Partial mocks hide structural assumptions - You only mocked fields you know about
- Downstream code may depend on fields you didn't include - Silent failures
- Tests pass but integration fails - Mock incomplete, real API complete
- False confidence - Test proves nothing about real behavior
The Iron Rule: Mock the COMPLETE data structure as it exists in reality, not just fields your immediate test uses.
The fix:
// ✅ GOOD: Mirror real API completeness
const mockResponse = {
  status: 'success',
  data: { userId: '123', name: 'Alice' },
  metadata: { requestId: 'req-789', timestamp: 1234567890 }
  // All fields real API returns
};
Gate Function
BEFORE creating mock responses:
Check: "What fields does the real API response contain?"
Actions:
1. Examine actual API response from docs/examples
2. Include ALL fields system might consume downstream
3. Verify mock matches real response schema completely
Critical:
If you're creating a mock, you must understand the ENTIRE structure
Partial mocks fail silently when code depends on omitted fields
If uncertain: Include all documented fields
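One cheap safeguard, where the codebase already defines (or can define) a type for the response, is to declare the mock with that full type so the compiler flags any omitted field. A sketch; the ApiResponse interface here is hypothetical and should mirror the documented payload:
```typescript
// Hypothetical interface mirroring the documented API response in full.
interface ApiResponse {
  status: 'success' | 'error';
  data: { userId: string; name: string };
  metadata: { requestId: string; timestamp: number };
}

// Typing the mock against the complete shape turns a forgotten field
// into a compile-time error instead of a silent runtime gap.
const mockResponse: ApiResponse = {
  status: 'success',
  data: { userId: '123', name: 'Alice' },
  metadata: { requestId: 'req-789', timestamp: 1234567890 },
};
```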
Anti-Pattern 5: Integration Tests as Afterthought
The violation:
✅ Implementation complete
❌ No tests written
"Ready for testing"
Why this is wrong:
- Testing is part of implementation, not optional follow-up
- TDD would have caught this
- Can't claim complete without tests
The fix:
TDD cycle:
1. Write failing test
2. Implement to pass
3. Refactor
4. THEN claim complete
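A minimal sketch of one pass through the cycle, using Vitest; slugify and its module are hypothetical:
```typescript
// Step 1: the test exists before the implementation and is watched failing.
import { expect, test } from 'vitest';
import { slugify } from './slugify'; // not written yet - this fails first

test('lowercases and hyphenates titles', () => {
  expect(slugify('Hello World')).toBe('hello-world');
});

// Step 2: write the minimal slugify() that makes this pass.
// Step 3: refactor with the test as a safety net - only then claim complete.
```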
When Mocks Become Too Complex
Warning signs:
- Mock setup longer than test logic
- Mocking everything to make test pass
- Mocks missing methods real components have
- Test breaks when mock changes
Your human partner's question: "Do we need to be using a mock here?"
Consider: integration tests with real components are often simpler than complex mocks.
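A hedged sketch of that trade-off: rendering the real component tree removes the mock setup entirely. The App component, its initialRoute prop, and the button label are assumptions:
```tsx
import { render, screen } from '@testing-library/react';
import { expect, test } from 'vitest';
import App from './App';

test('navigating to settings shows the save control', async () => {
  // No mocks: the real Page, Sidebar, and routing exercise each other.
  render(<App initialRoute="/settings" />);
  expect(await screen.findByRole('button', { name: /save/i })).toBeInTheDocument();
});
```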
TDD Prevents These Anti-Patterns
Why TDD helps:
- Write test first → Forces you to think about what you're actually testing
- Watch it fail → Confirms the test exercises real behavior, not mocks
- Minimal implementation → No test-only methods creep in
- Real dependencies → You see what the test actually needs before mocking
If you're testing mock behavior, you violated TDD - you added mocks without watching the test fail against real code first.
Quick Reference
| Anti-Pattern | Fix |
|---|---|
| Assert on mock elements | Test real component or unmock it |
| Test-only methods in production | Move to test utilities |
| Mock without understanding | Understand dependencies first, mock minimally |
| Incomplete mocks | Mirror real API completely |
| Tests as afterthought | TDD - tests first |
| Over-complex mocks | Consider integration tests |
Red Flags
- Assertion checks for *-mock test IDs
- Methods only called in test files
- Mock setup is >50% of test
- Test fails when you remove mock
- Can't explain why mock is needed
- Mocking "just to be safe"
The Bottom Line
Mocks are tools to isolate, not things to test.
If TDD reveals you're testing mock behavior, you've gone wrong.
Fix: Test real behavior or question why you're mocking at all.
Quick Install
/plugin add https://github.com/LerianStudio/ring/tree/main/testing-anti-patterns
Copy and paste this command in Claude Code to install this skill.
GitHub Repository
Related Skills
evaluating-llms-harness
Testing: This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.
go-test
Meta: The go-test skill provides expertise in Go's standard testing package and best practices. It helps developers implement table-driven tests, subtests, benchmarks, and coverage strategies while following Go conventions. Use it when writing test files, creating mocks, detecting race conditions, or organizing integration tests in Go projects.
generating-unit-tests
Meta: This skill automatically generates comprehensive unit tests from source code when developers request test creation. It supports multiple testing frameworks like Jest, pytest, and JUnit, intelligently detecting the appropriate one or using a specified framework. Use it when asking to "generate tests," "create unit tests," or using the "gut" shortcut with file paths.
webapp-testing
Testing: This Claude Skill provides a Playwright-based toolkit for testing local web applications through Python scripts. It enables frontend verification, UI debugging, screenshot capture, and log viewing while managing server lifecycles. Use it for browser automation tasks, but run scripts directly rather than reading their source code to avoid context pollution.
