go-test
About
The go-test skill provides expertise in Go's standard testing package and best practices. It helps developers implement table-driven tests, subtests, benchmarks, and coverage strategies while following Go conventions. Use it when writing test files, creating mocks, detecting race conditions, or organizing integration tests in Go projects.
Documentation
Go Testing Code Guide
Test File Structure
Maintain a one-to-one correspondence with the file under test. Place test files in the same directory as the target file.
File Naming
Format: {target-file-name}_test.go.
Example: user.go → user_test.go
Test Hierarchy
Organize tests by method (function) as major sections and by test case as minor sections. Complex methods may add intermediate sections grouped by scenario.
Test Coverage Selection
Omit obvious or overly simple logic (simple getters, constant returns). Prioritize testing business logic, conditional branches, and code with external dependencies.
Test Case Composition
At least one basic success case is required. Focus primarily on failure cases, boundary values, edge cases, and exception scenarios.
Test Independence
Each test should be executable independently. No test execution order dependencies. Initialize shared state for each test.
Given-When-Then Pattern
Structure test code in three stages: Given (setup), When (execution), Then (assertion). Separate stages with comments or blank lines for complex tests.
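A minimal sketch of the pattern, assuming a hypothetical Add function:

func TestAdd(t *testing.T) {
	// Given: two operands
	a, b := 2, 3

	// When: execute the function under test
	got := Add(a, b)

	// Then: assert on the result
	if got != 5 {
		t.Errorf("Add(%d, %d) = %d, want 5", a, b, got)
	}
}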
Test Data
Use hardcoded meaningful values. Avoid random data as it causes unreproducible failures. Fix seeds if necessary.
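If randomness is unavoidable, fixing the seed keeps failures reproducible. A minimal sketch with math/rand:

// A fixed seed makes the generated sequence identical on every run.
rng := rand.New(rand.NewSource(42))
input := rng.Intn(100) // deterministic, so a failure can be replayed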
Mocking Principles
Mock external dependencies (API, DB, file system). For modules within the same project, prefer actual usage; mock only when complexity is high.
Test Reusability
Extract repeated mocking setups, fixtures, and helper functions into common utilities. Be careful not to harm test readability through excessive abstraction.
Integration/E2E Testing
Unit tests are the priority. Write integration/E2E tests when complex flows or multi-module interactions are difficult to understand from code alone. Place in separate directories (tests/integration, tests/e2e).
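One common convention, assuming Go 1.17+ build tags, is to gate integration tests so plain go test skips them (the test name here is hypothetical):

// Run with: go test -tags=integration ./tests/integration/...
//go:build integration

package integration_test

import "testing"

func TestCheckoutFlow(t *testing.T) {
	// drive several modules together and assert on the end-to-end result
}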
Test Naming
Test names should clearly express "what is being tested". Recommended format: "should do X when Y". Focus on behavior rather than implementation details.
Assertion Count
Multiple related assertions in one test are acceptable, but separate tests when validating different concepts.
Test Functions
Format: func TestXxx(t *testing.T). Write one TestMethodName function per method and compose cases as subtests with t.Run().
Subtests
Pattern: t.Run("case name", func(t *testing.T) {...}). Each case should be independently executable; call t.Parallel() inside cases that can safely run concurrently.
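A sketch of independent, parallel subtests, assuming a hypothetical Parse function:

func TestParse(t *testing.T) {
	t.Run("empty input", func(t *testing.T) {
		t.Parallel() // runs concurrently with sibling parallel subtests
		if _, err := Parse(""); err == nil {
			t.Error("expected an error for empty input")
		}
	})
	t.Run("valid input", func(t *testing.T) {
		t.Parallel()
		if _, err := Parse("a=1"); err != nil {
			t.Errorf("unexpected error: %v", err)
		}
	})
}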
Table-Driven Tests
Recommended when multiple cases have similar structure. Define cases with []struct{ name, input, want, wantErr }.
Example:
tests := []struct {
	name    string
	input   int
	want    int
	wantErr bool
}{
	{"normal case", 5, 10, false},
	{"negative input", -1, 0, true},
}
for _, tt := range tests {
	t.Run(tt.name, func(t *testing.T) {
		got, err := Func(tt.input)
		if (err != nil) != tt.wantErr {
			t.Fatalf("Func(%d) error = %v, wantErr %v", tt.input, err, tt.wantErr)
		}
		if got != tt.want {
			t.Errorf("Func(%d) = %d, want %d", tt.input, got, tt.want)
		}
	})
}
Mocking
Utilize interface-based dependency injection. Prefer manual mocking; consider gomock for complex cases. Define test-only implementations within _test.go.
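A minimal hand-written mock, assuming hypothetical User, UserStore, and NewService names wired through constructor injection:

// UserStore is the interface the production code depends on.
type UserStore interface {
	Find(id string) (User, error)
}

// stubStore is a test-only implementation kept in a _test.go file.
type stubStore struct {
	user User
	err  error
}

func (s stubStore) Find(id string) (User, error) { return s.user, s.err }

func TestGreet(t *testing.T) {
	svc := NewService(stubStore{user: User{Name: "kim"}})
	if got := svc.Greet("42"); got != "hello, kim" {
		t.Errorf("Greet() = %q, want %q", got, "hello, kim")
	}
}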
Error Verification
Use errors.Is() and errors.As(). Avoid string comparison of error messages; verify with sentinel errors or error types instead.
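A sketch with a sentinel error and a typed error; ErrNotFound, ValidationError, and Lookup are assumed names:

var ErrNotFound = errors.New("user not found")

type ValidationError struct{ Field string }

func (e *ValidationError) Error() string { return e.Field + " is invalid" }

func TestLookup(t *testing.T) {
	_, err := Lookup("missing-id")

	// Matches the sentinel anywhere in the wrap chain, unlike string comparison.
	if !errors.Is(err, ErrNotFound) {
		t.Errorf("want ErrNotFound, got %v", err)
	}

	// Extracts a typed error so its fields can be inspected.
	var verr *ValidationError
	if errors.As(err, &verr) {
		t.Logf("invalid field: %s", verr.Field)
	}
}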
Setup/Teardown
Use TestMain(m *testing.M) for global setup/teardown. For individual test preparation, do it within each test function or extract to helper functions.
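A sketch of package-level setup/teardown; setup and teardown are hypothetical helpers:

func TestMain(m *testing.M) {
	setup()         // e.g. start a shared test database
	code := m.Run() // runs every test in the package
	teardown()      // run explicitly; defer is skipped by os.Exit
	os.Exit(code)
}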
Test Helpers
Extract repeated setup/verification into testXxx(t *testing.T, ...) helpers. Receive *testing.T as first argument and call t.Helper().
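A sketch of such a helper; the config-file scenario is illustrative:

// testWriteConfig reports failures at the caller's line thanks to t.Helper().
func testWriteConfig(t *testing.T, body string) string {
	t.Helper()
	path := filepath.Join(t.TempDir(), "config.yaml")
	if err := os.WriteFile(path, []byte(body), 0o644); err != nil {
		t.Fatalf("write config: %v", err)
	}
	return path
}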
Benchmarks
Write func BenchmarkXxx(b *testing.B) for performance-critical code. Loop with b.N and use b.ResetTimer() to exclude setup time.
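A sketch, assuming a hypothetical Encode function with expensive setup:

func BenchmarkEncode(b *testing.B) {
	payload := buildLargePayload() // setup excluded from timing below
	b.ResetTimer()                 // discard setup cost
	for i := 0; i < b.N; i++ {
		Encode(payload)
	}
}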
Quick Install
/plugin add https://github.com/KubrickCode/ai-config-toolkit/tree/main/go-test
Copy and paste this command in Claude Code to install this skill.
