Managing Snapshot Tests
About
This skill helps developers manage snapshot test failures by intelligently analyzing diffs and selectively updating snapshots across Jest, Vitest, Playwright, and Storybook. It distinguishes intentional changes from regressions to maintain test integrity. Use it when handling snapshot failures or updates, or trigger it directly with the `/snapshot-manager` command.
Quick Install
Claude Code
Recommended: in Claude Code, run `/plugin add https://github.com/jeremylongshore/claude-code-plugins-plus-skills`
Alternative: `git clone https://github.com/jeremylongshore/claude-code-plugins-plus-skills.git ~/".claude/skills/Managing Snapshot Tests"`
Copy and paste the command into Claude Code to install this skill.
Documentation
Overview
This skill empowers Claude to efficiently manage snapshot tests by analyzing differences, selectively updating snapshots based on intentional changes, and identifying potential regressions. It provides a streamlined approach to maintain snapshot test suites across various JavaScript testing frameworks.
How It Works
- Analyzing Failures: Reviews failed snapshot diffs, highlighting intentional and unintentional changes with side-by-side comparisons.
- Selective Updating: Updates specific snapshots that reflect intentional UI or code changes, while preserving snapshots that have caught regressions.
- Batch Processing: Allows for batch updating of related snapshots to streamline the update process.
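In practice, the workflow above operates on ordinary snapshot tests. A minimal Vitest-style sketch (the `formatUser` helper and its shape are hypothetical) illustrates the selective-update idea: accept the snapshots that reflect a deliberate change, and leave the rest alone.

```ts
import { describe, expect, it } from "vitest";

// Hypothetical helper under test; in a real suite this would be an imported module.
interface User {
  id: number;
  name: string;
  role: "admin" | "member";
}

function formatUser(user: User): string {
  return `${user.name} (${user.role})`;
}

describe("formatUser", () => {
  it("renders a readable label", () => {
    // The first run writes the serialized output to __snapshots__/; later runs fail
    // if the output drifts. An intentional change to the label format is accepted by
    // updating only this snapshot (e.g. `vitest -u -t "renders a readable label"`),
    // while unrelated snapshot failures are investigated as potential regressions.
    expect(formatUser({ id: 1, name: "Ada", role: "admin" })).toMatchSnapshot();
  });
});
```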
When to Use This Skill
This skill activates when you need to:
- Analyze snapshot test failures after code changes.
- Update snapshot tests to reflect intentional UI changes.
- Identify and preserve snapshots that are catching regressions.
Examples
Example 1: Updating Snapshots After UI Changes
User request: "I've made some UI changes and now my snapshot tests are failing. Can you update the snapshots?"
The skill will:
- Analyze the snapshot failures, identifying the diffs caused by the UI changes.
- Update the relevant snapshot files to reflect the new UI.
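For UI-level changes, the same selective approach applies to visual snapshots. A minimal Playwright sketch, assuming a hypothetical `/settings` route and a configured `baseURL`:

```ts
import { test, expect } from "@playwright/test";

// Visual snapshot for one page; the route and spec filename below are assumptions.
test("settings page matches the stored screenshot", async ({ page }) => {
  await page.goto("/settings");
  // After an intentional redesign of this page, regenerate only this baseline with:
  //   npx playwright test settings.spec.ts --update-snapshots
  // rather than rerunning the entire suite with the update flag.
  await expect(page).toHaveScreenshot("settings.png");
});
```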
Example 2: Investigating Unexpected Snapshot Changes
User request: "My snapshot tests are failing, but I don't expect any UI changes. Can you help me figure out what's going on?"
The skill will:
- Analyze the snapshot failures, highlighting the unexpected diffs.
- Present the diffs to the user for review, indicating potential regressions.
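When no change is expected, inline snapshots make an unexpected diff easy to inspect, because the stored value lives in the test file itself. A minimal sketch, assuming a hypothetical `formatPrice` helper:

```ts
import { describe, expect, it } from "vitest";

// Hypothetical price formatter; the expected value is stored inline, so any
// unexpected change appears directly in the test-file diff during review.
function formatPrice(cents: number): string {
  return `$${(cents / 100).toFixed(2)}`;
}

describe("formatPrice", () => {
  it("formats cents as dollars", () => {
    // If a refactor suddenly yields "$12.5" here, the failing diff against the
    // inline snapshot is the regression signal; the fix belongs in the code,
    // not in an overwritten snapshot.
    expect(formatPrice(1250)).toMatchInlineSnapshot(`"$12.50"`);
  });
});
```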
Best Practices
- Clear Communication: Clearly state the intention behind updating or analyzing snapshots.
- Framework Awareness: Specify the testing framework (Jest, Vitest, etc.) if known for more accurate analysis.
- Selective Updates: Avoid blindly updating all snapshots. Focus on intentional changes and investigate unexpected diffs.
Integration
This skill works independently but can be used in conjunction with other code analysis and testing tools to provide a comprehensive testing workflow.
Related Skills
content-collections (Meta)
This skill provides a production-tested setup for Content Collections, a TypeScript-first tool that transforms Markdown/MDX files into type-safe data collections with Zod validation. Use it when building blogs, documentation sites, or content-heavy Vite + React applications to ensure type safety and automatic content validation. It covers everything from Vite plugin configuration and MDX compilation to deployment optimization and schema validation.
creating-opencode-plugins (Meta)
This skill provides the structure and API specifications for creating OpenCode plugins that hook into 25+ event types like commands, files, and LSP operations. It offers implementation patterns for JavaScript/TypeScript modules that intercept and extend the AI assistant's lifecycle. Use it when you need to build event-driven plugins for monitoring, custom handling, or extending OpenCode's capabilities.
sglang (Meta)
SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.
evaluating-llms-harness (Testing)
This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.
