
managing-snapshot-tests

jeremylongshore
Updated Today
Meta · ai · testing · design

About

This skill helps developers manage snapshot tests by analyzing failures and selectively updating snapshots, distinguishing intentional changes from regressions. It triggers when users mention snapshot tests or failures, or request updates with commands like "/snapshot-manager". The skill supports popular testing frameworks including Jest, Vitest, Playwright, and Storybook.

Documentation

Overview

This skill empowers Claude to efficiently manage snapshot tests by analyzing differences, selectively updating snapshots based on intentional changes, and identifying potential regressions. It provides a streamlined approach to maintain snapshot test suites across various JavaScript testing frameworks.

How It Works

  1. Analyzing Failures: Reviews failed snapshot diffs, highlighting intentional and unintentional changes with side-by-side comparisons.
  2. Selective Updating: Updates specific snapshots that reflect intentional UI or code changes, while preserving snapshots that have caught regressions.
  3. Batch Processing: Allows for batch updating of related snapshots to streamline the update process.

When to Use This Skill

This skill activates when you need to:

  • Analyze snapshot test failures after code changes.
  • Update snapshot tests to reflect intentional UI changes.
  • Identify and preserve snapshots that are catching regressions.

Examples

Example 1: Updating Snapshots After UI Changes

User request: "I've made some UI changes and now my snapshot tests are failing. Can you update the snapshots?"

The skill will:

  1. Analyze the snapshot failures, identifying the diffs caused by the UI changes.
  2. Update the relevant snapshot files to reflect the new UI.
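The update step maps onto each framework's documented snapshot-update flag; a quick reference (the file path in the scoped example is hypothetical):

```javascript
// Snapshot-update flags for the frameworks this skill supports.
const updateCommands = {
  jest: "npx jest -u",                                  // alias: --updateSnapshot
  vitest: "npx vitest run -u",                          // alias: --update
  playwright: "npx playwright test --update-snapshots",
  storybook: "npx test-storybook -u",                   // Storybook test-runner
};

// Prefer scoping the update to the tests you actually changed,
// e.g. one file (src/Button.test.js is a made-up example path):
const selective = `${updateCommands.jest} src/Button.test.js`;
console.log(selective); // npx jest -u src/Button.test.js
```

Scoping the update like this is what makes the workflow selective: unrelated failing snapshots keep failing and can be investigated separately.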

Example 2: Investigating Unexpected Snapshot Changes

User request: "My snapshot tests are failing, but I don't expect any UI changes. Can you help me figure out what's going on?"

The skill will:

  1. Analyze the snapshot failures, highlighting the unexpected diffs.
  2. Present the diffs to the user for review, indicating potential regressions.
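The diff presentation in step 2 can be approximated with a naive line-by-line comparison; real runners use richer diffing (Jest, for example, ships jest-diff), so treat this as a sketch of the idea only.

```javascript
// Naive line-level diff to surface unexpected snapshot changes for review.
// Illustrative only: it compares lines positionally and does not realign
// insertions or deletions the way a real diff algorithm would.
function diffLines(stored, received) {
  const a = stored.split("\n");
  const b = received.split("\n");
  const out = [];
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    if (a[i] === b[i]) continue;
    if (a[i] !== undefined) out.push(`- ${a[i]}`);
    if (b[i] !== undefined) out.push(`+ ${b[i]}`);
  }
  return out;
}

const changes = diffLines("<button>\n  Save\n</button>", "<button>\n  Submit\n</button>");
console.log(changes); // [ '-   Save', '+   Submit' ]
```

An empty diff means the snapshot still matches; anything else is flagged for the user to classify as intentional or a regression.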

Best Practices

  • Clear Communication: Clearly state the intention behind updating or analyzing snapshots.
  • Framework Awareness: Specify the testing framework (Jest, Vitest, etc.) if known for more accurate analysis.
  • Selective Updates: Avoid blindly updating all snapshots. Focus on intentional changes and investigate unexpected diffs.

Integration

This skill works independently but can be used in conjunction with other code analysis and testing tools to provide a comprehensive testing workflow.

Quick Install

/plugin add https://github.com/jeremylongshore/claude-code-plugins-plus/tree/main/snapshot-test-manager

Copy and paste this command into Claude Code to install this skill.

GitHub Repository

jeremylongshore/claude-code-plugins-plus
Path: backups/skills-migration-20251108-070147/plugins/testing/snapshot-test-manager/skills/snapshot-test-manager
Tags: ai, automation, claude-code, devops, marketplace, mcp

Related Skills

sglang

Meta

SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.


llamaguard

Other

LlamaGuard is Meta's 7-8B parameter model for moderating LLM inputs and outputs across six safety categories like violence and hate speech. It offers 94-95% accuracy and can be deployed using vLLM, Hugging Face, or Amazon SageMaker. Use this skill to easily integrate content filtering and safety guardrails into your AI applications.


evaluating-llms-harness

Testing

This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.


langchain

Meta

LangChain is a framework for building LLM applications using agents, chains, and RAG pipelines. It supports multiple LLM providers, offers 500+ integrations, and includes features like tool calling and memory management. Use it for rapid prototyping and deploying production systems like chatbots, autonomous agents, and question-answering services.
