conducting-browser-compatibility-tests
About
This skill automates cross-browser compatibility testing for web applications using tools like BrowserStack, Selenium Grid, or Playwright. It tests across Chrome, Firefox, Safari, and Edge to identify browser-specific bugs and ensure consistent functionality. Use it via the `/browser-test` command to generate a detailed report with compatibility issues and visual screenshots.
Documentation
Overview
This skill automates cross-browser compatibility testing, ensuring that web applications function correctly across various browsers and devices. It leverages BrowserStack, Selenium Grid, and Playwright to execute tests and identify browser-specific issues.
How It Works
- Configuring Browser Matrix: Defines the target browsers (Chrome, Firefox, Safari, Edge), versions, operating systems, and device configurations for testing.
- Generating Cross-Browser Tests: Creates and configures tests to run across the defined browser matrix, handling browser-specific quirks and setting up parallel execution for efficiency.
- Executing Tests: Runs the tests in parallel using BrowserStack, Selenium Grid, or Playwright, capturing screenshots and logs for analysis.
- Generating Compatibility Report: Compiles a detailed report highlighting any compatibility issues, including screenshots and error logs, for easy identification and resolution.
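The matrix-and-parallel-execution flow above can be sketched as a Playwright configuration. This is a minimal illustration, assuming Playwright is the chosen runner; the project names and worker count are placeholders, and Safari coverage comes via Playwright's WebKit engine:

```typescript
// playwright.config.ts — minimal browser-matrix sketch (assumes Playwright).
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  fullyParallel: true,  // run tests across browsers in parallel
  workers: 4,           // cap concurrent browser instances
  use: {
    screenshot: 'only-on-failure',  // capture evidence for the report
    trace: 'retain-on-failure',     // keep logs for analysis
  },
  projects: [
    { name: 'chrome',  use: { ...devices['Desktop Chrome'] } },
    { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
    { name: 'safari',  use: { ...devices['Desktop Safari'] } },
    { name: 'edge',    use: { ...devices['Desktop Edge'], channel: 'msedge' } },
  ],
});
```

Each entry in `projects` becomes one cell of the browser matrix, and every test file runs once per project.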
When to Use This Skill
This skill activates when you need to:
- Ensure a web application functions correctly across different browsers and devices.
- Identify browser-specific bugs or compatibility issues.
- Automate cross-browser testing as part of a CI/CD pipeline.
Examples
Example 1: Testing a new feature
User request: "Test browser compatibility for the new shopping cart feature."
The skill will:
- Configure the browser matrix with the latest versions of Chrome, Firefox, Safari, and Edge.
- Execute tests specifically targeting the shopping cart functionality across the configured browsers.
- Generate a report highlighting any compatibility issues encountered with the shopping cart feature, including screenshots.
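A test targeting the cart feature might look like the following sketch. It assumes a Playwright project is already configured; the URL and `data-testid` selectors are hypothetical and would need to match the application under test:

```typescript
// shopping-cart.spec.ts — sketch only; URL and selectors are hypothetical.
import { test, expect } from '@playwright/test';

test('item can be added to the cart', async ({ page, browserName }) => {
  await page.goto('https://example.com/products/widget');  // placeholder URL
  await page.getByTestId('add-to-cart').click();
  // The same assertion runs once per configured browser project,
  // so a failure here pinpoints a browser-specific issue.
  await expect(page.getByTestId('cart-count')).toHaveText('1');
  await page.screenshot({ path: `cart-${browserName}.png` });  // evidence for the report
});
```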
Example 2: Regression testing after an update
User request: "/browser-test"
The skill will:
- Use the default browser matrix (or a previously defined configuration).
- Run all existing tests across the configured browsers and devices.
- Provide a comprehensive report detailing any regressions or new compatibility issues introduced by the recent update.
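The reporting step can be illustrated with a small, self-contained helper that groups per-browser results and flags browser-specific failures. The result shape and function name are illustrative, not the skill's actual internals:

```typescript
// Illustrative compatibility-report aggregation (names are hypothetical).
interface TestResult {
  test: string;
  browser: string;
  passed: boolean;
  screenshot?: string;  // path captured on failure
}

// A test is a compatibility issue if it fails in some browsers but not all:
// a failure everywhere is a plain bug, not a browser-specific one.
function findCompatibilityIssues(results: TestResult[]): Map<string, string[]> {
  const byTest = new Map<string, TestResult[]>();
  for (const r of results) {
    const group = byTest.get(r.test) ?? [];
    group.push(r);
    byTest.set(r.test, group);
  }
  const issues = new Map<string, string[]>();
  for (const [name, group] of byTest) {
    const failing = group.filter((r) => !r.passed).map((r) => r.browser);
    if (failing.length > 0 && failing.length < group.length) {
      issues.set(name, failing);  // browser-specific failure
    }
  }
  return issues;
}
```

For example, a test that passes in Chrome but fails in Safari would be reported with `['safari']` as its failing-browser list.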
Best Practices
- Configuration: Clearly define the target browser matrix to ensure comprehensive testing.
- Test Design: Write tests that are robust and cover a wide range of user interactions.
- Report Analysis: Carefully analyze the generated reports to identify and address compatibility issues promptly.
Integration
This skill can be integrated into a CI/CD pipeline (for example, GitHub Actions or Jenkins) so that cross-browser tests run automatically as part of the deployment process. It can also work with issue-tracking systems to automatically create tickets for identified compatibility bugs.
Quick Install
/plugin add https://github.com/jeremylongshore/claude-code-plugins-plus/tree/main/browser-compatibility-tester
Copy and paste this command in Claude Code to install this skill.
