webapp-testing
About
This Claude Skill provides a Playwright-based toolkit for testing local web applications through Python scripts. It enables frontend verification, UI debugging, screenshot capture, and log viewing while managing server lifecycles. Use it for browser automation tasks but run scripts directly rather than reading their source code to avoid context pollution.
Documentation
Web Application Testing
To test local web applications, write native Python Playwright scripts.
Helper Scripts Available:
scripts/with_server.py - Manages server lifecycle (supports multiple servers)
Always run scripts with --help first to see usage. DO NOT read the source until you have tried running the script and found that a customized solution is absolutely necessary. These scripts can be very large and will pollute your context window; they exist to be called directly as black-box scripts rather than ingested into your context.
Decision Tree: Choosing Your Approach
User task → Is it static HTML?
├─ Yes → Read HTML file directly to identify selectors
│ ├─ Success → Write Playwright script using selectors
│ └─ Fails/Incomplete → Treat as dynamic (below)
│
└─ No (dynamic webapp) → Is the server already running?
├─ No → Run: python scripts/with_server.py --help
│ Then use the helper + write simplified Playwright script
│
└─ Yes → Reconnaissance-then-action:
1. Navigate and wait for networkidle
2. Take screenshot or inspect DOM
3. Identify selectors from rendered state
4. Execute actions with discovered selectors
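For the static-HTML branch of the tree above, a minimal sketch might look like this (the file name and selector are hypothetical placeholders, to be replaced with what you find in the actual HTML):

from pathlib import Path
from playwright.sync_api import sync_playwright

html_path = Path('page.html').resolve()  # hypothetical local HTML file

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto(f'file://{html_path}')  # file:// URL, no server needed
    page.click('#submit-btn')  # selector identified by reading the HTML source
    page.screenshot(path='/tmp/static_result.png')
    browser.close()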
Example: Using with_server.py
To start a server, run --help first, then use the helper:
Single server:
python scripts/with_server.py --server "npm run dev" --port 5173 -- python your_automation.py
Multiple servers (e.g., backend + frontend):
python scripts/with_server.py \
--server "cd backend && python server.py" --port 3000 \
--server "cd frontend && npm run dev" --port 5173 \
-- python your_automation.py
To create an automation script, include only Playwright logic (servers are managed automatically):
from playwright.sync_api import sync_playwright
with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)  # Always launch chromium in headless mode
    page = browser.new_page()
    page.goto('http://localhost:5173')  # Server already running and ready
    page.wait_for_load_state('networkidle')  # CRITICAL: Wait for JS to execute
    # ... your automation logic
    browser.close()
Reconnaissance-Then-Action Pattern
1. Inspect the rendered DOM:
   page.screenshot(path='/tmp/inspect.png', full_page=True)
   content = page.content()
   page.locator('button').all()
2. Identify selectors from the inspection results
3. Execute actions using the discovered selectors (see the sketch below)
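Putting the three steps together, a minimal sketch could look like this (the port and button label are assumptions, not part of the toolkit):

from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto('http://localhost:5173')  # hypothetical port
    page.wait_for_load_state('networkidle')  # wait for JS before inspecting

    # Step 1: inspect the rendered state
    page.screenshot(path='/tmp/inspect.png', full_page=True)
    for button in page.locator('button').all():
        print(button.inner_text())  # list candidate targets

    # Steps 2-3: act on a selector discovered above (hypothetical label)
    page.click('text=Submit')
    browser.close()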
Common Pitfall
❌ Don't inspect the DOM before waiting for networkidle on dynamic apps
✅ Do wait for page.wait_for_load_state('networkidle') before inspection
Best Practices
- Use bundled scripts as black boxes - To accomplish a task, consider whether one of the scripts available in scripts/ can help. These scripts handle common, complex workflows reliably without cluttering the context window. Use --help to see usage, then invoke directly.
- Use sync_playwright() for synchronous scripts
- Always close the browser when done
- Use descriptive selectors: text=, role=, CSS selectors, or IDs
- Add appropriate waits: page.wait_for_selector() or page.wait_for_timeout() (see the sketch below)
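As an illustration of the selector and wait practices, here is a small sketch (the URL, element names, and field ID are assumptions for illustration):

from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto('http://localhost:5173')  # hypothetical URL
    page.wait_for_selector('text=Dashboard')  # explicit wait on a known element
    page.fill('#email', 'user@example.com')  # ID selector; hypothetical field
    page.click('role=button[name="Save"]')  # role selector
    browser.close()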
Reference Files
- examples/ - Examples showing common patterns:
  - element_discovery.py - Discovering buttons, links, and inputs on a page
  - static_html_automation.py - Using file:// URLs for local HTML
  - console_logging.py - Capturing console logs during automation (a minimal sketch of this pattern follows below)
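The console-capture pattern boils down to registering a handler before navigation; a minimal sketch (the URL is an assumption):

from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    # Register the handler before navigating so early messages are captured
    page.on('console', lambda msg: print(f'[{msg.type}] {msg.text}'))
    page.goto('http://localhost:5173')  # hypothetical URL
    page.wait_for_load_state('networkidle')
    browser.close()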
Quick Install
/plugin add https://github.com/lifangda/claude-plugins/tree/main/webapp-testing
Copy and paste this command in Claude Code to install this skill.
GitHub Repository