Browse and install Claude Skills to enhance your development workflow. Currently showing 625 skills.
This skill enables Claude to audit access control implementations using the access-control-auditor plugin to identify vulnerabilities and misconfigurations. It's designed for analyzing IAM policies, ACLs, and other access mechanisms in cloud environments, applications, or infrastructure. Use it when users request access control audits, permission checks, or security reviews to ensure compliance and identify privilege escalation risks.
/plugin add https://github.com/jeremylongshore/claude-code-plugins-plus/tree/main/access-control-auditor
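As an illustration of the kind of check such an auditor performs, here is a minimal sketch that flags overly permissive statements in an AWS-style IAM policy document. The function name and sample policy are illustrative, not part of the plugin:

```python
import json

def find_wildcard_statements(policy_json: str) -> list[dict]:
    """Flag Allow statements that grant any action or any resource."""
    policy = json.loads(policy_json)
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        # Action/Resource may be a single string or a list; normalize to lists.
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if "*" in actions or "*" in resources:
            findings.append(stmt)
    return findings

policy = """{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::logs/*"},
    {"Effect": "Allow", "Action": "*", "Resource": "*"}
  ]
}"""
print(len(find_wildcard_statements(policy)))  # 1: only the admin-style statement
```

A real audit would also trace privilege-escalation chains (e.g. `iam:PassRole` plus `iam:CreatePolicyVersion`), but wildcard detection is the usual first pass.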
This Claude Skill automates overnight software development by using Git hooks to enforce test-driven development (TDD). It autonomously builds features, refactors code, and fixes bugs while ensuring all tests pass before allowing commits. Use it when you need autonomous coding sessions that cannot commit anything until the full test suite passes.
/plugin add https://github.com/jeremylongshore/claude-code-plugins-plus/tree/main/overnight-dev
The agent-context-loader automatically detects and loads AGENTS.md files from the current working directory when starting a session or changing directories. It ensures agent-specific instructions are incorporated into Claude Code's context alongside CLAUDE.md, enabling specialized agent behaviors. It triggers automatically without user action, or can be invoked manually to load agent context.
/plugin add https://github.com/jeremylongshore/claude-code-plugins-plus/tree/main/agent-context-loader
This Claude Skill automatically activates to assist with all Google Cloud Agent Development Kit (ADK) operations. It helps developers build and deploy production-ready agent systems, including multi-agent architectures, containerized deployments, and RAG implementations. Use it for agent scaffolding, orchestration, and CI/CD pipeline setup when working with Google's agent frameworks.
/plugin add https://github.com/jeremylongshore/claude-code-plugins-plus/tree/main/agent-sdk-master
Vertex AI Media Master automatically activates for Google Vertex AI multimodal operations including video processing, audio generation, and image creation. It handles tasks like marketing campaign automation, social media content creation, and ad creative generation. Use this skill when working with Vertex AI's multimodal features, especially for media asset workflows and content generation.
/plugin add https://github.com/jeremylongshore/claude-code-plugins-plus/tree/main/vertex-media-master
The yaml-master skill automatically activates when working with YAML files, configurations, and CI/CD pipelines. It provides intelligent validation, schema inference, linting, and format conversion between YAML/JSON/TOML/XML. This skill helps developers manage YAML-based workflows with deep understanding of specifications and common anti-patterns.
/plugin add https://github.com/jeremylongshore/claude-code-plugins-plus/tree/main/yaml-master
This Claude Skill enforces Taskwarrior integration for all coding tasks, activating automatically whenever task-management terms are mentioned. It decomposes work into tracked tasks with full lifecycle management (add → start → done) and integrates with Timewarrior for automatic time tracking. Use it to ensure all coding work is tracked and accounted for before implementation begins.
/plugin add https://github.com/jeremylongshore/claude-code-plugins-plus/tree/main/001-jeremy-taskwarrior-integration
This skill helps developers implement Real User Monitoring (RUM) to track user experience metrics like Core Web Vitals and page load times. It guides you through selecting a RUM platform, designing an instrumentation strategy, and adding the necessary tracking code. Use it when you need to set up performance monitoring for a web application.
/plugin add https://github.com/jeremylongshore/claude-code-plugins-plus/tree/main/real-user-monitoring
This skill automatically detects performance regressions in CI/CD pipelines by analyzing metrics like response time and throughput. It compares current performance against baselines or thresholds and performs statistical significance analysis to identify degradation. Use it to catch performance issues early when users mention regression detection, baseline comparison, or performance budget violations.
/plugin add https://github.com/jeremylongshore/claude-code-plugins-plus/tree/main/performance-regression-detector
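A minimal sketch of the baseline comparison described above, assuming latency samples in milliseconds. Using a Welch t-statistic against a fixed critical value of ~2.0 is a simplification of a full significance test, but it captures the idea:

```python
from math import sqrt
from statistics import mean, stdev

def is_regression(baseline: list[float], current: list[float],
                  t_critical: float = 2.0) -> bool:
    """Return True if current latencies are significantly worse than baseline.

    Welch's t-statistic; t_critical ~= 2.0 approximates a 95% confidence
    threshold for reasonably large samples.
    """
    m_b, m_c = mean(baseline), mean(current)
    se = sqrt(stdev(baseline) ** 2 / len(baseline)
              + stdev(current) ** 2 / len(current))
    t = (m_c - m_b) / se
    return t > t_critical  # one-sided: only slower counts as a regression

baseline = [101, 99, 100, 102, 98, 100, 101, 99]
slower   = [118, 121, 119, 122, 120, 118, 121, 120]
print(is_regression(baseline, slower))    # True: clear degradation
print(is_regression(baseline, baseline))  # False: no change
```

In a real pipeline the baseline samples would come from the previous release's CI run rather than being hard-coded.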
This skill provides comprehensive performance-optimization advice when developers request improvements, reviews, or bottleneck analysis. It analyzes the frontend, backend, and infrastructure layers to identify issues and anti-patterns, then delivers prioritized, actionable recommendations with estimated gains. Trigger it with phrases like "optimize performance" or "improve speed."
/plugin add https://github.com/jeremylongshore/claude-code-plugins-plus/tree/main/performance-optimization-advisor
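As a sketch of how recommendations with estimated gains might be prioritized — the scoring rule and field names are illustrative assumptions, not the plugin's actual output format:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    title: str
    layer: str              # "frontend", "backend", or "infrastructure"
    estimated_gain_pct: float
    effort_days: float

def prioritize(recs: list[Recommendation]) -> list[Recommendation]:
    """Rank by estimated gain per day of effort (a simple impact/effort score)."""
    return sorted(recs, key=lambda r: r.estimated_gain_pct / r.effort_days,
                  reverse=True)

recs = [
    Recommendation("Add DB index on orders.user_id", "backend", 35, 0.5),
    Recommendation("Enable brotli on static assets", "frontend", 15, 0.5),
    Recommendation("Rewrite service in Rust", "backend", 40, 30),
]
for r in prioritize(recs):
    print(r.title)  # cheap, high-impact fixes surface first
```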
This skill enables Claude to aggregate and centralize performance metrics from various sources like applications, databases, and external services. It is triggered by requests to consolidate metrics for monitoring and helps design a metrics taxonomy, choose aggregation tools, and set up dashboards and alerts.
/plugin add https://github.com/jeremylongshore/claude-code-plugins-plus/tree/main/metrics-aggregator
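A toy sketch of the aggregation pattern described above, assuming a dotted `source.metric` taxonomy; the class and method names are hypothetical:

```python
from collections import defaultdict
from statistics import quantiles

class MetricsAggregator:
    """Collect samples from many sources under a dotted-name taxonomy."""

    def __init__(self) -> None:
        self._samples: dict[str, list[float]] = defaultdict(list)

    def record(self, source: str, metric: str, value: float) -> None:
        self._samples[f"{source}.{metric}"].append(value)

    def summary(self, name: str) -> dict:
        xs = self._samples[name]
        return {
            "count": len(xs),
            "avg": sum(xs) / len(xs),
            "p95": quantiles(xs, n=20)[-1],  # 95th percentile
        }

agg = MetricsAggregator()
for ms in [12, 15, 11, 13, 200, 14, 12, 13, 15, 11]:
    agg.record("api", "latency_ms", ms)
s = agg.summary("api.latency_ms")
print(s["count"], round(s["avg"], 1))  # the p95 exposes the 200 ms outlier
```

A production setup would push these summaries to a time-series backend rather than keeping them in memory.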
This skill enables Claude to detect potential memory leaks and analyze memory usage patterns when developers request memory analysis. It identifies common issues like unremoved event listeners, uncancelled timers, and circular references that prevent garbage collection. Use it to proactively find and fix memory leaks for improved application performance and stability.
/plugin add https://github.com/jeremylongshore/claude-code-plugins-plus/tree/main/memory-leak-detector
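The listener-leak pattern mentioned above can be demonstrated in a few lines of pure Python: a bound method stored in a global registry keeps its object alive until the listener is removed.

```python
import gc
import weakref

listeners = []  # global registry: a common source of leaks

class Widget:
    def on_event(self, evt):
        pass

def attach(widget: Widget) -> None:
    listeners.append(widget.on_event)  # the bound method holds the widget

w = Widget()
ref = weakref.ref(w)
attach(w)
del w
gc.collect()
print(ref() is None)  # False: the registry still keeps the widget alive

listeners.clear()     # the fix: deregister listeners when done
gc.collect()
print(ref() is None)  # True: the widget can now be collected
```

The same `weakref` trick is a cheap way to verify in tests that an object is actually released.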
This skill enables Claude to generate CI/CD pipeline configurations for platforms like GitHub Actions, GitLab CI, and Jenkins. It automates software delivery by creating pipelines for stages such as testing, building, security scanning, and deployment. Use it when you need to set up or automate a CI/CD workflow for multi-environment deployments.
/plugin add https://github.com/jeremylongshore/claude-code-plugins-plus/tree/main/skill-adapter
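For illustration, a pipeline of the kind this skill might generate for GitHub Actions; the Node.js toolchain and npm scripts are assumptions about the target project, not plugin output:

```yaml
name: ci
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: {node-version: 20}
      - run: npm ci
      - run: npm test
  build:
    needs: test          # build only runs once the test stage passes
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run build
```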
LlamaIndex is a data framework for building RAG-powered LLM applications, specializing in document ingestion, indexing, and querying. It provides key features like vector indices, query engines, and agents, and supports over 300 data connectors. Use it for document Q&A, chatbots, and knowledge retrieval when building data-centric applications.
/plugin add https://github.com/zechenzhangAGI/AI-research-SKILLs/tree/main/llamaindex
LangChain is a framework for building LLM applications using agents, chains, and RAG pipelines. It supports multiple LLM providers, offers 500+ integrations, and includes features like tool calling and memory management. Use it for rapid prototyping and deploying production systems like chatbots, autonomous agents, and question-answering services.
/plugin add https://github.com/zechenzhangAGI/AI-research-SKILLs/tree/main/langchain
This Claude Skill serves LLMs with high throughput using vLLM's PagedAttention and continuous batching. It's ideal for deploying production LLM APIs, optimizing inference performance, or serving models with limited GPU memory. The skill supports OpenAI-compatible endpoints, multiple quantization methods, and tensor parallelism.
/plugin add https://github.com/zechenzhangAGI/AI-research-SKILLs/tree/main/vllm
TensorRT-LLM is an NVIDIA library that optimizes LLM inference for maximum throughput and minimum latency on NVIDIA GPUs. It targets production deployments that need 10-100× the performance of baseline PyTorch, with support for quantization and multi-GPU scaling. Use it when you need top performance on NVIDIA hardware; choose vLLM instead for simpler setups, or llama.cpp for CPU and Apple Silicon.
/plugin add https://github.com/zechenzhangAGI/AI-research-SKILLs/tree/main/tensorrt-llm
SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.
/plugin add https://github.com/zechenzhangAGI/AI-research-SKILLs/tree/main/sglang
llama-cpp enables efficient LLM inference on non-NVIDIA hardware including CPUs, Apple Silicon, and consumer GPUs. It's ideal for edge deployment, Macs, or when CUDA is unavailable, offering GGUF quantization for reduced memory usage. It provides a 4-10× speedup over PyTorch on CPU with minimal dependencies.
/plugin add https://github.com/zechenzhangAGI/AI-research-SKILLs/tree/main/llama-cpp
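A back-of-the-envelope sketch of why GGUF quantization matters for memory, assuming roughly 4.5 effective bits per weight for a 4-bit quant (an approximation; actual GGUF formats vary in bits per weight):

```python
def model_memory_gb(n_params_billion: float, bits_per_weight: float) -> float:
    """Rough weight-memory estimate: params x bits / 8, in GiB."""
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1024**3

fp16 = model_memory_gb(7, 16)   # 7B model at fp16
q4   = model_memory_gb(7, 4.5)  # same model, ~4.5 bits/weight quantized
print(round(fp16, 1), round(q4, 1))  # roughly 13.0 vs 3.7 GiB
```

This is weights only — the KV cache and activation memory come on top, but the ~3.5× reduction is what makes 7B models fit on consumer hardware.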
This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.
/plugin add https://github.com/zechenzhangAGI/AI-research-SKILLs/tree/main/lm-evaluation-harness
This Claude Skill provides expert guidance for PyTorch Fully Sharded Data Parallel (FSDP) training, helping developers implement distributed training solutions. It covers key features like parameter sharding, mixed precision, CPU offloading, and FSDP2 for large-scale model training. Use this skill when working with FSDP APIs, debugging distributed training code, or learning best practices for sharded data parallelism.
/plugin add https://github.com/zechenzhangAGI/AI-research-SKILLs/tree/main/pytorch-fsdp
This Claude Skill trains large language models (2B-462B parameters) using NVIDIA's Megatron-Core framework with advanced parallelism strategies. Use it when training models over 1B parameters, needing maximum GPU efficiency (47% MFU on H100), or requiring tensor/pipeline/sequence parallelism. It's a production-ready framework proven on models like Nemotron, LLaMA, and DeepSeek.
/plugin add https://github.com/zechenzhangAGI/AI-research-SKILLs/tree/main/megatron-core
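The tensor/pipeline/data parallelism mentioned above factors the GPU count multiplicatively, which this small arithmetic sketch illustrates (the function is illustrative, not a Megatron-Core API):

```python
def data_parallel_size(world_size: int, tensor_parallel: int,
                       pipeline_parallel: int) -> int:
    """In Megatron-style 3D parallelism, GPUs factor as TP x PP x DP."""
    model_parallel = tensor_parallel * pipeline_parallel
    assert world_size % model_parallel == 0, "world size must be divisible"
    return world_size // model_parallel

# e.g. 64 GPUs with TP=8 (within one node) and PP=2 leaves 4 data-parallel replicas
print(data_parallel_size(64, 8, 2))  # 4
```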
This skill provides expert guidance for distributed training using Microsoft's DeepSpeed library. It helps developers implement optimization techniques like ZeRO stages, pipeline parallelism, and mixed-precision training. Use this skill when working with DeepSpeed features, debugging code, or learning best practices for large-scale model training.
/plugin add https://github.com/zechenzhangAGI/AI-research-SKILLs/tree/main/deepspeed
HuggingFace Accelerate provides the simplest API for adding distributed training to PyTorch scripts with just 4 lines of code. It offers a unified interface for multiple distributed training frameworks like DeepSpeed, FSDP, and DDP while handling automatic device placement and mixed precision. This makes it ideal for developers who want to quickly scale their PyTorch training across multiple GPUs or nodes without complex configuration.
/plugin add https://github.com/zechenzhangAGI/AI-research-SKILLs/tree/main/accelerate