phoenix-observability
About
Phoenix is an open-source AI observability platform for tracing, evaluating, and monitoring LLM applications. It provides detailed traces for debugging, runs evaluations on datasets, and offers real-time monitoring for production systems. Key capabilities include experiment pipelines and self-hosted observability without vendor lock-in.
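As a rough sketch of how tracing with Phoenix typically looks in Python (assuming the arize-phoenix and arize-phoenix-otel packages; the project name and span attributes below are illustrative and not part of this skill):

import phoenix as px
from phoenix.otel import register

# Start a local Phoenix instance; the UI is served at http://localhost:6006 by default
session = px.launch_app()

# Register an OpenTelemetry tracer provider that exports spans to Phoenix
tracer_provider = register(project_name="my-llm-app")

# Record an LLM call as a span; auto-instrumentation packages
# (e.g. openinference-instrumentation-openai) can do this automatically
tracer = tracer_provider.get_tracer(__name__)
with tracer.start_as_current_span("generate-answer") as span:
    span.set_attribute("llm.prompt", "What is observability?")
    # ... call your LLM here ...
    span.set_attribute("llm.response", "Tracing, evaluation, and monitoring.")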
Quick Install
Claude Code
Recommended:
npx skills add davila7/claude-code-templates -a claude-code

Or add it as a Claude Code plugin:
/plugin add https://github.com/davila7/claude-code-templates

Or clone it manually:
git clone https://github.com/davila7/claude-code-templates.git ~/.claude/skills/phoenix-observability

Copy and paste one of these commands in Claude Code to install this skill.
GitHub Repository: https://github.com/davila7/claude-code-templates
Related Skills
railway-metrics
Other: This skill queries Railway service metrics, including CPU, memory, network, and disk usage, to monitor performance and debug issues. It is triggered when developers ask about resource utilization or service performance, and requires environment and service IDs from the Railway CLI. The skill provides actionable insights through Bash commands that fetch real-time analytics data.
evaluating-code-models
Meta: This skill benchmarks code generation models using industry-standard evaluations like HumanEval and MBPP across multiple programming languages. It calculates pass@k metrics for comparing model performance, testing multi-language support, and measuring code quality. Developers should use it when rigorously evaluating or comparing coding models, as it is the same tool powering HuggingFace's code leaderboards.
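For reference, the pass@k metric mentioned above is commonly computed with the unbiased estimator from the Codex paper; a minimal Python version (function name is illustrative):

from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Estimate pass@k given n generated samples per problem, of which c passed the tests."""
    if n - c < k:
        return 1.0  # every possible k-subset contains at least one passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 200 samples per problem, 37 correct, estimate pass@10
print(pass_at_k(200, 37, 10))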
langsmith-observability
Meta: LangSmith provides LLM observability for tracing, evaluating, and monitoring AI applications. Developers should use it for debugging prompts and chains, systematic output evaluation, and monitoring production systems. Its key capabilities include performance tracing, dataset testing, and analysis of latency and token usage.
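As a sketch of the tracing side, assuming the langsmith Python SDK and a LANGSMITH_API_KEY in the environment (the decorated function below is a stand-in, not part of this skill):

import os
from langsmith import traceable

os.environ.setdefault("LANGSMITH_TRACING", "true")  # enable tracing for this process

@traceable(name="summarize")
def summarize(text: str) -> str:
    # ... call an LLM here; inputs and outputs are logged to LangSmith ...
    return text[:100]

summarize("LangSmith records this call as a traced run.")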
evaluating-llms-harness
Testing: This skill runs standardized LLM evaluations across 60+ academic benchmarks like MMLU and GSM8K using the industry-standard lm-evaluation-harness. Use it for benchmarking model quality, comparing different models, or tracking training progress, with support for HuggingFace, vLLM, and API-based models. It provides a consistent, widely adopted method for reporting academic results.
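For a sense of what such a run looks like, the harness exposes a Python entry point alongside its CLI; a minimal sketch (the model checkpoint and task choices are placeholders):

import lm_eval

# Evaluate a small HuggingFace model on two of the benchmarks named above
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-160m",
    tasks=["gsm8k", "mmlu"],
    batch_size=8,
)
print(results["results"])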
