
code-review

davila7

About

This skill performs automated code reviews based on Sentry's engineering practices, ideal for analyzing pull requests or code changes. It checks for security vulnerabilities, performance issues, testing gaps, and design problems using a structured checklist. Developers should use it to get consistent, actionable feedback on code quality.

Quick Install

Claude Code

Plugin Command (Recommended)
/plugin add https://github.com/davila7/claude-code-templates

Git Clone (Alternative)
git clone https://github.com/davila7/claude-code-templates.git ~/.claude/skills/code-review

Paste the plugin command into Claude Code to install this skill, or use the git clone alternative from a terminal.

Documentation

Sentry Code Review

Follow these guidelines when reviewing code for Sentry projects.

Review Checklist

Identifying Problems

Look for these issues in code changes:

  • Runtime errors: Potential exceptions, null pointer issues, out-of-bounds access (see the sketch after this list)
  • Performance: Unbounded O(n²) operations, N+1 queries, unnecessary allocations
  • Side effects: Unintended behavioral changes affecting other components
  • Backwards compatibility: Breaking API changes without migration path
  • ORM queries: Complex Django ORM usage that can produce unexpectedly expensive queries
  • Security vulnerabilities: Injection, XSS, access control gaps, secrets exposure
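
For the runtime-error item above, a common shape worth flagging is an optional lookup whose result is used unconditionally. A minimal sketch in Django terms (the Organization model and slug variable are illustrative, not part of this skill):

# Bad: .first() returns None when no row matches, then .name raises AttributeError
org = Organization.objects.filter(slug=slug).first()
name = org.name

# Good: handle the missing case explicitly
org = Organization.objects.filter(slug=slug).first()
if org is None:
    raise LookupError(f"unknown organization: {slug}")
name = org.name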

Design Assessment

  • Do component interactions make logical sense?
  • Does the change align with existing project architecture?
  • Are there conflicts with current requirements or goals?

Test Coverage

Every PR should have appropriate test coverage:

  • Functional tests for business logic
  • Integration tests for component interactions
  • End-to-end tests for critical user paths

Verify tests cover actual requirements and edge cases. Avoid excessive branching or looping in test code.
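
On that last point, a loop inside a test hides which input failed and stops at the first failure. A minimal sketch of the alternative, assuming the project uses pytest (the slugify function here is hypothetical):

# Bad: a loop obscures which case failed and stops at the first failure
def test_slugify():
    for raw, expected in [("Hello World", "hello-world"), ("", "")]:
        assert slugify(raw) == expected

# Good: one parametrized case per input, reported independently
import pytest

@pytest.mark.parametrize("raw,expected", [("Hello World", "hello-world"), ("", "")])
def test_slugify(raw, expected):
    assert slugify(raw) == expected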

Long-Term Impact

Flag for senior engineer review when changes involve:

  • Database schema modifications (see the migration sketch after this list)
  • API contract changes
  • New framework or library adoption
  • Performance-critical code paths
  • Security-sensitive functionality
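
As an illustration of the first item, a migration like the sketch below (app, migration, model, and field names are all hypothetical) drops a column that servers running the previous release may still select during a rolling deploy, which is exactly the kind of change that deserves a senior look:

from django.db import migrations

class Migration(migrations.Migration):
    dependencies = [("projects", "0042_previous")]
    operations = [
        # Destructive: old application code selecting legacy_flags breaks mid-deploy
        migrations.RemoveField(model_name="project", name="legacy_flags"),
    ]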

Feedback Guidelines

Tone

  • Be polite and empathetic
  • Provide actionable suggestions, not vague criticism
  • Phrase as questions when uncertain: "Have you considered...?"

Approval

  • Approve when only minor issues remain
  • Don't block PRs for stylistic preferences
  • Remember: the goal is risk reduction, not perfect code

Common Patterns to Flag

Python/Django

# Bad: N+1 query
for user in users:
    print(user.profile.name)  # Separate query per user

# Good: Prefetch related, then iterate without per-user queries
users = User.objects.prefetch_related('profile')
for user in users:
    print(user.profile.name)  # Profiles fetched in one extra bulk query
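
If profile is a one-to-one or foreign-key relation, select_related('profile') resolves it with a single SQL join instead; prefetch_related is the better fit for many-to-many and reverse foreign-key relations, where it issues one additional bulk query.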

TypeScript/React

// Bad: Missing dependency in useEffect
useEffect(() => {
  fetchData(userId);
}, []);  // userId not in deps

// Good: Include all dependencies
useEffect(() => {
  fetchData(userId);
}, [userId]);
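
If the project lints with eslint-plugin-react-hooks, the exhaustive-deps rule flags this class of bug automatically, so also check that the warning has not been silenced with an eslint-disable comment.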

Security

# Bad: SQL injection risk
cursor.execute(f"SELECT * FROM users WHERE id = {user_id}")

# Good: Parameterized query
cursor.execute("SELECT * FROM users WHERE id = %s", [user_id])

References

GitHub Repository

davila7/claude-code-templates
Path: cli-tool/components/skills/sentry/code-review
Tags: anthropic, anthropic-claude, claude, claude-code

Related Skills

content-collections

Meta

This skill provides a production-tested setup for Content Collections, a TypeScript-first tool that transforms Markdown/MDX files into type-safe data collections with Zod validation. Use it when building blogs, documentation sites, or content-heavy Vite + React applications to ensure type safety and automatic content validation. It covers everything from Vite plugin configuration and MDX compilation to deployment optimization and schema validation.

View skill

creating-opencode-plugins

Meta

This skill provides the structure and API specifications for creating OpenCode plugins that hook into 25+ event types like commands, files, and LSP operations. It offers implementation patterns for JavaScript/TypeScript modules that intercept and extend the AI assistant's lifecycle. Use it when you need to build event-driven plugins for monitoring, custom handling, or extending OpenCode's capabilities.

View skill

evaluating-llms-harness

Testing

This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.

View skill

langchain

Meta

LangChain is a framework for building LLM applications using agents, chains, and RAG pipelines. It supports multiple LLM providers, offers 500+ integrations, and includes features like tool calling and memory management. Use it for rapid prototyping and deploying production systems like chatbots, autonomous agents, and question-answering services.

View skill