coderabbit-review

LerianStudio
Development · AI

About

The coderabbit-review skill runs CodeRabbit CLI for external AI-powered code review, providing severity-based findings and language-specific best practices. Use it alongside internal reviews for external validation, broader pattern analysis, and additional security scanning. It complements rather than replaces internal reviewers by offering an independent perspective.

Documentation

CodeRabbit Review

Run CodeRabbit CLI for external AI-powered code review.

Core principle: External validation complements internal reviews. CodeRabbit provides language-specific best practices; Ring reviewers understand your requirements.

When to Use

Use CodeRabbit review when:

  • You want an external perspective (different from Ring's internal reviewers)
  • You need language-specific linting and best practices
  • You want to validate against broader code-pattern datasets
  • You need additional security vulnerability scanning

Use alongside Ring's internal reviews:

  • Internal reviews (code/business/security) - Ring agents in parallel, context-aware
  • External review (CodeRabbit) - Independent validation, pattern-based

Don't use as replacement for:

  • Ring's internal reviewers (use both, not either/or)
  • Requirements validation (CodeRabbit doesn't know your PRD)
  • Business logic verification (CodeRabbit doesn't understand domain)

Review Types

All changes (default):

/review-cr

Reviews both committed and uncommitted changes.

Committed only:

/review-cr committed

Reviews only committed changes (commits ahead of the base branch).

Uncommitted only:

/review-cr uncommitted

Reviews only uncommitted changes in the working directory.

Custom base branch:

/review-cr --base develop
/review-cr committed --base main

Compares against a specific branch instead of the repository's default branch.

The Process

1. Argument Parsing

Parse arguments in any order (a minimal parsing sketch follows this list):

  • Scan for --base flag: extract next argument as base branch
  • Check for review type: uncommitted, committed, or all
  • Defaults: type=all, base=repository default branch
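
A minimal shell sketch of these rules (variable names are illustrative, not taken from the skill's implementation):

# Defaults: review everything, compare against the repository's default branch
type="all"
base=""

while [ $# -gt 0 ]; do
  case "$1" in
    --base)
      base="${2:?--base requires a branch name}"   # next argument is the base branch
      shift 2 ;;
    committed|uncommitted|all)
      type="$1"                                    # explicit review type
      shift ;;
    *)
      echo "Unknown argument: $1" >&2
      exit 1 ;;
  esac
done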

2. Validation

Validate base branch if specified:

git rev-parse --verify <branch>

If branch doesn't exist:

  • Message: "Base branch '<branch>' not found. Available branches: <list>"
  • Exit gracefully without running review

After validation, sanitize branch name:

branch_safe=$(echo "$branch" | sed 's/[^a-zA-Z0-9._/-]//g')

This prevents command injection via branch names.
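
Putting validation and sanitization together, a hedged sketch (messages mirror the text above; $base comes from the parsing sketch):

branch_safe=""
if [ -n "$base" ]; then
  # Confirm the branch exists before using it anywhere
  if ! git rev-parse --verify "$base" >/dev/null 2>&1; then
    echo "Base branch '$base' not found. Available branches:"
    git branch --format='%(refname:short)'
    exit 0   # exit gracefully without running the review
  fi
  # Strip anything outside the allow-list to block command injection
  branch_safe=$(echo "$base" | sed 's/[^a-zA-Z0-9._/-]//g')
fi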

Check for existing CodeRabbit processes:

pgrep -f "coderabbit review"

If process exists:

  • Message: "A CodeRabbit review is already running. Wait for completion or kill process."
  • Exit gracefully (a minimal guard sketch follows)
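
A minimal guard built from the check above:

# Refuse to start a second review while one is already running
if pgrep -f "coderabbit review" >/dev/null; then
  echo "A CodeRabbit review is already running. Wait for completion or kill process."
  exit 0
fi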

3. Execute Review

Run in background:

coderabbit review --prompt-only --type <type> [--base <branch_safe>]

Use Bash tool with run_in_background: true.
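
Outside Claude Code, the equivalent shell invocation could look roughly like this (the log file path is illustrative; the flags are the ones listed above):

# Launch the review in the background and capture output for later polling
log="/tmp/coderabbit-review.log"
if [ -n "$branch_safe" ]; then
  coderabbit review --prompt-only --type "$type" --base "$branch_safe" >"$log" 2>&1 &
else
  coderabbit review --prompt-only --type "$type" >"$log" 2>&1 &
fi
review_pid=$!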

CodeRabbit context sources:

  • ~/.claude/CLAUDE.md - Your global coding standards
  • <project>/.claude/claude.md - Project-specific guidelines
  • <project>/.coderabbit.yaml - CodeRabbit configuration

These provide coding standards and preferences to the review.

4. Monitor Progress

Check every 30 seconds:

  • Use BashOutput tool to monitor background process
  • Look for final summary in output
  • Maximum wait: 45 minutes (90 checks)

Progress messages:

  • At 5 minutes: "CodeRabbit is still analyzing... (can take 7-30 minutes for large changesets)"
  • Continue monitoring (see the polling sketch below)
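
As a standalone shell equivalent of this polling behavior (assumes $review_pid from the execution sketch; the thresholds mirror the numbers above):

checks=0
max_checks=90                      # 90 checks x 30 seconds = 45 minutes
while kill -0 "$review_pid" 2>/dev/null; do
  sleep 30
  checks=$((checks + 1))
  if [ "$checks" -eq 10 ]; then    # roughly 5 minutes elapsed
    echo "CodeRabbit is still analyzing... (can take 7-30 minutes for large changesets)"
  fi
  if [ "$checks" -ge "$max_checks" ]; then
    echo "Review exceeded 45 minute timeout. Terminating runaway process..."
    break
  fi
done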

If BashOutput fails:

  • Wait 10 seconds and retry once
  • If retry fails: "Unable to monitor review process. Check with: ps aux | grep coderabbit"

If timeout (45 minutes):

  • Report: "Review exceeded 45 minute timeout. Terminating runaway process..."
  • Graceful termination:
    # Request a clean shutdown first; force-kill only if still alive after 5 seconds
    pid=$(pgrep -f "coderabbit review")
    kill -TERM $pid
    sleep 5
    pgrep -f "coderabbit review" >/dev/null && kill -KILL $pid
    
  • Verify termination: pgrep -f "coderabbit review" returns nothing
  • Message: "CodeRabbit process terminated after timeout. Prevents resource exhaustion."

5. Parse and Present Findings

Extract issues from output:

  • Group by severity: CRITICAL, HIGH, MEDIUM, LOW, INFO
  • For each issue:
    • Severity badge (🔴 CRITICAL, 🟠 HIGH, 🟡 MEDIUM, 🔵 LOW, ⚪ INFO)
    • File path and line number
    • Issue description
    • Suggested fix approach (if provided)

Example output:

🔴 CRITICAL - src/auth.ts:42
JWT secret hardcoded in source code
Fix: Move to environment variable

🟠 HIGH - src/api/users.ts:156
SQL injection vulnerability in query builder
Fix: Use parameterized queries

🟡 MEDIUM - src/utils/validator.ts:89
Missing input validation for email field
Fix: Add email format validation
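
CodeRabbit's exact output format can vary, so the following is an assumption-laden sketch: it simply counts severity keywords in the saved log to build the summary used later in step 7.

# Count findings per severity in the captured output (keyword matching is an assumption)
for sev in CRITICAL HIGH MEDIUM LOW INFO; do
  echo "$sev: $(grep -c "$sev" "$log")"
done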

6. Create Todos for Critical Issues

Use TodoWrite for CRITICAL and HIGH severity:

Format: "Fix [severity] in [file]:[line] - [brief description]"

Example:

  • "Fix CRITICAL in src/auth.ts:42 - Remove hardcoded JWT secret"
  • "Fix HIGH in src/api/users.ts:156 - SQL injection vulnerability"

7. Prompt for Action

CodeRabbit review complete. Found X issues (Y critical, Z high).
Would you like me to fix these issues?

Error Handling

CodeRabbit CLI not installed:

Message: "CodeRabbit CLI not installed. Install with:
         curl -fsSL https://cli.coderabbit.ai/install.sh | sh"
Exit gracefully

Not authenticated:

Message: "Not authenticated with CodeRabbit. Run: coderabbit auth login"
Exit gracefully

No changes to review:

Message: "No changes found to review. Make some changes first."
Exit gracefully

Review takes longer than expected:

Message at 5 minutes: "CodeRabbit is still analyzing...
                       (this can take 7-30 minutes for large changesets)"
Continue monitoring
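
The first three failure modes can be checked up front before launching anything. A hedged preflight sketch (authentication is not probed here because no status command is documented above; messages mirror the text):

# Is the CLI installed?
if ! command -v coderabbit >/dev/null 2>&1; then
  echo "CodeRabbit CLI not installed. Install with:"
  echo "  curl -fsSL https://cli.coderabbit.ai/install.sh | sh"
  exit 0
fi

# Is there anything to review? (working tree, index, or commits not on the base branch)
if git diff --quiet && git diff --cached --quiet \
   && [ -z "$(git log --oneline "${base:-HEAD}"..HEAD 2>/dev/null)" ]; then
  echo "No changes found to review. Make some changes first."
  exit 0
fi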

Integration with Ring Reviews

Recommended workflow:

  1. Internal review first (fast, context-aware):

    Dispatch Ring's 3 reviewers in parallel:
    - code-reviewer
    - business-logic-reviewer
    - security-reviewer
    
  2. External review second (validation, patterns):

    /review-cr
    
  3. Compare findings:

    • Issues found by both → High confidence, fix immediately
    • Issues found only by Ring → Context-specific, trust internal
    • Issues found only by CodeRabbit → Pattern-based, evaluate relevance
  4. Handle by severity:

    • Critical/High (any source) → Fix immediately
    • Medium → Fix if time permits
    • Low → Add TODO comments
    • Info → Consider for future improvements

Red Flags

Never:

  • Use CodeRabbit as sole review method (complement Ring's reviewers)
  • Skip validation of base branch (security risk)
  • Ignore timeouts (can cause resource exhaustion)
  • Run multiple CodeRabbit reviews simultaneously (process conflict)

Always:

  • Sanitize branch names before using in commands
  • Monitor for completion (don't fire and forget)
  • Create todos for Critical/High issues
  • Terminate runaway processes after timeout

Anti-Patterns

Wrong:

  • "CodeRabbit found nothing, so we're good" β†’ Still run Ring reviewers
  • "Let's run both reviews in parallel" β†’ Run Ring first (faster), then CodeRabbit
  • "Skip CodeRabbit, Ring reviews are enough" β†’ External validation catches different issues
  • "CodeRabbit says fix, so fix blindly" β†’ Evaluate if pattern applies to your context

Right:

  • Use Ring reviewers for requirements/business logic
  • Use CodeRabbit for language-specific patterns
  • Combine findings from both sources
  • Prioritize Critical/High from either source

Required Patterns

This skill uses these universal patterns:

  • State Tracking: See skills/shared-patterns/state-tracking.md
  • Failure Recovery: See skills/shared-patterns/failure-recovery.md
  • Exit Criteria: See skills/shared-patterns/exit-criteria.md
  • TodoWrite: See skills/shared-patterns/todowrite-integration.md

Apply ALL patterns when using this skill.

Quick Install

/plugin add https://github.com/LerianStudio/ring/tree/main/coderabbit-review

Copy and paste this command into Claude Code to install this skill.

GitHub Repository

LerianStudio/ring
Path: skills/coderabbit-review
