gemini
About
This Claude Skill runs the Gemini CLI for large-scale code reviews, plan analysis, and processing of very large contexts exceeding 200k tokens. It is designed for comprehensive, multi-file analysis across entire codebases or documentation sets. Developers should use it for tasks requiring complex reasoning and extensive context windows.
Quick Install
Claude Code
Recommended:
/plugin add https://github.com/iamladi/cautious-computing-machine--sdlc-plugin

Or clone the skill manually:
git clone https://github.com/iamladi/cautious-computing-machine--sdlc-plugin.git ~/.claude/skills/gemini

Copy and paste the command into Claude Code to install this skill.
Documentation
Gemini Skill Guide
When to Use Gemini
- Code Review: Comprehensive code reviews across multiple files
- Plan Review: Analyzing architectural plans, technical specifications, or project roadmaps
- Big Context Processing: Tasks requiring >200k tokens of context (entire codebases, documentation sets)
- Multi-file Analysis: Understanding relationships and patterns across many files
Running a Task
- Ask the user (via AskUserQuestion) which model to use in a single prompt. The latest models (all support thinking) are:
  - gemini-2.5-pro (state-of-the-art, best for complex reasoning in code/math/STEM)
  - gemini-2.5-flash (best price-performance, recommended for most tasks)
  - gemini-2.5-flash-lite (fastest, most cost-efficient, high throughput)
- Select the approval mode based on the task:
  - default: Prompt for approval (safest, for read-only analysis)
  - auto_edit: Auto-approve edit tools only (for code reviews with suggestions)
  - yolo: Auto-approve all tools (for trusted automated changes)
- Assemble the command with the appropriate options (a combined example follows these steps):
  - -m, --model <MODEL> - Model selection
  - --approval-mode <default|auto_edit|yolo> - Control tool approval
  - -y, --yolo - Alternative to --approval-mode yolo
  - -i, --prompt-interactive "prompt" - Execute the prompt and continue interactively
  - --include-directories <DIR> - Additional directories to include in the workspace
  - -s, --sandbox - Run in sandbox mode for isolation
- Default to --approval-mode default for read-only analysis tasks unless edits are necessary.
- Run the command and capture the output. For non-interactive mode, pass the prompt as a positional argument:
  gemini -m gemini-2.5-flash --approval-mode default "Review this codebase for security issues"
- For interactive sessions with an initial prompt:
  gemini -m gemini-2.5-flash -i "Review the authentication system" --approval-mode auto_edit
- After Gemini completes, inform the user: "The Gemini analysis is complete. You can start a new Gemini session for follow-up analysis or continue exploring the findings."
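The flags above can be combined. As a sketch, a read-only, multi-directory review run in sandbox mode might look like the following (the ../shared-lib and ../infrastructure paths are illustrative placeholders, not part of the skill):

# Sketch: read-only multi-directory review in sandbox mode.
# The directory paths are placeholders for your own workspace layout.
gemini -m gemini-2.5-pro --approval-mode default -s \
  --include-directories ../shared-lib \
  --include-directories ../infrastructure \
  "Review how the services in this workspace share authentication and configuration logic"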
Quick Reference
| Use case | Approval mode | Key flags |
|---|---|---|
| Code review (read-only) | default | -m gemini-2.5-flash --approval-mode default |
| Code review with suggestions | auto_edit | -m gemini-2.5-flash --approval-mode auto_edit |
| Big context analysis | default | -m gemini-2.5-pro --approval-mode default |
| Plan/architecture review | default | -m gemini-2.5-pro --approval-mode default |
| Automated refactoring | yolo or -y | -m gemini-2.5-flash --approval-mode yolo |
| High-throughput tasks | default | -m gemini-2.5-flash-lite --approval-mode default |
| Multi-directory analysis | Depends on task | --include-directories <DIR1> --include-directories <DIR2> |
| Interactive with initial prompt | Match task needs | -i "prompt" --approval-mode <mode> |
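For instance, the automated-refactoring row in the table above could translate into a command like this sketch, run only after the user has explicitly approved yolo mode (the src/utils path and the refactoring goal are illustrative):

# Sketch: automated refactoring with all tool calls auto-approved (yolo).
# Requires prior user approval per Error Handling; src/utils is a placeholder path.
gemini -m gemini-2.5-flash --approval-mode yolo \
  "Rename the deprecated helpers in src/utils and update every call site"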
Model Selection Guide
| Model | Best for | Context window |
|---|---|---|
| gemini-2.5-pro | State-of-the-art reasoning, complex code/math/STEM problems | 1M input / 65k output |
| gemini-2.5-flash | Best price-performance, large-scale processing, agentic tasks | 1M input / 65k output |
| gemini-2.5-flash-lite | Fastest & most cost-efficient, high throughput | 1M input / 65k output |
All Gemini 2.5 models support thinking capabilities for enhanced reasoning and accuracy.
Common Use Cases
Code Review
gemini -m gemini-2.5-flash --approval-mode default \
"Perform a comprehensive code review focusing on:
1. Security vulnerabilities
2. Performance issues
3. Code quality and maintainability
4. Best practices violations"
Plan Review
gemini -m gemini-2.5-pro --approval-mode default \
"Review this architectural plan for:
1. Scalability concerns
2. Missing components
3. Integration challenges
4. Alternative approaches"
Big Context Analysis
gemini -m gemini-2.5-pro --approval-mode default \
"Analyze the entire codebase to understand:
1. Overall architecture
2. Key patterns and conventions
3. Potential technical debt
4. Refactoring opportunities"
Following Up
- Gemini CLI sessions are typically one-shot or interactive. Unlike Codex, there is no built-in resume functionality.
- For follow-up analysis, start a new Gemini session with context from previous findings (see the example after this list).
- When proposing follow-up actions, restate the chosen model and approval mode.
- Use AskUserQuestion after each Gemini command to confirm next steps or gather clarifications.
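For example, assuming the first run's findings were saved to gemini-review-findings.md (an illustrative filename), a follow-up session can feed them back into the prompt with plain shell substitution:

# Sketch: start a fresh session seeded with the previous findings.
# gemini-review-findings.md is a placeholder for wherever the earlier output was saved.
gemini -m gemini-2.5-pro --approval-mode default \
  "Here are the findings from a previous Gemini review:
$(cat gemini-review-findings.md)

Prioritize these issues by risk and implementation effort."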
Error Handling
- Stop and report failures whenever gemini --version or a Gemini command exits non-zero (see the sketch after this list).
- Request direction before retrying failed commands.
- Before using high-impact flags (--approval-mode yolo, -y, --sandbox), ask the user for permission using AskUserQuestion unless already granted.
- When output includes warnings or partial results, summarize them and ask how to adjust using AskUserQuestion.
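A minimal sketch of the exit-code checks described above, for scripted runs (the review prompt is illustrative):

# Sketch: stop and report if the CLI is missing or a run fails.
if ! gemini --version; then
  echo "gemini --version failed; stop and report to the user" >&2
  exit 1
fi

if ! gemini -m gemini-2.5-flash --approval-mode default \
  "Review this codebase for security issues"; then
  echo "Gemini run exited non-zero; ask the user for direction before retrying" >&2
  exit 1
fi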
Tips for Large Context Processing
- Be specific: Provide clear, structured prompts for what to analyze
- Use --include-directories: Explicitly specify all relevant directories
- Choose the right model:
  - Use gemini-2.5-pro for complex reasoning and maximum analysis depth
  - Use gemini-2.5-flash for most code reviews (best price-performance)
  - Use gemini-2.5-flash-lite for high-volume, fast processing
- Leverage thinking capabilities: All 2.5 models have built-in thinking for better reasoning
- Break down complex tasks: Even with large context, structured analysis is more effective
- Save findings: Ask Gemini to output structured reports that can be saved for reference (see the example below)
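Assuming a non-interactive run writes its response to stdout, a structured report can be captured directly (the report filename is illustrative):

# Sketch: save a structured Markdown report for later reference.
# gemini-codebase-report.md is a placeholder filename.
gemini -m gemini-2.5-pro --approval-mode default \
  "Analyze the codebase and produce a Markdown report with sections for
architecture, key conventions, technical debt, and refactoring opportunities" \
  > gemini-codebase-report.md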
GitHub Repository
https://github.com/iamladi/cautious-computing-machine--sdlc-plugin