Enables Claude to leverage Google Gemini 2.5 Pro for extended thinking, code analysis, and problem-solving
🤖 Claude + Gemini = Your Ultimate AI Development Team
The ultimate development partner for Claude - a Model Context Protocol server that gives Claude access to Google's Gemini 2.5 Pro for extended thinking, code analysis, and problem-solving. Automatically reads files and directories, passing their contents to Gemini for analysis within its 1M token context.
Getting Started
Tools Reference
Advanced Topics
Resources
Claude is brilliant, but sometimes you need:
- A second opinion or brainstorming partner (`chat`)
- Deeper reasoning that extends your own analysis (`thinkdeep`)
- Professional code review with severity levels (`codereview`)
- Validation of git changes before committing (`precommit`)
- Root cause analysis for stubborn bugs (`debug`)
- Architecture and pattern analysis (`analyze`)
- Analysis across whole directories at once (`analyze "main.py, src/, tests/"`)
This server makes Gemini your development sidekick, handling what Claude can't or extending what Claude starts.
Prompt Used:
Study the code properly, think deeply about what this does and then see if there's any room for improvement in
terms of performance optimizations, brainstorm with gemini on this to get feedback and then confirm any change by
first adding a unit test with `measure` and measuring current code and then implementing the optimization and
measuring again to ensure it improved, then share results. Check with gemini in between as you make tweaks.
The final implementation resulted in a 26% improvement in JSON parsing performance for the selected library, reducing processing time through targeted, collaborative optimizations guided by Gemini’s analysis and Claude’s refinement.
Choose one of the following options:
Option A: Docker (Recommended - No Python Required!)
Option B: Traditional Setup
Python 3.10+ is required for Option B (the `mcp` package needs it).
Get a Gemini API key: Visit Google AI Studio and generate an API key. For best results with Gemini 2.5 Pro, use a paid API key, as the free tier has limited access to the latest models.
# Clone to your preferred location
git clone https://github.com/BeehiveInnovations/gemini-mcp-server.git
cd gemini-mcp-server
Now choose your setup method:
# 1. Generate the .env file with your current directory as workspace
# macOS/Linux:
./setup-docker-env.sh
# Windows (Command Prompt):
setup-docker-env.bat
# Windows (PowerShell):
.\setup-docker-env.ps1
Important: The setup script will:
- Build the Docker image
- Create a `.env` file with your API key (automatically uses `$GEMINI_API_KEY` if already in your environment)

To update the app: simply run the setup script again - it will rebuild everything automatically.
Docker File Access: Docker containers can only access files within mounted directories. The generated configuration mounts your home directory by default. To access files elsewhere, modify the `-v` parameter in the configuration.
# 2. Edit .env to add your Gemini API key (if not already set in environment)
# The .env file will contain:
# WORKSPACE_ROOT=/your/current/directory (automatically set)
# GEMINI_API_KEY=your-gemini-api-key-here (automatically set if $GEMINI_API_KEY exists)
# 3. Copy the configuration from step 1 into Claude Desktop
That's it! The setup script handles everything - building the Docker image, setting up the environment, and configuring your API key.
# Run the setup script to install dependencies
# macOS/Linux:
./setup.sh
# Windows:
setup.bat
Note the full path - you'll need it in the next step:
- macOS/Linux: `/Users/YOUR_USERNAME/gemini-mcp-server`
- Windows: `C:\Users\YOUR_USERNAME\gemini-mcp-server`
Important: The setup script will:
- Create a Python virtual environment
- Install all required dependencies
If you encounter any issues during setup, see the Troubleshooting section.
Add the server to your `claude_desktop_config.json`:
Find your config file:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`

Or use the Claude Desktop UI (macOS): Settings → Developer → Edit Config
Choose your configuration based on your setup method:
How it works: Claude Desktop launches Docker, which runs the MCP server in a container. The communication happens through stdin/stdout, just like running a regular command.
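For illustration, here's roughly what travels over that channel: MCP messages are JSON-RPC objects written to the process's stdin, with responses read back from stdout. This is a simplified sketch (a real client sends an `initialize` handshake first) and assumes the image was built as `gemini-mcp-server:latest`:

```python
import json
import subprocess

# Launch the container the same way Claude Desktop does.
proc = subprocess.Popen(
    ["docker", "run", "--rm", "-i", "gemini-mcp-server:latest"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

# An MCP request is a JSON-RPC message written to stdin...
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
proc.stdin.write(json.dumps(request) + "\n")
proc.stdin.flush()

# ...and the server answers on stdout.
print(proc.stdout.readline())
proc.terminate()
```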
All Platforms (macOS/Linux/Windows):
{
"mcpServers": {
"gemini": {
"command": "docker",
"args": [
"run",
"--rm",
"-i",
"--env-file", "/path/to/gemini-mcp-server/.env",
"-v", "/path/to/your/project:/workspace:ro",
"gemini-mcp-server:latest"
]
}
}
}
Important for Docker setup:
- Replace `/path/to/gemini-mcp-server/.env` with the full path to your `.env` file
- Mount the directory you want to analyze with the `-v` parameter (e.g., `-v /specific/project:/workspace:ro`)
- The `-i` flag connects the container's stdin/stdout to Claude

Path Format Notes:
- On Windows, use forward slashes `/` in Docker paths (e.g., `C:/Users/john/project`)

Example for macOS/Linux:
{
"mcpServers": {
"gemini": {
"command": "docker",
"args": [
"run",
"--rm",
"-i",
"--env-file", "/path/to/gemini-mcp-server/.env",
"-e", "WORKSPACE_ROOT=/Users/YOUR_USERNAME",
"-e", "MCP_PROJECT_ROOT=/workspace",
"-v", "/Users/YOUR_USERNAME:/workspace:ro",
"gemini-mcp-server:latest"
]
}
}
}
Example for Windows:
{
"mcpServers": {
"gemini": {
"command": "docker",
"args": [
"run",
"--rm",
"-i",
"--env-file", "C:/path/to/gemini-mcp-server/.env",
"-e", "WORKSPACE_ROOT=C:/Users/YOUR_USERNAME",
"-e", "MCP_PROJECT_ROOT=/workspace",
"-v", "C:/Users/YOUR_USERNAME:/workspace:ro",
"gemini-mcp-server:latest"
]
}
}
}
Note: Run `setup-docker-env.sh` (macOS/Linux) or `setup-docker-env.ps1` (Windows) to generate this configuration automatically with your paths.
macOS/Linux:
{
"mcpServers": {
"gemini": {
"command": "/Users/YOUR_USERNAME/gemini-mcp-server/run_gemini.sh",
"env": {
"GEMINI_API_KEY": "your-gemini-api-key-here"
}
}
}
}
Windows (Native Python):
{
"mcpServers": {
"gemini": {
"command": "C:\\Users\\YOUR_USERNAME\\gemini-mcp-server\\run_gemini.bat",
"env": {
"GEMINI_API_KEY": "your-gemini-api-key-here"
}
}
}
}
Windows (Using WSL):
{
"mcpServers": {
"gemini": {
"command": "wsl.exe",
"args": ["/home/YOUR_WSL_USERNAME/gemini-mcp-server/run_gemini.sh"],
"env": {
"GEMINI_API_KEY": "your-gemini-api-key-here"
}
}
}
}
Completely quit and restart Claude Desktop for the changes to take effect.
If you've already configured Claude Desktop, you can import that configuration into Claude Code:
claude mcp add-from-claude-desktop -s user
For Traditional Setup (macOS/Linux):
claude mcp add gemini -s user -e GEMINI_API_KEY=your-gemini-api-key-here -- /path/to/gemini-mcp-server/run_gemini.sh
For Traditional Setup (Windows):
claude mcp add gemini -s user -e GEMINI_API_KEY=your-gemini-api-key-here -- C:\path\to\gemini-mcp-server\run_gemini.bat
For Docker Setup:
claude mcp add gemini -s user -- docker run --rm -i --env-file /path/to/gemini-mcp-server/.env -v /home:/workspace:ro gemini-mcp-server:latest
Replace `/path/to/gemini-mcp-server` with the actual path where you cloned the repository.
Just ask Claude naturally:
- "Use gemini to think deeper about my authentication design" → `thinkdeep`
- "Use gemini to review auth.py for issues" → `codereview`
- "Use gemini to debug why my API returns 500 errors" → `debug`
- "Get gemini to analyze the src/ directory architecture" → `analyze`
- "Brainstorm with gemini about scaling strategies for our API" → `chat`
- "Share my authentication design with gemini and get their opinion" → `chat`
- "Get gemini to compare Redis vs Memcached for session storage" → `chat`
Quick Tool Selection Guide:
- `chat` (brainstorm ideas, get second opinions, validate approaches)
- `thinkdeep` (extends Claude's analysis, finds edge cases)
- `codereview` (bugs, security, performance issues)
- `precommit` (validate git changes before committing)
- `debug` (root cause analysis, error tracing)
- `analyze` (architecture, patterns, dependencies)
- `get_version` (version and configuration details)

Pro Tip: You can control the depth of Gemini's analysis with thinking modes to manage token costs. For quick tasks use "minimal" or "low" to save tokens; for complex problems use "high" or "max" when quality matters more than cost. Learn more about thinking modes
The Docker setup provides a consistent, hassle-free experience across all platforms without worrying about Python versions or dependencies.
The setup scripts do all the heavy lifting for you:
Run the setup script for your platform:
# macOS/Linux:
./setup-docker-env.sh
# Windows (PowerShell):
.\setup-docker-env.ps1
# Windows (Command Prompt):
setup-docker-env.bat
The script automatically:
- Generates a `.env` file with your workspace and API key (if `$GEMINI_API_KEY` is set)
- Builds the Docker image - no manual `docker build` needed!

Then:
1. Edit `.env` to add your Gemini API key (only if not already in your environment)
2. Copy the configuration into Claude Desktop

That's it! No manual Docker commands needed. To update: just run the setup script again.
Key points about the Docker setup:
- Your files are mounted read-only at `/workspace` inside the container
- The `-i` flag preserves the MCP communication channel

# Test that the server starts correctly
docker run --rm -i --env-file .env -v "$(pwd):/workspace:ro" gemini-mcp-server:latest
# You should see "INFO:__main__:Gemini API key found"
# Press Ctrl+C to exit
For the smoothest experience on Windows, we recommend running the server natively:
Install Python on Windows
Set up the project
cd C:\Users\YOUR_USERNAME\gemini-mcp-server
python -m venv venv
.\venv\Scripts\activate
pip install -r requirements.txt
Configure Claude Desktop using the Windows native configuration shown above
If you prefer to use WSL (Windows Subsystem for Linux):
Prerequisites
- WSL installed with a Linux distribution
- Repository cloned inside the WSL filesystem (e.g., `~/gemini-mcp-server`)

Set up in WSL
# Inside WSL terminal
cd ~/gemini-mcp-server
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
chmod +x run_gemini.sh
Configure Claude Desktop using the WSL configuration shown above
Important WSL Notes:
- Keep the project in the WSL filesystem (`~/`) rather than on Windows (`/mnt/c/`)
- Ensure `run_gemini.sh` has Unix line endings (LF, not CRLF)
- If you have multiple WSL distributions, you can target one explicitly, e.g., `wsl.exe -d Ubuntu-22.04`
Tools Overview:
- `chat` - Collaborative thinking and development conversations
- `thinkdeep` - Extended reasoning and problem-solving
- `codereview` - Professional code review with severity levels
- `precommit` - Validate git changes before committing
- `debug` - Root cause analysis and debugging
- `analyze` - General-purpose file and code analysis
- `get_version` - Get server version and configuration

`chat` - General Development Chat & Collaborative Thinking
Your thinking partner - bounce ideas, get second opinions, brainstorm collaboratively
Thinking Mode: Default is `medium` (8,192 tokens). Use `low` for quick questions to save tokens, or `high` for complex discussions when thoroughness matters.
Basic Usage:
"Use gemini to explain how async/await works in Python"
"Get gemini to compare Redis vs Memcached for session storage"
"Share my authentication design with gemini and get their opinion"
"Brainstorm with gemini about scaling strategies for our API"
Managing Token Costs:
# Save tokens (~6k) for simple questions
"Use gemini with minimal thinking to explain what a REST API is"
"Chat with gemini using low thinking mode about Python naming conventions"
# Use default for balanced analysis
"Get gemini to review my database schema design" (uses default medium)
# Invest tokens for complex discussions
"Use gemini with high thinking to brainstorm distributed system architecture"
Collaborative Workflow:
"Research the best message queue for our use case (high throughput, exactly-once delivery).
Use gemini to compare RabbitMQ, Kafka, and AWS SQS. Based on gemini's analysis and your research,
recommend the best option with implementation plan."
"Design a caching strategy for our API. Get gemini's input on Redis vs Memcached vs in-memory caching.
Combine both perspectives to create a comprehensive caching implementation guide."
Key Features:
"Use gemini to explain this algorithm with context from algorithm.py"
`thinkdeep` - Extended Reasoning Partner
Get a second opinion to augment Claude's own extended thinking
Thinking Mode: Default is `high` (16,384 tokens) for deep analysis. Claude will automatically choose the best mode based on complexity - use `low` for quick validations, `medium` for standard problems, `high` for complex issues (default), or `max` for extremely complex challenges requiring the deepest analysis.
Basic Usage:
"Use gemini to think deeper about my authentication design"
"Use gemini to extend my analysis of this distributed system architecture"
With Web Search (for exploring new technologies):
"Use gemini to think deeper about using HTMX vs React for this project - enable web search to explore current best practices"
"Get gemini to think deeper about implementing WebAuthn authentication with web search enabled for latest standards"
Managing Token Costs:
# Claude will intelligently select the right mode, but you can override:
"Use gemini to think deeper with medium thinking about this refactoring approach" (saves ~8k tokens vs default)
"Get gemini to think deeper using low thinking to validate my basic approach" (saves ~14k tokens vs default)
# Use default high for most complex problems
"Use gemini to think deeper about this security architecture" (uses default high - 16k tokens)
# For extremely complex challenges requiring maximum depth
"Use gemini with max thinking to solve this distributed consensus problem" (adds ~16k tokens vs default)
Collaborative Workflow:
"Design an authentication system for our SaaS platform. Then use gemini to review your design
for security vulnerabilities. After getting gemini's feedback, incorporate the suggestions and
show me the final improved design."
"Create an event-driven architecture for our order processing system. Use gemini to think deeper
about event ordering and failure scenarios. Then integrate gemini's insights and present the enhanced architecture."
Key Features:
"Use gemini to think deeper about my API design with reference to api/routes.py"
`codereview` - Professional Code Review
Comprehensive code analysis with prioritized feedback
Thinking Mode: Default is `medium` (8,192 tokens). Use `high` for security-critical code (worth the extra tokens) or `low` for quick style checks (saves ~6k tokens).
Basic Usage:
"Use gemini to review auth.py for issues"
"Use gemini to do a security review of auth/ focusing on authentication"
Managing Token Costs:
# Save tokens for style/formatting reviews
"Use gemini with minimal thinking to check code style in utils.py" (saves ~8k tokens)
"Review this file with gemini using low thinking for basic issues" (saves ~6k tokens)
# Default for standard reviews
"Use gemini to review the API endpoints" (uses default medium)
# Invest tokens for critical code
"Get gemini to review auth.py with high thinking mode for security issues" (adds ~8k tokens)
"Use gemini with max thinking to audit our encryption module" (adds ~24k tokens - justified for security)
Collaborative Workflow:
"Refactor the authentication module to use dependency injection. Then use gemini to
review your refactoring for any security vulnerabilities. Based on gemini's feedback,
make any necessary adjustments and show me the final secure implementation."
"Optimize the slow database queries in user_service.py. Get gemini to review your optimizations
for potential regressions or edge cases. Incorporate gemini's suggestions and present the final optimized queries."
Key Features:
"Use gemini to review src/ against PEP8 standards"
"Get gemini to review auth/ - only report critical vulnerabilities"
`precommit` - Pre-Commit Validation
Comprehensive review of staged/unstaged git changes across multiple repositories
Thinking Mode: Default is `medium` (8,192 tokens). Use `high` or `max` for critical releases when thorough validation justifies the token cost.
Basic Usage:
"Use gemini to review my pending changes before I commit"
"Get gemini to validate all my git changes match the original requirements"
"Review pending changes in the frontend/ directory"
Managing Token Costs:
# Save tokens for small changes
"Use gemini with low thinking to review my README updates" (saves ~6k tokens)
"Review my config changes with gemini using minimal thinking" (saves ~8k tokens)
# Default for regular commits
"Use gemini to review my feature changes" (uses default medium)
# Invest tokens for critical releases
"Use gemini with high thinking to review changes before production release" (adds ~8k tokens)
"Get gemini to validate all changes with max thinking for this security patch" (adds ~24k tokens - worth it!)
Collaborative Workflow:
"I've implemented the user authentication feature. Use gemini to review all pending changes
across the codebase to ensure they align with the security requirements. Fix any issues
gemini identifies before committing."
"Review all my changes for the API refactoring task. Get gemini to check for incomplete
implementations or missing test coverage. Update the code based on gemini's findings."
Key Features:
Parameters:
- `path`: Starting directory to search for repos (default: current directory)
- `original_request`: The requirements for context
- `compare_to`: Compare against a branch/tag instead of local changes
- `review_type`: full|security|performance|quick
- `severity_filter`: Filter by issue severity
- `max_depth`: How deep to search for nested repos

`debug` - Expert Debugging Assistant
Root cause analysis for complex problems
Thinking Mode: Default is `medium` (8,192 tokens). Use `high` for tricky bugs (an investment in finding the root cause) or `low` for simple errors (to save tokens).
Basic Usage:
"Use gemini to debug this TypeError: 'NoneType' object has no attribute 'split'"
"Get gemini to debug why my API returns 500 errors with the full stack trace: [paste traceback]"
With Web Search (for unfamiliar errors):
"Use gemini to debug this cryptic Kubernetes error with web search enabled to find similar issues"
"Debug this React hydration error with gemini - enable web search to check for known solutions"
Managing Token Costs:
# Save tokens for simple errors
"Use gemini with minimal thinking to debug this syntax error" (saves ~8k tokens)
"Debug this import error with gemini using low thinking" (saves ~6k tokens)
# Default for standard debugging
"Use gemini to debug why this function returns null" (uses default medium)
# Invest tokens for complex bugs
"Use gemini with high thinking to debug this race condition" (adds ~8k tokens)
"Get gemini to debug this memory leak with max thinking mode" (adds ~24k tokens - find that leak!)
Collaborative Workflow:
"I'm getting 'ConnectionPool limit exceeded' errors under load. Debug the issue and use
gemini to analyze it deeper with context from db/pool.py. Based on gemini's root cause analysis,
implement a fix and get gemini to validate the solution will scale."
"Debug why tests fail randomly on CI. Once you identify potential causes, share with gemini along
with test logs and CI configuration. Apply gemini's debugging strategy, then use gemini to
suggest preventive measures."
Key Features:
`analyze` - Smart File Analysis
General-purpose code understanding and exploration
Thinking Mode: Default is `medium` (8,192 tokens). Use `high` for architecture analysis (comprehensive insights worth the cost) or `low` for quick file overviews (saves ~6k tokens).
Basic Usage:
"Use gemini to analyze main.py to understand how it works"
"Get gemini to do an architecture analysis of the src/ directory"
With Web Search (for unfamiliar code):
"Use gemini to analyze this GraphQL schema with web search enabled to understand best practices"
"Analyze this Rust code with gemini - enable web search to look up unfamiliar patterns and idioms"
Managing Token Costs:
# Save tokens for quick overviews
"Use gemini with minimal thinking to analyze what config.py does" (saves ~8k tokens)
"Analyze this utility file with gemini using low thinking" (saves ~6k tokens)
# Default for standard analysis
"Use gemini to analyze the API structure" (uses default medium)
# Invest tokens for deep analysis
"Use gemini with high thinking to analyze the entire codebase architecture" (adds ~8k tokens)
"Get gemini to analyze system design with max thinking for refactoring plan" (adds ~24k tokens)
Collaborative Workflow:
"Analyze our project structure in src/ and identify architectural improvements. Share your
analysis with gemini for a deeper review of design patterns and anti-patterns. Based on both
analyses, create a refactoring roadmap."
"Perform a security analysis of our authentication system. Use gemini to analyze auth/, middleware/, and api/ for vulnerabilities.
Combine your findings with gemini's to create a comprehensive security report."
Key Features:
- With `use_websearch`, can look up framework documentation, design patterns, and best practices relevant to the code being analyzed

`get_version` - Server Information
"Use gemini for its version"
"Get gemini to show server configuration"
All tools that work with files support both individual files and entire directories. The server automatically expands directories, filters for relevant code files, and manages token limits.
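The exact rules live in the server, but conceptually the expansion looks like this sketch (the extension list and character budget are illustrative assumptions, not the server's actual values):

```python
from pathlib import Path

CODE_EXTENSIONS = {".py", ".js", ".ts", ".go", ".java", ".rs"}  # illustrative subset

def expand_paths(paths: list[str], max_chars: int = 400_000) -> list[Path]:
    """Expand directories into code files, stopping near a character budget
    (a rough proxy for the model's token limit)."""
    selected: list[Path] = []
    used = 0
    for p in map(Path, paths):
        candidates = sorted(p.rglob("*")) if p.is_dir() else [p]
        for f in candidates:
            if f.is_file() and f.suffix in CODE_EXTENSIONS:
                size = f.stat().st_size
                if used + size > max_chars:
                    return selected  # budget exhausted; skip the rest
                selected.append(f)
                used += size
    return selected
```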
`analyze` - Analyze files or directories
- `files`: List of file paths or directories (required)
- `question`: What to analyze (required)
- `analysis_type`: architecture|performance|security|quality|general
- `output_format`: summary|detailed|actionable
- `thinking_mode`: minimal|low|medium|high|max (default: medium)
- `use_websearch`: Enable web search for documentation and best practices (default: false)

"Use gemini to analyze the src/ directory for architectural patterns"
"Get gemini to analyze main.py and tests/ to understand test coverage"
`codereview` - Review code files or directories
- `files`: List of file paths or directories (required)
- `review_type`: full|security|performance|quick
- `focus_on`: Specific aspects to focus on
- `standards`: Coding standards to enforce
- `severity_filter`: critical|high|medium|all
- `thinking_mode`: minimal|low|medium|high|max (default: medium)

"Use gemini to review the entire api/ directory for security issues"
"Get gemini to review src/ with focus on performance, only show critical issues"
`debug` - Debug with file context
- `error_description`: Description of the issue (required)
- `error_context`: Stack trace or logs
- `files`: Files or directories related to the issue
- `runtime_info`: Environment details
- `previous_attempts`: What you've tried
- `thinking_mode`: minimal|low|medium|high|max (default: medium)
- `use_websearch`: Enable web search for error messages and solutions (default: false)

"Use gemini to debug this error with context from the entire backend/ directory"
`thinkdeep` - Extended analysis with file context
- `current_analysis`: Your current thinking (required)
- `problem_context`: Additional context
- `focus_areas`: Specific aspects to focus on
- `files`: Files or directories for context
- `thinking_mode`: minimal|low|medium|high|max (default: max)
- `use_websearch`: Enable web search for documentation and insights (default: false)

"Use gemini to think deeper about my design with reference to the src/models/ directory"
"Design a real-time collaborative editor. Use gemini to think deeper about edge cases and scalability.
Implement an improved version incorporating gemini's suggestions."
"Implement JWT authentication. Get gemini to do a security review. Fix any issues gemini identifies and
show me the secure implementation."
"Debug why our API crashes under load. Use gemini to analyze deeper with context from api/handlers/. Implement a
fix based on gemini's root cause analysis."
The server recognizes natural phrases. Just talk normally:
Claude will automatically pick the right tool based on your request:
- "Review this code" → `codereview`
- "Debug this error" → `debug`
- "Analyze these files" → `analyze`
- "Think deeper about this" → `thinkdeep`
All file operations use paths, not content, so your terminal stays readable even with large files.
Tools can reference files for additional context:
"Use gemini to debug this error with context from app.py and config.py"
"Get gemini to think deeper about my design, reference the current architecture.md"
To help choose the right tool for your needs:
Decision Flow:
- Something's broken? → `debug`
- Code needs review? → `codereview`
- Want to understand code? → `analyze`
- Need deeper analysis of existing work? → `thinkdeep`
- Everything else (brainstorming, second opinions)? → `chat`
Key Distinctions:
- `analyze` vs `codereview`: analyze explains, codereview prescribes fixes
- `chat` vs `thinkdeep`: chat is open-ended, thinkdeep extends specific analysis
- `debug` vs `codereview`: debug diagnoses runtime errors, codereview finds static issues

Claude automatically manages thinking modes based on task complexity, but you can also manually control Gemini's reasoning depth to balance response quality against token consumption. Each thinking mode uses a different number of tokens, directly affecting API cost and response time.
| Mode | Token Budget | Use Case | Cost Impact |
|---|---|---|---|
| `minimal` | 128 tokens | Simple, straightforward tasks | Lowest cost |
| `low` | 2,048 tokens | Basic reasoning tasks | 16x more than minimal |
| `medium` | 8,192 tokens | Default - most development tasks | 64x more than minimal |
| `high` | 16,384 tokens | Complex problems requiring thorough analysis (default for `thinkdeep`) | 128x more than minimal |
| `max` | 32,768 tokens | Exhaustive reasoning | 256x more than minimal |
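These budgets presumably map straight onto Gemini 2.5 Pro's configurable thinking budget; a minimal sketch of that mapping (the values come from the table above, the function is illustrative):

```python
# Thinking-token budgets per mode (values from the table above).
THINKING_BUDGETS = {
    "minimal": 128,
    "low": 2_048,
    "medium": 8_192,   # default for most tools
    "high": 16_384,    # default for thinkdeep
    "max": 32_768,
}

def budget_for(mode: str) -> int:
    """Resolve a thinking mode to its token budget, falling back to medium."""
    return THINKING_BUDGETS.get(mode, THINKING_BUDGETS["medium"])
```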
Claude automatically selects appropriate thinking modes, but you can override this by explicitly requesting a specific mode in your prompts. Remember: higher thinking modes = more tokens = higher cost but better quality:
| Your Goal | Example Prompt |
|---|---|
| Auto-managed (recommended) | "Use gemini to review auth.py" (Claude picks the appropriate mode) |
| Override for simple tasks | "Use gemini to format this code with minimal thinking" |
| Override for deep analysis | "Use gemini to review this security module with high thinking mode" |
| Override for maximum depth | "Get gemini to think deeper with max thinking about this architecture" |
| Compare approaches | "First analyze this with low thinking, then again with high thinking" |
In most cases, let Claude automatically manage thinking modes for optimal balance of cost and quality. Override manually when you have specific requirements:
Use lower modes (`minimal`, `low`) to save tokens on simple, well-defined tasks. Use higher modes (`high`, `max`) when quality justifies the cost, such as security audits, complex debugging, and architecture decisions.
Token Cost Examples:
- `minimal` (128 tokens) vs `max` (32,768 tokens) = 256x difference in thinking tokens
- Using `minimal` instead of the default `medium` saves ~8,000 thinking tokens
- For critical code, the extra tokens of `high` or `max` mode are a worthwhile investment

Examples by scenario:
# Quick style check
"Use gemini to review formatting in utils.py with minimal thinking"
# Security audit
"Get gemini to do a security review of auth/ with thinking mode high"
# Complex debugging
"Use gemini to debug this race condition with max thinking mode"
# Architecture analysis
"Analyze the entire src/ directory architecture with high thinking"
The MCP protocol has a combined request+response limit of approximately 25K tokens. This server intelligently works around this limitation by automatically handling large prompts as files:
How it works:
1. The server detects when a prompt is too large to fit within MCP's limits
2. It asks Claude to save the prompt to a temporary file named `prompt.txt` and resend the request with the file path instead
3. The file's contents are then passed into Gemini's 1M-token context, leaving MCP's token budget free for the response
Example scenario:
# You have a massive code review request with detailed context
User: "Use gemini to review this code: [50,000+ character detailed analysis]"
# Server detects the large prompt and responds:
Gemini MCP: "The prompt is too large for MCP's token limits (>50,000 characters).
Please save the prompt text to a temporary file named 'prompt.txt' and resend
the request with an empty prompt string and the absolute file path included
in the files parameter, along with any other files you wish to share as context."
# Claude automatically handles this:
- Saves your prompt to /tmp/prompt.txt
- Resends: "Use gemini to review this code" with files=["/tmp/prompt.txt", "/path/to/code.py"]
# Server processes the large prompt through Gemini's 1M context
# Returns comprehensive analysis within MCP's response limits
This feature ensures you can send arbitrarily large prompts to Gemini without hitting MCP's protocol limitations, while maximizing the available space for detailed responses.
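A sketch of the detection step, assuming the 50,000-character threshold from the message above (the names here are illustrative, not the server's actual code):

```python
MAX_PROMPT_CHARS = 50_000  # beyond this, MCP's ~25K-token limit is at risk

def check_prompt_size(prompt: str) -> dict | None:
    """Return an instruction to resend the prompt as a file if it's too large."""
    if len(prompt) > MAX_PROMPT_CHARS:
        return {
            "status": "requires_file_prompt",  # illustrative status name
            "instructions": (
                "Save the prompt to prompt.txt and resend the request with an "
                "empty prompt, adding the file's absolute path to 'files'."
            ),
        }
    return None  # small enough to send inline
```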
Tools can request additional context from Claude during execution. When Gemini needs more information to provide a thorough analysis, it will ask Claude for specific files or clarification, enabling true collaborative problem-solving.
Example: If Gemini is debugging an error but needs to see a configuration file that wasn't initially provided, it can request:
{
"status": "requires_clarification",
"question": "I need to see the database configuration to understand this connection error",
"files_needed": ["config/database.yml", "src/db_connection.py"]
}
Claude will then provide the requested files and Gemini can continue with a more complete analysis.
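From the client's perspective this is effectively a retry loop; a simplified sketch (`call_tool` stands in for whatever MCP client call Claude uses):

```python
import json

def run_with_clarification(call_tool, tool_name: str, args: dict, max_rounds: int = 3):
    """Call a tool, attaching extra files whenever Gemini asks for them."""
    response = {}
    for _ in range(max_rounds):
        response = json.loads(call_tool(tool_name, args))
        if response.get("status") != "requires_clarification":
            return response  # success or a terminal error
        # Attach the requested files and retry the same request.
        args["files"] = args.get("files", []) + response.get("files_needed", [])
    return response
```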
Smart web search recommendations for enhanced analysis
Web search is now enabled by default for all tools. Instead of performing searches directly, Gemini intelligently analyzes when additional information from the web would enhance its response and provides specific search recommendations for Claude to execute.
How it works: Gemini analyzes your request, determines where current information from the web would strengthen its answer, and returns specific search queries for Claude to run.
Example:
User: "Use gemini to debug this FastAPI async error"
Gemini's Response:
[... debugging analysis ...]
**Recommended Web Searches for Claude:**
- "FastAPI async def vs def performance 2024" - to verify current best practices for async endpoints
- "FastAPI BackgroundTasks memory leak" - to check for known issues with the version you're using
- "FastAPI lifespan context manager pattern" - to explore proper resource management patterns
Claude can then search for these specific topics and provide you with the most current information.
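Because the recommendations arrive as plain markdown, a client could pull the queries out with a simple heuristic like this sketch (the format is taken from the example above; this is not the server's code):

```python
import re

def extract_search_suggestions(response_text: str) -> list[str]:
    """Pull quoted search queries from the 'Recommended Web Searches' section."""
    marker = "**Recommended Web Searches for Claude:**"
    section = response_text.split(marker)[-1]
    return re.findall(r'- "([^"]+)"', section)
```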
Disabling web search: If you prefer Gemini to work only with its training data, you can disable web search:
"Use gemini to review this code with use_websearch false"
All tools now return structured JSON responses for consistent handling:
{
"status": "success|error|requires_clarification",
"content": "The actual response content",
"content_type": "text|markdown|json",
"metadata": {"tool_name": "analyze", ...}
}
This enables better integration, error handling, and support for the dynamic context request feature.
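A consumer can branch on the `status` field; a minimal sketch:

```python
import json

def handle_tool_response(raw: str) -> str:
    """Dispatch on the standardized envelope shown above."""
    response = json.loads(raw)
    if response["status"] == "success":
        return response["content"]
    if response["status"] == "requires_clarification":
        raise RuntimeError(f"Needs more context: {response['question']}")
    raise RuntimeError(f"Tool error: {response.get('content', 'unknown')}")
```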
The server includes several configurable properties that control its behavior:
- `GEMINI_MODEL`: `"gemini-2.5-pro-preview-06-05"` - The latest Gemini 2.5 Pro model with native thinking support
- `MAX_CONTEXT_TOKENS`: `1,000,000` - Maximum input context (1M tokens for Gemini 2.5 Pro)

Different tools use optimized temperature settings:
- `TEMPERATURE_ANALYTICAL`: `0.2` - Used for code review and debugging (focused, deterministic)
- `TEMPERATURE_BALANCED`: `0.5` - Used for general chat (balanced creativity/accuracy)
- `TEMPERATURE_CREATIVE`: `0.7` - Used for deep thinking and architecture (more creative)

All file paths must be absolute paths.
When using any Gemini tool, always provide absolute paths:
✅ "Use gemini to analyze /Users/you/project/src/main.py"
❌ "Use gemini to analyze ./src/main.py" (will be rejected)
By default, the server allows access to files within your home directory. This is necessary for the server to work with any file you might want to analyze from Claude.
To restrict access to a specific project directory, set the `MCP_PROJECT_ROOT` environment variable:
"env": {
"GEMINI_API_KEY": "your-key",
"MCP_PROJECT_ROOT": "/Users/you/specific-project"
}
This creates a sandbox limiting file access to only that directory and its subdirectories.
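Conceptually, the sandbox check resolves each requested path and verifies it stays under the configured root; a sketch of that idea (not the server's actual implementation):

```python
import os
from pathlib import Path

PROJECT_ROOT = Path(os.environ.get("MCP_PROJECT_ROOT", Path.home())).resolve()

def validate_path(raw: str) -> Path:
    """Reject relative paths and anything escaping the sandbox root."""
    path = Path(raw)
    if not path.is_absolute():
        raise ValueError(f"Relative paths are rejected: {raw}")
    resolved = path.resolve()  # collapses symlinks and ../ tricks
    if not resolved.is_relative_to(PROJECT_ROOT):
        raise PermissionError(f"{raw} is outside {PROJECT_ROOT}")
    return resolved
```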
Clone the repository:
git clone https://github.com/BeehiveInnovations/gemini-mcp-server.git
cd gemini-mcp-server
Create virtual environment:
python3 -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
Install dependencies:
pip install -r requirements.txt
Set your Gemini API key:
export GEMINI_API_KEY="your-api-key-here"
The server uses carefully crafted system prompts to give each tool specialized expertise:
- All tool prompts are defined in `prompts/tool_prompts.py`
- Each tool inherits from `BaseTool` and implements `get_system_prompt()`
User Request → Tool Selection → System Prompt + Context → Gemini Response
Each tool has a unique system prompt that defines its role and approach:
- `thinkdeep`: Acts as a senior development partner, challenging assumptions and finding edge cases
- `codereview`: Expert code reviewer with security/performance focus, uses severity levels
- `debug`: Systematic debugger providing root cause analysis and prevention strategies
- `analyze`: Code analyst focusing on architecture, patterns, and actionable insights

To modify tool behavior, you can:
- Edit `prompts/tool_prompts.py` for global changes
- Override `get_system_prompt()` in a tool class for tool-specific changes
- Adjust the `temperature` parameter to change response style (0.2 for focused, 0.7 for creative)

We welcome contributions! The modular architecture makes it easy to add new tools:
1. Create a new tool file in `tools/`
2. Inherit from `BaseTool`
3. Implement the required methods (including `get_system_prompt()`)
4. Add your system prompt to `prompts/tool_prompts.py`
5. Register the tool in the `TOOLS` dict in `server.py`
See existing tools for examples.
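A skeletal tool following those steps might look like this (the import path, attribute names, and registration details are assumptions; check `tools/` for the real interface):

```python
# tools/wordcount.py - a hypothetical minimal tool
from tools.base import BaseTool  # assumed import path

class WordCountTool(BaseTool):
    name = "wordcount"
    description = "Count words in the provided files"

    def get_system_prompt(self) -> str:
        # In practice, keep the prompt text in prompts/tool_prompts.py (step 4).
        return "You are a concise assistant that reports word counts."

# Step 5: register it in server.py, e.g.
# TOOLS = {..., "wordcount": WordCountTool()}
```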
The project includes comprehensive unit tests that use mocks and don't require a Gemini API key:
# Run all unit tests
python -m pytest tests/ --ignore=tests/test_live_integration.py -v
# Run with coverage
python -m pytest tests/ --ignore=tests/test_live_integration.py --cov=. --cov-report=html
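The suite stubs out the Gemini client rather than calling the API; the general pattern looks something like this (a generic illustration, not a test from the repo):

```python
from unittest.mock import MagicMock

def test_mocked_gemini_response():
    """Stub the model object so no API key or network access is needed."""
    model = MagicMock()
    model.generate_content.return_value = MagicMock(text="mocked analysis")
    assert model.generate_content("prompt").text == "mocked analysis"
```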
To test actual API integration:
# Set your API key
export GEMINI_API_KEY=your-api-key-here
# Run live integration tests
python tests/test_live_integration.py
The project includes GitHub Actions workflows for continuous integration. The CI pipeline works without any secrets and passes all tests using mocked responses. Live integration tests run only if a `GEMINI_API_KEY` secret is configured in the repository.
Error: spawn P:\path\to\run_gemini.bat ENOENT
This error occurs when Claude Desktop (running on Windows) can't properly execute the server. Common causes:
- Wrong execution environment: you're trying to run WSL-based code from Windows. Launch the server through `wsl.exe` instead (see the Windows Setup Guide above)
- Path format mismatch: using Linux paths (`/mnt/c/...`) in a Windows context. Invoke the script via `wsl.exe` so the paths resolve correctly
- Missing dependencies: Python or required packages not installed in the execution environment
Testing your setup:
- Run `test_wsl_setup.bat` to verify your WSL configuration
- Check your Python version: `python --version` (Windows) or `wsl python3 --version` (WSL)

"ModuleNotFoundError: No module named 'mcp'" or "No matching distribution found for mcp"
- The `mcp` package requires Python 3.10+
- Check your version: `python3 --version` or `python --version`
- Re-run the setup script: `./setup.sh` (macOS/Linux) or `setup.bat` (Windows)

Or reinstall the dependencies manually:
# macOS/Linux:
source venv/bin/activate
pip install -r requirements.txt
# Windows:
venv\Scripts\activate.bat
pip install -r requirements.txt
"Virtual environment not found" warning
"GEMINI_API_KEY environment variable is required"
- Add your Gemini API key to the `env` section of your MCP server config

"Connection failed" in Claude Desktop
- Check that the path in your config is correct (use double backslashes `\\` for Windows paths)
- Make sure the script is executable (`chmod +x run_gemini.sh`)

Performance issues with WSL
- Files on the Windows filesystem (`/mnt/c/`) are slower to access from WSL
- Keep the project in the WSL filesystem (e.g., `~/gemini-mcp-server`)
)MIT License - see LICENSE file for details.
Built with the power of Claude + Gemini collaboration 🤝