Gemini CLI documentation
Gemini CLI is an open-source AI agent that brings the power of Gemini directly into your terminal. It is designed to be a terminal-first, extensible, and powerful tool for developers, engineers, SREs, and beyond.
Gemini CLI integrates with your local environment. It can read and edit files, execute shell commands, and search the web, all while maintaining your project context.
Get started
Begin your journey with Gemini CLI by setting up your environment and learning the basics.
- Quickstart: A streamlined guide to get you chatting in minutes.
- Installation: Instructions for macOS, Linux, and Windows; a typical install is shown after this list.
- Authentication: Set up access using Google OAuth, API keys, or Vertex AI.
- Examples: View common usage scenarios to inspire your own workflows.
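For orientation, a typical npm-based setup looks like the sketch below; see the Installation and Quickstart pages for current prerequisites and other install options. The project directory name is a placeholder.

```sh
# Install globally from npm (requires a recent Node.js), or run ad hoc with
# `npx @google/gemini-cli` instead of installing.
npm install -g @google/gemini-cli

# Launch an interactive session from your project directory ("my-project" is
# a placeholder); the first run prompts you to choose an authentication method.
cd my-project
gemini
```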
Use Gemini CLI
Master the core capabilities that let Gemini CLI interact with your system safely and effectively.
- Using the CLI: Learn the basics of the command-line interface.
- File management: Grant the model the ability to read code and apply changes directly to your files.
- Shell commands: Allow the model to run builds, tests, and git commands.
- Memory management: Teach Gemini CLI facts about your project and preferences that persist across sessions.
- Project context: Use `GEMINI.md` files to provide persistent context for your projects (see the example after this list).
- Web search and fetch: Enable the model to fetch real-time information from the internet.
- Session management: Save, resume, and organize your chat sessions.
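As a sketch of the Project context entry above: `GEMINI.md` is ordinary Markdown that Gemini CLI loads into the model's context for sessions in that project. The content below is purely illustrative.

```sh
# Create a GEMINI.md at the project root; structure it however you like.
cat > GEMINI.md <<'EOF'
# Project notes for Gemini CLI

- This is a TypeScript monorepo; packages live under packages/.
- Run `npm test` before proposing changes.
- Prefer small, focused diffs with conventional commit messages.
EOF
```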
Configuration
Customize Gemini CLI to match your workflow and preferences.
- Settings: Control response creativity, output verbosity, and more.
- Model selection: Choose the best Gemini model for your specific task.
- Ignore files: Use a `.geminiignore` file to keep sensitive files out of the model's context (see the example after this list).
- Trusted folders: Define security boundaries for file access and execution.
- Token caching: Optimize performance and cost by caching context.
- Themes: Personalize the visual appearance of the CLI.
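As a sketch of the Ignore files entry above: `.geminiignore` uses `.gitignore`-style patterns, and the entries below are only examples.

```sh
# Exclude secrets and bulky generated content from the model's context.
cat > .geminiignore <<'EOF'
# Secrets and credentials
.env
*.pem

# Build output and dependencies
dist/
node_modules/
EOF
```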
Advanced features
Explore powerful features for complex workflows and enterprise environments.
- Headless mode: Run Gemini CLI in scripts or CI/CD pipelines for automated workflows (see the example after this list).
- Sandboxing: Execute untrusted code or tools in a secure, isolated container.
- Checkpointing: Save and restore workspace state to recover from experimental changes.
- Custom commands: Create shortcuts for frequently used prompts.
- System prompt override: Customize the core instructions given to the model.
- Telemetry: Understand how usage data is collected and managed.
- Enterprise: Manage configurations and policies for large teams.
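As a sketch of the headless entry above, hedged because flag names can change between releases: the `-p`/`--prompt` flag runs a single prompt non-interactively and prints the response to stdout, and piped stdin is added to the prompt context.

```sh
#!/usr/bin/env bash
# Example CI step: ask Gemini CLI to review the latest commit's diff.
set -euo pipefail

git diff HEAD~1 | gemini -p "Review this diff and flag any risky changes" > review.txt
```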
Extensions
Extend Gemini CLI's capabilities with new tools and behaviors using extensions.
- Introduction: Learn about the extension system and how to manage extensions.
- Writing extensions: Learn how to create your first extension; a minimal layout is sketched after this list.
- Extensions reference: Deeply understand the extension format, commands, and configuration.
- Best practices: Learn strategies for building great extensions.
- Extensions releasing: Learn how to share your extensions with the world.
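As a rough sketch of an extension's on-disk layout: the manifest name (`gemini-extension.json`) and the fields shown are best-effort assumptions, so confirm them against the extensions reference before publishing.

```sh
# Scaffold a minimal extension: a manifest plus an optional context file that
# is loaded whenever the extension is active. Field names are assumptions.
mkdir -p my-extension
cat > my-extension/gemini-extension.json <<'EOF'
{
  "name": "my-extension",
  "version": "0.1.0",
  "contextFileName": "GEMINI.md"
}
EOF

cat > my-extension/GEMINI.md <<'EOF'
You are assisting with the my-extension workflow. Prefer its CLI over raw API calls.
EOF
```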
Ecosystem and extensibility
Connect Gemini CLI to external services and other development tools.
- MCP servers: Connect to external services using the Model Context Protocol (see the configuration example after this list).
- IDE integration: Use Gemini CLI alongside VS Code.
- Hooks: (Preview) Write scripts that run on specific CLI events.
- Agent skills: (Preview) Add specialized expertise and workflows.
- Sub-agents: (Preview) Delegate tasks to specialized agents.
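As a sketch of the MCP servers entry above: servers are registered in a `settings.json` under an `mcpServers` key (key names here are from memory, so check the MCP servers guide), with project-level settings in `.gemini/` and user-level settings in `~/.gemini/`. The server name and package below are placeholders.

```sh
# Register an MCP server in project-level settings. Note: this overwrites any
# existing .gemini/settings.json; merge by hand in a real project.
mkdir -p .gemini
cat > .gemini/settings.json <<'EOF'
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"]
    }
  }
}
EOF
```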
Development and reference
Deep dive into the architecture and contribute to the project.
- Architecture: Understand the technical design of Gemini CLI.
- Command reference: A complete list of available commands.
- Local development: Set up your environment to contribute to Gemini CLI (see the sketch after this list).
- Contributing: Learn how to submit pull requests and report issues.
- FAQ: Answers to common questions.
- Troubleshooting: Solutions for common issues.
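To get started on a contribution, the steps below are a rough sketch; the exact npm scripts are assumptions, and the local development guide is authoritative.

```sh
# Clone and build Gemini CLI from source.
git clone https://github.com/google-gemini/gemini-cli.git
cd gemini-cli
npm install
npm run build
```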