Updated ToC on docs intro; updated title casing to match Google style (#13717)

This commit is contained in:
David Huntsperger
2025-12-01 11:38:48 -08:00
committed by GitHub
parent bde8b78a88
commit 26f050ff10
58 changed files with 660 additions and 642 deletions

View File

@@ -1,4 +1,4 @@
# How to Contribute
# How to contribute
We would love to accept your patches and contributions to this project. This
document includes:

View File

@@ -37,7 +37,7 @@ input:
- **Interaction:** `packages/core` invokes these tools based on requests
from the Gemini model.
## Interaction Flow
## Interaction flow
A typical interaction with the Gemini CLI follows this flow:
@@ -69,7 +69,7 @@ A typical interaction with the Gemini CLI follows this flow:
7. **Display to user:** The CLI package formats and displays the response to
the user in the terminal.
## Key Design Principles
## Key design principles
- **Modularity:** Separating the CLI (frontend) from the Core (backend) allows
for independent development and potential future extensions (e.g., different

View File

@@ -1,3 +1,3 @@
# Authentication Setup
# Authentication setup
See: [Getting Started - Authentication Setup](../get-started/authentication.md).

View File

@@ -6,19 +6,19 @@ AI-powered tools. This allows you to safely experiment with and apply code
changes, knowing you can instantly revert back to the state before the tool was
run.
## How It Works
## How it works
When you approve a tool that modifies the file system (like `write_file` or
`replace`), the CLI automatically creates a "checkpoint." This checkpoint
includes:
1. **A Git Snapshot:** A commit is made in a special, shadow Git repository
1. **A Git snapshot:** A commit is made in a special, shadow Git repository
located in your home directory (`~/.gemini/history/<project_hash>`). This
snapshot captures the complete state of your project files at that moment.
It does **not** interfere with your own project's Git repository.
2. **Conversation History:** The entire conversation you've had with the agent
2. **Conversation history:** The entire conversation you've had with the agent
up to that point is saved.
3. **The Tool Call:** The specific tool call that was about to be executed is
3. **The tool call:** The specific tool call that was about to be executed is
also stored.
If you want to undo the change or simply go back, you can use the `/restore`
@@ -35,7 +35,7 @@ repository while the conversation history and tool calls are saved in a JSON
file in your project's temporary directory, typically located at
`~/.gemini/tmp/<project_hash>/checkpoints`.
## Enabling the Feature
## Enabling the feature
The Checkpointing feature is disabled by default. To enable it, you need to edit
your `settings.json` file.
@@ -56,12 +56,12 @@ Add the following key to your `settings.json`:
}
```
## Using the `/restore` Command
## Using the `/restore` command
Once enabled, checkpoints are created automatically. To manage them, you use the
`/restore` command.
### List Available Checkpoints
### List available checkpoints
To see a list of all saved checkpoints for the current project, simply run:
@@ -74,7 +74,7 @@ typically composed of a timestamp, the name of the file being modified, and the
name of the tool that was about to be run (e.g.,
`2025-06-22T10-00-00_000Z-my-file.txt-write_file`).
### Restore a Specific Checkpoint
### Restore a specific checkpoint
To restore your project to a specific checkpoint, use the checkpoint file from
the list:

View File

@@ -1,4 +1,4 @@
# CLI Commands
# CLI commands
Gemini CLI supports several built-in commands to help you manage your session,
customize the interface, and control its behavior. These commands are prefixed
@@ -26,7 +26,7 @@ Slash commands provide meta-level control over the CLI itself.
- **Description:** Saves the current conversation history. You must add a
`<tag>` for identifying the conversation state.
- **Usage:** `/chat save <tag>`
- **Details on Checkpoint Location:** The default locations for saved chat
- **Details on checkpoint location:** The default locations for saved chat
checkpoints are:
- Linux/macOS: `~/.gemini/tmp/<project_hash>/`
- Windows: `C:\Users\<YourUsername>\.gemini\tmp\<project_hash>\`
@@ -256,13 +256,13 @@ Slash commands provide meta-level control over the CLI itself.
file, making it simpler for them to provide project-specific instructions to
the Gemini agent.
### Custom Commands
### Custom commands
Custom commands allow you to create personalized shortcuts for your most-used
prompts. For detailed instructions on how to create, manage, and use them,
please see the dedicated [Custom Commands documentation](./custom-commands.md).
## Input Prompt Shortcuts
## Input prompt shortcuts
These shortcuts apply directly to the input prompt for text manipulation.
@@ -320,7 +320,7 @@ your prompt to Gemini. These commands include git-aware filtering.
- If the `read_many_files` tool encounters an error (e.g., permission issues),
this will also be reported.
## Shell mode & passthrough commands (`!`)
## Shell mode and passthrough commands (`!`)
The `!` prefix lets you interact with your system's shell directly from within
Gemini CLI.
@@ -348,7 +348,7 @@ Gemini CLI.
- **Caution for all `!` usage:** Commands you execute in shell mode have the
same permissions and impact as if you ran them directly in your terminal.
- **Environment Variable:** When a command is executed via `!` or in shell mode,
- **Environment variable:** When a command is executed via `!` or in shell mode,
the `GEMINI_CLI=1` environment variable is set in the subprocess's
environment. This allows scripts or tools to detect if they are being run from
within the Gemini CLI.
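As a quick sketch, a script launched via `!` could branch on this variable; the helper name here is illustrative, not part of the CLI:

```shell
# Illustrative: branch on the GEMINI_CLI variable described above,
# which the CLI sets to 1 in the subprocess environment.
in_gemini_cli() {
  [ "${GEMINI_CLI:-}" = "1" ]
}

if in_gemini_cli; then
  echo "running under Gemini CLI"
else
  echo "running in a plain shell"
fi
```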

View File

@@ -1,4 +1,4 @@
# Gemini CLI Configuration
# Gemini CLI configuration
Gemini CLI offers several ways to configure its behavior, including environment
variables, command-line arguments, and settings files. This document outlines
@@ -144,7 +144,7 @@ contain other project-specific files related to Gemini CLI's operation, such as:
be ignored if `--allowed-mcp-server-names` is set.
- **Default**: No MCP servers excluded.
- **Example:** `"excludeMCPServers": ["myNodeServer"]`.
- **Security Note:** This uses simple string matching on MCP server names,
- **Security note:** This uses simple string matching on MCP server names,
which can be modified. If you're a system administrator looking to prevent
users from bypassing this, consider configuring the `mcpServers` at the
system settings level such that the user will not be able to configure any
@@ -423,7 +423,7 @@ contain other project-specific files related to Gemini CLI's operation, such as:
}
```
## Shell History
## Shell history
The CLI keeps a history of shell commands you run. To avoid conflicts between
different projects, this history is stored in a project-specific directory
@@ -434,7 +434,7 @@ within your user's home folder.
path.
- The history is stored in a file named `shell_history`.
## Environment Variables & `.env` Files
## Environment variables and `.env` files
Environment variables are a common way to configure applications, especially for
sensitive information like API keys or for settings that might change between
@@ -449,7 +449,7 @@ loading order is:
the home directory.
3. If still not found, it looks for `~/.env` (in the user's home directory).
**Environment Variable Exclusion:** Some environment variables (like `DEBUG` and
**Environment variable exclusion:** Some environment variables (like `DEBUG` and
`DEBUG_MODE`) are automatically excluded from being loaded from project `.env`
files to prevent interference with gemini-cli behavior. Variables from
`.gemini/.env` files are never excluded. You can customize this behavior using
@@ -486,7 +486,7 @@ the `excludedProjectEnvVars` setting in your `settings.json` file.
- Required for using Code Assist or Vertex AI.
- If using Vertex AI, ensure you have the necessary permissions in this
project.
- **Cloud Shell Note:** When running in a Cloud Shell environment, this
- **Cloud shell note:** When running in a Cloud Shell environment, this
variable defaults to a special project allocated for Cloud Shell users. If
you have `GOOGLE_CLOUD_PROJECT` set in your global environment in Cloud
Shell, it will be overridden by this default. To use a different project in
@@ -547,7 +547,7 @@ the `excludedProjectEnvVars` setting in your `settings.json` file.
relative. `~` is supported for the home directory. **Note: This will
overwrite the file if it already exists.**
## Command-Line Arguments
## Command-line arguments
Arguments passed directly when running the CLI can override other configurations
for that specific session.
@@ -606,7 +606,7 @@ for that specific session.
- **`--version`**:
- Displays the version of the CLI.
## Context Files (Hierarchical Instructional Context)
## Context files (hierarchical instructional context)
While not strictly configuration for the CLI's _behavior_, context files
(defaulting to `GEMINI.md` but configurable via the `contextFileName` setting)
@@ -622,7 +622,7 @@ context.
that you want the Gemini model to be aware of during your interactions. The
system is designed to manage this instructional context hierarchically.
### Example Context File Content (e.g., `GEMINI.md`)
### Example context file content (e.g., `GEMINI.md`)
Here's a conceptual example of what a context file at the root of a TypeScript
project might contain:
@@ -663,23 +663,23 @@ more relevant and precise your context files are, the better the AI can assist
you. Project-specific context files are highly encouraged to establish
conventions and context.
- **Hierarchical Loading and Precedence:** The CLI implements a sophisticated
- **Hierarchical loading and precedence:** The CLI implements a sophisticated
hierarchical memory system by loading context files (e.g., `GEMINI.md`) from
several locations. Content from files lower in this list (more specific)
typically overrides or supplements content from files higher up (more
general). The exact concatenation order and final context can be inspected
using the `/memory show` command. The typical loading order is:
1. **Global Context File:**
1. **Global context file:**
- Location: `~/.gemini/<contextFileName>` (e.g., `~/.gemini/GEMINI.md` in
your user home directory).
- Scope: Provides default instructions for all your projects.
2. **Project Root & Ancestors Context Files:**
2. **Project root and ancestors context files:**
- Location: The CLI searches for the configured context file in the
current working directory and then in each parent directory up to either
the project root (identified by a `.git` folder) or your home directory.
- Scope: Provides context relevant to the entire project or a significant
portion of it.
3. **Sub-directory Context Files (Contextual/Local):**
3. **Sub-directory context files (contextual/local):**
- Location: The CLI also scans for the configured context file in
subdirectories _below_ the current working directory (respecting common
ignore patterns like `node_modules`, `.git`, etc.). The breadth of this
@@ -687,15 +687,15 @@ conventions and context.
with a `memoryDiscoveryMaxDirs` field in your `settings.json` file.
- Scope: Allows for highly specific instructions relevant to a particular
component, module, or subsection of your project.
- **Concatenation & UI Indication:** The contents of all found context files are
concatenated (with separators indicating their origin and path) and provided
as part of the system prompt to the Gemini model. The CLI footer displays the
count of loaded context files, giving you a quick visual cue about the active
instructional context.
- **Importing Content:** You can modularize your context files by importing
- **Concatenation and UI indication:** The contents of all found context files
are concatenated (with separators indicating their origin and path) and
provided as part of the system prompt to the Gemini model. The CLI footer
displays the count of loaded context files, giving you a quick visual cue
about the active instructional context.
- **Importing content:** You can modularize your context files by importing
other Markdown files using the `@path/to/file.md` syntax. For more details,
see the [Memory Import Processor documentation](../core/memport.md).
- **Commands for Memory Management:**
- **Commands for memory management:**
- Use `/memory refresh` to force a re-scan and reload of all context files
from all configured locations. This updates the AI's instructional context.
- Use `/memory show` to display the combined instructional context currently
@@ -742,7 +742,7 @@ sandbox image:
BUILD_SANDBOX=1 gemini -s
```
## Usage Statistics
## Usage statistics
To help us improve the Gemini CLI, we collect anonymized usage statistics. This
data helps us understand how the CLI is used, identify common issues, and
@@ -750,22 +750,22 @@ prioritize new features.
**What we collect:**
- **Tool Calls:** We log the names of the tools that are called, whether they
- **Tool calls:** We log the names of the tools that are called, whether they
succeed or fail, and how long they take to execute. We do not collect the
arguments passed to the tools or any data returned by them.
- **API Requests:** We log the Gemini model used for each request, the duration
- **API requests:** We log the Gemini model used for each request, the duration
of the request, and whether it was successful. We do not collect the content
of the prompts or responses.
- **Session Information:** We collect information about the configuration of the
- **Session information:** We collect information about the configuration of the
CLI, such as the enabled tools and the approval mode.
**What we DON'T collect:**
- **Personally Identifiable Information (PII):** We do not collect any personal
- **Personally identifiable information (PII):** We do not collect any personal
information, such as your name, email address, or API keys.
- **Prompt and Response Content:** We do not log the content of your prompts or
- **Prompt and response content:** We do not log the content of your prompts or
the responses from the Gemini model.
- **File Content:** We do not log the content of any files that are read or
- **File content:** We do not log the content of any files that are read or
written by the CLI.
**How to opt out:**

View File

@@ -1,4 +1,4 @@
# Custom Commands
# Custom commands
Custom commands let you save and reuse your favorite or most frequently used
prompts as personal shortcuts within Gemini CLI. You can create commands that
@@ -9,9 +9,9 @@ all your projects, streamlining your workflow and ensuring consistency.
Gemini CLI discovers commands from two locations, loaded in a specific order:
1. **User Commands (Global):** Located in `~/.gemini/commands/`. These commands
1. **User commands (global):** Located in `~/.gemini/commands/`. These commands
are available in any project you are working on.
2. **Project Commands (Local):** Located in
2. **Project commands (local):** Located in
`<your-project-root>/.gemini/commands/`. These commands are specific to the
current project and can be checked into version control to be shared with
your team.
@@ -30,7 +30,7 @@ separator (`/` or `\`) being converted to a colon (`:`).
- A file at `<project>/.gemini/commands/git/commit.toml` becomes the namespaced
command `/git:commit`.
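The mapping above can be sketched as a small shell function (illustrative only; the CLI performs this conversion internally):

```shell
# Sketch of the path-to-command-name mapping: drop everything up to the
# commands/ root, strip the .toml extension, and turn path separators
# into colons.
to_command_name() {
  rel=${1#*/commands/}
  rel=${rel%.toml}
  printf '/%s\n' "$(printf '%s' "$rel" | tr '/' ':')"
}

to_command_name ".gemini/commands/git/commit.toml"   # → /git:commit
```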
## TOML File Format (v1)
## TOML file format (v1)
Your command definition files must be written in the TOML format and use the
`.toml` file extension.
@@ -60,7 +60,7 @@ replace that placeholder with the text the user typed after the command name.
The behavior of this injection depends on where it is used:
**A. Raw injection (outside Shell commands)**
**A. Raw injection (outside shell commands)**
When used in the main body of the prompt, the arguments are injected exactly as
the user typed them.
@@ -77,7 +77,7 @@ prompt = "Please provide a code fix for the issue described here: {{args}}."
The model receives:
`Please provide a code fix for the issue described here: "Button is misaligned".`
**B. Using arguments in Shell commands (inside `!{...}` blocks)**
**B. Using arguments in shell commands (inside `!{...}` blocks)**
When you use `{{args}}` inside a shell injection block (`!{...}`), the arguments
are automatically **shell-escaped** before replacement. This allows you to
@@ -156,7 +156,7 @@ When you run `/changelog 1.2.0 added "New feature"`, the final text sent to the
model will be the original prompt followed by two newlines and the command you
typed.
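The shell-escaping behavior described above can be illustrated with a small sketch. This is not the CLI's implementation, just the classic single-quote-wrapping technique that neutralizes shell metacharacters in user-supplied text:

```shell
# Illustrative sketch: wrap user text in single quotes, escaping any
# embedded single quotes, so metacharacters like ; are treated as
# literal text rather than executed.
shell_escape() {
  printf "'%s'" "$(printf '%s' "$1" | sed "s/'/'\\\\''/g")"
}

user_args='hello; touch injected.txt'
eval "echo $(shell_escape "$user_args")"
# → hello; touch injected.txt   (and no injected.txt file is created)
```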
### 3. Executing Shell commands with `!{...}`
### 3. Executing shell commands with `!{...}`
You can make your commands dynamic by executing shell commands directly within
your `prompt` and injecting their output. This is ideal for gathering context
@@ -302,7 +302,7 @@ Your response should include:
"""
```
**3. Run the Command:**
**3. Run the command:**
That's it! You can now run your command in the CLI. First, you might add a file
to the context, and then invoke your command:

View File

@@ -1,11 +1,11 @@
# Gemini CLI for the Enterprise
# Gemini CLI for the enterprise
This document outlines configuration patterns and best practices for deploying
and managing Gemini CLI in an enterprise environment. By leveraging system-level
settings, administrators can enforce security policies, manage tool access, and
ensure a consistent experience for all users.
> **A Note on Security:** The patterns described in this document are intended
> **A note on security:** The patterns described in this document are intended
> to help administrators create a more controlled and secure environment for
> using Gemini CLI. However, they should not be considered a foolproof security
> boundary. A determined user with sufficient privileges on their local machine
@@ -14,7 +14,7 @@ ensure a consistent experience for all users.
> managed environment, not to defend against a malicious actor with local
> administrative rights.
## Centralized Configuration: The System Settings File
## Centralized configuration: The system settings file
The most powerful tools for enterprise administration are the system-wide
settings files. These files allow you to define a baseline configuration
@@ -33,11 +33,11 @@ settings (like `theme`) is:
This means the System Overrides file has the final say. For settings that are
arrays (`includeDirectories`) or objects (`mcpServers`), the values are merged.
**Example of Merging and Precedence:**
**Example of merging and precedence:**
Here is how settings from different levels are combined.
- **System Defaults `system-defaults.json`:**
- **System defaults `system-defaults.json`:**
```json
{
@@ -89,7 +89,7 @@ Here is how settings from different levels are combined.
}
```
- **System Overrides `settings.json`:**
- **System overrides `settings.json`:**
```json
{
"ui": {
@@ -108,7 +108,7 @@ Here is how settings from different levels are combined.
This results in the following merged configuration:
- **Final Merged Configuration:**
- **Final merged configuration:**
```json
{
"ui": {
@@ -159,7 +159,7 @@ This results in the following merged configuration:
By using the system settings file, you can enforce the security and
configuration patterns described below.
## Restricting Tool Access
## Restricting tool access
You can significantly enhance security by controlling which tools the Gemini
model can use. This is achieved through the `tools.core` and `tools.exclude`
@@ -197,12 +197,12 @@ environment to a blocklist.
}
```
**Security Note:** Blocklisting with `excludeTools` is less secure than
**Security note:** Blocklisting with `excludeTools` is less secure than
allowlisting with `coreTools`, as it relies on blocking known-bad commands, and
clever users may find ways to bypass simple string-based blocks. **Allowlisting
is the recommended approach.**
### Disabling YOLO Mode
### Disabling YOLO mode
To ensure that users cannot bypass the confirmation prompt for tool execution,
you can disable YOLO mode at the policy level. This adds a critical layer of
@@ -222,14 +222,14 @@ approval.
This setting is highly recommended in an enterprise environment to prevent
unintended tool execution.
## Managing Custom Tools (MCP Servers)
## Managing custom tools (MCP servers)
If your organization uses custom tools via
[Model-Context Protocol (MCP) servers](../core/tools-api.md), it is crucial to
understand how server configurations are managed to apply security policies
effectively.
### How MCP Server Configurations are Merged
### How MCP server configurations are merged
Gemini CLI loads `settings.json` files from three levels: System, Workspace, and
User. When it comes to the `mcpServers` object, these configurations are
@@ -246,12 +246,12 @@ This means a user **cannot** override the definition of a server that is already
defined in the system-level settings. However, they **can** add new servers with
unique names.
### Enforcing a Catalog of Tools
### Enforcing a catalog of tools
The security of your MCP tool ecosystem depends on a combination of defining the
canonical servers and adding their names to an allowlist.
### Restricting Tools Within an MCP Server
### Restricting tools within an MCP server
For even greater security, especially when dealing with third-party MCP servers,
you can restrict which specific tools from a server are exposed to the model.
@@ -280,7 +280,7 @@ third-party MCP server, even if the server offers other tools like
}
```
#### More Secure Pattern: Define and Add to Allowlist in System Settings
#### More secure pattern: Define and add to allowlist in system settings
To create a secure, centrally-managed catalog of tools, the system administrator
**must** do both of the following in the system-level `settings.json` file:
@@ -293,7 +293,7 @@ To create a secure, centrally-managed catalog of tools, the system administrator
any servers that are not on this list. If this setting is omitted, the CLI
will merge and allow any server defined by the user.
**Example System `settings.json`:**
**Example system `settings.json`:**
1. Add the _names_ of all approved servers to an allowlist. This will prevent
users from adding their own servers.
@@ -322,12 +322,12 @@ Any server a user defines will either be overridden by the system definition (if
it has the same name) or blocked because its name is not in the `mcp.allowed`
list.
### Less Secure Pattern: Omitting the Allowlist
### Less secure pattern: Omitting the allowlist
If the administrator defines the `mcpServers` object but fails to also specify
the `mcp.allowed` allowlist, users may add their own servers.
**Example System `settings.json`:**
**Example system `settings.json`:**
This configuration defines servers but does not enforce the allowlist. The
administrator has NOT included the "mcp.allowed" setting.
@@ -347,7 +347,7 @@ In this scenario, a user can add their own server in their local
results, the user's server will be added to the list of available tools and
allowed to run.
## Enforcing Sandboxing for Security
## Enforcing sandboxing for security
To mitigate the risk of potentially harmful operations, you can enforce the use
of sandboxing for all tool execution. The sandbox isolates tool execution in a
@@ -367,14 +367,14 @@ You can also specify a custom, hardened Docker image for the sandbox by building
a custom `sandbox.Dockerfile` as described in the
[Sandboxing documentation](./sandbox.md).
## Controlling Network Access via Proxy
## Controlling network access via proxy
In corporate environments with strict network policies, you can configure Gemini
CLI to route all outbound traffic through a corporate proxy. This can be set via
an environment variable, but it can also be enforced for custom tools via the
`mcpServers` configuration.
**Example (for an MCP Server):**
**Example (for an MCP server):**
```json
{
@@ -391,7 +391,7 @@ an environment variable, but it can also be enforced for custom tools via the
}
```
## Telemetry and Auditing
## Telemetry and auditing
For auditing and monitoring purposes, you can configure Gemini CLI to send
telemetry data to a central location. This allows you to track tool usage and
@@ -434,7 +434,7 @@ prompted to switch to the enforced method. In non-interactive mode, the CLI will
exit with an error if the configured authentication method does not match the
enforced one.
## Putting It All Together: Example System `settings.json`
## Putting it all together: Example system `settings.json`
Here is an example of a system `settings.json` file that combines several of the
patterns discussed above to create a secure, controlled environment for Gemini

View File

@@ -1,4 +1,4 @@
# Ignoring Files
# Ignoring files
This document provides an overview of the Gemini Ignore (`.geminiignore`)
feature of the Gemini CLI.

View File

@@ -1,4 +1,4 @@
# Provide Context with GEMINI.md Files
# Provide context with GEMINI.md files
Context files, which use the default name `GEMINI.md`, are a powerful feature
for providing instructional context to the Gemini model. You can use these files

View File

@@ -1,4 +1,4 @@
# Headless Mode
# Headless mode
Headless mode allows you to run Gemini CLI programmatically from command line
scripts and automation tools without any interactive UI. This is ideal for
@@ -45,9 +45,9 @@ The headless mode provides a headless interface to Gemini CLI that:
- Enables automation and scripting workflows
- Provides consistent exit codes for error handling
## Basic Usage
## Basic usage
### Direct Prompts
### Direct prompts
Use the `--prompt` (or `-p`) flag to run in headless mode:
@@ -55,7 +55,7 @@ Use the `--prompt` (or `-p`) flag to run in headless mode:
gemini --prompt "What is machine learning?"
```
### Stdin Input
### Stdin input
Pipe input to Gemini CLI from your terminal:
@@ -63,7 +63,7 @@ Pipe input to Gemini CLI from your terminal:
echo "Explain this code" | gemini
```
### Combining with File Input
### Combining with file input
Read from files and process with Gemini:
@@ -71,9 +71,9 @@ Read from files and process with Gemini:
cat README.md | gemini --prompt "Summarize this documentation"
```
## Output Formats
## Output formats
### Text Output (Default)
### Text output (default)
Standard human-readable output:
@@ -87,12 +87,12 @@ Response format:
The capital of France is Paris.
```
### JSON Output
### JSON output
Returns structured data including response, statistics, and metadata. This
format is ideal for programmatic processing and automation scripts.
#### Response Schema
#### Response schema
The JSON output follows this high-level structure:
@@ -140,7 +140,7 @@ The JSON output follows this high-level structure:
}
```
#### Example Usage
#### Example usage
```bash
gemini -p "What is the capital of France?" --output-format json
@@ -218,14 +218,14 @@ Response:
}
```
### Streaming JSON Output
### Streaming JSON output
Returns real-time events as newline-delimited JSON (JSONL). Each significant
action (initialization, messages, tool calls, results) emits immediately as it
occurs. This format is ideal for monitoring long-running operations, building
UIs with live progress, and creating automation pipelines that react to events.
#### When to Use Streaming JSON
#### When to use streaming JSON
Use `--output-format stream-json` when you need:
@@ -237,7 +237,7 @@ Use `--output-format stream-json` when you need:
timestamps
- **Pipeline integration** - Stream events to logging/monitoring systems
#### Event Types
#### Event types
The streaming format emits 6 event types:
@@ -248,7 +248,7 @@ The streaming format emits 6 event types:
5. **`error`** - Non-fatal errors and warnings
6. **`result`** - Final session outcome with aggregated stats
#### Basic Usage
#### Basic usage
```bash
# Stream events to console
@@ -261,7 +261,7 @@ gemini --output-format stream-json --prompt "Analyze this code" > events.jsonl
gemini --output-format stream-json --prompt "List files" | jq -r '.type'
```
#### Example Output
#### Example output
Each line is a complete JSON event:
@@ -274,7 +274,7 @@ Each line is a complete JSON event:
{"type":"result","status":"success","stats":{"total_tokens":250,"input_tokens":50,"output_tokens":200,"duration_ms":3000,"tool_calls":1},"timestamp":"2025-10-10T12:00:05.000Z"}
```
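Because each event is a single standalone JSON line, ordinary line-oriented tools can route on the `type` field without a JSON parser. In this sketch, the result line is copied from the example above; the error event's fields other than `type` are illustrative:

```shell
# Capture a small JSONL stream, then keep only the final result event.
cat > events.jsonl <<'EOF'
{"type":"error","message":"rate limited, retrying","timestamp":"2025-10-10T12:00:03.000Z"}
{"type":"result","status":"success","stats":{"total_tokens":250,"input_tokens":50,"output_tokens":200,"duration_ms":3000,"tool_calls":1},"timestamp":"2025-10-10T12:00:05.000Z"}
EOF

grep '"type":"result"' events.jsonl
```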
### File Redirection
### File redirection
Save output to files or pipe to other commands:
@@ -292,7 +292,7 @@ gemini -p "Explain microservices" | wc -w
gemini -p "List programming languages" | grep -i "python"
```
## Configuration Options
## Configuration options
Key command-line options for headless usage:

View File

@@ -7,17 +7,17 @@ overview of Gemini CLI, see the [main documentation page](../index.md).
## Basic features
- **[Commands](./commands.md):** A reference for all built-in slash commands
- **[Custom Commands](./custom-commands.md):** Create your own commands and
- **[Custom commands](./custom-commands.md):** Create your own commands and
shortcuts for frequently used prompts.
- **[Headless Mode](./headless.md):** Use Gemini CLI programmatically for
- **[Headless mode](./headless.md):** Use Gemini CLI programmatically for
scripting and automation.
- **[Model Selection](./model.md):** Configure the Gemini AI model used by the
- **[Model selection](./model.md):** Configure the Gemini AI model used by the
CLI.
- **[Settings](./settings.md):** Configure various aspects of the CLI's behavior
and appearance.
- **[Themes](./themes.md):** Customizing the CLI's appearance with different
themes.
- **[Keyboard Shortcuts](./keyboard-shortcuts.md):** A reference for all
- **[Keyboard shortcuts](./keyboard-shortcuts.md):** A reference for all
keyboard shortcuts to improve your workflow.
- **[Tutorials](./tutorials.md):** Step-by-step guides for common tasks.
@@ -25,18 +25,18 @@ overview of Gemini CLI, see the [main documentation page](../index.md).
- **[Checkpointing](./checkpointing.md):** Automatically save and restore
snapshots of your session and files.
- **[Enterprise Configuration](./enterprise.md):** Deploying and manage Gemini
- **[Enterprise configuration](./enterprise.md):** Deploy and manage Gemini
CLI in an enterprise environment.
- **[Sandboxing](./sandbox.md):** Isolate tool execution in a secure,
containerized environment.
- **[Telemetry](./telemetry.md):** Configure observability to monitor usage and
performance.
- **[Token Caching](./token-caching.md):** Optimize API costs by caching tokens.
- **[Trusted Folders](./trusted-folders.md):** A security feature to control
- **[Token caching](./token-caching.md):** Optimize API costs by caching tokens.
- **[Trusted folders](./trusted-folders.md):** A security feature to control
which projects can use the full capabilities of the CLI.
- **[Ignoring Files (.geminiignore)](./gemini-ignore.md):** Exclude specific
- **[Ignoring files (.geminiignore)](./gemini-ignore.md):** Exclude specific
files and directories from being accessed by tools.
- **[Context Files (GEMINI.md)](./gemini-md.md):** Provide persistent,
- **[Context files (GEMINI.md)](./gemini-md.md):** Provide persistent,
hierarchical context to the model.
## Non-interactive mode
@@ -58,4 +58,4 @@ gemini -p "What is fine tuning?"
```
For comprehensive documentation on headless usage, scripting, automation, and
advanced examples, see the **[Headless Mode](./headless.md)** guide.
advanced examples, see the **[Headless mode](./headless.md)** guide.

View File

@@ -1,4 +1,4 @@
# Gemini CLI Keyboard Shortcuts
# Gemini CLI keyboard shortcuts
Gemini CLI ships with a set of default keyboard shortcuts for editing input,
navigating history, and controlling the UI. Use this reference to learn the
@@ -110,7 +110,7 @@ available combinations.
<!-- KEYBINDINGS-AUTOGEN:END -->
## Additional Context-Specific Shortcuts
## Additional context-specific shortcuts
- `Ctrl+Y`: Toggle YOLO (auto-approval) mode for tool calls.
- `Shift+Tab`: Toggle Auto Edit (auto-accept edits) mode.

View File

@@ -1,31 +1,31 @@
## Model Routing
## Model routing
Gemini CLI includes a model routing feature that automatically switches to a
fallback model in case of a model failure. This feature is enabled by default
and provides resilience when the primary model is unavailable.
## How it Works
## How it works
Model routing is not based on prompt complexity, but is a fallback mechanism.
Here's how it works:
1. **Model Failure:** If the currently selected model fails to respond (for
1. **Model failure:** If the currently selected model fails to respond (for
example, due to a server error or other issue), the CLI will initiate the
fallback process.
2. **User Consent:** The CLI will prompt you to ask if you want to switch to
2. **User consent:** The CLI will prompt you to ask if you want to switch to
the fallback model. This is handled by the `fallbackModelHandler`.
3. **Fallback Activation:** If you consent, the CLI will activate the fallback
3. **Fallback activation:** If you consent, the CLI will activate the fallback
mode by calling `config.setFallbackMode(true)`.
4. **Model Switch:** On the next request, the CLI will use the
4. **Model switch:** On the next request, the CLI will use the
`DEFAULT_GEMINI_FLASH_MODEL` as the fallback model. This is handled by the
`resolveModel` function in
`packages/cli/src/zed-integration/zedIntegration.ts` which checks if
`isInFallbackMode()` is true.
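The resolution in step 4 can be sketched as follows. This is a simplified sketch; the real logic lives in `packages/cli/src/zed-integration/zedIntegration.ts`, and the `Config` interface shown here is illustrative:

```typescript
// Simplified sketch of the fallback resolution described in step 4.
// The Config interface is illustrative, not the actual implementation.
const DEFAULT_GEMINI_FLASH_MODEL = "gemini-2.5-flash";

interface Config {
  isInFallbackMode(): boolean;
  getModel(): string;
}

function resolveModel(config: Config): string {
  // While fallback mode is active, ignore the configured model and
  // use the Flash fallback instead.
  if (config.isInFallbackMode()) {
    return DEFAULT_GEMINI_FLASH_MODEL;
  }
  return config.getModel();
}
```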
### Model Selection Precedence
### Model selection precedence
The model used by Gemini CLI is determined by the following order of precedence:
@@ -37,5 +37,5 @@ The model used by Gemini CLI is determined by the following order of precedence:
3. **`model.name` in `settings.json`:** If neither of the above is set, the
model specified in the `model.name` property of your `settings.json` file
will be used.
4. **Default Model:** If none of the above are set, the default model will be
4. **Default model:** If none of the above are set, the default model will be
used. The default model is `auto`.

View File

@@ -1,4 +1,4 @@
# Gemini CLI Model Selection (`/model` Command)
# Gemini CLI model selection (`/model` command)
Select your Gemini CLI model. The `/model` command opens a dialog where you can
configure the model used by Gemini CLI, giving you more control over your
@@ -21,7 +21,7 @@ Running this command will open a dialog with your model options:
| Flash | For tasks that need a balance of speed and reasoning. | gemini-2.5-flash |
| Flash-Lite | For simple tasks that need to be done quickly. | gemini-2.5-flash-lite |
### Gemini 3 Pro and Preview Features
### Gemini 3 Pro and preview features
Note: Gemini 3 is not currently available on all account types. To learn more
about Gemini 3 access, refer to

View File

@@ -87,7 +87,7 @@ Built-in profiles (set via `SEATBELT_PROFILE` env var):
- `restrictive-open`: Strict restrictions, network allowed
- `restrictive-closed`: Maximum restrictions
### Custom Sandbox Flags
### Custom sandbox flags
For container-based sandboxing, you can inject custom flags into the `docker` or
`podman` command using the `SANDBOX_FLAGS` environment variable. This is useful

View File

@@ -1,4 +1,4 @@
# Gemini CLI Settings (`/settings` Command)
# Gemini CLI settings (`/settings` command)
Control your Gemini CLI experience with the `/settings` command. The `/settings`
command opens a dialog to view and edit all your Gemini CLI settings, including

View File

@@ -3,27 +3,27 @@
Learn how to enable and set up OpenTelemetry for Gemini CLI.
- [Observability with OpenTelemetry](#observability-with-opentelemetry)
- [Key Benefits](#key-benefits)
- [OpenTelemetry Integration](#opentelemetry-integration)
- [Key benefits](#key-benefits)
- [OpenTelemetry integration](#opentelemetry-integration)
- [Configuration](#configuration)
- [Google Cloud Telemetry](#google-cloud-telemetry)
- [Google Cloud telemetry](#google-cloud-telemetry)
- [Prerequisites](#prerequisites)
- [Direct Export (Recommended)](#direct-export-recommended)
- [Collector-Based Export (Advanced)](#collector-based-export-advanced)
- [Local Telemetry](#local-telemetry)
- [File-based Output (Recommended)](#file-based-output-recommended)
- [Collector-Based Export (Advanced)](#collector-based-export-advanced-1)
- [Logs and Metrics](#logs-and-metrics)
- [Direct export (recommended)](#direct-export-recommended)
- [Collector-based export (advanced)](#collector-based-export-advanced)
- [Local telemetry](#local-telemetry)
- [File-based output (recommended)](#file-based-output-recommended)
- [Collector-based export (advanced)](#collector-based-export-advanced-1)
- [Logs and metrics](#logs-and-metrics)
- [Logs](#logs)
- [Sessions](#sessions)
- [Tools](#tools)
- [Files](#files)
- [API](#api)
- [Model Routing](#model-routing)
- [Chat and Streaming](#chat-and-streaming)
- [Model routing](#model-routing)
- [Chat and streaming](#chat-and-streaming)
- [Resilience](#resilience)
- [Extensions](#extensions)
- [Agent Runs](#agent-runs)
- [Agent runs](#agent-runs)
- [IDE](#ide)
- [UI](#ui)
- [Metrics](#metrics)
@@ -31,40 +31,40 @@ Learn how to enable and setup OpenTelemetry for Gemini CLI.
- [Sessions](#sessions-1)
- [Tools](#tools-1)
- [API](#api-1)
- [Token Usage](#token-usage)
- [Token usage](#token-usage)
- [Files](#files-1)
- [Chat and Streaming](#chat-and-streaming-1)
- [Model Routing](#model-routing-1)
- [Agent Runs](#agent-runs-1)
- [Chat and streaming](#chat-and-streaming-1)
- [Model routing](#model-routing-1)
- [Agent runs](#agent-runs-1)
- [UI](#ui-1)
- [Performance](#performance)
- [GenAI Semantic Convention](#genai-semantic-convention)
- [GenAI semantic convention](#genai-semantic-convention)
## Key Benefits
## Key benefits
- **🔍 Usage Analytics**: Understand interaction patterns and feature adoption
- **🔍 Usage analytics**: Understand interaction patterns and feature adoption
across your team
- **⚡ Performance Monitoring**: Track response times, token consumption, and
- **⚡ Performance monitoring**: Track response times, token consumption, and
resource utilization
- **🐛 Real-time Debugging**: Identify bottlenecks, failures, and error patterns
- **🐛 Real-time debugging**: Identify bottlenecks, failures, and error patterns
as they occur
- **📊 Workflow Optimization**: Make informed decisions to improve
- **📊 Workflow optimization**: Make informed decisions to improve
configurations and processes
- **🏢 Enterprise Governance**: Monitor usage across teams, track costs, ensure
- **🏢 Enterprise governance**: Monitor usage across teams, track costs, ensure
compliance, and integrate with existing monitoring infrastructure
## OpenTelemetry Integration
## OpenTelemetry integration
Built on **[OpenTelemetry]** — the vendor-neutral, industry-standard
observability framework — Gemini CLI's observability system provides:
- **Universal Compatibility**: Export to any OpenTelemetry backend (Google
- **Universal compatibility**: Export to any OpenTelemetry backend (Google
Cloud, Jaeger, Prometheus, Datadog, etc.)
- **Standardized Data**: Use consistent formats and collection methods across
- **Standardized data**: Use consistent formats and collection methods across
your toolchain
- **Future-Proof Integration**: Connect with existing and future observability
- **Future-proof integration**: Connect with existing and future observability
infrastructure
- **No Vendor Lock-in**: Switch between backends without changing your
- **No vendor lock-in**: Switch between backends without changing your
instrumentation
[OpenTelemetry]: https://opentelemetry.io/
@@ -89,9 +89,9 @@ Environment variables can be used to override the settings in the file.
`true` or `1` will enable the feature. Any other value will disable it.
For detailed information about all configuration options, see the
[Configuration Guide](../get-started/configuration.md).
[Configuration guide](../get-started/configuration.md).
## Google Cloud Telemetry
## Google Cloud telemetry
### Prerequisites
@@ -130,7 +130,7 @@ Before using either method below, complete these steps:
--project="$OTLP_GOOGLE_CLOUD_PROJECT"
```
### Direct Export (Recommended)
### Direct export (recommended)
Sends telemetry directly to Google Cloud services. No collector needed.
@@ -150,7 +150,7 @@ Sends telemetry directly to Google Cloud services. No collector needed.
- Metrics: https://console.cloud.google.com/monitoring/metrics-explorer
- Traces: https://console.cloud.google.com/traces/list
### Collector-Based Export (Advanced)
### Collector-based export (advanced)
For custom processing, filtering, or routing, use an OpenTelemetry collector to
forward data to Google Cloud.
@@ -184,11 +184,11 @@ forward data to Google Cloud.
- Open `~/.gemini/tmp/<projectHash>/otel/collector-gcp.log` to view local
collector logs.
## Local Telemetry
## Local telemetry
For local development and debugging, you can capture telemetry data locally:
### File-based Output (Recommended)
### File-based output (recommended)
1. Enable telemetry in your `.gemini/settings.json`:
```json
@@ -204,7 +204,7 @@ For local development and debugging, you can capture telemetry data locally:
2. Run Gemini CLI and send prompts.
3. View logs and metrics in the specified file (e.g., `.gemini/telemetry.log`).
### Collector-Based Export (Advanced)
### Collector-based export (advanced)
1. Run the automation script:
```bash
@@ -220,7 +220,7 @@ For local development and debugging, you can capture telemetry data locally:
3. View traces at http://localhost:16686 and logs/metrics in the collector log
file.
## Logs and Metrics
## Logs and metrics
The following section describes the structure of logs and metrics generated for
Gemini CLI.
@@ -378,9 +378,7 @@ Captures Gemini API requests, responses, and errors.
- **Attributes**:
- `model` (string)
#### Model Routing
Tracks model selections via slash commands and router decisions.
#### Model routing
- `gemini_cli.slash_command`: A slash command was executed.
- **Attributes**:
@@ -401,9 +399,7 @@ Tracks model selections via slash commands and router decisions.
- `failed` (boolean)
- `error_message` (string, optional)
#### Chat and Streaming
Observes streaming integrity, compression, and retry behavior.
#### Chat and streaming
- `gemini_cli.chat_compression`: Chat context was compressed.
- **Attributes**:
@@ -489,9 +485,7 @@ Tracks extension lifecycle and settings changes.
- `extension_source` (string)
- `status` (string)
#### Agent Runs
Tracks agent lifecycle and outcomes.
#### Agent runs
- `gemini_cli.agent.start`: Agent run started.
- **Attributes**:
@@ -567,7 +561,7 @@ Tracks API request volume and latency.
- `model`
- Note: Overlaps with `gen_ai.client.operation.duration` (GenAI conventions).
##### Token Usage
##### Token usage
Tracks tokens used by model and type.
@@ -595,7 +589,7 @@ Counts file operations with basic context.
- `function_name`
- `type` ("added" or "removed")
##### Chat and Streaming
##### Chat and streaming
Resilience counters for compression, invalid chunks, and retries.
@@ -614,7 +608,7 @@ Resilience counters for compression, invalid chunks, and retries.
- `gemini_cli.chat.content_retry_failure.count` (Counter, Int): Counts requests
where all content retries failed.
##### Model Routing
##### Model routing
Routing latency/failures and slash-command selections.
@@ -635,7 +629,7 @@ Routing latency/failures and slash-command selections.
- `routing.decision_source` (string)
- `routing.error_message` (string)
##### Agent Runs
##### Agent runs
Agent lifecycle metrics: runs, durations, and turns.
@@ -727,7 +721,7 @@ Optional performance monitoring for startup, CPU/memory, and phase timing.
- `current_value` (number)
- `baseline_value` (number)
#### GenAI Semantic Convention
#### GenAI semantic convention
The following metrics comply with [OpenTelemetry GenAI semantic conventions] for
standardized observability across GenAI applications:

View File

@@ -4,19 +4,19 @@ Gemini CLI supports a variety of themes to customize its color scheme and
appearance. You can change the theme to suit your preferences via the `/theme`
command or `"theme":` configuration setting.
## Available Themes
## Available themes
Gemini CLI comes with a selection of pre-defined themes, which you can list
using the `/theme` command within Gemini CLI:
- **Dark Themes:**
- **Dark themes:**
- `ANSI`
- `Atom One`
- `Ayu`
- `Default`
- `Dracula`
- `GitHub`
- **Light Themes:**
- **Light themes:**
- `ANSI Light`
- `Ayu Light`
- `Default Light`
@@ -24,7 +24,7 @@ using the `/theme` command within Gemini CLI:
- `Google Code`
- `Xcode`
### Changing Themes
### Changing themes
1. Enter `/theme` into Gemini CLI.
2. A dialog or selection prompt appears, listing the available themes.
@@ -36,7 +36,7 @@ using the `/theme` command within Gemini CLI:
by a file path), you must remove the `"theme"` setting from the file before you
can change the theme using the `/theme` command.
### Theme Persistence
### Theme persistence
Selected themes are saved in Gemini CLI's
[configuration](../get-started/configuration.md) so your preference is
@@ -44,13 +44,13 @@ remembered across sessions.
---
## Custom Color Themes
## Custom color themes
Gemini CLI allows you to create your own custom color themes by specifying them
in your `settings.json` file. This gives you full control over the color palette
used in the CLI.
### How to Define a Custom Theme
### How to define a custom theme
Add a `customThemes` block to your user, project, or system `settings.json`
file. Each custom theme is defined as an object with a unique name and a set of
@@ -93,7 +93,7 @@ This object supports the keys `primary`, `secondary`, `link`, `accent`, and
`response`. When `text.response` is provided it takes precedence over
`text.primary` for rendering model responses in chat.
**Required Properties:**
**Required properties:**
- `name` (must match the key in the `customThemes` object and be a string)
- `type` (must be the string `"custom"`)
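Putting the required properties together, a minimal entry might look like the following. This is a hedged illustration: the color values are placeholders, and only a few of the supported `text` keys are shown.

```json
{
  "customThemes": {
    "MyTheme": {
      "name": "MyTheme",
      "type": "custom",
      "text": {
        "primary": "#E0E0E0",
        "secondary": "#A0A0A0",
        "link": "#4FC3F7",
        "accent": "#CE93D8"
      }
    }
  }
}
```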
@@ -117,7 +117,7 @@ for a full list of supported names.
You can define multiple custom themes by adding more entries to the
`customThemes` object.
### Loading Themes from a File
### Loading themes from a file
In addition to defining custom themes in `settings.json`, you can also load a
theme directly from a JSON file by specifying the file path in your
@@ -162,17 +162,17 @@ custom theme defined in `settings.json`.
}
```
**Security Note:** For your safety, Gemini CLI will only load theme files that
**Security note:** For your safety, Gemini CLI will only load theme files that
are located within your home directory. If you attempt to load a theme from
outside your home directory, a warning will be displayed and the theme will not
be loaded. This is to prevent loading potentially malicious theme files from
untrusted sources.
### Example Custom Theme
### Example custom theme
<img src="../assets/theme-custom.png" alt="Custom theme example" width="600" />
### Using Your Custom Theme
### Using your custom theme
- Select your custom theme using the `/theme` command in Gemini CLI. Your custom
theme will appear in the theme selection dialog.
@@ -184,7 +184,7 @@ untrusted sources.
---
## Dark Themes
## Dark themes
### ANSI
@@ -210,7 +210,7 @@ untrusted sources.
<img src="/assets/theme-github.png" alt="GitHub theme" width="600">
## Light Themes
## Light themes
### ANSI Light

View File

@@ -1,4 +1,4 @@
# Token Caching and Cost Optimization
# Token caching and cost optimization
Gemini CLI automatically optimizes API costs through token caching when using
API key authentication (Gemini API key or Vertex AI). This feature reuses

View File

@@ -5,7 +5,7 @@ which projects can use the full capabilities of the Gemini CLI. It prevents
potentially malicious code from running by asking you to approve a folder before
the CLI loads any project-specific configurations from it.
## Enabling the Feature
## Enabling the feature
The Trusted Folders feature is **disabled by default**. To use it, you must
first enable it in your settings.
@@ -22,7 +22,7 @@ Add the following to your user `settings.json` file:
}
```
## How It Works: The Trust Dialog
## How it works: The trust dialog
Once the feature is enabled, the first time you run the Gemini CLI from a
folder, a dialog will automatically appear, prompting you to make a choice:
@@ -38,58 +38,58 @@ folder, a dialog will automatically appear, prompting you to make a choice:
Your choice is saved in a central file (`~/.gemini/trustedFolders.json`), so you
will only be asked once per folder.
## Why Trust Matters: The Impact of an Untrusted Workspace
## Why trust matters: The impact of an untrusted workspace
When a folder is **untrusted**, the Gemini CLI runs in a restricted "safe mode"
to protect you. In this mode, the following features are disabled:
1. **Workspace Settings are Ignored**: The CLI will **not** load the
1. **Workspace settings are ignored**: The CLI will **not** load the
`.gemini/settings.json` file from the project. This prevents the loading of
custom tools and other potentially dangerous configurations.
2. **Environment Variables are Ignored**: The CLI will **not** load any `.env`
2. **Environment variables are ignored**: The CLI will **not** load any `.env`
files from the project.
3. **Extension Management is Restricted**: You **cannot install, update, or
3. **Extension management is restricted**: You **cannot install, update, or
uninstall** extensions.
4. **Tool Auto-Acceptance is Disabled**: You will always be prompted before any
4. **Tool auto-acceptance is disabled**: You will always be prompted before any
tool is run, even if you have auto-acceptance enabled globally.
5. **Automatic Memory Loading is Disabled**: The CLI will not automatically
5. **Automatic memory loading is disabled**: The CLI will not automatically
load files into context from directories specified in local settings.
6. **MCP Servers Do Not Connect**: The CLI will not attempt to connect to any
6. **MCP servers do not connect**: The CLI will not attempt to connect to any
[Model Context Protocol (MCP)](../tools/mcp-server.md) servers.
7. **Custom Commands are Not Loaded**: The CLI will not load any custom
7. **Custom commands are not loaded**: The CLI will not load any custom
commands from .toml files, including both project-specific and global user
commands.
Granting trust to a folder unlocks the full functionality of the Gemini CLI for
that workspace.
## Managing Your Trust Settings
## Managing your trust settings
If you need to change a decision or see all your settings, you have a couple of
options:
- **Change the Current Folder's Trust**: Run the `/permissions` command from
- **Change the current folder's trust**: Run the `/permissions` command from
within the CLI. This will bring up the same interactive dialog, allowing you
to change the trust level for the current folder.
- **View All Trust Rules**: To see a complete list of all your trusted and
- **View all trust rules**: To see a complete list of all your trusted and
untrusted folder rules, you can inspect the contents of the
`~/.gemini/trustedFolders.json` file in your home directory.
## The Trust Check Process (Advanced)
## The trust check process (advanced)
For advanced users, it's helpful to know the exact order of operations for how
trust is determined:
1. **IDE Trust Signal**: If you are using the
1. **IDE trust signal**: If you are using the
[IDE Integration](../ide-integration/index.md), the CLI first asks the IDE
if the workspace is trusted. The IDE's response takes highest priority.
2. **Local Trust File**: If the IDE is not connected, the CLI checks the
2. **Local trust file**: If the IDE is not connected, the CLI checks the
central `~/.gemini/trustedFolders.json` file.

View File

@@ -35,7 +35,7 @@ _PowerShell_
Remove-Item -Path (Join-Path $env:LocalAppData "npm-cache\_npx") -Recurse -Force
```
## Method 2: Using npm (Global Install)
## Method 2: Using npm (global install)
If you installed the CLI globally (e.g., `npm install -g @google/gemini-cli`),
use the `npm uninstall` command with the `-g` flag to remove it.

View File

@@ -1,4 +1,4 @@
# Gemini CLI Core
# Gemini CLI core
Gemini CLI's core package (`packages/core`) is the backend portion of Gemini
CLI, handling communication with the Gemini API, managing tools, and processing

View File

@@ -27,21 +27,21 @@ More content here.
@./shared/configuration.md
```
## Supported Path Formats
## Supported path formats
### Relative Paths
### Relative paths
- `@./file.md` - Import from the same directory
- `@../file.md` - Import from parent directory
- `@./components/file.md` - Import from subdirectory
### Absolute Paths
### Absolute paths
- `@/absolute/path/to/file.md` - Import using absolute path
## Examples
### Basic Import
### Basic import
```markdown
# My GEMINI.md
@@ -55,7 +55,7 @@ Welcome to my project!
@./features/overview.md
```
### Nested Imports
### Nested imports
The imported files can themselves contain imports, creating a nested structure:
@@ -73,9 +73,9 @@ The imported files can themselves contain imports, creating a nested structure:
@./shared/title.md
```
## Safety Features
## Safety features
### Circular Import Detection
### Circular import detection
The processor automatically detects and prevents circular imports:
@@ -89,37 +89,37 @@ The processor automatically detects and prevents circular imports:
@./file-a.md <!-- This will be detected and prevented -->
```
### File Access Security
### File access security
The `validateImportPath` function ensures that imports are only allowed from
specified directories, preventing access to sensitive files outside the allowed
scope.
### Maximum Import Depth
### Maximum import depth
To prevent infinite recursion, there's a configurable maximum import depth
(default: 5 levels).
## Error Handling
## Error handling
### Missing Files
### Missing files
If a referenced file doesn't exist, the import will fail gracefully with an
error comment in the output.
### File Access Errors
### File access errors
Permission issues or other file system errors are handled gracefully with
appropriate error messages.
## Code Region Detection
## Code region detection
The import processor uses the `marked` library to detect code blocks and inline
code spans, ensuring that `@` imports inside these regions are properly ignored.
This provides robust handling of nested code blocks and complex Markdown
structures.
## Import Tree Structure
## Import tree structure
The processor returns an import tree that shows the hierarchy of imported files,
similar to Claude's `/memory` feature. This helps users debug problems with
@@ -143,7 +143,7 @@ Memory Files
The tree preserves the order that files were imported and shows the complete
import chain for debugging purposes.
## Comparison to Claude Code's `/memory` (`claude.md`) Approach
## Comparison to Claude Code's `/memory` (`claude.md`) approach
Claude Code's `/memory` feature (as seen in `claude.md`) produces a flat, linear
document by concatenating all included files, always marking file boundaries
@@ -154,7 +154,7 @@ for reconstructing the hierarchy if needed.
> [!NOTE] The import tree is mainly for clarity during development and has
> limited relevance to LLM consumption.
## API Reference
## API reference
### `processImports(content, basePath, debugMode?, importState?)`
@@ -225,7 +225,7 @@ directory if no `.git` is found)
## Troubleshooting
### Common Issues
### Common issues
1. **Import not working**: Check that the file exists and the path is correct
2. **Circular import warnings**: Review your import structure for circular
@@ -235,7 +235,7 @@ directory if no `.git` is found)
4. **Path resolution issues**: Use absolute paths if relative paths aren't
resolving correctly
### Debug Mode
### Debug mode
Enable debug mode to see detailed logging of the import process:

View File

@@ -1,4 +1,4 @@
# Policy Engine
# Policy engine
:::note
This feature is currently in testing. To enable it, set
`tools.enableMessageBusIntegration` to `true` in your `settings.json` file.
:::
@@ -49,7 +49,7 @@ The `toolName` in the rule must match the name of the tool being called.
wildcard. A `toolName` of `my-server__*` will match any tool from the
`my-server` MCP server.
#### Arguments Pattern
#### Arguments pattern
If `argsPattern` is specified, the tool's arguments are converted to a stable
JSON string, which is then tested against the provided regular expression. If
@@ -64,7 +64,7 @@ There are three possible decisions a rule can enforce:
- `ask_user`: The user is prompted to approve or deny the tool call. (In
non-interactive mode, this is treated as `deny`.)
### Priority system & tiers
### Priority system and tiers
The policy engine uses a sophisticated priority system to resolve conflicts when
multiple rules match a single tool call. The core principle is simple: **the
@@ -112,12 +112,12 @@ outcome.
A rule matches a tool call if all of its conditions are met:
1. **Tool Name**: The `toolName` in the rule must match the name of the tool
1. **Tool name**: The `toolName` in the rule must match the name of the tool
being called.
- **Wildcards**: For Model Context Protocol (MCP) servers, you can use a
wildcard. A `toolName` of `my-server__*` will match any tool from the
`my-server` MCP server.
2. **Arguments Pattern**: If `argsPattern` is specified, the tool's arguments
2. **Arguments pattern**: If `argsPattern` is specified, the tool's arguments
are converted to a stable JSON string, which is then tested against the
provided regular expression. If the arguments don't match the pattern, the
rule does not apply.
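As an illustration of these matching conditions, a rule combining `toolName` and `argsPattern` might look like the following sketch. The key names follow this page; the specific values, and the surrounding table structure of the policy file, are assumptions:

```toml
# Illustrative rule: always ask before shell commands whose arguments
# mention "curl". Values here are placeholders for the sketch.
toolName = "run_shell_command"
argsPattern = "curl"
decision = "ask_user"
priority = 150
```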
@@ -220,7 +220,7 @@ decision = "allow"
priority = 200
```
**2. Using a Wildcard**
**2. Using a wildcard**
To create a rule that applies to _all_ tools on a specific MCP server, specify
only the `mcpName`.

View File

@@ -1,11 +1,11 @@
# Gemini CLI Core: Tools API
# Gemini CLI core: Tools API
The Gemini CLI core (`packages/core`) features a robust system for defining,
registering, and executing tools. These tools extend the capabilities of the
Gemini model, allowing it to interact with the local environment, fetch web
content, and perform various actions beyond simple text generation.
## Core Concepts
## Core concepts
- **Tool (`tools.ts`):** An interface and base class (`BaseTool`) that defines
the contract for all tools. Each tool must have:
@@ -32,35 +32,35 @@ content, and perform various actions beyond simple text generation.
- `returnDisplay`: A user-friendly string (often Markdown) or a special object
(like `FileDiff`) for display in the CLI.
- **Returning Rich Content:** Tools are not limited to returning simple text.
- **Returning rich content:** Tools are not limited to returning simple text.
The `llmContent` can be a `PartListUnion`, which is an array that can contain
a mix of `Part` objects (for images, audio, etc.) and `string`s. This allows a
single tool execution to return multiple pieces of rich content.
- **Tool Registry (`tool-registry.ts`):** A class (`ToolRegistry`) responsible
- **Tool registry (`tool-registry.ts`):** A class (`ToolRegistry`) responsible
for:
- **Registering Tools:** Holding a collection of all available built-in tools
- **Registering tools:** Holding a collection of all available built-in tools
(e.g., `ReadFileTool`, `ShellTool`).
- **Discovering Tools:** It can also discover tools dynamically:
- **Command-based Discovery:** If `tools.discoveryCommand` is configured in
- **Discovering tools:** It can also discover tools dynamically:
- **Command-based discovery:** If `tools.discoveryCommand` is configured in
settings, this command is executed. It's expected to output JSON
describing custom tools, which are then registered as `DiscoveredTool`
instances.
- **MCP-based Discovery:** If `mcp.serverCommand` is configured, the
- **MCP-based discovery:** If `mcp.serverCommand` is configured, the
registry can connect to a Model Context Protocol (MCP) server to list and
register tools (`DiscoveredMCPTool`).
- **Providing Schemas:** Exposing the `FunctionDeclaration` schemas of all
- **Providing schemas:** Exposing the `FunctionDeclaration` schemas of all
registered tools to the Gemini model, so it knows what tools are available
and how to use them.
- **Retrieving Tools:** Allowing the core to get a specific tool by name for
- **Retrieving tools:** Allowing the core to get a specific tool by name for
execution.
## Built-in Tools
## Built-in tools
The core comes with a suite of pre-defined tools, typically found in
`packages/core/src/tools/`. These include:
- **File System Tools:**
- **File system tools:**
- `LSTool` (`ls.ts`): Lists directory contents.
- `ReadFileTool` (`read-file.ts`): Reads the content of a single file.
- `WriteFileTool` (`write-file.ts`): Writes content to a file.
@@ -70,26 +70,26 @@ The core comes with a suite of pre-defined tools, typically found in
requiring confirmation).
- `ReadManyFilesTool` (`read-many-files.ts`): Reads and concatenates content
from multiple files or glob patterns (used by the `@` command in CLI).
- **Execution Tools:**
- **Execution tools:**
- `ShellTool` (`shell.ts`): Executes arbitrary shell commands (requires
careful sandboxing and user confirmation).
- **Web Tools:**
- **Web tools:**
- `WebFetchTool` (`web-fetch.ts`): Fetches content from a URL.
- `WebSearchTool` (`web-search.ts`): Performs a web search.
- **Memory Tools:**
- **Memory tools:**
- `MemoryTool` (`memoryTool.ts`): Interacts with the AI's memory.
Each of these tools extends `BaseTool` and implements the required methods for
its specific functionality.
## Tool Execution Flow
## Tool execution flow
1. **Model Request:** The Gemini model, based on the user's prompt and the
1. **Model request:** The Gemini model, based on the user's prompt and the
provided tool schemas, decides to use a tool and returns a `FunctionCall`
part in its response, specifying the tool name and arguments.
2. **Core Receives Request:** The core parses this `FunctionCall`.
3. **Tool Retrieval:** It looks up the requested tool in the `ToolRegistry`.
4. **Parameter Validation:** The tool's `validateToolParams()` method is
2. **Core receives request:** The core parses this `FunctionCall`.
3. **Tool retrieval:** It looks up the requested tool in the `ToolRegistry`.
4. **Parameter validation:** The tool's `validateToolParams()` method is
called.
5. **Confirmation (if needed):**
- The tool's `shouldConfirmExecute()` method is called.
@@ -99,27 +99,27 @@ its specific functionality.
6. **Execution:** If validated and confirmed (or if no confirmation is needed),
the core calls the tool's `execute()` method with the provided arguments and
an `AbortSignal` (for potential cancellation).
7. **Result Processing:** The `ToolResult` from `execute()` is received by the
7. **Result processing:** The `ToolResult` from `execute()` is received by the
core.
8. **Response to Model:** The `llmContent` from the `ToolResult` is packaged as
8. **Response to model:** The `llmContent` from the `ToolResult` is packaged as
a `FunctionResponse` and sent back to the Gemini model so it can continue
generating a user-facing response.
9. **Display to User:** The `returnDisplay` from the `ToolResult` is sent to
9. **Display to user:** The `returnDisplay` from the `ToolResult` is sent to
the CLI to show the user what the tool did.
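The retrieval, validation, and execution steps above can be sketched as follows. The method names (`validateToolParams`, `execute`) and result fields follow this page; the surrounding types and registry shape are illustrative, not the actual core implementation:

```typescript
// Illustrative sketch of steps 3, 4, and 6 of the tool execution flow.
interface ToolResult {
  llmContent: string;
  returnDisplay: string;
}

interface Tool {
  name: string;
  validateToolParams(params: unknown): string | null; // null = valid
  execute(params: unknown): Promise<ToolResult>;
}

async function runToolCall(
  registry: Map<string, Tool>,
  name: string,
  params: unknown,
): Promise<ToolResult> {
  const tool = registry.get(name); // step 3: tool retrieval
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  const error = tool.validateToolParams(params); // step 4: validation
  if (error) throw new Error(error);
  return tool.execute(params); // step 6: execution
}
```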
## Extending with Custom Tools
## Extending with custom tools
While direct programmatic registration of new tools isn't a primary workflow
for typical end users, the architecture supports extension through:
- **Command-based Discovery:** Advanced users or project administrators can
- **Command-based discovery:** Advanced users or project administrators can
define a `tools.discoveryCommand` in `settings.json`. This command, when run
by the Gemini CLI core, should output a JSON array of `FunctionDeclaration`
objects. The core will then make these available as `DiscoveredTool`
instances. The corresponding `tools.callCommand` would then be responsible for
actually executing these custom tools.
- **MCP Server(s):** For more complex scenarios, one or more MCP servers can be
- **MCP server(s):** For more complex scenarios, one or more MCP servers can be
set up and configured via the `mcpServers` setting in `settings.json`. The
Gemini CLI core can then discover and use tools exposed by these servers. As
mentioned, if you have multiple MCP servers, the tool names will be prefixed

View File

@@ -1,4 +1,4 @@
# Example Proxy Script
# Example proxy script
The following is an example of a proxy script that can be used with the
`GEMINI_SANDBOX_PROXY_COMMAND` environment variable. This script only allows

View File

@@ -1,4 +1,4 @@
# Extension Releasing
# Extension releasing
There are two primary ways of releasing extensions to users:
@@ -64,7 +64,7 @@ If you plan on doing cherry picks, you may want to avoid having your default
branch be the stable branch to avoid force-pushing to the default branch which
should generally be avoided.
## Releasing through Github releases
## Releasing through GitHub releases
Gemini CLI extensions can be distributed through
[GitHub Releases](https://docs.github.com/en/repositories/releasing-projects-on-github/about-releases).
@@ -105,9 +105,9 @@ To ensure Gemini CLI can automatically find the correct release asset for each
platform, you must follow this naming convention. The CLI will search for assets
in the following order:
1. **Platform and Architecture-Specific:**
1. **Platform and architecture-specific:**
`{platform}.{arch}.{name}.{extension}`
2. **Platform-Specific:** `{platform}.{name}.{extension}`
2. **Platform-specific:** `{platform}.{name}.{extension}`
3. **Generic:** If only one asset is provided, it will be used as a generic
fallback.
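For a hypothetical extension named `my-extension` packaged as `.tar.gz`, release assets following this convention might be named as below (the platform and architecture identifiers shown are assumptions, e.g. Node.js-style values):

```
darwin.arm64.my-extension.tar.gz
darwin.x64.my-extension.tar.gz
linux.x64.my-extension.tar.gz
win32.x64.my-extension.tar.gz
```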

View File

@@ -1,4 +1,4 @@
# Getting Started with Gemini CLI Extensions
# Getting started with Gemini CLI extensions
This guide will walk you through creating your first Gemini CLI extension.
You'll learn how to set up a new extension, add a custom tool via an MCP server,
@@ -10,7 +10,7 @@ file.
Before you start, make sure you have the Gemini CLI installed and a basic
understanding of Node.js and TypeScript.
## Step 1: Create a New Extension
## Step 1: Create a new extension
The easiest way to start is by using one of the built-in templates. We'll use
the `mcp-server` example as our foundation.
@@ -32,7 +32,7 @@ my-first-extension/
└── tsconfig.json
```
## Step 2: Understand the Extension Files
## Step 2: Understand the extension files
Let's look at the key files in your new extension.
@@ -124,7 +124,7 @@ These are standard configuration files for a TypeScript project. The
`package.json` file defines dependencies and a `build` script, and
`tsconfig.json` configures the TypeScript compiler.
## Step 3: Build and Link Your Extension
## Step 3: Build and link your extension
Before you can use the extension, you need to compile the TypeScript code and
link the extension to your Gemini CLI installation for local development.
@@ -158,7 +158,7 @@ link the extension to your Gemini CLI installation for local development.
Now, restart your Gemini CLI session. The new `fetch_posts` tool will be
available. You can test it by asking: "fetch posts".
## Step 4: Add a Custom Command
## Step 4: Add a custom command
Custom commands provide a way to create shortcuts for complex prompts. Let's add
a command that searches for a pattern in your code.
@@ -186,7 +186,7 @@ a command that searches for a pattern in your code.
After saving the file, restart the Gemini CLI. You can now run
`/fs:grep-code "some pattern"` to use your new command.
## Step 5: Add a Custom `GEMINI.md`
## Step 5: Add a custom `GEMINI.md`
You can provide persistent context to the model by adding a `GEMINI.md` file to
your extension. This is useful for giving the model instructions on how to
@@ -222,7 +222,7 @@ need this for extensions built to expose commands and prompts.
Restart the CLI again. The model will now have the context from your `GEMINI.md`
file in every session where the extension is active.
## Step 6: Releasing Your Extension
## Step 6: Releasing your extension
Once you are happy with your extension, you can share it with others. The two
primary ways of releasing extensions are via a Git repository or through GitHub

View File

@@ -1,4 +1,4 @@
# Gemini CLI Extensions
# Gemini CLI extensions
_This documentation is up-to-date with the v0.4.0 release._

View File

@@ -1,4 +1,4 @@
# Frequently Asked Questions (FAQ)
# Frequently asked questions (FAQ)
This page provides answers to common questions and solutions to frequent
problems encountered while using Gemini CLI.

View File

@@ -1,4 +1,4 @@
# Gemini CLI Authentication Setup
# Gemini CLI authentication setup
Gemini CLI requires authentication using Google's services. Before using Gemini
CLI, configure **one** of the following authentication methods:
@@ -10,12 +10,12 @@ CLI, configure **one** of the following authentication methods:
- Headless (non-interactive) mode
- Google Cloud Environments (Cloud Shell, Compute Engine, etc.)
## Quick Check: Running in Google Cloud Shell?
## Quick check: Running in Google Cloud Shell?
If you are running the Gemini CLI within a Google Cloud Shell environment,
authentication is typically automatic using your Cloud Shell credentials.
### Other Google Cloud Environments (e.g., Compute Engine)
### Other Google Cloud environments (e.g., Compute Engine)
Some other Google Cloud environments, such as Compute Engine VMs, might also
support automatic authentication. In these environments, Gemini CLI can
@@ -25,7 +25,7 @@ environment's metadata server.
If automatic authentication does not occur in your environment, you will need to
use one of the interactive methods described below.
## Authenticate in Interactive mode
## Authenticate in interactive mode
When you run Gemini CLI through the command-line, Gemini CLI will provide the
following options:
@@ -61,7 +61,7 @@ logging in with your Google account.
> The browser will be redirected to a `localhost` URL that the CLI listens on
> during setup.
#### (Optional) Set your Google Cloud Project
#### (Optional) Set your Google Cloud project
When you log in using a Google account, you may be prompted to select a
`GOOGLE_CLOUD_PROJECT`.
@@ -98,7 +98,7 @@ export GOOGLE_CLOUD_PROJECT_ID="YOUR_PROJECT_ID"
To make this setting persistent, see
[Persisting Environment Variables](#persisting-environment-variables).
### Use Gemini API Key
### Use Gemini API key
If you don't want to authenticate using your Google account, you can use an API
key from Google AI Studio.
@@ -143,7 +143,7 @@ export GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"
export GOOGLE_CLOUD_LOCATION="YOUR_PROJECT_LOCATION"
```
#### A. Vertex AI - Application Default Credentials (ADC) using `gcloud`
#### A. Vertex AI - application default credentials (ADC) using `gcloud`
Consider this method of authentication if you have Google Cloud CLI installed.
@@ -168,7 +168,7 @@ unset GOOGLE_API_KEY GEMINI_API_KEY
3. Ensure `GOOGLE_CLOUD_PROJECT` (or `GOOGLE_CLOUD_PROJECT_ID`) and
`GOOGLE_CLOUD_LOCATION` are set.
#### B. Vertex AI - Service Account JSON key
#### B. Vertex AI - service account JSON key
Consider this method of authentication in non-interactive environments, CI/CD,
or if your organization restricts user-based ADC or API key creation.
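As a minimal sketch for this method, you would point the standard Application Default Credentials variable at your key file alongside the Vertex AI variables (the key file path, project ID, and location below are placeholders you must replace):

```shell
# Placeholder path to your downloaded service account key file
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/keys/my-service-account.json"

# Route Gemini CLI requests through Vertex AI
export GOOGLE_GENAI_USE_VERTEXAI=true
export GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"
export GOOGLE_CLOUD_LOCATION="us-central1"
```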
@@ -218,7 +218,7 @@ unset GOOGLE_API_KEY GEMINI_API_KEY
To make any of these Vertex AI environment variable settings persistent, see
[Persisting Environment Variables](#persisting-environment-variables).
## Persisting Environment Variables
## Persisting environment variables
To avoid setting environment variables in every terminal session, you can:
@@ -263,7 +263,7 @@ If you have not already logged in with an authentication credential (such as a
Google account), you **must** configure authentication using environment
variables:
1. **Gemini API Key:** Set `GEMINI_API_KEY`.
1. **Gemini API key:** Set `GEMINI_API_KEY`.
2. **Vertex AI:**
- Set `GOOGLE_GENAI_USE_VERTEXAI=true`.
- **With Google Cloud API Key:** Set `GOOGLE_API_KEY`.

View File

@@ -1,6 +1,6 @@
# Gemini CLI Configuration
# Gemini CLI configuration
**Note on Deprecated Configuration Format**
**Note on deprecated configuration format**
This document describes the legacy v1 format for the `settings.json` file. This
format is now deprecated.
@@ -132,7 +132,7 @@ contain other project-specific files related to Gemini CLI's operation, such as:
}
```
### Troubleshooting File Search Performance
### Troubleshooting file search performance
If you are experiencing performance issues with file searching (e.g., with `@`
completions), especially in projects with a very large number of files, here are
@@ -144,12 +144,12 @@ a few things you can try in order of recommendation:
the total number of files crawled is the most effective way to improve
performance.
2. **Disable Fuzzy Search:** If ignoring files is not enough, you can disable
2. **Disable fuzzy search:** If ignoring files is not enough, you can disable
fuzzy search by setting `disableFuzzySearch` to `true` in your
`settings.json` file. This will use a simpler, non-fuzzy matching algorithm,
which can be faster.
3. **Disable Recursive File Search:** As a last resort, you can disable
3. **Disable recursive file search:** As a last resort, you can disable
recursive file search entirely by setting `enableRecursiveFileSearch` to
`false`. This will be the fastest option as it avoids a recursive crawl of
your project. However, it means you will need to type the full path to files
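As a sketch, the two fallback settings above would be set at the top level of the legacy v1 `settings.json` (shown together here only for illustration; try them one at a time, in the recommended order):

```json
{
  "disableFuzzySearch": true,
  "enableRecursiveFileSearch": false
}
```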
@@ -194,7 +194,7 @@ a few things you can try in order of recommendation:
`--allowed-mcp-server-names` is set.
- **Default:** All MCP servers are available for use by the Gemini model.
- **Example:** `"allowMCPServers": ["myPythonServer"]`.
- **Security Note:** This uses simple string matching on MCP server names,
- **Security note:** This uses simple string matching on MCP server names,
which can be modified. If you're a system administrator looking to prevent
users from bypassing this, consider configuring the `mcpServers` at the
system settings level such that the user will not be able to configure any
@@ -208,7 +208,7 @@ a few things you can try in order of recommendation:
be ignored if `--allowed-mcp-server-names` is set.
- **Default**: No MCP servers excluded.
- **Example:** `"excludeMCPServers": ["myNodeServer"]`.
- **Security Note:** This uses simple string matching on MCP server names,
- **Security note:** This uses simple string matching on MCP server names,
which can be modified. If you're a system administrator looking to prevent
users from bypassing this, consider configuring the `mcpServers` at the
system settings level such that the user will not be able to configure any
@@ -538,7 +538,7 @@ a few things you can try in order of recommendation:
}
```
## Shell History
## Shell history
The CLI keeps a history of shell commands you run. To avoid conflicts between
different projects, this history is stored in a project-specific directory
@@ -549,7 +549,7 @@ within your user's home folder.
path.
- The history is stored in a file named `shell_history`.
## Environment Variables & `.env` Files
## Environment variables and `.env` files
Environment variables are a common way to configure applications, especially for
sensitive information like API keys or for settings that might change between
@@ -566,7 +566,7 @@ loading order is:
the home directory.
3. If still not found, it looks for `~/.env` (in the user's home directory).
**Environment Variable Exclusion:** Some environment variables (like `DEBUG` and
**Environment variable exclusion:** Some environment variables (like `DEBUG` and
`DEBUG_MODE`) are automatically excluded from being loaded from project `.env`
files to prevent interference with gemini-cli behavior. Variables from
`.gemini/.env` files are never excluded. You can customize this behavior using
@@ -591,7 +591,7 @@ the `excludedProjectEnvVars` setting in your `settings.json` file.
- Required for using Code Assist or Vertex AI.
- If using Vertex AI, ensure you have the necessary permissions in this
project.
- **Cloud Shell Note:** When running in a Cloud Shell environment, this
- **Cloud Shell note:** When running in a Cloud Shell environment, this
variable defaults to a special project allocated for Cloud Shell users. If
you have `GOOGLE_CLOUD_PROJECT` set in your global environment in Cloud
Shell, it will be overridden by this default. To use a different project in
@@ -639,7 +639,7 @@ the `excludedProjectEnvVars` setting in your `settings.json` file.
- Specifies the endpoint for the code assist server.
- This is useful for development and testing.
## Command-Line Arguments
## Command-line arguments
Arguments passed directly when running the CLI can override other configurations
for that specific session.
@@ -714,7 +714,7 @@ for that specific session.
- **`--version`**:
- Displays the version of the CLI.
## Context Files (Hierarchical Instructional Context)
## Context files (hierarchical instructional context)
While not strictly configuration for the CLI's _behavior_, context files
(defaulting to `GEMINI.md` but configurable via the `contextFileName` setting)
@@ -730,7 +730,7 @@ context.
that you want the Gemini model to be aware of during your interactions. The
system is designed to manage this instructional context hierarchically.
### Example Context File Content (e.g., `GEMINI.md`)
### Example context file content (e.g., `GEMINI.md`)
Here's a conceptual example of what a context file at the root of a TypeScript
project might contain:
@@ -771,23 +771,23 @@ more relevant and precise your context files are, the better the AI can assist
you. Project-specific context files are highly encouraged to establish
conventions and context.
- **Hierarchical Loading and Precedence:** The CLI implements a sophisticated
- **Hierarchical loading and precedence:** The CLI implements a sophisticated
hierarchical memory system by loading context files (e.g., `GEMINI.md`) from
several locations. Content from files lower in this list (more specific)
typically overrides or supplements content from files higher up (more
general). The exact concatenation order and final context can be inspected
using the `/memory show` command. The typical loading order is:
1. **Global Context File:**
1. **Global context file:**
- Location: `~/.gemini/<contextFileName>` (e.g., `~/.gemini/GEMINI.md` in
your user home directory).
- Scope: Provides default instructions for all your projects.
2. **Project Root & Ancestors Context Files:**
2. **Project root and ancestors context files:**
- Location: The CLI searches for the configured context file in the
current working directory and then in each parent directory up to either
the project root (identified by a `.git` folder) or your home directory.
- Scope: Provides context relevant to the entire project or a significant
portion of it.
3. **Sub-directory Context Files (Contextual/Local):**
3. **Sub-directory context files (contextual/local):**
- Location: The CLI also scans for the configured context file in
subdirectories _below_ the current working directory (respecting common
ignore patterns like `node_modules`, `.git`, etc.). The breadth of this
@@ -795,15 +795,15 @@ conventions and context.
with a `memoryDiscoveryMaxDirs` field in your `settings.json` file.
- Scope: Allows for highly specific instructions relevant to a particular
component, module, or subsection of your project.
- **Concatenation & UI Indication:** The contents of all found context files are
concatenated (with separators indicating their origin and path) and provided
as part of the system prompt to the Gemini model. The CLI footer displays the
count of loaded context files, giving you a quick visual cue about the active
instructional context.
- **Importing Content:** You can modularize your context files by importing
- **Concatenation and UI indication:** The contents of all found context files
are concatenated (with separators indicating their origin and path) and
provided as part of the system prompt to the Gemini model. The CLI footer
displays the count of loaded context files, giving you a quick visual cue
about the active instructional context.
- **Importing content:** You can modularize your context files by importing
other Markdown files using the `@path/to/file.md` syntax. For more details,
see the [Memory Import Processor documentation](../core/memport.md).
- **Commands for Memory Management:**
- **Commands for memory management:**
- Use `/memory refresh` to force a re-scan and reload of all context files
from all configured locations. This updates the AI's instructional context.
- Use `/memory show` to display the combined instructional context currently
@@ -850,7 +850,7 @@ sandbox image:
BUILD_SANDBOX=1 gemini -s
```
## Usage Statistics
## Usage statistics
To help us improve the Gemini CLI, we collect anonymized usage statistics. This
data helps us understand how the CLI is used, identify common issues, and
@@ -858,22 +858,22 @@ prioritize new features.
**What we collect:**
- **Tool Calls:** We log the names of the tools that are called, whether they
- **Tool calls:** We log the names of the tools that are called, whether they
succeed or fail, and how long they take to execute. We do not collect the
arguments passed to the tools or any data returned by them.
- **API Requests:** We log the Gemini model used for each request, the duration
- **API requests:** We log the Gemini model used for each request, the duration
of the request, and whether it was successful. We do not collect the content
of the prompts or responses.
- **Session Information:** We collect information about the configuration of the
- **Session information:** We collect information about the configuration of the
CLI, such as the enabled tools and the approval mode.
**What we DON'T collect:**
- **Personally Identifiable Information (PII):** We do not collect any personal
- **Personally identifiable information (PII):** We do not collect any personal
information, such as your name, email address, or API keys.
- **Prompt and Response Content:** We do not log the content of your prompts or
- **Prompt and response content:** We do not log the content of your prompts or
the responses from the Gemini model.
- **File Content:** We do not log the content of any files that are read or
- **File content:** We do not log the content of any files that are read or
written by the CLI.
**How to opt out:**

View File

@@ -1,6 +1,6 @@
# Gemini CLI Configuration
# Gemini CLI configuration
> **Note on Configuration Format, 9/17/25:** The format of the `settings.json`
> **Note on configuration format, 9/17/25:** The format of the `settings.json`
> file has been updated to a new, more organized structure.
>
> - The new format will be supported in the stable release starting
@@ -950,7 +950,7 @@ of v0.3.0:
}
```
## Shell History
## Shell history
The CLI keeps a history of shell commands you run. To avoid conflicts between
different projects, this history is stored in a project-specific directory
@@ -961,7 +961,7 @@ within your user's home folder.
path.
- The history is stored in a file named `shell_history`.
## Environment Variables & `.env` Files
## Environment variables and `.env` files
Environment variables are a common way to configure applications, especially for
sensitive information like API keys or for settings that might change between
@@ -978,7 +978,7 @@ loading order is:
the home directory.
3. If still not found, it looks for `~/.env` (in the user's home directory).
**Environment Variable Exclusion:** Some environment variables (like `DEBUG` and
**Environment variable exclusion:** Some environment variables (like `DEBUG` and
`DEBUG_MODE`) are automatically excluded from being loaded from project `.env`
files to prevent interference with gemini-cli behavior. Variables from
`.gemini/.env` files are never excluded. You can customize this behavior using
@@ -1003,7 +1003,7 @@ the `advanced.excludedEnvVars` setting in your `settings.json` file.
- Required for using Code Assist or Vertex AI.
- If using Vertex AI, ensure you have the necessary permissions in this
project.
- **Cloud Shell Note:** When running in a Cloud Shell environment, this
- **Cloud Shell note:** When running in a Cloud Shell environment, this
variable defaults to a special project allocated for Cloud Shell users. If
you have `GOOGLE_CLOUD_PROJECT` set in your global environment in Cloud
Shell, it will be overridden by this default. To use a different project in
@@ -1072,7 +1072,7 @@ the `advanced.excludedEnvVars` setting in your `settings.json` file.
- Specifies the endpoint for the code assist server.
- This is useful for development and testing.
## Command-Line Arguments
## Command-line arguments
Arguments passed directly when running the CLI can override other configurations
for that specific session.
@@ -1167,7 +1167,7 @@ for that specific session.
- **`--record-responses`**:
- Path to a file to record model responses for testing.
## Context Files (Hierarchical Instructional Context)
## Context files (hierarchical instructional context)
While not strictly configuration for the CLI's _behavior_, context files
(defaulting to `GEMINI.md` but configurable via the `context.fileName` setting)
@@ -1183,7 +1183,7 @@ context.
that you want the Gemini model to be aware of during your interactions. The
system is designed to manage this instructional context hierarchically.
### Example Context File Content (e.g., `GEMINI.md`)
### Example context file content (e.g., `GEMINI.md`)
Here's a conceptual example of what a context file at the root of a TypeScript
project might contain:
@@ -1224,23 +1224,23 @@ more relevant and precise your context files are, the better the AI can assist
you. Project-specific context files are highly encouraged to establish
conventions and context.
- **Hierarchical Loading and Precedence:** The CLI implements a sophisticated
- **Hierarchical loading and precedence:** The CLI implements a sophisticated
hierarchical memory system by loading context files (e.g., `GEMINI.md`) from
several locations. Content from files lower in this list (more specific)
typically overrides or supplements content from files higher up (more
general). The exact concatenation order and final context can be inspected
using the `/memory show` command. The typical loading order is:
1. **Global Context File:**
1. **Global context file:**
- Location: `~/.gemini/<configured-context-filename>` (e.g.,
`~/.gemini/GEMINI.md` in your user home directory).
- Scope: Provides default instructions for all your projects.
2. **Project Root & Ancestors Context Files:**
2. **Project root and ancestors context files:**
- Location: The CLI searches for the configured context file in the
current working directory and then in each parent directory up to either
the project root (identified by a `.git` folder) or your home directory.
- Scope: Provides context relevant to the entire project or a significant
portion of it.
3. **Sub-directory Context Files (Contextual/Local):**
3. **Sub-directory context files (contextual/local):**
- Location: The CLI also scans for the configured context file in
subdirectories _below_ the current working directory (respecting common
ignore patterns like `node_modules`, `.git`, etc.). The breadth of this
@@ -1249,15 +1249,15 @@ conventions and context.
file.
- Scope: Allows for highly specific instructions relevant to a particular
component, module, or subsection of your project.
- **Concatenation & UI Indication:** The contents of all found context files are
concatenated (with separators indicating their origin and path) and provided
as part of the system prompt to the Gemini model. The CLI footer displays the
count of loaded context files, giving you a quick visual cue about the active
instructional context.
- **Importing Content:** You can modularize your context files by importing
- **Concatenation and UI indication:** The contents of all found context files
are concatenated (with separators indicating their origin and path) and
provided as part of the system prompt to the Gemini model. The CLI footer
displays the count of loaded context files, giving you a quick visual cue
about the active instructional context.
- **Importing content:** You can modularize your context files by importing
other Markdown files using the `@path/to/file.md` syntax. For more details,
see the [Memory Import Processor documentation](../core/memport.md).
- **Commands for Memory Management:**
- **Commands for memory management:**
- Use `/memory refresh` to force a re-scan and reload of all context files
from all configured locations. This updates the AI's instructional context.
- Use `/memory show` to display the combined instructional context currently
@@ -1304,7 +1304,7 @@ sandbox image:
BUILD_SANDBOX=1 gemini -s
```
## Usage Statistics
## Usage statistics
To help us improve the Gemini CLI, we collect anonymized usage statistics. This
data helps us understand how the CLI is used, identify common issues, and
@@ -1312,22 +1312,22 @@ prioritize new features.
**What we collect:**
- **Tool Calls:** We log the names of the tools that are called, whether they
- **Tool calls:** We log the names of the tools that are called, whether they
succeed or fail, and how long they take to execute. We do not collect the
arguments passed to the tools or any data returned by them.
- **API Requests:** We log the Gemini model used for each request, the duration
- **API requests:** We log the Gemini model used for each request, the duration
of the request, and whether it was successful. We do not collect the content
of the prompts or responses.
- **Session Information:** We collect information about the configuration of the
- **Session information:** We collect information about the configuration of the
CLI, such as the enabled tools and the approval mode.
**What we DON'T collect:**
- **Personally Identifiable Information (PII):** We do not collect any personal
- **Personally identifiable information (PII):** We do not collect any personal
information, such as your name, email address, or API keys.
- **Prompt and Response Content:** We do not log the content of your prompts or
- **Prompt and response content:** We do not log the content of your prompts or
the responses from the Gemini model.
- **File Content:** We do not log the content of any files that are read or
- **File content:** We do not log the content of any files that are read or
written by the CLI.
**How to opt out:**

View File

@@ -1,6 +1,6 @@
Note: This page will be replaced by [installation.md](installation.md).
# Gemini CLI Installation, Execution, and Deployment
# Gemini CLI installation, execution, and deployment
Install and run Gemini CLI. This document provides an overview of Gemini CLI's
installation methods and deployment architecture.
@@ -44,7 +44,7 @@ downloading the Gemini CLI package from the NPM registry.
For security and isolation, Gemini CLI can be run inside a container. This is
the default way that the CLI executes tools that might have side effects.
- **Directly from the Registry:** You can run the published sandbox image
- **Directly from the registry:** You can run the published sandbox image
directly. This is useful for environments where you only have Docker and want
to run the CLI.
```bash
@@ -63,7 +63,7 @@ the default way that the CLI executes tools that might have side effects.
Contributors to the project will want to run the CLI directly from the source
code.
- **Development Mode:** This method provides hot-reloading and is useful for
- **Development mode:** This method provides hot-reloading and is useful for
active development.
```bash
# From the root of the repository

View File

@@ -1,4 +1,4 @@
# Gemini CLI Examples
# Gemini CLI examples
Not sure where to get started with Gemini CLI? This document covers examples on
how to use Gemini CLI for a variety of tasks.
@@ -57,7 +57,7 @@ Gemini CLI will return an explanation based on the actual source code:
The `chalk` library is a popular npm package for styling terminal output with
colors. After analyzing the source code, here's how it works:
- **Core Functionality:** The main file sets up a chainable API. Each color or
- **Core functionality:** The main file sets up a chainable API. Each color or
modifier (like `bold` or `italic`) is a getter that appends the corresponding
ANSI escape code to an internal stack.
@@ -65,7 +65,7 @@ colors. After analyzing the source code, here's how it works:
getters. The `red` getter adds the red color code, and the `bold` getter adds
the bold code.
- **Output Generation:** When the chain is treated as a string (e.g., in
- **Output generation:** When the chain is treated as a string (e.g., in
`console.log`), a final `toString()` method is called. This method joins all
the stored ANSI codes, wraps them around the input string ('Hello'), and adds
a reset code at the end. This produces the final, styled string that the

View File

@@ -1,4 +1,4 @@
# Gemini 3 Pro on Gemini CLI (Join the Waitlist)
# Gemini 3 Pro on Gemini CLI (join the waitlist)
We're excited to bring Gemini 3 Pro to Gemini CLI. For Google AI Ultra users
(Google AI Ultra for Business is not currently supported) and paid Gemini and
@@ -8,7 +8,7 @@ For everyone else, we're gradually expanding access
waitlist now to access Gemini 3 Pro once approved.
**Note:** Please wait until you have been approved to use Gemini 3 Pro to enable
**Preview Features**. If enabled early, the CLI will fallback to Gemini 2.5 Pro.
**preview features**. If enabled early, the CLI will fall back to Gemini 2.5 Pro.
## Do I need to join the waitlist?
@@ -81,7 +81,7 @@ CLI waits longer between each retry, when the system is busy. If the retry
doesn't happen immediately, please wait a few minutes for the request to
process.
## Model selection & routing types
## Model selection and routing types
When using Gemini CLI, you may want to control how your requests are routed
between models. By default, Gemini CLI uses **Auto** routing.

View File

@@ -1,4 +1,4 @@
# Get Started with Gemini CLI
# Get started with Gemini CLI
Welcome to Gemini CLI! This guide will help you install, configure, and start
using the Gemini CLI to enhance your workflow right from your terminal.

View File

@@ -1,4 +1,4 @@
# Gemini CLI Installation, Execution, and Deployment
# Gemini CLI installation, execution, and deployment
Install and run Gemini CLI. This document provides an overview of Gemini CLI's
installation methods and deployment architecture.
@@ -42,7 +42,7 @@ downloading the Gemini CLI package from the NPM registry.
For security and isolation, Gemini CLI can be run inside a container. This is
the default way that the CLI executes tools that might have side effects.
- **Directly from the Registry:** You can run the published sandbox image
- **Directly from the registry:** You can run the published sandbox image
directly. This is useful for environments where you only have Docker and want
to run the CLI.
```bash
@@ -61,13 +61,13 @@ the default way that the CLI executes tools that might have side effects.
Contributors to the project will want to run the CLI directly from the source
code.
- **Development Mode:** This method provides hot-reloading and is useful for
- **Development mode:** This method provides hot-reloading and is useful for
active development.
```bash
# From the root of the repository
npm run start
```
- **Production-like mode (Linked package):** This method simulates a global
- **Production-like mode (linked package):** This method simulates a global
installation by linking your local package. It's useful for testing a local
build in a production workflow.

View File

@@ -1,4 +1,4 @@
# Gemini CLI Companion Plugin: Interface Specification
# Gemini CLI companion plugin: Interface specification
> Last Updated: September 15, 2025
@@ -9,11 +9,11 @@ awareness) are provided by the official extension
This specification is for contributors who wish to bring similar functionality
to other editors like JetBrains IDEs, Sublime Text, etc.
## I. The Communication Interface
## I. The communication interface
Gemini CLI and the IDE plugin communicate through a local communication channel.
### 1. Transport Layer: MCP over HTTP
### 1. Transport layer: MCP over HTTP
The plugin **MUST** run a local HTTP server that implements the **Model Context
Protocol (MCP)**.
@@ -25,24 +25,24 @@ Protocol (MCP)**.
- **Port:** The server **MUST** listen on a dynamically assigned port (i.e.,
listen on port `0`).
### 2. Discovery Mechanism: The Port File
### 2. Discovery mechanism: The port file
For Gemini CLI to connect, it needs to discover which IDE instance it's running
in and what port your server is using. The plugin **MUST** facilitate this by
creating a "discovery file."
- **How the CLI Finds the File:** The CLI determines the Process ID (PID) of the
- **How the CLI finds the file:** The CLI determines the Process ID (PID) of the
IDE it's running in by traversing the process tree. It then looks for a
discovery file that contains this PID in its name.
- **File Location:** The file must be created in a specific directory:
- **File location:** The file must be created in a specific directory:
`os.tmpdir()/gemini/ide/`. Your plugin must create this directory if it
doesn't exist.
- **File Naming Convention:** The filename is critical and **MUST** follow the
- **File naming convention:** The filename is critical and **MUST** follow the
pattern: `gemini-ide-server-${PID}-${PORT}.json`
- `${PID}`: The process ID of the parent IDE process. Your plugin must
determine this PID and include it in the filename.
- `${PORT}`: The port your MCP server is listening on.
- **File Content & Workspace Validation:** The file **MUST** contain a JSON
- **File content and workspace validation:** The file **MUST** contain a JSON
object with the following structure:
```json
@@ -79,7 +79,7 @@ creating a "discovery file."
server (e.g., `Authorization: Bearer a-very-secret-token`). Your server
**MUST** validate this token on every request and reject any that are
unauthorized.
- **Tie-Breaking with Environment Variables (Recommended):** For the most
- **Tie-breaking with environment variables (recommended):** For the most
reliable experience, your plugin **SHOULD** both create the discovery file and
set the `GEMINI_CLI_IDE_SERVER_PORT` environment variable in the integrated
terminal. The file serves as the primary discovery mechanism, but the
@@ -88,18 +88,18 @@ creating a "discovery file."
`GEMINI_CLI_IDE_SERVER_PORT` variable to identify and connect to the correct
window's server.
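To make this precedence concrete, here is a non-normative sketch — `resolveIdeServerPort` is a hypothetical helper for illustration, not Gemini CLI's actual code:

```typescript
// Hypothetical helper illustrating the recommended tie-breaking order:
// prefer GEMINI_CLI_IDE_SERVER_PORT when the plugin has set it in the
// integrated terminal, otherwise fall back to the port encoded in the
// discovery filename (gemini-ide-server-${PID}-${PORT}.json).
function resolveIdeServerPort(
  env: Record<string, string | undefined>,
  discoveryFileName: string | null,
): number | null {
  const fromEnv = env["GEMINI_CLI_IDE_SERVER_PORT"];
  if (fromEnv) return Number(fromEnv);
  if (discoveryFileName) {
    const match = /^gemini-ide-server-\d+-(\d+)\.json$/.exec(discoveryFileName);
    if (match) return Number(match[1]);
  }
  return null;
}
```

The same fallback order applies however your editor exposes these values: the environment variable disambiguates multiple IDE windows, while the discovery file remains the primary mechanism.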
## II. The Context Interface
## II. The context interface
To enable context awareness, the plugin **MAY** provide the CLI with real-time
information about the user's activity in the IDE.
### `ide/contextUpdate` Notification
### `ide/contextUpdate` notification
The plugin **MAY** send an `ide/contextUpdate`
[notification](https://modelcontextprotocol.io/specification/2025-06-18/basic/index#notifications)
to the CLI whenever the user's context changes.
- **Triggering Events:** This notification should be sent (with a recommended
- **Triggering events:** This notification should be sent (with a recommended
debounce of 50ms) when:
- A file is opened, closed, or focused.
- The user's cursor position or text selection changes in the active file.
@@ -136,16 +136,16 @@ to the CLI whenever the user's context changes.
Virtual files (e.g., unsaved files without a path, editor settings pages)
**MUST** be excluded.
### How the CLI Uses This Context
### How the CLI uses this context
After receiving the `IdeContext` object, the CLI performs several normalization
and truncation steps before sending the information to the model.
- **File Ordering:** The CLI uses the `timestamp` field to determine the most
- **File ordering:** The CLI uses the `timestamp` field to determine the most
recently used files. It sorts the `openFiles` list based on this value.
Therefore, your plugin **MUST** provide an accurate Unix timestamp for when a
file was last focused.
- **Active File:** The CLI considers only the most recent file (after sorting)
- **Active file:** The CLI considers only the most recent file (after sorting)
to be the "active" file. It will ignore the `isActive` flag on all other files
and clear their `cursor` and `selectedText` fields. Your plugin should focus
on setting `isActive: true` and providing cursor/selection details only for
@@ -156,14 +156,14 @@ and truncation steps before sending the information to the model.
While the CLI handles the final truncation, it is highly recommended that your
plugin also limits the amount of context it sends.
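The ordering and clearing rules above can be sketched as follows (assumed field names and types; this is not the CLI's actual implementation):

```typescript
// Sketch of the CLI-side normalization described above (assumed types).
interface OpenFile {
  path: string;
  timestamp: number; // Unix timestamp of last focus — MUST be accurate
  isActive?: boolean;
  cursor?: { line: number; character: number };
  selectedText?: string;
}

// Sort by recency; only the most recent file keeps isActive, cursor,
// and selectedText — those fields are cleared on every other entry.
function normalizeOpenFiles(files: OpenFile[]): OpenFile[] {
  const sorted = [...files].sort((a, b) => b.timestamp - a.timestamp);
  return sorted.map((f, i) =>
    i === 0
      ? { ...f, isActive: true }
      : { ...f, isActive: false, cursor: undefined, selectedText: undefined },
  );
}
```

This is why an accurate `timestamp` matters: a stale value can demote the file the user is actually working in.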
## III. The Diffing Interface
## III. The diffing interface
To enable interactive code modifications, the plugin **MAY** expose a diffing
interface. This allows the CLI to request that the IDE open a diff view, showing
proposed changes to a file. The user can then review, edit, and ultimately
accept or reject these changes directly within the IDE.
### `openDiff` Tool
### `openDiff` tool
The plugin **MUST** register an `openDiff` tool on its MCP server.
@@ -194,7 +194,7 @@ The plugin **MUST** register an `openDiff` tool on its MCP server.
The actual outcome of the diff (acceptance or rejection) is communicated
asynchronously via notifications.
### `closeDiff` Tool
### `closeDiff` tool
The plugin **MUST** register a `closeDiff` tool on its MCP server.
@@ -219,7 +219,7 @@ The plugin **MUST** register a `closeDiff` tool on its MCP server.
**MUST** have `isError: true` and include a `TextContent` block in the
`content` array describing the error.
### `ide/diffAccepted` Notification
### `ide/diffAccepted` notification
When the user accepts the changes in a diff view (e.g., by clicking an "Apply"
or "Save" button), the plugin **MUST** send an `ide/diffAccepted` notification
@@ -238,7 +238,7 @@ to the CLI.
}
```
### `ide/diffRejected` Notification
### `ide/diffRejected` notification
When the user rejects the changes (e.g., by closing the diff view without
accepting), the plugin **MUST** send an `ide/diffRejected` notification to the
@@ -254,14 +254,14 @@ CLI.
}
```
## IV. The Lifecycle Interface
## IV. The lifecycle interface
The plugin **MUST** manage its resources and the discovery file correctly based
on the IDE's lifecycle.
- **On Activation (IDE startup/plugin enabled):**
- **On activation (IDE startup/plugin enabled):**
1. Start the MCP server.
2. Create the discovery file.
- **On Deactivation (IDE shutdown/plugin disabled):**
- **On deactivation (IDE shutdown/plugin disabled):**
1. Stop the MCP server.
2. Delete the discovery file.
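A minimal sketch of this lifecycle for a Node-based plugin, assuming `info` is the JSON object required by the discovery-file structure in section I (the file I/O shown here is illustrative, not prescribed by the spec):

```typescript
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

// On activation: create os.tmpdir()/gemini/ide/ if missing and write the
// discovery file named gemini-ide-server-${PID}-${PORT}.json. Starting the
// MCP server itself is elided from this sketch.
function onActivate(pid: number, port: number, info: object): string {
  const dir = path.join(os.tmpdir(), "gemini", "ide");
  fs.mkdirSync(dir, { recursive: true });
  const file = path.join(dir, `gemini-ide-server-${pid}-${port}.json`);
  fs.writeFileSync(file, JSON.stringify(info));
  return file;
}

// On deactivation: stop the MCP server (elided) and remove the file so
// the CLI does not try to connect to a dead server.
function onDeactivate(file: string): void {
  fs.rmSync(file, { force: true });
}
```

Deleting the file on deactivation is what keeps discovery reliable: a leftover file from a closed IDE window would otherwise point the CLI at a port nothing is listening on.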

View File

@@ -1,4 +1,4 @@
# IDE Integration
# IDE integration
Gemini CLI can integrate with your IDE to provide a more seamless and
context-aware experience. This integration allows the CLI to understand your
@@ -11,18 +11,18 @@ support VS Code extensions. To build support for other editors, see the
## Features
- **Workspace Context:** The CLI automatically gains awareness of your workspace
- **Workspace context:** The CLI automatically gains awareness of your workspace
to provide more relevant and accurate responses. This context includes:
- The **10 most recently accessed files** in your workspace.
- Your active cursor position.
- Any text you have selected (up to a 16KB limit; longer selections will be
truncated).
- **Native Diffing:** When Gemini suggests code modifications, you can view the
- **Native diffing:** When Gemini suggests code modifications, you can view the
changes directly within your IDE's native diff viewer. This allows you to
review, edit, and accept or reject the suggested changes seamlessly.
- **VS Code Commands:** You can access Gemini CLI features directly from the VS
- **VS Code commands:** You can access Gemini CLI features directly from the VS
Code Command Palette (`Cmd+Shift+P` or `Ctrl+Shift+P`):
- `Gemini CLI: Run`: Starts a new Gemini CLI session in the integrated
terminal.
@@ -32,18 +32,18 @@ support VS Code extensions. To build support for other editors, see the
- `Gemini CLI: View Third-Party Notices`: Displays the third-party notices for
the extension.
## Installation and Setup
## Installation and setup
There are three ways to set up the IDE integration:
### 1. Automatic Nudge (Recommended)
### 1. Automatic nudge (recommended)
When you run Gemini CLI inside a supported editor, it will automatically detect
your environment and prompt you to connect. Answering "Yes" will automatically
run the necessary setup, which includes installing the companion extension and
enabling the connection.
### 2. Manual Installation from CLI
### 2. Manual installation from CLI
If you previously dismissed the prompt or want to install the extension
manually, you can run the following command inside Gemini CLI:
@@ -54,13 +54,13 @@ manually, you can run the following command inside Gemini CLI:
This will find the correct extension for your IDE and install it.
### 3. Manual Installation from a Marketplace
### 3. Manual installation from a marketplace
You can also install the extension directly from a marketplace.
- **For Visual Studio Code:** Install from the
[VS Code Marketplace](https://marketplace.visualstudio.com/items?itemName=google.gemini-cli-vscode-ide-companion).
- **For VS Code Forks:** To support forks of VS Code, the extension is also
- **For VS Code forks:** To support forks of VS Code, the extension is also
published on the
[Open VSX Registry](https://open-vsx.org/extension/google/gemini-cli-vscode-ide-companion).
Follow your editor's instructions for installing extensions from this
@@ -75,7 +75,7 @@ You can also install the extension directly from a marketplace.
## Usage
### Enabling and Disabling
### Enabling and disabling
You can control the IDE integration from within the CLI:
@@ -91,7 +91,7 @@ You can control the IDE integration from within the CLI:
When enabled, Gemini CLI will automatically attempt to connect to the IDE
companion extension.
### Checking the Status
### Checking the status
To check the connection status and see the context the CLI has received from the
IDE, run:
@@ -106,7 +106,7 @@ recently opened files it is aware of.
> [!NOTE] The file list is limited to 10 recently accessed files within your
> workspace and only includes local files on disk.
### Working with Diffs
### Working with diffs
When you ask Gemini to modify a file, it can open a diff view directly in your
editor.
@@ -131,14 +131,14 @@ accepting them.
If you select "Yes, allow always" in the CLI, changes will no longer show up
in the IDE, as they will be auto-accepted.
## Using with Sandboxing
## Using with sandboxing
If you are using Gemini CLI within a sandbox, please be aware of the following:
- **On macOS:** The IDE integration requires network access to communicate with
the IDE companion extension. You must use a Seatbelt profile that allows
network access.
- **In a Docker Container:** If you run Gemini CLI inside a Docker (or Podman)
- **In a Docker container:** If you run Gemini CLI inside a Docker (or Podman)
container, the IDE integration can still connect to the VS Code extension
running on your host machine. The CLI is configured to automatically find the
IDE server on `host.docker.internal`. No special configuration is usually
@@ -150,7 +150,7 @@ If you are using Gemini CLI within a sandbox, please be aware of the following:
If you encounter issues with IDE integration, here are some common error
messages and how to resolve them.
### Connection Errors
### Connection errors
- **Message:**
`🔴 Disconnected: Failed to connect to IDE companion extension in [IDE Name]. Please ensure the extension is running. To install the extension, run /ide install.`
@@ -170,7 +170,7 @@ messages and how to resolve them.
- **Solution:** Run `/ide enable` to try and reconnect. If the issue
continues, open a new terminal window or restart your IDE.
### Configuration Errors
### Configuration errors
- **Message:**
`🔴 Disconnected: Directory mismatch. Gemini CLI is running in a different location than the open workspace in [IDE Name]. Please run the CLI from one of the following directories: [List of directories]`
@@ -184,7 +184,7 @@ messages and how to resolve them.
- **Cause:** You have no workspace open in your IDE.
- **Solution:** Open a workspace in your IDE and restart the CLI.
### General Errors
### General errors
- **Message:**
`IDE integration is not supported in your current environment. To use this feature, run Gemini CLI in one of these supported IDEs: [List of IDEs]`

View File

@@ -1,10 +1,10 @@
# Welcome to Gemini CLI documentation
This documentation provides a comprehensive guide to installing, using, and
developing Gemini CLI. This tool lets you interact with Gemini models through a
command-line interface.
developing Gemini CLI, a tool that lets you interact with Gemini models through
a command-line interface.
## Overview
## Gemini CLI overview
Gemini CLI brings the capabilities of Gemini models to your terminal in an
interactive Read-Eval-Print Loop (REPL) environment. Gemini CLI consists of a
@@ -18,41 +18,58 @@ file system operations, running shells, and web fetching, which are managed by
This documentation is organized into the following sections:
### Overview
- **[Architecture overview](./architecture.md):** Understand the high-level
design of Gemini CLI, including its components and how they interact.
- **[Contribution guide](../CONTRIBUTING.md):** Information for contributors and
developers, including setup, building, testing, and coding conventions.
### Get started
- **[Gemini CLI Quickstart](./get-started/index.md):** Let's get started with
- **[Gemini CLI quickstart](./get-started/index.md):** Let's get started with
Gemini CLI.
- **[Installation](./get-started/installation.md):** Install and run Gemini CLI.
- **[Authentication](./get-started/authentication.md):** Authenticate Gemini
CLI.
- **[Configuration](./get-started/configuration.md):** Information on
configuring the CLI.
- **[Examples](./get-started/examples.md):** Example usage of Gemini CLI.
- **[Get started with Gemini 3](./get-started/gemini-3.md):** Learn how to
- **[Gemini 3 Pro on Gemini CLI](./get-started/gemini-3.md):** Learn how to
enable and use Gemini 3.
- **[Authentication](./get-started/authentication.md):** Authenticate to Gemini
CLI.
- **[Configuration](./get-started/configuration.md):** Learn how to configure
the CLI.
- **[Installation](./get-started/installation.md):** Install and run Gemini CLI.
- **[Examples](./get-started/examples.md):** Example usage of Gemini CLI.
### CLI
- **[CLI overview](./cli/index.md):** Overview of the command-line interface.
- **[Introduction: Gemini CLI](./cli/index.md):** Overview of the command-line
interface.
- **[Commands](./cli/commands.md):** Description of available CLI commands.
- **[Enterprise](./cli/enterprise.md):** Gemini CLI for enterprise.
- **[Model Selection](./cli/model.md):** Select the model used to process your
commands with `/model`.
- **[Settings](./cli/settings.md):** Configure various aspects of the CLI's
behavior and appearance with `/settings`.
- **[Themes](./cli/themes.md):** Themes for Gemini CLI.
- **[Token Caching](./cli/token-caching.md):** Token caching and optimization.
- **[Tutorials](./cli/tutorials.md):** Tutorials for Gemini CLI.
- **[Checkpointing](./cli/checkpointing.md):** Documentation for the
checkpointing feature.
- **[Custom commands](./cli/custom-commands.md):** Create your own commands and
shortcuts for frequently used prompts.
- **[Enterprise](./cli/enterprise.md):** Gemini CLI for enterprise.
- **[Headless mode](./cli/headless.md):** Use Gemini CLI programmatically for
scripting and automation.
- **[Keyboard shortcuts](./cli/keyboard-shortcuts.md):** A reference for all
keyboard shortcuts to improve your workflow.
- **[Model selection](./cli/model.md):** Select the model used to process your
commands with `/model`.
- **[Sandbox](./cli/sandbox.md):** Isolate tool execution in a secure,
containerized environment.
- **[Settings](./cli/settings.md):** Configure various aspects of the CLI's
behavior and appearance with `/settings`.
- **[Telemetry](./cli/telemetry.md):** Overview of telemetry in the CLI.
- **[Themes](./cli/themes.md):** Themes for Gemini CLI.
- **[Token caching](./cli/token-caching.md):** Token caching and optimization.
- **[Trusted Folders](./cli/trusted-folders.md):** An overview of the Trusted
Folders security feature.
- **[Tutorials](./cli/tutorials.md):** Tutorials for Gemini CLI.
- **[Uninstall](./cli/uninstall.md):** Methods for uninstalling the Gemini CLI.
### Core
- **[Gemini CLI core overview](./core/index.md):** Information about Gemini CLI
core.
- **[Introduction: Gemini CLI core](./core/index.md):** Information about Gemini
CLI core.
- **[Memport](./core/memport.md):** Using the Memory Import Processor.
- **[Tools API](./core/tools-api.md):** Information on how the core manages and
exposes tools.
@@ -61,51 +78,58 @@ This documentation is organized into the following sections:
### Tools
- **[Gemini CLI tools overview](./tools/index.md):** Information about Gemini
CLI's tools.
- **[File System Tools](./tools/file-system.md):** Documentation for the
- **[Introduction: Gemini CLI tools](./tools/index.md):** Information about
Gemini CLI's tools.
- **[File system tools](./tools/file-system.md):** Documentation for the
`read_file` and `write_file` tools.
- **[MCP servers](./tools/mcp-server.md):** Using MCP servers with Gemini CLI.
- **[Shell Tool](./tools/shell.md):** Documentation for the `run_shell_command`
- **[Shell tool](./tools/shell.md):** Documentation for the `run_shell_command`
tool.
- **[Web Fetch Tool](./tools/web-fetch.md):** Documentation for the `web_fetch`
- **[Web fetch tool](./tools/web-fetch.md):** Documentation for the `web_fetch`
tool.
- **[Web Search Tool](./tools/web-search.md):** Documentation for the
- **[Web search tool](./tools/web-search.md):** Documentation for the
`google_web_search` tool.
- **[Memory Tool](./tools/memory.md):** Documentation for the `save_memory`
- **[Memory tool](./tools/memory.md):** Documentation for the `save_memory`
tool.
- **[Todo Tool](./tools/todos.md):** Documentation for the `write_todos` tool.
- **[Todo tool](./tools/todos.md):** Documentation for the `write_todos` tool.
- **[MCP servers](./tools/mcp-server.md):** Using MCP servers with Gemini CLI.
### Extensions
- **[Extensions](./extensions/index.md):** How to extend the CLI with new
functionality.
- **[Get Started with Extensions](./extensions/getting-started-extensions.md):**
- **[Introduction: Extensions](./extensions/index.md):** How to extend the CLI
with new functionality.
- **[Get started with extensions](./extensions/getting-started-extensions.md):**
Learn how to build your own extension.
- **[Extension Releasing](./extensions/extension-releasing.md):** How to release
- **[Extension releasing](./extensions/extension-releasing.md):** How to release
Gemini CLI extensions.
### IDE integration
- **[IDE Integration](./ide-integration/index.md):** Connect the CLI to your
editor.
- **[IDE Companion Extension Spec](./ide-integration/ide-companion-spec.md):**
- **[Introduction to IDE integration](./ide-integration/index.md):** Connect the
CLI to your editor.
- **[IDE companion extension spec](./ide-integration/ide-companion-spec.md):**
Spec for building IDE companion extensions.
### About the Gemini CLI project
### Development
- **[Architecture Overview](./architecture.md):** Understand the high-level
design of Gemini CLI, including its components and how they interact.
- **[Contributing & Development Guide](../CONTRIBUTING.md):** Information for
contributors and developers, including setup, building, testing, and coding
conventions.
- **[NPM](./npm.md):** Details on how the project's packages are structured.
- **[Troubleshooting Guide](./troubleshooting.md):** Find solutions to common
problems.
- **[FAQ](./faq.md):** Frequently asked questions.
- **[Terms of Service and Privacy Notice](./tos-privacy.md):** Information on
the terms of service and privacy notices applicable to your use of Gemini CLI.
- **[Releases](./releases.md):** Information on the project's releases and
deployment cadence.
- **[Changelog](./changelogs/index.md):** Highlights and notable changes to
Gemini CLI.
- **[Integration tests](./integration-tests.md):** Information about the
integration testing framework used in this project.
- **[Issue and PR automation](./issue-and-pr-automation.md):** A detailed
overview of the automated processes we use to manage and triage issues and
pull requests.
### Support
- **[FAQ](./faq.md):** Frequently asked questions.
- **[Troubleshooting guide](./troubleshooting.md):** Find solutions to common
problems.
- **[Quota and pricing](./quota-and-pricing.md):** Learn about the free tier and
paid options.
- **[Terms of service and privacy notice](./tos-privacy.md):** Information on
the terms of service and privacy notices applicable to your use of Gemini CLI.
We hope this documentation helps you make the most of Gemini CLI!

View File

@@ -1,4 +1,4 @@
# Integration Tests
# Integration tests
This document provides information about the integration testing framework used
in this project.
@@ -86,7 +86,7 @@ with the deflake script or workflow to make sure that it is not flaky.
npm run deflake -- --runs=5 --command="npm run test:e2e -- -- --test-name-pattern '<your-new-test-name>'"
```
#### Deflake Workflow
#### Deflake workflow
```bash
gh workflow run deflake.yml --ref <your-branch> -f test_name_pattern="<your-test-name-pattern>"

View File

@@ -1,4 +1,4 @@
# Automation and Triage Processes
# Automation and triage processes
This document provides a detailed overview of the automated processes we use to
manage and triage issues and pull requests. Our goal is to provide prompt
@@ -6,7 +6,7 @@ feedback and ensure that contributions are reviewed and integrated efficiently.
Understanding this automation will help you as a contributor know what to expect
and how to best interact with our repository bots.
## Guiding Principle: Issues and Pull Requests
## Guiding principle: Issues and pull requests
First and foremost, almost every Pull Request (PR) should be linked to a
corresponding Issue. The issue describes the "what" and the "why" (the bug or
@@ -16,12 +16,12 @@ automation is built around this principle.
---
## Detailed Automation Workflows
## Detailed automation workflows
Here is a breakdown of the specific automation workflows that run in our
repository.
### 1. When you open an Issue: `Automated Issue Triage`
### 1. When you open an issue: `Automated Issue Triage`
This is the first bot you will interact with when you create an issue. Its job
is to perform an initial analysis and apply the correct labels.
@@ -48,7 +48,7 @@ is to perform an initial analysis and apply the correct labels.
- If the `status/need-information` label is added, please provide the
requested details in a comment.
### 2. When you open a Pull Request: `Continuous Integration (CI)`
### 2. When you open a pull request: `Continuous Integration (CI)`
This workflow ensures that all changes meet our quality standards before they
can be merged.
@@ -70,7 +70,7 @@ can be merged.
- If a check fails (a red "X" ❌), click the "Details" link next to the failed
check to view the logs, identify the problem, and push a fix.
### 3. Ongoing Triage for Pull Requests: `PR Auditing and Label Sync`
### 3. Ongoing triage for pull requests: `PR Auditing and Label Sync`
This workflow runs periodically to ensure all open PRs are correctly linked to
issues and have consistent labels.
@@ -93,7 +93,7 @@ issues and have consistent labels.
- This will ensure your PR is correctly categorized and moves through the
review process smoothly.
### 4. Ongoing Triage for Issues: `Scheduled Issue Triage`
### 4. Ongoing triage for issues: `Scheduled Issue Triage`
This is a fallback workflow to ensure that no issue gets missed by the triage
process.
@@ -110,7 +110,7 @@ process.
ensure every issue is eventually categorized, even if the initial triage
fails.
### 5. Release Automation
### 5. Release automation
This workflow handles the process of packaging and publishing new versions of
the Gemini CLI.

View File

@@ -1,9 +1,9 @@
# Local Development Guide
# Local development guide
This guide provides instructions for setting up and using local development
features, such as development tracing.
## Development Tracing
## Development tracing
Development traces (dev traces) are OpenTelemetry (OTel) traces that help you
debug your code by instrumenting interesting events like model calls, tool
@@ -15,7 +15,7 @@ behaviour and debugging issues. They are disabled by default.
To enable dev traces, set the `GEMINI_DEV_TRACING=true` environment variable
when running Gemini CLI.
### Viewing Dev Traces
### Viewing dev traces
You can view dev traces using either Jaeger or the Genkit Developer UI.
@@ -23,7 +23,7 @@ You can view dev traces using either Jaeger or the Genkit Developer UI.
Genkit provides a web-based UI for viewing traces and other telemetry data.
1. **Start the Genkit Telemetry Server:**
1. **Start the Genkit telemetry server:**
Run the following command to start the Genkit server:
@@ -37,7 +37,7 @@ Genkit provides a web-based UI for viewing traces and other telemetry data.
Genkit Developer UI: http://localhost:4000
```
2. **Run Gemini CLI with Dev Tracing:**
2. **Run Gemini CLI with dev tracing:**
In a separate terminal, run your Gemini CLI command with the
`GEMINI_DEV_TRACING` environment variable:
@@ -46,7 +46,7 @@ Genkit provides a web-based UI for viewing traces and other telemetry data.
GEMINI_DEV_TRACING=true gemini
```
3. **View the Traces:**
3. **View the traces:**
Open the Genkit Developer UI URL in your browser and navigate to the
**Traces** tab to view the traces.
@@ -84,7 +84,7 @@ You can view dev traces in the Jaeger UI. To get started, follow these steps:
For more detailed information on telemetry, see the
[telemetry documentation](./cli/telemetry.md).
### Instrumenting Code with Dev Traces
### Instrumenting code with dev traces
You can add dev traces to your own code for more detailed instrumentation. This
is useful for debugging and understanding the flow of execution.

View File

@@ -1,4 +1,4 @@
# Package Overview
# Package overview
This monorepo contains two main packages: `@google/gemini-cli` and
`@google/gemini-cli-core`.
@@ -25,7 +25,7 @@ Node.js package with its own dependencies. This allows it to be used as a
standalone package in other projects, if needed. All transpiled js code in the
`dist` folder is included in the package.
## NPM Workspaces
## NPM workspaces
This project uses
[NPM Workspaces](https://docs.npmjs.com/cli/v10/using-npm/workspaces) to manage
@@ -33,7 +33,7 @@ the packages within this monorepo. This simplifies development by allowing us to
manage dependencies and run scripts across multiple packages from the root of
the project.
### How it Works
### How it works
The root `package.json` file defines the workspaces for this project:
@@ -46,17 +46,17 @@ The root `package.json` file defines the workspaces for this project:
This tells NPM that any folder inside the `packages` directory is a separate
package that should be managed as part of the workspace.
### Benefits of Workspaces
### Benefits of workspaces
- **Simplified Dependency Management**: Running `npm install` from the root of
- **Simplified dependency management**: Running `npm install` from the root of
the project will install all dependencies for all packages in the workspace
and link them together. This means you don't need to run `npm install` in each
package's directory.
- **Automatic Linking**: Packages within the workspace can depend on each other.
- **Automatic linking**: Packages within the workspace can depend on each other.
When you run `npm install`, NPM will automatically create symlinks between the
packages. This means that when you make changes to one package, the changes
are immediately available to other packages that depend on it.
- **Simplified Script Execution**: You can run scripts in any package from the
- **Simplified script execution**: You can run scripts in any package from the
root of the project using the `--workspace` flag. For example, to run the
`build` script in the `cli` package, you can run
`npm run build --workspace @google/gemini-cli`.

View File

@@ -1,4 +1,4 @@
# Gemini CLI: Quotas and Pricing
# Gemini CLI: Quotas and pricing
Gemini CLI offers a generous free tier that covers the use cases for many
individual developers. For enterprise / professional usage, or if you need
@@ -24,7 +24,7 @@ Generally, there are three categories to choose from:
- Pay-As-You-Go: The most flexible option for professional use, long-running
tasks, or when you need full control over your usage.
## Free Usage
## Free usage
Your journey begins with a generous free tier, perfect for experimentation and
light use.
@@ -44,7 +44,7 @@ Assist for individuals. This includes:
Learn more at
[Gemini Code Assist for Individuals Limits](https://developers.google.com/gemini-code-assist/resources/quotas#quotas-for-agent-mode-gemini-cli).
### Log in with Gemini API Key (Unpaid)
### Log in with Gemini API Key (unpaid)
If you are using a Gemini API key, you can also benefit from a free tier. This
includes:
@@ -101,7 +101,7 @@ Gemini CLI by upgrading to one of the following subscriptions:
[Learn more about Gemini Code Assist Standard and Enterprise license limits](https://developers.google.com/gemini-code-assist/resources/quotas#quotas-for-agent-mode-gemini-cli).
## Pay As You Go
## Pay as you go
If you hit your daily request limits or exhaust your Gemini Pro quota even after
upgrading, the most flexible solution is to switch to a pay-as-you-go model,
@@ -131,7 +131,7 @@ Its important to highlight that when using an API key, you pay per token/call
This can be more expensive for many small calls with few tokens, but it's the
only way to ensure your workflow isn't interrupted by quota limits.
## Gemini for Workspace plans
## Gemini for Workspace plans
These plans currently apply only to the use of Gemini web-based products
provided by Google-based experiences (for example, the Gemini web app or the
@@ -139,7 +139,7 @@ Flow video editor). These plans do not apply to the API usage which powers the
Gemini CLI. Supporting these plans is under active consideration for future
support.
## Tips to Avoid High Costs
## Tips to avoid high costs
When using a pay-as-you-go API key, be mindful of your usage to avoid unexpected
costs.

View File

@@ -1,21 +1,21 @@
# Release Confidence Strategy
# Release confidence strategy
This document outlines the strategy for gaining confidence in every release of
the Gemini CLI. It serves as a checklist and quality gate for the release
manager to
ensure we are shipping a high-quality product.
## The Goal
## The goal
To answer the question, "Is this release _truly_ ready for our users?" with a
high degree of confidence, based on a holistic evaluation of automated signals,
manual verification, and data.
## Level 1: Automated Gates (Must Pass)
## Level 1: Automated gates (must pass)
These are the baseline requirements. If any of these fail, the release is a
no-go.
### 1. CI/CD Health
### 1. CI/CD health
All workflows in `.github/workflows/ci.yml` must pass on the `main` branch (for
nightly) or the release branch (for preview/stable).
@@ -31,7 +31,7 @@ nightly) or the release branch (for preview/stable).
pass.
- **Build:** The project must build and bundle successfully.
### 2. End-to-End (E2E) Tests
### 2. End-to-end (E2E) tests
All workflows in `.github/workflows/e2e.yml` must pass.
@@ -39,7 +39,7 @@ All workflows in `.github/workflows/e2e.yml` must pass.
- **Sandboxing:** Tests must pass with both `sandbox:none` and `sandbox:docker`
on Linux.
### 3. Post-Deployment Smoke Tests
### 3. Post-deployment smoke tests
After a release is published to npm, the `smoke-test.yml` workflow runs. This
must pass to confirm the package is installable and the binary is executable.
@@ -48,11 +48,11 @@ must pass to confirm the package is installable and the binary is executable.
correct version without error.
- **Platform:** Currently runs on `ubuntu-latest`.
## Level 2: Manual Verification & Dogfooding
## Level 2: Manual verification and dogfooding
Automated tests cannot catch everything, especially UX issues.
### 1. Dogfooding via `preview` Tag
### 1. Dogfooding via `preview` tag
The weekly release cadence promotes code from `main` -> `nightly` -> `preview`
-> `stable`.
@@ -66,7 +66,7 @@ The weekly release cadence promotes code from `main` -> `nightly` -> `preview`
- **Goal:** To catch regressions and UX issues in day-to-day usage before they
reach the broad user base.
### 2. Critical User Journey (CUJ) Checklist
### 2. Critical user journey (CUJ) checklist
Before promoting a `preview` release to `stable`, a release manager must
manually run through this checklist.
@@ -84,15 +84,15 @@ manually run through this checklist.
- [ ] API Key
- [ ] Vertex AI
- **Basic Prompting:**
- **Basic prompting:**
- [ ] Run `gemini "Tell me a joke"` and verify a sensible response.
- [ ] Run in interactive mode: `gemini`. Ask a follow-up question to test
context.
- **Piped Input:**
- **Piped input:**
- [ ] Run `echo "Summarize this" | gemini` and verify it processes stdin.
- **Context Management:**
- **Context management:**
- [ ] In interactive mode, use `@file` to add a local file to context. Ask a
question about it.
@@ -100,20 +100,20 @@ manually run through this checklist.
- [ ] In interactive mode run `/settings` and make modifications
- [ ] Validate that setting is changed
- **Function Calling:**
- **Function calling:**
- [ ] In interactive mode, ask gemini to "create a file named hello.md with
the content 'hello world'" and verify the file is created correctly.
If any of these CUJs fail, the release is a no-go until a patch is applied to
the `preview` channel.
### 3. Pre-Launch Bug Bash (Tier 1 & 2 Launches)
### 3. Pre-launch bug bash (tier 1 and 2 launches)
For high-impact releases, an organized bug bash is required to ensure a higher
level of quality and to catch issues across a wider range of environments and
use cases.
**Definition of Tiers:**
**Definition of tiers:**
- **Tier 1:** Industry-Moving News 🚀
- **Tier 2:** Important News for Our Users 📣
@@ -125,7 +125,7 @@ use cases.
A bug bash must be scheduled at least **72 hours in advance** of any Tier 1 or
Tier 2 launch.
**Rule of Thumb:**
**Rule of thumb:**
A bug bash should be considered for any release that involves:
@@ -134,22 +134,22 @@ A bug bash should be considered for any release that involves:
- Media relations or press outreach
- A "Turbo" launch event
## Level 3: Telemetry & Data Review
## Level 3: Telemetry and data review
### Dashboard Health
### Dashboard health
- [ ] Go to `go/gemini-cli-dash`.
- [ ] Navigate to the "Tool Call" tab.
- [ ] Validate that there are no spikes in errors for the release you would like
to promote.
### Model Evaluation
### Model evaluation
- [ ] Navigate to `go/gemini-cli-offline-evals-dash`.
- [ ] Make sure that the recurring eval run for the release you want to promote
  is consistent with average eval runs.
## The "Go/No-Go" Decision
## The "go/no-go" decision
Before triggering the `Release: Promote` workflow to move `preview` to `stable`:

View File

@@ -1,4 +1,4 @@
# Gemini CLI Releases
# Gemini CLI releases
## `dev` vs `prod` environment
@@ -22,7 +22,7 @@ More information can be found about these systems in the
| Core | @google/gemini-cli-core | @google-gemini/gemini-cli-core |
| A2A Server | @google/gemini-cli-a2a-server | @google-gemini/gemini-cli-a2a-server |
## Release Cadence and Tags
## Release cadence and tags
We will follow https://semver.org/ as closely as possible but will call out when
or if we have to deviate from it. Our weekly releases will be minor version
@@ -66,24 +66,24 @@ npm install -g @google/gemini-cli@latest
npm install -g @google/gemini-cli@nightly
```
## Weekly Release Promotion
## Weekly release promotion
Each Tuesday, the on-call engineer will trigger the "Promote Release" workflow.
This single action automates the entire weekly release process:
1. **Promotes Preview to Stable:** The workflow identifies the latest `preview`
1. **Promotes preview to stable:** The workflow identifies the latest `preview`
release and promotes it to `stable`. This becomes the new `latest` version
on npm.
2. **Promotes Nightly to Preview:** The latest `nightly` release is then
2. **Promotes nightly to preview:** The latest `nightly` release is then
promoted to become the new `preview` version.
3. **Prepares for next Nightly:** A pull request is automatically created and
3. **Prepares for next nightly:** A pull request is automatically created and
merged to bump the version in `main` in preparation for the next nightly
release.
This process ensures a consistent and reliable release cadence with minimal
manual intervention.
### Source of Truth for Versioning
### Source of truth for versioning
To ensure the highest reliability, the release promotion process uses the **NPM
registry as the single source of truth** for determining the current version of
@@ -92,16 +92,16 @@ each release channel (`stable`, `preview`, and `nightly`).
1. **Fetch from NPM:** The workflow begins by querying NPM's `dist-tags`
(`latest`, `preview`, `nightly`) to get the exact version strings for the
packages currently available to users.
2. **Cross-Check for Integrity:** For each version retrieved from NPM, the
2. **Cross-check for integrity:** For each version retrieved from NPM, the
workflow performs a critical integrity check:
- It verifies that a corresponding **git tag** exists in the repository.
- It verifies that a corresponding **GitHub Release** has been created.
3. **Halt on Discrepancy:** If either the git tag or the GitHub Release is
- It verifies that a corresponding **GitHub release** has been created.
3. **Halt on discrepancy:** If either the git tag or the GitHub release is
missing for a version listed on NPM, the workflow will immediately fail.
This strict check prevents promotions from a broken or incomplete previous
release and alerts the on-call engineer to a release state inconsistency
that must be manually resolved.
4. **Calculate Next Version:** Only after these checks pass does the workflow
4. **Calculate next version:** Only after these checks pass does the workflow
proceed to calculate the next semantic version based on the trusted version
numbers retrieved from NPM.
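
As a hedged sketch of the version calculation in step 4 (the function name and
the minor-bump rule here are illustrative, not the actual workflow code), the
next weekly version can be derived from a trusted version string like so:

```bash
# Illustrative only: compute the next minor version from a version string
# that, in the real workflow, is retrieved from NPM dist-tags
# (e.g. `npm view @google/gemini-cli dist-tags --json`).
next_minor() {
  major=${1%%.*}      # text before the first dot
  rest=${1#*.}        # text after the first dot
  minor=${rest%%.*}   # text before the next dot
  echo "${major}.$((minor + 1)).0"
}

next_minor "0.4.1"   # prints 0.5.0
```

This mirrors the weekly cadence, where releases are minor version bumps.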
@@ -109,14 +109,14 @@ This NPM-first approach, backed by integrity checks, makes the release process
highly robust and prevents the kinds of versioning discrepancies that can arise
from relying solely on git history or API outputs.
## Manual Releases
## Manual releases
For situations requiring a release outside of the regular nightly and weekly
promotion schedule, and NOT already covered by patching process, you can use the
`Release: Manual` workflow. This workflow provides a direct way to publish a
specific version from any branch, tag, or commit SHA.
### How to Create a Manual Release
### How to create a manual release
1. Navigate to the **Actions** tab of the repository.
2. Select the **Release: Manual** workflow from the list.
@@ -144,7 +144,7 @@ The workflow will then proceed to test (if not skipped), build, and publish the
release. If the workflow fails during a non-dry run, it will automatically
create a GitHub issue with the failure details.
## Rollback/Rollforward
## Rollback/rollforward
In the event that a release has a critical regression, you can quickly roll back
to a previous stable version or roll forward to a new patch by changing the npm
@@ -154,7 +154,7 @@ way to do this.
This is the preferred method for both rollbacks and rollforwards, as it does not
require a full release cycle.
### How to Change a Release Tag
### How to change a release tag
1. Navigate to the **Actions** tab of the repository.
2. Select the **Release: Change Tags** workflow from the list.
@@ -181,13 +181,13 @@ channel to the specified version.
If a critical bug that is already fixed on `main` needs to be patched on a
`stable` or `preview` release, the process is now highly automated.
### How to Patch
### How to patch
#### 1. Create the Patch Pull Request
#### 1. Create the patch pull request
There are two ways to create a patch pull request:
**Option A: From a GitHub Comment (Recommended)**
**Option A: From a GitHub comment (recommended)**
After a pull request containing the fix has been merged, a maintainer can add a
comment on that same PR with the following format:
@@ -212,7 +212,7 @@ The `Release: Patch from Comment` workflow will automatically find the merge
commit SHA and trigger the `Release: Patch (1) Create PR` workflow. If the PR is
not yet merged, it will post a comment indicating the failure.
**Option B: Manually Triggering the Workflow**
**Option B: Manually triggering the workflow**
Navigate to the **Actions** tab and run the **Release: Patch (1) Create PR**
workflow.
@@ -229,17 +229,17 @@ This workflow will automatically:
4. Cherry-pick your specified commit into the hotfix branch.
5. Create a pull request from the hotfix branch back to the release branch.
#### 2. Review and Merge
#### 2. Review and merge
Review the automatically created pull request(s) to ensure the cherry-pick was
successful and the changes are correct. Once approved, merge the pull request.
**Security Note:** The `release/*` branches are protected by branch protection
**Security note:** The `release/*` branches are protected by branch protection
rules. A pull request to one of these branches requires at least one review from
a code owner before it can be merged. This ensures that no unauthorized code is
released.
#### 2.5. Adding Multiple Commits to a Hotfix (Advanced)
#### 2.5. Adding multiple commits to a hotfix (advanced)
If you need to include multiple fixes in a single patch release, you can add
additional commits to the hotfix branch after the initial patch PR has been
@@ -280,7 +280,7 @@ This approach allows you to group related fixes into a single patch release
while maintaining full control over what gets included and how conflicts are
resolved.
#### 3. Automatic Release
#### 3. Automatic release
Upon merging the pull request, the `Release: Patch (2) Trigger` workflow is
automatically triggered. It will then start the `Release: Patch (3) Release`
@@ -293,21 +293,21 @@ workflow, which will:
This fully automated process ensures that patches are created and released
consistently and reliably.
#### Troubleshooting: Older Branch Workflows
#### Troubleshooting: Older branch workflows
**Issue**: If the patch trigger workflow fails with errors like "Resource not
accessible by integration" or references to non-existent workflow files (e.g.,
`patch-release.yml`), this indicates the hotfix branch contains an outdated
version of the workflow files.
**Root Cause**: When a PR is merged, GitHub Actions runs the workflow definition
**Root cause**: When a PR is merged, GitHub Actions runs the workflow definition
from the **source branch** (the hotfix branch), not from the target branch (the
release branch). If the hotfix branch was created from an older release branch
that predates workflow improvements, it will use the old workflow logic.
**Solutions**:
**Option 1: Manual Trigger (Quick Fix)** Manually trigger the updated workflow
**Option 1: Manual trigger (quick fix)** Manually trigger the updated workflow
from the branch with the latest workflow code:
```bash
@@ -337,7 +337,7 @@ gh workflow run release-patch-2-trigger.yml --ref main \
the latest workflow improvements (usually `main`, but could be a feature branch
if testing updates).
**Option 2: Update the Hotfix Branch** Merge the latest main branch into your
**Option 2: Update the hotfix branch** Merge the latest main branch into your
hotfix branch to get the updated workflows:
```bash
@@ -348,7 +348,7 @@ git push
Then close and reopen the PR to retrigger the workflow with the updated version.
**Option 3: Direct Release Trigger** Skip the trigger workflow entirely and
**Option 3: Direct release trigger** Skip the trigger workflow entirely and
directly run the release workflow:
```bash
@@ -367,7 +367,7 @@ We also run a Google cloud build called
docker to match your release. This will also be moved to GH and combined with
the main release file once service account permissions are sorted out.
## Release Validation
## Release validation
After pushing a new release, smoke testing should be performed to ensure that
the packages are working as expected. This can be done by installing the packages
@@ -384,7 +384,7 @@ correctly.
is recommended to ensure that the packages are working as expected. We'll
codify this more in the future.
## Local Testing and Validation: Changes to the Packaging and Publishing Process
## Local testing and validation: Changes to the packaging and publishing process
If you need to test the release process without actually publishing to NPM or
creating a public GitHub release, you can trigger the workflow manually from the
@@ -428,7 +428,7 @@ tarballs will be created in the root of each package's directory (e.g.,
By performing a dry run, you can be confident that your changes to the packaging
process are correct and that the packages will be published successfully.
## Release Deep Dive
## Release deep dive
The release process creates two distinct types of artifacts for different
distribution channels: standard packages for the NPM registry and a single,
@@ -436,14 +436,14 @@ self-contained executable for GitHub Releases.
Here are the key stages:
**Stage 1: Pre-Release Sanity Checks and Versioning**
**Stage 1: Pre-release sanity checks and versioning**
- **What happens:** Before any files are moved, the process ensures the project
is in a good state. This involves running tests, linting, and type-checking
(`npm run preflight`). The version number in the root `package.json` and
`packages/cli/package.json` is updated to the new release version.
**Stage 2: Building the Source Code for NPM**
**Stage 2: Building the source code for NPM**
- **What happens:** The TypeScript source code in `packages/core/src` and
`packages/cli/src` is compiled into standard JavaScript.
@@ -454,7 +454,7 @@ Here are the key stages:
into plain JavaScript that can be run by Node.js. The `core` package is built
first as the `cli` package depends on it.
**Stage 3: Publishing Standard Packages to NPM**
**Stage 3: Publishing standard packages to NPM**
- **What happens:** The `npm publish` command is run for the
`@google/gemini-cli-core` and `@google/gemini-cli` packages.
@@ -463,12 +463,12 @@ Here are the key stages:
`npm` will handle installing the `@google/gemini-cli-core` dependency
automatically. The code in these packages is not bundled into a single file.
**Stage 4: Assembling and Creating the GitHub Release Asset**
**Stage 4: Assembling and creating the GitHub release asset**
This stage happens _after_ the NPM publish and creates the single-file
executable that enables `npx` usage directly from the GitHub repository.
1. **The JavaScript Bundle is Created:**
1. **The JavaScript bundle is created:**
- **What happens:** The built JavaScript from both `packages/core/dist` and
`packages/cli/dist`, along with all third-party JavaScript dependencies,
are bundled by `esbuild` into a single, executable JavaScript file (e.g.,
@@ -479,7 +479,7 @@ executable that enables `npx` usage directly from the GitHub repository.
run the CLI without a full `npm install`, as all dependencies (including
the `core` package) are included directly.
2. **The `bundle` Directory is Assembled:**
2. **The `bundle` directory is assembled:**
- **What happens:** A temporary `bundle` folder is created at the project
root. The single `gemini.js` executable is placed inside it, along with
other essential files.
@@ -491,7 +491,7 @@ executable that enables `npx` usage directly from the GitHub repository.
- **Why:** This creates a clean, self-contained directory with everything
needed to run the CLI and understand its license and usage.
3. **The GitHub Release is Created:**
3. **The GitHub release is created:**
- **What happens:** The contents of the `bundle` directory, including the
`gemini.js` executable, are attached as assets to a new GitHub Release.
- **Why:** This makes the single-file version of the CLI available for
@@ -499,12 +499,12 @@ executable that enables `npx` usage directly from the GitHub repository.
`npx https://github.com/google-gemini/gemini-cli` command, which downloads
and runs this specific bundled asset.
**Summary of Artifacts**
**Summary of artifacts**
- **NPM:** Publishes standard, un-bundled Node.js packages. The primary artifact
is the code in `packages/cli/dist`, which depends on
`@google/gemini-cli-core`.
- **GitHub Release:** Publishes a single, bundled `gemini.js` file that contains
- **GitHub release:** Publishes a single, bundled `gemini.js` file that contains
all dependencies, for easy execution via `npx`.
This dual-artifact process ensures that both traditional `npm` users and those

View File

@@ -7,20 +7,20 @@
"slug": "docs"
},
{
"label": "Architecture Overview",
"label": "Architecture overview",
"slug": "docs/architecture"
},
{
"label": "Contribution Guide",
"label": "Contribution guide",
"slug": "docs/contributing"
}
]
},
{
"label": "Get Started",
"label": "Get started",
"items": [
{
"label": "Gemini CLI Quickstart",
"label": "Gemini CLI quickstart",
"slug": "docs/get-started"
},
{
@@ -61,7 +61,7 @@
"slug": "docs/cli/checkpointing"
},
{
"label": "Custom Commands",
"label": "Custom commands",
"slug": "docs/cli/custom-commands"
},
{
@@ -69,15 +69,15 @@
"slug": "docs/cli/enterprise"
},
{
"label": "Headless Mode",
"label": "Headless mode",
"slug": "docs/cli/headless"
},
{
"label": "Keyboard Shortcuts",
"label": "Keyboard shortcuts",
"slug": "docs/cli/keyboard-shortcuts"
},
{
"label": "Model Selection",
"label": "Model selection",
"slug": "docs/cli/model"
},
{
@@ -101,7 +101,7 @@
"slug": "docs/cli/themes"
},
{
"label": "Token Caching",
"label": "Token caching",
"slug": "docs/cli/token-caching"
},
{
@@ -147,7 +147,7 @@
"slug": "docs/tools"
},
{
"label": "File System",
"label": "File system",
"slug": "docs/tools/file-system"
},
{
@@ -155,11 +155,11 @@
"slug": "docs/tools/shell"
},
{
"label": "Web Fetch",
"label": "Web fetch",
"slug": "docs/tools/web-fetch"
},
{
"label": "Web Search",
"label": "Web search",
"slug": "docs/tools/web-search"
},
{
@@ -171,7 +171,7 @@
"slug": "docs/tools/todos"
},
{
"label": "MCP Servers",
"label": "MCP servers",
"slug": "docs/tools/mcp-server"
}
]
@@ -184,24 +184,24 @@
"slug": "docs/extensions"
},
{
"label": "Get Started with Extensions",
"label": "Get started with extensions",
"slug": "docs/extensions/getting-started-extensions"
},
{
"label": "Extension Releasing",
"label": "Extension releasing",
"slug": "docs/extensions/extension-releasing"
}
]
},
{
"label": "IDE Integration",
"label": "IDE integration",
"items": [
{
"label": "Introduction",
"slug": "docs/ide-integration"
},
{
"label": "IDE Companion Spec",
"label": "IDE companion spec",
"slug": "docs/ide-integration/ide-companion-spec"
}
]
@@ -222,11 +222,11 @@
"slug": "docs/changelogs"
},
{
"label": "Integration Tests",
"label": "Integration tests",
"slug": "docs/integration-tests"
},
{
"label": "Issue and PR Automation",
"label": "Issue and PR automation",
"slug": "docs/issue-and-pr-automation"
}
]
@@ -243,11 +243,11 @@
"slug": "docs/troubleshooting"
},
{
"label": "Quota and Pricing",
"label": "Quota and pricing",
"slug": "docs/quota-and-pricing"
},
{
"label": "Terms of Service",
"label": "Terms of service",
"slug": "docs/tos-privacy"
}
]

View File

@@ -184,7 +184,7 @@ context around the `old_string` to ensure it modifies the correct location.
- If `old_string` is provided, it reads the `file_path` and attempts to find
exactly one occurrence of `old_string`.
- If one occurrence is found, it replaces it with `new_string`.
- **Enhanced Reliability (Multi-Stage Edit Correction):** To significantly
- **Enhanced reliability (multi-stage edit correction):** To significantly
improve the success rate of edits, especially when the model-provided
`old_string` might not be perfectly precise, the tool incorporates a
multi-stage edit correction mechanism.

View File

@@ -23,7 +23,7 @@ With an MCP server, you can extend the Gemini CLI's capabilities to perform
actions beyond its built-in features, such as interacting with databases, APIs,
custom scripts, or specialized workflows.
## Core Integration Architecture
## Core integration architecture
The Gemini CLI integrates with MCP servers through a sophisticated discovery and
execution system built into the core package (`packages/core/src/tools/`):
@@ -41,7 +41,7 @@ The discovery process is orchestrated by `discoverMcpTools()`, which:
API
5. **Registers tools** in the global tool registry with conflict resolution
### Execution Layer (`mcp-tool.ts`)
### Execution layer (`mcp-tool.ts`)
Each discovered MCP tool is wrapped in a `DiscoveredMCPTool` instance that:
@@ -51,7 +51,7 @@ Each discovered MCP tool is wrapped in a `DiscoveredMCPTool` instance that:
- **Processes responses** for both the LLM context and user display
- **Maintains connection state** and handles timeouts
### Transport Mechanisms
### Transport mechanisms
The Gemini CLI supports three MCP transport types:
@@ -72,7 +72,7 @@ through the top-level `mcpServers` object for specific server definitions, and
through the `mcp` object for global settings that control server discovery and
execution.
#### Global MCP Settings (`mcp`)
#### Global MCP settings (`mcp`)
The `mcp` object in your `settings.json` allows you to define global rules for
all MCP servers.
@@ -95,12 +95,12 @@ all MCP servers.
}
```
#### Server-Specific Configuration (`mcpServers`)
#### Server-specific configuration (`mcpServers`)
The `mcpServers` object is where you define each individual MCP server you want
the CLI to connect to.
### Configuration Structure
### Configuration structure
Add an `mcpServers` object to your `settings.json` file:
@@ -121,7 +121,7 @@ Add an `mcpServers` object to your `settings.json` file:
}
```
### Configuration Properties
### Configuration properties
Each server configuration supports the following properties:
@@ -157,13 +157,13 @@ Each server configuration supports the following properties:
Service Account to impersonate. Used with
`authProviderType: 'service_account_impersonation'`.
### OAuth Support for Remote MCP Servers
### OAuth support for remote MCP servers
The Gemini CLI supports OAuth 2.0 authentication for remote MCP servers using
SSE or HTTP transports. This enables secure access to MCP servers that require
authentication.
#### Automatic OAuth Discovery
#### Automatic OAuth discovery
For servers that support OAuth discovery, you can omit the OAuth configuration
and let the CLI discover it automatically:
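
A minimal server entry of this shape (the server name and URL below are
placeholders, not a real endpoint) simply omits any OAuth configuration:

```json
{
  "mcpServers": {
    "myOAuthServer": {
      "httpUrl": "https://example.com/mcp"
    }
  }
}
```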
@@ -185,7 +185,7 @@ The CLI will automatically:
- Perform dynamic client registration if supported
- Handle the OAuth flow and token management
#### Authentication Flow
#### Authentication flow
When connecting to an OAuth-enabled server:
@@ -196,7 +196,7 @@ When connecting to an OAuth-enabled server:
5. **Tokens are stored** securely for future use
6. **Connection retry** succeeds with valid tokens
#### Browser Redirect Requirements
#### Browser redirect requirements
**Important:** OAuth authentication requires that your local machine can:
@@ -209,7 +209,7 @@ This feature will not work in:
- Remote SSH sessions without X11 forwarding
- Containerized environments without browser support
#### Managing OAuth Authentication
#### Managing OAuth authentication
Use the `/mcp auth` command to manage OAuth authentication:
@@ -224,7 +224,7 @@ Use the `/mcp auth` command to manage OAuth authentication:
/mcp auth serverName
```
#### OAuth Configuration Properties
#### OAuth configuration properties
- **`enabled`** (boolean): Enable OAuth for this server
- **`clientId`** (string): OAuth client identifier (optional with dynamic
@@ -239,7 +239,7 @@ Use the `/mcp auth` command to manage OAuth authentication:
- **`tokenParamName`** (string): Query parameter name for tokens in SSE URLs
- **`audiences`** (string[]): Audiences the token is valid for
#### Token Management
#### Token management
OAuth tokens are automatically:
@@ -248,7 +248,7 @@ OAuth tokens are automatically:
- **Validated** before each connection attempt
- **Cleaned up** when invalid or expired
#### Authentication Provider Type
#### Authentication provider type
You can specify the authentication provider type using the `authProviderType`
property:
@@ -265,7 +265,7 @@ property:
accessing IAP-protected services (this was specifically designed for Cloud
Run services).
#### Google Credentials
#### Google credentials
```json
{
@@ -281,7 +281,7 @@ property:
}
```
#### Service Account Impersonation
#### Service account impersonation
To authenticate with a server using Service Account Impersonation, you must set
the `authProviderType` to `service_account_impersonation` and provide the
@@ -296,7 +296,7 @@ The CLI will use your local Application Default Credentials (ADC) to generate an
OIDC ID token for the specified service account and audience. This token will
then be used to authenticate with the MCP server.
#### Setup Instructions
#### Setup instructions
1. **[Create](https://cloud.google.com/iap/docs/oauth-client-creation) or use an
existing OAuth 2.0 client ID.** To use an existing OAuth 2.0 client ID,
@@ -318,9 +318,9 @@ then be used to authenticate with the MCP server.
6. **[Enable](https://console.cloud.google.com/apis/library/iamcredentials.googleapis.com)
the IAM Credentials API** for your project.
### Example Configurations
### Example configurations
#### Python MCP Server (Stdio)
#### Python MCP server (stdio)
```json
{
@@ -339,7 +339,7 @@ then be used to authenticate with the MCP server.
}
```
#### Node.js MCP Server (Stdio)
#### Node.js MCP server (stdio)
```json
{
@@ -354,7 +354,7 @@ then be used to authenticate with the MCP server.
}
```
#### Docker-based MCP Server
#### Docker-based MCP server
```json
{
@@ -379,7 +379,7 @@ then be used to authenticate with the MCP server.
}
```
#### HTTP-based MCP Server
#### HTTP-based MCP server
```json
{
@@ -392,7 +392,7 @@ then be used to authenticate with the MCP server.
}
```
#### HTTP-based MCP Server with Custom Headers
#### HTTP-based MCP server with custom headers
```json
{
@@ -410,7 +410,7 @@ then be used to authenticate with the MCP server.
}
```
#### MCP Server with Tool Filtering
#### MCP server with tool filtering
```json
{
@@ -426,7 +426,7 @@ then be used to authenticate with the MCP server.
}
```
### SSE MCP Server with SA Impersonation
#### SSE MCP server with SA impersonation
```json
{
@@ -441,12 +441,12 @@ then be used to authenticate with the MCP server.
}
```
## Discovery Process Deep Dive
## Discovery process deep dive
When the Gemini CLI starts, it performs MCP server discovery through the
following detailed process:
### 1. Server Iteration and Connection
### 1. Server iteration and connection
For each configured server in `mcpServers`:
@@ -460,7 +460,7 @@ For each configured server in `mcpServers`:
4. **Error handling:** Connection failures are logged and the server status is
set to `DISCONNECTED`
### 2. Tool Discovery
### 2. Tool discovery
Upon successful connection:
@@ -475,7 +475,7 @@ Upon successful connection:
- Names longer than 63 characters are truncated with middle replacement
(`___`)
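
The sanitization rules above can be sketched as follows; the exact character
class and the 30/30 truncation split are assumptions for illustration, not the
CLI's actual implementation:

```bash
# Illustrative sketch of MCP tool-name sanitization: invalid characters
# become underscores, and names over 63 characters are truncated with a
# `___` marker in the middle.
sanitize_tool_name() {
  name=$(printf '%s' "$1" | sed 's/[^A-Za-z0-9_.-]/_/g')
  if [ "${#name}" -gt 63 ]; then
    head=$(printf '%s' "$name" | cut -c1-30)
    tail=$(printf '%s' "$name" | awk '{ print substr($0, length($0) - 29) }')
    name="${head}___${tail}"
  fi
  printf '%s\n' "$name"
}

sanitize_tool_name 'my tool!'   # prints my_tool_
```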
### 3. Conflict Resolution
### 3. Conflict resolution
When multiple servers expose tools with the same name:
@@ -486,7 +486,7 @@ When multiple servers expose tools with the same name:
3. **Registry tracking:** The tool registry maintains mappings between server
names and their tools
### 4. Schema Processing
### 4. Schema processing
Tool parameter schemas undergo sanitization for Gemini API compatibility:
@@ -496,7 +496,7 @@ Tool parameter schemas undergo sanitization for Gemini API compatibility:
compatibility)
- **Recursive processing** applies to nested schemas
### 5. Connection Management
### 5. Connection management
After discovery:
@@ -507,23 +507,23 @@ After discovery:
- **Status updates:** Final server statuses are set to `CONNECTED` or
`DISCONNECTED`
## Tool Execution Flow
## Tool execution flow
When the Gemini model decides to use an MCP tool, the following execution flow
occurs:
### 1. Tool Invocation
### 1. Tool invocation
The model generates a `FunctionCall` with:
- **Tool name:** The registered name (potentially prefixed)
- **Arguments:** JSON object matching the tool's parameter schema
### 2. Confirmation Process
### 2. Confirmation process
Each `DiscoveredMCPTool` implements sophisticated confirmation logic:
#### Trust-based Bypass
#### Trust-based bypass
```typescript
if (this.trust) {
@@ -531,14 +531,14 @@ if (this.trust) {
}
```
#### Dynamic Allow-listing
#### Dynamic allow-listing
The system maintains internal allow-lists for:
- **Server-level:** `serverName` → All tools from this server are trusted
- **Tool-level:** `serverName.toolName` → This specific tool is trusted
#### User Choice Handling
#### User choice handling
When confirmation is required, users can choose:
@@ -566,7 +566,7 @@ Upon confirmation (or trust bypass):
3. **Response processing:** Results are formatted for both LLM context and user
display
### 4. Response Handling
### 4. Response handling
The execution result contains:
@@ -576,7 +576,7 @@ The execution result contains:
## How to interact with your MCP server
### Using the `/mcp` Command
### Using the `/mcp` command
The `/mcp` command provides comprehensive information about your MCP server
setup:
@@ -593,7 +593,7 @@ This displays:
- **Available tools:** List of tools from each server with descriptions
- **Discovery state:** Overall discovery process status
### Example `/mcp` Output
### Example `/mcp` output
```
MCP Servers Status:
@@ -615,7 +615,7 @@ MCP Servers Status:
Discovery State: COMPLETED
```
### Tool Usage
### Tool usage
Once discovered, MCP tools are available to the Gemini model like built-in
tools. The model will automatically:
@@ -625,27 +625,27 @@ tools. The model will automatically:
3. **Execute tools** with proper parameters
4. **Display results** in a user-friendly format
## Status Monitoring and Troubleshooting
## Status monitoring and troubleshooting
### Connection States
### Connection states
The MCP integration tracks several states:
#### Server Status (`MCPServerStatus`)
#### Server status (`MCPServerStatus`)
- **`DISCONNECTED`:** Server is not connected or has errors
- **`CONNECTING`:** Connection attempt in progress
- **`CONNECTED`:** Server is connected and ready
#### Discovery State (`MCPDiscoveryState`)
#### Discovery state (`MCPDiscoveryState`)
- **`NOT_STARTED`:** Discovery hasn't begun
- **`IN_PROGRESS`:** Currently discovering servers
- **`COMPLETED`:** Discovery finished (with or without errors)
### Common Issues and Solutions
### Common issues and solutions
#### Server Won't Connect
#### Server won't connect
**Symptoms:** Server shows `DISCONNECTED` status
@@ -657,7 +657,7 @@ The MCP integration tracks several states:
4. **Review logs:** Look for error messages in the CLI output
5. **Verify permissions:** Ensure the CLI can execute the server command
#### No Tools Discovered
#### No tools discovered
**Symptoms:** Server connects but no tools are available
@@ -669,7 +669,7 @@ The MCP integration tracks several states:
3. **Review server logs:** Check stderr output for server-side errors
4. **Test tool listing:** Manually test your server's tool discovery endpoint
#### Tools Not Executing
#### Tools not executing
**Symptoms:** Tools are discovered but fail during execution
@@ -680,7 +680,7 @@ The MCP integration tracks several states:
3. **Error handling:** Check if your tool is throwing unhandled exceptions
4. **Timeout issues:** Consider increasing the `timeout` setting
#### Sandbox Compatibility
#### Sandbox compatibility
**Symptoms:** MCP servers fail when sandboxing is enabled
@@ -693,7 +693,7 @@ The MCP integration tracks several states:
4. **Environment variables:** Verify required environment variables are passed
through
### Debugging Tips
### Debugging tips
1. **Enable debug mode:** Run the CLI with `--debug` for verbose output
2. **Check stderr:** MCP server stderr is captured and logged (INFO messages
@@ -703,9 +703,9 @@ The MCP integration tracks several states:
functionality
5. **Use `/mcp` frequently:** Monitor server status during development
## Important Notes
## Important notes
### Security Considerations
### Security considerations
- **Trust settings:** The `trust` option bypasses all confirmation dialogs. Use
cautiously and only for servers you completely control
@@ -716,7 +716,7 @@ The MCP integration tracks several states:
- **Private data:** Using broadly scoped personal access tokens can lead to
information leakage between repositories
### Performance and Resource Management
### Performance and resource management
- **Connection persistence:** The CLI maintains persistent connections to
servers that successfully register tools
@@ -727,7 +727,7 @@ The MCP integration tracks several states:
- **Resource monitoring:** MCP servers run as separate processes and consume
system resources
### Schema Compatibility
### Schema compatibility
- **Property stripping:** The system automatically removes certain schema
properties (`$schema`, `additionalProperties`) for Gemini API compatibility
@@ -740,7 +740,7 @@ This comprehensive integration makes MCP servers a powerful way to extend the
Gemini CLI's capabilities while maintaining security, reliability, and ease of
use.
## Returning Rich Content from Tools
## Returning rich content from tools
MCP tools are not limited to returning simple text. You can return rich,
multi-part content, including text, images, audio, and other binary data in a
@@ -751,7 +751,7 @@ All data returned from the tool is processed and sent to the model as context
for its next generation, enabling it to reason about or summarize the provided
information.
### How It Works
### How it works
To return rich content, your tool's response must adhere to the MCP
specification for a
@@ -769,7 +769,7 @@ supported block types include:
- `resource` (embedded content)
- `resource_link`
### Example: Returning Text and an Image
### Example: Returning text and an image
Here is an example of a valid JSON response from an MCP tool that returns both a
text description and an image:
@@ -805,13 +805,13 @@ When the Gemini CLI receives this response, it will:
This enables you to build sophisticated tools that can provide rich, multi-modal
context to the Gemini model.
## MCP Prompts as Slash Commands
## MCP prompts as slash commands
In addition to tools, MCP servers can expose predefined prompts that can be
executed as slash commands within the Gemini CLI. This allows you to create
shortcuts for common or complex queries that can be easily invoked by name.
### Defining Prompts on the Server
### Defining prompts on the server
Here's a small example of a stdio MCP server that defines prompts:
@@ -862,7 +862,7 @@ This can be included in `settings.json` under `mcpServers` with:
}
```
### Invoking Prompts
### Invoking prompts
Once a prompt is discovered, you can invoke it using its name as a slash
command. The CLI will automatically handle parsing arguments.
@@ -883,7 +883,7 @@ substituting the arguments into the prompt template and returning the final
prompt text. The CLI then sends this prompt to the model for execution. This
provides a convenient way to automate and share common workflows.
## Managing MCP Servers with `gemini mcp`
## Managing MCP servers with `gemini mcp`
While you can always configure MCP servers by manually editing your
`settings.json` file, the Gemini CLI provides a convenient set of commands to
@@ -891,7 +891,7 @@ manage your server configurations programmatically. These commands streamline
the process of adding, listing, and removing MCP servers without needing to
directly edit JSON files.
### Adding a Server (`gemini mcp add`)
### Adding a server (`gemini mcp add`)
The `add` command configures a new MCP server in your `settings.json`. Based on
the scope (`-s, --scope`), it will be added to either the user config
@@ -908,7 +908,7 @@ gemini mcp add [options] <name> <commandOrUrl> [args...]
`http`/`sse`).
- `[args...]`: Optional arguments for a `stdio` command.
**Options (Flags):**
**Options (flags):**
- `-s, --scope`: Configuration scope (user or project). [default: "project"]
- `-t, --transport`: Transport type (stdio, sse, http). [default: "stdio"]
@@ -966,7 +966,7 @@ gemini mcp add --transport sse sse-server https://api.example.com/sse/
gemini mcp add --transport sse --header "Authorization: Bearer abc123" secure-sse https://api.example.com/sse/
```
### Listing Servers (`gemini mcp list`)
### Listing servers (`gemini mcp list`)
To view all MCP servers currently configured, use the `list` command. It
displays each server's name, configuration details, and connection status. This
@@ -978,7 +978,7 @@ command has no flags.
gemini mcp list
```
**Example Output:**
**Example output:**
```sh
✓ stdio-server: command: python3 server.py (stdio) - Connected
@@ -986,7 +986,7 @@ gemini mcp list
✗ sse-server: https://api.example.com/sse (sse) - Disconnected
```
### Removing a Server (`gemini mcp remove`)
### Removing a server (`gemini mcp remove`)
To delete a server from your configuration, use the `remove` command with the
server's name.
@@ -997,7 +997,7 @@ server's name.
gemini mcp remove <name>
```
**Options (Flags):**
**Options (flags):**
- `-s, --scope`: Configuration scope (user or project). [default: "project"]

View File

@@ -1,4 +1,4 @@
# Memory Tool (`save_memory`)
# Memory tool (`save_memory`)
This document describes the `save_memory` tool for the Gemini CLI.

View File

@@ -1,4 +1,4 @@
# Shell Tool (`run_shell_command`)
# Shell tool (`run_shell_command`)
This document describes the `run_shell_command` tool for the Gemini CLI.
@@ -71,7 +71,7 @@ run_shell_command(command="npm run dev &", description="Start development server
You can configure the behavior of the `run_shell_command` tool by modifying your
`settings.json` file or by using the `/settings` command in the Gemini CLI.
### Enabling Interactive Commands
### Enabling interactive commands
To enable interactive commands, you need to set the
`tools.shell.enableInteractiveShell` setting to `true`. This will use `node-pty`
@@ -91,7 +91,7 @@ implementation, which does not support interactive commands.
}
```
### Showing Color in Output
### Showing color in output
To show color in the shell output, you need to set the `tools.shell.showColor`
setting to `true`. **Note: This setting only applies when
@@ -109,7 +109,7 @@ setting to `true`. **Note: This setting only applies when
}
```
### Setting the Pager
### Setting the pager
You can set a custom pager for the shell output by setting the
`tools.shell.pager` setting. The default pager is `cat`. **Note: This setting
@@ -127,7 +127,7 @@ only applies when `tools.shell.enableInteractiveShell` is enabled.**
}
```
## Interactive Commands
## Interactive commands
The `run_shell_command` tool now supports interactive commands by integrating a
pseudo-terminal (pty). This allows you to run commands that require real-time
@@ -149,13 +149,13 @@ including complex TUIs, will be rendered correctly.
background. The `Background PIDs` field will contain the process ID of the
background process.
## Environment Variables
## Environment variables
When `run_shell_command` executes a command, it sets the `GEMINI_CLI=1`
environment variable in the subprocess's environment. This allows scripts or
tools to detect if they are being run from within the Gemini CLI.
## Command Restrictions
## Command restrictions
You can restrict the commands that can be executed by the `run_shell_command`
tool by using the `tools.core` and `tools.exclude` settings in your
@@ -174,16 +174,16 @@ configuration file.
The validation logic is designed to be secure and flexible:
1. **Command Chaining Disabled**: The tool automatically splits commands
1. **Command chaining disabled**: The tool automatically splits commands
chained with `&&`, `||`, or `;` and validates each part separately. If any
part of the chain is disallowed, the entire command is blocked.
2. **Prefix Matching**: The tool uses prefix matching. For example, if you
2. **Prefix matching**: The tool uses prefix matching. For example, if you
allow `git`, you can run `git status` or `git log`.
3. **Blocklist Precedence**: The `tools.exclude` list is always checked first.
3. **Blocklist precedence**: The `tools.exclude` list is always checked first.
If a command matches a blocked prefix, it will be denied, even if it also
matches an allowed prefix in `tools.core`.
### Command Restriction Examples
### Command restriction examples
**Allow only specific command prefixes**
@@ -251,7 +251,7 @@ To block all shell commands, add the `run_shell_command` wildcard to
- `ls -l`: Blocked
- `any other command`: Blocked
## Security Note for `excludeTools`
## Security note for `excludeTools`
Command-specific restrictions in `excludeTools` for `run_shell_command` are
based on simple string matching and can be easily bypassed. This feature is

View File

@@ -1,4 +1,4 @@
# Todo Tool (`write_todos`)
# Todo tool (`write_todos`)
This document describes the `write_todos` tool for the Gemini CLI.
@@ -24,11 +24,11 @@ alignment where the agent is less likely to lose track of its current goal.
The agent uses this tool to break down complex multi-step requests into a clear
plan.
- **Progress Tracking:** The agent updates this list as it works, marking tasks
- **Progress tracking:** The agent updates this list as it works, marking tasks
as `completed` when done.
- **Single Focus:** Only one task will be marked `in_progress` at a time,
- **Single focus:** Only one task will be marked `in_progress` at a time,
indicating exactly what the agent is currently working on.
- **Dynamic Updates:** The plan may evolve as the agent discovers new
- **Dynamic updates:** The plan may evolve as the agent discovers new
information, leading to new tasks being added or unnecessary ones being
cancelled.
@@ -53,5 +53,5 @@ write_todos({
- **Enabling:** This tool is enabled by default. You can disable it in your
`settings.json` file by setting `"useWriteTodos": false`.
- **Intended Use:** This tool is primarily used by the agent for complex,
- **Intended use:** This tool is primarily used by the agent for complex,
multi-turn tasks. It is generally not used for simple, single-turn questions.

View File

@@ -1,4 +1,4 @@
# Web Fetch Tool (`web_fetch`)
# Web fetch tool (`web_fetch`)
This document describes the `web_fetch` tool for the Gemini CLI.

View File

@@ -1,4 +1,4 @@
# Web Search Tool (`google_web_search`)
# Web search tool (`google_web_search`)
This document describes the `google_web_search` tool.

View File

@@ -90,7 +90,7 @@ topics on:
`advanced.excludedEnvVars` setting in your `settings.json` to exclude fewer
variables.
## Exit Codes
## Exit codes
The Gemini CLI uses specific exit codes to indicate the reason for termination.
This is especially useful for scripting and automation.
@@ -103,7 +103,7 @@ This is especially useful for scripting and automation.
| 52 | `FatalConfigError` | A configuration file (`settings.json`) is invalid or contains errors. |
| 53 | `FatalTurnLimitedError` | The maximum number of conversational turns for the session was reached. (non-interactive mode only) |
## Debugging Tips
## Debugging tips
- **CLI debugging:**
- Use the `--verbose` flag (if available) with CLI commands for more detailed
@@ -129,7 +129,7 @@ This is especially useful for scripting and automation.
- Always run `npm run preflight` before committing code. This can catch many
common issues related to formatting, linting, and type errors.
## Existing GitHub Issues similar to yours or creating new Issues
## Existing GitHub issues similar to yours or creating new issues
If you encounter an issue that was not covered here in this _Troubleshooting
guide_, consider searching the Gemini CLI