LM Studio OSS Support (#2312)

## Overview

Adds LM Studio OSS support. Closes #1883


### Changes
This PR enhances the behavior of the `--oss` flag to support LM Studio as a
provider. It also introduces a new flag, `--local-provider`, which
accepts `lmstudio` or `ollama` as values when the user wants to
explicitly choose which one to use.

If no provider is specified, `codex --oss` will auto-select the provider
based on whichever is running.
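
For example, the resulting invocations would look something like this (a sketch based on the flag names described above):

```shell
# Auto-select whichever local provider (LM Studio or Ollama) is currently running
codex --oss

# Explicitly pick LM Studio (or pass `ollama` instead)
codex --oss --local-provider lmstudio
```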

#### Additional enhancements
The default provider can be set using `oss_provider` in config like:

```toml
oss_provider = "lmstudio"
```

Non-interactive users will need to either pass the provider as an argument
or set it in their `config.toml`.
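
For instance, a non-interactive run could pass the provider explicitly on the command line (assuming `codex exec` accepts the `--oss` and `--local-provider` flags described above):

```shell
# Non-interactive run with the provider given as an argument
codex exec --oss --local-provider ollama "summarize the changes in this repo"

# Alternatively, set `oss_provider = "ollama"` in config.toml and just run:
codex exec --oss "summarize the changes in this repo"
```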

### Notes
For best performance, [set the default context length](https://lmstudio.ai/docs/app/advanced/per-model) for gpt-oss to the maximum your machine can support.

---------

Co-authored-by: Matt Clayton <matt@lmstudio.ai>
Co-authored-by: Eric Traut <etraut@openai.com>

@@ -253,6 +253,20 @@ This is analogous to `model_context_window`, but for the maximum number of outpu
> See also [`codex exec`](./exec.md) to see how these model settings influence non-interactive runs.
### oss_provider
Specifies the default OSS provider to use when running Codex. This is used when the `--oss` flag is provided without a specific provider.
Valid values are:
- `"lmstudio"` - Use LM Studio as the local model provider
- `"ollama"` - Use Ollama as the local model provider
```toml
# Example: Set default OSS provider to LM Studio
oss_provider = "lmstudio"
```
## Execution environment
### approval_policy