Compare commits


33 Commits
v1.2.7 ... beta

Author SHA1 Message Date
opencode-agent[bot]
650354e93b Apply PR #14038: feat: add list sessions for all sessions 2026-02-19 23:14:53 +00:00
opencode-agent[bot]
bc38ce8ec8 Apply PR #12633: feat(tui): add auto-accept mode for permission requests 2026-02-19 23:14:53 +00:00
opencode-agent[bot]
e6d0c9711b Apply PR #12022: feat: update tui model dialog to utilize model family to reduce noise in list 2026-02-19 23:14:52 +00:00
opencode-agent[bot]
4e2105ff3f Apply PR #11811: feat: make plan mode the default 2026-02-19 23:14:51 +00:00
Dax Raad
f28d3e0781 core: remove User-Agent header assertion from LLM test to fix failing test 2026-02-19 18:10:26 -05:00
opencode-agent[bot]
6f0ce9d84b chore: generate 2026-02-19 22:51:05 +00:00
Dax Raad
973715f3da anthropic legal requests 2026-02-19 17:49:22 -05:00
opencode
f2090b26c1 release: v1.2.8 2026-02-19 22:38:42 +00:00
Adam
fca0166488 fix(app): black screen on launch with sidecar server 2026-02-19 16:23:33 -06:00
opencode-agent[bot]
686dd330a0 chore: generate 2026-02-19 22:19:09 +00:00
tctev
193013a44d feat(opencode): support adaptive thinking for claude sonnet 4.6 (#14283)
Co-authored-by: tctev <224793535+tctev@users.noreply.github.com>
Co-authored-by: Aiden Cline <63023139+rekram1-node@users.noreply.github.com>
Co-authored-by: Aiden Cline <aidenpcline@gmail.com>
2026-02-19 16:17:57 -06:00
Yanosh Kunsh
824ab4cecc feat(tui): add custom tool and mcp call responses visible and collapsable (#10649)
Co-authored-by: Aiden Cline <63023139+rekram1-node@users.noreply.github.com>
2026-02-19 15:36:40 -06:00
Adam
7a42ecdddb chore: cleanup 2026-02-19 21:27:39 +00:00
Adam
dd011e879c fix(app): clear todos on abort 2026-02-19 21:27:39 +00:00
Ryan Vogel
84643f4192 feat: move session listing to experimental routes with enhanced query parameters 2026-02-18 19:04:18 -05:00
Ryan Vogel
34e69735cb feat: add cursor pagination to global session list 2026-02-17 19:47:55 -05:00
Ryan Vogel
b9ec4e6b97 Merge branch 'dev' into feat/global-recent-sessions 2026-02-17 18:38:45 -05:00
Ryan Vogel
007e633465 Merge branch 'dev' into feat/global-recent-sessions 2026-02-17 18:33:15 -05:00
Ryan Vogel
c0e0156680 feat: add list sessions for all sessions 2026-02-17 18:27:11 -05:00
Dax
e31f00ad22 Merge branch 'dev' into feat/auto-accept-permissions 2026-02-16 21:50:34 -05:00
LukeParkerDev
a90e8de050 add missing return 2026-02-11 13:24:17 +10:00
Aiden Cline
eabf770053 Merge branch 'dev' into utilize-family-in-dialog 2026-02-10 14:43:15 -06:00
Dax
86d7bdc542 Merge branch 'dev' into feat/auto-accept-permissions 2026-02-09 10:55:01 -05:00
Dax
d3ab78bba0 Merge branch 'dev' into feat/auto-accept-permissions 2026-02-09 10:04:40 -05:00
Dax Raad
a531f3f36d core: run command build agent now auto-accepts file edits to reduce workflow interruptions while still requiring confirmation for bash commands 2026-02-07 20:00:09 -05:00
Dax Raad
bb3382311d tui: standardize autoedit indicator text styling to match other status labels 2026-02-07 19:57:45 -05:00
Dax Raad
ad545d0cc9 tui: allow auto-accepting only edit permissions instead of all permissions 2026-02-07 19:52:53 -05:00
Dax Raad
ac244b1458 tui: add searchable 'toggle' keywords to command palette and show current state in toggle titles 2026-02-07 17:03:34 -05:00
Dax Raad
f202536b65 tui: show enable/disable state in permission toggle and make it searchable by 'toggle permissions' 2026-02-07 16:57:48 -05:00
Dax Raad
405cc3f610 tui: streamline permission toggle command naming and add keyboard shortcut support
Rename 'Toggle autoaccept permissions' to 'Toggle permissions' for clarity
and move the command to the Agent category for better discoverability.
Add permission_auto_accept_toggle keybind to enable keyboard shortcut
toggling of auto-accept mode for permission requests.
2026-02-07 16:51:55 -05:00
Dax Raad
878c1b8c2d feat(tui): add auto-accept mode for permission requests
Add a toggleable auto-accept mode that automatically accepts all incoming
permission requests with a 'once' reply. This is useful for users who want
to streamline their workflow when they trust the agent's actions.

Changes:
- Add permission_auto_accept keybind (default: shift+tab) to config
- Remove default for agent_cycle_reverse (was shift+tab)
- Add auto-accept logic in sync.tsx to auto-reply when enabled
- Add command bar action to toggle auto-accept mode (copy: "Toggle autoaccept permissions")
- Add visual indicator showing 'auto-accept' when active
- Store auto-accept state in KV for persistence across sessions
2026-02-07 16:44:39 -05:00
Aiden Cline
bb4d978684 feat: update tui model dialog to utilize model family to reduce noise in list 2026-02-03 15:48:40 -06:00
Dax Raad
afec40e8da feat: make plan mode the default, remove experimental flag
- Remove OPENCODE_EXPERIMENTAL_PLAN_MODE flag from flag.ts
- Update prompt.ts to always use plan mode logic
- Update registry.ts to always include plan tools in CLI
- Remove flag documentation from cli.mdx
2026-02-02 10:40:40 -05:00
51 changed files with 623 additions and 336 deletions

View File

@@ -25,7 +25,7 @@
},
"packages/app": {
"name": "@opencode-ai/app",
"version": "1.2.7",
"version": "1.2.8",
"dependencies": {
"@kobalte/core": "catalog:",
"@opencode-ai/sdk": "workspace:*",
@@ -75,7 +75,7 @@
},
"packages/console/app": {
"name": "@opencode-ai/console-app",
"version": "1.2.7",
"version": "1.2.8",
"dependencies": {
"@cloudflare/vite-plugin": "1.15.2",
"@ibm/plex": "6.4.1",
@@ -109,7 +109,7 @@
},
"packages/console/core": {
"name": "@opencode-ai/console-core",
"version": "1.2.7",
"version": "1.2.8",
"dependencies": {
"@aws-sdk/client-sts": "3.782.0",
"@jsx-email/render": "1.1.1",
@@ -136,7 +136,7 @@
},
"packages/console/function": {
"name": "@opencode-ai/console-function",
"version": "1.2.7",
"version": "1.2.8",
"dependencies": {
"@ai-sdk/anthropic": "2.0.0",
"@ai-sdk/openai": "2.0.2",
@@ -160,7 +160,7 @@
},
"packages/console/mail": {
"name": "@opencode-ai/console-mail",
"version": "1.2.7",
"version": "1.2.8",
"dependencies": {
"@jsx-email/all": "2.2.3",
"@jsx-email/cli": "1.4.3",
@@ -184,7 +184,7 @@
},
"packages/desktop": {
"name": "@opencode-ai/desktop",
"version": "1.2.7",
"version": "1.2.8",
"dependencies": {
"@opencode-ai/app": "workspace:*",
"@opencode-ai/ui": "workspace:*",
@@ -217,7 +217,7 @@
},
"packages/enterprise": {
"name": "@opencode-ai/enterprise",
"version": "1.2.7",
"version": "1.2.8",
"dependencies": {
"@opencode-ai/ui": "workspace:*",
"@opencode-ai/util": "workspace:*",
@@ -246,7 +246,7 @@
},
"packages/function": {
"name": "@opencode-ai/function",
"version": "1.2.7",
"version": "1.2.8",
"dependencies": {
"@octokit/auth-app": "8.0.1",
"@octokit/rest": "catalog:",
@@ -262,7 +262,7 @@
},
"packages/opencode": {
"name": "opencode",
"version": "1.2.7",
"version": "1.2.8",
"bin": {
"opencode": "./bin/opencode",
},
@@ -376,7 +376,7 @@
},
"packages/plugin": {
"name": "@opencode-ai/plugin",
"version": "1.2.7",
"version": "1.2.8",
"dependencies": {
"@opencode-ai/sdk": "workspace:*",
"zod": "catalog:",
@@ -396,7 +396,7 @@
},
"packages/sdk/js": {
"name": "@opencode-ai/sdk",
"version": "1.2.7",
"version": "1.2.8",
"devDependencies": {
"@hey-api/openapi-ts": "0.90.10",
"@tsconfig/node22": "catalog:",
@@ -407,7 +407,7 @@
},
"packages/slack": {
"name": "@opencode-ai/slack",
"version": "1.2.7",
"version": "1.2.8",
"dependencies": {
"@opencode-ai/sdk": "workspace:*",
"@slack/bolt": "^3.17.1",
@@ -420,7 +420,7 @@
},
"packages/ui": {
"name": "@opencode-ai/ui",
"version": "1.2.7",
"version": "1.2.8",
"dependencies": {
"@kobalte/core": "catalog:",
"@opencode-ai/sdk": "workspace:*",
@@ -462,7 +462,7 @@
},
"packages/util": {
"name": "@opencode-ai/util",
"version": "1.2.7",
"version": "1.2.8",
"dependencies": {
"zod": "catalog:",
},
@@ -473,7 +473,7 @@
},
"packages/web": {
"name": "@opencode-ai/web",
"version": "1.2.7",
"version": "1.2.8",
"dependencies": {
"@astrojs/cloudflare": "12.6.3",
"@astrojs/markdown-remark": "6.3.1",

View File

@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/app",
"version": "1.2.7",
"version": "1.2.8",
"description": "",
"type": "module",
"exports": {

View File

@@ -73,12 +73,16 @@ export function createPromptSubmit(input: PromptSubmitInput) {
const abort = async () => {
const sessionID = params.id
if (!sessionID) return Promise.resolve()
globalSync.todo.set(sessionID, [])
const [, setStore] = globalSync.child(sdk.directory)
setStore("todo", sessionID, [])
const queued = pending.get(sessionID)
if (queued) {
queued.abort.abort()
queued.cleanup()
pending.delete(sessionID)
globalSync.todo.set(sessionID, undefined)
return Promise.resolve()
}
return sdk.client.session
@@ -86,9 +90,6 @@ export function createPromptSubmit(input: PromptSubmitInput) {
sessionID,
})
.catch(() => {})
.finally(() => {
globalSync.todo.set(sessionID, undefined)
})
}
const restoreCommentItems = (items: CommentItem[]) => {

View File

@@ -1098,6 +1098,7 @@ export default function Page() {
comments.clear()
resumeScroll()
}}
onResponseSubmit={resumeScroll}
setPromptDockRef={(el) => {
promptDock = el
}}

View File

@@ -16,6 +16,7 @@ export function SessionComposerRegion(props: {
newSessionWorktree: string
onNewSessionWorktreeReset: () => void
onSubmit: () => void
onResponseSubmit: () => void
setPromptDockRef: (el: HTMLDivElement) => void
}) {
const params = useParams()
@@ -57,7 +58,7 @@ export function SessionComposerRegion(props: {
<Show when={props.state.questionRequest()} keyed>
{(request) => (
<div>
<SessionQuestionDock request={request} />
<SessionQuestionDock request={request} onSubmit={props.onResponseSubmit} />
</div>
)}
</Show>
@@ -68,7 +69,10 @@ export function SessionComposerRegion(props: {
<SessionPermissionDock
request={request}
responding={props.state.permissionResponding()}
onDecide={props.state.decide}
onDecide={(response) => {
props.onResponseSubmit()
props.state.decide(response)
}}
/>
</div>
)}

View File

@@ -8,7 +8,7 @@ import type { QuestionAnswer, QuestionRequest } from "@opencode-ai/sdk/v2"
import { useLanguage } from "@/context/language"
import { useSDK } from "@/context/sdk"
export const SessionQuestionDock: Component<{ request: QuestionRequest }> = (props) => {
export const SessionQuestionDock: Component<{ request: QuestionRequest; onSubmit: () => void }> = (props) => {
const sdk = useSDK()
const language = useLanguage()
@@ -115,6 +115,7 @@ export const SessionQuestionDock: Component<{ request: QuestionRequest }> = (pro
const reply = async (answers: QuestionAnswer[]) => {
if (store.sending) return
props.onSubmit()
setStore("sending", true)
try {
await sdk.client.question.reply({ requestID: props.request.id, answers })
@@ -128,6 +129,7 @@ export const SessionQuestionDock: Component<{ request: QuestionRequest }> = (pro
const reject = async () => {
if (store.sending) return
props.onSubmit()
setStore("sending", true)
try {
await sdk.client.question.reject({ requestID: props.request.id })

View File

@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/console-app",
"version": "1.2.7",
"version": "1.2.8",
"type": "module",
"license": "MIT",
"scripts": {

View File

@@ -1,7 +1,7 @@
{
"$schema": "https://json.schemastore.org/package.json",
"name": "@opencode-ai/console-core",
"version": "1.2.7",
"version": "1.2.8",
"private": true,
"type": "module",
"license": "MIT",

View File

@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/console-function",
"version": "1.2.7",
"version": "1.2.8",
"$schema": "https://json.schemastore.org/package.json",
"private": true,
"type": "module",

View File

@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/console-mail",
"version": "1.2.7",
"version": "1.2.8",
"dependencies": {
"@jsx-email/all": "2.2.3",
"@jsx-email/cli": "1.4.3",

View File

@@ -1,7 +1,7 @@
{
"name": "@opencode-ai/desktop",
"private": true,
"version": "1.2.7",
"version": "1.2.8",
"type": "module",
"license": "MIT",
"scripts": {

View File

@@ -472,12 +472,10 @@ render(() => {
}
return (
<Show when={defaultServer.loading ? false : defaultServer.latest}>
{(defaultServer) => (
<AppInterface defaultServer={defaultServer() ?? ServerConnection.key(server)} servers={[server]}>
<Inner />
</AppInterface>
)}
<Show when={!defaultServer.loading}>
<AppInterface defaultServer={defaultServer.latest ?? ServerConnection.key(server)} servers={[server]}>
<Inner />
</AppInterface>
</Show>
)
}}
@@ -492,19 +490,34 @@ type ServerReadyData = { url: string; password: string | null }
// Gate component that waits for the server to be ready
function ServerGate(props: { children: (data: Accessor<ServerReadyData>) => JSX.Element }) {
const [serverData] = createResource(() => commands.awaitInitialization(new Channel<InitStep>() as any))
if (serverData.state === "errored") throw serverData.error
return (
<Show
when={serverData.state !== "pending" && serverData()}
when={serverData.state !== "errored"}
fallback={
<div class="h-screen w-screen flex flex-col items-center justify-center bg-background-base">
<Splash class="w-16 h-20 opacity-50 animate-pulse" />
<div class="h-screen w-screen flex flex-col items-center justify-center bg-background-base gap-4">
<Splash class="w-16 h-20 opacity-50" />
<div class="max-w-md px-4 text-center">
<p class="text-sm font-medium text-red-400">Failed to start server</p>
<p class="mt-2 text-xs text-zinc-400 break-words whitespace-pre-wrap">
{String(serverData.error ?? "Unknown error")}
</p>
</div>
<div data-tauri-decorum-tb class="flex flex-row absolute top-0 right-0 z-10 h-10" />
</div>
}
>
{(data) => props.children(data)}
<Show
when={serverData.state !== "pending" && serverData()}
fallback={
<div class="h-screen w-screen flex flex-col items-center justify-center bg-background-base">
<Splash class="w-16 h-20 opacity-50 animate-pulse" />
<div data-tauri-decorum-tb class="flex flex-row absolute top-0 right-0 z-10 h-10" />
</div>
}
>
{(data) => props.children(data)}
</Show>
</Show>
)
}

View File

@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/enterprise",
"version": "1.2.7",
"version": "1.2.8",
"private": true,
"type": "module",
"license": "MIT",

View File

@@ -1,7 +1,7 @@
id = "opencode"
name = "OpenCode"
description = "The open source coding agent."
version = "1.2.7"
version = "1.2.8"
schema_version = 1
authors = ["Anomaly"]
repository = "https://github.com/anomalyco/opencode"
@@ -11,26 +11,26 @@ name = "OpenCode"
icon = "./icons/opencode.svg"
[agent_servers.opencode.targets.darwin-aarch64]
archive = "https://github.com/anomalyco/opencode/releases/download/v1.2.7/opencode-darwin-arm64.zip"
archive = "https://github.com/anomalyco/opencode/releases/download/v1.2.8/opencode-darwin-arm64.zip"
cmd = "./opencode"
args = ["acp"]
[agent_servers.opencode.targets.darwin-x86_64]
archive = "https://github.com/anomalyco/opencode/releases/download/v1.2.7/opencode-darwin-x64.zip"
archive = "https://github.com/anomalyco/opencode/releases/download/v1.2.8/opencode-darwin-x64.zip"
cmd = "./opencode"
args = ["acp"]
[agent_servers.opencode.targets.linux-aarch64]
archive = "https://github.com/anomalyco/opencode/releases/download/v1.2.7/opencode-linux-arm64.tar.gz"
archive = "https://github.com/anomalyco/opencode/releases/download/v1.2.8/opencode-linux-arm64.tar.gz"
cmd = "./opencode"
args = ["acp"]
[agent_servers.opencode.targets.linux-x86_64]
archive = "https://github.com/anomalyco/opencode/releases/download/v1.2.7/opencode-linux-x64.tar.gz"
archive = "https://github.com/anomalyco/opencode/releases/download/v1.2.8/opencode-linux-x64.tar.gz"
cmd = "./opencode"
args = ["acp"]
[agent_servers.opencode.targets.windows-x86_64]
archive = "https://github.com/anomalyco/opencode/releases/download/v1.2.7/opencode-windows-x64.zip"
archive = "https://github.com/anomalyco/opencode/releases/download/v1.2.8/opencode-windows-x64.zip"
cmd = "./opencode.exe"
args = ["acp"]

View File

@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/function",
"version": "1.2.7",
"version": "1.2.8",
"$schema": "https://json.schemastore.org/package.json",
"private": true,
"type": "module",

View File

@@ -1,6 +1,6 @@
{
"$schema": "https://json.schemastore.org/package.json",
"version": "1.2.7",
"version": "1.2.8",
"name": "opencode",
"type": "module",
"license": "MIT",

View File

@@ -63,6 +63,7 @@ export namespace Agent {
question: "deny",
plan_enter: "deny",
plan_exit: "deny",
edit: "ask",
// mirrors github.com/github/gitignore Node.gitignore pattern for .env files
read: {
"*": "allow",

View File

@@ -332,7 +332,6 @@ export const AuthLoginCommand = cmd({
value: x.id,
hint: {
opencode: "recommended",
anthropic: "Claude Max or API key",
openai: "ChatGPT Plus/Pro or API key",
}[x.id],
})),

View File

@@ -365,6 +365,11 @@ export const RunCommand = cmd({
action: "deny",
pattern: "*",
},
{
permission: "edit",
action: "allow",
pattern: "*",
},
]
function title() {

View File

@@ -457,6 +457,7 @@ function App() {
{
title: "Toggle MCPs",
value: "mcp.list",
search: "toggle mcps",
category: "Agent",
slash: {
name: "mcps",
@@ -532,8 +533,9 @@ function App() {
category: "System",
},
{
title: "Toggle appearance",
title: mode() === "dark" ? "Light mode" : "Dark mode",
value: "theme.switch_mode",
search: "toggle appearance",
onSelect: (dialog) => {
setMode(mode() === "dark" ? "light" : "dark")
dialog.clear()
@@ -572,6 +574,7 @@ function App() {
},
{
title: "Toggle debug panel",
search: "toggle debug",
category: "System",
value: "app.debug",
onSelect: (dialog) => {
@@ -581,6 +584,7 @@ function App() {
},
{
title: "Toggle console",
search: "toggle console",
category: "System",
value: "app.console",
onSelect: (dialog) => {
@@ -621,6 +625,7 @@ function App() {
{
title: terminalTitleEnabled() ? "Disable terminal title" : "Enable terminal title",
value: "terminal.title.toggle",
search: "toggle terminal title",
keybind: "terminal_title_toggle",
category: "System",
onSelect: (dialog) => {
@@ -636,6 +641,7 @@ function App() {
{
title: kv.get("animations_enabled", true) ? "Disable animations" : "Enable animations",
value: "app.toggle.animations",
search: "toggle animations",
category: "System",
onSelect: (dialog) => {
kv.set("animations_enabled", !kv.get("animations_enabled", true))
@@ -645,6 +651,7 @@ function App() {
{
title: kv.get("diff_wrap_mode", "word") === "word" ? "Disable diff wrapping" : "Enable diff wrapping",
value: "app.toggle.diffwrap",
search: "toggle diff wrapping",
category: "System",
onSelect: (dialog) => {
const current = kv.get("diff_wrap_mode", "word")

View File

@@ -7,6 +7,27 @@ import { useDialog } from "@tui/ui/dialog"
import { createDialogProviderOptions, DialogProvider } from "./dialog-provider"
import { useKeybind } from "../context/keybind"
import * as fuzzysort from "fuzzysort"
import type { Provider } from "@opencode-ai/sdk/v2"
function pickLatest(models: [string, Provider["models"][string]][]) {
const picks: Record<string, [string, Provider["models"][string]]> = {}
for (const item of models) {
const model = item[0]
const info = item[1]
const key = info.family ?? model
const prev = picks[key]
if (!prev) {
picks[key] = item
continue
}
if (info.release_date !== prev[1].release_date) {
if (info.release_date > prev[1].release_date) picks[key] = item
continue
}
if (model > prev[0]) picks[key] = item
}
return Object.values(picks)
}
export function useConnected() {
const sync = useSync()
@@ -21,6 +42,7 @@ export function DialogModel(props: { providerID?: string }) {
const dialog = useDialog()
const keybind = useKeybind()
const [query, setQuery] = createSignal("")
const [all, setAll] = createSignal(false)
const connected = useConnected()
const providers = createDialogProviderOptions()
@@ -72,8 +94,8 @@ export function DialogModel(props: { providerID?: string }) {
(provider) => provider.id !== "opencode",
(provider) => provider.name,
),
flatMap((provider) =>
pipe(
flatMap((provider) => {
const items = pipe(
provider.models,
entries(),
filter(([_, info]) => info.status !== "deprecated"),
@@ -104,8 +126,9 @@ export function DialogModel(props: { providerID?: string }) {
(x) => x.footer !== "Free",
(x) => x.title,
),
),
),
)
return items
}),
)
const popularProviders = !connected()
@@ -154,6 +177,13 @@ export function DialogModel(props: { providerID?: string }) {
local.model.toggleFavorite(option.value as { providerID: string; modelID: string })
},
},
{
keybind: keybind.all.model_show_all_toggle?.[0],
title: all() ? "Show latest only" : "Show all models",
onTrigger: () => {
setAll((value) => !value)
},
},
]}
onFilter={setQuery}
flat={true}
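
For illustration, here is a minimal standalone sketch of the dedup rule implemented by the pickLatest helper in the hunk above: keep the newest release_date per model family, and break ties with the lexicographically larger model id. The model names, families, and dates below are hypothetical.

// Hypothetical entries shaped like the [modelID, info] pairs pickLatest receives.
type ModelInfo = { family?: string; release_date: string }
const models: [string, ModelInfo][] = [
  ["claude-sonnet-4-5", { family: "claude-sonnet", release_date: "2025-09-29" }],
  ["claude-sonnet-4-6", { family: "claude-sonnet", release_date: "2026-01-15" }],
  ["gpt-5-mini", { release_date: "2025-08-07" }], // no family: keyed by its own id
]
const picks: Record<string, [string, ModelInfo]> = {}
for (const item of models) {
  const key = item[1].family ?? item[0]
  const prev = picks[key]
  if (!prev) {
    picks[key] = item
    continue
  }
  // Newer release_date wins; equal dates fall back to the larger model id.
  if (item[1].release_date !== prev[1].release_date) {
    if (item[1].release_date > prev[1].release_date) picks[key] = item
    continue
  }
  if (item[0] > prev[0]) picks[key] = item
}
console.log(Object.values(picks).map((pick) => pick[0])) // ["claude-sonnet-4-6", "gpt-5-mini"]

Only one entry per family survives, which is what keeps the model dialog short unless the "Show all models" action is toggled.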

View File

@@ -77,6 +77,7 @@ export function Prompt(props: PromptProps) {
const renderer = useRenderer()
const { theme, syntax } = useTheme()
const kv = useKV()
const [autoaccept, setAutoaccept] = kv.signal<"none" | "edit">("permission_auto_accept", "edit")
function promptModelWarning() {
toast.show({
@@ -170,6 +171,17 @@ export function Prompt(props: PromptProps) {
command.register(() => {
return [
{
title: autoaccept() === "none" ? "Enable autoedit" : "Disable autoedit",
value: "permission.auto_accept.toggle",
search: "toggle permissions",
keybind: "permission_auto_accept_toggle",
category: "Agent",
onSelect: (dialog) => {
setAutoaccept(() => (autoaccept() === "none" ? "edit" : "none"))
dialog.clear()
},
},
{
title: "Clear prompt",
value: "prompt.clear",
@@ -996,23 +1008,30 @@ export function Prompt(props: PromptProps) {
cursorColor={theme.text}
syntaxStyle={syntax()}
/>
<box flexDirection="row" flexShrink={0} paddingTop={1} gap={1}>
<text fg={highlight()}>
{store.mode === "shell" ? "Shell" : Locale.titlecase(local.agent.current().name)}{" "}
</text>
<Show when={store.mode === "normal"}>
<box flexDirection="row" gap={1}>
<text flexShrink={0} fg={keybind.leader ? theme.textMuted : theme.text}>
{local.model.parsed().model}
</text>
<text fg={theme.textMuted}>{local.model.parsed().provider}</text>
<Show when={showVariant()}>
<text fg={theme.textMuted}>·</text>
<text>
<span style={{ fg: theme.warning, bold: true }}>{local.model.variant.current()}</span>
<box flexDirection="row" flexShrink={0} paddingTop={1} gap={1} justifyContent="space-between">
<box flexDirection="row" gap={1}>
<text fg={highlight()}>
{store.mode === "shell" ? "Shell" : Locale.titlecase(local.agent.current().name)}{" "}
</text>
<Show when={store.mode === "normal"}>
<box flexDirection="row" gap={1}>
<text flexShrink={0} fg={keybind.leader ? theme.textMuted : theme.text}>
{local.model.parsed().model}
</text>
</Show>
</box>
<text fg={theme.textMuted}>{local.model.parsed().provider}</text>
<Show when={showVariant()}>
<text fg={theme.textMuted}>·</text>
<text>
<span style={{ fg: theme.warning, bold: true }}>{local.model.variant.current()}</span>
</text>
</Show>
</box>
</Show>
</box>
<Show when={autoaccept() === "edit"}>
<text>
<span style={{ fg: theme.warning }}>autoedit</span>
</text>
</Show>
</box>
</box>

View File

@@ -25,6 +25,7 @@ import { createSimpleContext } from "./helper"
import type { Snapshot } from "@/snapshot"
import { useExit } from "./exit"
import { useArgs } from "./args"
import { useKV } from "./kv"
import { batch, onMount } from "solid-js"
import { Log } from "@/util/log"
import type { Path } from "@opencode-ai/sdk"
@@ -103,6 +104,8 @@ export const { use: useSync, provider: SyncProvider } = createSimpleContext({
})
const sdk = useSDK()
const kv = useKV()
const [autoaccept] = kv.signal<"none" | "edit">("permission_auto_accept", "edit")
sdk.event.listen((e) => {
const event = e.details
@@ -127,6 +130,13 @@ export const { use: useSync, provider: SyncProvider } = createSimpleContext({
case "permission.asked": {
const request = event.properties
if (autoaccept() === "edit" && request.permission === "edit") {
sdk.client.permission.reply({
reply: "once",
requestID: request.id,
})
break
}
const requests = store.permission[request.sessionID]
if (!requests) {
setStore("permission", request.sessionID, [request])
@@ -441,6 +451,7 @@ export const { use: useSync, provider: SyncProvider } = createSimpleContext({
get ready() {
return store.status !== "loading"
},
session: {
get(sessionID: string) {
const match = Binary.search(store.session, sessionID, (s) => s.id)

View File

@@ -46,6 +46,7 @@ export function Home() {
{
title: tipsHidden() ? "Show tips" : "Hide tips",
value: "tips.toggle",
search: "toggle tips",
keybind: "tips_toggle",
category: "System",
onSelect: (dialog) => {

View File

@@ -98,6 +98,7 @@ const context = createContext<{
showThinking: () => boolean
showTimestamps: () => boolean
showDetails: () => boolean
showGenericToolOutput: () => boolean
diffWrapMode: () => "word" | "none"
sync: ReturnType<typeof useSync>
}>()
@@ -152,6 +153,7 @@ export function Session() {
const [showHeader, setShowHeader] = kv.signal("header_visible", true)
const [diffWrapMode] = kv.signal<"word" | "none">("diff_wrap_mode", "word")
const [animationsEnabled, setAnimationsEnabled] = kv.signal("animations_enabled", true)
const [showGenericToolOutput, setShowGenericToolOutput] = kv.signal("generic_tool_output_visibility", false)
const wide = createMemo(() => dimensions().width > 120)
const sidebarVisible = createMemo(() => {
@@ -523,6 +525,7 @@ export function Session() {
{
title: sidebarVisible() ? "Hide sidebar" : "Show sidebar",
value: "session.sidebar.toggle",
search: "toggle sidebar",
keybind: "sidebar_toggle",
category: "Session",
onSelect: (dialog) => {
@@ -537,6 +540,7 @@ export function Session() {
{
title: conceal() ? "Disable code concealment" : "Enable code concealment",
value: "session.toggle.conceal",
search: "toggle code concealment",
keybind: "messages_toggle_conceal" as any,
category: "Session",
onSelect: (dialog) => {
@@ -547,6 +551,7 @@ export function Session() {
{
title: showTimestamps() ? "Hide timestamps" : "Show timestamps",
value: "session.toggle.timestamps",
search: "toggle timestamps",
category: "Session",
slash: {
name: "timestamps",
@@ -560,6 +565,7 @@ export function Session() {
{
title: showThinking() ? "Hide thinking" : "Show thinking",
value: "session.toggle.thinking",
search: "toggle thinking",
keybind: "display_thinking",
category: "Session",
slash: {
@@ -574,6 +580,7 @@ export function Session() {
{
title: showDetails() ? "Hide tool details" : "Show tool details",
value: "session.toggle.actions",
search: "toggle tool details",
keybind: "tool_details",
category: "Session",
onSelect: (dialog) => {
@@ -582,8 +589,9 @@ export function Session() {
},
},
{
title: "Toggle session scrollbar",
title: showScrollbar() ? "Hide session scrollbar" : "Show session scrollbar",
value: "session.toggle.scrollbar",
search: "toggle session scrollbar",
keybind: "scrollbar_toggle",
category: "Session",
onSelect: (dialog) => {
@@ -600,6 +608,15 @@ export function Session() {
dialog.clear()
},
},
{
title: showGenericToolOutput() ? "Hide generic tool output" : "Show generic tool output",
value: "session.toggle.generic_tool_output",
category: "Session",
onSelect: (dialog) => {
setShowGenericToolOutput((prev) => !prev)
dialog.clear()
},
},
{
title: "Page up",
value: "session.page.up",
@@ -974,6 +991,7 @@ export function Session() {
showThinking,
showTimestamps,
showDetails,
showGenericToolOutput,
diffWrapMode,
sync,
}}
@@ -1508,10 +1526,40 @@ type ToolProps<T extends Tool.Info> = {
part: ToolPart
}
function GenericTool(props: ToolProps<any>) {
const { theme } = useTheme()
const ctx = use()
const output = createMemo(() => props.output?.trim() ?? "")
const [expanded, setExpanded] = createSignal(false)
const lines = createMemo(() => output().split("\n"))
const maxLines = 3
const overflow = createMemo(() => lines().length > maxLines)
const limited = createMemo(() => {
if (expanded() || !overflow()) return output()
return [...lines().slice(0, maxLines), "…"].join("\n")
})
return (
<InlineTool icon="⚙" pending="Writing command..." complete={true} part={props.part}>
{props.tool} {input(props.input)}
</InlineTool>
<Show
when={props.output && ctx.showGenericToolOutput()}
fallback={
<InlineTool icon="⚙" pending="Writing command..." complete={true} part={props.part}>
{props.tool} {input(props.input)}
</InlineTool>
}
>
<BlockTool
title={`# ${props.tool} ${input(props.input)}`}
part={props.part}
onClick={overflow() ? () => setExpanded((prev) => !prev) : undefined}
>
<box gap={1}>
<text fg={theme.text}>{limited()}</text>
<Show when={overflow()}>
<text fg={theme.textMuted}>{expanded() ? "Click to collapse" : "Click to expand"}</text>
</Show>
</box>
</BlockTool>
</Show>
)
}

View File

@@ -34,6 +34,7 @@ export interface DialogSelectOption<T = any> {
title: string
value: T
description?: string
search?: string
footer?: JSX.Element | string
category?: string
disabled?: boolean
@@ -85,8 +86,8 @@ export function DialogSelect<T>(props: DialogSelectProps<T>) {
// users typically search by the item name, and not its category.
const result = fuzzysort
.go(needle, options, {
keys: ["title", "category"],
scoreFn: (r) => r[0].score * 2 + r[1].score,
keys: ["title", "category", "search"],
scoreFn: (r) => r[0].score * 2 + r[1].score + r[2].score,
})
.map((x) => x.obj)
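
A rough illustration of why the extra search key matters: several palette titles were changed to state-aware labels such as "Hide sidebar", so a literal "toggle" query no longer matches the visible title, but it still matches the hidden search string. The option titles and query below are hypothetical; the keys and scoreFn mirror the code above.

import * as fuzzysort from "fuzzysort"

// Hypothetical palette options; `search` carries the old "toggle …" wording.
const options = [
  { title: "Hide sidebar", category: "Session", search: "toggle sidebar" },
  { title: "Show timestamps", category: "Session", search: "toggle timestamps" },
]
const results = fuzzysort.go("toggle side", options, {
  keys: ["title", "category", "search"],
  scoreFn: (r) => r[0].score * 2 + r[1].score + r[2].score,
})
// Expected to surface "Hide sidebar" via its search keyword even though the
// visible title no longer contains the word "toggle".
console.log(results.map((x) => x.obj.title))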

View File

@@ -789,6 +789,7 @@ export namespace Config {
stash_delete: z.string().optional().default("ctrl+d").describe("Delete stash entry"),
model_provider_list: z.string().optional().default("ctrl+a").describe("Open provider list from model dialog"),
model_favorite_toggle: z.string().optional().default("ctrl+f").describe("Toggle model favorite status"),
model_show_all_toggle: z.string().optional().default("ctrl+o").describe("Toggle showing all models"),
session_share: z.string().optional().default("none").describe("Share current session"),
session_unshare: z.string().optional().default("none").describe("Unshare current session"),
session_interrupt: z.string().optional().default("escape").describe("Interrupt current session"),
@@ -829,7 +830,12 @@ export namespace Config {
command_list: z.string().optional().default("ctrl+p").describe("List available commands"),
agent_list: z.string().optional().default("<leader>a").describe("List agents"),
agent_cycle: z.string().optional().default("tab").describe("Next agent"),
agent_cycle_reverse: z.string().optional().default("shift+tab").describe("Previous agent"),
agent_cycle_reverse: z.string().optional().default("none").describe("Previous agent"),
permission_auto_accept_toggle: z
.string()
.optional()
.default("shift+tab")
.describe("Toggle auto-accept mode for permissions"),
variant_cycle: z.string().optional().default("ctrl+t").describe("Cycle model variants"),
input_clear: z.string().optional().default("ctrl+c").describe("Clear input field"),
input_paste: z.string().optional().default("ctrl+v").describe("Paste from clipboard"),

View File

@@ -50,7 +50,7 @@ export namespace Flag {
export const OPENCODE_EXPERIMENTAL_LSP_TY = truthy("OPENCODE_EXPERIMENTAL_LSP_TY")
export const OPENCODE_EXPERIMENTAL_LSP_TOOL = OPENCODE_EXPERIMENTAL || truthy("OPENCODE_EXPERIMENTAL_LSP_TOOL")
export const OPENCODE_DISABLE_FILETIME_CHECK = truthy("OPENCODE_DISABLE_FILETIME_CHECK")
export const OPENCODE_EXPERIMENTAL_PLAN_MODE = OPENCODE_EXPERIMENTAL || truthy("OPENCODE_EXPERIMENTAL_PLAN_MODE")
export const OPENCODE_EXPERIMENTAL_MARKDOWN = truthy("OPENCODE_EXPERIMENTAL_MARKDOWN")
export const OPENCODE_MODELS_URL = process.env["OPENCODE_MODELS_URL"]
export const OPENCODE_MODELS_PATH = process.env["OPENCODE_MODELS_PATH"]

View File

@@ -16,8 +16,6 @@ import { gitlabAuthPlugin as GitlabAuthPlugin } from "@gitlab/opencode-gitlab-au
export namespace Plugin {
const log = Log.create({ service: "plugin" })
const BUILTIN = ["opencode-anthropic-auth@0.0.13"]
// Built-in plugins that are directly imported (not installed from npm)
const INTERNAL_PLUGINS: PluginInstance[] = [CodexAuthPlugin, CopilotAuthPlugin, GitlabAuthPlugin]
@@ -47,9 +45,6 @@ export namespace Plugin {
let plugins = config.plugin ?? []
if (plugins.length) await Config.waitForDependencies()
if (!Flag.OPENCODE_DISABLE_DEFAULT_PLUGINS) {
plugins = [...BUILTIN, ...plugins]
}
for (let plugin of plugins) {
// ignore old codex plugin since it is supported first party now
@@ -59,24 +54,7 @@ export namespace Plugin {
const lastAtIndex = plugin.lastIndexOf("@")
const pkg = lastAtIndex > 0 ? plugin.substring(0, lastAtIndex) : plugin
const version = lastAtIndex > 0 ? plugin.substring(lastAtIndex + 1) : "latest"
const builtin = BUILTIN.some((x) => x.startsWith(pkg + "@"))
plugin = await BunProc.install(pkg, version).catch((err) => {
if (!builtin) throw err
const message = err instanceof Error ? err.message : String(err)
log.error("failed to install builtin plugin", {
pkg,
version,
error: message,
})
Bus.publish(Session.Event.Error, {
error: new NamedError.Unknown({
message: `Failed to install built-in plugin ${pkg}@${version}: ${message}`,
}).toObject(),
})
return ""
})
plugin = await BunProc.install(pkg, version)
if (!plugin) continue
}
const mod = await import(plugin)

View File

@@ -122,8 +122,7 @@ export namespace Provider {
autoload: false,
options: {
headers: {
"anthropic-beta":
"claude-code-20250219,interleaved-thinking-2025-05-14,fine-grained-tool-streaming-2025-05-14",
"anthropic-beta": "interleaved-thinking-2025-05-14,fine-grained-tool-streaming-2025-05-14",
},
},
}

View File

@@ -333,6 +333,10 @@ export namespace ProviderTransform {
if (!model.capabilities.reasoning) return {}
const id = model.id.toLowerCase()
const isAnthropicAdaptive = ["opus-4-6", "opus-4.6", "sonnet-4-6", "sonnet-4.6"].some((v) =>
model.api.id.includes(v),
)
const adaptiveEfforts = ["low", "medium", "high", "max"]
if (
id.includes("deepseek") ||
id.includes("minimax") ||
@@ -366,6 +370,19 @@ export namespace ProviderTransform {
case "@ai-sdk/gateway":
if (model.id.includes("anthropic")) {
if (isAnthropicAdaptive) {
return Object.fromEntries(
adaptiveEfforts.map((effort) => [
effort,
{
thinking: {
type: "adaptive",
},
effort,
},
]),
)
}
return {
high: {
thinking: {
@@ -502,10 +519,9 @@ export namespace ProviderTransform {
case "@ai-sdk/google-vertex/anthropic":
// https://v5.ai-sdk.dev/providers/ai-sdk-providers/google-vertex#anthropic-provider
if (model.api.id.includes("opus-4-6") || model.api.id.includes("opus-4.6")) {
const efforts = ["low", "medium", "high", "max"]
if (isAnthropicAdaptive) {
return Object.fromEntries(
efforts.map((effort) => [
adaptiveEfforts.map((effort) => [
effort,
{
thinking: {
@@ -534,10 +550,9 @@ export namespace ProviderTransform {
case "@ai-sdk/amazon-bedrock":
// https://v5.ai-sdk.dev/providers/ai-sdk-providers/amazon-bedrock
if (model.api.id.includes("opus-4-6") || model.api.id.includes("opus-4.6")) {
const efforts = ["low", "medium", "high", "max"]
if (isAnthropicAdaptive) {
return Object.fromEntries(
efforts.map((effort) => [
adaptiveEfforts.map((effort) => [
effort,
{
reasoningConfig: {

View File

@@ -6,6 +6,7 @@ import { Worktree } from "../../worktree"
import { Instance } from "../../project/instance"
import { Project } from "../../project/project"
import { MCP } from "../../mcp"
import { Session } from "../../session"
import { zodToJsonSchema } from "zod-to-json-schema"
import { errors } from "../error"
import { lazy } from "../../util/lazy"
@@ -184,6 +185,65 @@ export const ExperimentalRoutes = lazy(() =>
return c.json(true)
},
)
.get(
"/session",
describeRoute({
summary: "List sessions",
description:
"Get a list of all OpenCode sessions across projects, sorted by most recently updated. Archived sessions are excluded by default.",
operationId: "experimental.session.list",
responses: {
200: {
description: "List of sessions",
content: {
"application/json": {
schema: resolver(Session.GlobalInfo.array()),
},
},
},
},
}),
validator(
"query",
z.object({
directory: z.string().optional().meta({ description: "Filter sessions by project directory" }),
roots: z.coerce.boolean().optional().meta({ description: "Only return root sessions (no parentID)" }),
start: z.coerce
.number()
.optional()
.meta({ description: "Filter sessions updated on or after this timestamp (milliseconds since epoch)" }),
cursor: z.coerce
.number()
.optional()
.meta({ description: "Return sessions updated before this timestamp (milliseconds since epoch)" }),
search: z.string().optional().meta({ description: "Filter sessions by title (case-insensitive)" }),
limit: z.coerce.number().optional().meta({ description: "Maximum number of sessions to return" }),
archived: z.coerce.boolean().optional().meta({ description: "Include archived sessions (default false)" }),
}),
),
async (c) => {
const query = c.req.valid("query")
const limit = query.limit ?? 100
const sessions: Session.GlobalInfo[] = []
for await (const session of Session.listGlobal({
directory: query.directory,
roots: query.roots,
start: query.start,
cursor: query.cursor,
search: query.search,
limit: limit + 1,
archived: query.archived,
})) {
sessions.push(session)
}
const hasMore = sessions.length > limit
const list = hasMore ? sessions.slice(0, limit) : sessions
if (hasMore && list.length > 0) {
c.header("x-next-cursor", String(list[list.length - 1].time.updated))
}
return c.json(list)
},
)
.get(
"/resource",
describeRoute({
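
A rough client-side sketch of the cursor pagination this route exposes. It assumes the experimental routes are mounted under /experimental on a locally running server (base URL, port, and mount path are assumptions); the limit and cursor query parameters and the x-next-cursor response header come from the handler above.

// Hypothetical paging loop over the experimental session list.
const base = "http://localhost:4096/experimental" // assumed base URL and mount path
let cursor: string | undefined
do {
  const params = new URLSearchParams({ roots: "true", limit: "50" })
  if (cursor) params.set("cursor", cursor)
  const res = await fetch(`${base}/session?${params}`)
  const sessions: any[] = await res.json()
  for (const session of sessions) console.log(session.project?.name, session.title)
  // The header is only set when another page exists.
  cursor = res.headers.get("x-next-cursor") ?? undefined
} while (cursor)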

View File

@@ -10,8 +10,10 @@ import { Flag } from "../flag/flag"
import { Identifier } from "../id/id"
import { Installation } from "../installation"
import { Database, NotFoundError, eq, and, or, gte, isNull, desc, like } from "../storage/db"
import { Database, NotFoundError, eq, and, or, gte, isNull, desc, like, inArray, lt } from "../storage/db"
import type { SQL } from "../storage/db"
import { SessionTable, MessageTable, PartTable } from "./session.sql"
import { ProjectTable } from "../project/project.sql"
import { Storage } from "@/storage/storage"
import { Log } from "../util/log"
import { MessageV2 } from "./message-v2"
@@ -154,6 +156,24 @@ export namespace Session {
})
export type Info = z.output<typeof Info>
export const ProjectInfo = z
.object({
id: z.string(),
name: z.string().optional(),
worktree: z.string(),
})
.meta({
ref: "ProjectSummary",
})
export type ProjectInfo = z.output<typeof ProjectInfo>
export const GlobalInfo = Info.extend({
project: ProjectInfo.nullable(),
}).meta({
ref: "GlobalSession",
})
export type GlobalInfo = z.output<typeof GlobalInfo>
export const Event = {
Created: BusEvent.define(
"session.created",
@@ -544,6 +564,75 @@ export namespace Session {
}
}
export function* listGlobal(input?: {
directory?: string
roots?: boolean
start?: number
cursor?: number
search?: string
limit?: number
archived?: boolean
}) {
const conditions: SQL[] = []
if (input?.directory) {
conditions.push(eq(SessionTable.directory, input.directory))
}
if (input?.roots) {
conditions.push(isNull(SessionTable.parent_id))
}
if (input?.start) {
conditions.push(gte(SessionTable.time_updated, input.start))
}
if (input?.cursor) {
conditions.push(lt(SessionTable.time_updated, input.cursor))
}
if (input?.search) {
conditions.push(like(SessionTable.title, `%${input.search}%`))
}
if (!input?.archived) {
conditions.push(isNull(SessionTable.time_archived))
}
const limit = input?.limit ?? 100
const rows = Database.use((db) => {
const query =
conditions.length > 0
? db
.select()
.from(SessionTable)
.where(and(...conditions))
: db.select().from(SessionTable)
return query.orderBy(desc(SessionTable.time_updated), desc(SessionTable.id)).limit(limit).all()
})
const ids = [...new Set(rows.map((row) => row.project_id))]
const projects = new Map<string, ProjectInfo>()
if (ids.length > 0) {
const items = Database.use((db) =>
db
.select({ id: ProjectTable.id, name: ProjectTable.name, worktree: ProjectTable.worktree })
.from(ProjectTable)
.where(inArray(ProjectTable.id, ids))
.all(),
)
for (const item of items) {
projects.set(item.id, {
id: item.id,
name: item.name ?? undefined,
worktree: item.worktree,
})
}
}
for (const row of rows) {
const project = projects.get(row.project_id) ?? null
yield { ...fromRow(row), project }
}
}
export const children = fn(Identifier.schema("session"), async (parentID) => {
const project = Instance.project
const rows = Database.use((db) =>

View File

@@ -210,18 +210,12 @@ export namespace LLM {
maxOutputTokens,
abortSignal: input.abort,
headers: {
...(input.model.providerID.startsWith("opencode")
? {
"x-opencode-project": Instance.project.id,
"x-opencode-session": input.sessionID,
"x-opencode-request": input.user.id,
"x-opencode-client": Flag.OPENCODE_CLIENT,
}
: input.model.providerID !== "anthropic"
? {
"User-Agent": `opencode/${Installation.VERSION}`,
}
: undefined),
...(input.model.providerID.startsWith("opencode") && {
"x-opencode-project": Instance.project.id,
"x-opencode-session": input.sessionID,
"x-opencode-request": input.user.id,
"x-opencode-client": Flag.OPENCODE_CLIENT,
}),
...input.model.headers,
...headers,
},

View File

@@ -1320,33 +1320,7 @@ export namespace SessionPrompt {
const userMessage = input.messages.findLast((msg) => msg.info.role === "user")
if (!userMessage) return input.messages
// Original logic when experimental plan mode is disabled
if (!Flag.OPENCODE_EXPERIMENTAL_PLAN_MODE) {
if (input.agent.name === "plan") {
userMessage.parts.push({
id: Identifier.ascending("part"),
messageID: userMessage.info.id,
sessionID: userMessage.info.sessionID,
type: "text",
text: PROMPT_PLAN,
synthetic: true,
})
}
const wasPlan = input.messages.some((msg) => msg.info.role === "assistant" && msg.info.agent === "plan")
if (wasPlan && input.agent.name === "build") {
userMessage.parts.push({
id: Identifier.ascending("part"),
messageID: userMessage.info.id,
sessionID: userMessage.info.sessionID,
type: "text",
text: BUILD_SWITCH,
synthetic: true,
})
}
return input.messages
}
// New plan mode logic when flag is enabled
// Plan mode logic
const assistantMessage = input.messages.findLast((msg) => msg.info.role === "assistant")
// Switching from plan mode to build mode

View File

@@ -1,166 +0,0 @@
You are an interactive CLI tool that helps users with software engineering tasks. Use the instructions below and the tools available to you to assist the user.
IMPORTANT: Assist with defensive security tasks only. Refuse to create, modify, or improve code that may be used maliciously. Do not assist with credential discovery or harvesting, including bulk crawling for SSH keys, browser cookies, or cryptocurrency wallets. Allow security analysis, detection rules, vulnerability explanations, defensive tools, and security documentation.
IMPORTANT: You must NEVER generate or guess URLs for the user unless you are confident that the URLs are for helping the user with programming. You may use URLs provided by the user in their messages or local files.
If the user asks for help or wants to give feedback inform them of the following:
- /help: Get help with using Claude Code
- To give feedback, users should report the issue at https://github.com/anthropics/claude-code/issues
When the user directly asks about Claude Code (eg. "can Claude Code do...", "does Claude Code have..."), or asks in second person (eg. "are you able...", "can you do..."), or asks how to use a specific Claude Code feature (eg. implement a hook, or write a slash command), use the WebFetch tool to gather information to answer the question from Claude Code docs. The list of available docs is available at https://docs.claude.com/en/docs/claude-code/claude_code_docs_map.md.
# Tone and style
You should be concise, direct, and to the point, while providing complete information and matching the level of detail you provide in your response with the level of complexity of the user's query or the work you have completed.
A concise response is generally less than 4 lines, not including tool calls or code generated. You should provide more detail when the task is complex or when the user asks you to.
IMPORTANT: You should minimize output tokens as much as possible while maintaining helpfulness, quality, and accuracy. Only address the specific task at hand, avoiding tangential information unless absolutely critical for completing the request. If you can answer in 1-3 sentences or a short paragraph, please do.
IMPORTANT: You should NOT answer with unnecessary preamble or postamble (such as explaining your code or summarizing your action), unless the user asks you to.
Do not add additional code explanation summary unless requested by the user. After working on a file, briefly confirm that you have completed the task, rather than providing an explanation of what you did.
Answer the user's question directly, avoiding any elaboration, explanation, introduction, conclusion, or excessive details. Brief answers are best, but be sure to provide complete information. You MUST avoid extra preamble before/after your response, such as "The answer is <answer>.", "Here is the content of the file..." or "Based on the information provided, the answer is..." or "Here is what I will do next...".
Here are some examples to demonstrate appropriate verbosity:
<example>
user: 2 + 2
assistant: 4
</example>
<example>
user: what is 2+2?
assistant: 4
</example>
<example>
user: is 11 a prime number?
assistant: Yes
</example>
<example>
user: what command should I run to list files in the current directory?
assistant: ls
</example>
<example>
user: what command should I run to watch files in the current directory?
assistant: [runs ls to list the files in the current directory, then read docs/commands in the relevant file to find out how to watch files]
npm run dev
</example>
<example>
user: How many golf balls fit inside a jetta?
assistant: 150000
</example>
<example>
user: what files are in the directory src/?
assistant: [runs ls and sees foo.c, bar.c, baz.c]
user: which file contains the implementation of foo?
assistant: src/foo.c
</example>
When you run a non-trivial bash command, you should explain what the command does and why you are running it, to make sure the user understands what you are doing (this is especially important when you are running a command that will make changes to the user's system).
Remember that your output will be displayed on a command line interface. Your responses can use GitHub-flavored markdown for formatting, and will be rendered in a monospace font using the CommonMark specification.
Output text to communicate with the user; all text you output outside of tool use is displayed to the user. Only use tools to complete tasks. Never use tools like Bash or code comments as means to communicate with the user during the session.
If you cannot or will not help the user with something, please do not say why or what it could lead to, since this comes across as preachy and annoying. Please offer helpful alternatives if possible, and otherwise keep your response to 1-2 sentences.
Only use emojis if the user explicitly requests it. Avoid using emojis in all communication unless asked.
IMPORTANT: Keep your responses short, since they will be displayed on a command line interface.
# Proactiveness
You are allowed to be proactive, but only when the user asks you to do something. You should strive to strike a balance between:
- Doing the right thing when asked, including taking actions and follow-up actions
- Not surprising the user with actions you take without asking
For example, if the user asks you how to approach something, you should do your best to answer their question first, and not immediately jump into taking actions.
# Professional objectivity
Prioritize technical accuracy and truthfulness over validating the user's beliefs. Focus on facts and problem-solving, providing direct, objective technical info without any unnecessary superlatives, praise, or emotional validation. It is best for the user if Claude honestly applies the same rigorous standards to all ideas and disagrees when necessary, even if it may not be what the user wants to hear. Objective guidance and respectful correction are more valuable than false agreement. Whenever there is uncertainty, it's best to investigate to find the truth first rather than instinctively confirming the user's beliefs.
# Task Management
You have access to the TodoWrite tools to help you manage and plan tasks. Use these tools VERY frequently to ensure that you are tracking your tasks and giving the user visibility into your progress.
These tools are also EXTREMELY helpful for planning tasks, and for breaking down larger complex tasks into smaller steps. If you do not use this tool when planning, you may forget to do important tasks - and that is unacceptable.
It is critical that you mark todos as completed as soon as you are done with a task. Do not batch up multiple tasks before marking them as completed.
Examples:
<example>
user: Run the build and fix any type errors
assistant: I'm going to use the TodoWrite tool to write the following items to the todo list:
- Run the build
- Fix any type errors
I'm now going to run the build using Bash.
Looks like I found 10 type errors. I'm going to use the TodoWrite tool to write 10 items to the todo list.
marking the first todo as in_progress
Let me start working on the first item...
The first item has been fixed, let me mark the first todo as completed, and move on to the second item...
..
..
</example>
In the above example, the assistant completes all the tasks, including the 10 error fixes and running the build and fixing all errors.
<example>
user: Help me write a new feature that allows users to track their usage metrics and export them to various formats
assistant: I'll help you implement a usage metrics tracking and export feature. Let me first use the TodoWrite tool to plan this task.
Adding the following todos to the todo list:
1. Research existing metrics tracking in the codebase
2. Design the metrics collection system
3. Implement core metrics tracking functionality
4. Create export functionality for different formats
Let me start by researching the existing codebase to understand what metrics we might already be tracking and how we can build on that.
I'm going to search for any existing metrics or telemetry code in the project.
I've found some existing telemetry code. Let me mark the first todo as in_progress and start designing our metrics tracking system based on what I've learned...
[Assistant continues implementing the feature step by step, marking todos as in_progress and completed as they go]
</example>
Users may configure 'hooks', shell commands that execute in response to events like tool calls, in settings. Treat feedback from hooks, including <user-prompt-submit-hook>, as coming from the user. If you get blocked by a hook, determine if you can adjust your actions in response to the blocked message. If not, ask the user to check their hooks configuration.
# Doing tasks
The user will primarily request you perform software engineering tasks. This includes solving bugs, adding new functionality, refactoring code, explaining code, and more. For these tasks the following steps are recommended:
- Use the TodoWrite tool to plan the task if required
- Tool results and user messages may include <system-reminder> tags. <system-reminder> tags contain useful information and reminders. They are automatically added by the system, and bear no direct relation to the specific tool results or user messages in which they appear.
# Tool usage policy
- When doing file search, prefer to use the Task tool in order to reduce context usage.
- You should proactively use the Task tool with specialized agents when the task at hand matches the agent's description.
- When WebFetch returns a message about a redirect to a different host, you should immediately make a new WebFetch request with the redirect URL provided in the response.
- You have the capability to call multiple tools in a single response. When multiple independent pieces of information are requested, batch your tool calls together for optimal performance. When making multiple bash tool calls, you MUST send a single message with multiple tools calls to run the calls in parallel. For example, if you need to run "git status" and "git diff", send a single message with two tool calls to run the calls in parallel.
- If the user specifies that they want you to run tools "in parallel", you MUST send a single message with multiple tool use content blocks. For example, if you need to launch multiple agents in parallel, send a single message with multiple Task tool calls.
- Use specialized tools instead of bash commands when possible, as this provides a better user experience. For file operations, use dedicated tools: Read for reading files instead of cat/head/tail, Edit for editing instead of sed/awk, and Write for creating files instead of cat with heredoc or echo redirection. Reserve bash tools exclusively for actual system commands and terminal operations that require shell execution. NEVER use bash echo or other command-line tools to communicate thoughts, explanations, or instructions to the user. Output all communication directly in your response text instead.
Here is useful information about the environment you are running in:
<env>
Working directory: /home/thdxr/dev/projects/anomalyco/opencode/packages/opencode
Is directory a git repo: Yes
Platform: linux
OS Version: Linux 6.12.4-arch1-1
Today's date: 2025-09-30
</env>
You are powered by the model named Sonnet 4.5. The exact model ID is claude-sonnet-4-5-20250929.
Assistant knowledge cutoff is January 2025.
IMPORTANT: Assist with defensive security tasks only. Refuse to create, modify, or improve code that may be used maliciously. Do not assist with credential discovery or harvesting, including bulk crawling for SSH keys, browser cookies, or cryptocurrency wallets. Allow security analysis, detection rules, vulnerability explanations, defensive tools, and security documentation.
IMPORTANT: Always use the TodoWrite tool to plan and track tasks throughout the conversation.
# Code References
When referencing specific functions or pieces of code include the pattern `file_path:line_number` to allow the user to easily navigate to the source code location.
<example>
user: Where are errors from the client handled?
assistant: Clients are marked as failed in the `connectToServer` function in src/services/process.ts:712.
</example>

View File

@@ -117,7 +117,7 @@ export namespace ToolRegistry {
ApplyPatchTool,
...(Flag.OPENCODE_EXPERIMENTAL_LSP_TOOL ? [LspTool] : []),
...(config.experimental?.batch_tool === true ? [BatchTool] : []),
...(Flag.OPENCODE_EXPERIMENTAL_PLAN_MODE && Flag.OPENCODE_CLIENT === "cli" ? [PlanExitTool, PlanEnterTool] : []),
...(Flag.OPENCODE_CLIENT === "cli" ? [PlanExitTool, PlanEnterTool] : []),
...custom,
]
}

View File

@@ -38,7 +38,7 @@ test("build agent has correct default properties", async () => {
expect(build).toBeDefined()
expect(build?.mode).toBe("primary")
expect(build?.native).toBe(true)
expect(evalPerm(build, "edit")).toBe("allow")
expect(evalPerm(build, "edit")).toBe("ask")
expect(evalPerm(build, "bash")).toBe("allow")
},
})
@@ -217,8 +217,8 @@ test("agent permission config merges with defaults", async () => {
expect(build).toBeDefined()
// Specific pattern is denied
expect(PermissionNext.evaluate("bash", "rm -rf *", build!.permission).action).toBe("deny")
// Edit still allowed
expect(evalPerm(build, "edit")).toBe("allow")
// Edit still asks (default behavior)
expect(evalPerm(build, "edit")).toBe("ask")
},
})
})

View File

@@ -1705,6 +1705,66 @@ describe("ProviderTransform.variants", () => {
})
describe("@ai-sdk/gateway", () => {
test("anthropic sonnet 4.6 models return adaptive thinking options", () => {
const model = createMockModel({
id: "anthropic/claude-sonnet-4-6",
providerID: "gateway",
api: {
id: "anthropic/claude-sonnet-4-6",
url: "https://gateway.ai",
npm: "@ai-sdk/gateway",
},
})
const result = ProviderTransform.variants(model)
expect(Object.keys(result)).toEqual(["low", "medium", "high", "max"])
expect(result.medium).toEqual({
thinking: {
type: "adaptive",
},
effort: "medium",
})
})
test("anthropic sonnet 4.6 dot-format models return adaptive thinking options", () => {
const model = createMockModel({
id: "anthropic/claude-sonnet-4-6",
providerID: "gateway",
api: {
id: "anthropic/claude-sonnet-4.6",
url: "https://gateway.ai",
npm: "@ai-sdk/gateway",
},
})
const result = ProviderTransform.variants(model)
expect(Object.keys(result)).toEqual(["low", "medium", "high", "max"])
expect(result.medium).toEqual({
thinking: {
type: "adaptive",
},
effort: "medium",
})
})
test("anthropic opus 4.6 dot-format models return adaptive thinking options", () => {
const model = createMockModel({
id: "anthropic/claude-opus-4-6",
providerID: "gateway",
api: {
id: "anthropic/claude-opus-4.6",
url: "https://gateway.ai",
npm: "@ai-sdk/gateway",
},
})
const result = ProviderTransform.variants(model)
expect(Object.keys(result)).toEqual(["low", "medium", "high", "max"])
expect(result.high).toEqual({
thinking: {
type: "adaptive",
},
effort: "high",
})
})
test("anthropic models return anthropic thinking options", () => {
const model = createMockModel({
id: "anthropic/claude-sonnet-4",
@@ -2064,6 +2124,26 @@ describe("ProviderTransform.variants", () => {
})
describe("@ai-sdk/anthropic", () => {
test("sonnet 4.6 returns adaptive thinking options", () => {
const model = createMockModel({
id: "anthropic/claude-sonnet-4-6",
providerID: "anthropic",
api: {
id: "claude-sonnet-4-6",
url: "https://api.anthropic.com",
npm: "@ai-sdk/anthropic",
},
})
const result = ProviderTransform.variants(model)
expect(Object.keys(result)).toEqual(["low", "medium", "high", "max"])
expect(result.high).toEqual({
thinking: {
type: "adaptive",
},
effort: "high",
})
})
test("returns high and max with thinking config", () => {
const model = createMockModel({
id: "anthropic/claude-4",
@@ -2092,6 +2172,26 @@ describe("ProviderTransform.variants", () => {
})
describe("@ai-sdk/amazon-bedrock", () => {
test("anthropic sonnet 4.6 returns adaptive reasoning options", () => {
const model = createMockModel({
id: "bedrock/anthropic-claude-sonnet-4-6",
providerID: "bedrock",
api: {
id: "anthropic.claude-sonnet-4-6",
url: "https://bedrock.amazonaws.com",
npm: "@ai-sdk/amazon-bedrock",
},
})
const result = ProviderTransform.variants(model)
expect(Object.keys(result)).toEqual(["low", "medium", "high", "max"])
expect(result.max).toEqual({
reasoningConfig: {
type: "adaptive",
maxReasoningEffort: "max",
},
})
})
test("returns WIDELY_SUPPORTED_EFFORTS with reasoningConfig", () => {
const model = createMockModel({
id: "bedrock/llama-4",

View File

@@ -0,0 +1,89 @@
import { describe, expect, test } from "bun:test"
import { Instance } from "../../src/project/instance"
import { Project } from "../../src/project/project"
import { Session } from "../../src/session"
import { Log } from "../../src/util/log"
import { tmpdir } from "../fixture/fixture"
Log.init({ print: false })
describe("Session.listGlobal", () => {
test("lists sessions across projects with project metadata", async () => {
await using first = await tmpdir({ git: true })
await using second = await tmpdir({ git: true })
const firstSession = await Instance.provide({
directory: first.path,
fn: async () => Session.create({ title: "first-session" }),
})
const secondSession = await Instance.provide({
directory: second.path,
fn: async () => Session.create({ title: "second-session" }),
})
const sessions = [...Session.listGlobal({ limit: 200 })]
const ids = sessions.map((session) => session.id)
expect(ids).toContain(firstSession.id)
expect(ids).toContain(secondSession.id)
const firstProject = Project.get(firstSession.projectID)
const secondProject = Project.get(secondSession.projectID)
const firstItem = sessions.find((session) => session.id === firstSession.id)
const secondItem = sessions.find((session) => session.id === secondSession.id)
expect(firstItem?.project?.id).toBe(firstProject?.id)
expect(firstItem?.project?.worktree).toBe(firstProject?.worktree)
expect(secondItem?.project?.id).toBe(secondProject?.id)
expect(secondItem?.project?.worktree).toBe(secondProject?.worktree)
})
test("excludes archived sessions by default", async () => {
await using tmp = await tmpdir({ git: true })
const archived = await Instance.provide({
directory: tmp.path,
fn: async () => Session.create({ title: "archived-session" }),
})
await Instance.provide({
directory: tmp.path,
fn: async () => Session.setArchived({ sessionID: archived.id, time: Date.now() }),
})
const sessions = [...Session.listGlobal({ limit: 200 })]
const ids = sessions.map((session) => session.id)
expect(ids).not.toContain(archived.id)
const allSessions = [...Session.listGlobal({ limit: 200, archived: true })]
const allIds = allSessions.map((session) => session.id)
expect(allIds).toContain(archived.id)
})
test("supports cursor pagination", async () => {
await using tmp = await tmpdir({ git: true })
const first = await Instance.provide({
directory: tmp.path,
fn: async () => Session.create({ title: "page-one" }),
})
await new Promise((resolve) => setTimeout(resolve, 5))
const second = await Instance.provide({
directory: tmp.path,
fn: async () => Session.create({ title: "page-two" }),
})
const page = [...Session.listGlobal({ directory: tmp.path, limit: 1 })]
expect(page.length).toBe(1)
expect(page[0].id).toBe(second.id)
const next = [...Session.listGlobal({ directory: tmp.path, limit: 10, cursor: page[0].time.updated })]
const ids = next.map((session) => session.id)
expect(ids).toContain(first.id)
expect(ids).not.toContain(second.id)
})
})
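
Given the cursor semantics exercised above (newest-first ordering, with the cursor taken from a previous item's `time.updated` and that item excluded from the next page), a caller could drain every session page by page roughly as follows. This is a usage sketch, not part of the new API surface.

```ts
// Same import as the test file above; path assumed to match that context.
import { Session } from "../../src/session"

// Hedged usage sketch: page through Session.listGlobal(), assuming items arrive
// newest-first and the cursor excludes the entry it points at, as the
// pagination test above suggests.
function* allSessions(pageSize = 50) {
  let cursor: number | undefined
  while (true) {
    const page = [...Session.listGlobal({ limit: pageSize, cursor })]
    if (page.length === 0) return
    yield* page
    // The next page starts strictly after the oldest item seen so far.
    cursor = page[page.length - 1].time.updated
  }
}
```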

View File

@@ -307,7 +307,6 @@ describe("session.llm.stream", () => {
expect(url.pathname.startsWith("/v1/")).toBe(true)
expect(url.pathname.endsWith("/chat/completions")).toBe(true)
expect(headers.get("Authorization")).toBe("Bearer test-key")
expect(headers.get("User-Agent") ?? "").toMatch(/^opencode\//)
expect(body.model).toBe(resolved.api.id)
expect(body.temperature).toBe(0.4)

View File

@@ -1,7 +1,7 @@
{
"$schema": "https://json.schemastore.org/package.json",
"name": "@opencode-ai/plugin",
"version": "1.2.7",
"version": "1.2.8",
"type": "module",
"license": "MIT",
"scripts": {

View File

@@ -1,7 +1,7 @@
{
"$schema": "https://json.schemastore.org/package.json",
"name": "@opencode-ai/sdk",
"version": "1.2.7",
"version": "1.2.8",
"type": "module",
"license": "MIT",
"scripts": {

View File

@@ -1067,6 +1067,10 @@ export type KeybindsConfig = {
* Toggle model favorite status
*/
model_favorite_toggle?: string
/**
* Toggle showing all models
*/
model_show_all_toggle?: string
/**
* Share current session
*/
@@ -1183,6 +1187,10 @@ export type KeybindsConfig = {
* Previous agent
*/
agent_cycle_reverse?: string
/**
* Toggle auto-accept mode for permissions
*/
permission_auto_accept_toggle?: string
/**
* Cycle model variants
*/
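
Both additions are optional fields on `KeybindsConfig`, so enabling them is just a matter of assigning key strings. The bindings below are illustrative only, not defaults from the source, and the type is assumed to be importable from the generated SDK types shown above.

```ts
// Hedged sketch: setting the two new optional keybinds. The key strings are
// made up for illustration; KeybindsConfig is the type declared above.
const keybinds: KeybindsConfig = {
  model_show_all_toggle: "ctrl+shift+m",
  permission_auto_accept_toggle: "ctrl+shift+a",
}
```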

View File

@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/slack",
"version": "1.2.7",
"version": "1.2.8",
"type": "module",
"license": "MIT",
"scripts": {

View File

@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/ui",
"version": "1.2.7",
"version": "1.2.8",
"type": "module",
"license": "MIT",
"exports": {

View File

@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/util",
"version": "1.2.7",
"version": "1.2.8",
"private": true,
"type": "module",
"license": "MIT",

View File

@@ -2,7 +2,7 @@
"name": "@opencode-ai/web",
"type": "module",
"license": "MIT",
"version": "1.2.7",
"version": "1.2.8",
"scripts": {
"dev": "astro dev",
"dev:remote": "VITE_API_URL=https://api.opencode.ai astro dev",

View File

@@ -600,4 +600,3 @@ These environment variables enable experimental features that may change or be r
| `OPENCODE_EXPERIMENTAL_EXA` | boolean | Enable experimental Exa features |
| `OPENCODE_EXPERIMENTAL_LSP_TY` | boolean | Enable experimental LSP type checking |
| `OPENCODE_EXPERIMENTAL_MARKDOWN` | boolean | Enable experimental markdown features |
| `OPENCODE_EXPERIMENTAL_PLAN_MODE` | boolean | Enable plan mode |

View File

@@ -266,8 +266,6 @@ For custom inference profiles, use the model and provider name in the key and se
```txt
┌ Select auth method
│ Claude Pro/Max
│ Create an API Key
│ Manually enter API Key
```
@@ -279,14 +277,16 @@ For custom inference profiles, use the model and provider name in the key and se
```
:::info
Using your Claude Pro/Max subscription in OpenCode is not officially supported by [Anthropic](https://anthropic.com).
:::
There are plugins that allow you to use your Claude Pro/Max models with
OpenCode. Anthropic explicitly prohibits this.
##### Using API keys
Other companies support freedom of choice with developer tooling; you can use
the following subscriptions in OpenCode with zero setup:
You can also select **Create an API Key** if you don't have a Pro/Max subscription. It'll also open your browser and ask you to login to Anthropic and give you a code you can paste in your terminal.
Or if you already have an API key, you can select **Manually enter API Key** and paste it in your terminal.
- ChatGPT Plus
- GitHub Copilot
- GitLab Duo
:::
---

View File

@@ -2,7 +2,7 @@
"name": "opencode",
"displayName": "opencode",
"description": "opencode for VS Code",
"version": "1.2.7",
"version": "1.2.8",
"publisher": "sst-dev",
"repository": {
"type": "git",