Compare commits


40 Commits

Author SHA1 Message Date
Kit Langton
8d2385ad49 test: finish HTTP mock processor coverage 2026-03-31 20:36:18 -04:00
Kit Langton
7f6a5bb2c8 test: migrate processor tests to HTTP mock LLM server
Replace the custom TestLLM Effect service with the real LLM layer +
TestLLMServer HTTP mock for 9 of 10 processor tests. Tests now exercise
the full HTTP→SSE→AI SDK→processor pipeline.

- Export Provider.defaultLayer for test layer composition
- Add boot() helper for common service access (processor, session, provider)
- Extend TestLLMServer with usage support and httpError step type
- Tool abort test registers a real tool with hanging execute
- Reasoning test stays with in-process TestLLM (needs fine-grained events)
2026-03-31 19:59:56 -04:00
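The HTTP→SSE→AI SDK path above depends on the mock emitting OpenAI-compatible SSE frames. As a rough illustration (the step names and shapes here are hypothetical, not the actual TestLLMServer DSL), a scripted response can be rendered like this:

```typescript
// Illustrative sketch only: serialize scripted steps as OpenAI-compatible
// SSE chunks so the real HTTP -> SSE -> AI SDK client path is exercised.
type Step =
  | { type: "text"; text: string }
  | { type: "usage"; inputTokens: number; outputTokens: number }

function sseChunk(data: unknown): string {
  // Each SSE event is a `data:` line followed by a blank line.
  return `data: ${JSON.stringify(data)}\n\n`
}

function renderSteps(steps: Step[]): string {
  const chunks = steps.map((step) =>
    step.type === "text"
      ? sseChunk({ choices: [{ delta: { content: step.text } }] })
      : sseChunk({
          choices: [],
          usage: { prompt_tokens: step.inputTokens, completion_tokens: step.outputTokens },
        }),
  )
  // OpenAI-compatible streams end with a literal [DONE] sentinel.
  return chunks.join("") + "data: [DONE]\n\n"
}
```

Serving such a body with `Content-Type: text/event-stream` is enough for an OpenAI-compatible client to consume it as a streamed completion.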
Kit Langton
537cc32bf0 test: migrate processor tests to HTTP mock LLM server
Replace the custom TestLLM Effect service with the real LLM layer +
TestLLMServer HTTP mock. Tests now exercise the full HTTP→SSE→AI SDK
pipeline instead of injecting Effect streams directly.

- Extend TestLLMServer with usage support on text responses and
  httpError step type for non-200 responses
- Drop reasoning test (can't produce reasoning events via
  @ai-sdk/openai-compatible SSE)
- 9 tests pass, covering: text capture, token overflow, error retry,
  structured errors, context overflow, abort/interrupt cleanup
2026-03-31 19:21:27 -04:00
Kit Langton
82da702f64 refactor: tighten instance context helper fallbacks 2026-03-31 17:32:24 -04:00
Kit Langton
90469bbb7e refactor: simplify instance context helpers in prompt tests 2026-03-31 16:42:20 -04:00
Kit Langton
4ff0fbc043 fix: retry scoped tempdir cleanup on windows 2026-03-31 16:42:19 -04:00
Kit Langton
e24369eaf1 fix: break installation cycle in database context binding 2026-03-31 16:42:18 -04:00
Kit Langton
825f51c39f fix: restore instance context in deferred database callbacks 2026-03-31 16:42:18 -04:00
Kit Langton
191a747405 fix: propagate InstanceRef across static function boundaries
- makeRuntime.provide reads InstanceRef from current Effect fiber when
  ALS is unavailable, bridging static function calls (like Bus.publish)
  that create new fibers from inside Effect code
- Database.transaction preserves Instance ALS via Instance.bind on the
  bun:sqlite transaction callback (native fn loses ALS)
- Instance.restore helper for bridging Effect→sync code with ALS
- InstanceState.withALS bridges InstanceRef back to ALS for sync callers
- prompt.ts: InstructionPrompt.clear wrapped with withALS
- Remove ALL provideInstance(dir) wrappers from prompt-effect tests
2026-03-31 16:42:17 -04:00
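The ALS-loss problem these fixes address can be shown in miniature with Node's AsyncLocalStorage: a callback handed to a context-free caller (a native function, a new fiber) loses the store unless it is rebound. This is only a sketch of the idea behind Instance.bind, with hypothetical names:

```typescript
import { AsyncLocalStorage } from "node:async_hooks"

const als = new AsyncLocalStorage<{ directory: string }>()

// bind(): capture the current ALS store and re-enter it when the callback
// is eventually invoked, even from a call site with no ALS context.
function bind<A extends unknown[], R>(fn: (...args: A) => R): (...args: A) => R {
  const store = als.getStore()
  return (...args) => als.run(store!, () => fn(...args))
}

let seen: string | undefined
let bound!: () => void

als.run({ directory: "/tmp/instance-a" }, () => {
  bound = bind(() => {
    seen = als.getStore()?.directory
  })
})

// Invoked outside als.run — without bind(), getStore() would be undefined here.
bound()
```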
Kit Langton
cc412f3014 refactor: migrate Instance ALS reads to InstanceRef in Effect services
Migrate 16 direct Instance.directory/worktree/project reads inside
Effect code to use InstanceState.directory/context helpers that check
the InstanceRef first and fall back to ALS.

- Export InstanceState.directory and InstanceState.context helpers
- bus/index.ts: GlobalBus.emit uses InstanceState.directory
- session/prompt.ts: 5 callsites migrated to InstanceState.context
- session/index.ts: 4 callsites migrated
- session/compaction.ts: 1 callsite migrated
- config/config.ts: 1 callsite migrated
- format/index.ts: 1 callsite migrated
- worktree/index.ts: 5 callsites migrated
- storage/db.ts: Database.effect preserves Instance ALS via Instance.bind
- test/lib/llm-server.ts: add wait/hold/fail SSE stream support
- Remove most provideInstance(dir) wrappers from prompt tests
  (5 remain due to Instance.state sync ALS dependency)
2026-03-31 16:42:16 -04:00
Kit Langton
bb039496d5 refactor: propagate Instance context through Effect fibers via InstanceRef
Add a ServiceMap.Reference that carries InstanceContext through the
Effect service graph so child fibers retain instance context even when
resumed by external I/O events outside the ALS boundary.

- Add InstanceRef to instance-state.ts; InstanceState.get/has/invalidate
  try the Reference first, fall back to ALS
- makeRuntime automatically captures ALS into InstanceRef at the boundary
- provideInstance (test fixture) sets InstanceRef for Effect.runPromiseWith
- Remove all redundant provideInstance(dir) wrappers from prompt tests
- Fix test/lib/effect.ts type params (drop unnecessary S/T generics)
2026-03-31 16:42:16 -04:00
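The lookup order described here (Reference first, ALS fallback) can be sketched like this; the names and shapes are illustrative, not the actual InstanceState implementation:

```typescript
import { AsyncLocalStorage } from "node:async_hooks"

interface InstanceContext { directory: string }

const als = new AsyncLocalStorage<InstanceContext>()

function getContext(ref: InstanceContext | undefined): InstanceContext {
  // 1. An explicit reference wins: it survives fiber resumption by external I/O.
  if (ref) return ref
  // 2. Fall back to ALS for sync code still running inside als.run().
  const store = als.getStore()
  if (store) return store
  throw new Error("no instance context available")
}
```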
Kit Langton
f2fa1a681d test: move more prompt cases to mock llm server
Migrate the next prompt-effect cases to the HTTP-backed mock server path, keep the shell handoff cases on short live timeouts, and leave the stream-failure case on the in-process fake until the server DSL matches it.
2026-03-31 16:42:15 -04:00
Kit Langton
6bd340492c test: infer mock server callback types 2026-03-31 16:42:15 -04:00
Kit Langton
21ec3207e7 test: extend mock llm server coverage
Add fixture support for tmpdir-backed mock server tests, extend the mock LLM server DSL for failure and hanging cases, and migrate the next prompt tests to the HTTP-backed path.
2026-03-31 16:42:14 -04:00
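A failure-and-hang step DSL of the kind described might look roughly like this; the step shapes are hypothetical, not the real mock server's API:

```typescript
// Hypothetical step DSL: each scripted step tells the mock stream to emit
// text, fail mid-stream, or hang forever.
type MockStep =
  | { kind: "text"; text: string }
  | { kind: "fail"; message: string }
  | { kind: "hang" }

async function* streamSteps(steps: MockStep[]): AsyncGenerator<string> {
  for (const step of steps) {
    if (step.kind === "text") yield step.text
    // Simulate a stream that errors part-way through.
    else if (step.kind === "fail") throw new Error(step.message)
    // Simulate a stalled upstream: a promise that never settles.
    else await new Promise<never>(() => {})
  }
}
```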
Kit Langton
123123b6c3 test: start moving prompt tests to mock llm server
Switch the basic assistant reply prompt-effect test to the HTTP-backed mock LLM server while keeping the more stream-sensitive cases on the in-process fake for now.
2026-03-31 16:42:14 -04:00
Kit Langton
6ea467b0ac test: add live effect helper mode
Default the shared effect test helper to support both test-clock and live execution, and switch the current opencode effect tests to the live path for real integration behavior.
2026-03-31 16:42:13 -04:00
Kit Langton
459fbc99a8 refactor(test): migrate llm-server to Effect HTTP platform
- Replace Bun.serve with Effect HTTP server using NodeHttpServer
- Add TestLLMServer service for mock LLM testing with SSE responses
- Update prompt-provider.test.ts to use testEffect pattern with provideTmpdirInstance
- Remove redundant test/fixture/effect.ts (using existing test/lib/effect.ts instead)
2026-03-31 16:42:13 -04:00
github-actions[bot]
d6d4446f46 Update VOUCHED list
https://github.com/anomalyco/opencode/issues/20342#issuecomment-4165277636
2026-03-31 20:24:07 +00:00
Major Hayden
26cc924ea2 feat: enable prompt caching and cache token tracking for google-vertex-anthropic (#20266)
Signed-off-by: Major Hayden <major@mhtx.net>
2026-03-31 15:16:14 -05:00
Aiden Cline
4dd866d5c4 fix: rm exclusion of ai-sdk/azure in transform.ts; when we migrated to v6 the AI SDK changed the key for ai-sdk/azure, so the exclusion is no longer needed (#20326) 2026-03-31 14:57:15 -05:00
opencode
beab4cc2c2 release: v1.3.11 2026-03-31 19:55:41 +00:00
Dax
567a91191a refactor(session): simplify LLM stream by replacing queue with fromAsyncIterable (#20324) 2026-03-31 15:27:51 -04:00
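The simplification in that refactor is the general pattern of consuming an async iterable directly instead of pumping its events through an intermediate queue; a plain-TypeScript illustration (Effect's Stream.fromAsyncIterable wraps the same idea into a Stream):

```typescript
// Stand-in for the AI SDK's streamed-event async iterable.
async function* sdkEvents(): AsyncGenerator<string> {
  yield "text-delta"
  yield "finish"
}

// Consume the iterable directly — no queue, no manual pump loop.
async function collect<T>(it: AsyncIterable<T>): Promise<T[]> {
  const out: T[] = []
  for await (const item of it) out.push(item)
  return out
}
```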
Aiden Cline
434d82bbe2 test: update model test fixture (#20182) 2026-03-31 16:20:01 +00:00
Aiden Cline
2929774acb chore: rm hardcoded model definition from codex plugin (#20294) 2026-03-31 11:13:11 -05:00
Adam
6e61a46a84 chore: skip 2 tests 2026-03-31 10:56:06 -05:00
Yuxin Dong
2daf4b805a feat: add a dedicated system prompt for Kimi models (#20259)
Co-authored-by: dongyuxin <dongyuxin@dev.dongyuxin.msh-dev.svc.cluster.local>
2026-03-31 17:44:17 +02:00
opencode-agent[bot]
7342e650c0 chore: update nix node_modules hashes 2026-03-31 15:33:12 +00:00
Adam
8c2e2ecc95 chore: e2e model 2026-03-31 10:14:26 -05:00
Sebastian
25a2b739e6 warn only and ignore plugins without entrypoints, default config via exports (#20284) 2026-03-31 17:14:03 +02:00
Adam
85c16926c4 chore: use paid zen model in e2e 2026-03-31 10:06:44 -05:00
Sebastian
2e78fdec43 ensure pinned plugin versions and do not run package scripts on install (#20248) 2026-03-31 16:59:43 +02:00
Sebastian
1fcb920eb4 upgrade opentui to 0.1.93 (#19950) 2026-03-31 16:50:23 +02:00
opencode
b1e89c344b release: v1.3.10 2026-03-31 13:31:37 +00:00
Dax
befbedacdc fix(session): subagents not being clickable (#20263) 2026-03-31 08:58:46 -04:00
Frank
2cc738fb17 wip: zen 2026-03-31 00:07:56 -04:00
opencode-agent[bot]
71b20698bb chore: generate 2026-03-31 01:57:41 +00:00
Kit Langton
3df18dcde1 refactor(provider): effectify Provider service (#20160) 2026-03-30 21:56:43 -04:00
Kit Langton
a898c2ea3a refactor(storage): effectify Storage service (#20132) 2026-03-31 01:16:02 +00:00
Kit Langton
bf777298c8 fix(theme): darken muted text in catppuccin tui themes (#20161) 2026-03-30 21:06:05 -04:00
Luke Parker
93fad99f7f smarter changelog (#20138) 2026-03-31 00:05:46 +00:00
93 changed files with 66178 additions and 37979 deletions

.github/VOUCHED.td vendored

@@ -21,8 +21,9 @@ jayair
kitlangton
kommander
-opencode2026
-opencodeengineer bot that spams issues
r44vc0rp
rekram1-node
-robinmordasiewicz
-spider-yamet clawdbot/llm psychosis, spam pinging the team
thdxr
-OpenCodeEngineer bot that spams issues


@@ -100,6 +100,9 @@ jobs:
run: bun --cwd packages/app test:e2e:local
env:
CI: true
OPENCODE_API_KEY: ${{ secrets.OPENCODE_API_KEY }}
OPENCODE_E2E_MODEL: opencode/claude-haiku-4-5
OPENCODE_E2E_REQUIRE_PAID: "true"
timeout-minutes: 30
- name: Upload Playwright artifacts


@@ -1,22 +1,19 @@
---
model: opencode/kimi-k2.5
model: opencode/gpt-5.4
---
Create `UPCOMING_CHANGELOG.md` from the structured changelog input below.
If `UPCOMING_CHANGELOG.md` already exists, ignore its current contents completely.
Do not preserve, merge, or reuse text from the existing file.
Any command arguments are passed directly to `bun script/changelog.ts`.
Use `--from` / `-f` and `--to` / `-t` to preview a specific release range.
The input already contains the exact commit range since the last non-draft release.
The commits are already filtered to the release-relevant packages and grouped into
the release sections. Do not fetch GitHub releases, PRs, or build your own commit list.
The input may also include a `## Community Contributors Input` section.
Before writing any entry you keep, inspect the real diff with
`git show --stat --format='' <hash>` or `git show --format='' <hash>` so the
summary reflects the actual user-facing change and not just the commit message.
`git show --stat --format='' <hash>` or `git show --format='' <hash>` so you can
understand the actual code changes and not just the commit message (they may be misleading).
Do not use `git log` or author metadata when deciding attribution.
Rules:
@@ -38,7 +35,12 @@ Rules:
- Do not add, remove, rewrite, or reorder contributor names or commit titles in that block
- Do not derive the thank-you section from the main summary bullets
- Do not include the heading `## Community Contributors Input` in the final file
- Focus on writing the least words to get your point across - users will skim read the changelog, so we should be precise
## Changelog Input
**Importantly, the changelog is for users (who are at least slightly technical); they may use the TUI, Desktop, SDK, Plugins, and so forth. Be thorough in understanding flow-on effects that may not be immediately apparent, e.g. a package upgrade looks internal but may patch a bug, or a refactor may also stabilise a race condition that fixes bugs for users. The PR title/body + commit message will give you the author's context, usually containing the outcome, not just technical detail**
!`bun script/changelog.ts $ARGUMENTS`
<changelog_input>
!`bun script/raw-changelog.ts $ARGUMENTS`
</changelog_input>


@@ -26,7 +26,7 @@
},
"packages/app": {
"name": "@opencode-ai/app",
"version": "1.3.9",
"version": "1.3.11",
"dependencies": {
"@kobalte/core": "catalog:",
"@opencode-ai/sdk": "workspace:*",
@@ -79,7 +79,7 @@
},
"packages/console/app": {
"name": "@opencode-ai/console-app",
"version": "1.3.9",
"version": "1.3.11",
"dependencies": {
"@cloudflare/vite-plugin": "1.15.2",
"@ibm/plex": "6.4.1",
@@ -113,7 +113,7 @@
},
"packages/console/core": {
"name": "@opencode-ai/console-core",
"version": "1.3.9",
"version": "1.3.11",
"dependencies": {
"@aws-sdk/client-sts": "3.782.0",
"@jsx-email/render": "1.1.1",
@@ -140,7 +140,7 @@
},
"packages/console/function": {
"name": "@opencode-ai/console-function",
"version": "1.3.9",
"version": "1.3.11",
"dependencies": {
"@ai-sdk/anthropic": "3.0.64",
"@ai-sdk/openai": "3.0.48",
@@ -164,7 +164,7 @@
},
"packages/console/mail": {
"name": "@opencode-ai/console-mail",
"version": "1.3.9",
"version": "1.3.11",
"dependencies": {
"@jsx-email/all": "2.2.3",
"@jsx-email/cli": "1.4.3",
@@ -188,7 +188,7 @@
},
"packages/desktop": {
"name": "@opencode-ai/desktop",
"version": "1.3.9",
"version": "1.3.11",
"dependencies": {
"@opencode-ai/app": "workspace:*",
"@opencode-ai/ui": "workspace:*",
@@ -221,7 +221,7 @@
},
"packages/desktop-electron": {
"name": "@opencode-ai/desktop-electron",
"version": "1.3.9",
"version": "1.3.11",
"dependencies": {
"@opencode-ai/app": "workspace:*",
"@opencode-ai/ui": "workspace:*",
@@ -252,7 +252,7 @@
},
"packages/enterprise": {
"name": "@opencode-ai/enterprise",
"version": "1.3.9",
"version": "1.3.11",
"dependencies": {
"@opencode-ai/ui": "workspace:*",
"@opencode-ai/util": "workspace:*",
@@ -281,7 +281,7 @@
},
"packages/function": {
"name": "@opencode-ai/function",
"version": "1.3.9",
"version": "1.3.11",
"dependencies": {
"@octokit/auth-app": "8.0.1",
"@octokit/rest": "catalog:",
@@ -297,7 +297,7 @@
},
"packages/opencode": {
"name": "opencode",
"version": "1.3.9",
"version": "1.3.11",
"bin": {
"opencode": "./bin/opencode",
},
@@ -338,8 +338,8 @@
"@opencode-ai/sdk": "workspace:*",
"@opencode-ai/util": "workspace:*",
"@openrouter/ai-sdk-provider": "2.3.3",
"@opentui/core": "0.1.92",
"@opentui/solid": "0.1.92",
"@opentui/core": "0.1.93",
"@opentui/solid": "0.1.93",
"@parcel/watcher": "2.5.1",
"@pierre/diffs": "catalog:",
"@solid-primitives/event-bus": "1.1.2",
@@ -423,22 +423,22 @@
},
"packages/plugin": {
"name": "@opencode-ai/plugin",
"version": "1.3.9",
"version": "1.3.11",
"dependencies": {
"@opencode-ai/sdk": "workspace:*",
"zod": "catalog:",
},
"devDependencies": {
"@opentui/core": "0.1.92",
"@opentui/solid": "0.1.92",
"@opentui/core": "0.1.93",
"@opentui/solid": "0.1.93",
"@tsconfig/node22": "catalog:",
"@types/node": "catalog:",
"@typescript/native-preview": "catalog:",
"typescript": "catalog:",
},
"peerDependencies": {
"@opentui/core": ">=0.1.92",
"@opentui/solid": ">=0.1.92",
"@opentui/core": ">=0.1.93",
"@opentui/solid": ">=0.1.93",
},
"optionalPeers": [
"@opentui/core",
@@ -457,7 +457,7 @@
},
"packages/sdk/js": {
"name": "@opencode-ai/sdk",
"version": "1.3.9",
"version": "1.3.11",
"devDependencies": {
"@hey-api/openapi-ts": "0.90.10",
"@tsconfig/node22": "catalog:",
@@ -468,7 +468,7 @@
},
"packages/slack": {
"name": "@opencode-ai/slack",
"version": "1.3.9",
"version": "1.3.11",
"dependencies": {
"@opencode-ai/sdk": "workspace:*",
"@slack/bolt": "^3.17.1",
@@ -503,7 +503,7 @@
},
"packages/ui": {
"name": "@opencode-ai/ui",
"version": "1.3.9",
"version": "1.3.11",
"dependencies": {
"@kobalte/core": "catalog:",
"@opencode-ai/sdk": "workspace:*",
@@ -550,7 +550,7 @@
},
"packages/util": {
"name": "@opencode-ai/util",
"version": "1.3.9",
"version": "1.3.11",
"dependencies": {
"zod": "catalog:",
},
@@ -561,7 +561,7 @@
},
"packages/web": {
"name": "@opencode-ai/web",
"version": "1.3.9",
"version": "1.3.11",
"dependencies": {
"@astrojs/cloudflare": "12.6.3",
"@astrojs/markdown-remark": "6.3.1",
@@ -1461,21 +1461,21 @@
"@opentelemetry/api": ["@opentelemetry/api@1.9.0", "", {}, "sha512-3giAOQvZiH5F9bMlMiv8+GSPMeqg0dbaeo58/0SlA9sxSqZhnUtxzX9/2FzyhS9sWQf5S0GJE0AKBrFqjpeYcg=="],
"@opentui/core": ["@opentui/core@0.1.92", "", { "dependencies": { "bun-ffi-structs": "0.1.2", "diff": "8.0.2", "jimp": "1.6.0", "marked": "17.0.1", "yoga-layout": "3.2.1" }, "optionalDependencies": { "@dimforge/rapier2d-simd-compat": "^0.17.3", "@opentui/core-darwin-arm64": "0.1.92", "@opentui/core-darwin-x64": "0.1.92", "@opentui/core-linux-arm64": "0.1.92", "@opentui/core-linux-x64": "0.1.92", "@opentui/core-win32-arm64": "0.1.92", "@opentui/core-win32-x64": "0.1.92", "bun-webgpu": "0.1.5", "planck": "^1.4.2", "three": "0.177.0" }, "peerDependencies": { "web-tree-sitter": "0.25.10" } }, "sha512-c+KdYAIH3M8n24RYaor+t7AQtKZ3l84L7xdP7DEaN4xtuYH8W08E6Gi+wUal4g+HSai3HS9irox68yFf0VPAxw=="],
"@opentui/core": ["@opentui/core@0.1.93", "", { "dependencies": { "bun-ffi-structs": "0.1.2", "diff": "8.0.2", "jimp": "1.6.0", "marked": "17.0.1", "yoga-layout": "3.2.1" }, "optionalDependencies": { "@dimforge/rapier2d-simd-compat": "^0.17.3", "@opentui/core-darwin-arm64": "0.1.93", "@opentui/core-darwin-x64": "0.1.93", "@opentui/core-linux-arm64": "0.1.93", "@opentui/core-linux-x64": "0.1.93", "@opentui/core-win32-arm64": "0.1.93", "@opentui/core-win32-x64": "0.1.93", "bun-webgpu": "0.1.5", "planck": "^1.4.2", "three": "0.177.0" }, "peerDependencies": { "web-tree-sitter": "0.25.10" } }, "sha512-HlTM16ZiBKN0mPBNMHSILkSrbzNku6Pg/ovIpVVkEPqLeWeSC2bfZS4Uhc0Ej1sckVVVoU9HKBJanfHvpP+pMg=="],
"@opentui/core-darwin-arm64": ["@opentui/core-darwin-arm64@0.1.92", "", { "os": "darwin", "cpu": "arm64" }, "sha512-NX/qFRuc7My0pazyOrw9fdTXmU7omXcZzQuHcsaVnwssljaT52UYMrJ7mCKhSo69RhHw0lnGCymTorvz3XBdsA=="],
"@opentui/core-darwin-arm64": ["@opentui/core-darwin-arm64@0.1.93", "", { "os": "darwin", "cpu": "arm64" }, "sha512-4I2mwhXLqRNUv7tu88hA6cBGaGpLZXkAa8W0VqBiGDV+Tx337x4T+vbQ7G57OwKXT787oTrEOF9rOOrGLov6qw=="],
"@opentui/core-darwin-x64": ["@opentui/core-darwin-x64@0.1.92", "", { "os": "darwin", "cpu": "x64" }, "sha512-Zb4jn33hOf167llINKLniOabQIycs14LPOBZnQ6l4khbeeTPVJdG8gy9PhlAyIQygDKmRTFncVlP0RP+L6C7og=="],
"@opentui/core-darwin-x64": ["@opentui/core-darwin-x64@0.1.93", "", { "os": "darwin", "cpu": "x64" }, "sha512-jvYMgcg47a5qLhSv1DnQiafEWBQ1UukGutmsYV1TvNuhWtuDXYLVy2AhKIHPzbB9JNrV0IpjbxUC8QnJaP3n8g=="],
"@opentui/core-linux-arm64": ["@opentui/core-linux-arm64@0.1.92", "", { "os": "linux", "cpu": "arm64" }, "sha512-4VA1A91OTMPJ3LkAyaxKEZVJsk5jIc3Kz0gV2vip8p2aGLPpYHHpkFZpXP/FyzsnJzoSGftBeA6ya1GKa5bkXg=="],
"@opentui/core-linux-arm64": ["@opentui/core-linux-arm64@0.1.93", "", { "os": "linux", "cpu": "arm64" }, "sha512-bvFqRcPftmg14iYmMc3d63XC9rhe4yF7pJRApH6klLBKp27WX/LU0iSO4mvyX7qhy65gcmyy4Sj9dl5jNJ+vlA=="],
"@opentui/core-linux-x64": ["@opentui/core-linux-x64@0.1.92", "", { "os": "linux", "cpu": "x64" }, "sha512-tr7va8hfKS1uY+TBmulQBoBlwijzJk56K/U/L9/tbHfW7oJctqxPVwEFHIh1HDcOQ3/UhMMWGvMfeG6cFiK8/A=="],
"@opentui/core-linux-x64": ["@opentui/core-linux-x64@0.1.93", "", { "os": "linux", "cpu": "x64" }, "sha512-/wJXhwtNxdcpshrRl1KouyGE54ODAHxRQgBHtnlM/F4bB8cjzOlq2Yc+5cv5DxRz4Q0nQZFCPefwpg2U6ZwNdA=="],
"@opentui/core-win32-arm64": ["@opentui/core-win32-arm64@0.1.92", "", { "os": "win32", "cpu": "arm64" }, "sha512-34YM3uPtDjzUVeSnJWIK2J8mxyduzV7f3mYc4Hub0glNpUdM1jjzF2HvvvnrKK5ElzTsIcno3c3lOYT8yvG1Zg=="],
"@opentui/core-win32-arm64": ["@opentui/core-win32-arm64@0.1.93", "", { "os": "win32", "cpu": "arm64" }, "sha512-g3PQobfM2yFPSzkBKRKFp8FgTG4ulWyJcU+GYXjyYmxQIT+ZbOU7UfR//ImRq3/FxUAfUC/MhC6WwjqccjEqBw=="],
"@opentui/core-win32-x64": ["@opentui/core-win32-x64@0.1.92", "", { "os": "win32", "cpu": "x64" }, "sha512-uk442kA2Vn0mmJHHqk5sPM+Zai/AN9sgl7egekhoEOUx2VK3gxftKsVlx2YVpCHTvTE/S+vnD2WpQaJk2SNjww=="],
"@opentui/core-win32-x64": ["@opentui/core-win32-x64@0.1.93", "", { "os": "win32", "cpu": "x64" }, "sha512-Spllte2W7q+WfB1zVHgHilVJNp+jpp77PkkxTWyMQNvT7vJNt9LABMNjGTGiJBBMkAuKvO0GgFNKxrda7tFKrQ=="],
"@opentui/solid": ["@opentui/solid@0.1.92", "", { "dependencies": { "@babel/core": "7.28.0", "@babel/preset-typescript": "7.27.1", "@opentui/core": "0.1.92", "babel-plugin-module-resolver": "5.0.2", "babel-preset-solid": "1.9.10", "entities": "7.0.1", "s-js": "^0.4.9" }, "peerDependencies": { "solid-js": "1.9.11" } }, "sha512-0Sx1+6zRpmMJ5oDEY0JS9b9+eGd/Q0fPndNllrQNnp7w2FCjpXmvHdBdq+pFI6kFp01MHq2ZOkUU5zX5/9YMSQ=="],
"@opentui/solid": ["@opentui/solid@0.1.93", "", { "dependencies": { "@babel/core": "7.28.0", "@babel/preset-typescript": "7.27.1", "@opentui/core": "0.1.93", "babel-plugin-module-resolver": "5.0.2", "babel-preset-solid": "1.9.10", "entities": "7.0.1", "s-js": "^0.4.9" }, "peerDependencies": { "solid-js": "1.9.11" } }, "sha512-Qx+4qoLSjnRGoo/YY4sZJMyXj09Y5kaAMpVO+65Ax58MMj4TjABN4bOOiRT2KV7sKOMTjxiAgXAIaBuqBBJ0Qg=="],
"@oslojs/asn1": ["@oslojs/asn1@1.0.0", "", { "dependencies": { "@oslojs/binary": "1.0.0" } }, "sha512-zw/wn0sj0j0QKbIXfIlnEcTviaCzYOY3V5rAyjR6YtOByFtJiT574+8p9Wlach0lZH9fddD4yb9laEAIl4vXQA=="],


@@ -1,8 +1,8 @@
{
"nodeModules": {
"x86_64-linux": "sha256-5w+DwEvUrCly9LHZuTa1yTSD45X56cGJG8sds/N29mU=",
"aarch64-linux": "sha256-pLhyzajYinBlFyGWwPypyC8gHEU8S7fVXIs6aqgBmhg=",
"aarch64-darwin": "sha256-vN0sXYs7pLtpq7U9SorR2z6st/wMfHA3dybOnwIh1pU=",
"x86_64-darwin": "sha256-P8fgyBcZJmY5VbNxNer/EL4r/F28dNxaqheaqNZH488="
"x86_64-linux": "sha256-UuVbB5lTRB4bIcaKMc8CLSbQW7m9EjXgxYvxp/uO7Co=",
"aarch64-linux": "sha256-8D7ReLRVb7NDd5PQTVxFhRLmlLbfjK007XgIhhpNKoE=",
"aarch64-darwin": "sha256-M+z7C/eXfVqwDiGiiwKo/LT/m4dvCjL1Pblsr1kxoyI=",
"x86_64-darwin": "sha256-RzZS6GMwYVDPK0W+K/mlebixNMs2+JRkMG9n8OFhd0c="
}
}


@@ -15,6 +15,16 @@ import { createSdk, dirSlug, getWorktree, sessionPath } from "./utils"
export const settingsKey = "settings.v3"
const seedModel = (() => {
const [providerID = "opencode", modelID = "big-pickle"] = (
process.env.OPENCODE_E2E_MODEL ?? "opencode/big-pickle"
).split("/")
return {
providerID: providerID || "opencode",
modelID: modelID || "big-pickle",
}
})()
type TestFixtures = {
sdk: ReturnType<typeof createSdk>
gotoSession: (sessionID?: string) => Promise<void>
@@ -125,7 +135,7 @@ export const test = base.extend<TestFixtures, WorkerFixtures>({
async function seedStorage(page: Page, input: { directory: string; extra?: string[] }) {
await seedProjects(page, input)
await page.addInitScript(() => {
await page.addInitScript((model: { providerID: string; modelID: string }) => {
const win = window as E2EWindow
win.__opencode_e2e = {
...win.__opencode_e2e,
@@ -143,12 +153,12 @@ async function seedStorage(page: Page, input: { directory: string; extra?: strin
localStorage.setItem(
"opencode.global.dat:model",
JSON.stringify({
recent: [{ providerID: "opencode", modelID: "big-pickle" }],
recent: [model],
user: [],
variant: {},
}),
)
})
}, seedModel)
}
export { expect }


@@ -234,6 +234,7 @@ async function fileOverflow(page: Parameters<typeof test>[0]["page"]) {
}
test("review applies inline comment clicks without horizontal overflow", async ({ page, withProject }) => {
test.skip(true, "Flaky in CI for now.")
test.setTimeout(180_000)
const tag = `review-comment-${Date.now()}`
@@ -283,6 +284,7 @@ test("review applies inline comment clicks without horizontal overflow", async (
})
test("review file comments submit on click without clipping actions", async ({ page, withProject }) => {
test.skip(true, "Flaky in CI for now.")
test.setTimeout(180_000)
const tag = `review-file-comment-${Date.now()}`


@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/app",
"version": "1.3.9",
"version": "1.3.11",
"description": "",
"type": "module",
"exports": {


@@ -71,7 +71,7 @@ const serverEnv = {
OPENCODE_E2E_PROJECT_DIR: repoDir,
OPENCODE_E2E_SESSION_TITLE: "E2E Session",
OPENCODE_E2E_MESSAGE: "Seeded for UI e2e",
OPENCODE_E2E_MODEL: "opencode/gpt-5-nano",
OPENCODE_E2E_MODEL: process.env.OPENCODE_E2E_MODEL ?? "opencode/gpt-5-nano",
OPENCODE_CLIENT: "app",
OPENCODE_STRICT_CONFIG_DEPS: "true",
} satisfies Record<string, string>


@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/console-app",
"version": "1.3.9",
"version": "1.3.11",
"type": "module",
"license": "MIT",
"scripts": {


@@ -1,7 +1,7 @@
{
"$schema": "https://json.schemastore.org/package.json",
"name": "@opencode-ai/console-core",
"version": "1.3.9",
"version": "1.3.11",
"private": true,
"type": "module",
"license": "MIT",


@@ -10,7 +10,7 @@ if (!stage) throw new Error("Stage is required")
const root = path.resolve(process.cwd(), "..", "..", "..")
// read the secret
const ret = await $`bun sst secret list`.cwd(root).text()
const ret = await $`bun sst secret list --stage frank`.cwd(root).text()
const lines = ret.split("\n")
const value = lines.find((line) => line.startsWith("ZEN_LIMITS"))?.split("=")[1]
if (!value) throw new Error("ZEN_LIMITS not found")


@@ -12,7 +12,7 @@ const root = path.resolve(process.cwd(), "..", "..", "..")
const PARTS = 30
// read the secret
const ret = await $`bun sst secret list`.cwd(root).text()
const ret = await $`bun sst secret list --stage frank`.cwd(root).text()
const lines = ret.split("\n")
const values = Array.from({ length: PARTS }, (_, i) => {
const value = lines


@@ -6,7 +6,7 @@ import os from "os"
import { Subscription } from "../src/subscription"
const root = path.resolve(process.cwd(), "..", "..", "..")
const secrets = await $`bun sst secret list`.cwd(root).text()
const secrets = await $`bun sst secret list --stage frank`.cwd(root).text()
// read value
const lines = secrets.split("\n")
@@ -25,4 +25,4 @@ const newValue = JSON.stringify(JSON.parse(await tempFile.text()))
Subscription.validate(JSON.parse(newValue))
// update the secret
await $`bun sst secret set ZEN_LIMITS ${newValue}`
await $`bun sst secret set ZEN_LIMITS ${newValue} --stage frank`.cwd(root)


@@ -6,7 +6,7 @@ import os from "os"
import { ZenData } from "../src/model"
const root = path.resolve(process.cwd(), "..", "..", "..")
const models = await $`bun sst secret list`.cwd(root).text()
const models = await $`bun sst secret list --stage frank`.cwd(root).text()
const PARTS = 30
// read the line starting with "ZEN_MODELS"
@@ -40,4 +40,4 @@ const newValues = Array.from({ length: PARTS }, (_, i) =>
const envFile = Bun.file(path.join(os.tmpdir(), `models-${Date.now()}.env`))
await envFile.write(newValues.map((v, i) => `ZEN_MODELS${i + 1}="${v.replace(/"/g, '\\"')}"`).join("\n"))
await $`bun sst secret load ${envFile.name}`.cwd(root)
await $`bun sst secret load ${envFile.name} --stage frank`.cwd(root)


@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/console-function",
"version": "1.3.9",
"version": "1.3.11",
"$schema": "https://json.schemastore.org/package.json",
"private": true,
"type": "module",


@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/console-mail",
"version": "1.3.9",
"version": "1.3.11",
"dependencies": {
"@jsx-email/all": "2.2.3",
"@jsx-email/cli": "1.4.3",


@@ -1,7 +1,7 @@
{
"name": "@opencode-ai/desktop-electron",
"private": true,
"version": "1.3.9",
"version": "1.3.11",
"type": "module",
"license": "MIT",
"homepage": "https://opencode.ai",


@@ -1,7 +1,7 @@
{
"name": "@opencode-ai/desktop",
"private": true,
"version": "1.3.9",
"version": "1.3.11",
"type": "module",
"license": "MIT",
"scripts": {


@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/enterprise",
"version": "1.3.9",
"version": "1.3.11",
"private": true,
"type": "module",
"license": "MIT",


@@ -1,7 +1,7 @@
id = "opencode"
name = "OpenCode"
description = "The open source coding agent."
version = "1.3.9"
version = "1.3.11"
schema_version = 1
authors = ["Anomaly"]
repository = "https://github.com/anomalyco/opencode"
@@ -11,26 +11,26 @@ name = "OpenCode"
icon = "./icons/opencode.svg"
[agent_servers.opencode.targets.darwin-aarch64]
archive = "https://github.com/anomalyco/opencode/releases/download/v1.3.9/opencode-darwin-arm64.zip"
archive = "https://github.com/anomalyco/opencode/releases/download/v1.3.11/opencode-darwin-arm64.zip"
cmd = "./opencode"
args = ["acp"]
[agent_servers.opencode.targets.darwin-x86_64]
archive = "https://github.com/anomalyco/opencode/releases/download/v1.3.9/opencode-darwin-x64.zip"
archive = "https://github.com/anomalyco/opencode/releases/download/v1.3.11/opencode-darwin-x64.zip"
cmd = "./opencode"
args = ["acp"]
[agent_servers.opencode.targets.linux-aarch64]
archive = "https://github.com/anomalyco/opencode/releases/download/v1.3.9/opencode-linux-arm64.tar.gz"
archive = "https://github.com/anomalyco/opencode/releases/download/v1.3.11/opencode-linux-arm64.tar.gz"
cmd = "./opencode"
args = ["acp"]
[agent_servers.opencode.targets.linux-x86_64]
archive = "https://github.com/anomalyco/opencode/releases/download/v1.3.9/opencode-linux-x64.tar.gz"
archive = "https://github.com/anomalyco/opencode/releases/download/v1.3.11/opencode-linux-x64.tar.gz"
cmd = "./opencode"
args = ["acp"]
[agent_servers.opencode.targets.windows-x86_64]
archive = "https://github.com/anomalyco/opencode/releases/download/v1.3.9/opencode-windows-x64.zip"
archive = "https://github.com/anomalyco/opencode/releases/download/v1.3.11/opencode-windows-x64.zip"
cmd = "./opencode.exe"
args = ["acp"]


@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/function",
"version": "1.3.9",
"version": "1.3.11",
"$schema": "https://json.schemastore.org/package.json",
"private": true,
"type": "module",


@@ -1,6 +1,6 @@
{
"$schema": "https://json.schemastore.org/package.json",
"version": "1.3.9",
"version": "1.3.11",
"name": "opencode",
"type": "module",
"license": "MIT",
@@ -102,8 +102,8 @@
"@opencode-ai/sdk": "workspace:*",
"@opencode-ai/util": "workspace:*",
"@openrouter/ai-sdk-provider": "2.3.3",
"@opentui/core": "0.1.92",
"@opentui/solid": "0.1.92",
"@opentui/core": "0.1.93",
"@opentui/solid": "0.1.93",
"@parcel/watcher": "2.5.1",
"@pierre/diffs": "catalog:",
"@solid-primitives/event-bus": "1.1.2",


@@ -2,6 +2,7 @@ const dir = process.env.OPENCODE_E2E_PROJECT_DIR ?? process.cwd()
const title = process.env.OPENCODE_E2E_SESSION_TITLE ?? "E2E Session"
const text = process.env.OPENCODE_E2E_MESSAGE ?? "Seeded for UI e2e"
const model = process.env.OPENCODE_E2E_MODEL ?? "opencode/gpt-5-nano"
const requirePaid = process.env.OPENCODE_E2E_REQUIRE_PAID === "true"
const parts = model.split("/")
const providerID = parts[0] ?? "opencode"
const modelID = parts[1] ?? "gpt-5-nano"
@@ -11,6 +12,7 @@ const seed = async () => {
const { Instance } = await import("../src/project/instance")
const { InstanceBootstrap } = await import("../src/project/bootstrap")
const { Config } = await import("../src/config/config")
const { Provider } = await import("../src/provider/provider")
const { Session } = await import("../src/session")
const { MessageID, PartID } = await import("../src/session/schema")
const { Project } = await import("../src/project/project")
@@ -25,6 +27,19 @@ const seed = async () => {
await Config.waitForDependencies()
await ToolRegistry.ids()
if (requirePaid && providerID === "opencode" && !process.env.OPENCODE_API_KEY) {
throw new Error("OPENCODE_API_KEY is required when OPENCODE_E2E_REQUIRE_PAID=true")
}
const info = await Provider.getModel(ProviderID.make(providerID), ModelID.make(modelID))
if (requirePaid) {
const paid =
info.cost.input > 0 || info.cost.output > 0 || info.cost.cache.read > 0 || info.cost.cache.write > 0
if (!paid) {
throw new Error(`OPENCODE_E2E_MODEL must resolve to a paid model: ${providerID}/${modelID}`)
}
}
const session = await Session.create({ title })
const messageID = MessageID.ascending()
const partID = PartID.ascending()


@@ -210,15 +210,13 @@ Fully migrated (single namespace, InstanceState where needed, flattened facade):
- [x] `Vcs` → `project/vcs.ts`
- [x] `Worktree` → `worktree/index.ts`
Still open and likely worth migrating:
- [x] `Session` → `session/index.ts`
- [ ] `SessionProcessor`: blocked by AI SDK v6 PR (#18433)
- [ ] `SessionPrompt`: blocked by AI SDK v6 PR (#18433)
- [ ] `SessionCompaction`: blocked by AI SDK v6 PR (#18433)
- [ ] `Provider`: blocked by AI SDK v6 PR (#18433)
- [x] `SessionProcessor` → `session/processor.ts`
- [x] `SessionPrompt` → `session/prompt.ts`
- [x] `SessionCompaction` → `session/compaction.ts`
- [x] `Provider` → `provider/provider.ts`
Other services not yet migrated:
Still open:
- [ ] `SessionSummary``session/summary.ts`
- [ ] `SessionTodo``session/todo.ts`
@@ -235,7 +233,7 @@ Once individual tools are effectified, change `Tool.Info` (`tool/tool.ts`) so `i
1. Migrate each tool to return Effects
2. Update `Tool.define()` factory to work with Effects
3. Update `SessionPrompt` to `yield*` tool results instead of `await`ing — blocked by AI SDK v6 PR (#18433)
3. Update `SessionPrompt` to `yield*` tool results instead of `await`ing
Individual tools, ordered by value:

View File

@@ -88,6 +88,7 @@ export default plugin
- If package `exports` exists, loader only resolves `./tui` or `./server`; it never falls back to `exports["."]`.
- For npm package specs, TUI does not use `package.json` `main` as a fallback entry.
- `package.json` `main` is only used for server plugin entrypoint resolution.
- If a configured plugin has no target-specific entrypoint, it is skipped with a warning (not a load failure).
- If a package supports both server and TUI, use separate files and package `exports` (`./server` and `./tui`) so each target resolves to a target-only module.
- File/path plugins must export a non-empty `id`.
- npm plugins may omit `id`; package `name` is used.
@@ -100,7 +101,10 @@ export default plugin
## Package manifest and install
Package manifest is read from `package.json` field `oc-plugin`.
Install target detection is inferred from `package.json` entrypoints:
- `server` target when `exports["./server"]` exists or `main` is set.
- `tui` target when `exports["./tui"]` exists.
Example:
@@ -108,14 +112,20 @@ Example:
{
"name": "@acme/opencode-plugin",
"type": "module",
"main": "./dist/index.js",
"main": "./dist/server.js",
"exports": {
"./server": {
"import": "./dist/server.js",
"config": { "custom": true }
},
"./tui": {
"import": "./dist/tui.js",
"config": { "compact": true }
}
},
"engines": {
"opencode": "^1.0.0"
},
"oc-plugin": [
["server", { "custom": true }],
["tui", { "compact": true }]
]
}
}
```
@@ -144,12 +154,16 @@ npm plugins can declare a version compatibility range in `package.json` using th
- Local installs resolve target dir inside `patchPluginConfig`.
- For local scope, path is `<worktree>/.opencode` only when VCS is git and `worktree !== "/"`; otherwise `<directory>/.opencode`.
- Root-worktree fallback (`worktree === "/"` uses `<directory>/.opencode`) is covered by regression tests.
- `patchPluginConfig` applies all declared manifest targets (`server` and/or `tui`) in one call.
- `patchPluginConfig` applies all detected targets (`server` and/or `tui`) in one call.
- `patchPluginConfig` returns structured result unions (`ok`, `code`, fields by error kind) instead of custom thrown errors.
- `patchPluginConfig` serializes per-target config writes with `Flock.acquire(...)`.
- `patchPluginConfig` uses targeted `jsonc-parser` edits, so existing JSONC comments are preserved when plugin entries are added or replaced.
- npm plugin package installs are executed with `--ignore-scripts`, so package `install` / `postinstall` lifecycle scripts are not run.
- `exports["./server"].config` and `exports["./tui"].config` can provide default plugin options written on first install.
- Without `--force`, an already-configured npm package name is a no-op.
- With `--force`, replacement matches by package name. If the existing row is `[spec, options]`, those tuple options are kept.
- Explicit npm specs with a version suffix (for example `pkg@1.2.3`) are pinned. Runtime install requests that exact version and does not run stale/latest checks for newer registry versions.
- Bare npm specs (`pkg`) are treated as `latest` and can refresh when the cached version is stale.
- Tuple targets in `oc-plugin` provide default options written into config.
- A package can target `server`, `tui`, or both.
- If a package targets both, each target must still resolve to a separate target-only module. Do not export `{ server, tui }` from one module.
@@ -317,7 +331,6 @@ Slot notes:
- `api.plugins.install(spec, { global? })` runs install -> manifest read -> config patch using the same helper flow as CLI install.
- `api.plugins.install(...)` returns either `{ ok: false, message, missing? }` or `{ ok: true, dir, tui }`.
- `api.plugins.install(...)` does not load plugins into the current session. Call `api.plugins.add(spec)` to load after install.
- For packages that declare a tuple `tui` target in `oc-plugin`, `api.plugins.install(...)` stages those tuple options so a following `api.plugins.add(spec)` uses them.
- If activation fails, the plugin can remain `enabled=true` and `active=false`.
- `api.lifecycle.signal` is aborted before cleanup runs.
- `api.lifecycle.onDispose(fn)` registers cleanup and returns an unregister function.

View File
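The no-op / force-replace rules noted above (an already-configured package name is a no-op without `--force`; with `--force`, replacement matches by name and keeps tuple options from an existing `[spec, options]` row) can be sketched in plain TypeScript. This is a simplified model with hypothetical names (`upsert`, `nameOf`), not the actual `patchPluginList` implementation:

```typescript
type PluginRow = string | [spec: string, opts: Record<string, unknown>];

// Package name of a row: strip any version suffix like "pkg@1.2.3".
// `lastIndexOf("@") > 0` keeps scoped names ("@acme/pkg") intact.
function nameOf(row: PluginRow): string {
  const spec = typeof row === "string" ? row : row[0];
  const at = spec.lastIndexOf("@");
  return at > 0 ? spec.slice(0, at) : spec;
}

// Without force, an already-configured package name is a no-op.
// With force, replace by name; if the existing row is [spec, opts],
// keep those tuple options on the new spec.
function upsert(
  list: PluginRow[],
  spec: string,
  force: boolean,
): { mode: "noop" | "add" | "replace"; list: PluginRow[] } {
  const idx = list.findIndex((row) => nameOf(row) === nameOf(spec));
  if (idx === -1) return { mode: "add", list: [...list, spec] };
  if (!force) return { mode: "noop", list };
  const prev = list[idx]!;
  const next: PluginRow = Array.isArray(prev) ? [spec, prev[1]] : spec;
  const copy = [...list];
  copy[idx] = next;
  return { mode: "replace", list: copy };
}
```

The real implementation additionally performs the write through targeted `jsonc-parser` edits so comments survive; this sketch only models the decision.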

@@ -50,7 +50,7 @@ export namespace BunProc {
}),
)
export async function install(pkg: string, version = "latest") {
export async function install(pkg: string, version = "latest", opts?: { ignoreScripts?: boolean }) {
// Use lock to ensure only one install at a time
using _ = await Lock.write("bun-install")
@@ -82,6 +82,7 @@ export namespace BunProc {
"add",
"--force",
"--exact",
...(opts?.ignoreScripts ? ["--ignore-scripts"] : []),
// TODO: get rid of this case (see: https://github.com/oven-sh/bun/issues/19936)
...(proxied() || process.env.CI ? ["--no-cache"] : []),
"--cwd",

View File
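The new option simply threads a flag into bun's argument list; `--ignore-scripts` is what keeps package `install`/`postinstall` lifecycle scripts from running during plugin installs. A minimal sketch of the argument assembly, with the surrounding lock and spawn logic omitted:

```typescript
// Build the `bun add` argument list; `ignoreScripts` disables package
// install/postinstall lifecycle scripts (useful for untrusted plugin installs).
function bunAddArgs(
  pkg: string,
  version: string,
  opts?: { ignoreScripts?: boolean; noCache?: boolean },
): string[] {
  return [
    "add",
    "--force",
    "--exact",
    ...(opts?.ignoreScripts ? ["--ignore-scripts"] : []),
    ...(opts?.noCache ? ["--no-cache"] : []),
    `${pkg}@${version}`,
  ];
}
```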

@@ -90,8 +90,9 @@ export namespace Bus {
if (ps) yield* PubSub.publish(ps, payload)
yield* PubSub.publish(state.wildcard, payload)
const dir = yield* InstanceState.directory
GlobalBus.emit("event", {
directory: Instance.directory,
directory: dir,
payload,
})
})

View File

@@ -114,8 +114,8 @@ export function createPlugTask(input: PlugInput, dep: PlugDeps = defaultPlugDeps
if (manifest.code === "manifest_no_targets") {
inspect.stop("No plugin targets found", 1)
dep.log.error(`"${mod}" does not declare supported targets in package.json`)
dep.log.info('Expected: "oc-plugin": ["server", "tui"] or tuples like [["tui", { ... }]].')
dep.log.error(`"${mod}" does not expose plugin entrypoints in package.json`)
dep.log.info('Expected one of: exports["./tui"], exports["./server"], or package.json main for server.')
return false
}

View File

@@ -125,6 +125,7 @@ import { DialogVariant } from "./component/dialog-variant"
function rendererConfig(_config: TuiConfig.Info): CliRendererConfig {
return {
externalOutputMode: "passthrough",
targetFps: 60,
gatherStats: false,
exitOnCtrlC: false,
@@ -250,7 +251,6 @@ function App(props: { onSnapshot?: () => Promise<string[]> }) {
const route = useRoute()
const dimensions = useTerminalDimensions()
const renderer = useRenderer()
renderer.disableStdoutInterception()
const dialog = useDialog()
const local = useLocal()
const kv = useKV()

View File

@@ -62,8 +62,8 @@
"light": "frappeText"
},
"textMuted": {
"dark": "frappeSubtext1",
"light": "frappeSubtext1"
"dark": "frappeOverlay2",
"light": "frappeOverlay2"
},
"background": {
"dark": "frappeBase",

View File

@@ -62,8 +62,8 @@
"light": "macText"
},
"textMuted": {
"dark": "macSubtext1",
"light": "macSubtext1"
"dark": "macOverlay2",
"light": "macOverlay2"
},
"background": {
"dark": "macBase",

View File

@@ -63,7 +63,7 @@
"success": { "dark": "darkGreen", "light": "lightGreen" },
"info": { "dark": "darkTeal", "light": "lightTeal" },
"text": { "dark": "darkText", "light": "lightText" },
"textMuted": { "dark": "darkSubtext1", "light": "lightSubtext1" },
"textMuted": { "dark": "darkOverlay2", "light": "lightOverlay2" },
"background": { "dark": "darkBase", "light": "lightBase" },
"backgroundPanel": { "dark": "darkMantle", "light": "lightMantle" },
"backgroundElement": { "dark": "darkCrust", "light": "lightCrust" },

View File

@@ -87,6 +87,11 @@ function fail(message: string, data: Record<string, unknown>) {
console.error(`[tui.plugin] ${text}`, next)
}
function warn(message: string, data: Record<string, unknown>) {
log.warn(message, data)
console.warn(`[tui.plugin] ${message}`, data)
}
type CleanupResult = { type: "ok" } | { type: "error"; error: unknown } | { type: "timeout" }
function runCleanup(fn: () => unknown, ms: number): Promise<CleanupResult> {
@@ -229,6 +234,15 @@ async function loadExternalPlugin(cfg: TuiConfig.PluginRecord, retry = false): P
log.info("loading tui plugin", { path: plan.spec, retry })
const resolved = await PluginLoader.resolve(plan, "tui")
if (!resolved.ok) {
if (resolved.stage === "missing") {
warn("tui plugin has no entrypoint", {
path: plan.spec,
retry,
message: resolved.message,
})
return
}
if (resolved.stage === "install") {
fail("failed to resolve tui plugin", { path: plan.spec, retry, error: resolved.error })
return
@@ -753,7 +767,6 @@ async function addPluginBySpec(state: RuntimeState | undefined, raw: string) {
return [] as PluginLoad[]
})
if (!ready.length) {
fail("failed to add tui plugin", { path: next })
return false
}
@@ -824,7 +837,7 @@ async function installPluginBySpec(
if (manifest.code === "manifest_no_targets") {
return {
ok: false,
message: `"${spec}" does not declare supported targets in package.json`,
message: `"${spec}" does not expose plugin entrypoints in package.json`,
}
}

View File

@@ -121,7 +121,10 @@ export namespace Config {
const gitignore = path.join(dir, ".gitignore")
const ignore = await Filesystem.exists(gitignore)
if (!ignore) {
await Filesystem.write(gitignore, ["node_modules", "package.json", "bun.lock", ".gitignore"].join("\n"))
await Filesystem.write(
gitignore,
["node_modules", "package.json", "package-lock.json", "bun.lock", ".gitignore"].join("\n"),
)
}
// Bun can race cache writes on Windows when installs run in parallel across dirs.
@@ -1483,7 +1486,8 @@ export namespace Config {
})
const update = Effect.fn("Config.update")(function* (config: Info) {
const file = path.join(Instance.directory, "config.json")
const dir = yield* InstanceState.directory
const file = path.join(dir, "config.json")
const existing = yield* loadFile(file)
yield* fs.writeFileString(file, JSON.stringify(mergeDeep(existing, config), null, 2)).pipe(Effect.orDie)
yield* Effect.promise(() => Instance.dispose())

View File

@@ -0,0 +1,6 @@
import { ServiceMap } from "effect"
import type { InstanceContext } from "@/project/instance"
export const InstanceRef = ServiceMap.Reference<InstanceContext | undefined>("~opencode/InstanceRef", {
defaultValue: () => undefined,
})

View File
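A `ServiceMap.Reference` is a key that can be read anywhere and falls back to a default when nothing was provided — here, `undefined` meaning "no instance context". A dependency-free sketch of that read-with-default behavior (hypothetical `Ref` shape, not the Effect API):

```typescript
// A keyed reference: readable from a service map, falling back to a
// default value when nothing was provided under its key.
interface Ref<A> {
  readonly key: string;
  readonly defaultValue: () => A;
}

function makeRef<A>(key: string, defaultValue: () => A): Ref<A> {
  return { key, defaultValue };
}

// Read a reference out of a service map, using the default when absent.
function getRef<A>(services: Map<string, unknown>, ref: Ref<A>): A {
  return services.has(ref.key) ? (services.get(ref.key) as A) : ref.defaultValue();
}

interface InstanceContext { directory: string }

// Mirrors the new InstanceRef: default is undefined ("no instance").
const InstanceRef = makeRef<InstanceContext | undefined>("~opencode/InstanceRef", () => undefined);
```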

@@ -1,5 +1,7 @@
import { Effect, ScopedCache, Scope } from "effect"
import { Effect, Fiber, ScopedCache, Scope, ServiceMap } from "effect"
import { Instance, type InstanceContext } from "@/project/instance"
import { Context } from "@/util/context"
import { InstanceRef } from "./instance-ref"
import { registerDisposer } from "./instance-registry"
const TypeId = "~opencode/InstanceState"
@@ -10,13 +12,34 @@ export interface InstanceState<A, E = never, R = never> {
}
export namespace InstanceState {
export const bind = <F extends (...args: any[]) => any>(fn: F): F => {
try {
return Instance.bind(fn)
} catch (err) {
if (!(err instanceof Context.NotFound)) throw err
}
const fiber = Fiber.getCurrent()
const ctx = fiber ? ServiceMap.getReferenceUnsafe(fiber.services, InstanceRef) : undefined
if (!ctx) return fn
return ((...args: any[]) => Instance.restore(ctx, () => fn(...args))) as F
}
export const context = Effect.fnUntraced(function* () {
return (yield* InstanceRef) ?? Instance.current
})()
export const directory = Effect.map(context, (ctx) => ctx.directory)
export const make = <A, E = never, R = never>(
init: (ctx: InstanceContext) => Effect.Effect<A, E, R | Scope.Scope>,
): Effect.Effect<InstanceState<A, E, Exclude<R, Scope.Scope>>, never, R | Scope.Scope> =>
Effect.gen(function* () {
const cache = yield* ScopedCache.make<string, A, E, R>({
capacity: Number.POSITIVE_INFINITY,
lookup: () => init(Instance.current),
lookup: () =>
Effect.fnUntraced(function* () {
return yield* init(yield* context)
})(),
})
const off = registerDisposer((directory) => Effect.runPromise(ScopedCache.invalidate(cache, directory)))
@@ -29,7 +52,9 @@ export namespace InstanceState {
})
export const get = <A, E, R>(self: InstanceState<A, E, R>) =>
Effect.suspend(() => ScopedCache.get(self.cache, Instance.directory))
Effect.gen(function* () {
return yield* ScopedCache.get(self.cache, yield* directory)
})
export const use = <A, E, R, B>(self: InstanceState<A, E, R>, select: (value: A) => B) =>
Effect.map(get(self), select)
@@ -40,8 +65,18 @@ export namespace InstanceState {
) => Effect.flatMap(get(self), select)
export const has = <A, E, R>(self: InstanceState<A, E, R>) =>
Effect.suspend(() => ScopedCache.has(self.cache, Instance.directory))
Effect.gen(function* () {
return yield* ScopedCache.has(self.cache, yield* directory)
})
export const invalidate = <A, E, R>(self: InstanceState<A, E, R>) =>
Effect.suspend(() => ScopedCache.invalidate(self.cache, Instance.directory))
Effect.gen(function* () {
return yield* ScopedCache.invalidate(self.cache, yield* directory)
})
/**
* Effect finalizers run on the fiber scheduler after the original async
* boundary, so ALS reads like Instance.directory can be gone by then.
*/
export const withALS = <T>(fn: () => T) => Effect.map(context, (ctx) => Instance.restore(ctx, fn))
}

View File
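The ALS bridging that `Instance.restore` and `InstanceState.withALS` rely on can be sketched with Node's `AsyncLocalStorage`: capture the context while it is alive, then re-enter the store from a finalizer that would otherwise see it empty. Names here are illustrative:

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

interface InstanceContext { directory: string }

const als = new AsyncLocalStorage<InstanceContext>();

// Re-enter the instance ALS with a captured context — the bridge that
// lets finalizers running after the original async boundary still read it.
function restore<R>(ctx: InstanceContext, fn: () => R): R {
  return als.run(ctx, fn);
}

// ALS-backed accessor, like Instance.directory: throws when no context
// is active on the current async execution path.
function currentDirectory(): string {
  const ctx = als.getStore();
  if (!ctx) throw new Error("no instance context");
  return ctx.directory;
}
```

Inside `restore(ctx, ...)` the accessor works; called bare (as an Effect finalizer would, running on the fiber scheduler) it throws — which is exactly why `withALS` wraps the callback.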

@@ -1,19 +1,33 @@
import { Effect, Layer, ManagedRuntime } from "effect"
import * as ServiceMap from "effect/ServiceMap"
import { Instance } from "@/project/instance"
import { Context } from "@/util/context"
import { InstanceRef } from "./instance-ref"
export const memoMap = Layer.makeMemoMapUnsafe()
function attach<A, E, R>(effect: Effect.Effect<A, E, R>): Effect.Effect<A, E, R> {
try {
const ctx = Instance.current
return Effect.provideService(effect, InstanceRef, ctx)
} catch (err) {
if (!(err instanceof Context.NotFound)) throw err
}
return effect
}
export function makeRuntime<I, S, E>(service: ServiceMap.Service<I, S>, layer: Layer.Layer<I, E>) {
let rt: ManagedRuntime.ManagedRuntime<I, E> | undefined
const getRuntime = () => (rt ??= ManagedRuntime.make(layer, { memoMap }))
return {
runSync: <A, Err>(fn: (svc: S) => Effect.Effect<A, Err, I>) => getRuntime().runSync(service.use(fn)),
runSync: <A, Err>(fn: (svc: S) => Effect.Effect<A, Err, I>) => getRuntime().runSync(attach(service.use(fn))),
runPromiseExit: <A, Err>(fn: (svc: S) => Effect.Effect<A, Err, I>, options?: Effect.RunOptions) =>
getRuntime().runPromiseExit(service.use(fn), options),
getRuntime().runPromiseExit(attach(service.use(fn)), options),
runPromise: <A, Err>(fn: (svc: S) => Effect.Effect<A, Err, I>, options?: Effect.RunOptions) =>
getRuntime().runPromise(service.use(fn), options),
runFork: <A, Err>(fn: (svc: S) => Effect.Effect<A, Err, I>) => getRuntime().runFork(service.use(fn)),
runCallback: <A, Err>(fn: (svc: S) => Effect.Effect<A, Err, I>) => getRuntime().runCallback(service.use(fn)),
getRuntime().runPromise(attach(service.use(fn)), options),
runFork: <A, Err>(fn: (svc: S) => Effect.Effect<A, Err, I>) => getRuntime().runFork(attach(service.use(fn))),
runCallback: <A, Err>(fn: (svc: S) => Effect.Effect<A, Err, I>) =>
getRuntime().runCallback(attach(service.use(fn))),
}
}

View File
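The `attach` helper captures the ambient instance context at the `run*` boundary if one exists, and otherwise passes the task through untouched. A plain-TypeScript sketch of that capture-or-pass-through shape (simplified, no Effect types):

```typescript
interface InstanceContext { directory: string }

class NotFound extends Error {}

// Ambient accessor that throws when unset (like Instance.current).
let current: InstanceContext | undefined;
function instanceCurrent(): InstanceContext {
  if (!current) throw new NotFound("no instance context");
  return current;
}

// Capture ambient context at the boundary when present; only the
// NotFound case is swallowed — anything else propagates.
function attach<A>(task: (ctx?: InstanceContext) => A): A {
  let ctx: InstanceContext | undefined;
  try {
    ctx = instanceCurrent();
  } catch (err) {
    if (!(err instanceof NotFound)) throw err;
  }
  return task(ctx);
}
```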

@@ -108,10 +108,11 @@ export namespace Format {
for (const item of yield* Effect.promise(() => getFormatter(ext))) {
log.info("running", { command: item.command })
const cmd = item.command.map((x) => x.replace("$FILE", filepath))
const dir = yield* InstanceState.directory
const code = yield* spawner
.spawn(
ChildProcess.make(cmd[0]!, cmd.slice(1), {
cwd: Instance.directory,
cwd: dir,
env: item.environment,
extendEnv: true,
}),

View File

@@ -9,11 +9,7 @@ import z from "zod"
import { BusEvent } from "@/bus/bus-event"
import { Flag } from "../flag/flag"
import { Log } from "../util/log"
declare global {
const OPENCODE_VERSION: string
const OPENCODE_CHANNEL: string
}
import { CHANNEL as channel, VERSION as version } from "./meta"
import semver from "semver"
@@ -60,8 +56,8 @@ export namespace Installation {
})
export type Info = z.infer<typeof Info>
export const VERSION = typeof OPENCODE_VERSION === "string" ? OPENCODE_VERSION : "local"
export const CHANNEL = typeof OPENCODE_CHANNEL === "string" ? OPENCODE_CHANNEL : "local"
export const VERSION = version
export const CHANNEL = channel
export const USER_AGENT = `opencode/${CHANNEL}/${VERSION}/${Flag.OPENCODE_CLIENT}`
export function isPreview() {

View File

@@ -0,0 +1,7 @@
declare global {
const OPENCODE_VERSION: string
const OPENCODE_CHANNEL: string
}
export const VERSION = typeof OPENCODE_VERSION === "string" ? OPENCODE_VERSION : "local"
export const CHANNEL = typeof OPENCODE_CHANNEL === "string" ? OPENCODE_CHANNEL : "local"

View File
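The extracted guard works because `typeof` on an undeclared identifier never throws at runtime. A minimal sketch, assuming the bundler injects `OPENCODE_VERSION` via a build-time define (absent here, so the fallback applies):

```typescript
// Build-time constant injected by the bundler (e.g. a --define flag).
// `typeof` on an undeclared identifier is safe at runtime, so running
// without the define falls back to "local".
declare global {
  const OPENCODE_VERSION: string;
}

export const VERSION = typeof OPENCODE_VERSION === "string" ? OPENCODE_VERSION : "local";
```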

@@ -375,38 +375,6 @@ export async function CodexAuthPlugin(input: PluginInput): Promise<Hooks> {
delete provider.models[modelId]
}
if (!provider.models["gpt-5.3-codex"]) {
const model = {
id: ModelID.make("gpt-5.3-codex"),
providerID: ProviderID.openai,
api: {
id: "gpt-5.3-codex",
url: "https://chatgpt.com/backend-api/codex",
npm: "@ai-sdk/openai",
},
name: "GPT-5.3 Codex",
capabilities: {
temperature: false,
reasoning: true,
attachment: true,
toolcall: true,
input: { text: true, audio: false, image: true, video: false, pdf: false },
output: { text: true, audio: false, image: false, video: false, pdf: false },
interleaved: false,
},
cost: { input: 0, output: 0, cache: { read: 0, write: 0 } },
limit: { context: 400_000, input: 272_000, output: 128_000 },
status: "active" as const,
options: {},
headers: {},
release_date: "2026-02-05",
variants: {} as Record<string, Record<string, any>>,
family: "gpt-codex",
}
model.variants = ProviderTransform.variants(model)
provider.models["gpt-5.3-codex"] = model
}
// Zero out costs for Codex (included with ChatGPT subscription)
for (const model of Object.values(provider.models)) {
model.cost = {

View File

@@ -157,6 +157,14 @@ export namespace Plugin {
const resolved = await PluginLoader.resolve(plan, "server")
if (!resolved.ok) {
if (resolved.stage === "missing") {
log.warn("plugin has no server entrypoint", {
path: plan.spec,
message: resolved.message,
})
return
}
const cause =
resolved.error instanceof Error ? (resolved.error.cause ?? resolved.error) : resolved.error
const message = errorMessage(cause)

View File

@@ -11,6 +11,7 @@ import { ConfigPaths } from "@/config/paths"
import { Global } from "@/global"
import { Filesystem } from "@/util/filesystem"
import { Flock } from "@/util/flock"
import { isRecord } from "@/util/record"
import { parsePluginSpecifier, readPluginPackage, resolvePluginTarget } from "./shared"
@@ -101,28 +102,60 @@ function pluginList(data: unknown) {
return item.plugin
}
function parseTarget(item: unknown): Target | undefined {
if (item === "server" || item === "tui") return { kind: item }
if (!Array.isArray(item)) return
if (item[0] !== "server" && item[0] !== "tui") return
if (item.length < 2) return { kind: item[0] }
const opt = item[1]
if (!opt || typeof opt !== "object" || Array.isArray(opt)) return { kind: item[0] }
return {
kind: item[0],
opts: opt,
function exportValue(value: unknown): string | undefined {
if (typeof value === "string") {
const next = value.trim()
if (next) return next
return
}
if (!isRecord(value)) return
for (const key of ["import", "default"]) {
const next = value[key]
if (typeof next !== "string") continue
const hit = next.trim()
if (!hit) continue
return hit
}
}
function parseTargets(raw: unknown) {
if (!Array.isArray(raw)) return []
const map = new Map<Kind, Target>()
for (const item of raw) {
const hit = parseTarget(item)
if (!hit) continue
map.set(hit.kind, hit)
function exportOptions(value: unknown): Record<string, unknown> | undefined {
if (!isRecord(value)) return
const config = value.config
if (!isRecord(config)) return
return config
}
function exportTarget(pkg: Record<string, unknown>, kind: Kind) {
const exports = pkg.exports
if (!isRecord(exports)) return
const value = exports[`./${kind}`]
const entry = exportValue(value)
if (!entry) return
return {
opts: exportOptions(value),
}
return [...map.values()]
}
function hasMainTarget(pkg: Record<string, unknown>) {
const main = pkg.main
if (typeof main !== "string") return false
return Boolean(main.trim())
}
function packageTargets(pkg: Record<string, unknown>) {
const targets: Target[] = []
const server = exportTarget(pkg, "server")
if (server) {
targets.push({ kind: "server", opts: server.opts })
} else if (hasMainTarget(pkg)) {
targets.push({ kind: "server" })
}
const tui = exportTarget(pkg, "tui")
if (tui) {
targets.push({ kind: "tui", opts: tui.opts })
}
return targets
}
function patch(text: string, path: Array<string | number>, value: unknown, insert = false) {
@@ -260,7 +293,7 @@ export async function readPluginManifest(target: string): Promise<ManifestResult
}
}
const targets = parseTargets(pkg.item.json["oc-plugin"])
const targets = packageTargets(pkg.item.json)
if (!targets.length) {
return {
ok: false,
@@ -330,7 +363,7 @@ async function patchOne(dir: string, target: Target, spec: string, force: boolea
}
const list = pluginList(data)
const item = target.opts ? [spec, target.opts] : spec
const item = target.opts ? ([spec, target.opts] as const) : spec
const out = patchPluginList(text, list, spec, item, force)
if (out.mode === "noop") {
return {

View File
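The target-detection rules implemented above — `server` when `exports["./server"]` exists or `main` is set, `tui` when `exports["./tui"]` exists — can be condensed into a small sketch (hypothetical helper names; conditional-exports handling limited to the `import`/`default` keys, as in the diff):

```typescript
type TargetKind = "server" | "tui";

// Resolve an `exports` entry that may be a plain string or a
// conditional-exports object ({ import, default, ... }).
function entryOf(value: unknown): string | undefined {
  if (typeof value === "string") return value.trim() || undefined;
  if (typeof value !== "object" || value === null || Array.isArray(value)) return undefined;
  for (const key of ["import", "default"]) {
    const hit = (value as Record<string, unknown>)[key];
    if (typeof hit === "string" && hit.trim()) return hit.trim();
  }
  return undefined;
}

// Infer install targets from package.json entrypoints:
// - "server" when exports["./server"] exists, or `main` is set
// - "tui" when exports["./tui"] exists
function detectTargets(pkg: Record<string, unknown>): TargetKind[] {
  const exports = (pkg.exports ?? {}) as Record<string, unknown>;
  const targets: TargetKind[] = [];
  if (entryOf(exports["./server"]) || (typeof pkg.main === "string" && pkg.main.trim())) {
    targets.push("server");
  }
  if (entryOf(exports["./tui"])) targets.push("tui");
  return targets;
}
```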

@@ -43,7 +43,9 @@ export namespace PluginLoader {
plan: Plan,
kind: PluginKind,
): Promise<
{ ok: true; value: Resolved } | { ok: false; stage: "install" | "entry" | "compatibility"; error: unknown }
| { ok: true; value: Resolved }
| { ok: false; stage: "missing"; message: string }
| { ok: false; stage: "install" | "entry" | "compatibility"; error: unknown }
> {
let target = ""
try {
@@ -77,8 +79,8 @@ export namespace PluginLoader {
if (!base.entry) {
return {
ok: false,
stage: "entry",
error: new Error(`Plugin ${plan.spec} entry is empty`),
stage: "missing",
message: `Plugin ${plan.spec} does not expose a ${kind} entrypoint`,
}
}

View File

@@ -34,7 +34,7 @@ export type PluginEntry = {
source: PluginSource
target: string
pkg?: PluginPackage
entry: string
entry?: string
}
const INDEX_FILES = ["index.ts", "index.tsx", "index.js", "index.mjs", "index.cjs"]
@@ -128,13 +128,8 @@ async function resolvePluginEntrypoint(spec: string, target: string, kind: Plugi
if (index) return pathToFileURL(index).href
}
if (source === "npm") {
throw new TypeError(`Plugin ${spec} must define package.json exports["./tui"]`)
}
if (dir) {
throw new TypeError(`Plugin ${spec} must define package.json exports["./tui"] or include index file`)
}
if (source === "npm") return
if (dir) return
return target
}
@@ -145,7 +140,7 @@ async function resolvePluginEntrypoint(spec: string, target: string, kind: Plugi
if (index) return pathToFileURL(index).href
}
throw new TypeError(`Plugin ${spec} must define package.json exports["./server"] or package.json main`)
return
}
return target
@@ -189,7 +184,7 @@ export async function checkPluginCompatibility(target: string, opencodeVersion:
export async function resolvePluginTarget(spec: string, parsed = parsePluginSpecifier(spec)) {
if (isPathPluginSpec(spec)) return resolvePathPluginTarget(spec)
return BunProc.install(parsed.pkg, parsed.version)
return BunProc.install(parsed.pkg, parsed.version, { ignoreScripts: true })
}
export async function readPluginPackage(target: string): Promise<PluginPackage> {

View File

@@ -114,6 +114,14 @@ export const Instance = {
const ctx = context.use()
return ((...args: any[]) => context.provide(ctx, () => fn(...args))) as F
},
/**
* Run a synchronous function within the given instance context ALS.
* Use this to bridge from Effect (where InstanceRef carries context)
* back to sync code that reads Instance.directory from ALS.
*/
restore<R>(ctx: InstanceContext, fn: () => R): R {
return context.provide(ctx, fn)
},
state<S>(init: () => S, dispose?: (state: Awaited<S>) => Promise<void>): () => S {
return State.create(() => Instance.directory, init, dispose)
},

File diff suppressed because it is too large

View File

@@ -280,6 +280,7 @@ export namespace ProviderTransform {
msgs = normalizeMessages(msgs, model, options)
if (
(model.providerID === "anthropic" ||
model.providerID === "google-vertex-anthropic" ||
model.api.id.includes("anthropic") ||
model.api.id.includes("claude") ||
model.id.includes("anthropic") ||
@@ -292,7 +293,7 @@ export namespace ProviderTransform {
// Remap providerOptions keys from stored providerID to expected SDK key
const key = sdkKey(model.api.npm)
if (key && key !== model.providerID && model.api.npm !== "@ai-sdk/azure") {
if (key && key !== model.providerID) {
const remap = (opts: Record<string, any> | undefined) => {
if (!opts) return opts
if (!(model.providerID in opts)) return opts

View File

@@ -17,6 +17,7 @@ import { NotFoundError } from "@/storage/db"
import { ModelID, ProviderID } from "@/provider/schema"
import { Effect, Layer, ServiceMap } from "effect"
import { makeRuntime } from "@/effect/run-service"
import { InstanceState } from "@/effect/instance-state"
import { isOverflow as overflow } from "./overflow"
export namespace SessionCompaction {
@@ -213,6 +214,7 @@ When constructing the summary, try to stick to this template:
const msgs = structuredClone(messages)
yield* plugin.trigger("experimental.chat.messages.transform", {}, { messages: msgs })
const modelMessages = yield* Effect.promise(() => MessageV2.toModelMessages(msgs, model, { stripMedia: true }))
const ctx = yield* InstanceState.context
const msg: MessageV2.Assistant = {
id: MessageID.ascending(),
role: "assistant",
@@ -223,8 +225,8 @@ When constructing the summary, try to stick to this template:
variant: userMessage.variant,
summary: true,
path: {
cwd: Instance.directory,
root: Instance.worktree,
cwd: ctx.directory,
root: ctx.worktree,
},
cost: 0,
tokens: {

View File

@@ -19,6 +19,7 @@ import { Log } from "../util/log"
import { updateSchema } from "../util/update-schema"
import { MessageV2 } from "./message-v2"
import { Instance } from "../project/instance"
import { InstanceState } from "@/effect/instance-state"
import { SessionPrompt } from "./prompt"
import { fn } from "@/util/fn"
import { Command } from "../command"
@@ -257,6 +258,9 @@ export namespace Session {
const cacheReadInputTokens = safe(input.usage.cachedInputTokens ?? 0)
const cacheWriteInputTokens = safe(
(input.metadata?.["anthropic"]?.["cacheCreationInputTokens"] ??
// google-vertex-anthropic returns metadata under "vertex" key
// (AnthropicMessagesLanguageModel custom provider key from 'vertex.anthropic.messages')
input.metadata?.["vertex"]?.["cacheCreationInputTokens"] ??
// @ts-expect-error
input.metadata?.["bedrock"]?.["usage"]?.["cacheWriteInputTokens"] ??
// @ts-expect-error
@@ -379,11 +383,12 @@ export namespace Session {
directory: string
permission?: Permission.Ruleset
}) {
const ctx = yield* InstanceState.context
const result: Info = {
id: SessionID.descending(input.id),
slug: Slug.create(),
version: Installation.VERSION,
projectID: Instance.project.id,
projectID: ctx.project.id,
directory: input.directory,
workspaceID: input.workspaceID,
parentID: input.parentID,
@@ -441,12 +446,12 @@ export namespace Session {
})
const children = Effect.fn("Session.children")(function* (parentID: SessionID) {
const project = Instance.project
const ctx = yield* InstanceState.context
const rows = yield* db((d) =>
d
.select()
.from(SessionTable)
.where(and(eq(SessionTable.project_id, project.id), eq(SessionTable.parent_id, parentID)))
.where(and(eq(SessionTable.project_id, ctx.project.id), eq(SessionTable.parent_id, parentID)))
.all(),
)
return rows.map(fromRow)
@@ -493,9 +498,10 @@ export namespace Session {
permission?: Permission.Ruleset
workspaceID?: WorkspaceID
}) {
const directory = yield* InstanceState.directory
return yield* createNext({
parentID: input?.parentID,
directory: Instance.directory,
directory,
title: input?.title,
permission: input?.permission,
workspaceID: input?.workspaceID,
@@ -503,10 +509,11 @@ export namespace Session {
})
const fork = Effect.fn("Session.fork")(function* (input: { sessionID: SessionID; messageID?: MessageID }) {
const directory = yield* InstanceState.directory
const original = yield* get(input.sessionID)
const title = getForkedTitle(original.title)
const session = yield* createNext({
directory: Instance.directory,
directory,
workspaceID: original.workspaceID,
title,
})

View File

@@ -53,32 +53,22 @@ export namespace LLM {
Effect.gen(function* () {
return Service.of({
stream(input) {
const stream: Stream.Stream<Event, unknown> = Stream.scoped(
return Stream.scoped(
Stream.unwrap(
Effect.gen(function* () {
const ctrl = yield* Effect.acquireRelease(
Effect.sync(() => new AbortController()),
(ctrl) => Effect.sync(() => ctrl.abort()),
)
const queue = yield* Queue.unbounded<Event, unknown | Cause.Done>()
yield* Effect.promise(async () => {
const result = await LLM.stream({ ...input, abort: ctrl.signal })
for await (const event of result.fullStream) {
if (!Queue.offerUnsafe(queue, event)) break
}
Queue.endUnsafe(queue)
}).pipe(
Effect.catchCause((cause) => Effect.sync(() => void Queue.failCauseUnsafe(queue, cause))),
Effect.onInterrupt(() => Effect.sync(() => ctrl.abort())),
Effect.forkScoped,
const result = yield* Effect.promise(() => LLM.stream({ ...input, abort: ctrl.signal }))
return Stream.fromAsyncIterable(result.fullStream, (e) =>
e instanceof Error ? e : new Error(String(e)),
)
return Stream.fromQueue(queue)
}),
),
)
return stream
},
})
}),

View File
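The refactor keeps one key property: the `AbortController` is acquired inside the stream's scope, so tearing the scope down aborts the in-flight request. A dependency-free sketch of that acquire/release pairing (a toy scope, not Effect's):

```typescript
type Finalizer = () => void;

// Acquire an AbortController within a scope: registering a release that
// aborts means interrupting the scope cancels the underlying request.
function acquireController(finalizers: Finalizer[]): AbortController {
  const ctrl = new AbortController();
  finalizers.push(() => ctrl.abort());
  return ctrl;
}

// Run finalizers in reverse acquisition order, like Effect scopes do.
function closeScope(finalizers: Finalizer[]): void {
  for (const fin of [...finalizers].reverse()) fin();
}
```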

@@ -148,6 +148,7 @@ export namespace SessionPrompt {
})
const resolvePromptParts = Effect.fn("SessionPrompt.resolvePromptParts")(function* (template: string) {
const ctx = yield* InstanceState.context
const parts: PromptInput["parts"] = [{ type: "text", text: template }]
const files = ConfigMarkdown.files(template)
const seen = new Set<string>()
@@ -159,7 +160,7 @@ export namespace SessionPrompt {
seen.add(name)
const filepath = name.startsWith("~/")
? path.join(os.homedir(), name.slice(2))
: path.resolve(Instance.worktree, name)
: path.resolve(ctx.worktree, name)
const info = yield* fsys.stat(filepath).pipe(Effect.option)
if (Option.isNone(info)) {
@@ -403,7 +404,7 @@ NOTE: At any point in time through this workflow you should feel free to ask the
Effect.runPromise(
Effect.gen(function* () {
const match = input.processor.partFromToolCall(options.toolCallId)
if (!match || match.state.status !== "running") return
if (!match || !["running", "pending"].includes(match.state.status)) return
yield* sessions.updatePart({
...match,
state: {
@@ -553,6 +554,7 @@ NOTE: At any point in time through this workflow you should feel free to ask the
msgs: MessageV2.WithParts[]
}) {
const { task, model, lastUser, sessionID, session, msgs } = input
const ctx = yield* InstanceState.context
const taskTool = yield* Effect.promise(() => TaskTool.init())
const taskModel = task.model ? yield* getModel(task.model.providerID, task.model.modelID, sessionID) : model
const assistantMessage: MessageV2.Assistant = yield* sessions.updateMessage({
@@ -563,7 +565,7 @@ NOTE: At any point in time through this workflow you should feel free to ask the
mode: task.agent,
agent: task.agent,
variant: lastUser.variant,
path: { cwd: Instance.directory, root: Instance.worktree },
path: { cwd: ctx.directory, root: ctx.worktree },
cost: 0,
tokens: { input: 0, output: 0, reasoning: 0, cache: { read: 0, write: 0 } },
modelID: taskModel.id,
@@ -734,6 +736,7 @@ NOTE: At any point in time through this workflow you should feel free to ask the
})
const shellImpl = Effect.fn("SessionPrompt.shellImpl")(function* (input: ShellInput, signal: AbortSignal) {
const ctx = yield* InstanceState.context
const session = yield* sessions.get(input.sessionID)
if (session.revert) {
yield* Effect.promise(() => SessionRevert.cleanup(session))
@@ -773,7 +776,7 @@ NOTE: At any point in time through this workflow you should feel free to ask the
mode: input.agent,
agent: input.agent,
cost: 0,
path: { cwd: Instance.directory, root: Instance.worktree },
path: { cwd: ctx.directory, root: ctx.worktree },
time: { created: Date.now() },
role: "assistant",
tokens: { input: 0, output: 0, reasoning: 0, cache: { read: 0, write: 0 } },
@@ -832,7 +835,7 @@ NOTE: At any point in time through this workflow you should feel free to ask the
}
const args = (invocations[shellName] ?? invocations[""]).args
const cwd = Instance.directory
const cwd = ctx.directory
const shellEnv = yield* plugin.trigger(
"shell.env",
{ cwd, sessionID: input.sessionID, callID: part.callID },
@@ -976,7 +979,7 @@ NOTE: At any point in time through this workflow you should feel free to ask the
variant,
}
yield* Effect.addFinalizer(() => Effect.sync(() => InstructionPrompt.clear(info.id)))
yield* Effect.addFinalizer(() => InstanceState.withALS(() => InstructionPrompt.clear(info.id)))
type Draft<T> = T extends MessageV2.Part ? Omit<T, "id"> & { id?: string } : never
const assign = (part: Draft<MessageV2.Part>): MessageV2.Part => ({
@@ -1330,6 +1333,7 @@ NOTE: At any point in time through this workflow you should feel free to ask the
const runLoop: (sessionID: SessionID) => Effect.Effect<MessageV2.WithParts> = Effect.fn("SessionPrompt.run")(
function* (sessionID: SessionID) {
const ctx = yield* InstanceState.context
let structured: unknown | undefined
let step = 0
const session = yield* sessions.get(sessionID)
@@ -1421,7 +1425,7 @@ NOTE: At any point in time through this workflow you should feel free to ask the
mode: agent.name,
agent: agent.name,
variant: lastUser.variant,
path: { cwd: Instance.directory, root: Instance.worktree },
path: { cwd: ctx.directory, root: ctx.worktree },
cost: 0,
tokens: { input: 0, output: 0, reasoning: 0, cache: { read: 0, write: 0 } },
modelID: model.id,
@@ -1538,7 +1542,7 @@ NOTE: At any point in time through this workflow you should feel free to ask the
}),
Effect.fnUntraced(function* (exit) {
if (Exit.isFailure(exit) && Cause.hasInterruptsOnly(exit.cause)) yield* handle.abort()
InstructionPrompt.clear(handle.message.id)
yield* InstanceState.withALS(() => InstructionPrompt.clear(handle.message.id))
}),
)
if (outcome === "break") break
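These hunks replace static `Instance.directory` / `Instance.worktree` reads with a context fetched via `InstanceState.context`, so per-instance paths survive across async boundaries. A minimal sketch of the underlying AsyncLocalStorage pattern (the names `withInstance` and `currentContext` are illustrative, not the real `InstanceState` API):

```typescript
import { AsyncLocalStorage } from "node:async_hooks"

// Hypothetical shape of the per-instance context the hunks read from.
interface InstanceContext {
  directory: string
  worktree: string
}

const als = new AsyncLocalStorage<InstanceContext>()

// Run a unit of work with a bound instance context.
function withInstance<T>(ctx: InstanceContext, body: () => T): T {
  return als.run(ctx, body)
}

// Read the current context; throws if called outside withInstance.
function currentContext(): InstanceContext {
  const ctx = als.getStore()
  if (!ctx) throw new Error("no instance context bound")
  return ctx
}

const paths = withInstance({ directory: "/repo/app", worktree: "/repo" }, () => {
  // Mirrors `path: { cwd: ctx.directory, root: ctx.worktree }` in the diff.
  return { cwd: currentContext().directory, root: currentContext().worktree }
})
```

Because the store travels with the async execution context, any code reached from `withInstance` sees the same paths without threading them through every call.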


@@ -0,0 +1,114 @@
You are OpenCode, an interactive general AI agent running on a user's computer.
Your primary goal is to help users with software engineering tasks by taking action — use the tools available to you to make real changes on the user's system. You should also answer questions when asked. Always adhere strictly to the following system instructions and the user's requirements.
# Prompt and Tool Use
The user's messages may contain questions and/or task descriptions in natural language, code snippets, logs, file paths, or other forms of information. Read them, understand them and do what the user requested. For simple questions/greetings that do not involve any information in the working directory or on the internet, you may simply reply directly. For anything else, default to taking action with tools. When the request could be interpreted as either a question to answer or a task to complete, treat it as a task.
When handling the user's request, if it involves creating, modifying, or running code or files, you MUST use the appropriate tools to make actual changes — do not just describe the solution in text. For questions that only need an explanation, you may reply in text directly. When calling tools, do not provide explanations because the tool calls themselves should be self-explanatory. You MUST follow the description of each tool and its parameters when calling tools.
If the `task` tool is available, you can use it to delegate a focused subtask to a subagent instance. When delegating, provide a complete prompt with all necessary context because a newly created subagent does not automatically see your current context.
You can output any number of tool calls in a single response. If you anticipate making multiple non-interfering tool calls, it is HIGHLY RECOMMENDED that you make them in parallel to significantly improve efficiency. This is very important to your performance.
The results of the tool calls will be returned to you in a tool message. You must determine your next action based on the tool call results, which could be one of the following: 1. Continue working on the task, 2. Inform the user that the task is completed or has failed, or 3. Ask the user for more information.
Tool results and user messages may include `<system-reminder>` tags. These are authoritative system directives that you MUST follow. They bear no direct relation to the specific tool results or user messages in which they appear. Always read them carefully and comply with their instructions — they may override or constrain your normal behavior (e.g., restricting you to read-only actions during plan mode).
When responding to the user, you MUST use the SAME language as the user, unless explicitly instructed to do otherwise.
# General Guidelines for Coding
When building something from scratch, you should:
- Understand the user's requirements.
- Ask the user for clarification if there is anything unclear.
- Design the architecture and make a plan for the implementation.
- Write the code in a modular and maintainable way.
Always use tools to implement your code changes:
- Use `write`/`edit` to create or modify source files. Code that only appears in your text response is NOT saved to the file system and will not take effect.
- Use `bash` to run and test your code after writing it.
- Iterate: if tests fail, read the error, fix the code with `write`/`edit`, and re-test with `bash`.
When working on an existing codebase, you should:
- Understand the codebase by reading it with tools (`read`, `glob`, `grep`) before making changes. Identify the ultimate goal and the most important criteria to achieve the goal.
- For a bug fix, you typically need to check error logs or failed tests, scan the codebase to find the root cause, and figure out a fix. If the user mentioned any failed tests, make sure they pass after your changes.
- For a feature, you typically need to design the architecture, and write the code in a modular and maintainable way, with minimal intrusions to existing code. Add new tests if the project already has tests.
- For a code refactoring, if the interface changes, you typically need to update every call site of the code you are refactoring. DO NOT change any existing logic, especially in tests; focus only on fixing errors caused by the interface changes.
- Make MINIMAL changes to achieve the goal. This is very important to your performance.
- Follow the coding style of existing code in the project.
DO NOT run `git commit`, `git push`, `git reset`, `git rebase`, or any other git mutation unless explicitly asked to do so. Ask for confirmation each time you need to perform a git mutation, even if the user has confirmed in earlier conversations.
# General Guidelines for Research and Data Processing
The user may ask you to research certain topics, or to process or generate multimedia files. When doing such tasks, you must:
- Understand the user's requirements thoroughly, ask for clarification before you start if needed.
- Make plans before doing deep or wide research, to ensure you are always on track.
- Search on the Internet if possible, with carefully-designed search queries to improve efficiency and accuracy.
- Use appropriate tools, shell commands, or Python packages to process or generate images, videos, PDFs, docs, spreadsheets, presentations, or other multimedia files. Check whether such tools already exist in the environment. If you have to install third-party tools/packages, you MUST ensure that they are installed in a virtual/isolated environment.
- After you generate or edit images, videos, or other media files, read them back before proceeding, to ensure that the content is as expected.
- Avoid installing or deleting anything outside the current working directory. If you have to do so, ask the user for confirmation.
# Working Environment
## Operating System
The operating environment is not in a sandbox. Any actions you take will immediately affect the user's system, so you MUST be extremely cautious. Unless explicitly instructed to do so, you should never access (read/write/execute) files outside of the working directory.
## Working Directory
The working directory should be considered the project root if you are instructed to perform tasks on the project. Every file system operation will be relative to the working directory unless you explicitly specify an absolute path. Tools may require absolute paths for some parameters; IF SO, YOU MUST use absolute paths for those parameters.
# Project Information
Markdown files named `AGENTS.md` usually contain the background, structure, coding styles, user preferences and other relevant information about the project. You should use this information to understand the project and the user's preferences. `AGENTS.md` files may exist at different locations in the project, but typically there is one in the project root.
> Why `AGENTS.md`?
>
> `README.md` files are for humans: quick starts, project descriptions, and contribution guidelines. `AGENTS.md` complements this by containing the extra, sometimes detailed context coding agents need: build steps, tests, and conventions that might clutter a README or aren't relevant to human contributors.
>
> We intentionally kept it separate to:
>
> - Give agents a clear, predictable place for instructions.
> - Keep `README`s concise and focused on human contributors.
> - Provide precise, agent-focused guidance that complements existing `README` and docs.
If the `AGENTS.md` is empty or insufficient, you may check `README`/`README.md` files or `AGENTS.md` files in subdirectories for more information about specific parts of the project.
If you modify any files/styles/structures/configurations/workflows/... mentioned in `AGENTS.md` files, you MUST update the corresponding `AGENTS.md` files to keep them up-to-date.
# Skills
Skills are reusable, composable capabilities that enhance your abilities. Each skill is a self-contained directory with a `SKILL.md` file that contains instructions, examples, and/or reference material.
## What are skills?
Skills are modular extensions that provide:
- Specialized knowledge: Domain-specific expertise (e.g., PDF processing, data analysis)
- Workflow patterns: Best practices for common tasks
- Tool integrations: Pre-configured tool chains for specific operations
- Reference material: Documentation, templates, and examples
## How to use skills
Identify the skills that are likely to be useful for the tasks you are currently working on, use the `skill` tool to load a skill for detailed instructions, guidelines, scripts and more.
Only load skill details when needed to conserve the context window.
# Ultimate Reminders
At any time, you should be HELPFUL, CONCISE, and ACCURATE. Be thorough in your actions — test what you build, verify what you change — not in your explanations.
- Never diverge from the requirements and the goals of the task you work on. Stay on track.
- Never give the user more than what they want.
- Try your best to avoid any hallucination. Do fact checking before providing any factual information.
- Think about the best approach, then take action decisively.
- Do not give up too early.
- ALWAYS keep it stupidly simple. Do not overcomplicate things.
- When the task requires creating or modifying files, always use tools to do so. Never treat displaying code in your response as a substitute for actually writing it to the file system.


@@ -7,6 +7,7 @@ import PROMPT_DEFAULT from "./prompt/default.txt"
import PROMPT_BEAST from "./prompt/beast.txt"
import PROMPT_GEMINI from "./prompt/gemini.txt"
import PROMPT_GPT from "./prompt/gpt.txt"
import PROMPT_KIMI from "./prompt/kimi.txt"
import PROMPT_CODEX from "./prompt/codex.txt"
import PROMPT_TRINITY from "./prompt/trinity.txt"
@@ -28,6 +29,7 @@ export namespace SystemPrompt {
if (model.api.id.includes("gemini-")) return [PROMPT_GEMINI]
if (model.api.id.includes("claude")) return [PROMPT_ANTHROPIC]
if (model.api.id.toLowerCase().includes("trinity")) return [PROMPT_TRINITY]
if (model.api.id.toLowerCase().includes("kimi")) return [PROMPT_KIMI]
return [PROMPT_DEFAULT]
}
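The hunk above adds `kimi` to the chain of substring checks that picks a system prompt per model. A sketch of that dispatch, with placeholder prompt strings standing in for the imported `.txt` files:

```typescript
// Placeholder prompt contents; the real module imports .txt files.
const PROMPT_GEMINI = "gemini prompt"
const PROMPT_ANTHROPIC = "anthropic prompt"
const PROMPT_TRINITY = "trinity prompt"
const PROMPT_KIMI = "kimi prompt"
const PROMPT_DEFAULT = "default prompt"

// First matching substring wins; fall through to the default prompt.
function promptsFor(modelID: string): string[] {
  if (modelID.includes("gemini-")) return [PROMPT_GEMINI]
  if (modelID.includes("claude")) return [PROMPT_ANTHROPIC]
  if (modelID.toLowerCase().includes("trinity")) return [PROMPT_TRINITY]
  if (modelID.toLowerCase().includes("kimi")) return [PROMPT_KIMI]
  return [PROMPT_DEFAULT]
}
```

Order matters here: a new check must come before the default return, and case-insensitive matches use `toLowerCase()` as the `trinity` and `kimi` branches do.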


@@ -10,8 +10,9 @@ import { NamedError } from "@opencode-ai/util/error"
import z from "zod"
import path from "path"
import { readFileSync, readdirSync, existsSync } from "fs"
import { Installation } from "../installation"
import { Flag } from "../flag/flag"
import { CHANNEL } from "../installation/meta"
import { InstanceState } from "@/effect/instance-state"
import { iife } from "@/util/iife"
import { init } from "#db"
@@ -28,10 +29,9 @@ const log = Log.create({ service: "db" })
export namespace Database {
export function getChannelPath() {
const channel = Installation.CHANNEL
if (["latest", "beta"].includes(channel) || Flag.OPENCODE_DISABLE_CHANNEL_DB)
if (["latest", "beta"].includes(CHANNEL) || Flag.OPENCODE_DISABLE_CHANNEL_DB)
return path.join(Global.Path.data, "opencode.db")
const safe = channel.replace(/[^a-zA-Z0-9._-]/g, "-")
const safe = CHANNEL.replace(/[^a-zA-Z0-9._-]/g, "-")
return path.join(Global.Path.data, `opencode-${safe}.db`)
}
@@ -142,10 +142,11 @@ export namespace Database {
}
export function effect(fn: () => any | Promise<any>) {
const bound = InstanceState.bind(fn)
try {
ctx.use().effects.push(fn)
ctx.use().effects.push(bound)
} catch {
fn()
bound()
}
}
@@ -162,12 +163,8 @@ export namespace Database {
} catch (err) {
if (err instanceof Context.NotFound) {
const effects: (() => void | Promise<void>)[] = []
const result = Client().transaction(
(tx: TxOrDb) => {
return ctx.provide({ tx, effects }, () => callback(tx))
},
{ behavior: options?.behavior },
)
const txCallback = InstanceState.bind((tx: TxOrDb) => ctx.provide({ tx, effects }, () => callback(tx)))
const result = Client().transaction(txCallback, { behavior: options?.behavior })
for (const effect of effects) effect()
return result as NotPromise<T>
}
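In the hunk above, `InstanceState.bind` wraps callbacks (the deferred transaction effects and the transaction body itself) so the ambient instance context is captured at registration time and restored when the callback fires later, outside the original async scope. A sketch of that capture-and-restore pattern with Node's AsyncLocalStorage (the `bind` helper here is illustrative):

```typescript
import { AsyncLocalStorage } from "node:async_hooks"

const als = new AsyncLocalStorage<{ instance: string }>()

// Capture the context active right now and re-enter it whenever fn runs.
function bind<A extends unknown[], R>(fn: (...args: A) => R): (...args: A) => R {
  const captured = als.getStore()
  return (...args) =>
    captured ? als.run(captured, () => fn(...args)) : fn(...args)
}

const deferred: Array<() => string> = []

als.run({ instance: "abc" }, () => {
  // Registered inside the context, executed after it has exited.
  deferred.push(bind(() => als.getStore()?.instance ?? "none"))
})

const results = deferred.map((fn) => fn())
```

Without the `bind` wrapper, the deferred callback would run after `als.run` returns and see no store at all; with it, the context survives the deferral.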


@@ -1,19 +1,17 @@
import { Log } from "../util/log"
import path from "path"
import fs from "fs/promises"
import { Global } from "../global"
import { Filesystem } from "../util/filesystem"
import { lazy } from "../util/lazy"
import { Lock } from "../util/lock"
import { NamedError } from "@opencode-ai/util/error"
import z from "zod"
import { Glob } from "../util/glob"
import { git } from "@/util/git"
import { AppFileSystem } from "@/filesystem"
import { makeRuntime } from "@/effect/run-service"
import { Effect, Exit, Layer, Option, RcMap, Schema, ServiceMap, TxReentrantLock } from "effect"
export namespace Storage {
const log = Log.create({ service: "storage" })
type Migration = (dir: string) => Promise<void>
type Migration = (dir: string, fs: AppFileSystem.Interface) => Effect.Effect<void, AppFileSystem.Error>
export const NotFoundError = NamedError.create(
"NotFoundError",
@@ -22,36 +20,101 @@ export namespace Storage {
}),
)
export type Error = AppFileSystem.Error | InstanceType<typeof NotFoundError>
const RootFile = Schema.Struct({
path: Schema.optional(
Schema.Struct({
root: Schema.optional(Schema.String),
}),
),
})
const SessionFile = Schema.Struct({
id: Schema.String,
})
const MessageFile = Schema.Struct({
id: Schema.String,
})
const DiffFile = Schema.Struct({
additions: Schema.Number,
deletions: Schema.Number,
})
const SummaryFile = Schema.Struct({
id: Schema.String,
projectID: Schema.String,
summary: Schema.Struct({ diffs: Schema.Array(DiffFile) }),
})
const decodeRoot = Schema.decodeUnknownOption(RootFile)
const decodeSession = Schema.decodeUnknownOption(SessionFile)
const decodeMessage = Schema.decodeUnknownOption(MessageFile)
const decodeSummary = Schema.decodeUnknownOption(SummaryFile)
export interface Interface {
readonly remove: (key: string[]) => Effect.Effect<void, AppFileSystem.Error>
readonly read: <T>(key: string[]) => Effect.Effect<T, Error>
readonly update: <T>(key: string[], fn: (draft: T) => void) => Effect.Effect<T, Error>
readonly write: <T>(key: string[], content: T) => Effect.Effect<void, AppFileSystem.Error>
readonly list: (prefix: string[]) => Effect.Effect<string[][], AppFileSystem.Error>
}
export class Service extends ServiceMap.Service<Service, Interface>()("@opencode/Storage") {}
function file(dir: string, key: string[]) {
return path.join(dir, ...key) + ".json"
}
function missing(err: unknown) {
if (!err || typeof err !== "object") return false
if ("code" in err && err.code === "ENOENT") return true
if ("reason" in err && err.reason && typeof err.reason === "object" && "_tag" in err.reason) {
return err.reason._tag === "NotFound"
}
return false
}
function parseMigration(text: string) {
const value = Number.parseInt(text, 10)
return Number.isNaN(value) ? 0 : value
}
const MIGRATIONS: Migration[] = [
async (dir) => {
Effect.fn("Storage.migration.1")(function* (dir: string, fs: AppFileSystem.Interface) {
const project = path.resolve(dir, "../project")
if (!(await Filesystem.isDir(project))) return
const projectDirs = await Glob.scan("*", {
if (!(yield* fs.isDir(project))) return
const projectDirs = yield* fs.glob("*", {
cwd: project,
include: "all",
})
for (const projectDir of projectDirs) {
const fullPath = path.join(project, projectDir)
if (!(await Filesystem.isDir(fullPath))) continue
const full = path.join(project, projectDir)
if (!(yield* fs.isDir(full))) continue
log.info(`migrating project ${projectDir}`)
let projectID = projectDir
const fullProjectDir = path.join(project, projectDir)
let worktree = "/"
if (projectID !== "global") {
for (const msgFile of await Glob.scan("storage/session/message/*/*.json", {
cwd: path.join(project, projectDir),
for (const msgFile of yield* fs.glob("storage/session/message/*/*.json", {
cwd: full,
absolute: true,
})) {
const json = await Filesystem.readJson<any>(msgFile)
worktree = json.path?.root
if (worktree) break
const json = decodeRoot(yield* fs.readJson(msgFile), { onExcessProperty: "preserve" })
const root = Option.isSome(json) ? json.value.path?.root : undefined
if (!root) continue
worktree = root
break
}
if (!worktree) continue
if (!(await Filesystem.isDir(worktree))) continue
const result = await git(["rev-list", "--max-parents=0", "--all"], {
cwd: worktree,
})
if (!(yield* fs.isDir(worktree))) continue
const result = yield* Effect.promise(() =>
git(["rev-list", "--max-parents=0", "--all"], {
cwd: worktree,
}),
)
const [id] = result
.text()
.split("\n")
@@ -61,157 +124,230 @@ export namespace Storage {
if (!id) continue
projectID = id
await Filesystem.writeJson(path.join(dir, "project", projectID + ".json"), {
id,
vcs: "git",
worktree,
time: {
created: Date.now(),
initialized: Date.now(),
},
})
yield* fs.writeWithDirs(
path.join(dir, "project", projectID + ".json"),
JSON.stringify(
{
id,
vcs: "git",
worktree,
time: {
created: Date.now(),
initialized: Date.now(),
},
},
null,
2,
),
)
log.info(`migrating sessions for project ${projectID}`)
for (const sessionFile of await Glob.scan("storage/session/info/*.json", {
cwd: fullProjectDir,
for (const sessionFile of yield* fs.glob("storage/session/info/*.json", {
cwd: full,
absolute: true,
})) {
const dest = path.join(dir, "session", projectID, path.basename(sessionFile))
log.info("copying", {
sessionFile,
dest,
})
const session = await Filesystem.readJson<any>(sessionFile)
await Filesystem.writeJson(dest, session)
log.info(`migrating messages for session ${session.id}`)
for (const msgFile of await Glob.scan(`storage/session/message/${session.id}/*.json`, {
cwd: fullProjectDir,
log.info("copying", { sessionFile, dest })
const session = yield* fs.readJson(sessionFile)
const info = decodeSession(session, { onExcessProperty: "preserve" })
yield* fs.writeWithDirs(dest, JSON.stringify(session, null, 2))
if (Option.isNone(info)) continue
log.info(`migrating messages for session ${info.value.id}`)
for (const msgFile of yield* fs.glob(`storage/session/message/${info.value.id}/*.json`, {
cwd: full,
absolute: true,
})) {
const dest = path.join(dir, "message", session.id, path.basename(msgFile))
const next = path.join(dir, "message", info.value.id, path.basename(msgFile))
log.info("copying", {
msgFile,
dest,
dest: next,
})
const message = await Filesystem.readJson<any>(msgFile)
await Filesystem.writeJson(dest, message)
const message = yield* fs.readJson(msgFile)
const item = decodeMessage(message, { onExcessProperty: "preserve" })
yield* fs.writeWithDirs(next, JSON.stringify(message, null, 2))
if (Option.isNone(item)) continue
log.info(`migrating parts for message ${message.id}`)
for (const partFile of await Glob.scan(`storage/session/part/${session.id}/${message.id}/*.json`, {
cwd: fullProjectDir,
log.info(`migrating parts for message ${item.value.id}`)
for (const partFile of yield* fs.glob(`storage/session/part/${info.value.id}/${item.value.id}/*.json`, {
cwd: full,
absolute: true,
})) {
const dest = path.join(dir, "part", message.id, path.basename(partFile))
const part = await Filesystem.readJson(partFile)
const out = path.join(dir, "part", item.value.id, path.basename(partFile))
const part = yield* fs.readJson(partFile)
log.info("copying", {
partFile,
dest,
dest: out,
})
await Filesystem.writeJson(dest, part)
yield* fs.writeWithDirs(out, JSON.stringify(part, null, 2))
}
}
}
}
}
},
async (dir) => {
for (const item of await Glob.scan("session/*/*.json", {
}),
Effect.fn("Storage.migration.2")(function* (dir: string, fs: AppFileSystem.Interface) {
for (const item of yield* fs.glob("session/*/*.json", {
cwd: dir,
absolute: true,
})) {
const session = await Filesystem.readJson<any>(item)
if (!session.projectID) continue
if (!session.summary?.diffs) continue
const { diffs } = session.summary
await Filesystem.write(path.join(dir, "session_diff", session.id + ".json"), JSON.stringify(diffs))
await Filesystem.writeJson(path.join(dir, "session", session.projectID, session.id + ".json"), {
...session,
summary: {
additions: diffs.reduce((sum: any, x: any) => sum + x.additions, 0),
deletions: diffs.reduce((sum: any, x: any) => sum + x.deletions, 0),
},
})
const raw = yield* fs.readJson(item)
const session = decodeSummary(raw, { onExcessProperty: "preserve" })
if (Option.isNone(session)) continue
const diffs = session.value.summary.diffs
yield* fs.writeWithDirs(
path.join(dir, "session_diff", session.value.id + ".json"),
JSON.stringify(diffs, null, 2),
)
yield* fs.writeWithDirs(
path.join(dir, "session", session.value.projectID, session.value.id + ".json"),
JSON.stringify(
{
...(raw as Record<string, unknown>),
summary: {
additions: diffs.reduce((sum, x) => sum + x.additions, 0),
deletions: diffs.reduce((sum, x) => sum + x.deletions, 0),
},
},
null,
2,
),
)
}
},
}),
]
const state = lazy(async () => {
const dir = path.join(Global.Path.data, "storage")
const migration = await Filesystem.readJson<string>(path.join(dir, "migration"))
.then((x) => parseInt(x))
.catch(() => 0)
for (let index = migration; index < MIGRATIONS.length; index++) {
log.info("running migration", { index })
const migration = MIGRATIONS[index]
await migration(dir).catch(() => log.error("failed to run migration", { index }))
await Filesystem.write(path.join(dir, "migration"), (index + 1).toString())
}
return {
dir,
}
})
export const layer = Layer.effect(
Service,
Effect.gen(function* () {
const fs = yield* AppFileSystem.Service
const locks = yield* RcMap.make({
lookup: () => TxReentrantLock.make(),
idleTimeToLive: 0,
})
const state = yield* Effect.cached(
Effect.gen(function* () {
const dir = path.join(Global.Path.data, "storage")
const marker = path.join(dir, "migration")
const migration = yield* fs.readFileString(marker).pipe(
Effect.map(parseMigration),
Effect.catchIf(missing, () => Effect.succeed(0)),
Effect.orElseSucceed(() => 0),
)
for (let i = migration; i < MIGRATIONS.length; i++) {
log.info("running migration", { index: i })
const step = MIGRATIONS[i]!
const exit = yield* Effect.exit(step(dir, fs))
if (Exit.isFailure(exit)) {
log.error("failed to run migration", { index: i, cause: exit.cause })
break
}
yield* fs.writeWithDirs(marker, String(i + 1))
}
return { dir }
}),
)
const fail = (target: string): Effect.Effect<never, InstanceType<typeof NotFoundError>> =>
Effect.fail(new NotFoundError({ message: `Resource not found: ${target}` }))
const wrap = <A>(target: string, body: Effect.Effect<A, AppFileSystem.Error>) =>
body.pipe(Effect.catchIf(missing, () => fail(target)))
const writeJson = Effect.fnUntraced(function* (target: string, content: unknown) {
yield* fs.writeWithDirs(target, JSON.stringify(content, null, 2))
})
const withResolved = <A, E>(
key: string[],
fn: (target: string, rw: TxReentrantLock.TxReentrantLock) => Effect.Effect<A, E>,
): Effect.Effect<A, E | AppFileSystem.Error> =>
Effect.scoped(
Effect.gen(function* () {
const target = file((yield* state).dir, key)
return yield* fn(target, yield* RcMap.get(locks, target))
}),
)
const remove: Interface["remove"] = Effect.fn("Storage.remove")(function* (key: string[]) {
yield* withResolved(key, (target, rw) =>
TxReentrantLock.withWriteLock(rw, fs.remove(target).pipe(Effect.catchIf(missing, () => Effect.void))),
)
})
const read: Interface["read"] = <T>(key: string[]) =>
Effect.gen(function* () {
const value = yield* withResolved(key, (target, rw) =>
TxReentrantLock.withReadLock(rw, wrap(target, fs.readJson(target))),
)
return value as T
})
const update: Interface["update"] = <T>(key: string[], fn: (draft: T) => void) =>
Effect.gen(function* () {
const value = yield* withResolved(key, (target, rw) =>
TxReentrantLock.withWriteLock(
rw,
Effect.gen(function* () {
const content = yield* wrap(target, fs.readJson(target))
fn(content as T)
yield* writeJson(target, content)
return content
}),
),
)
return value as T
})
const write: Interface["write"] = (key: string[], content: unknown) =>
Effect.gen(function* () {
yield* withResolved(key, (target, rw) => TxReentrantLock.withWriteLock(rw, writeJson(target, content)))
})
const list: Interface["list"] = Effect.fn("Storage.list")(function* (prefix: string[]) {
const dir = (yield* state).dir
const cwd = path.join(dir, ...prefix)
const result = yield* fs
.glob("**/*", {
cwd,
include: "file",
})
.pipe(Effect.catch(() => Effect.succeed<string[]>([])))
return result
.map((x) => [...prefix, ...x.slice(0, -5).split(path.sep)])
.toSorted((a, b) => a.join("/").localeCompare(b.join("/")))
})
return Service.of({
remove,
read,
update,
write,
list,
})
}),
)
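The layer above keys a `TxReentrantLock` per resolved file path via `RcMap`, so operations on the same storage key serialize while different keys proceed independently. A much-simplified promise-chain analog of that per-key write lock (a sketch, not the Effect implementation):

```typescript
// Per-key serialization: each key's operations chain onto its own tail promise.
const tails = new Map<string, Promise<unknown>>()

function withKeyLock<T>(key: string, body: () => Promise<T>): Promise<T> {
  const prev = tails.get(key) ?? Promise.resolve()
  // Run body only after the previous holder finishes, success or failure.
  const next = prev.then(body, body)
  // Keep the chain alive even if body rejects.
  tails.set(key, next.catch(() => {}))
  return next
}

// Usage sketch: two writes to the same key run strictly in order.
const order: number[] = []
const done = Promise.all([
  withKeyLock("a.json", async () => { order.push(1) }),
  withKeyLock("a.json", async () => { order.push(2) }),
])
```

The Effect version adds what this sketch lacks: reader/writer distinction, reentrancy, and reference-counted cleanup of idle locks via `RcMap`.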
export const defaultLayer = layer.pipe(Layer.provide(AppFileSystem.defaultLayer))
const { runPromise } = makeRuntime(Service, defaultLayer)
export async function remove(key: string[]) {
const dir = await state().then((x) => x.dir)
const target = path.join(dir, ...key) + ".json"
return withErrorHandling(async () => {
await fs.unlink(target).catch(() => {})
})
return runPromise((svc) => svc.remove(key))
}
export async function read<T>(key: string[]) {
const dir = await state().then((x) => x.dir)
const target = path.join(dir, ...key) + ".json"
return withErrorHandling(async () => {
using _ = await Lock.read(target)
const result = await Filesystem.readJson<T>(target)
return result as T
})
return runPromise((svc) => svc.read<T>(key))
}
export async function update<T>(key: string[], fn: (draft: T) => void) {
const dir = await state().then((x) => x.dir)
const target = path.join(dir, ...key) + ".json"
return withErrorHandling(async () => {
using _ = await Lock.write(target)
const content = await Filesystem.readJson<T>(target)
fn(content as T)
await Filesystem.writeJson(target, content)
return content
})
return runPromise((svc) => svc.update<T>(key, fn))
}
export async function write<T>(key: string[], content: T) {
const dir = await state().then((x) => x.dir)
const target = path.join(dir, ...key) + ".json"
return withErrorHandling(async () => {
using _ = await Lock.write(target)
await Filesystem.writeJson(target, content)
})
}
async function withErrorHandling<T>(body: () => Promise<T>) {
return body().catch((e) => {
if (!(e instanceof Error)) throw e
const errnoException = e as NodeJS.ErrnoException
if (errnoException.code === "ENOENT") {
throw new NotFoundError({ message: `Resource not found: ${errnoException.path}` })
}
throw e
})
return runPromise((svc) => svc.write(key, content))
}
export async function list(prefix: string[]) {
const dir = await state().then((x) => x.dir)
try {
const result = await Glob.scan("**/*", {
cwd: path.join(dir, ...prefix),
include: "file",
}).then((results) => results.map((x) => [...prefix, ...x.slice(0, -5).split(path.sep)]))
result.sort()
return result
} catch {
return []
}
return runPromise((svc) => svc.list(prefix))
}
}
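The migration runner above persists a numeric marker file and replays only migrations at or beyond it, stopping without advancing the marker on the first failure so the failed step is retried on the next boot. A storage-agnostic sketch of that pattern (the `marker` interface is illustrative):

```typescript
type Migration = (dir: string) => void

// Parse the persisted marker; anything unreadable counts as "start from 0".
function parseMarker(text: string | undefined): number {
  const value = Number.parseInt(text ?? "", 10)
  return Number.isNaN(value) ? 0 : value
}

// Run pending migrations, advancing the marker after each success and
// stopping at the first failure.
function runMigrations(
  migrations: Migration[],
  marker: { read(): string | undefined; write(v: string): void },
  dir: string,
): number {
  let index = parseMarker(marker.read())
  for (; index < migrations.length; index++) {
    try {
      migrations[index]!(dir)
    } catch {
      break
    }
    marker.write(String(index + 1))
  }
  return index
}
```

Note the ordering: the marker is written only after a migration succeeds, which is what makes a crashed or failed step re-runnable.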


@@ -18,6 +18,7 @@ import { NodePath } from "@effect/platform-node"
import { AppFileSystem } from "@/filesystem"
import { makeRuntime } from "@/effect/run-service"
import * as CrossSpawnSpawner from "@/effect/cross-spawn-spawner"
import { InstanceState } from "@/effect/instance-state"
export namespace Worktree {
const log = Log.create({ service: "worktree" })
@@ -199,6 +200,7 @@ export namespace Worktree {
const MAX_NAME_ATTEMPTS = 26
const candidate = Effect.fn("Worktree.candidate")(function* (root: string, base?: string) {
const ctx = yield* InstanceState.context
for (const attempt of Array.from({ length: MAX_NAME_ATTEMPTS }, (_, i) => i)) {
const name = base ? (attempt === 0 ? base : `${base}-${Slug.create()}`) : Slug.create()
const branch = `opencode/${name}`
@@ -207,7 +209,7 @@ export namespace Worktree {
if (yield* fs.exists(directory).pipe(Effect.orDie)) continue
const ref = `refs/heads/${branch}`
const branchCheck = yield* git(["show-ref", "--verify", "--quiet", ref], { cwd: Instance.worktree })
const branchCheck = yield* git(["show-ref", "--verify", "--quiet", ref], { cwd: ctx.worktree })
if (branchCheck.code === 0) continue
return Info.parse({ name, branch, directory })
@@ -216,11 +218,12 @@ export namespace Worktree {
})
const makeWorktreeInfo = Effect.fn("Worktree.makeWorktreeInfo")(function* (name?: string) {
if (Instance.project.vcs !== "git") {
const ctx = yield* InstanceState.context
if (ctx.project.vcs !== "git") {
throw new NotGitError({ message: "Worktrees are only supported for git projects" })
}
const root = pathSvc.join(Global.Path.data, "worktree", Instance.project.id)
const root = pathSvc.join(Global.Path.data, "worktree", ctx.project.id)
yield* fs.makeDirectory(root, { recursive: true }).pipe(Effect.orDie)
const base = name ? slugify(name) : ""
@@ -228,18 +231,20 @@ export namespace Worktree {
})
const setup = Effect.fnUntraced(function* (info: Info) {
const ctx = yield* InstanceState.context
const created = yield* git(["worktree", "add", "--no-checkout", "-b", info.branch, info.directory], {
cwd: Instance.worktree,
cwd: ctx.worktree,
})
if (created.code !== 0) {
throw new CreateFailedError({ message: created.stderr || created.text || "Failed to create git worktree" })
}
yield* project.addSandbox(Instance.project.id, info.directory).pipe(Effect.catch(() => Effect.void))
yield* project.addSandbox(ctx.project.id, info.directory).pipe(Effect.catch(() => Effect.void))
})
const boot = Effect.fnUntraced(function* (info: Info, startCommand?: string) {
const projectID = Instance.project.id
const ctx = yield* InstanceState.context
const projectID = ctx.project.id
const extra = startCommand?.trim()
const populated = yield* git(["reset", "--hard"], { cwd: info.directory })
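The `candidate` function above tries up to 26 generated names, skipping any whose directory or branch ref already exists. A sketch of that retry loop with injected existence checks (`pickCandidate` and the path shapes are illustrative):

```typescript
interface Candidate {
  name: string
  branch: string
  directory: string
}

// Try the base name first, then slug-suffixed variants, giving up
// after maxAttempts (the diff uses MAX_NAME_ATTEMPTS = 26).
function pickCandidate(
  base: string,
  slug: () => string,
  taken: (c: Candidate) => boolean,
  maxAttempts = 26,
): Candidate | undefined {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const name = attempt === 0 ? base : `${base}-${slug()}`
    const candidate: Candidate = {
      name,
      branch: `opencode/${name}`,
      directory: `/data/worktree/${name}`,
    }
    if (taken(candidate)) continue
    return candidate
  }
  return undefined
}
```

In the real code, `taken` corresponds to the directory existence check plus `git show-ref --verify --quiet refs/heads/<branch>` returning exit code 0.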


@@ -16,21 +16,21 @@ const truncate = Layer.effectDiscard(
const it = testEffect(Layer.merge(AccountRepo.layer, truncate))
it.effect("list returns empty when no accounts exist", () =>
it.live("list returns empty when no accounts exist", () =>
Effect.gen(function* () {
const accounts = yield* AccountRepo.use((r) => r.list())
expect(accounts).toEqual([])
}),
)
it.effect("active returns none when no accounts exist", () =>
it.live("active returns none when no accounts exist", () =>
Effect.gen(function* () {
const active = yield* AccountRepo.use((r) => r.active())
expect(Option.isNone(active)).toBe(true)
}),
)
it.effect("persistAccount inserts and getRow retrieves", () =>
it.live("persistAccount inserts and getRow retrieves", () =>
Effect.gen(function* () {
const id = AccountID.make("user-1")
yield* AccountRepo.use((r) =>
@@ -56,7 +56,7 @@ it.effect("persistAccount inserts and getRow retrieves", () =>
}),
)
it.effect("persistAccount sets the active account and org", () =>
it.live("persistAccount sets the active account and org", () =>
Effect.gen(function* () {
const id1 = AccountID.make("user-1")
const id2 = AccountID.make("user-2")
@@ -93,7 +93,7 @@ it.effect("persistAccount sets the active account and org", () =>
}),
)
it.effect("list returns all accounts", () =>
it.live("list returns all accounts", () =>
Effect.gen(function* () {
const id1 = AccountID.make("user-1")
const id2 = AccountID.make("user-2")
@@ -128,7 +128,7 @@ it.effect("list returns all accounts", () =>
}),
)
it.effect("remove deletes an account", () =>
it.live("remove deletes an account", () =>
Effect.gen(function* () {
const id = AccountID.make("user-1")
@@ -151,7 +151,7 @@ it.effect("remove deletes an account", () =>
}),
)
it.effect("use stores the selected org and marks the account active", () =>
it.live("use stores the selected org and marks the account active", () =>
Effect.gen(function* () {
const id1 = AccountID.make("user-1")
const id2 = AccountID.make("user-2")
@@ -191,7 +191,7 @@ it.effect("use stores the selected org and marks the account active", () =>
}),
)
it.effect("persistToken updates token fields", () =>
it.live("persistToken updates token fields", () =>
Effect.gen(function* () {
const id = AccountID.make("user-1")
@@ -225,7 +225,7 @@ it.effect("persistToken updates token fields", () =>
}),
)
it.effect("persistToken with no expiry sets token_expiry to null", () =>
it.live("persistToken with no expiry sets token_expiry to null", () =>
Effect.gen(function* () {
const id = AccountID.make("user-1")
@@ -255,7 +255,7 @@ it.effect("persistToken with no expiry sets token_expiry to null", () =>
}),
)
it.effect("persistAccount upserts on conflict", () =>
it.live("persistAccount upserts on conflict", () =>
Effect.gen(function* () {
const id = AccountID.make("user-1")
@@ -295,7 +295,7 @@ it.effect("persistAccount upserts on conflict", () =>
}),
)
it.effect("remove clears active state when deleting the active account", () =>
it.live("remove clears active state when deleting the active account", () =>
Effect.gen(function* () {
const id = AccountID.make("user-1")
@@ -318,7 +318,7 @@ it.effect("remove clears active state when deleting the active account", () =>
}),
)
it.effect("getRow returns none for nonexistent account", () =>
it.live("getRow returns none for nonexistent account", () =>
Effect.gen(function* () {
const row = yield* AccountRepo.use((r) => r.getRow(AccountID.make("nope")))
expect(Option.isNone(row)).toBe(true)

View File

@@ -54,7 +54,7 @@ const deviceTokenClient = (body: unknown, status = 400) =>
const poll = (body: unknown, status = 400) =>
Account.Service.use((s) => s.poll(login())).pipe(Effect.provide(live(deviceTokenClient(body, status))))
it.effect("orgsByAccount groups orgs per account", () =>
it.live("orgsByAccount groups orgs per account", () =>
Effect.gen(function* () {
yield* AccountRepo.use((r) =>
r.persistAccount({
@@ -107,7 +107,7 @@ it.effect("orgsByAccount groups orgs per account", () =>
}),
)
it.effect("token refresh persists the new token", () =>
it.live("token refresh persists the new token", () =>
Effect.gen(function* () {
const id = AccountID.make("user-1")
@@ -148,7 +148,7 @@ it.effect("token refresh persists the new token", () =>
}),
)
it.effect("config sends the selected org header", () =>
it.live("config sends the selected org header", () =>
Effect.gen(function* () {
const id = AccountID.make("user-1")
@@ -188,7 +188,7 @@ it.effect("config sends the selected org header", () =>
}),
)
it.effect("poll stores the account and first org on success", () =>
it.live("poll stores the account and first org on success", () =>
Effect.gen(function* () {
const client = HttpClient.make((req) =>
Effect.succeed(
@@ -259,7 +259,7 @@ for (const [name, body, expectedTag] of [
"PollExpired",
],
] as const) {
it.effect(`poll returns ${name} for ${body.error}`, () =>
it.live(`poll returns ${name} for ${body.error}`, () =>
Effect.gen(function* () {
const result = yield* poll(body)
expect(result._tag).toBe(expectedTag)
@@ -267,7 +267,7 @@ for (const [name, body, expectedTag] of [
)
}
it.effect("poll returns poll error for other OAuth errors", () =>
it.live("poll returns poll error for other OAuth errors", () =>
Effect.gen(function* () {
const result = yield* poll({
error: "server_error",

View File

@@ -1,6 +1,10 @@
import { describe, expect, test } from "bun:test"
import { describe, expect, spyOn, test } from "bun:test"
import fs from "fs/promises"
import path from "path"
import { BunProc } from "../src/bun"
import { PackageRegistry } from "../src/bun/registry"
import { Global } from "../src/global"
import { Process } from "../src/util/process"
describe("BunProc registry configuration", () => {
test("should not contain hardcoded registry parameters", async () => {
@@ -51,3 +55,83 @@ describe("BunProc registry configuration", () => {
}
})
})
describe("BunProc install pinning", () => {
test("uses pinned cache without touching registry", async () => {
const pkg = `pin-test-${Date.now().toString(36)}-${Math.random().toString(36).slice(2, 8)}`
const ver = "1.2.3"
const mod = path.join(Global.Path.cache, "node_modules", pkg)
const data = path.join(Global.Path.cache, "package.json")
await fs.mkdir(mod, { recursive: true })
await Bun.write(path.join(mod, "package.json"), JSON.stringify({ name: pkg, version: ver }, null, 2))
const src = await fs.readFile(data, "utf8").catch(() => "")
const json = src ? ((JSON.parse(src) as { dependencies?: Record<string, string> }) ?? {}) : {}
const deps = json.dependencies ?? {}
deps[pkg] = ver
await Bun.write(data, JSON.stringify({ ...json, dependencies: deps }, null, 2))
const stale = spyOn(PackageRegistry, "isOutdated").mockImplementation(async () => {
throw new Error("unexpected registry check")
})
const run = spyOn(Process, "run").mockImplementation(async () => {
throw new Error("unexpected process.run")
})
try {
const out = await BunProc.install(pkg, ver)
expect(out).toBe(mod)
expect(stale).not.toHaveBeenCalled()
expect(run).not.toHaveBeenCalled()
} finally {
stale.mockRestore()
run.mockRestore()
await fs.rm(mod, { recursive: true, force: true })
const end = await fs
.readFile(data, "utf8")
.then((item) => JSON.parse(item) as { dependencies?: Record<string, string> })
.catch(() => undefined)
if (end?.dependencies) {
delete end.dependencies[pkg]
await Bun.write(data, JSON.stringify(end, null, 2))
}
}
})
test("passes --ignore-scripts when requested", async () => {
const pkg = `ignore-test-${Date.now().toString(36)}-${Math.random().toString(36).slice(2, 8)}`
const ver = "4.5.6"
const mod = path.join(Global.Path.cache, "node_modules", pkg)
const data = path.join(Global.Path.cache, "package.json")
const run = spyOn(Process, "run").mockImplementation(async () => ({
code: 0,
stdout: Buffer.alloc(0),
stderr: Buffer.alloc(0),
}))
try {
await fs.rm(mod, { recursive: true, force: true })
await BunProc.install(pkg, ver, { ignoreScripts: true })
expect(run).toHaveBeenCalled()
const call = run.mock.calls[0]?.[0]
expect(call).toContain("--ignore-scripts")
expect(call).toContain(`${pkg}@${ver}`)
} finally {
run.mockRestore()
await fs.rm(mod, { recursive: true, force: true })
const end = await fs
.readFile(data, "utf8")
.then((item) => JSON.parse(item) as { dependencies?: Record<string, string> })
.catch(() => undefined)
if (end?.dependencies) {
delete end.dependencies[pkg]
await Bun.write(data, JSON.stringify(end, null, 2))
}
}
})
})
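The pin test above asserts that `BunProc.install` short-circuits when the cached `node_modules/<pkg>/package.json` already reports the requested version, without hitting the registry or spawning a process. A stdlib-only sketch of that guard (the `pinnedSatisfied` helper is hypothetical, not the real implementation):

```typescript
import * as fs from "fs/promises"
import * as os from "os"
import * as path from "path"

// Hypothetical guard: a pinned install can skip the registry check and the
// install process entirely when the cached copy already matches exactly.
async function pinnedSatisfied(cacheDir: string, pkg: string, version: string): Promise<boolean> {
  const manifest = path.join(cacheDir, "node_modules", pkg, "package.json")
  const raw = await fs.readFile(manifest, "utf8").catch(() => undefined)
  if (!raw) return false
  const json = JSON.parse(raw) as { version?: string }
  return json.version === version
}

async function demo() {
  const cache = await fs.mkdtemp(path.join(os.tmpdir(), "pin-demo-"))
  const mod = path.join(cache, "node_modules", "demo-pkg")
  await fs.mkdir(mod, { recursive: true })
  await fs.writeFile(path.join(mod, "package.json"), JSON.stringify({ name: "demo-pkg", version: "1.2.3" }))
  console.log(await pinnedSatisfied(cache, "demo-pkg", "1.2.3")) // exact match: skip install
  console.log(await pinnedSatisfied(cache, "demo-pkg", "9.9.9")) // mismatch: fall through to install
  await fs.rm(cache, { recursive: true, force: true })
}
demo()
```

The test's spies on `PackageRegistry.isOutdated` and `Process.run` then verify the fast path by throwing if either is ever reached.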

View File

@@ -22,7 +22,7 @@ const live = Layer.mergeAll(Bus.layer, node)
const it = testEffect(live)
describe("Bus (Effect-native)", () => {
it.effect("publish + subscribe stream delivers events", () =>
it.live("publish + subscribe stream delivers events", () =>
provideTmpdirInstance(() =>
Effect.gen(function* () {
const bus = yield* Bus.Service
@@ -46,7 +46,7 @@ describe("Bus (Effect-native)", () => {
),
)
it.effect("subscribe filters by event type", () =>
it.live("subscribe filters by event type", () =>
provideTmpdirInstance(() =>
Effect.gen(function* () {
const bus = yield* Bus.Service
@@ -70,7 +70,7 @@ describe("Bus (Effect-native)", () => {
),
)
it.effect("subscribeAll receives all types", () =>
it.live("subscribeAll receives all types", () =>
provideTmpdirInstance(() =>
Effect.gen(function* () {
const bus = yield* Bus.Service
@@ -95,7 +95,7 @@ describe("Bus (Effect-native)", () => {
),
)
it.effect("multiple subscribers each receive the event", () =>
it.live("multiple subscribers each receive the event", () =>
provideTmpdirInstance(() =>
Effect.gen(function* () {
const bus = yield* Bus.Service
@@ -129,7 +129,7 @@ describe("Bus (Effect-native)", () => {
),
)
it.effect("subscribeAll stream sees InstanceDisposed on disposal", () =>
it.live("subscribeAll stream sees InstanceDisposed on disposal", () =>
Effect.gen(function* () {
const dir = yield* tmpdirScoped()
const types: string[] = []

View File

@@ -21,8 +21,12 @@ test("installs plugin without loading it", async () => {
{
name: "demo-install-plugin",
type: "module",
main: "./install-plugin.ts",
"oc-plugin": [["tui", { marker }]],
exports: {
"./tui": {
import: "./install-plugin.ts",
config: { marker },
},
},
},
null,
2,
@@ -46,7 +50,7 @@ test("installs plugin without loading it", async () => {
})
process.env.OPENCODE_PLUGIN_META_FILE = path.join(tmp.path, "plugin-meta.json")
let cfg: Awaited<ReturnType<typeof TuiConfig.get>> = {
const cfg: Awaited<ReturnType<typeof TuiConfig.get>> = {
plugin: [],
plugin_records: undefined,
}
@@ -66,17 +70,6 @@ test("installs plugin without loading it", async () => {
try {
await TuiPluginRuntime.init(api)
cfg = {
plugin: [[tmp.extra.spec, { marker: tmp.extra.marker }]],
plugin_records: [
{
item: [tmp.extra.spec, { marker: tmp.extra.marker }],
scope: "local",
source: path.join(tmp.path, "tui.json"),
},
],
}
const out = await TuiPluginRuntime.installPlugin(tmp.extra.spec)
expect(out).toMatchObject({
ok: true,

View File

@@ -304,17 +304,23 @@ test("does not use npm package main for tui entry", async () => {
const wait = spyOn(TuiConfig, "waitForDependencies").mockResolvedValue()
const cwd = spyOn(process, "cwd").mockImplementation(() => tmp.path)
const install = spyOn(BunProc, "install").mockResolvedValue(tmp.extra.mod)
const warn = spyOn(console, "warn").mockImplementation(() => {})
const error = spyOn(console, "error").mockImplementation(() => {})
try {
await TuiPluginRuntime.init(createTuiPluginApi())
await expect(fs.readFile(tmp.extra.marker, "utf8")).rejects.toThrow()
expect(TuiPluginRuntime.list().some((item) => item.spec === tmp.extra.spec)).toBe(false)
expect(error).not.toHaveBeenCalled()
expect(warn.mock.calls.some((call) => String(call[0]).includes("tui plugin has no entrypoint"))).toBe(true)
} finally {
await TuiPluginRuntime.dispose()
install.mockRestore()
cwd.mockRestore()
get.mockRestore()
wait.mockRestore()
warn.mockRestore()
error.mockRestore()
delete process.env.OPENCODE_PLUGIN_META_FILE
}
})
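The test above expects a "tui plugin has no entrypoint" warning when a package declares only `main`, since the new manifest shape routes the entry through `exports["./tui"]`. A hypothetical stdlib resolver for that shape (names and types are illustrative, not the runtime's actual API):

```typescript
// Hypothetical resolver for the manifest shape used above: the tui entry
// must come from exports["./tui"], never from the generic npm "main" field.
type Manifest = {
  main?: string
  exports?: Record<string, { import?: string; config?: unknown } | string>
}

function resolveTuiEntry(pkg: Manifest): { entry: string; config?: unknown } | undefined {
  const exp = pkg.exports?.["./tui"]
  if (typeof exp === "string") return { entry: exp }
  if (exp?.import) return { entry: exp.import, config: exp.config }
  // Deliberately ignore pkg.main: a package with only a generic entrypoint
  // is not a tui plugin, which is the case the warning above covers.
  return undefined
}

const withExport = resolveTuiEntry({
  exports: { "./tui": { import: "./install-plugin.ts", config: { marker: "m" } } },
})
const mainOnly = resolveTuiEntry({ main: "./index.js" })
console.log(withExport?.entry) // "./install-plugin.ts"
if (!mainOnly) console.warn("tui plugin has no entrypoint")
```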

View File

@@ -792,6 +792,7 @@ test("installs dependencies in writable OPENCODE_CONFIG_DIR", async () => {
expect(await Filesystem.exists(path.join(tmp.extra, "package.json"))).toBe(true)
expect(await Filesystem.exists(path.join(tmp.extra, ".gitignore"))).toBe(true)
expect(await Filesystem.readText(path.join(tmp.extra, ".gitignore"))).toContain("package-lock.json")
} finally {
online.mockRestore()
run.mockRestore()

View File

@@ -1,6 +1,7 @@
import { afterEach, expect, test } from "bun:test"
import { Duration, Effect, Layer, ManagedRuntime, ServiceMap } from "effect"
import { Cause, Deferred, Duration, Effect, Exit, Fiber, Layer, ManagedRuntime, ServiceMap } from "effect"
import { InstanceState } from "../../src/effect/instance-state"
import { InstanceRef } from "../../src/effect/instance-ref"
import { Instance } from "../../src/project/instance"
import { tmpdir } from "../fixture/fixture"
@@ -382,3 +383,100 @@ test("InstanceState dedupes concurrent lookups", async () => {
),
)
})
test("InstanceState survives deferred resume from the same instance context", async () => {
await using tmp = await tmpdir({ git: true })
interface Api {
readonly get: (gate: Deferred.Deferred<void>) => Effect.Effect<string>
}
class Test extends ServiceMap.Service<Test, Api>()("@test/DeferredResume") {
static readonly layer = Layer.effect(
Test,
Effect.gen(function* () {
const state = yield* InstanceState.make((ctx) => Effect.sync(() => ctx.directory))
return Test.of({
get: Effect.fn("Test.get")(function* (gate: Deferred.Deferred<void>) {
yield* Deferred.await(gate)
return yield* InstanceState.get(state)
}),
})
}),
)
}
const rt = ManagedRuntime.make(Test.layer)
try {
const gate = await Effect.runPromise(Deferred.make<void>())
const fiber = await Instance.provide({
directory: tmp.path,
fn: () => Promise.resolve(rt.runFork(Test.use((svc) => svc.get(gate)))),
})
await Instance.provide({
directory: tmp.path,
fn: () => Effect.runPromise(Deferred.succeed(gate, void 0)),
})
const exit = await Effect.runPromise(Fiber.await(fiber))
expect(Exit.isSuccess(exit)).toBe(true)
if (Exit.isSuccess(exit)) {
expect(exit.value).toBe(tmp.path)
}
} finally {
await rt.dispose()
}
})
test("InstanceState survives deferred resume outside ALS when InstanceRef is set", async () => {
await using tmp = await tmpdir({ git: true })
interface Api {
readonly get: (gate: Deferred.Deferred<void>) => Effect.Effect<string>
}
class Test extends ServiceMap.Service<Test, Api>()("@test/DeferredResumeOutside") {
static readonly layer = Layer.effect(
Test,
Effect.gen(function* () {
const state = yield* InstanceState.make((ctx) => Effect.sync(() => ctx.directory))
return Test.of({
get: Effect.fn("Test.get")(function* (gate: Deferred.Deferred<void>) {
yield* Deferred.await(gate)
return yield* InstanceState.get(state)
}),
})
}),
)
}
const rt = ManagedRuntime.make(Test.layer)
try {
const gate = await Effect.runPromise(Deferred.make<void>())
// Provide InstanceRef so the fiber carries the context even when
// the deferred is resolved from outside Instance.provide ALS.
const fiber = await Instance.provide({
directory: tmp.path,
fn: () =>
Promise.resolve(
rt.runFork(Test.use((svc) => svc.get(gate)).pipe(Effect.provideService(InstanceRef, Instance.current))),
),
})
// Resume from outside any Instance.provide — ALS is NOT set here
await Effect.runPromise(Deferred.succeed(gate, void 0))
const exit = await Effect.runPromise(Fiber.await(fiber))
expect(Exit.isSuccess(exit)).toBe(true)
if (Exit.isSuccess(exit)) {
expect(exit.value).toBe(tmp.path)
}
} finally {
await rt.dispose()
}
})
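Both tests probe the same hazard: context set via `AsyncLocalStorage` by `Instance.provide` is dynamic, not lexical, so a continuation invoked from outside that call stack sees no store unless the instance also travels as an explicitly captured reference (which is what providing `InstanceRef` at fork time accomplishes). A stdlib-only sketch of the failure and the fix:

```typescript
import { AsyncLocalStorage } from "node:async_hooks"

const als = new AsyncLocalStorage<{ directory: string }>()
let resumeLater: (() => string | undefined) | undefined
let resumeWithRef: (() => string | undefined) | undefined

als.run({ directory: "/tmp/project" }, () => {
  // Lost: ALS is dynamic scoping — a plain closure invoked later from
  // outside the run() call stack sees no store.
  resumeLater = () => als.getStore()?.directory

  // Fix (InstanceRef in spirit): capture the context explicitly at creation
  // time and carry it along with the continuation.
  const ref = als.getStore()
  resumeWithRef = () => ref?.directory
})

// Invoked from outside any als.run — mirrors resolving the Deferred outside
// Instance.provide in the second test above.
console.log(resumeLater?.())   // undefined: context lost
console.log(resumeWithRef?.()) // "/tmp/project": explicit ref survives
```

Node's `AsyncResource.bind` performs the same explicit capture for callback-style APIs.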

View File

@@ -6,7 +6,7 @@ import { it } from "../lib/effect"
describe("Runner", () => {
// --- ensureRunning semantics ---
it.effect(
it.live(
"ensureRunning starts work and returns result",
Effect.gen(function* () {
const s = yield* Scope.Scope
@@ -18,7 +18,7 @@ describe("Runner", () => {
}),
)
it.effect(
it.live(
"ensureRunning propagates work failures",
Effect.gen(function* () {
const s = yield* Scope.Scope
@@ -29,7 +29,7 @@ describe("Runner", () => {
}),
)
it.effect(
it.live(
"concurrent callers share the same run",
Effect.gen(function* () {
const s = yield* Scope.Scope
@@ -51,7 +51,7 @@ describe("Runner", () => {
}),
)
it.effect(
it.live(
"concurrent callers all receive same error",
Effect.gen(function* () {
const s = yield* Scope.Scope
@@ -71,7 +71,7 @@ describe("Runner", () => {
}),
)
it.effect(
it.live(
"ensureRunning can be called again after previous run completes",
Effect.gen(function* () {
const s = yield* Scope.Scope
@@ -81,7 +81,7 @@ describe("Runner", () => {
}),
)
it.effect(
it.live(
"second ensureRunning ignores new work if already running",
Effect.gen(function* () {
const s = yield* Scope.Scope
@@ -110,7 +110,7 @@ describe("Runner", () => {
// --- cancel semantics ---
it.effect(
it.live(
"cancel interrupts running work",
Effect.gen(function* () {
const s = yield* Scope.Scope
@@ -128,7 +128,7 @@ describe("Runner", () => {
}),
)
it.effect(
it.live(
"cancel on idle is a no-op",
Effect.gen(function* () {
const s = yield* Scope.Scope
@@ -138,7 +138,7 @@ describe("Runner", () => {
}),
)
it.effect(
it.live(
"cancel with onInterrupt resolves callers gracefully",
Effect.gen(function* () {
const s = yield* Scope.Scope
@@ -154,7 +154,7 @@ describe("Runner", () => {
}),
)
it.effect(
it.live(
"cancel with queued callers resolves all",
Effect.gen(function* () {
const s = yield* Scope.Scope
@@ -175,7 +175,7 @@ describe("Runner", () => {
}),
)
it.effect(
it.live(
"work can be started after cancel",
Effect.gen(function* () {
const s = yield* Scope.Scope
@@ -245,7 +245,7 @@ describe("Runner", () => {
// --- shell semantics ---
it.effect(
it.live(
"shell runs exclusively",
Effect.gen(function* () {
const s = yield* Scope.Scope
@@ -256,7 +256,7 @@ describe("Runner", () => {
}),
)
it.effect(
it.live(
"shell rejects when run is active",
Effect.gen(function* () {
const s = yield* Scope.Scope
@@ -272,7 +272,7 @@ describe("Runner", () => {
}),
)
it.effect(
it.live(
"shell rejects when another shell is running",
Effect.gen(function* () {
const s = yield* Scope.Scope
@@ -292,7 +292,7 @@ describe("Runner", () => {
}),
)
it.effect(
it.live(
"shell rejects via busy callback and cancel still stops the first shell",
Effect.gen(function* () {
const s = yield* Scope.Scope
@@ -323,7 +323,7 @@ describe("Runner", () => {
}),
)
it.effect(
it.live(
"cancel interrupts shell that ignores abort signal",
Effect.gen(function* () {
const s = yield* Scope.Scope
@@ -349,7 +349,7 @@ describe("Runner", () => {
// --- shell→run handoff ---
it.effect(
it.live(
"ensureRunning queues behind shell then runs after",
Effect.gen(function* () {
const s = yield* Scope.Scope
@@ -376,7 +376,7 @@ describe("Runner", () => {
}),
)
it.effect(
it.live(
"multiple ensureRunning callers share the queued run behind shell",
Effect.gen(function* () {
const s = yield* Scope.Scope
@@ -407,7 +407,7 @@ describe("Runner", () => {
}),
)
it.effect(
it.live(
"cancel during shell_then_run cancels both",
Effect.gen(function* () {
const s = yield* Scope.Scope
@@ -441,7 +441,7 @@ describe("Runner", () => {
// --- lifecycle callbacks ---
it.effect(
it.live(
"onIdle fires when returning to idle from running",
Effect.gen(function* () {
const s = yield* Scope.Scope
@@ -454,7 +454,7 @@ describe("Runner", () => {
}),
)
it.effect(
it.live(
"onIdle fires on cancel",
Effect.gen(function* () {
const s = yield* Scope.Scope
@@ -470,7 +470,7 @@ describe("Runner", () => {
}),
)
it.effect(
it.live(
"onBusy fires when shell starts",
Effect.gen(function* () {
const s = yield* Scope.Scope
@@ -485,7 +485,7 @@ describe("Runner", () => {
// --- busy flag ---
it.effect(
it.live(
"busy is true during run",
Effect.gen(function* () {
const s = yield* Scope.Scope
@@ -502,7 +502,7 @@ describe("Runner", () => {
}),
)
it.effect(
it.live(
"busy is true during shell",
Effect.gen(function* () {
const s = yield* Scope.Scope

View File

@@ -2,10 +2,14 @@ import { $ } from "bun"
import * as fs from "fs/promises"
import os from "os"
import path from "path"
import { Effect, FileSystem, ServiceMap } from "effect"
import { Effect, ServiceMap } from "effect"
import type * as PlatformError from "effect/PlatformError"
import type * as Scope from "effect/Scope"
import { ChildProcess, ChildProcessSpawner } from "effect/unstable/process"
import type { Config } from "../../src/config/config"
import { InstanceRef } from "../../src/effect/instance-ref"
import { Instance } from "../../src/project/instance"
import { TestLLMServer } from "../lib/llm-server"
// Strip null bytes from paths (defensive fix for CI environment issues)
function sanitizePath(p: string): string {
@@ -78,9 +82,17 @@ export async function tmpdir<T>(options?: TmpDirOptions<T>) {
/** Effectful scoped tmpdir. Cleaned up when the scope closes. Make sure these stay in sync */
export function tmpdirScoped(options?: { git?: boolean; config?: Partial<Config.Info> }) {
return Effect.gen(function* () {
const fs = yield* FileSystem.FileSystem
const spawner = yield* ChildProcessSpawner.ChildProcessSpawner
const dir = yield* fs.makeTempDirectoryScoped({ prefix: "opencode-test-" })
const dirpath = sanitizePath(path.join(os.tmpdir(), "opencode-test-" + Math.random().toString(36).slice(2)))
yield* Effect.promise(() => fs.mkdir(dirpath, { recursive: true }))
const dir = sanitizePath(yield* Effect.promise(() => fs.realpath(dirpath)))
yield* Effect.addFinalizer(() =>
Effect.promise(async () => {
if (options?.git) await stop(dir).catch(() => undefined)
await clean(dir).catch(() => undefined)
}),
)
const git = (...args: string[]) =>
spawner.spawn(ChildProcess.make("git", args, { cwd: dir })).pipe(Effect.flatMap((handle) => handle.exitCode))
@@ -94,9 +106,11 @@ export function tmpdirScoped(options?: { git?: boolean; config?: Partial<Config.
}
if (options?.config) {
yield* fs.writeFileString(
path.join(dir, "opencode.json"),
JSON.stringify({ $schema: "https://opencode.ai/config.json", ...options.config }),
yield* Effect.promise(() =>
fs.writeFile(
path.join(dir, "opencode.json"),
JSON.stringify({ $schema: "https://opencode.ai/config.json", ...options.config }),
),
)
}
@@ -111,7 +125,7 @@ export const provideInstance =
Effect.promise<A>(async () =>
Instance.provide({
directory,
fn: () => Effect.runPromiseWith(services)(self),
fn: () => Effect.runPromiseWith(services)(self.pipe(Effect.provideService(InstanceRef, Instance.current))),
}),
),
)
@@ -139,3 +153,20 @@ export function provideTmpdirInstance<A, E, R>(
return yield* self(path).pipe(provideInstance(path))
})
}
export function provideTmpdirServer<A, E, R>(
self: (input: { dir: string; llm: TestLLMServer["Service"] }) => Effect.Effect<A, E, R>,
options?: { git?: boolean; config?: (url: string) => Partial<Config.Info> },
): Effect.Effect<
A,
E | PlatformError.PlatformError,
R | TestLLMServer | ChildProcessSpawner.ChildProcessSpawner | Scope.Scope
> {
return Effect.gen(function* () {
const llm = yield* TestLLMServer
return yield* provideTmpdirInstance((dir) => self({ dir, llm }), {
git: options?.git,
config: options?.config?.(llm.url),
})
})
}
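The fixture above swaps `makeTempDirectoryScoped` for a manual mkdir plus an `Effect.addFinalizer` that tolerates cleanup failures, and the commit log mentions retrying tempdir cleanup on Windows. A plain-Node sketch of the same pattern, assuming a hypothetical `withTmpdir` wrapper; the `maxRetries`/`retryDelay` options on `fs.rm` are real Node APIs:

```typescript
import * as fs from "fs/promises"
import * as os from "os"
import * as path from "path"

// Sketch of the scoped-tmpdir pattern: create under os.tmpdir(), resolve
// symlinks (macOS maps /var to /private/var), and clean up with retries —
// Windows can hold file locks briefly after child processes exit.
async function withTmpdir<T>(fn: (dir: string) => Promise<T>): Promise<T> {
  const raw = path.join(os.tmpdir(), "opencode-test-" + Math.random().toString(36).slice(2))
  await fs.mkdir(raw, { recursive: true })
  const dir = await fs.realpath(raw)
  try {
    return await fn(dir)
  } finally {
    // maxRetries/retryDelay make rm resilient to transient EBUSY/EPERM;
    // the trailing catch mirrors the finalizer's best-effort cleanup.
    await fs.rm(dir, { recursive: true, force: true, maxRetries: 5, retryDelay: 100 }).catch(() => undefined)
  }
}

withTmpdir(async (dir) => {
  await fs.writeFile(path.join(dir, "opencode.json"), JSON.stringify({ $schema: "https://opencode.ai/config.json" }))
  return dir
}).then(async (dir) => {
  const gone = await fs.stat(dir).then(() => false, () => true)
  console.log(gone) // true: directory removed when the scope closed
})
```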

View File

@@ -10,7 +10,7 @@ import * as Formatter from "../../src/format/formatter"
const it = testEffect(Layer.mergeAll(Format.defaultLayer, CrossSpawnSpawner.defaultLayer, NodeFileSystem.layer))
describe("Format", () => {
it.effect("status() returns built-in formatters when no config overrides", () =>
it.live("status() returns built-in formatters when no config overrides", () =>
provideTmpdirInstance(() =>
Format.Service.use((fmt) =>
Effect.gen(function* () {
@@ -32,7 +32,7 @@ describe("Format", () => {
),
)
it.effect("status() returns empty list when formatter is disabled", () =>
it.live("status() returns empty list when formatter is disabled", () =>
provideTmpdirInstance(
() =>
Format.Service.use((fmt) =>
@@ -44,7 +44,7 @@ describe("Format", () => {
),
)
it.effect("status() excludes formatters marked as disabled in config", () =>
it.live("status() excludes formatters marked as disabled in config", () =>
provideTmpdirInstance(
() =>
Format.Service.use((fmt) =>
@@ -64,11 +64,11 @@ describe("Format", () => {
),
)
it.effect("service initializes without error", () =>
it.live("service initializes without error", () =>
provideTmpdirInstance(() => Format.Service.use(() => Effect.void)),
)
it.effect("status() initializes formatter state per directory", () =>
it.live("status() initializes formatter state per directory", () =>
Effect.gen(function* () {
const a = yield* provideTmpdirInstance(() => Format.Service.use((fmt) => fmt.status()), {
config: { formatter: false },
@@ -80,7 +80,7 @@ describe("Format", () => {
}),
)
it.effect("runs enabled checks for matching formatters in parallel", () =>
it.live("runs enabled checks for matching formatters in parallel", () =>
provideTmpdirInstance((path) =>
Effect.gen(function* () {
const file = `${path}/test.parallel`
@@ -144,7 +144,7 @@ describe("Format", () => {
),
)
it.effect("runs matching formatters sequentially for the same file", () =>
it.live("runs matching formatters sequentially for the same file", () =>
provideTmpdirInstance(
(path) =>
Effect.gen(function* () {

View File

@@ -1,14 +1,14 @@
import { test, type TestOptions } from "bun:test"
import { Cause, Effect, Exit, Layer } from "effect"
import type * as Scope from "effect/Scope"
import * as TestClock from "effect/testing/TestClock"
import * as TestConsole from "effect/testing/TestConsole"
type Body<A, E, R> = Effect.Effect<A, E, R> | (() => Effect.Effect<A, E, R>)
const env = TestConsole.layer
const body = <A, E, R>(value: Body<A, E, R>) => Effect.suspend(() => (typeof value === "function" ? value() : value))
const run = <A, E, R, E2>(value: Body<A, E, R | Scope.Scope>, layer: Layer.Layer<R, E2, never>) =>
const run = <A, E, R, E2>(value: Body<A, E, R | Scope.Scope>, layer: Layer.Layer<R, E2>) =>
Effect.gen(function* () {
const exit = yield* body(value).pipe(Effect.scoped, Effect.provide(layer), Effect.exit)
if (Exit.isFailure(exit)) {
@@ -19,19 +19,35 @@ const run = <A, E, R, E2>(value: Body<A, E, R | Scope.Scope>, layer: Layer.Layer
return yield* exit
}).pipe(Effect.runPromise)
const make = <R, E>(layer: Layer.Layer<R, E, never>) => {
const make = <R, E>(testLayer: Layer.Layer<R, E>, liveLayer: Layer.Layer<R, E>) => {
const effect = <A, E2>(name: string, value: Body<A, E2, R | Scope.Scope>, opts?: number | TestOptions) =>
test(name, () => run(value, layer), opts)
test(name, () => run(value, testLayer), opts)
effect.only = <A, E2>(name: string, value: Body<A, E2, R | Scope.Scope>, opts?: number | TestOptions) =>
test.only(name, () => run(value, layer), opts)
test.only(name, () => run(value, testLayer), opts)
effect.skip = <A, E2>(name: string, value: Body<A, E2, R | Scope.Scope>, opts?: number | TestOptions) =>
test.skip(name, () => run(value, layer), opts)
test.skip(name, () => run(value, testLayer), opts)
return { effect }
const live = <A, E2>(name: string, value: Body<A, E2, R | Scope.Scope>, opts?: number | TestOptions) =>
test(name, () => run(value, liveLayer), opts)
live.only = <A, E2>(name: string, value: Body<A, E2, R | Scope.Scope>, opts?: number | TestOptions) =>
test.only(name, () => run(value, liveLayer), opts)
live.skip = <A, E2>(name: string, value: Body<A, E2, R | Scope.Scope>, opts?: number | TestOptions) =>
test.skip(name, () => run(value, liveLayer), opts)
return { effect, live }
}
export const it = make(env)
// Test environment with TestClock and TestConsole
const testEnv = Layer.mergeAll(TestConsole.layer, TestClock.layer())
export const testEffect = <R, E>(layer: Layer.Layer<R, E, never>) => make(Layer.provideMerge(layer, env))
// Live environment - uses real clock, but keeps TestConsole for output capture
const liveEnv = TestConsole.layer
export const it = make(testEnv, liveEnv)
export const testEffect = <R, E>(layer: Layer.Layer<R, E>) =>
make(Layer.provideMerge(layer, testEnv), Layer.provideMerge(layer, liveEnv))
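The harness above now builds each suite twice: `it.effect` runs against `TestClock` + `TestConsole`, while `it.live` keeps the real clock so tests that await real I/O (the HTTP mock LLM server, sqlite) don't deadlock on virtual time. A stdlib-only miniature of the virtual-clock idea (the `VirtualClock` class is illustrative, not effect's `TestClock`):

```typescript
// Miniature of the TestClock idea: timers fire only when the test advances
// virtual time, so time-based logic is deterministic — but anything awaiting
// a real wall-clock event would hang, which is what it.live exists for.
type Pending = { at: number; fire: () => void }

class VirtualClock {
  private now = 0
  private pending: Pending[] = []
  sleep(ms: number): Promise<void> {
    return new Promise((resolve) => this.pending.push({ at: this.now + ms, fire: resolve }))
  }
  advance(ms: number) {
    this.now += ms
    const due = this.pending.filter((p) => p.at <= this.now)
    this.pending = this.pending.filter((p) => p.at > this.now)
    for (const p of due) p.fire()
  }
}

;(async () => {
  const clock = new VirtualClock()
  let done = false
  clock.sleep(10_000).then(() => { done = true })
  clock.advance(9_999)
  await Promise.resolve()
  console.log(done) // false: 10s timer not yet due
  clock.advance(1)
  await Promise.resolve()
  console.log(done) // true: "10 seconds" elapsed instantly
})()
```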

View File

@@ -0,0 +1,422 @@
import { NodeHttpServer, NodeHttpServerRequest } from "@effect/platform-node"
import * as Http from "node:http"
import { Deferred, Effect, Layer, ServiceMap, Stream } from "effect"
import * as HttpServer from "effect/unstable/http/HttpServer"
import { HttpRouter, HttpServerRequest, HttpServerResponse } from "effect/unstable/http"
export type Usage = { input: number; output: number }
type Line = Record<string, unknown>
type Hit = {
url: URL
body: Record<string, unknown>
}
type Wait = {
count: number
ready: Deferred.Deferred<void>
}
type Sse = {
type: "sse"
head: unknown[]
tail: unknown[]
wait?: PromiseLike<unknown>
hang?: boolean
error?: unknown
reset?: boolean
}
type HttpError = {
type: "http-error"
status: number
body: unknown
}
export type Item = Sse | HttpError
const done = Symbol("done")
function line(input: unknown) {
if (input === done) return "data: [DONE]\n\n"
return `data: ${JSON.stringify(input)}\n\n`
}
function tokens(input?: Usage) {
if (!input) return
return {
prompt_tokens: input.input,
completion_tokens: input.output,
total_tokens: input.input + input.output,
}
}
function chunk(input: { delta?: Record<string, unknown>; finish?: string; usage?: Usage }) {
return {
id: "chatcmpl-test",
object: "chat.completion.chunk",
choices: [
{
delta: input.delta ?? {},
...(input.finish ? { finish_reason: input.finish } : {}),
},
],
...(input.usage ? { usage: tokens(input.usage) } : {}),
} satisfies Line
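Together, `line` and `chunk` produce the OpenAI-compatible streaming wire format: each SSE event is a `data: <json>\n\n` frame carrying a `chat.completion.chunk`, and the stream ends with `data: [DONE]\n\n`. A standalone sketch of one short streamed text reply, matching the shape these helpers emit:

```typescript
// The three frames a minimal text reply streams: role preamble, one content
// delta, then a finish chunk carrying usage — followed by the DONE sentinel.
const frames = [
  { choices: [{ delta: { role: "assistant" } }] },
  { choices: [{ delta: { content: "Hello" } }] },
  {
    choices: [{ delta: {}, finish_reason: "stop" }],
    usage: { prompt_tokens: 3, completion_tokens: 1, total_tokens: 4 },
  },
].map((c) => `data: ${JSON.stringify({ id: "chatcmpl-test", object: "chat.completion.chunk", ...c })}\n\n`)

const body = frames.join("") + "data: [DONE]\n\n"
console.log(body.split("\n\n").filter(Boolean).length) // 4 frames, including [DONE]
```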
}
function role() {
return chunk({ delta: { role: "assistant" } })
}
function textLine(value: string) {
return chunk({ delta: { content: value } })
}
function reasonLine(value: string) {
return chunk({ delta: { reasoning_content: value } })
}
function finishLine(reason: string, usage?: Usage) {
return chunk({ finish: reason, usage })
}
function toolStartLine(id: string, name: string) {
return chunk({
delta: {
tool_calls: [
{
index: 0,
id,
type: "function",
function: {
name,
arguments: "",
},
},
],
},
})
}
function toolArgsLine(value: string) {
return chunk({
delta: {
tool_calls: [
{
index: 0,
function: {
arguments: value,
},
},
],
},
})
}
function bytes(input: Iterable<unknown>) {
return Stream.fromIterable([...input].map(line)).pipe(Stream.encodeText)
}
function send(item: Sse) {
const head = bytes(item.head)
const tail = bytes([...item.tail, ...(item.hang || item.error ? [] : [done])])
const empty = Stream.fromIterable<Uint8Array>([])
const wait = item.wait
const body: Stream.Stream<Uint8Array, unknown> = wait
? Stream.concat(head, Stream.fromEffect(Effect.promise(() => wait)).pipe(Stream.flatMap(() => tail)))
: Stream.concat(head, tail)
let end: Stream.Stream<Uint8Array, unknown> = empty
if (item.error) end = Stream.concat(empty, Stream.fail(item.error))
else if (item.hang) end = Stream.concat(empty, Stream.never)
return HttpServerResponse.stream(Stream.concat(body, end), { contentType: "text/event-stream" })
}
const reset = Effect.fn("TestLLMServer.reset")(function* (item: Sse) {
const req = yield* HttpServerRequest.HttpServerRequest
const res = NodeHttpServerRequest.toServerResponse(req)
yield* Effect.sync(() => {
res.writeHead(200, { "content-type": "text/event-stream" })
for (const part of item.head) res.write(line(part))
for (const part of item.tail) res.write(line(part))
res.destroy(new Error("connection reset"))
})
yield* Effect.never
})
function fail(item: HttpError) {
return HttpServerResponse.text(JSON.stringify(item.body), {
status: item.status,
contentType: "application/json",
})
}
export class Reply {
#head: unknown[] = [role()]
#tail: unknown[] = []
#usage: Usage | undefined
#finish: string | undefined
#wait: PromiseLike<unknown> | undefined
#hang = false
#error: unknown
#reset = false
#seq = 0
#id() {
this.#seq += 1
return `call_${this.#seq}`
}
text(value: string) {
this.#tail = [...this.#tail, textLine(value)]
return this
}
reason(value: string) {
this.#tail = [...this.#tail, reasonLine(value)]
return this
}
usage(value: Usage) {
this.#usage = value
return this
}
wait(value: PromiseLike<unknown>) {
this.#wait = value
return this
}
stop() {
this.#finish = "stop"
this.#hang = false
this.#error = undefined
this.#reset = false
return this
}
toolCalls() {
this.#finish = "tool_calls"
this.#hang = false
this.#error = undefined
this.#reset = false
return this
}
tool(name: string, input: unknown) {
const id = this.#id()
const args = JSON.stringify(input)
this.#tail = [...this.#tail, toolStartLine(id, name), toolArgsLine(args)]
return this.toolCalls()
}
pendingTool(name: string, input: unknown) {
const id = this.#id()
const args = JSON.stringify(input)
const size = Math.max(1, Math.floor(args.length / 2))
this.#tail = [...this.#tail, toolStartLine(id, name), toolArgsLine(args.slice(0, size))]
return this
}
hang() {
this.#finish = undefined
this.#hang = true
this.#error = undefined
this.#reset = false
return this
}
streamError(error: unknown = "boom") {
this.#finish = undefined
this.#hang = false
this.#error = error
this.#reset = false
return this
}
reset() {
this.#finish = undefined
this.#hang = false
this.#error = undefined
this.#reset = true
return this
}
item(): Item {
return {
type: "sse",
head: this.#head,
tail: this.#finish ? [...this.#tail, finishLine(this.#finish, this.#usage)] : this.#tail,
wait: this.#wait,
hang: this.#hang,
error: this.#error,
reset: this.#reset,
}
}
}
export function reply() {
return new Reply()
}
export function httpError(status: number, body: unknown): Item {
return {
type: "http-error",
status,
body,
}
}
export function raw(input: {
chunks?: unknown[]
head?: unknown[]
tail?: unknown[]
wait?: PromiseLike<unknown>
hang?: boolean
error?: unknown
reset?: boolean
}): Item {
return {
type: "sse",
head: input.head ?? input.chunks ?? [],
tail: input.tail ?? [],
wait: input.wait,
hang: input.hang,
error: input.error,
reset: input.reset,
}
}
function item(input: Item | Reply) {
return input instanceof Reply ? input.item() : input
}
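For reference, the `reply()` builder above reduces to a plain data shape: `tail` accumulates content chunks, and a terminal `stop()` marks the finish reason that `item()` folds into the tail at build time. A self-contained sketch of that flow (`demoTextLine`/`demoFinishLine` are hypothetical stand-ins for the SSE chunk factories defined earlier in this file; their exact payloads are an assumption, not this file's implementation):

```typescript
// Hypothetical stand-ins for the SSE chunk factories (textLine/finishLine)
// used by Reply above; the exact payload shape here is an assumption.
const demoTextLine = (text: string) => ({ choices: [{ index: 0, delta: { content: text } }] })
const demoFinishLine = (reason: string) => ({ choices: [{ index: 0, delta: {}, finish_reason: reason }] })

// Minimal re-implementation of the reply().text("hello").stop().item() flow:
// tail collects content chunks, and the finish chunk is appended at build time.
function demoItem(text: string) {
  const tail = [demoTextLine(text)]
  return {
    type: "sse" as const,
    head: [] as unknown[],
    tail: [...tail, demoFinishLine("stop")],
    hang: false,
    reset: false,
  }
}

const demo = demoItem("hello")
// demo.tail holds one text chunk followed by the finish chunk.
```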
namespace TestLLMServer {
export interface Service {
readonly url: string
readonly push: (...input: (Item | Reply)[]) => Effect.Effect<void>
readonly text: (value: string, opts?: { usage?: Usage }) => Effect.Effect<void>
readonly tool: (name: string, input: unknown) => Effect.Effect<void>
readonly toolHang: (name: string, input: unknown) => Effect.Effect<void>
readonly reason: (value: string, opts?: { text?: string; usage?: Usage }) => Effect.Effect<void>
readonly fail: (message?: unknown) => Effect.Effect<void>
readonly error: (status: number, body: unknown) => Effect.Effect<void>
readonly hang: Effect.Effect<void>
readonly hold: (value: string, wait: PromiseLike<unknown>) => Effect.Effect<void>
readonly hits: Effect.Effect<Hit[]>
readonly calls: Effect.Effect<number>
readonly wait: (count: number) => Effect.Effect<void>
readonly inputs: Effect.Effect<Record<string, unknown>[]>
readonly pending: Effect.Effect<number>
}
}
export class TestLLMServer extends ServiceMap.Service<TestLLMServer, TestLLMServer.Service>()("@test/LLMServer") {
static readonly layer = Layer.effect(
TestLLMServer,
Effect.gen(function* () {
const server = yield* HttpServer.HttpServer
const router = yield* HttpRouter.HttpRouter
let hits: Hit[] = []
let list: Item[] = []
let waits: Wait[] = []
const queue = (...input: (Item | Reply)[]) => {
list = [...list, ...input.map(item)]
}
const notify = Effect.fnUntraced(function* () {
const ready = waits.filter((item) => hits.length >= item.count)
if (!ready.length) return
waits = waits.filter((item) => hits.length < item.count)
yield* Effect.forEach(ready, (item) => Deferred.succeed(item.ready, void 0))
})
const pull = () => {
const first = list[0]
if (!first) return
list = list.slice(1)
return first
}
yield* router.add(
"POST",
"/v1/chat/completions",
Effect.gen(function* () {
const req = yield* HttpServerRequest.HttpServerRequest
const next = pull()
if (!next) return HttpServerResponse.text("unexpected request", { status: 500 })
const body = yield* req.json.pipe(Effect.orElseSucceed(() => ({})))
hits = [
...hits,
{
url: new URL(req.originalUrl, "http://localhost"),
body: body && typeof body === "object" ? (body as Record<string, unknown>) : {},
},
]
yield* notify()
if (next.type === "sse" && next.reset) {
yield* reset(next)
return HttpServerResponse.empty()
}
if (next.type === "sse") return send(next)
return fail(next)
}),
)
yield* server.serve(router.asHttpEffect())
return TestLLMServer.of({
url:
server.address._tag === "TcpAddress"
? `http://127.0.0.1:${server.address.port}/v1`
: `unix://${server.address.path}/v1`,
push: Effect.fn("TestLLMServer.push")(function* (...input: (Item | Reply)[]) {
queue(...input)
}),
text: Effect.fn("TestLLMServer.text")(function* (value: string, opts?: { usage?: Usage }) {
const out = reply().text(value)
if (opts?.usage) out.usage(opts.usage)
queue(out.stop().item())
}),
tool: Effect.fn("TestLLMServer.tool")(function* (name: string, input: unknown) {
queue(reply().tool(name, input).item())
}),
toolHang: Effect.fn("TestLLMServer.toolHang")(function* (name: string, input: unknown) {
queue(reply().pendingTool(name, input).hang().item())
}),
reason: Effect.fn("TestLLMServer.reason")(function* (value: string, opts?: { text?: string; usage?: Usage }) {
const out = reply().reason(value)
if (opts?.text) out.text(opts.text)
if (opts?.usage) out.usage(opts.usage)
queue(out.stop().item())
}),
fail: Effect.fn("TestLLMServer.fail")(function* (message: unknown = "boom") {
queue(reply().streamError(message).item())
}),
error: Effect.fn("TestLLMServer.error")(function* (status: number, body: unknown) {
queue(httpError(status, body))
}),
hang: Effect.gen(function* () {
queue(reply().hang().item())
}).pipe(Effect.withSpan("TestLLMServer.hang")),
hold: Effect.fn("TestLLMServer.hold")(function* (value: string, wait: PromiseLike<unknown>) {
queue(reply().wait(wait).text(value).stop().item())
}),
hits: Effect.sync(() => [...hits]),
calls: Effect.sync(() => hits.length),
wait: Effect.fn("TestLLMServer.wait")(function* (count: number) {
if (hits.length >= count) return
const ready = yield* Deferred.make<void>()
waits = [...waits, { count, ready }]
yield* Deferred.await(ready)
}),
inputs: Effect.sync(() => hits.map((hit) => hit.body)),
pending: Effect.sync(() => list.length),
})
}),
).pipe(Layer.provide(HttpRouter.layer), Layer.provide(NodeHttpServer.layer(() => Http.createServer(), { port: 0 })))
}
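The `wait`/`notify` pair in the layer above is a count-based latch: each caller parks on a `Deferred` until the recorded hit count reaches its threshold, and every request wakes all waiters whose thresholds are now met. The same idea in plain promises (a simplified standalone sketch, not the Effect version above):

```typescript
// A count-based latch: wait(n) resolves once record() has been called n times.
class HitLatch {
  private hits = 0
  private waits: { count: number; resolve: () => void }[] = []

  record() {
    this.hits += 1
    // Wake every waiter whose threshold has been reached; keep the rest parked.
    const ready = this.waits.filter((w) => this.hits >= w.count)
    this.waits = this.waits.filter((w) => this.hits < w.count)
    for (const w of ready) w.resolve()
  }

  wait(count: number): Promise<void> {
    // Already-satisfied thresholds resolve immediately, matching the
    // early-return in TestLLMServer.wait above.
    if (this.hits >= count) return Promise.resolve()
    return new Promise((resolve) => this.waits.push({ count, resolve }))
  }
}

// Usage: a test parks until two mock requests have been recorded.
const latch = new HitLatch()
const done = latch.wait(2)
latch.record()
latch.record()
// `done` is now resolved; later wait(1)/wait(2) calls resolve immediately.
```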


@@ -25,6 +25,11 @@ function run(msg: Msg) {
async function plugin(dir: string, kinds: Array<"server" | "tui">) {
const p = path.join(dir, "plugin")
const server = kinds.includes("server")
const tui = kinds.includes("tui")
const exports: Record<string, string> = {}
if (server) exports["./server"] = "./server.js"
if (tui) exports["./tui"] = "./tui.js"
await fs.mkdir(p, { recursive: true })
await Bun.write(
path.join(p, "package.json"),
@@ -32,7 +37,8 @@ async function plugin(dir: string, kinds: Array<"server" | "tui">) {
{
name: "acme",
version: "1.0.0",
"oc-plugin": kinds,
...(server ? { main: "./server.js" } : {}),
...(Object.keys(exports).length ? { exports } : {}),
},
null,
2,


@@ -55,8 +55,34 @@ function ctxRoot(dir: string): PlugCtx {
}
}
async function plugin(dir: string, kinds?: unknown) {
async function plugin(
dir: string,
kinds?: Array<"server" | "tui">,
opts?: {
server?: Record<string, unknown>
tui?: Record<string, unknown>
},
) {
const p = path.join(dir, "plugin")
const server = kinds?.includes("server") ?? false
const tui = kinds?.includes("tui") ?? false
const exports: Record<string, unknown> = {}
if (server) {
exports["./server"] = opts?.server
? {
import: "./server.js",
config: opts.server,
}
: "./server.js"
}
if (tui) {
exports["./tui"] = opts?.tui
? {
import: "./tui.js",
config: opts.tui,
}
: "./tui.js"
}
await fs.mkdir(p, { recursive: true })
await Bun.write(
path.join(p, "package.json"),
@@ -64,7 +90,8 @@ async function plugin(dir: string, kinds?: unknown) {
{
name: "acme",
version: "1.0.0",
...(kinds === undefined ? {} : { "oc-plugin": kinds }),
...(server ? { main: "./server.js" } : {}),
...(Object.keys(exports).length ? { exports } : {}),
},
null,
2,
@@ -99,12 +126,12 @@ describe("plugin.install.task", () => {
expect(tui.plugin).toEqual(["acme@1.2.3"])
})
test("writes default options from tuple manifest targets", async () => {
test("writes default options from exports config metadata", async () => {
await using tmp = await tmpdir()
const target = await plugin(tmp.path, [
["server", { custom: true, other: false }],
["tui", { compact: true }],
])
const target = await plugin(tmp.path, ["server", "tui"], {
server: { custom: true, other: false },
tui: { compact: true },
})
const run = createPlugTask(
{
mod: "acme@1.2.3",


@@ -266,8 +266,8 @@ describe("plugin.loader.shared", () => {
try {
await load(tmp.path)
expect(install.mock.calls).toContainEqual(["acme-plugin", "latest"])
expect(install.mock.calls).toContainEqual(["scope-plugin", "2.3.4"])
expect(install.mock.calls).toContainEqual(["acme-plugin", "latest", { ignoreScripts: true }])
expect(install.mock.calls).toContainEqual(["scope-plugin", "2.3.4", { ignoreScripts: true }])
} finally {
install.mockRestore()
}
@@ -487,7 +487,7 @@ describe("plugin.loader.shared", () => {
.catch(() => false)
expect(called).toBe(false)
expect(errors.some((x) => x.includes('exports["./server"]') && x.includes("package.json main"))).toBe(true)
expect(errors).toHaveLength(0)
} finally {
install.mockRestore()
}


@@ -1557,6 +1557,35 @@ describe("ProviderTransform.message - providerOptions key remapping", () => {
expect(result[0].providerOptions?.openai).toBeUndefined()
})
test("azure cognitive services remaps providerID to 'azure' key", () => {
const model = createModel("azure-cognitive-services", "@ai-sdk/azure")
const msgs = [
{
role: "user",
content: [
{
type: "text",
text: "Hello",
providerOptions: {
"azure-cognitive-services": { part: true },
},
},
],
providerOptions: {
"azure-cognitive-services": { someOption: "value" },
},
},
] as any[]
const result = ProviderTransform.message(msgs, model, {}) as any[]
const part = result[0].content[0] as any
expect(result[0].providerOptions?.azure).toEqual({ someOption: "value" })
expect(result[0].providerOptions?.["azure-cognitive-services"]).toBeUndefined()
expect(part.providerOptions?.azure).toEqual({ part: true })
expect(part.providerOptions?.["azure-cognitive-services"]).toBeUndefined()
})
test("copilot remaps providerID to 'copilot' key", () => {
const model = createModel("github-copilot", "@ai-sdk/github-copilot")
const msgs = [
@@ -1763,6 +1792,58 @@ describe("ProviderTransform.message - cache control on gateway", () => {
},
})
})
test("google-vertex-anthropic applies cache control", () => {
const model = createModel({
providerID: "google-vertex-anthropic",
api: {
id: "google-vertex-anthropic",
url: "https://us-central1-aiplatform.googleapis.com",
npm: "@ai-sdk/google-vertex/anthropic",
},
id: "claude-sonnet-4@20250514",
})
const msgs = [
{
role: "system",
content: "You are a helpful assistant",
},
{
role: "user",
content: "Hello",
},
] as any[]
const result = ProviderTransform.message(msgs, model, {}) as any[]
expect(result[0].providerOptions).toEqual({
anthropic: {
cacheControl: {
type: "ephemeral",
},
},
openrouter: {
cacheControl: {
type: "ephemeral",
},
},
bedrock: {
cachePoint: {
type: "default",
},
},
openaiCompatible: {
cache_control: {
type: "ephemeral",
},
},
copilot: {
copilot_cache_control: {
type: "ephemeral",
},
},
})
})
})
describe("ProviderTransform.variants", () => {


@@ -13,6 +13,18 @@ afterEach(async () => {
await Instance.disposeAll()
})
async function withoutWatcher<T>(fn: () => Promise<T>) {
if (process.platform !== "win32") return fn()
const prev = process.env.OPENCODE_EXPERIMENTAL_DISABLE_FILEWATCHER
process.env.OPENCODE_EXPERIMENTAL_DISABLE_FILEWATCHER = "true"
try {
return await fn()
} finally {
if (prev === undefined) delete process.env.OPENCODE_EXPERIMENTAL_DISABLE_FILEWATCHER
else process.env.OPENCODE_EXPERIMENTAL_DISABLE_FILEWATCHER = prev
}
}
async function fill(sessionID: SessionID, count: number, time = (i: number) => Date.now() + i) {
const ids = [] as MessageID[]
for (let i = 0; i < count; i++) {
@@ -42,86 +54,94 @@ async function fill(sessionID: SessionID, count: number, time = (i: number) => D
describe("session messages endpoint", () => {
test("returns cursor headers for older pages", async () => {
await using tmp = await tmpdir({ git: true })
await Instance.provide({
directory: tmp.path,
fn: async () => {
const session = await Session.create({})
const ids = await fill(session.id, 5)
const app = Server.Default()
await withoutWatcher(() =>
Instance.provide({
directory: tmp.path,
fn: async () => {
const session = await Session.create({})
const ids = await fill(session.id, 5)
const app = Server.Default()
const a = await app.request(`/session/${session.id}/message?limit=2`)
expect(a.status).toBe(200)
const aBody = (await a.json()) as MessageV2.WithParts[]
expect(aBody.map((item) => item.info.id)).toEqual(ids.slice(-2))
const cursor = a.headers.get("x-next-cursor")
expect(cursor).toBeTruthy()
expect(a.headers.get("link")).toContain('rel="next"')
const a = await app.request(`/session/${session.id}/message?limit=2`)
expect(a.status).toBe(200)
const aBody = (await a.json()) as MessageV2.WithParts[]
expect(aBody.map((item) => item.info.id)).toEqual(ids.slice(-2))
const cursor = a.headers.get("x-next-cursor")
expect(cursor).toBeTruthy()
expect(a.headers.get("link")).toContain('rel="next"')
const b = await app.request(`/session/${session.id}/message?limit=2&before=${encodeURIComponent(cursor!)}`)
expect(b.status).toBe(200)
const bBody = (await b.json()) as MessageV2.WithParts[]
expect(bBody.map((item) => item.info.id)).toEqual(ids.slice(-4, -2))
const b = await app.request(`/session/${session.id}/message?limit=2&before=${encodeURIComponent(cursor!)}`)
expect(b.status).toBe(200)
const bBody = (await b.json()) as MessageV2.WithParts[]
expect(bBody.map((item) => item.info.id)).toEqual(ids.slice(-4, -2))
await Session.remove(session.id)
},
})
await Session.remove(session.id)
},
}),
)
})
test("keeps full-history responses when limit is omitted", async () => {
await using tmp = await tmpdir({ git: true })
await Instance.provide({
directory: tmp.path,
fn: async () => {
const session = await Session.create({})
const ids = await fill(session.id, 3)
const app = Server.Default()
await withoutWatcher(() =>
Instance.provide({
directory: tmp.path,
fn: async () => {
const session = await Session.create({})
const ids = await fill(session.id, 3)
const app = Server.Default()
const res = await app.request(`/session/${session.id}/message`)
expect(res.status).toBe(200)
const body = (await res.json()) as MessageV2.WithParts[]
expect(body.map((item) => item.info.id)).toEqual(ids)
const res = await app.request(`/session/${session.id}/message`)
expect(res.status).toBe(200)
const body = (await res.json()) as MessageV2.WithParts[]
expect(body.map((item) => item.info.id)).toEqual(ids)
await Session.remove(session.id)
},
})
await Session.remove(session.id)
},
}),
)
})
test("rejects invalid cursors and missing sessions", async () => {
await using tmp = await tmpdir({ git: true })
await Instance.provide({
directory: tmp.path,
fn: async () => {
const session = await Session.create({})
const app = Server.Default()
await withoutWatcher(() =>
Instance.provide({
directory: tmp.path,
fn: async () => {
const session = await Session.create({})
const app = Server.Default()
const bad = await app.request(`/session/${session.id}/message?limit=2&before=bad`)
expect(bad.status).toBe(400)
const bad = await app.request(`/session/${session.id}/message?limit=2&before=bad`)
expect(bad.status).toBe(400)
const miss = await app.request(`/session/ses_missing/message?limit=2`)
expect(miss.status).toBe(404)
const miss = await app.request(`/session/ses_missing/message?limit=2`)
expect(miss.status).toBe(404)
await Session.remove(session.id)
},
})
await Session.remove(session.id)
},
}),
)
})
test("does not truncate large legacy limit requests", async () => {
await using tmp = await tmpdir({ git: true })
await Instance.provide({
directory: tmp.path,
fn: async () => {
const session = await Session.create({})
await fill(session.id, 520)
const app = Server.Default()
await withoutWatcher(() =>
Instance.provide({
directory: tmp.path,
fn: async () => {
const session = await Session.create({})
await fill(session.id, 520)
const app = Server.Default()
const res = await app.request(`/session/${session.id}/message?limit=510`)
expect(res.status).toBe(200)
const body = (await res.json()) as MessageV2.WithParts[]
expect(body).toHaveLength(510)
const res = await app.request(`/session/${session.id}/message?limit=510`)
expect(res.status).toBe(200)
const body = (await res.json()) as MessageV2.WithParts[]
expect(body).toHaveLength(510)
await Session.remove(session.id)
},
})
await Session.remove(session.id)
},
}),
)
})
})


@@ -1199,4 +1199,26 @@ describe("session.getUsage", () => {
expect(result.tokens.total).toBe(1500)
},
)
test("extracts cache write tokens from vertex metadata key", () => {
const model = createModel({ context: 100_000, output: 32_000, npm: "@ai-sdk/google-vertex/anthropic" })
const result = Session.getUsage({
model,
usage: {
inputTokens: 1000,
outputTokens: 500,
totalTokens: 1500,
cachedInputTokens: 200,
},
metadata: {
vertex: {
cacheCreationInputTokens: 300,
},
},
})
expect(result.tokens.input).toBe(500)
expect(result.tokens.cache.read).toBe(200)
expect(result.tokens.cache.write).toBe(300)
})
})


@@ -1,8 +1,6 @@
import { NodeFileSystem } from "@effect/platform-node"
import { expect } from "bun:test"
import { APICallError } from "ai"
import { Cause, Effect, Exit, Fiber, Layer, ServiceMap } from "effect"
import * as Stream from "effect/Stream"
import { Cause, Effect, Exit, Fiber, Layer } from "effect"
import path from "path"
import type { Agent } from "../../src/agent/agent"
import { Agent as AgentSvc } from "../../src/agent/agent"
@@ -10,7 +8,7 @@ import { Bus } from "../../src/bus"
import { Config } from "../../src/config/config"
import { Permission } from "../../src/permission"
import { Plugin } from "../../src/plugin"
import type { Provider } from "../../src/provider/provider"
import { Provider } from "../../src/provider/provider"
import { ModelID, ProviderID } from "../../src/provider/schema"
import { Session } from "../../src/session"
import { LLM } from "../../src/session/llm"
@@ -21,8 +19,9 @@ import { SessionStatus } from "../../src/session/status"
import { Snapshot } from "../../src/snapshot"
import { Log } from "../../src/util/log"
import * as CrossSpawnSpawner from "../../src/effect/cross-spawn-spawner"
import { provideTmpdirInstance } from "../fixture/fixture"
import { provideTmpdirServer } from "../fixture/fixture"
import { testEffect } from "../lib/effect"
import { reply, TestLLMServer } from "../lib/llm-server"
Log.init({ print: false })
@@ -31,118 +30,51 @@ const ref = {
modelID: ModelID.make("test-model"),
}
type Script = Stream.Stream<LLM.Event, unknown> | ((input: LLM.StreamInput) => Stream.Stream<LLM.Event, unknown>)
class TestLLM extends ServiceMap.Service<
TestLLM,
{
readonly push: (stream: Script) => Effect.Effect<void>
readonly reply: (...items: LLM.Event[]) => Effect.Effect<void>
readonly calls: Effect.Effect<number>
readonly inputs: Effect.Effect<LLM.StreamInput[]>
}
>()("@test/SessionProcessorLLM") {}
function stream(...items: LLM.Event[]) {
return Stream.make(...items)
const cfg = {
provider: {
test: {
name: "Test",
id: "test",
env: [],
npm: "@ai-sdk/openai-compatible",
models: {
"test-model": {
id: "test-model",
name: "Test Model",
attachment: false,
reasoning: false,
temperature: false,
tool_call: true,
release_date: "2025-01-01",
limit: { context: 100000, output: 10000 },
cost: { input: 0, output: 0 },
options: {},
},
},
options: {
apiKey: "test-key",
baseURL: "http://localhost:1/v1",
},
},
},
}
function usage(input = 1, output = 1, total = input + output) {
function providerCfg(url: string) {
return {
inputTokens: input,
outputTokens: output,
totalTokens: total,
inputTokenDetails: {
noCacheTokens: undefined,
cacheReadTokens: undefined,
cacheWriteTokens: undefined,
},
outputTokenDetails: {
textTokens: undefined,
reasoningTokens: undefined,
...cfg,
provider: {
...cfg.provider,
test: {
...cfg.provider.test,
options: {
...cfg.provider.test.options,
baseURL: url,
},
},
},
}
}
function start(): LLM.Event {
return { type: "start" }
}
function textStart(id = "t"): LLM.Event {
return { type: "text-start", id }
}
function textDelta(id: string, text: string): LLM.Event {
return { type: "text-delta", id, text }
}
function textEnd(id = "t"): LLM.Event {
return { type: "text-end", id }
}
function reasoningStart(id: string): LLM.Event {
return { type: "reasoning-start", id }
}
function reasoningDelta(id: string, text: string): LLM.Event {
return { type: "reasoning-delta", id, text }
}
function reasoningEnd(id: string): LLM.Event {
return { type: "reasoning-end", id }
}
function finishStep(): LLM.Event {
return {
type: "finish-step",
finishReason: "stop",
rawFinishReason: "stop",
response: { id: "res", modelId: "test-model", timestamp: new Date() },
providerMetadata: undefined,
usage: usage(),
}
}
function finish(): LLM.Event {
return { type: "finish", finishReason: "stop", rawFinishReason: "stop", totalUsage: usage() }
}
function toolInputStart(id: string, toolName: string): LLM.Event {
return { type: "tool-input-start", id, toolName }
}
function toolCall(toolCallId: string, toolName: string, input: unknown): LLM.Event {
return { type: "tool-call", toolCallId, toolName, input }
}
function fail<E>(err: E, ...items: LLM.Event[]) {
return stream(...items).pipe(Stream.concat(Stream.fail(err)))
}
function hang(_input: LLM.StreamInput, ...items: LLM.Event[]) {
return stream(...items).pipe(Stream.concat(Stream.fromEffect(Effect.never)))
}
function model(context: number): Provider.Model {
return {
id: "test-model",
providerID: "test",
name: "Test",
limit: { context, output: 10 },
cost: { input: 0, output: 0, cache: { read: 0, write: 0 } },
capabilities: {
toolcall: true,
attachment: false,
reasoning: false,
temperature: true,
input: { text: true, image: false, audio: false, video: false },
output: { text: true, image: false, audio: false, video: false },
},
api: { npm: "@ai-sdk/anthropic" },
options: {},
} as Provider.Model
}
function agent(): Agent.Info {
return {
name: "build",
@@ -211,43 +143,6 @@ const assistant = Effect.fn("TestSession.assistant")(function* (
return msg
})
const llm = Layer.unwrap(
Effect.gen(function* () {
const queue: Script[] = []
const inputs: LLM.StreamInput[] = []
let calls = 0
const push = Effect.fn("TestLLM.push")((item: Script) => {
queue.push(item)
return Effect.void
})
const reply = Effect.fn("TestLLM.reply")((...items: LLM.Event[]) => push(stream(...items)))
return Layer.mergeAll(
Layer.succeed(
LLM.Service,
LLM.Service.of({
stream: (input) => {
calls += 1
inputs.push(input)
const item = queue.shift() ?? Stream.empty
return typeof item === "function" ? item(input) : item
},
}),
),
Layer.succeed(
TestLLM,
TestLLM.of({
push,
reply,
calls: Effect.sync(() => calls),
inputs: Effect.sync(() => [...inputs]),
}),
),
)
}),
)
const status = SessionStatus.layer.pipe(Layer.provideMerge(Bus.layer))
const infra = Layer.mergeAll(NodeFileSystem.layer, CrossSpawnSpawner.defaultLayer)
const deps = Layer.mergeAll(
@@ -257,27 +152,37 @@ const deps = Layer.mergeAll(
Permission.layer,
Plugin.defaultLayer,
Config.defaultLayer,
LLM.defaultLayer,
Provider.defaultLayer,
status,
llm,
).pipe(Layer.provideMerge(infra))
const env = SessionProcessor.layer.pipe(Layer.provideMerge(deps))
const env = Layer.mergeAll(TestLLMServer.layer, SessionProcessor.layer.pipe(Layer.provideMerge(deps)))
const it = testEffect(env)
it.effect("session.processor effect tests capture llm input cleanly", () => {
return provideTmpdirInstance(
(dir) =>
Effect.gen(function* () {
const test = yield* TestLLM
const processors = yield* SessionProcessor.Service
const session = yield* Session.Service
const boot = Effect.fn("test.boot")(function* () {
const processors = yield* SessionProcessor.Service
const session = yield* Session.Service
const provider = yield* Provider.Service
return { processors, session, provider }
})
yield* test.reply(start(), textStart(), textDelta("t", "hello"), textEnd(), finishStep(), finish())
// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------
it.live("session.processor effect tests capture llm input cleanly", () =>
provideTmpdirServer(
({ dir, llm }) =>
Effect.gen(function* () {
const { processors, session, provider } = yield* boot()
yield* llm.text("hello")
const chat = yield* session.create({})
const parent = yield* user(chat.id, "hi")
const msg = yield* assistant(chat.id, parent.id, path.resolve(dir))
const mdl = model(100)
const mdl = yield* provider.getModel(ref.providerID, ref.modelID)
const handle = yield* processors.create({
assistantMessage: msg,
sessionID: chat.id,
@@ -303,46 +208,29 @@ it.effect("session.processor effect tests capture llm input cleanly", () => {
const value = yield* handle.process(input)
const parts = yield* Effect.promise(() => MessageV2.parts(msg.id))
const calls = yield* test.calls
const inputs = yield* test.inputs
const calls = yield* llm.calls
expect(value).toBe("continue")
expect(calls).toBe(1)
expect(inputs).toHaveLength(1)
expect(inputs[0].messages).toStrictEqual([{ role: "user", content: "hi" }])
expect(parts.some((part) => part.type === "text" && part.text === "hello")).toBe(true)
}),
{ git: true },
)
})
{ git: true, config: (url) => providerCfg(url) },
),
)
it.effect("session.processor effect tests stop after token overflow requests compaction", () => {
return provideTmpdirInstance(
(dir) =>
it.live("session.processor effect tests stop after token overflow requests compaction", () =>
provideTmpdirServer(
({ dir, llm }) =>
Effect.gen(function* () {
const test = yield* TestLLM
const processors = yield* SessionProcessor.Service
const session = yield* Session.Service
const { processors, session, provider } = yield* boot()
yield* test.reply(
start(),
{
type: "finish-step",
finishReason: "stop",
rawFinishReason: "stop",
response: { id: "res", modelId: "test-model", timestamp: new Date() },
providerMetadata: undefined,
usage: usage(100, 0, 100),
},
textStart(),
textDelta("t", "after"),
textEnd(),
)
yield* llm.text("after", { usage: { input: 100, output: 0 } })
const chat = yield* session.create({})
const parent = yield* user(chat.id, "compact")
const msg = yield* assistant(chat.id, parent.id, path.resolve(dir))
const mdl = model(20)
const base = yield* provider.getModel(ref.providerID, ref.modelID)
const mdl = { ...base, limit: { context: 20, output: 10 } }
const handle = yield* processors.create({
assistantMessage: msg,
sessionID: chat.id,
@@ -369,51 +257,73 @@ it.effect("session.processor effect tests stop after token overflow requests com
const parts = yield* Effect.promise(() => MessageV2.parts(msg.id))
expect(value).toBe("compact")
expect(parts.some((part) => part.type === "text")).toBe(false)
expect(parts.some((part) => part.type === "text" && part.text === "after")).toBe(true)
expect(parts.some((part) => part.type === "step-finish")).toBe(true)
}),
{ git: true },
)
})
{ git: true, config: (url) => providerCfg(url) },
),
)
it.effect("session.processor effect tests reset reasoning state across retries", () => {
return provideTmpdirInstance(
(dir) =>
it.live("session.processor effect tests capture reasoning from http mock", () =>
provideTmpdirServer(
({ dir, llm }) =>
Effect.gen(function* () {
const test = yield* TestLLM
const processors = yield* SessionProcessor.Service
const session = yield* Session.Service
const { processors, session, provider } = yield* boot()
yield* test.push(
fail(
new APICallError({
message: "boom",
url: "https://example.com/v1/chat/completions",
requestBodyValues: {},
statusCode: 503,
responseHeaders: { "retry-after-ms": "0" },
responseBody: '{"error":"boom"}',
isRetryable: true,
}),
start(),
reasoningStart("r"),
reasoningDelta("r", "one"),
),
)
yield* test.reply(
start(),
reasoningStart("r"),
reasoningDelta("r", "two"),
reasoningEnd("r"),
finishStep(),
finish(),
)
yield* llm.push(reply().reason("think").text("done").stop())
const chat = yield* session.create({})
const parent = yield* user(chat.id, "reason")
const msg = yield* assistant(chat.id, parent.id, path.resolve(dir))
const mdl = model(100)
const mdl = yield* provider.getModel(ref.providerID, ref.modelID)
const handle = yield* processors.create({
assistantMessage: msg,
sessionID: chat.id,
model: mdl,
})
const value = yield* handle.process({
user: {
id: parent.id,
sessionID: chat.id,
role: "user",
time: parent.time,
agent: parent.agent,
model: { providerID: ref.providerID, modelID: ref.modelID },
} satisfies MessageV2.User,
sessionID: chat.id,
model: mdl,
agent: agent(),
system: [],
messages: [{ role: "user", content: "reason" }],
tools: {},
})
const parts = yield* Effect.promise(() => MessageV2.parts(msg.id))
const reasoning = parts.find((part): part is MessageV2.ReasoningPart => part.type === "reasoning")
const text = parts.find((part): part is MessageV2.TextPart => part.type === "text")
expect(value).toBe("continue")
expect(yield* llm.calls).toBe(1)
expect(reasoning?.text).toBe("think")
expect(text?.text).toBe("done")
}),
{ git: true, config: (url) => providerCfg(url) },
),
)
it.live("session.processor effect tests reset reasoning state across retries", () =>
provideTmpdirServer(
({ dir, llm }) =>
Effect.gen(function* () {
const { processors, session, provider } = yield* boot()
yield* llm.push(reply().reason("one").reset(), reply().reason("two").stop())
const chat = yield* session.create({})
const parent = yield* user(chat.id, "reason")
const msg = yield* assistant(chat.id, parent.id, path.resolve(dir))
const mdl = yield* provider.getModel(ref.providerID, ref.modelID)
const handle = yield* processors.create({
assistantMessage: msg,
sessionID: chat.id,
@@ -441,28 +351,26 @@ it.effect("session.processor effect tests reset reasoning state across retries",
const reasoning = parts.filter((part): part is MessageV2.ReasoningPart => part.type === "reasoning")
expect(value).toBe("continue")
expect(yield* test.calls).toBe(2)
expect(yield* llm.calls).toBe(2)
expect(reasoning.some((part) => part.text === "two")).toBe(true)
expect(reasoning.some((part) => part.text === "onetwo")).toBe(false)
}),
{ git: true },
)
})
{ git: true, config: (url) => providerCfg(url) },
),
)
it.effect("session.processor effect tests do not retry unknown json errors", () => {
return provideTmpdirInstance(
(dir) =>
it.live("session.processor effect tests do not retry unknown json errors", () =>
provideTmpdirServer(
({ dir, llm }) =>
Effect.gen(function* () {
const test = yield* TestLLM
const processors = yield* SessionProcessor.Service
const session = yield* Session.Service
const { processors, session, provider } = yield* boot()
yield* test.push(fail({ error: { message: "no_kv_space" } }, start()))
yield* llm.error(400, { error: { message: "no_kv_space" } })
const chat = yield* session.create({})
const parent = yield* user(chat.id, "json")
const msg = yield* assistant(chat.id, parent.id, path.resolve(dir))
const mdl = model(100)
const mdl = yield* provider.getModel(ref.providerID, ref.modelID)
const handle = yield* processors.create({
assistantMessage: msg,
sessionID: chat.id,
@@ -487,29 +395,26 @@ it.effect("session.processor effect tests do not retry unknown json errors", ()
})
expect(value).toBe("stop")
expect(yield* test.calls).toBe(1)
expect(yield* test.inputs).toHaveLength(1)
expect(handle.message.error?.name).toBe("UnknownError")
expect(yield* llm.calls).toBe(1)
expect(handle.message.error?.name).toBe("APIError")
}),
{ git: true },
)
})
{ git: true, config: (url) => providerCfg(url) },
),
)
it.effect("session.processor effect tests retry recognized structured json errors", () => {
return provideTmpdirInstance(
(dir) =>
it.live("session.processor effect tests retry recognized structured json errors", () =>
provideTmpdirServer(
({ dir, llm }) =>
Effect.gen(function* () {
const test = yield* TestLLM
const processors = yield* SessionProcessor.Service
const session = yield* Session.Service
const { processors, session, provider } = yield* boot()
yield* test.push(fail({ type: "error", error: { type: "too_many_requests" } }, start()))
yield* test.reply(start(), textStart(), textDelta("t", "after"), textEnd(), finishStep(), finish())
yield* llm.error(429, { type: "error", error: { type: "too_many_requests" } })
yield* llm.text("after")
const chat = yield* session.create({})
const parent = yield* user(chat.id, "retry json")
const msg = yield* assistant(chat.id, parent.id, path.resolve(dir))
const mdl = model(100)
const mdl = yield* provider.getModel(ref.providerID, ref.modelID)
const handle = yield* processors.create({
assistantMessage: msg,
sessionID: chat.id,
@@ -536,43 +441,28 @@ it.effect("session.processor effect tests retry recognized structured json error
const parts = yield* Effect.promise(() => MessageV2.parts(msg.id))
expect(value).toBe("continue")
expect(yield* test.calls).toBe(2)
expect(yield* llm.calls).toBe(2)
expect(parts.some((part) => part.type === "text" && part.text === "after")).toBe(true)
expect(handle.message.error).toBeUndefined()
}),
{ git: true },
)
})
{ git: true, config: (url) => providerCfg(url) },
),
)
it.effect("session.processor effect tests publish retry status updates", () => {
return provideTmpdirInstance(
(dir) =>
it.live("session.processor effect tests publish retry status updates", () =>
provideTmpdirServer(
({ dir, llm }) =>
Effect.gen(function* () {
const test = yield* TestLLM
const processors = yield* SessionProcessor.Service
const session = yield* Session.Service
const { processors, session, provider } = yield* boot()
const bus = yield* Bus.Service
yield* test.push(
fail(
new APICallError({
message: "boom",
url: "https://example.com/v1/chat/completions",
requestBodyValues: {},
statusCode: 503,
responseHeaders: { "retry-after-ms": "0" },
responseBody: '{"error":"boom"}',
isRetryable: true,
}),
start(),
),
)
yield* test.reply(start(), finishStep(), finish())
yield* llm.error(503, { error: "boom" })
yield* llm.text("")
const chat = yield* session.create({})
const parent = yield* user(chat.id, "retry")
const msg = yield* assistant(chat.id, parent.id, path.resolve(dir))
const mdl = model(100)
const mdl = yield* provider.getModel(ref.providerID, ref.modelID)
const states: number[] = []
const off = yield* bus.subscribeCallback(SessionStatus.Event.Status, (evt) => {
if (evt.properties.sessionID !== chat.id) return
@@ -604,27 +494,25 @@ it.effect("session.processor effect tests publish retry status updates", () => {
off()
expect(value).toBe("continue")
expect(yield* test.calls).toBe(2)
expect(yield* llm.calls).toBe(2)
expect(states).toStrictEqual([1])
}),
{ git: true },
)
})
{ git: true, config: (url) => providerCfg(url) },
),
)
it.effect("session.processor effect tests compact on structured context overflow", () => {
return provideTmpdirInstance(
(dir) =>
it.live("session.processor effect tests compact on structured context overflow", () =>
provideTmpdirServer(
({ dir, llm }) =>
Effect.gen(function* () {
const test = yield* TestLLM
const processors = yield* SessionProcessor.Service
const session = yield* Session.Service
const { processors, session, provider } = yield* boot()
yield* test.push(fail({ type: "error", error: { code: "context_length_exceeded" } }, start()))
yield* llm.error(400, { type: "error", error: { code: "context_length_exceeded" } })
const chat = yield* session.create({})
const parent = yield* user(chat.id, "compact json")
const msg = yield* assistant(chat.id, parent.id, path.resolve(dir))
const mdl = model(100)
const mdl = yield* provider.getModel(ref.providerID, ref.modelID)
const handle = yield* processors.create({
assistantMessage: msg,
sessionID: chat.id,
@@ -649,32 +537,25 @@ it.effect("session.processor effect tests compact on structured context overflow
})
expect(value).toBe("compact")
expect(yield* test.calls).toBe(1)
expect(yield* llm.calls).toBe(1)
expect(handle.message.error).toBeUndefined()
}),
{ git: true },
)
})
{ git: true, config: (url) => providerCfg(url) },
),
)
it.effect("session.processor effect tests mark pending tools as aborted on cleanup", () => {
return provideTmpdirInstance(
(dir) =>
it.live("session.processor effect tests mark pending tools as aborted on cleanup", () =>
provideTmpdirServer(
({ dir, llm }) =>
Effect.gen(function* () {
const ready = defer<void>()
const test = yield* TestLLM
const processors = yield* SessionProcessor.Service
const session = yield* Session.Service
const { processors, session, provider } = yield* boot()
yield* test.push((input) =>
hang(input, start(), toolInputStart("tool-1", "bash"), toolCall("tool-1", "bash", { cmd: "pwd" })).pipe(
Stream.tap((event) => (event.type === "tool-call" ? Effect.sync(() => ready.resolve()) : Effect.void)),
),
)
yield* llm.toolHang("bash", { cmd: "pwd" })
const chat = yield* session.create({})
const parent = yield* user(chat.id, "tool abort")
const msg = yield* assistant(chat.id, parent.id, path.resolve(dir))
const mdl = model(100)
const mdl = yield* provider.getModel(ref.providerID, ref.modelID)
const handle = yield* processors.create({
assistantMessage: msg,
sessionID: chat.id,
@@ -700,7 +581,15 @@ it.effect("session.processor effect tests mark pending tools as aborted on clean
})
.pipe(Effect.forkChild)
yield* Effect.promise(() => ready.promise)
yield* llm.wait(1)
yield* Effect.promise(async () => {
const end = Date.now() + 500
while (Date.now() < end) {
const parts = await MessageV2.parts(msg.id)
if (parts.some((part) => part.type === "tool")) return
await Bun.sleep(10)
}
})
yield* Fiber.interrupt(run)
const exit = yield* Fiber.await(run)
@@ -708,45 +597,38 @@ it.effect("session.processor effect tests mark pending tools as aborted on clean
yield* handle.abort()
}
const parts = yield* Effect.promise(() => MessageV2.parts(msg.id))
const tool = parts.find((part): part is MessageV2.ToolPart => part.type === "tool")
const call = parts.find((part): part is MessageV2.ToolPart => part.type === "tool")
expect(Exit.isFailure(exit)).toBe(true)
if (Exit.isFailure(exit)) {
expect(Cause.hasInterruptsOnly(exit.cause)).toBe(true)
}
expect(yield* test.calls).toBe(1)
expect(tool?.state.status).toBe("error")
if (tool?.state.status === "error") {
expect(tool.state.error).toBe("Tool execution aborted")
expect(tool.state.time.end).toBeDefined()
expect(yield* llm.calls).toBe(1)
expect(call?.state.status).toBe("error")
if (call?.state.status === "error") {
expect(call.state.error).toBe("Tool execution aborted")
expect(call.state.time.end).toBeDefined()
}
}),
{ git: true },
)
})
{ git: true, config: (url) => providerCfg(url) },
),
)
it.effect("session.processor effect tests record aborted errors and idle state", () => {
return provideTmpdirInstance(
(dir) =>
it.live("session.processor effect tests record aborted errors and idle state", () =>
provideTmpdirServer(
({ dir, llm }) =>
Effect.gen(function* () {
const ready = defer<void>()
const seen = defer<void>()
const test = yield* TestLLM
const processors = yield* SessionProcessor.Service
const session = yield* Session.Service
const { processors, session, provider } = yield* boot()
const bus = yield* Bus.Service
const status = yield* SessionStatus.Service
const sts = yield* SessionStatus.Service
yield* test.push((input) =>
hang(input, start()).pipe(
Stream.tap((event) => (event.type === "start" ? Effect.sync(() => ready.resolve()) : Effect.void)),
),
)
yield* llm.hang
const chat = yield* session.create({})
const parent = yield* user(chat.id, "abort")
const msg = yield* assistant(chat.id, parent.id, path.resolve(dir))
const mdl = model(100)
const mdl = yield* provider.getModel(ref.providerID, ref.modelID)
const errs: string[] = []
const off = yield* bus.subscribeCallback(Session.Event.Error, (evt) => {
if (evt.properties.sessionID !== chat.id) return
@@ -779,7 +661,7 @@ it.effect("session.processor effect tests record aborted errors and idle state",
})
.pipe(Effect.forkChild)
yield* Effect.promise(() => ready.promise)
yield* llm.wait(1)
yield* Fiber.interrupt(run)
const exit = yield* Fiber.await(run)
@@ -788,7 +670,7 @@ it.effect("session.processor effect tests record aborted errors and idle state",
}
yield* Effect.promise(() => seen.promise)
const stored = yield* Effect.promise(() => MessageV2.get({ sessionID: chat.id, messageID: msg.id }))
const state = yield* status.get(chat.id)
const state = yield* sts.get(chat.id)
off()
expect(Exit.isFailure(exit)).toBe(true)
@@ -803,30 +685,23 @@ it.effect("session.processor effect tests record aborted errors and idle state",
expect(state).toMatchObject({ type: "idle" })
expect(errs).toContain("MessageAbortedError")
}),
{ git: true },
)
})
{ git: true, config: (url) => providerCfg(url) },
),
)
it.effect("session.processor effect tests mark interruptions aborted without manual abort", () => {
return provideTmpdirInstance(
(dir) =>
it.live("session.processor effect tests mark interruptions aborted without manual abort", () =>
provideTmpdirServer(
({ dir, llm }) =>
Effect.gen(function* () {
const ready = defer<void>()
const processors = yield* SessionProcessor.Service
const session = yield* Session.Service
const status = yield* SessionStatus.Service
const test = yield* TestLLM
const { processors, session, provider } = yield* boot()
const sts = yield* SessionStatus.Service
yield* test.push((input) =>
hang(input, start()).pipe(
Stream.tap((event) => (event.type === "start" ? Effect.sync(() => ready.resolve()) : Effect.void)),
),
)
yield* llm.hang
const chat = yield* session.create({})
const parent = yield* user(chat.id, "interrupt")
const msg = yield* assistant(chat.id, parent.id, path.resolve(dir))
const mdl = model(100)
const mdl = yield* provider.getModel(ref.providerID, ref.modelID)
const handle = yield* processors.create({
assistantMessage: msg,
sessionID: chat.id,
@@ -852,12 +727,12 @@ it.effect("session.processor effect tests mark interruptions aborted without man
})
.pipe(Effect.forkChild)
yield* Effect.promise(() => ready.promise)
yield* llm.wait(1)
yield* Fiber.interrupt(run)
const exit = yield* Fiber.await(run)
const stored = yield* Effect.promise(() => MessageV2.get({ sessionID: chat.id, messageID: msg.id }))
const state = yield* status.get(chat.id)
const state = yield* sts.get(chat.id)
expect(Exit.isFailure(exit)).toBe(true)
expect(handle.message.error?.name).toBe("MessageAbortedError")
@@ -867,6 +742,6 @@ it.effect("session.processor effect tests mark interruptions aborted without man
}
expect(state).toMatchObject({ type: "idle" })
}),
{ git: true },
)
})
{ git: true, config: (url) => providerCfg(url) },
),
)

File diff suppressed because it is too large

@@ -0,0 +1,295 @@
import { describe, expect, test } from "bun:test"
import fs from "fs/promises"
import path from "path"
import { Effect, Layer, ManagedRuntime } from "effect"
import { AppFileSystem } from "../../src/filesystem"
import { Global } from "../../src/global"
import { Storage } from "../../src/storage/storage"
import { tmpdir } from "../fixture/fixture"
const dir = path.join(Global.Path.data, "storage")
async function withScope<T>(fn: (root: string[]) => Promise<T>) {
const root = ["storage_test", crypto.randomUUID()]
try {
return await fn(root)
} finally {
await fs.rm(path.join(dir, ...root), { recursive: true, force: true })
}
}
function map(root: string, file: string) {
if (file === Global.Path.data) return root
if (file.startsWith(Global.Path.data + path.sep)) return path.join(root, path.relative(Global.Path.data, file))
return file
}
function layer(root: string) {
return Layer.effect(
AppFileSystem.Service,
Effect.gen(function* () {
const fs = yield* AppFileSystem.Service
return AppFileSystem.Service.of({
...fs,
isDir: (file) => fs.isDir(map(root, file)),
readJson: (file) => fs.readJson(map(root, file)),
writeWithDirs: (file, content, mode) => fs.writeWithDirs(map(root, file), content, mode),
readFileString: (file) => fs.readFileString(map(root, file)),
remove: (file) => fs.remove(map(root, file)),
glob: (pattern, options) =>
fs.glob(pattern, options?.cwd ? { ...options, cwd: map(root, options.cwd) } : options),
})
}),
).pipe(Layer.provide(AppFileSystem.defaultLayer))
}
async function withStorage<T>(
root: string,
fn: (run: <A, E>(body: Effect.Effect<A, E, Storage.Service>) => Promise<A>) => Promise<T>,
) {
const rt = ManagedRuntime.make(Storage.layer.pipe(Layer.provide(layer(root))))
try {
return await fn((body) => rt.runPromise(body))
} finally {
await rt.dispose()
}
}
async function write(file: string, value: unknown) {
await fs.mkdir(path.dirname(file), { recursive: true })
await Bun.write(file, JSON.stringify(value, null, 2))
}
async function text(file: string, value: string) {
await fs.mkdir(path.dirname(file), { recursive: true })
await Bun.write(file, value)
}
async function exists(file: string) {
return fs
.stat(file)
.then(() => true)
.catch(() => false)
}
describe("Storage", () => {
test("round-trips JSON content", async () => {
await withScope(async (root) => {
const key = [...root, "session_diff", "roundtrip"]
const value = [{ file: "a.ts", additions: 2, deletions: 1 }]
await Storage.write(key, value)
expect(await Storage.read<typeof value>(key)).toEqual(value)
})
})
test("maps missing reads to NotFoundError", async () => {
await withScope(async (root) => {
await expect(Storage.read([...root, "missing", "value"])).rejects.toMatchObject({ name: "NotFoundError" })
})
})
test("update on missing key throws NotFoundError", async () => {
await withScope(async (root) => {
await expect(
Storage.update<{ value: number }>([...root, "missing", "key"], (draft) => {
draft.value += 1
}),
).rejects.toMatchObject({ name: "NotFoundError" })
})
})
test("write overwrites existing value", async () => {
await withScope(async (root) => {
const key = [...root, "overwrite", "test"]
await Storage.write<{ v: number }>(key, { v: 1 })
await Storage.write<{ v: number }>(key, { v: 2 })
expect(await Storage.read<{ v: number }>(key)).toEqual({ v: 2 })
})
})
test("remove on missing key is a no-op", async () => {
await withScope(async (root) => {
await expect(Storage.remove([...root, "nonexistent", "key"])).resolves.toBeUndefined()
})
})
test("list on missing prefix returns empty", async () => {
await withScope(async (root) => {
expect(await Storage.list([...root, "nonexistent"])).toEqual([])
})
})
test("serializes concurrent updates for the same key", async () => {
await withScope(async (root) => {
const key = [...root, "counter", "shared"]
await Storage.write(key, { value: 0 })
await Promise.all(
Array.from({ length: 25 }, () =>
Storage.update<{ value: number }>(key, (draft) => {
draft.value += 1
}),
),
)
expect(await Storage.read<{ value: number }>(key)).toEqual({ value: 25 })
})
})
test("concurrent reads do not block each other", async () => {
await withScope(async (root) => {
const key = [...root, "concurrent", "reads"]
await Storage.write(key, { ok: true })
const results = await Promise.all(Array.from({ length: 10 }, () => Storage.read(key)))
expect(results).toHaveLength(10)
for (const r of results) expect(r).toEqual({ ok: true })
})
})
test("nested keys create deep paths", async () => {
await withScope(async (root) => {
const key = [...root, "a", "b", "c", "deep"]
await Storage.write<{ nested: boolean }>(key, { nested: true })
expect(await Storage.read<{ nested: boolean }>(key)).toEqual({ nested: true })
expect(await Storage.list([...root, "a"])).toEqual([key])
})
})
test("lists and removes stored entries", async () => {
await withScope(async (root) => {
const a = [...root, "list", "a"]
const b = [...root, "list", "b"]
const prefix = [...root, "list"]
await Storage.write(b, { value: 2 })
await Storage.write(a, { value: 1 })
expect(await Storage.list(prefix)).toEqual([a, b])
await Storage.remove(a)
expect(await Storage.list(prefix)).toEqual([b])
await expect(Storage.read(a)).rejects.toMatchObject({ name: "NotFoundError" })
})
})
test("migration 2 runs when marker contents are invalid", async () => {
await using tmp = await tmpdir()
const storage = path.join(tmp.path, "storage")
const diffs = [
{ additions: 2, deletions: 1 },
{ additions: 3, deletions: 4 },
]
await text(path.join(storage, "migration"), "wat")
await write(path.join(storage, "session", "proj_test", "ses_test.json"), {
id: "ses_test",
projectID: "proj_test",
title: "legacy",
summary: { diffs },
})
await withStorage(tmp.path, async (run) => {
expect(await run(Storage.Service.use((svc) => svc.list(["session_diff"])))).toEqual([
["session_diff", "ses_test"],
])
expect(await run(Storage.Service.use((svc) => svc.read<typeof diffs>(["session_diff", "ses_test"])))).toEqual(
diffs,
)
expect(
await run(
Storage.Service.use((svc) =>
svc.read<{
id: string
projectID: string
title: string
summary: {
additions: number
deletions: number
}
}>(["session", "proj_test", "ses_test"]),
),
),
).toEqual({
id: "ses_test",
projectID: "proj_test",
title: "legacy",
summary: {
additions: 5,
deletions: 5,
},
})
})
expect(await Bun.file(path.join(storage, "migration")).text()).toBe("2")
})
test("migration 1 tolerates malformed legacy records", async () => {
await using tmp = await tmpdir({ git: true })
const storage = path.join(tmp.path, "storage")
const legacy = path.join(tmp.path, "project", "legacy")
await write(path.join(legacy, "storage", "session", "message", "probe", "0.json"), [])
await write(path.join(legacy, "storage", "session", "message", "probe", "1.json"), {
path: { root: tmp.path },
})
await write(path.join(legacy, "storage", "session", "info", "ses_legacy.json"), {
id: "ses_legacy",
title: "legacy",
})
await write(path.join(legacy, "storage", "session", "message", "ses_legacy", "msg_legacy.json"), {
role: "user",
text: "hello",
})
await withStorage(tmp.path, async (run) => {
const projects = await run(Storage.Service.use((svc) => svc.list(["project"])))
expect(projects).toHaveLength(1)
const project = projects[0]![1]
expect(await run(Storage.Service.use((svc) => svc.list(["session", project])))).toEqual([
["session", project, "ses_legacy"],
])
expect(
await run(
Storage.Service.use((svc) => svc.read<{ id: string; title: string }>(["session", project, "ses_legacy"])),
),
).toEqual({
id: "ses_legacy",
title: "legacy",
})
expect(
await run(
Storage.Service.use((svc) =>
svc.read<{ role: string; text: string }>(["message", "ses_legacy", "msg_legacy"]),
),
),
).toEqual({
role: "user",
text: "hello",
})
})
expect(await Bun.file(path.join(storage, "migration")).text()).toBe("2")
})
test("failed migrations do not advance the marker", async () => {
await using tmp = await tmpdir()
const storage = path.join(tmp.path, "storage")
const legacy = path.join(tmp.path, "project", "legacy")
await text(path.join(legacy, "storage", "session", "message", "probe", "0.json"), "{")
await withStorage(tmp.path, async (run) => {
expect(await run(Storage.Service.use((svc) => svc.list(["project"])))).toEqual([])
})
expect(await exists(path.join(storage, "migration"))).toBe(false)
})
})

File diff suppressed because it is too large

@@ -140,7 +140,7 @@ describe("Truncate", () => {
const DAY_MS = 24 * 60 * 60 * 1000
const it = testEffect(Layer.mergeAll(TruncateSvc.defaultLayer, NodeFileSystem.layer))
it.effect("deletes files older than 7 days and preserves recent files", () =>
it.live("deletes files older than 7 days and preserves recent files", () =>
Effect.gen(function* () {
const fs = yield* FileSystem.FileSystem


@@ -1,7 +1,7 @@
{
"$schema": "https://json.schemastore.org/package.json",
"name": "@opencode-ai/plugin",
"version": "1.3.9",
"version": "1.3.11",
"type": "module",
"license": "MIT",
"scripts": {
@@ -21,8 +21,8 @@
"zod": "catalog:"
},
"peerDependencies": {
"@opentui/core": ">=0.1.92",
"@opentui/solid": ">=0.1.92"
"@opentui/core": ">=0.1.93",
"@opentui/solid": ">=0.1.93"
},
"peerDependenciesMeta": {
"@opentui/core": {
@@ -33,8 +33,8 @@
}
},
"devDependencies": {
"@opentui/core": "0.1.92",
"@opentui/solid": "0.1.92",
"@opentui/core": "0.1.93",
"@opentui/solid": "0.1.93",
"@tsconfig/node22": "catalog:",
"@types/node": "catalog:",
"typescript": "catalog:",


@@ -1,7 +1,7 @@
{
"$schema": "https://json.schemastore.org/package.json",
"name": "@opencode-ai/sdk",
"version": "1.3.9",
"version": "1.3.11",
"type": "module",
"license": "MIT",
"scripts": {


@@ -4,20 +4,6 @@ export type ClientOptions = {
baseUrl: `${string}://${string}` | (string & {})
}
export type EventInstallationUpdated = {
type: "installation.updated"
properties: {
version: string
}
}
export type EventInstallationUpdateAvailable = {
type: "installation.update-available"
properties: {
version: string
}
}
export type Project = {
id: string
worktree: string
@@ -47,6 +33,20 @@ export type EventProjectUpdated = {
properties: Project
}
export type EventInstallationUpdated = {
type: "installation.updated"
properties: {
version: string
}
}
export type EventInstallationUpdateAvailable = {
type: "installation.update-available"
properties: {
version: string
}
}
export type EventServerInstanceDisposed = {
type: "server.instance.disposed"
properties: {
@@ -964,9 +964,9 @@ export type EventSessionDeleted = {
}
export type Event =
| EventProjectUpdated
| EventInstallationUpdated
| EventInstallationUpdateAvailable
| EventProjectUpdated
| EventServerInstanceDisposed
| EventServerConnected
| EventGlobalDisposed


@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/slack",
"version": "1.3.9",
"version": "1.3.11",
"type": "module",
"license": "MIT",
"scripts": {


@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/ui",
"version": "1.3.9",
"version": "1.3.11",
"type": "module",
"license": "MIT",
"exports": {


@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/util",
"version": "1.3.9",
"version": "1.3.11",
"private": true,
"type": "module",
"license": "MIT",


@@ -2,7 +2,7 @@
"name": "@opencode-ai/web",
"type": "module",
"license": "MIT",
"version": "1.3.9",
"version": "1.3.11",
"scripts": {
"dev": "astro dev",
"dev:remote": "VITE_API_URL=https://api.opencode.ai astro dev",


@@ -1,249 +1,40 @@
#!/usr/bin/env bun
import { $ } from "bun"
import { rm } from "fs/promises"
import path from "path"
import { parseArgs } from "util"
type Release = {
tag_name: string
draft: boolean
}
const root = path.resolve(import.meta.dir, "..")
const file = path.join(root, "UPCOMING_CHANGELOG.md")
const { values, positionals } = parseArgs({
args: Bun.argv.slice(2),
options: {
from: { type: "string", short: "f" },
to: { type: "string", short: "t" },
variant: { type: "string", default: "low" },
quiet: { type: "boolean", default: false },
print: { type: "boolean", default: false },
help: { type: "boolean", short: "h", default: false },
},
allowPositionals: true,
})
const args = [...positionals]
type Commit = {
hash: string
author: string | null
message: string
areas: Set<string>
}
if (values.from) args.push("--from", values.from)
if (values.to) args.push("--to", values.to)
type User = Map<string, Set<string>>
type Diff = {
sha: string
login: string | null
message: string
}
const repo = process.env.GH_REPO ?? "anomalyco/opencode"
const bot = ["actions-user", "opencode", "opencode-agent[bot]"]
const team = [
...(await Bun.file(new URL("../.github/TEAM_MEMBERS", import.meta.url))
.text()
.then((x) => x.split(/\r?\n/).map((x) => x.trim()))
.then((x) => x.filter((x) => x && !x.startsWith("#")))),
...bot,
]
const order = ["Core", "TUI", "Desktop", "SDK", "Extensions"] as const
const sections = {
core: "Core",
tui: "TUI",
app: "Desktop",
tauri: "Desktop",
sdk: "SDK",
plugin: "SDK",
"extensions/zed": "Extensions",
"extensions/vscode": "Extensions",
github: "Extensions",
} as const
function ref(input: string) {
if (input === "HEAD") return input
if (input.startsWith("v")) return input
if (input.match(/^\d+\.\d+\.\d+(?:[-+][0-9A-Za-z.-]+)?$/)) return `v${input}`
return input
}
async function latest() {
const data = await $`gh api "/repos/${repo}/releases?per_page=100"`.json()
const release = (data as Release[]).find((item) => !item.draft)
if (!release) throw new Error("No releases found")
return release.tag_name.replace(/^v/, "")
}
async function diff(base: string, head: string) {
const list: Diff[] = []
for (let page = 1; ; page++) {
const text =
await $`gh api "/repos/${repo}/compare/${base}...${head}?per_page=100&page=${page}" --jq '.commits[] | {sha: .sha, login: .author.login, message: .commit.message}'`.text()
const batch = text
.split("\n")
.filter(Boolean)
.map((line) => JSON.parse(line) as Diff)
if (batch.length === 0) break
list.push(...batch)
if (batch.length < 100) break
}
return list
}
function section(areas: Set<string>) {
const priority = ["core", "tui", "app", "tauri", "sdk", "plugin", "extensions/zed", "extensions/vscode", "github"]
for (const area of priority) {
if (areas.has(area)) return sections[area as keyof typeof sections]
}
return "Core"
}
function reverted(commits: Commit[]) {
const seen = new Map<string, Commit>()
for (const commit of commits) {
const match = commit.message.match(/^Revert "(.+)"$/)
if (match) {
const msg = match[1]!
if (seen.has(msg)) seen.delete(msg)
else seen.set(commit.message, commit)
continue
}
const revert = `Revert "${commit.message}"`
if (seen.has(revert)) {
seen.delete(revert)
continue
}
seen.set(commit.message, commit)
}
return [...seen.values()]
}
async function commits(from: string, to: string) {
const base = ref(from)
const head = ref(to)
const data = new Map<string, { login: string | null; message: string }>()
for (const item of await diff(base, head)) {
data.set(item.sha, { login: item.login, message: item.message.split("\n")[0] ?? "" })
}
const log =
await $`git log ${base}..${head} --format=%H -- packages/opencode packages/sdk packages/plugin packages/desktop packages/app sdks/vscode packages/extensions github`.text()
const list: Commit[] = []
for (const hash of log.split("\n").filter(Boolean)) {
const item = data.get(hash)
if (!item) continue
if (item.message.match(/^(ignore:|test:|chore:|ci:|release:)/i)) continue
const diff = await $`git diff-tree --no-commit-id --name-only -r ${hash}`.text()
const areas = new Set<string>()
for (const file of diff.split("\n").filter(Boolean)) {
if (file.startsWith("packages/opencode/src/cli/cmd/")) areas.add("tui")
else if (file.startsWith("packages/opencode/")) areas.add("core")
else if (file.startsWith("packages/desktop/src-tauri/")) areas.add("tauri")
else if (file.startsWith("packages/desktop/") || file.startsWith("packages/app/")) areas.add("app")
else if (file.startsWith("packages/sdk/") || file.startsWith("packages/plugin/")) areas.add("sdk")
else if (file.startsWith("packages/extensions/")) areas.add("extensions/zed")
else if (file.startsWith("sdks/vscode/") || file.startsWith("github/")) areas.add("extensions/vscode")
}
if (areas.size === 0) continue
list.push({
hash: hash.slice(0, 7),
author: item.login,
message: item.message,
areas,
})
}
return reverted(list)
}
async function contributors(from: string, to: string) {
const base = ref(from)
const head = ref(to)
const users: User = new Map()
for (const item of await diff(base, head)) {
const title = item.message.split("\n")[0] ?? ""
if (!item.login || team.includes(item.login)) continue
if (title.match(/^(ignore:|test:|chore:|ci:|release:)/i)) continue
if (!users.has(item.login)) users.set(item.login, new Set())
users.get(item.login)!.add(title)
}
return users
}
async function published(to: string) {
if (to === "HEAD") return
const body = await $`gh release view ${ref(to)} --repo ${repo} --json body --jq .body`.text().catch(() => "")
if (!body) return
const lines = body.split(/\r?\n/)
const start = lines.findIndex((line) => line.startsWith("**Thank you to "))
if (start < 0) return
return lines.slice(start).join("\n").trim()
}
async function thanks(from: string, to: string, reuse: boolean) {
const release = reuse ? await published(to) : undefined
if (release) return release.split(/\r?\n/)
const users = await contributors(from, to)
if (users.size === 0) return []
const lines = [`**Thank you to ${users.size} community contributor${users.size > 1 ? "s" : ""}:**`]
for (const [name, commits] of users) {
lines.push(`- @${name}:`)
for (const commit of commits) lines.push(` - ${commit}`)
}
return lines
}
function format(from: string, to: string, list: Commit[], thanks: string[]) {
const grouped = new Map<string, string[]>()
for (const title of order) grouped.set(title, [])
for (const commit of list) {
const title = section(commit.areas)
const attr = commit.author && !team.includes(commit.author) ? ` (@${commit.author})` : ""
grouped.get(title)!.push(`- \`${commit.hash}\` ${commit.message}${attr}`)
}
const lines = [`Last release: ${ref(from)}`, `Target ref: ${to}`, ""]
if (list.length === 0) {
lines.push("No notable changes.")
}
for (const title of order) {
const entries = grouped.get(title)
if (!entries || entries.length === 0) continue
lines.push(`## ${title}`)
lines.push(...entries)
lines.push("")
}
if (thanks.length > 0) {
if (lines.at(-1) !== "") lines.push("")
lines.push("## Community Contributors Input")
lines.push("")
lines.push(...thanks)
}
if (lines.at(-1) === "") lines.pop()
return lines.join("\n")
}
if (import.meta.main) {
const { values } = parseArgs({
args: Bun.argv.slice(2),
options: {
from: { type: "string", short: "f" },
to: { type: "string", short: "t", default: "HEAD" },
help: { type: "boolean", short: "h", default: false },
},
})
if (values.help) {
console.log(`
if (values.help) {
console.log(`
Usage: bun script/changelog.ts [options]
Generates UPCOMING_CHANGELOG.md by running the opencode changelog command.
Options:
-f, --from <version> Starting version (default: latest non-draft GitHub release)
-t, --to <ref> Ending ref (default: HEAD)
--variant <name> Thinking variant for opencode run (default: low)
--quiet Suppress opencode command output unless it fails
--print Print the generated UPCOMING_CHANGELOG.md after success
-h, --help Show this help message
Examples:
@@ -251,11 +42,35 @@ Examples:
bun script/changelog.ts --from 1.0.200
bun script/changelog.ts -f 1.0.200 -t 1.0.205
`)
process.exit(0)
}
const to = values.to!
const from = values.from ?? (await latest())
const list = await commits(from, to)
console.log(format(from, to, list, await thanks(from, to, !values.from)))
process.exit(0)
}
await rm(file, { force: true })
const quiet = values.quiet
const cmd = ["opencode", "run"]
cmd.push("--variant", values.variant)
cmd.push("--command", "changelog", "--", ...args)
const proc = Bun.spawn(cmd, {
cwd: root,
stdin: "inherit",
stdout: quiet ? "pipe" : "inherit",
stderr: quiet ? "pipe" : "inherit",
})
const [out, err] = quiet
? await Promise.all([new Response(proc.stdout).text(), new Response(proc.stderr).text()])
: ["", ""]
const code = await proc.exited
if (code === 0) {
if (values.print) process.stdout.write(await Bun.file(file).text())
process.exit(0)
}
if (quiet) {
if (out) process.stdout.write(out)
if (err) process.stderr.write(err)
}
process.exit(code)

script/raw-changelog.ts (new file, 261 lines)

@@ -0,0 +1,261 @@
#!/usr/bin/env bun
import { $ } from "bun"
import { parseArgs } from "util"
type Release = {
tag_name: string
draft: boolean
}
type Commit = {
hash: string
author: string | null
message: string
areas: Set<string>
}
type User = Map<string, Set<string>>
type Diff = {
sha: string
login: string | null
message: string
}
const repo = process.env.GH_REPO ?? "anomalyco/opencode"
const bot = ["actions-user", "opencode", "opencode-agent[bot]"]
const team = [
...(await Bun.file(new URL("../.github/TEAM_MEMBERS", import.meta.url))
.text()
.then((x) => x.split(/\r?\n/).map((x) => x.trim()))
.then((x) => x.filter((x) => x && !x.startsWith("#")))),
...bot,
]
const order = ["Core", "TUI", "Desktop", "SDK", "Extensions"] as const
const sections = {
core: "Core",
tui: "TUI",
app: "Desktop",
tauri: "Desktop",
sdk: "SDK",
plugin: "SDK",
"extensions/zed": "Extensions",
"extensions/vscode": "Extensions",
github: "Extensions",
} as const
function ref(input: string) {
if (input === "HEAD") return input
if (input.startsWith("v")) return input
if (input.match(/^\d+\.\d+\.\d+(?:[-+][0-9A-Za-z.-]+)?$/)) return `v${input}`
return input
}
async function latest() {
const data = await $`gh api "/repos/${repo}/releases?per_page=100"`.json()
const release = (data as Release[]).find((item) => !item.draft)
if (!release) throw new Error("No releases found")
return release.tag_name.replace(/^v/, "")
}
async function diff(base: string, head: string) {
const list: Diff[] = []
for (let page = 1; ; page++) {
const text =
await $`gh api "/repos/${repo}/compare/${base}...${head}?per_page=100&page=${page}" --jq '.commits[] | {sha: .sha, login: .author.login, message: .commit.message}'`.text()
const batch = text
.split("\n")
.filter(Boolean)
.map((line) => JSON.parse(line) as Diff)
if (batch.length === 0) break
list.push(...batch)
if (batch.length < 100) break
}
return list
}
function section(areas: Set<string>) {
const priority = ["core", "tui", "app", "tauri", "sdk", "plugin", "extensions/zed", "extensions/vscode", "github"]
for (const area of priority) {
if (areas.has(area)) return sections[area as keyof typeof sections]
}
return "Core"
}
function reverted(commits: Commit[]) {
  const seen = new Map<string, Commit>()
  for (const commit of commits) {
    const match = commit.message.match(/^Revert "(.+)"$/)
    if (match) {
      const msg = match[1]!
      if (seen.has(msg)) seen.delete(msg)
      else seen.set(commit.message, commit)
      continue
    }
    const revert = `Revert "${commit.message}"`
    if (seen.has(revert)) {
      seen.delete(revert)
      continue
    }
    seen.set(commit.message, commit)
  }
  return [...seen.values()]
}
async function commits(from: string, to: string) {
  const base = ref(from)
  const head = ref(to)
  const data = new Map<string, { login: string | null; message: string }>()
  for (const item of await diff(base, head)) {
    data.set(item.sha, { login: item.login, message: item.message.split("\n")[0] ?? "" })
  }
  const log =
    await $`git log ${base}..${head} --format=%H -- packages/opencode packages/sdk packages/plugin packages/desktop packages/app sdks/vscode packages/extensions github`.text()
  const list: Commit[] = []
  for (const hash of log.split("\n").filter(Boolean)) {
    const item = data.get(hash)
    if (!item) continue
    if (item.message.match(/^(ignore:|test:|chore:|ci:|release:)/i)) continue
    const diff = await $`git diff-tree --no-commit-id --name-only -r ${hash}`.text()
    const areas = new Set<string>()
    for (const file of diff.split("\n").filter(Boolean)) {
      if (file.startsWith("packages/opencode/src/cli/cmd/")) areas.add("tui")
      else if (file.startsWith("packages/opencode/")) areas.add("core")
      else if (file.startsWith("packages/desktop/src-tauri/")) areas.add("tauri")
      else if (file.startsWith("packages/desktop/") || file.startsWith("packages/app/")) areas.add("app")
      else if (file.startsWith("packages/sdk/") || file.startsWith("packages/plugin/")) areas.add("sdk")
      else if (file.startsWith("packages/extensions/")) areas.add("extensions/zed")
      else if (file.startsWith("sdks/vscode/") || file.startsWith("github/")) areas.add("extensions/vscode")
    }
    if (areas.size === 0) continue
    list.push({
      hash: hash.slice(0, 7),
      author: item.login,
      message: item.message,
      areas,
    })
  }
  return reverted(list)
}
async function contributors(from: string, to: string) {
  const base = ref(from)
  const head = ref(to)
  const users: User = new Map()
  for (const item of await diff(base, head)) {
    const title = item.message.split("\n")[0] ?? ""
    if (!item.login || team.includes(item.login)) continue
    if (title.match(/^(ignore:|test:|chore:|ci:|release:)/i)) continue
    if (!users.has(item.login)) users.set(item.login, new Set())
    users.get(item.login)!.add(title)
  }
  return users
}
async function published(to: string) {
  if (to === "HEAD") return
  const body = await $`gh release view ${ref(to)} --repo ${repo} --json body --jq .body`.text().catch(() => "")
  if (!body) return
  const lines = body.split(/\r?\n/)
  const start = lines.findIndex((line) => line.startsWith("**Thank you to "))
  if (start < 0) return
  return lines.slice(start).join("\n").trim()
}
async function thanks(from: string, to: string, reuse: boolean) {
  const release = reuse ? await published(to) : undefined
  if (release) return release.split(/\r?\n/)
  const users = await contributors(from, to)
  if (users.size === 0) return []
  const lines = [`**Thank you to ${users.size} community contributor${users.size > 1 ? "s" : ""}:**`]
  for (const [name, commits] of users) {
    lines.push(`- @${name}:`)
    for (const commit of commits) lines.push(` - ${commit}`)
  }
  return lines
}
function format(from: string, to: string, list: Commit[], thanks: string[]) {
  const grouped = new Map<string, string[]>()
  for (const title of order) grouped.set(title, [])
  for (const commit of list) {
    const title = section(commit.areas)
    const attr = commit.author && !team.includes(commit.author) ? ` (@${commit.author})` : ""
    grouped.get(title)!.push(`- \`${commit.hash}\` ${commit.message}${attr}`)
  }
  const lines = [`Last release: ${ref(from)}`, `Target ref: ${to}`, ""]
  if (list.length === 0) {
    lines.push("No notable changes.")
  }
  for (const title of order) {
    const entries = grouped.get(title)
    if (!entries || entries.length === 0) continue
    lines.push(`## ${title}`)
    lines.push(...entries)
    lines.push("")
  }
  if (thanks.length > 0) {
    if (lines.at(-1) !== "") lines.push("")
    lines.push("## Community Contributors Input")
    lines.push("")
    lines.push(...thanks)
  }
  if (lines.at(-1) === "") lines.pop()
  return lines.join("\n")
}
if (import.meta.main) {
  const { values } = parseArgs({
    args: Bun.argv.slice(2),
    options: {
      from: { type: "string", short: "f" },
      to: { type: "string", short: "t", default: "HEAD" },
      help: { type: "boolean", short: "h", default: false },
    },
  })
  if (values.help) {
    console.log(`
Usage: bun script/raw-changelog.ts [options]
Options:
  -f, --from <version>  Starting version (default: latest non-draft GitHub release)
  -t, --to <ref>        Ending ref (default: HEAD)
  -h, --help            Show this help message
Examples:
  bun script/raw-changelog.ts
  bun script/raw-changelog.ts --from 1.0.200
  bun script/raw-changelog.ts -f 1.0.200 -t 1.0.205
`)
    process.exit(0)
  }
  const to = values.to!
  const from = values.from ?? (await latest())
  const list = await commits(from, to)
  console.log(format(from, to, list, await thanks(from, to, !values.from)))
}


@@ -7,7 +7,7 @@ const output = [`version=${Script.version}`]
 if (!Script.preview) {
   const sha = process.env.GITHUB_SHA ?? (await $`git rev-parse HEAD`.text()).trim()
-  await $`opencode run --command changelog -- --to ${sha}`.cwd(process.cwd())
+  await $`bun script/changelog.ts --to ${sha}`.cwd(process.cwd())
   const file = `${process.cwd()}/UPCOMING_CHANGELOG.md`
   const body = await Bun.file(file)
     .text()

@@ -2,7 +2,7 @@
   "name": "opencode",
   "displayName": "opencode",
   "description": "opencode for VS Code",
-  "version": "1.3.9",
+  "version": "1.3.11",
   "publisher": "sst-dev",
   "repository": {
     "type": "git",