Compare commits

..

17 Commits

Author SHA1 Message Date
Kit Langton
8d2385ad49 test: finish HTTP mock processor coverage 2026-03-31 20:36:18 -04:00
Kit Langton
7f6a5bb2c8 test: migrate processor tests to HTTP mock LLM server
Replace the custom TestLLM Effect service with the real LLM layer +
TestLLMServer HTTP mock for 9 of 10 processor tests. Tests now exercise
the full HTTP→SSE→AI SDK→processor pipeline.

- Export Provider.defaultLayer for test layer composition
- Add boot() helper for common service access (processor, session, provider)
- Extend TestLLMServer with usage support and httpError step type
- Tool abort test registers a real tool with hanging execute
- Reasoning test stays with in-process TestLLM (needs fine-grained events)
2026-03-31 19:59:56 -04:00
Kit Langton
537cc32bf0 test: migrate processor tests to HTTP mock LLM server
Replace the custom TestLLM Effect service with the real LLM layer +
TestLLMServer HTTP mock. Tests now exercise the full HTTP→SSE→AI SDK
pipeline instead of injecting Effect streams directly.

- Extend TestLLMServer with usage support on text responses and
  httpError step type for non-200 responses
- Drop reasoning test (can't produce reasoning events via
  @ai-sdk/openai-compatible SSE)
- 9 tests pass, covering: text capture, token overflow, error retry,
  structured errors, context overflow, abort/interrupt cleanup
2026-03-31 19:21:27 -04:00
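The HTTP→SSE path these tests exercise can be sketched as a tiny OpenAI-compatible streaming endpoint. This is an illustrative stand-in, not the real TestLLMServer (whose DSL also covers usage, httpError, hang/hold, and tool steps); names like `replies` and `sseChunk` are invented here:

```typescript
import http from "node:http";

// Queue of canned assistant replies; each request pops one and streams
// it back as chat.completion.chunk SSE events followed by [DONE].
const replies: string[] = [];

function sseChunk(content: string): string {
  const payload = {
    id: "chatcmpl-test",
    object: "chat.completion.chunk",
    choices: [{ index: 0, delta: { content }, finish_reason: null }],
  };
  return `data: ${JSON.stringify(payload)}\n\n`;
}

const server = http.createServer((req, res) => {
  res.writeHead(200, {
    "content-type": "text/event-stream",
    "cache-control": "no-cache",
  });
  res.write(sseChunk(replies.shift() ?? ""));
  res.write("data: [DONE]\n\n");
  res.end();
});
```

Because the mock speaks the same wire format as a real provider, tests drive the full client stack (HTTP, SSE parsing, the AI SDK adapter) instead of injecting streams in-process.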
Kit Langton
82da702f64 refactor: tighten instance context helper fallbacks 2026-03-31 17:32:24 -04:00
Kit Langton
90469bbb7e refactor: simplify instance context helpers in prompt tests 2026-03-31 16:42:20 -04:00
Kit Langton
4ff0fbc043 fix: retry scoped tempdir cleanup on windows 2026-03-31 16:42:19 -04:00
Kit Langton
e24369eaf1 fix: break installation cycle in database context binding 2026-03-31 16:42:18 -04:00
Kit Langton
825f51c39f fix: restore instance context in deferred database callbacks 2026-03-31 16:42:18 -04:00
Kit Langton
191a747405 fix: propagate InstanceRef across static function boundaries
- makeRuntime.provide reads InstanceRef from current Effect fiber when
  ALS is unavailable, bridging static function calls (like Bus.publish)
  that create new fibers from inside Effect code
- Database.transaction preserves Instance ALS via Instance.bind on the
  bun:sqlite transaction callback (native fn loses ALS)
- Instance.restore helper for bridging Effect→sync code with ALS
- InstanceState.withALS bridges InstanceRef back to ALS for sync callers
- prompt.ts: InstructionPrompt.clear wrapped with withALS
- Remove ALL provideInstance(dir) wrappers from prompt-effect tests
2026-03-31 16:42:17 -04:00
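The core problem here is that AsyncLocalStorage context is lost when a callback is invoked from outside the async scope that created it (a native function, a new fiber). A minimal sketch of the bind/restore idea, using Node's stdlib rather than the project's actual `Instance.bind` helper (the interface and names below are assumptions):

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

interface InstanceContext {
  directory: string;
}

const als = new AsyncLocalStorage<InstanceContext>();

// Capture the current ALS store at wrap time, then re-enter it whenever
// the wrapped function is later invoked from a foreign context.
function bind<A extends unknown[], R>(fn: (...args: A) => R): (...args: A) => R {
  const store = als.getStore();
  return (...args) =>
    store === undefined ? fn(...args) : als.run(store, fn, ...args);
}
```

Wrapping the `bun:sqlite` transaction callback with a helper of this shape is what keeps the instance context visible inside the native call, where plain ALS propagation breaks.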
Kit Langton
cc412f3014 refactor: migrate Instance ALS reads to InstanceRef in Effect services
Migrate 16 direct Instance.directory/worktree/project reads inside
Effect code to use InstanceState.directory/context helpers that check
the InstanceRef first and fall back to ALS.

- Export InstanceState.directory and InstanceState.context helpers
- bus/index.ts: GlobalBus.emit uses InstanceState.directory
- session/prompt.ts: 5 callsites migrated to InstanceState.context
- session/index.ts: 4 callsites migrated
- session/compaction.ts: 1 callsite migrated
- config/config.ts: 1 callsite migrated
- format/index.ts: 1 callsite migrated
- worktree/index.ts: 5 callsites migrated
- storage/db.ts: Database.effect preserves Instance ALS via Instance.bind
- test/lib/llm-server.ts: add wait/hold/fail SSE stream support
- Remove most provideInstance(dir) wrappers from prompt tests
  (5 remain due to Instance.state sync ALS dependency)
2026-03-31 16:42:16 -04:00
Kit Langton
bb039496d5 refactor: propagate Instance context through Effect fibers via InstanceRef
Add a ServiceMap.Reference that carries InstanceContext through the
Effect service graph so child fibers retain instance context even when
resumed by external I/O events outside the ALS boundary.

- Add InstanceRef to instance-state.ts; InstanceState.get/has/invalidate
  try the Reference first, fall back to ALS
- makeRuntime automatically captures ALS into InstanceRef at the boundary
- provideInstance (test fixture) sets InstanceRef for Effect.runPromiseWith
- Remove all redundant provideInstance(dir) wrappers from prompt tests
- Fix test/lib/effect.ts type params (drop unnecessary S/T generics)
2026-03-31 16:42:16 -04:00
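The lookup order these commits converge on — explicit reference first, ALS fallback — can be sketched without the Effect machinery. `instanceRef` below stands in for the ServiceMap.Reference, and the function names are illustrative, not the real InstanceState API:

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

interface InstanceContext {
  directory: string;
}

const als = new AsyncLocalStorage<InstanceContext>();

// Stand-in for the ServiceMap.Reference: an explicit slot the runtime
// populates when it captures the ALS at its boundary.
let instanceRef: InstanceContext | undefined;

// Reference-first lookup with ALS fallback, mirroring InstanceState.get.
function getInstance(): InstanceContext {
  const ctx = instanceRef ?? als.getStore();
  if (!ctx) throw new Error("No instance context available");
  return ctx;
}

// Boundary capture: snapshot the ALS into the reference so child work
// resumed by external I/O outside the ALS scope still resolves context.
function captureAtBoundary(): void {
  instanceRef = als.getStore() ?? instanceRef;
}
```

With the capture done automatically at the runtime boundary, fibers resumed by external I/O events keep resolving the instance context even though the original ALS scope is long gone — which is why the redundant `provideInstance(dir)` test wrappers could be dropped.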
Kit Langton
f2fa1a681d test: move more prompt cases to mock llm server
Migrate the next prompt-effect cases to the HTTP-backed mock server path, keep the shell handoff cases on short live timeouts, and leave the stream-failure case on the in-process fake until the server DSL matches it.
2026-03-31 16:42:15 -04:00
Kit Langton
6bd340492c test: infer mock server callback types 2026-03-31 16:42:15 -04:00
Kit Langton
21ec3207e7 test: extend mock llm server coverage
Add fixture support for tmpdir-backed mock server tests, extend the mock LLM server DSL for failure and hanging cases, and migrate the next prompt tests to the HTTP-backed path.
2026-03-31 16:42:14 -04:00
Kit Langton
123123b6c3 test: start moving prompt tests to mock llm server
Switch the basic assistant reply prompt-effect test to the HTTP-backed mock LLM server while keeping the more stream-sensitive cases on the in-process fake for now.
2026-03-31 16:42:14 -04:00
Kit Langton
6ea467b0ac test: add live effect helper mode
Extend the shared effect test helper to support both test-clock and live execution, and switch the current opencode effect tests to the live path for real integration behavior.
2026-03-31 16:42:13 -04:00
Kit Langton
459fbc99a8 refactor(test): migrate llm-server to Effect HTTP platform
- Replace Bun.serve with Effect HTTP server using NodeHttpServer
- Add TestLLMServer service for mock LLM testing with SSE responses
- Update prompt-provider.test.ts to use testEffect pattern with provideTmpdirInstance
- Remove redundant test/fixture/effect.ts (using existing test/lib/effect.ts instead)
2026-03-31 16:42:13 -04:00
41 changed files with 757 additions and 1015 deletions

.github/VOUCHED.td vendored

@@ -27,4 +27,3 @@ rekram1-node
-robinmordasiewicz
-spider-yamet clawdbot/llm psychosis, spam pinging the team
thdxr
-toastythebot


@@ -26,7 +26,7 @@
},
"packages/app": {
"name": "@opencode-ai/app",
"version": "1.3.13",
"version": "1.3.11",
"dependencies": {
"@kobalte/core": "catalog:",
"@opencode-ai/sdk": "workspace:*",
@@ -79,7 +79,7 @@
},
"packages/console/app": {
"name": "@opencode-ai/console-app",
"version": "1.3.13",
"version": "1.3.11",
"dependencies": {
"@cloudflare/vite-plugin": "1.15.2",
"@ibm/plex": "6.4.1",
@@ -113,7 +113,7 @@
},
"packages/console/core": {
"name": "@opencode-ai/console-core",
"version": "1.3.13",
"version": "1.3.11",
"dependencies": {
"@aws-sdk/client-sts": "3.782.0",
"@jsx-email/render": "1.1.1",
@@ -140,7 +140,7 @@
},
"packages/console/function": {
"name": "@opencode-ai/console-function",
"version": "1.3.13",
"version": "1.3.11",
"dependencies": {
"@ai-sdk/anthropic": "3.0.64",
"@ai-sdk/openai": "3.0.48",
@@ -164,7 +164,7 @@
},
"packages/console/mail": {
"name": "@opencode-ai/console-mail",
"version": "1.3.13",
"version": "1.3.11",
"dependencies": {
"@jsx-email/all": "2.2.3",
"@jsx-email/cli": "1.4.3",
@@ -188,7 +188,7 @@
},
"packages/desktop": {
"name": "@opencode-ai/desktop",
"version": "1.3.13",
"version": "1.3.11",
"dependencies": {
"@opencode-ai/app": "workspace:*",
"@opencode-ai/ui": "workspace:*",
@@ -221,7 +221,7 @@
},
"packages/desktop-electron": {
"name": "@opencode-ai/desktop-electron",
"version": "1.3.13",
"version": "1.3.11",
"dependencies": {
"@opencode-ai/app": "workspace:*",
"@opencode-ai/ui": "workspace:*",
@@ -252,7 +252,7 @@
},
"packages/enterprise": {
"name": "@opencode-ai/enterprise",
"version": "1.3.13",
"version": "1.3.11",
"dependencies": {
"@opencode-ai/ui": "workspace:*",
"@opencode-ai/util": "workspace:*",
@@ -281,7 +281,7 @@
},
"packages/function": {
"name": "@opencode-ai/function",
"version": "1.3.13",
"version": "1.3.11",
"dependencies": {
"@octokit/auth-app": "8.0.1",
"@octokit/rest": "catalog:",
@@ -297,7 +297,7 @@
},
"packages/opencode": {
"name": "opencode",
"version": "1.3.13",
"version": "1.3.11",
"bin": {
"opencode": "./bin/opencode",
},
@@ -338,8 +338,8 @@
"@opencode-ai/sdk": "workspace:*",
"@opencode-ai/util": "workspace:*",
"@openrouter/ai-sdk-provider": "2.3.3",
"@opentui/core": "0.1.95",
"@opentui/solid": "0.1.95",
"@opentui/core": "0.1.93",
"@opentui/solid": "0.1.93",
"@parcel/watcher": "2.5.1",
"@pierre/diffs": "catalog:",
"@solid-primitives/event-bus": "1.1.2",
@@ -423,22 +423,22 @@
},
"packages/plugin": {
"name": "@opencode-ai/plugin",
"version": "1.3.13",
"version": "1.3.11",
"dependencies": {
"@opencode-ai/sdk": "workspace:*",
"zod": "catalog:",
},
"devDependencies": {
"@opentui/core": "0.1.95",
"@opentui/solid": "0.1.95",
"@opentui/core": "0.1.93",
"@opentui/solid": "0.1.93",
"@tsconfig/node22": "catalog:",
"@types/node": "catalog:",
"@typescript/native-preview": "catalog:",
"typescript": "catalog:",
},
"peerDependencies": {
"@opentui/core": ">=0.1.95",
"@opentui/solid": ">=0.1.95",
"@opentui/core": ">=0.1.93",
"@opentui/solid": ">=0.1.93",
},
"optionalPeers": [
"@opentui/core",
@@ -457,7 +457,7 @@
},
"packages/sdk/js": {
"name": "@opencode-ai/sdk",
"version": "1.3.13",
"version": "1.3.11",
"devDependencies": {
"@hey-api/openapi-ts": "0.90.10",
"@tsconfig/node22": "catalog:",
@@ -468,7 +468,7 @@
},
"packages/slack": {
"name": "@opencode-ai/slack",
"version": "1.3.13",
"version": "1.3.11",
"dependencies": {
"@opencode-ai/sdk": "workspace:*",
"@slack/bolt": "^3.17.1",
@@ -503,7 +503,7 @@
},
"packages/ui": {
"name": "@opencode-ai/ui",
"version": "1.3.13",
"version": "1.3.11",
"dependencies": {
"@kobalte/core": "catalog:",
"@opencode-ai/sdk": "workspace:*",
@@ -550,7 +550,7 @@
},
"packages/util": {
"name": "@opencode-ai/util",
"version": "1.3.13",
"version": "1.3.11",
"dependencies": {
"zod": "catalog:",
},
@@ -561,7 +561,7 @@
},
"packages/web": {
"name": "@opencode-ai/web",
"version": "1.3.13",
"version": "1.3.11",
"dependencies": {
"@astrojs/cloudflare": "12.6.3",
"@astrojs/markdown-remark": "6.3.1",
@@ -1461,21 +1461,21 @@
"@opentelemetry/api": ["@opentelemetry/api@1.9.0", "", {}, "sha512-3giAOQvZiH5F9bMlMiv8+GSPMeqg0dbaeo58/0SlA9sxSqZhnUtxzX9/2FzyhS9sWQf5S0GJE0AKBrFqjpeYcg=="],
"@opentui/core": ["@opentui/core@0.1.95", "", { "dependencies": { "bun-ffi-structs": "0.1.2", "diff": "8.0.2", "jimp": "1.6.0", "marked": "17.0.1", "yoga-layout": "3.2.1" }, "optionalDependencies": { "@dimforge/rapier2d-simd-compat": "^0.17.3", "@opentui/core-darwin-arm64": "0.1.95", "@opentui/core-darwin-x64": "0.1.95", "@opentui/core-linux-arm64": "0.1.95", "@opentui/core-linux-x64": "0.1.95", "@opentui/core-win32-arm64": "0.1.95", "@opentui/core-win32-x64": "0.1.95", "bun-webgpu": "0.1.5", "planck": "^1.4.2", "three": "0.177.0" }, "peerDependencies": { "web-tree-sitter": "0.25.10" } }, "sha512-Ha73I+PPSy6Jk8CTZgdGRHU+nnmrPAs7m6w0k6ge1/kWbcNcZB0lY67sWQMdoa6bSINQMNWg7SjbNCC9B/0exg=="],
"@opentui/core": ["@opentui/core@0.1.93", "", { "dependencies": { "bun-ffi-structs": "0.1.2", "diff": "8.0.2", "jimp": "1.6.0", "marked": "17.0.1", "yoga-layout": "3.2.1" }, "optionalDependencies": { "@dimforge/rapier2d-simd-compat": "^0.17.3", "@opentui/core-darwin-arm64": "0.1.93", "@opentui/core-darwin-x64": "0.1.93", "@opentui/core-linux-arm64": "0.1.93", "@opentui/core-linux-x64": "0.1.93", "@opentui/core-win32-arm64": "0.1.93", "@opentui/core-win32-x64": "0.1.93", "bun-webgpu": "0.1.5", "planck": "^1.4.2", "three": "0.177.0" }, "peerDependencies": { "web-tree-sitter": "0.25.10" } }, "sha512-HlTM16ZiBKN0mPBNMHSILkSrbzNku6Pg/ovIpVVkEPqLeWeSC2bfZS4Uhc0Ej1sckVVVoU9HKBJanfHvpP+pMg=="],
"@opentui/core-darwin-arm64": ["@opentui/core-darwin-arm64@0.1.95", "", { "os": "darwin", "cpu": "arm64" }, "sha512-92joqr0ucGaIBCl9uYhe5DwAPbgGMTaCsCeY8Yf3VQ72wjGbOTwnC1TvU5wC6bUmiyqfijCqMyuUnj83teIVVQ=="],
"@opentui/core-darwin-arm64": ["@opentui/core-darwin-arm64@0.1.93", "", { "os": "darwin", "cpu": "arm64" }, "sha512-4I2mwhXLqRNUv7tu88hA6cBGaGpLZXkAa8W0VqBiGDV+Tx337x4T+vbQ7G57OwKXT787oTrEOF9rOOrGLov6qw=="],
"@opentui/core-darwin-x64": ["@opentui/core-darwin-x64@0.1.95", "", { "os": "darwin", "cpu": "x64" }, "sha512-+TLL3Kp3x7DTWEAkCAYe+RjRhl58QndoeXMstZNS8GQyrjSpUuivzwidzAz0HZK9SbZJfvaxZmXsToAIdI2fag=="],
"@opentui/core-darwin-x64": ["@opentui/core-darwin-x64@0.1.93", "", { "os": "darwin", "cpu": "x64" }, "sha512-jvYMgcg47a5qLhSv1DnQiafEWBQ1UukGutmsYV1TvNuhWtuDXYLVy2AhKIHPzbB9JNrV0IpjbxUC8QnJaP3n8g=="],
"@opentui/core-linux-arm64": ["@opentui/core-linux-arm64@0.1.95", "", { "os": "linux", "cpu": "arm64" }, "sha512-dAYeRqh7P8o0xFZleDDR1Abt4gSvCISqw6syOrbH3dl7pMbVdGgzA5stM9jqMgdPUVE7Ngumo17C23ehkGv93A=="],
"@opentui/core-linux-arm64": ["@opentui/core-linux-arm64@0.1.93", "", { "os": "linux", "cpu": "arm64" }, "sha512-bvFqRcPftmg14iYmMc3d63XC9rhe4yF7pJRApH6klLBKp27WX/LU0iSO4mvyX7qhy65gcmyy4Sj9dl5jNJ+vlA=="],
"@opentui/core-linux-x64": ["@opentui/core-linux-x64@0.1.95", "", { "os": "linux", "cpu": "x64" }, "sha512-O54TCgK8E7j2NKrDXUOTZqO4sb8JjeAfnhrStxAMMEw4RFCGWx3p3wLesqR16uKfFFJFDyoh2OWZ698tO88EAA=="],
"@opentui/core-linux-x64": ["@opentui/core-linux-x64@0.1.93", "", { "os": "linux", "cpu": "x64" }, "sha512-/wJXhwtNxdcpshrRl1KouyGE54ODAHxRQgBHtnlM/F4bB8cjzOlq2Yc+5cv5DxRz4Q0nQZFCPefwpg2U6ZwNdA=="],
"@opentui/core-win32-arm64": ["@opentui/core-win32-arm64@0.1.95", "", { "os": "win32", "cpu": "arm64" }, "sha512-T1RlZ6U/95eYDN6rUm4SLOVA5LBR7iL3TcBroQhV/883bVczXIBPhriEXQayup5FsAemnQba1BzMNvy6128SUw=="],
"@opentui/core-win32-arm64": ["@opentui/core-win32-arm64@0.1.93", "", { "os": "win32", "cpu": "arm64" }, "sha512-g3PQobfM2yFPSzkBKRKFp8FgTG4ulWyJcU+GYXjyYmxQIT+ZbOU7UfR//ImRq3/FxUAfUC/MhC6WwjqccjEqBw=="],
"@opentui/core-win32-x64": ["@opentui/core-win32-x64@0.1.95", "", { "os": "win32", "cpu": "x64" }, "sha512-lH2FHO0HSP2xWT+ccoz0BkLYFsMm7e6OYOh63BUHHh5b7ispnzP4aTyxiaLWrfJwdL0M9rp5cLIY32bhBKF2oA=="],
"@opentui/core-win32-x64": ["@opentui/core-win32-x64@0.1.93", "", { "os": "win32", "cpu": "x64" }, "sha512-Spllte2W7q+WfB1zVHgHilVJNp+jpp77PkkxTWyMQNvT7vJNt9LABMNjGTGiJBBMkAuKvO0GgFNKxrda7tFKrQ=="],
"@opentui/solid": ["@opentui/solid@0.1.95", "", { "dependencies": { "@babel/core": "7.28.0", "@babel/preset-typescript": "7.27.1", "@opentui/core": "0.1.95", "babel-plugin-module-resolver": "5.0.2", "babel-preset-solid": "1.9.10", "entities": "7.0.1", "s-js": "^0.4.9" }, "peerDependencies": { "solid-js": "1.9.11" } }, "sha512-iotYCvULgDurLXv3vgOzTLnEOySHFOa/6cEDex76jBt+gkniOEh2cjxxIVt6lkfTsk6UNTk6yCdwNK3nca/j+Q=="],
"@opentui/solid": ["@opentui/solid@0.1.93", "", { "dependencies": { "@babel/core": "7.28.0", "@babel/preset-typescript": "7.27.1", "@opentui/core": "0.1.93", "babel-plugin-module-resolver": "5.0.2", "babel-preset-solid": "1.9.10", "entities": "7.0.1", "s-js": "^0.4.9" }, "peerDependencies": { "solid-js": "1.9.11" } }, "sha512-Qx+4qoLSjnRGoo/YY4sZJMyXj09Y5kaAMpVO+65Ax58MMj4TjABN4bOOiRT2KV7sKOMTjxiAgXAIaBuqBBJ0Qg=="],
"@oslojs/asn1": ["@oslojs/asn1@1.0.0", "", { "dependencies": { "@oslojs/binary": "1.0.0" } }, "sha512-zw/wn0sj0j0QKbIXfIlnEcTviaCzYOY3V5rAyjR6YtOByFtJiT574+8p9Wlach0lZH9fddD4yb9laEAIl4vXQA=="],


@@ -1,8 +1,8 @@
{
"nodeModules": {
"x86_64-linux": "sha256-C7y5FMI1pGEgMw/vcPoBhK9tw5uGg1bk0gPXPUUVhgU=",
"aarch64-linux": "sha256-cUlQ9jp4WIaJkd4GRoHMWc+REG/OnnGCmsQUNmvg4is=",
"aarch64-darwin": "sha256-3GXmqG7yihJ91wS/jlW19qxGI62b1bFJnpGB4LcMlpY=",
"x86_64-darwin": "sha256-cUF0TfYg2nXnU80kWFpr9kNHlu9txiatIgrHTltgx4g="
"x86_64-linux": "sha256-UuVbB5lTRB4bIcaKMc8CLSbQW7m9EjXgxYvxp/uO7Co=",
"aarch64-linux": "sha256-8D7ReLRVb7NDd5PQTVxFhRLmlLbfjK007XgIhhpNKoE=",
"aarch64-darwin": "sha256-M+z7C/eXfVqwDiGiiwKo/LT/m4dvCjL1Pblsr1kxoyI=",
"x86_64-darwin": "sha256-RzZS6GMwYVDPK0W+K/mlebixNMs2+JRkMG9n8OFhd0c="
}
}


@@ -1,8 +1,5 @@
import { test as base, expect, type Page } from "@playwright/test"
import { ManagedRuntime } from "effect"
import type { E2EWindow } from "../src/testing/terminal"
import type { Item, Reply, Usage } from "../../opencode/test/lib/llm-server"
import { TestLLMServer } from "../../opencode/test/lib/llm-server"
import {
healthPhase,
cleanupSession,
@@ -16,24 +13,6 @@ import {
} from "./actions"
import { createSdk, dirSlug, getWorktree, sessionPath } from "./utils"
type LLMFixture = {
url: string
push: (...input: (Item | Reply)[]) => Promise<void>
text: (value: string, opts?: { usage?: Usage }) => Promise<void>
tool: (name: string, input: unknown) => Promise<void>
toolHang: (name: string, input: unknown) => Promise<void>
reason: (value: string, opts?: { text?: string; usage?: Usage }) => Promise<void>
fail: (message?: unknown) => Promise<void>
error: (status: number, body: unknown) => Promise<void>
hang: () => Promise<void>
hold: (value: string, wait: PromiseLike<unknown>) => Promise<void>
hits: () => Promise<Array<{ url: URL; body: Record<string, unknown> }>>
calls: () => Promise<number>
wait: (count: number) => Promise<void>
inputs: () => Promise<Record<string, unknown>[]>
pending: () => Promise<number>
}
export const settingsKey = "settings.v3"
const seedModel = (() => {
@@ -47,7 +26,6 @@ const seedModel = (() => {
})()
type TestFixtures = {
llm: LLMFixture
sdk: ReturnType<typeof createSdk>
gotoSession: (sessionID?: string) => Promise<void>
withProject: <T>(
@@ -58,11 +36,7 @@ type TestFixtures = {
trackSession: (sessionID: string, directory?: string) => void
trackDirectory: (directory: string) => void
}) => Promise<T>,
options?: {
extra?: string[]
model?: { providerID: string; modelID: string }
setup?: (directory: string) => Promise<void>
},
options?: { extra?: string[] },
) => Promise<T>
}
@@ -72,31 +46,6 @@ type WorkerFixtures = {
}
export const test = base.extend<TestFixtures, WorkerFixtures>({
llm: async ({}, use) => {
const rt = ManagedRuntime.make(TestLLMServer.layer)
try {
const svc = await rt.runPromise(TestLLMServer.asEffect())
await use({
url: svc.url,
push: (...input) => rt.runPromise(svc.push(...input)),
text: (value, opts) => rt.runPromise(svc.text(value, opts)),
tool: (name, input) => rt.runPromise(svc.tool(name, input)),
toolHang: (name, input) => rt.runPromise(svc.toolHang(name, input)),
reason: (value, opts) => rt.runPromise(svc.reason(value, opts)),
fail: (message) => rt.runPromise(svc.fail(message)),
error: (status, body) => rt.runPromise(svc.error(status, body)),
hang: () => rt.runPromise(svc.hang),
hold: (value, wait) => rt.runPromise(svc.hold(value, wait)),
hits: () => rt.runPromise(svc.hits),
calls: () => rt.runPromise(svc.calls),
wait: (count) => rt.runPromise(svc.wait(count)),
inputs: () => rt.runPromise(svc.inputs),
pending: () => rt.runPromise(svc.pending),
})
} finally {
await rt.dispose()
}
},
page: async ({ page }, use) => {
let boundary: string | undefined
setHealthPhase(page, "test")
@@ -150,8 +99,7 @@ export const test = base.extend<TestFixtures, WorkerFixtures>({
const root = await createTestProject()
const sessions = new Map<string, string>()
const dirs = new Set<string>()
await options?.setup?.(root)
await seedStorage(page, { directory: root, extra: options?.extra, model: options?.model })
await seedStorage(page, { directory: root, extra: options?.extra })
const gotoSession = async (sessionID?: string) => {
await page.goto(sessionPath(root, sessionID))
@@ -185,14 +133,7 @@ export const test = base.extend<TestFixtures, WorkerFixtures>({
},
})
async function seedStorage(
page: Page,
input: {
directory: string
extra?: string[]
model?: { providerID: string; modelID: string }
},
) {
async function seedStorage(page: Page, input: { directory: string; extra?: string[] }) {
await seedProjects(page, input)
await page.addInitScript((model: { providerID: string; modelID: string }) => {
const win = window as E2EWindow
@@ -217,7 +158,7 @@ async function seedStorage(
variant: {},
}),
)
}, input.model ?? seedModel)
}, seedModel)
}
export { expect }


@@ -1,44 +1,8 @@
import fs from "node:fs/promises"
import path from "node:path"
import { test, expect } from "../fixtures"
import { promptSelector } from "../selectors"
import { sessionIDFromUrl } from "../actions"
import { createSdk } from "../utils"
import { cleanupSession, sessionIDFromUrl, withSession } from "../actions"
async function config(dir: string, url: string) {
await fs.writeFile(
path.join(dir, "opencode.json"),
JSON.stringify({
$schema: "https://opencode.ai/config.json",
enabled_providers: ["e2e-llm"],
provider: {
"e2e-llm": {
name: "E2E LLM",
npm: "@ai-sdk/openai-compatible",
env: [],
models: {
"test-model": {
name: "Test Model",
tool_call: true,
limit: { context: 128000, output: 32000 },
},
},
options: {
apiKey: "test-key",
baseURL: url,
},
},
},
agent: {
build: {
model: "e2e-llm/test-model",
},
},
}),
)
}
test("can send a prompt and receive a reply", async ({ page, llm, withProject }) => {
test("can send a prompt and receive a reply", async ({ page, sdk, gotoSession }) => {
test.setTimeout(120_000)
const pageErrors: string[] = []
@@ -47,51 +11,42 @@ test("can send a prompt and receive a reply", async ({ page, llm, withProject })
}
page.on("pageerror", onPageError)
await gotoSession()
const token = `E2E_OK_${Date.now()}`
const prompt = page.locator(promptSelector)
await prompt.click()
await page.keyboard.type(`Reply with exactly: ${token}`)
await page.keyboard.press("Enter")
await expect(page).toHaveURL(/\/session\/[^/?#]+/, { timeout: 30_000 })
const sessionID = (() => {
const id = sessionIDFromUrl(page.url())
if (!id) throw new Error(`Failed to parse session id from url: ${page.url()}`)
return id
})()
try {
await withProject(
async (project) => {
const sdk = createSdk(project.directory)
const token = `E2E_OK_${Date.now()}`
await expect
.poll(
async () => {
const messages = await sdk.session.messages({ sessionID, limit: 50 }).then((r) => r.data ?? [])
return messages
.filter((m) => m.info.role === "assistant")
.flatMap((m) => m.parts)
.filter((p) => p.type === "text")
.map((p) => p.text)
.join("\n")
},
{ timeout: 90_000 },
)
await llm.text(token)
await project.gotoSession()
const prompt = page.locator(promptSelector)
await prompt.click()
await page.keyboard.type(`Reply with exactly: ${token}`)
await page.keyboard.press("Enter")
await expect(page).toHaveURL(/\/session\/[^/?#]+/, { timeout: 30_000 })
const sessionID = (() => {
const id = sessionIDFromUrl(page.url())
if (!id) throw new Error(`Failed to parse session id from url: ${page.url()}`)
return id
})()
project.trackSession(sessionID)
await expect
.poll(
async () => {
const messages = await sdk.session.messages({ sessionID, limit: 50 }).then((r) => r.data ?? [])
return messages
.filter((m) => m.info.role === "assistant")
.flatMap((m) => m.parts)
.filter((p) => p.type === "text")
.map((p) => p.text)
.join("\n")
},
{ timeout: 30_000 },
)
.toContain(token)
},
{
model: { providerID: "e2e-llm", modelID: "test-model" },
setup: (dir) => config(dir, llm.url),
},
)
.toContain(token)
} finally {
page.off("pageerror", onPageError)
await cleanupSession({ sdk, sessionID })
}
if (pageErrors.length > 0) {


@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/app",
"version": "1.3.13",
"version": "1.3.11",
"description": "",
"type": "module",
"exports": {


@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/console-app",
"version": "1.3.13",
"version": "1.3.11",
"type": "module",
"license": "MIT",
"scripts": {


@@ -1,7 +1,7 @@
{
"$schema": "https://json.schemastore.org/package.json",
"name": "@opencode-ai/console-core",
"version": "1.3.13",
"version": "1.3.11",
"private": true,
"type": "module",
"license": "MIT",


@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/console-function",
"version": "1.3.13",
"version": "1.3.11",
"$schema": "https://json.schemastore.org/package.json",
"private": true,
"type": "module",


@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/console-mail",
"version": "1.3.13",
"version": "1.3.11",
"dependencies": {
"@jsx-email/all": "2.2.3",
"@jsx-email/cli": "1.4.3",


@@ -1,7 +1,7 @@
{
"name": "@opencode-ai/desktop-electron",
"private": true,
"version": "1.3.13",
"version": "1.3.11",
"type": "module",
"license": "MIT",
"homepage": "https://opencode.ai",


@@ -1,7 +1,7 @@
{
"name": "@opencode-ai/desktop",
"private": true,
"version": "1.3.13",
"version": "1.3.11",
"type": "module",
"license": "MIT",
"scripts": {


@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/enterprise",
"version": "1.3.13",
"version": "1.3.11",
"private": true,
"type": "module",
"license": "MIT",


@@ -1,7 +1,7 @@
id = "opencode"
name = "OpenCode"
description = "The open source coding agent."
version = "1.3.13"
version = "1.3.11"
schema_version = 1
authors = ["Anomaly"]
repository = "https://github.com/anomalyco/opencode"
@@ -11,26 +11,26 @@ name = "OpenCode"
icon = "./icons/opencode.svg"
[agent_servers.opencode.targets.darwin-aarch64]
archive = "https://github.com/anomalyco/opencode/releases/download/v1.3.13/opencode-darwin-arm64.zip"
archive = "https://github.com/anomalyco/opencode/releases/download/v1.3.11/opencode-darwin-arm64.zip"
cmd = "./opencode"
args = ["acp"]
[agent_servers.opencode.targets.darwin-x86_64]
archive = "https://github.com/anomalyco/opencode/releases/download/v1.3.13/opencode-darwin-x64.zip"
archive = "https://github.com/anomalyco/opencode/releases/download/v1.3.11/opencode-darwin-x64.zip"
cmd = "./opencode"
args = ["acp"]
[agent_servers.opencode.targets.linux-aarch64]
archive = "https://github.com/anomalyco/opencode/releases/download/v1.3.13/opencode-linux-arm64.tar.gz"
archive = "https://github.com/anomalyco/opencode/releases/download/v1.3.11/opencode-linux-arm64.tar.gz"
cmd = "./opencode"
args = ["acp"]
[agent_servers.opencode.targets.linux-x86_64]
archive = "https://github.com/anomalyco/opencode/releases/download/v1.3.13/opencode-linux-x64.tar.gz"
archive = "https://github.com/anomalyco/opencode/releases/download/v1.3.11/opencode-linux-x64.tar.gz"
cmd = "./opencode"
args = ["acp"]
[agent_servers.opencode.targets.windows-x86_64]
archive = "https://github.com/anomalyco/opencode/releases/download/v1.3.13/opencode-windows-x64.zip"
archive = "https://github.com/anomalyco/opencode/releases/download/v1.3.11/opencode-windows-x64.zip"
cmd = "./opencode.exe"
args = ["acp"]


@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/function",
"version": "1.3.13",
"version": "1.3.11",
"$schema": "https://json.schemastore.org/package.json",
"private": true,
"type": "module",


@@ -1,6 +1,6 @@
{
"$schema": "https://json.schemastore.org/package.json",
"version": "1.3.13",
"version": "1.3.11",
"name": "opencode",
"type": "module",
"license": "MIT",
@@ -102,8 +102,8 @@
"@opencode-ai/sdk": "workspace:*",
"@opencode-ai/util": "workspace:*",
"@openrouter/ai-sdk-provider": "2.3.3",
"@opentui/core": "0.1.95",
"@opentui/solid": "0.1.95",
"@opentui/core": "0.1.93",
"@opentui/solid": "0.1.93",
"@parcel/watcher": "2.5.1",
"@pierre/diffs": "catalog:",
"@solid-primitives/event-bus": "1.1.2",


@@ -75,7 +75,6 @@ export namespace Agent {
const config = yield* Config.Service
const auth = yield* Auth.Service
const skill = yield* Skill.Service
const provider = yield* Provider.Service
const state = yield* InstanceState.make<State>(
Effect.fn("Agent.state")(function* (ctx) {
@@ -331,9 +330,9 @@ export namespace Agent {
model?: { providerID: ProviderID; modelID: ModelID }
}) {
const cfg = yield* config.get()
const model = input.model ?? (yield* provider.defaultModel())
const resolved = yield* provider.getModel(model.providerID, model.modelID)
const language = yield* provider.getLanguage(resolved)
const model = input.model ?? (yield* Effect.promise(() => Provider.defaultModel()))
const resolved = yield* Effect.promise(() => Provider.getModel(model.providerID, model.modelID))
const language = yield* Effect.promise(() => Provider.getLanguage(resolved))
const system = [PROMPT_GENERATE]
yield* Effect.promise(() =>
@@ -394,7 +393,6 @@ export namespace Agent {
)
export const defaultLayer = layer.pipe(
Layer.provide(Provider.defaultLayer),
Layer.provide(Auth.defaultLayer),
Layer.provide(Config.defaultLayer),
Layer.provide(Skill.defaultLayer),


@@ -436,13 +436,13 @@ export const SessionRoutes = lazy(() =>
validator(
"param",
z.object({
sessionID: SessionSummary.DiffInput.shape.sessionID,
sessionID: SessionSummary.diff.schema.shape.sessionID,
}),
),
validator(
"query",
z.object({
messageID: SessionSummary.DiffInput.shape.messageID,
messageID: SessionSummary.diff.schema.shape.messageID,
}),
),
async (c) => {


@@ -63,13 +63,7 @@ export namespace SessionCompaction {
export const layer: Layer.Layer<
Service,
never,
| Bus.Service
| Config.Service
| Session.Service
| Agent.Service
| Plugin.Service
| SessionProcessor.Service
| Provider.Service
Bus.Service | Config.Service | Session.Service | Agent.Service | Plugin.Service | SessionProcessor.Service
> = Layer.effect(
Service,
Effect.gen(function* () {
@@ -79,7 +73,6 @@ export namespace SessionCompaction {
const agents = yield* Agent.Service
const plugin = yield* Plugin.Service
const processors = yield* SessionProcessor.Service
const provider = yield* Provider.Service
const isOverflow = Effect.fn("SessionCompaction.isOverflow")(function* (input: {
tokens: MessageV2.Assistant["tokens"]
@@ -177,9 +170,11 @@ export namespace SessionCompaction {
}
const agent = yield* agents.get("compaction")
const model = agent.model
? yield* provider.getModel(agent.model.providerID, agent.model.modelID)
: yield* provider.getModel(userMessage.model.providerID, userMessage.model.modelID)
const model = yield* Effect.promise(() =>
agent.model
? Provider.getModel(agent.model.providerID, agent.model.modelID)
: Provider.getModel(userMessage.model.providerID, userMessage.model.modelID),
)
// Allow plugins to inject context or replace compaction prompt.
const compacting = yield* plugin.trigger(
"experimental.session.compacting",
@@ -218,7 +213,7 @@ When constructing the summary, try to stick to this template:
const prompt = compacting.prompt ?? [defaultPrompt, ...compacting.context].join("\n\n")
const msgs = structuredClone(messages)
yield* plugin.trigger("experimental.chat.messages.transform", {}, { messages: msgs })
const modelMessages = yield* MessageV2.toModelMessagesEffect(msgs, model, { stripMedia: true })
const modelMessages = yield* Effect.promise(() => MessageV2.toModelMessages(msgs, model, { stripMedia: true }))
const ctx = yield* InstanceState.context
const msg: MessageV2.Assistant = {
id: MessageID.ascending(),
@@ -382,7 +377,6 @@ When constructing the summary, try to stick to this template:
export const defaultLayer = Layer.unwrap(
Effect.sync(() =>
layer.pipe(
Layer.provide(Provider.defaultLayer),
Layer.provide(Session.defaultLayer),
Layer.provide(SessionProcessor.defaultLayer),
Layer.provide(Agent.defaultLayer),


@@ -593,13 +593,15 @@ export namespace Session {
})
const messages = Effect.fn("Session.messages")(function* (input: { sessionID: SessionID; limit?: number }) {
if (input.limit) {
const result = yield* MessageV2.pageEffect({ sessionID: input.sessionID, limit: input.limit })
return result.items
}
const all = yield* MessageV2.streamEffect(input.sessionID)
all.reverse()
return all
return yield* Effect.promise(async () => {
const result = [] as MessageV2.WithParts[]
for await (const msg of MessageV2.stream(input.sessionID)) {
if (input.limit && result.length >= input.limit) break
result.push(msg)
}
result.reverse()
return result
})
})
const removeMessage = Effect.fn("Session.removeMessage")(function* (input: {


@@ -15,7 +15,6 @@ import { errorMessage } from "@/util/error"
import type { SystemError } from "bun"
import type { Provider } from "@/provider/provider"
import { ModelID, ProviderID } from "@/provider/schema"
import { Effect } from "effect"
/** Error shape thrown by Bun's fetch() when gzip/br decompression fails mid-stream */
interface FetchDecompressionError extends Error {
@@ -548,7 +547,7 @@ export namespace MessageV2 {
and(eq(MessageTable.time_created, row.time), lt(MessageTable.id, row.id)),
)
function hydrate(rows: (typeof MessageTable.$inferSelect)[]) {
async function hydrate(rows: (typeof MessageTable.$inferSelect)[]) {
const ids = rows.map((row) => row.id)
const partByMessage = new Map<string, MessageV2.Part[]>()
if (ids.length > 0) {
@@ -810,58 +809,48 @@ export namespace MessageV2 {
)
}
export const toModelMessagesEffect = Effect.fnUntraced(function* (
input: WithParts[],
model: Provider.Model,
options?: { stripMedia?: boolean },
) {
return yield* Effect.promise(() => toModelMessages(input, model, options))
})
function pageSync(input: { sessionID: SessionID; limit: number; before?: string }) {
const before = input.before ? cursor.decode(input.before) : undefined
const where = before
? and(eq(MessageTable.session_id, input.sessionID), older(before))
: eq(MessageTable.session_id, input.sessionID)
const rows = Database.use((db) =>
db
.select()
.from(MessageTable)
.where(where)
.orderBy(desc(MessageTable.time_created), desc(MessageTable.id))
.limit(input.limit + 1)
.all(),
)
if (rows.length === 0) {
const row = Database.use((db) =>
db.select({ id: SessionTable.id }).from(SessionTable).where(eq(SessionTable.id, input.sessionID)).get(),
)
if (!row) throw new NotFoundError({ message: `Session not found: ${input.sessionID}` })
return {
items: [] as MessageV2.WithParts[],
more: false,
}
}
const more = rows.length > input.limit
const slice = more ? rows.slice(0, input.limit) : rows
const items = hydrate(slice)
items.reverse()
const tail = slice.at(-1)
return {
items,
more,
cursor: more && tail ? cursor.encode({ id: tail.id, time: tail.time_created }) : undefined,
}
}
export const page = fn(
z.object({
sessionID: SessionID.zod,
limit: z.number().int().positive(),
before: z.string().optional(),
}),
async (input) => pageSync(input),
async (input) => {
const before = input.before ? cursor.decode(input.before) : undefined
const where = before
? and(eq(MessageTable.session_id, input.sessionID), older(before))
: eq(MessageTable.session_id, input.sessionID)
const rows = Database.use((db) =>
db
.select()
.from(MessageTable)
.where(where)
.orderBy(desc(MessageTable.time_created), desc(MessageTable.id))
.limit(input.limit + 1)
.all(),
)
if (rows.length === 0) {
const row = Database.use((db) =>
db.select({ id: SessionTable.id }).from(SessionTable).where(eq(SessionTable.id, input.sessionID)).get(),
)
if (!row) throw new NotFoundError({ message: `Session not found: ${input.sessionID}` })
return {
items: [] as MessageV2.WithParts[],
more: false,
}
}
const more = rows.length > input.limit
const page = more ? rows.slice(0, input.limit) : rows
const items = await hydrate(page)
items.reverse()
const tail = page.at(-1)
return {
items,
more,
cursor: more && tail ? cursor.encode({ id: tail.id, time: tail.time_created }) : undefined,
}
},
)
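The `page` function uses the classic keyset-pagination trick: fetch `limit + 1` rows and use the extra row only to detect whether another page exists. A minimal in-memory sketch of the same idea, with a hypothetical row shape standing in for `MessageTable` rows:

```typescript
// Hypothetical row shape standing in for MessageTable rows (id, creation time).
interface Row {
  id: string
  time: number
}

// Keyset pagination over an in-memory array: order newest-first, take
// limit+1 rows, and use the extra row only to detect a further page.
// The cursor encodes the last returned row's (time, id) pair.
function pageRows(rows: Row[], limit: number, before?: { id: string; time: number }) {
  const candidates = rows
    .filter((r) => !before || r.time < before.time || (r.time === before.time && r.id < before.id))
    .sort((a, b) => b.time - a.time || (a.id > b.id ? -1 : a.id < b.id ? 1 : 0))
    .slice(0, limit + 1)
  const more = candidates.length > limit
  const items = more ? candidates.slice(0, limit) : candidates
  const tail = items.at(-1)
  return {
    items,
    more,
    cursor: more && tail ? { id: tail.id, time: tail.time } : undefined,
  }
}
```

The real code additionally reverses `items` so each page reads oldest-first, and falls back to a session-existence check when zero rows match.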
export const stream = fn(SessionID.zod, async function* (sessionID) {
@@ -878,7 +867,7 @@ export namespace MessageV2 {
}
})
function partsSync(message_id: MessageID) {
export const parts = fn(MessageID.zod, async (message_id) => {
const rows = Database.use((db) =>
db.select().from(PartTable).where(eq(PartTable.message_id, message_id)).orderBy(PartTable.id).all(),
)
@@ -891,82 +880,28 @@ export namespace MessageV2 {
messageID: row.message_id,
}) as MessageV2.Part,
)
}
export const parts = fn(MessageID.zod, async (message_id) => partsSync(message_id))
function getSync(input: { sessionID: SessionID; messageID: MessageID }): WithParts {
const row = Database.use((db) =>
db
.select()
.from(MessageTable)
.where(and(eq(MessageTable.id, input.messageID), eq(MessageTable.session_id, input.sessionID)))
.get(),
)
if (!row) throw new NotFoundError({ message: `Message not found: ${input.messageID}` })
return {
info: info(row),
parts: partsSync(input.messageID),
}
}
})
export const get = fn(
z.object({
sessionID: SessionID.zod,
messageID: MessageID.zod,
}),
async (input): Promise<WithParts> => getSync(input),
)
export const partsEffect = Effect.fnUntraced(function* (id: MessageID) {
return partsSync(id)
})
export const getEffect = Effect.fnUntraced(function* (input: { sessionID: SessionID; messageID: MessageID }) {
return getSync(input)
})
export const pageEffect = Effect.fnUntraced(function* (input: {
sessionID: SessionID
limit: number
before?: string
}) {
return pageSync(input)
})
export const streamEffect = Effect.fnUntraced(function* (sessionID: SessionID) {
const result: WithParts[] = []
const size = 50
let before: string | undefined
while (true) {
const next = pageSync({ sessionID, limit: size, before })
if (next.items.length === 0) break
for (let i = next.items.length - 1; i >= 0; i--) {
result.push(next.items[i])
}
if (!next.more || !next.cursor) break
before = next.cursor
}
return result
})
function applyCompactionFilter(msgs: MessageV2.WithParts[]) {
const result = [] as MessageV2.WithParts[]
const completed = new Set<string>()
for (const msg of msgs) {
result.push(msg)
if (
msg.info.role === "user" &&
completed.has(msg.info.id) &&
msg.parts.some((part) => part.type === "compaction")
async (input): Promise<WithParts> => {
const row = Database.use((db) =>
db
.select()
.from(MessageTable)
.where(and(eq(MessageTable.id, input.messageID), eq(MessageTable.session_id, input.sessionID)))
.get(),
)
break
if (msg.info.role === "assistant" && msg.info.summary && msg.info.finish && !msg.info.error)
completed.add(msg.info.parentID)
}
result.reverse()
return result
}
if (!row) throw new NotFoundError({ message: `Message not found: ${input.messageID}` })
return {
info: info(row),
parts: await parts(input.messageID),
}
},
)
export async function filterCompacted(stream: AsyncIterable<MessageV2.WithParts>) {
const result = [] as MessageV2.WithParts[]
@@ -986,10 +921,6 @@ export namespace MessageV2 {
return result
}
export const filterCompactedEffect = Effect.fnUntraced(function* (sessionID: SessionID) {
return applyCompactionFilter(yield* streamEffect(sessionID))
})
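The compaction filter walks messages newest-first and cuts history at the most recent completed compaction: an assistant message that finished its summary marks its parent user message as a boundary. A minimal sketch of that boundary scan, with message shapes deliberately simplified (the real condition also checks `finish` and the absence of an error):

```typescript
// Simplified message shapes; the real MessageV2 types carry many more fields.
type Msg =
  | { role: "user"; id: string; compaction: boolean }
  | { role: "assistant"; id: string; parentID: string; summaryDone: boolean }

// Walk newest-first: remember which user messages have a completed
// assistant summary, and stop once we reach a compaction boundary whose
// summary finished. Reverse at the end to restore chronological order.
function cutAtCompaction(newest: Msg[]): Msg[] {
  const result: Msg[] = []
  const completed = new Set<string>()
  for (const msg of newest) {
    result.push(msg)
    if (msg.role === "user" && completed.has(msg.id) && msg.compaction) break
    if (msg.role === "assistant" && msg.summaryDone) completed.add(msg.parentID)
  }
  result.reverse()
  return result
}
```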
export function fromError(
e: unknown,
ctx: { providerID: ProviderID; aborted?: boolean },


@@ -180,7 +180,7 @@ export namespace SessionProcessor {
metadata: value.providerMetadata,
} satisfies MessageV2.ToolPart)
const parts = yield* MessageV2.partsEffect(ctx.assistantMessage.id)
const parts = yield* Effect.promise(() => MessageV2.parts(ctx.assistantMessage.id))
const recentParts = parts.slice(-DOOM_LOOP_THRESHOLD)
if (
@@ -294,10 +294,12 @@ export namespace SessionProcessor {
}
ctx.snapshot = undefined
}
SessionSummary.summarize({
sessionID: ctx.sessionID,
messageID: ctx.assistantMessage.parentID,
})
yield* Effect.promise(() =>
SessionSummary.summarize({
sessionID: ctx.sessionID,
messageID: ctx.assistantMessage.parentID,
}),
).pipe(Effect.ignoreCause({ log: true, message: "session summary failed" }), Effect.forkDetach)
if (
!ctx.assistantMessage.summary &&
isOverflow({ cfg: yield* config.get(), tokens: usage.tokens, model: ctx.model })
@@ -392,7 +394,7 @@ export namespace SessionProcessor {
}
ctx.reasoningMap = {}
const parts = yield* MessageV2.partsEffect(ctx.assistantMessage.id)
const parts = yield* Effect.promise(() => MessageV2.parts(ctx.assistantMessage.id))
for (const part of parts) {
if (part.type !== "tool" || part.state.status === "completed" || part.state.status === "error") continue
yield* session.updatePart({


@@ -84,7 +84,6 @@ export namespace SessionPrompt {
const status = yield* SessionStatus.Service
const sessions = yield* Session.Service
const agents = yield* Agent.Service
const provider = yield* Provider.Service
const processor = yield* SessionProcessor.Service
const compaction = yield* SessionCompaction.Service
const plugin = yield* Plugin.Service
@@ -207,14 +206,14 @@ export namespace SessionPrompt {
const ag = yield* agents.get("title")
if (!ag) return
const mdl = ag.model
? yield* provider.getModel(ag.model.providerID, ag.model.modelID)
: ((yield* provider.getSmallModel(input.providerID)) ??
(yield* provider.getModel(input.providerID, input.modelID)))
const msgs = onlySubtasks
? [{ role: "user" as const, content: subtasks.map((p) => p.prompt).join("\n") }]
: yield* MessageV2.toModelMessagesEffect(context, mdl)
const text = yield* Effect.promise(async (signal) => {
const mdl = ag.model
? await Provider.getModel(ag.model.providerID, ag.model.modelID)
: ((await Provider.getSmallModel(input.providerID)) ??
(await Provider.getModel(input.providerID, input.modelID)))
const msgs = onlySubtasks
? [{ role: "user" as const, content: subtasks.map((p) => p.prompt).join("\n") }]
: await MessageV2.toModelMessages(context, mdl)
const result = await LLM.stream({
agent: ag,
user: firstInfo,
@@ -933,35 +932,21 @@ NOTE: At any point in time through this workflow you should feel free to ask the
return { info: msg, parts: [part] }
})
const getModel = Effect.fn("SessionPrompt.getModel")(function* (
providerID: ProviderID,
modelID: ModelID,
sessionID: SessionID,
) {
const exit = yield* provider.getModel(providerID, modelID).pipe(Effect.exit)
if (Exit.isSuccess(exit)) return exit.value
const err = Cause.squash(exit.cause)
if (Provider.ModelNotFoundError.isInstance(err)) {
const hint = err.data.suggestions?.length ? ` Did you mean: ${err.data.suggestions.join(", ")}?` : ""
yield* bus.publish(Session.Event.Error, {
sessionID,
error: new NamedError.Unknown({
message: `Model not found: ${err.data.providerID}/${err.data.modelID}.${hint}`,
}).toObject(),
})
}
return yield* Effect.failCause(exit.cause)
})
const lastModel = Effect.fnUntraced(function* (sessionID: SessionID) {
const model = yield* Effect.promise(async () => {
for await (const item of MessageV2.stream(sessionID)) {
if (item.info.role === "user" && item.info.model) return item.info.model
}
})
if (model) return model
return yield* provider.defaultModel()
})
const getModel = (providerID: ProviderID, modelID: ModelID, sessionID: SessionID) =>
Effect.promise(() =>
Provider.getModel(providerID, modelID).catch((e) => {
if (Provider.ModelNotFoundError.isInstance(e)) {
const hint = e.data.suggestions?.length ? ` Did you mean: ${e.data.suggestions.join(", ")}?` : ""
Bus.publish(Session.Event.Error, {
sessionID,
error: new NamedError.Unknown({
message: `Model not found: ${e.data.providerID}/${e.data.modelID}.${hint}`,
}).toObject(),
})
}
throw e
}),
)
const createUserMessage = Effect.fn("SessionPrompt.createUserMessage")(function* (input: PromptInput) {
const agentName = input.agent || (yield* agents.defaultAgent())
@@ -975,12 +960,9 @@ NOTE: At any point in time through this workflow you should feel free to ask the
}
const model = input.model ?? ag.model ?? (yield* lastModel(input.sessionID))
const same = ag.model && model.providerID === ag.model.providerID && model.modelID === ag.model.modelID
const full =
!input.variant && ag.variant && same
? yield* provider
.getModel(model.providerID, model.modelID)
.pipe(Effect.catch(() => Effect.succeed(undefined)))
!input.variant && ag.variant
? yield* Effect.promise(() => Provider.getModel(model.providerID, model.modelID).catch(() => undefined))
: undefined
const variant = input.variant ?? (ag.variant && full?.variants?.[ag.variant] ? ag.variant : undefined)
@@ -1127,7 +1109,7 @@ NOTE: At any point in time through this workflow you should feel free to ask the
]
const read = yield* Effect.promise(() => ReadTool.init()).pipe(
Effect.flatMap((t) =>
provider.getModel(info.model.providerID, info.model.modelID).pipe(
Effect.promise(() => Provider.getModel(info.model.providerID, info.model.modelID)).pipe(
Effect.flatMap((mdl) =>
Effect.promise(() =>
t.execute(args, {
@@ -1360,7 +1342,7 @@ NOTE: At any point in time through this workflow you should feel free to ask the
yield* status.set(sessionID, { type: "busy" })
log.info("loop", { step, sessionID })
let msgs = yield* MessageV2.filterCompactedEffect(sessionID)
let msgs = yield* Effect.promise(() => MessageV2.filterCompacted(MessageV2.stream(sessionID)))
let lastUser: MessageV2.User | undefined
let lastAssistant: MessageV2.Assistant | undefined
@@ -1729,7 +1711,6 @@ NOTE: At any point in time through this workflow you should feel free to ask the
Layer.provide(FileTime.defaultLayer),
Layer.provide(ToolRegistry.defaultLayer),
Layer.provide(Truncate.layer),
Layer.provide(Provider.defaultLayer),
Layer.provide(AppFileSystem.defaultLayer),
Layer.provide(Plugin.defaultLayer),
Layer.provide(Session.defaultLayer),
@@ -1875,6 +1856,15 @@ NOTE: At any point in time through this workflow you should feel free to ask the
return runPromise((svc) => svc.command(CommandInput.parse(input)))
}
const lastModel = Effect.fnUntraced(function* (sessionID: SessionID) {
return yield* Effect.promise(async () => {
for await (const item of MessageV2.stream(sessionID)) {
if (item.info.role === "user" && item.info.model) return item.info.model
}
return Provider.defaultModel()
})
})
/** @internal Exported for testing */
export function createStructuredOutputTool(input: {
schema: Record<string, any>


@@ -1,14 +1,12 @@
import z from "zod"
import { Effect, Layer, ServiceMap } from "effect"
import { makeRuntime } from "@/effect/run-service"
import { Bus } from "../bus"
import { Snapshot } from "../snapshot"
import { Storage } from "@/storage/storage"
import { SyncEvent } from "../sync"
import { Log } from "../util/log"
import { Session } from "."
import { MessageV2 } from "./message-v2"
import { SessionID, MessageID, PartID } from "./schema"
import { Snapshot } from "../snapshot"
import { MessageV2 } from "./message-v2"
import { Session } from "."
import { Log } from "../util/log"
import { SyncEvent } from "../sync"
import { Storage } from "@/storage/storage"
import { Bus } from "../bus"
import { SessionPrompt } from "./prompt"
import { SessionSummary } from "./summary"
@@ -22,152 +20,116 @@ export namespace SessionRevert {
})
export type RevertInput = z.infer<typeof RevertInput>
export interface Interface {
readonly revert: (input: RevertInput) => Effect.Effect<Session.Info>
readonly unrevert: (input: { sessionID: SessionID }) => Effect.Effect<Session.Info>
readonly cleanup: (session: Session.Info) => Effect.Effect<void>
}
export class Service extends ServiceMap.Service<Service, Interface>()("@opencode/SessionRevert") {}
export const layer = Layer.effect(
Service,
Effect.gen(function* () {
const sessions = yield* Session.Service
const snap = yield* Snapshot.Service
const storage = yield* Storage.Service
const bus = yield* Bus.Service
const revert = Effect.fn("SessionRevert.revert")(function* (input: RevertInput) {
yield* Effect.promise(() => SessionPrompt.assertNotBusy(input.sessionID))
const all = yield* sessions.messages({ sessionID: input.sessionID })
let lastUser: MessageV2.User | undefined
const session = yield* sessions.get(input.sessionID)
let rev: Session.Info["revert"]
const patches: Snapshot.Patch[] = []
for (const msg of all) {
if (msg.info.role === "user") lastUser = msg.info
const remaining = []
for (const part of msg.parts) {
if (rev) {
if (part.type === "patch") patches.push(part)
continue
}
if (!rev) {
if ((msg.info.id === input.messageID && !input.partID) || part.id === input.partID) {
const partID = remaining.some((item) => ["text", "tool"].includes(item.type)) ? input.partID : undefined
rev = {
messageID: !partID && lastUser ? lastUser.id : msg.info.id,
partID,
}
}
remaining.push(part)
}
}
}
if (!rev) return session
rev.snapshot = session.revert?.snapshot ?? (yield* snap.track())
yield* snap.revert(patches)
if (rev.snapshot) rev.diff = yield* snap.diff(rev.snapshot as string)
const range = all.filter((msg) => msg.info.id >= rev!.messageID)
const diffs = yield* Effect.promise(() => SessionSummary.computeDiff({ messages: range }))
yield* storage.write(["session_diff", input.sessionID], diffs).pipe(Effect.ignore)
yield* bus.publish(Session.Event.Diff, { sessionID: input.sessionID, diff: diffs })
yield* sessions.setRevert({
sessionID: input.sessionID,
revert: rev,
summary: {
additions: diffs.reduce((sum, x) => sum + x.additions, 0),
deletions: diffs.reduce((sum, x) => sum + x.deletions, 0),
files: diffs.length,
},
})
return yield* sessions.get(input.sessionID)
})
const unrevert = Effect.fn("SessionRevert.unrevert")(function* (input: { sessionID: SessionID }) {
log.info("unreverting", input)
yield* Effect.promise(() => SessionPrompt.assertNotBusy(input.sessionID))
const session = yield* sessions.get(input.sessionID)
if (!session.revert) return session
if (session.revert.snapshot) yield* snap.restore(session.revert!.snapshot!)
yield* sessions.clearRevert(input.sessionID)
return yield* sessions.get(input.sessionID)
})
const cleanup = Effect.fn("SessionRevert.cleanup")(function* (session: Session.Info) {
if (!session.revert) return
const sessionID = session.id
const msgs = yield* sessions.messages({ sessionID })
const messageID = session.revert.messageID
const remove = [] as MessageV2.WithParts[]
let target: MessageV2.WithParts | undefined
for (const msg of msgs) {
if (msg.info.id < messageID) continue
if (msg.info.id > messageID) {
remove.push(msg)
continue
}
if (session.revert.partID) {
target = msg
continue
}
remove.push(msg)
}
for (const msg of remove) {
SyncEvent.run(MessageV2.Event.Removed, {
sessionID,
messageID: msg.info.id,
})
}
if (session.revert.partID && target) {
const partID = session.revert.partID
const idx = target.parts.findIndex((part) => part.id === partID)
if (idx >= 0) {
const removeParts = target.parts.slice(idx)
target.parts = target.parts.slice(0, idx)
for (const part of removeParts) {
SyncEvent.run(MessageV2.Event.PartRemoved, {
sessionID,
messageID: target.info.id,
partID: part.id,
})
}
}
}
yield* sessions.clearRevert(sessionID)
})
return Service.of({ revert, unrevert, cleanup })
}),
)
export const defaultLayer = Layer.unwrap(
Effect.sync(() =>
layer.pipe(
Layer.provide(Session.defaultLayer),
Layer.provide(Snapshot.defaultLayer),
Layer.provide(Storage.defaultLayer),
Layer.provide(Bus.layer),
),
),
)
const { runPromise } = makeRuntime(Service, defaultLayer)
export async function revert(input: RevertInput) {
return runPromise((svc) => svc.revert(input))
await SessionPrompt.assertNotBusy(input.sessionID)
const all = await Session.messages({ sessionID: input.sessionID })
let lastUser: MessageV2.User | undefined
const session = await Session.get(input.sessionID)
let revert: Session.Info["revert"]
const patches: Snapshot.Patch[] = []
for (const msg of all) {
if (msg.info.role === "user") lastUser = msg.info
const remaining = []
for (const part of msg.parts) {
if (revert) {
if (part.type === "patch") {
patches.push(part)
}
continue
}
if (!revert) {
if ((msg.info.id === input.messageID && !input.partID) || part.id === input.partID) {
// if no useful parts left in message, same as reverting whole message
const partID = remaining.some((item) => ["text", "tool"].includes(item.type)) ? input.partID : undefined
revert = {
messageID: !partID && lastUser ? lastUser.id : msg.info.id,
partID,
}
}
remaining.push(part)
}
}
}
if (revert) {
const session = await Session.get(input.sessionID)
revert.snapshot = session.revert?.snapshot ?? (await Snapshot.track())
await Snapshot.revert(patches)
if (revert.snapshot) revert.diff = await Snapshot.diff(revert.snapshot)
const rangeMessages = all.filter((msg) => msg.info.id >= revert!.messageID)
const diffs = await SessionSummary.computeDiff({ messages: rangeMessages })
await Storage.write(["session_diff", input.sessionID], diffs)
Bus.publish(Session.Event.Diff, {
sessionID: input.sessionID,
diff: diffs,
})
return Session.setRevert({
sessionID: input.sessionID,
revert,
summary: {
additions: diffs.reduce((sum, x) => sum + x.additions, 0),
deletions: diffs.reduce((sum, x) => sum + x.deletions, 0),
files: diffs.length,
},
})
}
return session
}
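The revert scan carries a subtle rule, flagged by the inline comment: if no text or tool part precedes the target inside its message, reverting that part is equivalent to reverting the whole message, so the anchor falls back to the preceding user message. A reduced sketch of just that selection rule (shapes simplified, names illustrative):

```typescript
// Simplified shapes for the selection rule only.
interface Part { id: string; type: "text" | "tool" | "patch" | "step-start" }
interface Msg { id: string; role: "user" | "assistant"; parts: Part[] }

// Find the revert anchor for a target part. If no text/tool part precedes
// the target inside its message, drop the partID and anchor at the last
// user message instead (reverting the whole exchange).
function revertAnchor(msgs: Msg[], targetPartID: string): { messageID: string; partID?: string } | undefined {
  let lastUser: Msg | undefined
  for (const msg of msgs) {
    if (msg.role === "user") lastUser = msg
    const remaining: Part[] = []
    for (const part of msg.parts) {
      if (part.id === targetPartID) {
        const partID = remaining.some((p) => p.type === "text" || p.type === "tool") ? targetPartID : undefined
        return { messageID: !partID && lastUser ? lastUser.id : msg.id, partID }
      }
      remaining.push(part)
    }
  }
  return undefined
}
```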
export async function unrevert(input: { sessionID: SessionID }) {
return runPromise((svc) => svc.unrevert(input))
log.info("unreverting", input)
await SessionPrompt.assertNotBusy(input.sessionID)
const session = await Session.get(input.sessionID)
if (!session.revert) return session
if (session.revert.snapshot) await Snapshot.restore(session.revert.snapshot)
return Session.clearRevert(input.sessionID)
}
export async function cleanup(session: Session.Info) {
return runPromise((svc) => svc.cleanup(session))
if (!session.revert) return
const sessionID = session.id
const msgs = await Session.messages({ sessionID })
const messageID = session.revert.messageID
const remove = [] as MessageV2.WithParts[]
let target: MessageV2.WithParts | undefined
for (const msg of msgs) {
if (msg.info.id < messageID) {
continue
}
if (msg.info.id > messageID) {
remove.push(msg)
continue
}
if (session.revert.partID) {
target = msg
continue
}
remove.push(msg)
}
for (const msg of remove) {
SyncEvent.run(MessageV2.Event.Removed, {
sessionID: sessionID,
messageID: msg.info.id,
})
}
if (session.revert.partID && target) {
const partID = session.revert.partID
const removeStart = target.parts.findIndex((part) => part.id === partID)
if (removeStart >= 0) {
const preserveParts = target.parts.slice(0, removeStart)
const removeParts = target.parts.slice(removeStart)
target.parts = preserveParts
for (const part of removeParts) {
SyncEvent.run(MessageV2.Event.PartRemoved, {
sessionID: sessionID,
messageID: target.info.id,
partID: part.id,
})
}
}
}
await Session.clearRevert(sessionID)
}
}


@@ -1,12 +1,14 @@
import { fn } from "@/util/fn"
import z from "zod"
import { Effect, Layer, ServiceMap } from "effect"
import { makeRuntime } from "@/effect/run-service"
import { Bus } from "@/bus"
import { Snapshot } from "@/snapshot"
import { Storage } from "@/storage/storage"
import { Session } from "."
import { MessageV2 } from "./message-v2"
import { SessionID, MessageID } from "./schema"
import { Snapshot } from "@/snapshot"
import { Storage } from "@/storage/storage"
import { Bus } from "@/bus"
import { NotFoundError } from "@/storage/db"
export namespace SessionSummary {
function unquoteGitPath(input: string) {
@@ -65,117 +67,103 @@ export namespace SessionSummary {
return Buffer.from(bytes).toString()
}
export interface Interface {
readonly summarize: (input: { sessionID: SessionID; messageID: MessageID }) => Effect.Effect<void>
readonly diff: (input: { sessionID: SessionID; messageID?: MessageID }) => Effect.Effect<Snapshot.FileDiff[]>
readonly computeDiff: (input: { messages: MessageV2.WithParts[] }) => Effect.Effect<Snapshot.FileDiff[]>
}
export class Service extends ServiceMap.Service<Service, Interface>()("@opencode/SessionSummary") {}
export const layer = Layer.effect(
Service,
Effect.gen(function* () {
const sessions = yield* Session.Service
const snapshot = yield* Snapshot.Service
const storage = yield* Storage.Service
const bus = yield* Bus.Service
const computeDiff = Effect.fn("SessionSummary.computeDiff")(function* (input: {
messages: MessageV2.WithParts[]
}) {
let from: string | undefined
let to: string | undefined
for (const item of input.messages) {
if (!from) {
for (const part of item.parts) {
if (part.type === "step-start" && part.snapshot) {
from = part.snapshot
break
}
}
}
for (const part of item.parts) {
if (part.type === "step-finish" && part.snapshot) to = part.snapshot
}
}
if (from && to) return yield* snapshot.diffFull(from, to)
return []
})
const summarize = Effect.fn("SessionSummary.summarize")(function* (input: {
sessionID: SessionID
messageID: MessageID
}) {
const all = yield* sessions.messages({ sessionID: input.sessionID })
if (!all.length) return
const diffs = yield* computeDiff({ messages: all })
yield* sessions.setSummary({
sessionID: input.sessionID,
summary: {
additions: diffs.reduce((sum, x) => sum + x.additions, 0),
deletions: diffs.reduce((sum, x) => sum + x.deletions, 0),
files: diffs.length,
},
})
yield* storage.write(["session_diff", input.sessionID], diffs).pipe(Effect.ignore)
yield* bus.publish(Session.Event.Diff, { sessionID: input.sessionID, diff: diffs })
const messages = all.filter(
(m) => m.info.id === input.messageID || (m.info.role === "assistant" && m.info.parentID === input.messageID),
)
const target = messages.find((m) => m.info.id === input.messageID)
if (!target || target.info.role !== "user") return
const msgDiffs = yield* computeDiff({ messages })
target.info.summary = { ...target.info.summary, diffs: msgDiffs }
yield* sessions.updateMessage(target.info)
})
const diff = Effect.fn("SessionSummary.diff")(function* (input: { sessionID: SessionID; messageID?: MessageID }) {
const diffs = yield* storage
.read<Snapshot.FileDiff[]>(["session_diff", input.sessionID])
.pipe(Effect.catch(() => Effect.succeed([] as Snapshot.FileDiff[])))
const next = diffs.map((item) => {
const file = unquoteGitPath(item.file)
if (file === item.file) return item
return { ...item, file }
})
const changed = next.some((item, i) => item.file !== diffs[i]?.file)
if (changed) yield* storage.write(["session_diff", input.sessionID], next).pipe(Effect.ignore)
return next
})
return Service.of({ summarize, diff, computeDiff })
export const summarize = fn(
z.object({
sessionID: SessionID.zod,
messageID: MessageID.zod,
}),
async (input) => {
await Session.messages({ sessionID: input.sessionID })
.then((all) =>
Promise.all([
summarizeSession({ sessionID: input.sessionID, messages: all }),
summarizeMessage({ messageID: input.messageID, messages: all }),
]),
)
.catch((err) => {
if (NotFoundError.isInstance(err)) return
throw err
})
},
)
export const defaultLayer = Layer.unwrap(
Effect.sync(() =>
layer.pipe(
Layer.provide(Session.defaultLayer),
Layer.provide(Snapshot.defaultLayer),
Layer.provide(Storage.defaultLayer),
Layer.provide(Bus.layer),
),
),
)
const { runPromise } = makeRuntime(Service, defaultLayer)
export const summarize = (input: { sessionID: SessionID; messageID: MessageID }) =>
void runPromise((svc) => svc.summarize(input)).catch(() => {})
export const DiffInput = z.object({
sessionID: SessionID.zod,
messageID: MessageID.zod.optional(),
})
export async function diff(input: z.infer<typeof DiffInput>) {
return runPromise((svc) => svc.diff(input))
async function summarizeSession(input: { sessionID: SessionID; messages: MessageV2.WithParts[] }) {
const diffs = await computeDiff({ messages: input.messages })
await Session.setSummary({
sessionID: input.sessionID,
summary: {
additions: diffs.reduce((sum, x) => sum + x.additions, 0),
deletions: diffs.reduce((sum, x) => sum + x.deletions, 0),
files: diffs.length,
},
})
await Storage.write(["session_diff", input.sessionID], diffs)
Bus.publish(Session.Event.Diff, {
sessionID: input.sessionID,
diff: diffs,
})
}
async function summarizeMessage(input: { messageID: string; messages: MessageV2.WithParts[] }) {
const messages = input.messages.filter(
(m) => m.info.id === input.messageID || (m.info.role === "assistant" && m.info.parentID === input.messageID),
)
const msgWithParts = messages.find((m) => m.info.id === input.messageID)
if (!msgWithParts || msgWithParts.info.role !== "user") return
const userMsg = msgWithParts.info
const diffs = await computeDiff({ messages })
userMsg.summary = {
...userMsg.summary,
diffs,
}
await Session.updateMessage(userMsg)
}
export const diff = fn(
z.object({
sessionID: SessionID.zod,
messageID: MessageID.zod.optional(),
}),
async (input) => {
const diffs = await Storage.read<Snapshot.FileDiff[]>(["session_diff", input.sessionID]).catch(() => [])
const next = diffs.map((item) => {
const file = unquoteGitPath(item.file)
if (file === item.file) return item
return {
...item,
file,
}
})
const changed = next.some((item, i) => item.file !== diffs[i]?.file)
if (changed) Storage.write(["session_diff", input.sessionID], next).catch(() => {})
return next
},
)
export async function computeDiff(input: { messages: MessageV2.WithParts[] }) {
return runPromise((svc) => svc.computeDiff(input))
let from: string | undefined
let to: string | undefined
// scan assistant messages to find earliest from and latest to
// snapshot
for (const item of input.messages) {
if (!from) {
for (const part of item.parts) {
if (part.type === "step-start" && part.snapshot) {
from = part.snapshot
break
}
}
}
for (const part of item.parts) {
if (part.type === "step-finish" && part.snapshot) {
to = part.snapshot
}
}
}
if (from && to) return Snapshot.diffFull(from, to)
return []
}
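`computeDiff` derives its range from snapshots embedded in message parts: the first `step-start` snapshot encountered is the diff base, the last `step-finish` snapshot is the target. The scan in isolation (part shape simplified from the real `MessageV2.Part` union):

```typescript
// Simplified part shape; the real MessageV2.Part union carries more variants.
type Part =
  | { type: "step-start"; snapshot?: string }
  | { type: "step-finish"; snapshot?: string }
  | { type: "text"; text: string }

// Walk all parts in order: the first step-start snapshot seen becomes the
// diff base ("from"); the last step-finish snapshot seen becomes the
// diff target ("to"). A diff is only possible when both exist.
function snapshotRange(messages: { parts: Part[] }[]): { from?: string; to?: string } {
  let from: string | undefined
  let to: string | undefined
  for (const msg of messages) {
    for (const part of msg.parts) {
      if (!from && part.type === "step-start" && part.snapshot) from = part.snapshot
      if (part.type === "step-finish" && part.snapshot) to = part.snapshot
    }
  }
  return { from, to }
}
```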
}


@@ -1,47 +0,0 @@
/** @jsxImportSource @opentui/solid */
import { expect, test } from "bun:test"
import { createSlot, createSolidSlotRegistry, testRender, useRenderer } from "@opentui/solid"
import { onMount } from "solid-js"
type Slots = {
prompt: {}
}
test("replace slot mounts plugin content once", async () => {
let mounts = 0
const Probe = () => {
onMount(() => {
mounts += 1
})
return <box />
}
const App = () => {
const renderer = useRenderer()
const reg = createSolidSlotRegistry<Slots>(renderer, {})
const Slot = createSlot(reg)
reg.register({
id: "plugin",
slots: {
prompt() {
return <Probe />
},
},
})
return (
<box>
<Slot name="prompt" mode="replace">
<box />
</Slot>
</box>
)
}
await testRender(() => <App />)
expect(mounts).toBe(1)
})


@@ -1,81 +0,0 @@
import { Effect, Layer } from "effect"
import { Provider } from "../../src/provider/provider"
import { ModelID, ProviderID } from "../../src/provider/schema"
export namespace ProviderTest {
export function model(override: Partial<Provider.Model> = {}): Provider.Model {
const id = override.id ?? ModelID.make("gpt-5.2")
const providerID = override.providerID ?? ProviderID.make("openai")
return {
id,
providerID,
name: "Test Model",
capabilities: {
toolcall: true,
attachment: false,
reasoning: false,
temperature: true,
interleaved: false,
input: { text: true, image: false, audio: false, video: false, pdf: false },
output: { text: true, image: false, audio: false, video: false, pdf: false },
},
api: { id, url: "https://example.com", npm: "@ai-sdk/openai" },
cost: { input: 0, output: 0, cache: { read: 0, write: 0 } },
limit: { context: 200_000, output: 10_000 },
status: "active",
options: {},
headers: {},
release_date: "2025-01-01",
...override,
}
}
export function info(override: Partial<Provider.Info> = {}, mdl = model()): Provider.Info {
const id = override.id ?? mdl.providerID
return {
id,
name: "Test Provider",
source: "config",
env: [],
options: {},
models: { [mdl.id]: mdl },
...override,
}
}
export function fake(override: Partial<Provider.Interface> & { model?: Provider.Model; info?: Provider.Info } = {}) {
const mdl = override.model ?? model()
const row = override.info ?? info({}, mdl)
return {
model: mdl,
info: row,
layer: Layer.succeed(
Provider.Service,
Provider.Service.of({
list: Effect.fn("TestProvider.list")(() => Effect.succeed({ [row.id]: row })),
getProvider: Effect.fn("TestProvider.getProvider")((providerID) => {
if (providerID === row.id) return Effect.succeed(row)
return Effect.die(new Error(`Unknown test provider: ${providerID}`))
}),
getModel: Effect.fn("TestProvider.getModel")((providerID, modelID) => {
if (providerID === row.id && modelID === mdl.id) return Effect.succeed(mdl)
return Effect.die(new Error(`Unknown test model: ${providerID}/${modelID}`))
}),
getLanguage: Effect.fn("TestProvider.getLanguage")(() =>
Effect.die(new Error("ProviderTest.getLanguage not configured")),
),
closest: Effect.fn("TestProvider.closest")((providerID) =>
Effect.succeed(providerID === row.id ? { providerID: row.id, modelID: mdl.id } : undefined),
),
getSmallModel: Effect.fn("TestProvider.getSmallModel")((providerID) =>
Effect.succeed(providerID === row.id ? mdl : undefined),
),
defaultModel: Effect.fn("TestProvider.defaultModel")(() =>
Effect.succeed({ providerID: row.id, modelID: mdl.id }),
),
...override,
}),
),
}
}
}


@@ -64,7 +64,9 @@ describe("Format", () => {
),
)
it.live("service initializes without error", () => provideTmpdirInstance(() => Format.Service.use(() => Effect.void)))
it.live("service initializes without error", () =>
provideTmpdirInstance(() => Format.Service.use(() => Effect.void)),
)
it.live("status() initializes formatter state per directory", () =>
Effect.gen(function* () {


@@ -1,4 +1,4 @@
import { afterEach, describe, expect, mock, test } from "bun:test"
import { afterEach, describe, expect, mock, spyOn, test } from "bun:test"
import { APICallError } from "ai"
import { Cause, Effect, Exit, Layer, ManagedRuntime } from "effect"
import * as Stream from "effect/Stream"
@@ -20,9 +20,9 @@ import { MessageID, PartID, SessionID } from "../../src/session/schema"
import { SessionStatus } from "../../src/session/status"
import { ModelID, ProviderID } from "../../src/provider/schema"
import type { Provider } from "../../src/provider/provider"
import * as ProviderModule from "../../src/provider/provider"
import * as SessionProcessorModule from "../../src/session/processor"
import { Snapshot } from "../../src/snapshot"
import { ProviderTest } from "../fake/provider"
Log.init({ print: false })
@@ -65,8 +65,6 @@ function createModel(opts: {
} as Provider.Model
}
const wide = () => ProviderTest.fake({ model: createModel({ context: 100_000, output: 32_000 }) })
async function user(sessionID: SessionID, text: string) {
const msg = await Session.updateMessage({
id: MessageID.ascending(),
@@ -164,11 +162,10 @@ function layer(result: "continue" | "compact") {
)
}
function runtime(result: "continue" | "compact", plugin = Plugin.defaultLayer, provider = ProviderTest.fake()) {
function runtime(result: "continue" | "compact", plugin = Plugin.defaultLayer) {
const bus = Bus.layer
return ManagedRuntime.make(
Layer.mergeAll(SessionCompaction.layer, bus).pipe(
Layer.provide(provider.layer),
Layer.provide(Session.defaultLayer),
Layer.provide(layer(result)),
Layer.provide(Agent.defaultLayer),
@@ -201,13 +198,12 @@ function llm() {
}
}
function liveRuntime(layer: Layer.Layer<LLM.Service>, provider = ProviderTest.fake()) {
function liveRuntime(layer: Layer.Layer<LLM.Service>) {
const bus = Bus.layer
const status = SessionStatus.layer.pipe(Layer.provide(bus))
const processor = SessionProcessorModule.SessionProcessor.layer
return ManagedRuntime.make(
Layer.mergeAll(SessionCompaction.layer.pipe(Layer.provide(processor)), processor, bus, status).pipe(
Layer.provide(provider.layer),
Layer.provide(Session.defaultLayer),
Layer.provide(Snapshot.defaultLayer),
Layer.provide(layer),
@@ -548,12 +544,14 @@ describe("session.compaction.process", () => {
await Instance.provide({
directory: tmp.path,
fn: async () => {
spyOn(ProviderModule.Provider, "getModel").mockResolvedValue(createModel({ context: 100_000, output: 32_000 }))
const session = await Session.create({})
const msg = await user(session.id, "hello")
const msgs = await Session.messages({ sessionID: session.id })
const done = defer()
let seen = false
const rt = runtime("continue", Plugin.defaultLayer, wide())
const rt = runtime("continue")
let unsub: (() => void) | undefined
try {
unsub = await rt.runPromise(
@@ -598,9 +596,11 @@ describe("session.compaction.process", () => {
await Instance.provide({
directory: tmp.path,
fn: async () => {
spyOn(ProviderModule.Provider, "getModel").mockResolvedValue(createModel({ context: 100_000, output: 32_000 }))
const session = await Session.create({})
const msg = await user(session.id, "hello")
const rt = runtime("compact", Plugin.defaultLayer, wide())
const rt = runtime("compact")
try {
const msgs = await Session.messages({ sessionID: session.id })
const result = await rt.runPromise(
@@ -636,9 +636,11 @@ describe("session.compaction.process", () => {
await Instance.provide({
directory: tmp.path,
fn: async () => {
spyOn(ProviderModule.Provider, "getModel").mockResolvedValue(createModel({ context: 100_000, output: 32_000 }))
const session = await Session.create({})
const msg = await user(session.id, "hello")
const rt = runtime("continue", Plugin.defaultLayer, wide())
const rt = runtime("continue")
try {
const msgs = await Session.messages({ sessionID: session.id })
const result = await rt.runPromise(
@@ -676,6 +678,8 @@ describe("session.compaction.process", () => {
await Instance.provide({
directory: tmp.path,
fn: async () => {
spyOn(ProviderModule.Provider, "getModel").mockResolvedValue(createModel({ context: 100_000, output: 32_000 }))
const session = await Session.create({})
await user(session.id, "root")
const replay = await user(session.id, "image")
@@ -689,7 +693,7 @@ describe("session.compaction.process", () => {
url: "https://example.com/cat.png",
})
const msg = await user(session.id, "current")
const rt = runtime("continue", Plugin.defaultLayer, wide())
const rt = runtime("continue")
try {
const msgs = await Session.messages({ sessionID: session.id })
const result = await rt.runPromise(
@@ -724,11 +728,13 @@ describe("session.compaction.process", () => {
await Instance.provide({
directory: tmp.path,
fn: async () => {
spyOn(ProviderModule.Provider, "getModel").mockResolvedValue(createModel({ context: 100_000, output: 32_000 }))
const session = await Session.create({})
await user(session.id, "earlier")
const msg = await user(session.id, "current")
const rt = runtime("continue", Plugin.defaultLayer, wide())
const rt = runtime("continue")
try {
const msgs = await Session.messages({ sessionID: session.id })
const result = await rt.runPromise(
@@ -784,11 +790,13 @@ describe("session.compaction.process", () => {
await Instance.provide({
directory: tmp.path,
fn: async () => {
spyOn(ProviderModule.Provider, "getModel").mockResolvedValue(createModel({ context: 100_000, output: 32_000 }))
const session = await Session.create({})
const msg = await user(session.id, "hello")
const msgs = await Session.messages({ sessionID: session.id })
const abort = new AbortController()
const rt = liveRuntime(stub.layer, wide())
const rt = liveRuntime(stub.layer)
let off: (() => void) | undefined
let run: Promise<"continue" | "stop"> | undefined
try {
@@ -858,11 +866,13 @@ describe("session.compaction.process", () => {
await Instance.provide({
directory: tmp.path,
fn: async () => {
spyOn(ProviderModule.Provider, "getModel").mockResolvedValue(createModel({ context: 100_000, output: 32_000 }))
const session = await Session.create({})
const msg = await user(session.id, "hello")
const msgs = await Session.messages({ sessionID: session.id })
const abort = new AbortController()
const rt = runtime("continue", plugin(ready), wide())
const rt = runtime("continue", plugin(ready))
let run: Promise<"continue" | "stop"> | undefined
try {
run = rt
@@ -960,9 +970,11 @@ describe("session.compaction.process", () => {
await Instance.provide({
directory: tmp.path,
fn: async () => {
spyOn(ProviderModule.Provider, "getModel").mockResolvedValue(createModel({ context: 100_000, output: 32_000 }))
const session = await Session.create({})
const msg = await user(session.id, "hello")
const rt = liveRuntime(stub.layer, wide())
const rt = liveRuntime(stub.layer)
try {
const msgs = await Session.messages({ sessionID: session.id })
await rt.runPromise(


@@ -207,7 +207,7 @@ it.live("session.processor effect tests capture llm input cleanly", () =>
} satisfies LLM.StreamInput
const value = yield* handle.process(input)
const parts = yield* MessageV2.partsEffect(msg.id)
const parts = yield* Effect.promise(() => MessageV2.parts(msg.id))
const calls = yield* llm.calls
expect(value).toBe("continue")
@@ -254,7 +254,7 @@ it.live("session.processor effect tests stop after token overflow requests compa
tools: {},
})
const parts = yield* MessageV2.partsEffect(msg.id)
const parts = yield* Effect.promise(() => MessageV2.parts(msg.id))
expect(value).toBe("compact")
expect(parts.some((part) => part.type === "text" && part.text === "after")).toBe(true)
@@ -347,7 +347,7 @@ it.live("session.processor effect tests reset reasoning state across retries", (
tools: {},
})
const parts = yield* MessageV2.partsEffect(msg.id)
const parts = yield* Effect.promise(() => MessageV2.parts(msg.id))
const reasoning = parts.filter((part): part is MessageV2.ReasoningPart => part.type === "reasoning")
expect(value).toBe("continue")
@@ -438,7 +438,7 @@ it.live("session.processor effect tests retry recognized structured json errors"
tools: {},
})
const parts = yield* MessageV2.partsEffect(msg.id)
const parts = yield* Effect.promise(() => MessageV2.parts(msg.id))
expect(value).toBe("continue")
expect(yield* llm.calls).toBe(2)
@@ -596,7 +596,7 @@ it.live("session.processor effect tests mark pending tools as aborted on cleanup
if (Exit.isFailure(exit) && Cause.hasInterruptsOnly(exit.cause)) {
yield* handle.abort()
}
const parts = yield* MessageV2.partsEffect(msg.id)
const parts = yield* Effect.promise(() => MessageV2.parts(msg.id))
const call = parts.find((part): part is MessageV2.ToolPart => part.type === "tool")
expect(Exit.isFailure(exit)).toBe(true)
@@ -669,7 +669,7 @@ it.live("session.processor effect tests record aborted errors and idle state", (
yield* handle.abort()
}
yield* Effect.promise(() => seen.promise)
const stored = yield* MessageV2.getEffect({ sessionID: chat.id, messageID: msg.id })
const stored = yield* Effect.promise(() => MessageV2.get({ sessionID: chat.id, messageID: msg.id }))
const state = yield* sts.get(chat.id)
off()
@@ -731,7 +731,7 @@ it.live("session.processor effect tests mark interruptions aborted without manua
yield* Fiber.interrupt(run)
const exit = yield* Fiber.await(run)
const stored = yield* MessageV2.getEffect({ sessionID: chat.id, messageID: msg.id })
const stored = yield* Effect.promise(() => MessageV2.get({ sessionID: chat.id, messageID: msg.id }))
const state = yield* sts.get(chat.id)
expect(Exit.isFailure(exit)).toBe(true)


@@ -0,0 +1,247 @@
import { describe, expect, spyOn, test } from "bun:test"
import { Instance } from "../../src/project/instance"
import { Provider } from "../../src/provider/provider"
import { Session } from "../../src/session"
import { MessageV2 } from "../../src/session/message-v2"
import { SessionPrompt } from "../../src/session/prompt"
import { SessionStatus } from "../../src/session/status"
import { MessageID, PartID, SessionID } from "../../src/session/schema"
import { Log } from "../../src/util/log"
import { tmpdir } from "../fixture/fixture"
Log.init({ print: false })
function deferred() {
let resolve!: () => void
const promise = new Promise<void>((done) => {
resolve = done
})
return { promise, resolve }
}
// Helper: seed a session with a user message + finished assistant message
// so loop() exits immediately without calling any LLM
async function seed(sessionID: SessionID) {
const userMsg: MessageV2.Info = {
id: MessageID.ascending(),
role: "user",
sessionID,
time: { created: Date.now() },
agent: "build",
model: { providerID: "openai" as any, modelID: "gpt-5.2" as any },
}
await Session.updateMessage(userMsg)
await Session.updatePart({
id: PartID.ascending(),
messageID: userMsg.id,
sessionID,
type: "text",
text: "hello",
})
const assistantMsg: MessageV2.Info = {
id: MessageID.ascending(),
role: "assistant",
parentID: userMsg.id,
sessionID,
mode: "build",
agent: "build",
cost: 0,
path: { cwd: "/tmp", root: "/tmp" },
tokens: { input: 0, output: 0, reasoning: 0, cache: { read: 0, write: 0 } },
modelID: "gpt-5.2" as any,
providerID: "openai" as any,
time: { created: Date.now(), completed: Date.now() },
finish: "stop",
}
await Session.updateMessage(assistantMsg)
await Session.updatePart({
id: PartID.ascending(),
messageID: assistantMsg.id,
sessionID,
type: "text",
text: "hi there",
})
return { userMsg, assistantMsg }
}
describe("session.prompt concurrency", () => {
test("loop returns assistant message and sets status to idle", async () => {
await using tmp = await tmpdir({ git: true })
await Instance.provide({
directory: tmp.path,
fn: async () => {
const session = await Session.create({})
await seed(session.id)
const result = await SessionPrompt.loop({ sessionID: session.id })
expect(result.info.role).toBe("assistant")
if (result.info.role === "assistant") expect(result.info.finish).toBe("stop")
const status = await SessionStatus.get(session.id)
expect(status.type).toBe("idle")
},
})
})
test("concurrent loop callers get the same result", async () => {
await using tmp = await tmpdir({ git: true })
await Instance.provide({
directory: tmp.path,
fn: async () => {
const session = await Session.create({})
await seed(session.id)
const [a, b] = await Promise.all([
SessionPrompt.loop({ sessionID: session.id }),
SessionPrompt.loop({ sessionID: session.id }),
])
expect(a.info.id).toBe(b.info.id)
expect(a.info.role).toBe("assistant")
},
})
})
test("assertNotBusy throws when loop is running", async () => {
await using tmp = await tmpdir({ git: true })
await Instance.provide({
directory: tmp.path,
fn: async () => {
const session = await Session.create({})
const userMsg: MessageV2.Info = {
id: MessageID.ascending(),
role: "user",
sessionID: session.id,
time: { created: Date.now() },
agent: "build",
model: { providerID: "openai" as any, modelID: "gpt-5.2" as any },
}
await Session.updateMessage(userMsg)
await Session.updatePart({
id: PartID.ascending(),
messageID: userMsg.id,
sessionID: session.id,
type: "text",
text: "hello",
})
const ready = deferred()
const gate = deferred()
const getModel = spyOn(Provider, "getModel").mockImplementation(async () => {
ready.resolve()
await gate.promise
throw new Error("test stop")
})
try {
const loopPromise = SessionPrompt.loop({ sessionID: session.id }).catch(() => undefined)
await ready.promise
await expect(SessionPrompt.assertNotBusy(session.id)).rejects.toBeInstanceOf(Session.BusyError)
gate.resolve()
await loopPromise
} finally {
gate.resolve()
getModel.mockRestore()
}
// After loop completes, assertNotBusy should succeed
await SessionPrompt.assertNotBusy(session.id)
},
})
})
test("cancel sets status to idle", async () => {
await using tmp = await tmpdir({ git: true })
await Instance.provide({
directory: tmp.path,
fn: async () => {
const session = await Session.create({})
// Seed only a user message — loop must call getModel to proceed
const userMsg: MessageV2.Info = {
id: MessageID.ascending(),
role: "user",
sessionID: session.id,
time: { created: Date.now() },
agent: "build",
model: { providerID: "openai" as any, modelID: "gpt-5.2" as any },
}
await Session.updateMessage(userMsg)
await Session.updatePart({
id: PartID.ascending(),
messageID: userMsg.id,
sessionID: session.id,
type: "text",
text: "hello",
})
// Also seed an assistant message so lastAssistant() fallback can find it
const assistantMsg: MessageV2.Info = {
id: MessageID.ascending(),
role: "assistant",
parentID: userMsg.id,
sessionID: session.id,
mode: "build",
agent: "build",
cost: 0,
path: { cwd: "/tmp", root: "/tmp" },
tokens: { input: 0, output: 0, reasoning: 0, cache: { read: 0, write: 0 } },
modelID: "gpt-5.2" as any,
providerID: "openai" as any,
time: { created: Date.now() },
}
await Session.updateMessage(assistantMsg)
await Session.updatePart({
id: PartID.ascending(),
messageID: assistantMsg.id,
sessionID: session.id,
type: "text",
text: "hi there",
})
const ready = deferred()
const gate = deferred()
const getModel = spyOn(Provider, "getModel").mockImplementation(async () => {
ready.resolve()
await gate.promise
throw new Error("test stop")
})
try {
// Start loop — it will block in getModel (assistant has no finish, so loop continues)
const loopPromise = SessionPrompt.loop({ sessionID: session.id })
await ready.promise
await SessionPrompt.cancel(session.id)
const status = await SessionStatus.get(session.id)
expect(status.type).toBe("idle")
// loop should resolve cleanly, not throw "All fibers interrupted"
const result = await loopPromise
expect(result.info.role).toBe("assistant")
expect(result.info.id).toBe(assistantMsg.id)
} finally {
gate.resolve()
getModel.mockRestore()
}
},
})
}, 10000)
test("cancel on idle session just sets idle", async () => {
await using tmp = await tmpdir({ git: true })
await Instance.provide({
directory: tmp.path,
fn: async () => {
const session = await Session.create({})
await SessionPrompt.cancel(session.id)
const status = await SessionStatus.get(session.id)
expect(status.type).toBe("idle")
},
})
})
})


@@ -12,7 +12,6 @@ import { LSP } from "../../src/lsp"
import { MCP } from "../../src/mcp"
import { Permission } from "../../src/permission"
import { Plugin } from "../../src/plugin"
import { Provider as ProviderSvc } from "../../src/provider/provider"
import type { Provider } from "../../src/provider/provider"
import { ModelID, ProviderID } from "../../src/provider/schema"
import { Session } from "../../src/session"
@@ -152,7 +151,6 @@ function makeHttp() {
Permission.layer,
Plugin.defaultLayer,
Config.defaultLayer,
ProviderSvc.defaultLayer,
filetime,
lsp,
mcp,
@@ -469,7 +467,7 @@ it.live("failed subtask preserves metadata on error tool state", () =>
expect(result.info.role).toBe("assistant")
expect(yield* llm.calls).toBe(2)
const msgs = yield* MessageV2.filterCompactedEffect(chat.id)
const msgs = yield* Effect.promise(() => MessageV2.filterCompacted(MessageV2.stream(chat.id)))
const taskMsg = msgs.find((item) => item.info.role === "assistant" && item.info.agent === "general")
expect(taskMsg?.info.role).toBe("assistant")
if (!taskMsg || taskMsg.info.role !== "assistant") return
@@ -628,7 +626,7 @@ it.live(
const exit = yield* Fiber.await(fiber)
expect(Exit.isSuccess(exit)).toBe(true)
const msgs = yield* MessageV2.filterCompactedEffect(chat.id)
const msgs = yield* Effect.promise(() => MessageV2.filterCompacted(MessageV2.stream(chat.id)))
const taskMsg = msgs.find((item) => item.info.role === "assistant" && item.info.agent === "general")
expect(taskMsg?.info.role).toBe("assistant")
if (!taskMsg || taskMsg.info.role !== "assistant") return


@@ -10,66 +10,9 @@ import { Instance } from "../../src/project/instance"
import { MessageID, PartID } from "../../src/session/schema"
import { tmpdir } from "../fixture/fixture"
const projectRoot = path.join(__dirname, "../..")
Log.init({ print: false })
function user(sessionID: string, agent = "default") {
return Session.updateMessage({
id: MessageID.ascending(),
role: "user" as const,
sessionID: sessionID as any,
agent,
model: { providerID: ProviderID.make("openai"), modelID: ModelID.make("gpt-4") },
time: { created: Date.now() },
})
}
function assistant(sessionID: string, parentID: string, dir: string) {
return Session.updateMessage({
id: MessageID.ascending(),
role: "assistant" as const,
sessionID: sessionID as any,
mode: "default",
agent: "default",
path: { cwd: dir, root: dir },
cost: 0,
tokens: { output: 0, input: 0, reasoning: 0, cache: { read: 0, write: 0 } },
modelID: ModelID.make("gpt-4"),
providerID: ProviderID.make("openai"),
parentID: parentID as any,
time: { created: Date.now() },
finish: "end_turn",
})
}
function text(sessionID: string, messageID: string, content: string) {
return Session.updatePart({
id: PartID.ascending(),
messageID: messageID as any,
sessionID: sessionID as any,
type: "text" as const,
text: content,
})
}
function tool(sessionID: string, messageID: string) {
return Session.updatePart({
id: PartID.ascending(),
messageID: messageID as any,
sessionID: sessionID as any,
type: "tool" as const,
tool: "bash",
callID: "call-1",
state: {
status: "completed" as const,
input: {},
output: "done",
title: "",
metadata: {},
time: { start: 0, end: 1 },
},
})
}
describe("revert + compact workflow", () => {
test("should properly handle compact command after revert", async () => {
await using tmp = await tmpdir({ git: true })
@@ -340,98 +283,4 @@ describe("revert + compact workflow", () => {
},
})
})
test("cleanup with partID removes parts from the revert point onward", async () => {
await using tmp = await tmpdir({ git: true })
await Instance.provide({
directory: tmp.path,
fn: async () => {
const session = await Session.create({})
const sid = session.id
const u1 = await user(sid)
const p1 = await text(sid, u1.id, "first part")
const p2 = await tool(sid, u1.id)
const p3 = await text(sid, u1.id, "third part")
// Set revert state pointing at a specific part
await Session.setRevert({
sessionID: sid,
revert: { messageID: u1.id, partID: p2.id },
summary: { additions: 0, deletions: 0, files: 0 },
})
const info = await Session.get(sid)
await SessionRevert.cleanup(info)
const msgs = await Session.messages({ sessionID: sid })
expect(msgs.length).toBe(1)
// Only the first part should remain (before the revert partID)
expect(msgs[0].parts.length).toBe(1)
expect(msgs[0].parts[0].id).toBe(p1.id)
const cleared = await Session.get(sid)
expect(cleared.revert).toBeUndefined()
},
})
})
test("cleanup removes messages after revert point but keeps earlier ones", async () => {
await using tmp = await tmpdir({ git: true })
await Instance.provide({
directory: tmp.path,
fn: async () => {
const session = await Session.create({})
const sid = session.id
const u1 = await user(sid)
await text(sid, u1.id, "hello")
const a1 = await assistant(sid, u1.id, tmp.path)
await text(sid, a1.id, "hi back")
const u2 = await user(sid)
await text(sid, u2.id, "second question")
const a2 = await assistant(sid, u2.id, tmp.path)
await text(sid, a2.id, "second answer")
// Revert from u2 onward
await Session.setRevert({
sessionID: sid,
revert: { messageID: u2.id },
summary: { additions: 0, deletions: 0, files: 0 },
})
const info = await Session.get(sid)
await SessionRevert.cleanup(info)
const msgs = await Session.messages({ sessionID: sid })
const ids = msgs.map((m) => m.info.id)
expect(ids).toContain(u1.id)
expect(ids).toContain(a1.id)
expect(ids).not.toContain(u2.id)
expect(ids).not.toContain(a2.id)
},
})
})
test("cleanup is a no-op when session has no revert state", async () => {
await using tmp = await tmpdir({ git: true })
await Instance.provide({
directory: tmp.path,
fn: async () => {
const session = await Session.create({})
const sid = session.id
const u1 = await user(sid)
await text(sid, u1.id, "hello")
const info = await Session.get(sid)
expect(info.revert).toBeUndefined()
await SessionRevert.cleanup(info)
const msgs = await Session.messages({ sessionID: sid })
expect(msgs.length).toBe(1)
},
})
})
})


@@ -1,7 +1,7 @@
{
"$schema": "https://json.schemastore.org/package.json",
"name": "@opencode-ai/plugin",
"version": "1.3.13",
"version": "1.3.11",
"type": "module",
"license": "MIT",
"scripts": {
@@ -21,8 +21,8 @@
"zod": "catalog:"
},
"peerDependencies": {
"@opentui/core": ">=0.1.95",
"@opentui/solid": ">=0.1.95"
"@opentui/core": ">=0.1.93",
"@opentui/solid": ">=0.1.93"
},
"peerDependenciesMeta": {
"@opentui/core": {
@@ -33,8 +33,8 @@
}
},
"devDependencies": {
"@opentui/core": "0.1.95",
"@opentui/solid": "0.1.95",
"@opentui/core": "0.1.93",
"@opentui/solid": "0.1.93",
"@tsconfig/node22": "catalog:",
"@types/node": "catalog:",
"typescript": "catalog:",


@@ -1,7 +1,7 @@
{
"$schema": "https://json.schemastore.org/package.json",
"name": "@opencode-ai/sdk",
"version": "1.3.13",
"version": "1.3.11",
"type": "module",
"license": "MIT",
"scripts": {


@@ -7040,6 +7040,44 @@
},
"components": {
"schemas": {
"Event.installation.updated": {
"type": "object",
"properties": {
"type": {
"type": "string",
"const": "installation.updated"
},
"properties": {
"type": "object",
"properties": {
"version": {
"type": "string"
}
},
"required": ["version"]
}
},
"required": ["type", "properties"]
},
"Event.installation.update-available": {
"type": "object",
"properties": {
"type": {
"type": "string",
"const": "installation.update-available"
},
"properties": {
"type": "object",
"properties": {
"version": {
"type": "string"
}
},
"required": ["version"]
}
},
"required": ["type", "properties"]
},
"Project": {
"type": "object",
"properties": {
@@ -7116,44 +7154,6 @@
},
"required": ["type", "properties"]
},
"Event.installation.updated": {
"type": "object",
"properties": {
"type": {
"type": "string",
"const": "installation.updated"
},
"properties": {
"type": "object",
"properties": {
"version": {
"type": "string"
}
},
"required": ["version"]
}
},
"required": ["type", "properties"]
},
"Event.installation.update-available": {
"type": "object",
"properties": {
"type": {
"type": "string",
"const": "installation.update-available"
},
"properties": {
"type": "object",
"properties": {
"version": {
"type": "string"
}
},
"required": ["version"]
}
},
"required": ["type", "properties"]
},
"Event.server.instance.disposed": {
"type": "object",
"properties": {
@@ -9733,15 +9733,15 @@
},
"Event": {
"anyOf": [
{
"$ref": "#/components/schemas/Event.project.updated"
},
{
"$ref": "#/components/schemas/Event.installation.updated"
},
{
"$ref": "#/components/schemas/Event.installation.update-available"
},
{
"$ref": "#/components/schemas/Event.project.updated"
},
{
"$ref": "#/components/schemas/Event.server.instance.disposed"
},


@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/slack",
"version": "1.3.13",
"version": "1.3.11",
"type": "module",
"license": "MIT",
"scripts": {


@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/ui",
"version": "1.3.13",
"version": "1.3.11",
"type": "module",
"license": "MIT",
"exports": {


@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/util",
"version": "1.3.13",
"version": "1.3.11",
"private": true,
"type": "module",
"license": "MIT",


@@ -2,7 +2,7 @@
"name": "@opencode-ai/web",
"type": "module",
"license": "MIT",
"version": "1.3.13",
"version": "1.3.11",
"scripts": {
"dev": "astro dev",
"dev:remote": "VITE_API_URL=https://api.opencode.ai astro dev",


@@ -2,7 +2,7 @@
"name": "opencode",
"displayName": "opencode",
"description": "opencode for VS Code",
"version": "1.3.13",
"version": "1.3.11",
"publisher": "sst-dev",
"repository": {
"type": "git",