Compare commits

...

104 Commits

Author SHA1 Message Date
opencode-agent[bot]
e1a4d8e14f Apply PR #13052: fix(win32): use ffi to get around bun raw input/ctrl+c issues 2026-02-12 04:48:15 +00:00
opencode-agent[bot]
5a3081fab7 Apply PR #12755: feat(session): add handoff functionality and agent updates 2026-02-12 04:48:14 +00:00
opencode-agent[bot]
bdbb226149 Apply PR #12022: feat: update tui model dialog to utilize model family to reduce noise in list 2026-02-12 04:48:14 +00:00
opencode-agent[bot]
8d6be7fea4 Apply PR #11811: feat: make plan mode the default 2026-02-12 04:48:14 +00:00
opencode-agent[bot]
3cbf203cdc Apply PR #10597: sqlite again 2026-02-12 04:48:13 +00:00
LukeParkerDev
b4a78c53c8 improve 2026-02-12 12:57:16 +10:00
Dax
6413e91947 Merge branch 'dev' into feature/session-handoff 2026-02-11 12:45:50 -05:00
Dax Raad
9106d315e4 Merge dev into feature/session-handoff
Resolved conflicts:
- Translation files: kept dev's proper localized versions
- prompt/index.tsx: preserved handoff functionality with dev's placeholderText() pattern
- home.tsx: kept dev's simpler layout
2026-02-11 12:45:20 -05:00
Dax Raad
6097c04a50 Merge remote-tracking branch 'origin' into sqlite2 2026-02-11 12:29:54 -05:00
Dax Raad
667953db85 core: fix json migration to preserve project commands
ensure custom commands configured by users are maintained when migrating from json storage to sqlite
2026-02-11 12:29:35 -05:00
Dax Raad
b2e86aaf24 core: allow users to define custom project commands like 'npm run dev' in project settings 2026-02-11 12:18:48 -05:00
Dax Raad
032931c749 Merge branch 'dev' into sqlite2 2026-02-11 11:45:35 -05:00
LukeParkerDev
a90e8de050 add missing return 2026-02-11 13:24:17 +10:00
LukeParkerDev
56dfbbbc93 hook better instead of 100ms loop lol 2026-02-11 12:57:01 +10:00
LukeParkerDev
943bf316b6 beta conflict workaround 2026-02-11 12:13:49 +10:00
LukeParkerDev
1f0346725a it works but its dumb 2026-02-11 11:37:17 +10:00
Aiden Cline
eabf770053 Merge branch 'dev' into utilize-family-in-dialog 2026-02-10 14:43:15 -06:00
Dax Raad
4666daa581 fix 2026-02-10 13:53:18 -05:00
Dax Raad
caf1316116 Merge branch 'dev' into sqlite2 2026-02-10 13:52:35 -05:00
Dax Raad
1de66812bf Merge branch 'dev' into sqlite2 2026-02-09 17:53:05 -05:00
Dax Raad
173c16581d Merge branch 'dev' into feature/session-handoff 2026-02-09 10:53:24 -05:00
Dax
94d0c9940a Merge branch 'dev' into sqlite2 2026-02-09 10:04:55 -05:00
Dax Raad
76bcd22802 docs: add architecture specs for session page improvements and parallel workstreams 2026-02-08 18:44:58 -05:00
Dax Raad
e06fcfdc43 app: update i18n translations for multiple languages 2026-02-08 18:44:53 -05:00
Dax Raad
246430cb8f tui: position prompt at bottom when resuming from handoff with initial prompt 2026-02-08 18:44:45 -05:00
Dax Raad
f689fc7f75 tui: convert handoff from text command to dedicated mode with slash command 2026-02-08 18:44:35 -05:00
Dax Raad
9b2fd57e6e tui: remove hardcoded handoff from autocomplete 2026-02-08 18:44:23 -05:00
Dax Raad
0d365fa613 tui: add handoff mode to prompt history types 2026-02-08 18:44:21 -05:00
Dax Raad
601e631624 tui: fix route navigation state reconciliation 2026-02-08 18:44:18 -05:00
Dax Raad
2fef02f487 tui: fix disabled commands from appearing in slash autocomplete 2026-02-08 18:44:12 -05:00
Dax Raad
4cc9104942 tui: add handoff text command handling in prompt 2026-02-08 18:44:09 -05:00
Dax Raad
0eaa6b5fc8 tui: add handoff autocomplete suggestion with text command 2026-02-08 18:44:07 -05:00
Dax Raad
e563cff034 core: add handoff API endpoint for session extraction 2026-02-08 18:44:04 -05:00
Dax Raad
3b2550106b sdk: regenerate client types for handoff endpoint 2026-02-08 18:44:01 -05:00
Dax Raad
3ec6bff038 core: add handoff session extraction logic with status tracking 2026-02-08 18:43:55 -05:00
Dax Raad
aab2a6df3b core: add tool-only output mode to LLM for handoff extraction 2026-02-08 18:43:53 -05:00
Dax Raad
84171018f2 core: add handoff prompt template for extracting session context 2026-02-08 18:43:51 -05:00
Dax Raad
280d7e6f91 core: add handoff agent for session context extraction 2026-02-08 18:43:49 -05:00
Dax Raad
5952891b1e core: filter session list to show only sessions relevant to current directory location
When running from a subdirectory, the session list now shows only sessions
that belong to the current directory or its subdirectories, instead of
showing all sessions from the entire project.
2026-02-07 01:18:57 -05:00
Dax Raad
d7c8a3f50d Merge remote-tracking branch 'origin/dev' into sqlite2 2026-02-07 01:11:15 -05:00
Dax Raad
ce353819e8 Merge branch 'dev' into sqlite2 2026-02-05 23:50:20 -05:00
Dax Raad
2dae94e5a3 core: show visual progress bar during database migration so users can see real-time status of projects, sessions, and messages being converted 2026-02-05 23:21:37 -05:00
Dax Raad
c6adc19e41 Merge branch 'dev' into sqlite2 2026-02-05 17:54:46 -05:00
Dax Raad
ce56166510 core: suppress error output when auto-installing dependencies to prevent confusing error messages from appearing in the UI 2026-02-05 12:47:05 -05:00
Dax Raad
5911e4c06a Merge branch 'dev' into sqlite2 2026-02-05 12:39:31 -05:00
Dax Raad
42fb840f22 Merge branch 'dev' into sqlite2 2026-02-05 11:52:58 -05:00
Dax Raad
4dcfdf6572 Merge branch 'dev' into sqlite2 2026-02-04 22:44:49 -05:00
Dax Raad
25f3d6d5a9 Merge branch 'dev' into sqlite2 2026-02-04 22:37:08 -05:00
Dax Raad
e19a9e9614 core: fix migration of legacy messages and parts missing IDs in JSON body
Users upgrading from older versions where message and part IDs were only stored in filenames (not the JSON body) can now have their session data properly migrated to SQLite. This ensures no data loss when transitioning to the new storage format.
2026-02-04 22:35:26 -05:00
Aiden Cline
bb4d978684 feat: update tui model dialog to utilize model family to reduce noise in list 2026-02-03 15:48:40 -06:00
Dax Raad
fcc903489b Merge branch 'dev' into sqlite2 2026-02-03 15:51:09 -05:00
Dax
949e69a9bf Merge branch 'dev' into sqlite2 2026-02-02 23:36:00 -05:00
Dax Raad
afec40e8da feat: make plan mode the default, remove experimental flag
- Remove OPENCODE_EXPERIMENTAL_PLAN_MODE flag from flag.ts
- Update prompt.ts to always use plan mode logic
- Update registry.ts to always include plan tools in CLI
- Remove flag documentation from cli.mdx
2026-02-02 10:40:40 -05:00
Dax
8c30f551e2 Merge branch 'dev' into sqlite2 2026-02-01 23:58:01 -05:00
opencode-agent[bot]
cb721497c1 chore: update nix node_modules hashes 2026-02-02 01:40:44 +00:00
Dax Raad
4ec6293054 fix: type errors in console-core and session
- Fix ExtractTablesWithRelations type compatibility with drizzle-orm beta
- Migrate Session.list() and Session.children() from Storage to SQLite
2026-02-01 20:31:02 -05:00
Dax Raad
b7a323355c fix: ExtractTablesWithRelations type parameter 2026-02-01 20:27:29 -05:00
Dax Raad
d4f053042c sync 2026-02-01 20:26:58 -05:00
Dax Raad
5f552534c7 Merge remote-tracking branch 'origin/sqlite2' into sqlite2 2026-02-01 20:26:24 -05:00
Dax Raad
ad5b790bb3 docs: simplify commit command by removing unnecessary instructions 2026-01-31 20:31:33 -05:00
Dax Raad
ed87341c4f core: fix storage directory existence check during json migration
Use fs.existsSync() instead of Bun.file().exists() since we're checking
for a directory, not a file.
2026-01-31 20:30:36 -05:00
opencode-agent[bot]
794ecab028 chore: update nix node_modules hashes 2026-02-01 01:20:05 +00:00
Dax Raad
eeb235724b Merge dev into sqlite2 2026-01-31 20:08:17 -05:00
Dax Raad
61084e7f6f sync 2026-01-31 16:32:37 -05:00
Dax Raad
200aef2eb3 sync 2026-01-31 16:28:07 -05:00
Dax Raad
f6e375a555 Merge origin/sqlite2 2026-01-31 16:10:11 -05:00
Dax Raad
db908deee5 Merge branch 'dev' into sqlite2 2026-01-31 16:08:47 -05:00
opencode-agent[bot]
7b72cc3a48 chore: update nix node_modules hashes 2026-01-30 16:21:25 +00:00
Dax Raad
b8cbfd48ec format 2026-01-30 11:17:33 -05:00
Dax Raad
498cbb2c26 core: split message part updates into delta events for smoother streaming
Streaming text and reasoning content now uses incremental delta events instead of sending full message parts on each update. This reduces bandwidth and improves real-time response smoothness in the TUI.
2026-01-30 11:16:30 -05:00
Dax Raad
d6fbd255b6 Merge branch 'dev' into sqlite2 2026-01-30 11:01:31 -05:00
Dax Raad
2de1c82bf7 Merge branch 'dev' into sqlite2 2026-01-30 10:03:47 -05:00
Dax Raad
34ebb3d051 Merge branch 'dev' into sqlite2 2026-01-29 16:07:20 -05:00
Dax Raad
9c3e3c1ab5 core: load migrations from timestamp-named directories instead of journal 2026-01-29 16:02:19 -05:00
Github Action
3ea499f04e chore: update nix node_modules hashes 2026-01-29 20:15:38 +00:00
Dax Raad
ab13c1d1c4 Merge branch 'dev' into sqlite2 2026-01-29 15:11:57 -05:00
Dax Raad
53b610c331 sync 2026-01-29 13:38:47 -05:00
Dax Raad
e3519356f2 sync 2026-01-29 13:32:20 -05:00
Dax Raad
2619acc0ff Merge branch 'dev' into sqlite2 2026-01-29 13:23:48 -05:00
Dax Raad
1bc45dc266 ignore: keep Chinese release notes label from blocking updates 2026-01-28 21:52:41 -05:00
Dax Raad
2e8feb1c78 core: significantly speed up data migration by optimizing SQLite settings and batch processing
Reduces migration time from minutes to seconds by enabling WAL mode, increasing batch size to 1000, and pre-scanning files upfront. Users upgrading to the SQLite backend will now see much faster startup times when their existing data is migrated.
2026-01-28 21:51:01 -05:00
Github Action
00e60899cc chore: update nix node_modules hashes 2026-01-28 23:49:18 +00:00
Dax Raad
30a918e9d4 Merge branch 'dev' into sqlite2 2026-01-28 18:45:12 -05:00
Dax Raad
ac16068140 Merge branch 'dev' into sqlite2 2026-01-27 17:45:05 -05:00
Dax Raad
19a41ab297 sync 2026-01-27 17:43:20 -05:00
Dax Raad
cd174d8cba sync 2026-01-27 16:57:51 -05:00
Dax Raad
246e901e42 Merge dev into sqlite2 2026-01-27 16:44:06 -05:00
Dax Raad
0ccef1b31f sync 2026-01-27 16:41:24 -05:00
Dax Raad
7706f5b6a8 core: switch commit command to kimi-k2.5 and improve worktree test reliability 2026-01-27 16:24:21 -05:00
Dax Raad
63e38555c9 sync 2026-01-27 15:33:44 -05:00
Dax Raad
f40685ab13 core: fix Drizzle ORM client initialization and type definitions 2026-01-27 12:38:38 -05:00
Dax Raad
a48a5a3462 core: migrate from custom JSON storage to standard Drizzle migrations to improve database reliability and performance
This replaces the previous manual JSON file system with standard Drizzle migrations, enabling:
- Proper database schema migrations with timestamp-based versioning
- Batched migration for faster migration of large datasets
- Better data integrity with proper table schemas instead of JSON blobs
- Easier database upgrades and rollback capabilities

Migration changes:
- Todo table now uses individual columns with composite PK instead of JSON blob
- Share table removes unused download share data
- Session diff table moved from database table to file storage
- All migrations now use proper Drizzle format with per-folder layout

Users will see a one-time migration on next run that migrates existing JSON data to the new SQLite database.
2026-01-27 12:36:05 -05:00
Dax Raad
5e1639de2b core: improve conversation loading performance by batching database queries
Reduces memory usage and speeds up conversation loading by using pagination
and inArray queries instead of loading all messages at once
2026-01-26 12:33:18 -05:00
Dax Raad
2b05833c32 core: ensure events publish reliably after database operations complete 2026-01-26 12:00:44 -05:00
Dax Raad
acdcf7fa88 core: remove dependency on remeda to simplify dependencies 2026-01-26 11:11:59 -05:00
Dax Raad
bf0754caeb sync 2026-01-26 10:35:53 -05:00
Dax Raad
4d50a32979 Merge dev into sqlite2 2026-01-26 08:53:01 -05:00
Dax Raad
57edb0ddc5 sync 2026-01-26 08:44:19 -05:00
Dax Raad
a614b78c6d tui: upgrade database migration system to drizzle migrator
Replaces custom migration system with drizzle-orm's built-in migrator, bundling migrations at build-time instead of runtime generation. This reduces bundle complexity and provides better integration with drizzle's migration tracking.
2026-01-25 22:27:04 -05:00
Github Action
b9f5a34247 chore: update nix node_modules hashes 2026-01-26 02:32:52 +00:00
Dax Raad
81b47a44e2 Merge branch 'dev' into sqlite2 2026-01-25 21:28:38 -05:00
Dax Raad
0c1c07467e Merge branch 'dev' into sqlite2 2026-01-25 20:18:36 -05:00
Dax Raad
105688bf90 sync 2026-01-25 20:16:56 -05:00
Dax Raad
1e7b4768b1 sync 2026-01-24 11:50:25 -05:00
102 changed files with 8774 additions and 1052 deletions
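
Several commits above describe mechanisms worth sketching. For 2e8feb1c78 (WAL mode plus 1000-row batches), a hedged sketch using bun:sqlite; the real pragmas, schema, and batch code may differ:

import { Database } from "bun:sqlite"

const db = new Database("opencode.db")
db.exec("PRAGMA journal_mode = WAL")   // write-ahead logging: commits no longer wait on a full fsync each time
db.exec("PRAGMA synchronous = NORMAL") // safe under WAL, far fewer disk syncs

const BATCH = 1000
const insert = db.prepare("INSERT OR IGNORE INTO message (id, session_id, data) VALUES (?, ?, ?)") // columns simplified
function importRows(rows: { id: string; sessionID: string; data: string }[]) {
  for (let i = 0; i < rows.length; i += BATCH) {
    const chunk = rows.slice(i, i + BATCH)
    // one transaction per 1000 rows instead of one per row
    db.transaction(() => {
      for (const row of chunk) insert.run(row.id, row.sessionID, row.data)
    })()
  }
}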
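
For 5e1639de2b (batched conversation loading), a sketch of pagination with drizzle-orm's inArray; PartTable stands in for the schema defined in this compare, and the page size and column property name are assumptions:

import { inArray } from "drizzle-orm"
import { PartTable } from "./session.sql" // path and column naming assumed

const PAGE = 100 // assumed page size
async function loadParts(db: any, messageIDs: string[]) {
  const parts: unknown[] = []
  // one query per page of message IDs, instead of one query per message
  for (let i = 0; i < messageIDs.length; i += PAGE) {
    const page = messageIDs.slice(i, i + PAGE)
    parts.push(...(await db.select().from(PartTable).where(inArray(PartTable.message_id, page))))
  }
  return parts
}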
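
For e19a9e9614 (legacy messages and parts missing IDs in the JSON body), a minimal sketch of the fallback, assuming each record was stored as <id>.json with the ID only in the filename:

import path from "node:path"

// if the JSON body lacks an id (older versions), recover it from the filename
function withID<T extends { id?: string }>(file: string, body: T): T & { id: string } {
  return { ...body, id: body.id ?? path.basename(file, ".json") }
}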
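
For 5952891b1e (directory-scoped session list), a sketch of the scoping rule; the helper name is hypothetical:

import path from "node:path"

// keep a session when its directory is the cwd itself or sits beneath it
function inScope(sessionDir: string, cwd: string): boolean {
  const rel = path.relative(cwd, sessionDir)
  return rel === "" || (!rel.startsWith("..") && !path.isAbsolute(rel))
}

inScope("/repo/packages/opencode", "/repo/packages") // true
inScope("/repo/web", "/repo/packages")               // false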
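
For 498cbb2c26 (delta events), the handlers added in this compare read events of roughly this shape; the IDs here are made up:

const event = {
  type: "message.part.delta",
  properties: {
    messageID: "msg_01",   // hypothetical IDs
    partID: "prt_01",
    field: "text",         // which field of the part the delta extends
    delta: " next tokens", // appended to the existing value instead of resending the whole part
  },
}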

View File

@@ -16,15 +16,12 @@ wip:
For anything in the packages/web use the docs: prefix.
For anything in the packages/app use the ignore: prefix.
prefer to explain WHY something was done from an end user perspective instead of
WHAT was done.
do not do generic messages like "improved agent experience" be very specific
about what user facing changes were made
if there are changes do a git pull --rebase
if there are conflicts DO NOT FIX THEM. notify me and I will fix them
## GIT DIFF

View File

@@ -110,3 +110,4 @@ const table = sqliteTable("session", {
- Avoid mocks as much as possible
- Test actual implementation, do not duplicate logic into tests
- Tests cannot run from repo root (guard: `do-not-run-tests-from-root`); run from package dirs like `packages/opencode`.

bun.lock (469 changes)

File diff suppressed because it is too large

View File

@@ -40,6 +40,8 @@
"@tailwindcss/vite": "4.1.11",
"diff": "8.0.2",
"dompurify": "3.3.1",
"drizzle-kit": "1.0.0-beta.12-a5629fb",
"drizzle-orm": "1.0.0-beta.12-a5629fb",
"ai": "5.0.124",
"hono": "4.10.7",
"hono-openapi": "1.1.2",

View File

@@ -231,6 +231,24 @@ export function applyDirectoryEvent(input: {
}
break
}
case "message.part.delta": {
const props = event.properties as { messageID: string; partID: string; field: string; delta: string }
const parts = input.store.part[props.messageID]
if (!parts) break
const result = Binary.search(parts, props.partID, (p) => p.id)
if (!result.found) break
input.setStore(
"part",
props.messageID,
produce((draft) => {
const part = draft[result.index]
const field = props.field as keyof typeof part
const existing = part[field] as string | undefined
;(part[field] as string) = (existing ?? "") + props.delta
}),
)
break
}
case "vcs.branch.updated": {
const props = event.properties as { branch: string }
const next = { branch: props.branch }

View File

@@ -12,7 +12,7 @@
"@opencode-ai/console-resource": "workspace:*",
"@planetscale/database": "1.19.0",
"aws4fetch": "1.0.20",
"drizzle-orm": "0.41.0",
"drizzle-orm": "catalog:",
"postgres": "3.4.7",
"stripe": "18.0.0",
"ulid": "catalog:",
@@ -44,7 +44,7 @@
"@tsconfig/node22": "22.0.2",
"@types/bun": "1.3.0",
"@types/node": "catalog:",
"drizzle-kit": "0.30.5",
"drizzle-kit": "catalog:",
"mysql2": "3.14.4",
"typescript": "catalog:",
"@typescript/native-preview": "catalog:"

View File

@@ -4,7 +4,6 @@ export * from "drizzle-orm"
import { Client } from "@planetscale/database"
import { MySqlTransaction, type MySqlTransactionConfig } from "drizzle-orm/mysql-core"
import type { ExtractTablesWithRelations } from "drizzle-orm"
import type { PlanetScalePreparedQueryHKT, PlanetscaleQueryResultHKT } from "drizzle-orm/planetscale-serverless"
import { Context } from "../context"
import { memo } from "../util/memo"
@@ -14,7 +13,7 @@ export namespace Database {
PlanetscaleQueryResultHKT,
PlanetScalePreparedQueryHKT,
Record<string, never>,
ExtractTablesWithRelations<Record<string, never>>
any
>
const client = memo(() => {
@@ -23,7 +22,7 @@ export namespace Database {
username: Resource.Database.username,
password: Resource.Database.password,
})
const db = drizzle(result, {})
const db = drizzle({ client: result })
return db
})

View File

@@ -1,27 +1,10 @@
# opencode agent guidelines
# opencode database guide
## Build/Test Commands
## Database
- **Install**: `bun install`
- **Run**: `bun run --conditions=browser ./src/index.ts`
- **Typecheck**: `bun run typecheck` (npm run typecheck)
- **Test**: `bun test` (runs all tests)
- **Single test**: `bun test test/tool/tool.test.ts` (specific test file)
## Code Style
- **Runtime**: Bun with TypeScript ESM modules
- **Imports**: Use relative imports for local modules, named imports preferred
- **Types**: Zod schemas for validation, TypeScript interfaces for structure
- **Naming**: camelCase for variables/functions, PascalCase for classes/namespaces
- **Error handling**: Use Result patterns, avoid throwing exceptions in tools
- **File structure**: Namespace-based organization (e.g., `Tool.define()`, `Session.create()`)
## Architecture
- **Tools**: Implement `Tool.Info` interface with `execute()` method
- **Context**: Pass `sessionID` in tool context, use `App.provide()` for DI
- **Validation**: All inputs validated with Zod schemas
- **Logging**: Use `Log.create({ service: "name" })` pattern
- **Storage**: Use `Storage` namespace for persistence
- **API Client**: The TypeScript TUI (built with SolidJS + OpenTUI) communicates with the OpenCode server using `@opencode-ai/sdk`. When adding/modifying server endpoints in `packages/opencode/src/server/server.ts`, run `./script/generate.ts` to regenerate the SDK and related files.
- **Schema**: Drizzle schema lives in `src/**/*.sql.ts`.
- **Naming**: tables and columns use snake_case; join columns are `<entity>_id`; indexes are `<table>_<column>_idx`.
- **Migrations**: generated by Drizzle Kit using `drizzle.config.ts` (schema: `./src/**/*.sql.ts`, output: `./migration`).
- **Command**: `bun run db generate --name <slug>`.
- **Output**: creates `migration/<timestamp>_<slug>/migration.sql` and `snapshot.json`.
- **Tests**: migration tests should read the per-folder layout (no `_journal.json`).
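
For illustration, the resulting per-folder layout (slug names hypothetical):

migration/
  20260127120000_init/
    migration.sql
    snapshot.json
  20260211160000_project_commands/
    migration.sql
    snapshot.json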

View File

@@ -0,0 +1,10 @@
import { defineConfig } from "drizzle-kit"
export default defineConfig({
dialect: "sqlite",
schema: "./src/**/*.sql.ts",
out: "./migration",
dbCredentials: {
url: "/home/thdxr/.local/share/opencode/opencode.db",
},
})

View File

@@ -0,0 +1,90 @@
CREATE TABLE `project` (
`id` text PRIMARY KEY,
`worktree` text NOT NULL,
`vcs` text,
`name` text,
`icon_url` text,
`icon_color` text,
`time_created` integer NOT NULL,
`time_updated` integer NOT NULL,
`time_initialized` integer,
`sandboxes` text NOT NULL
);
--> statement-breakpoint
CREATE TABLE `message` (
`id` text PRIMARY KEY,
`session_id` text NOT NULL,
`time_created` integer NOT NULL,
`time_updated` integer NOT NULL,
`data` text NOT NULL,
CONSTRAINT `fk_message_session_id_session_id_fk` FOREIGN KEY (`session_id`) REFERENCES `session`(`id`) ON DELETE CASCADE
);
--> statement-breakpoint
CREATE TABLE `part` (
`id` text PRIMARY KEY,
`message_id` text NOT NULL,
`session_id` text NOT NULL,
`time_created` integer NOT NULL,
`time_updated` integer NOT NULL,
`data` text NOT NULL,
CONSTRAINT `fk_part_message_id_message_id_fk` FOREIGN KEY (`message_id`) REFERENCES `message`(`id`) ON DELETE CASCADE
);
--> statement-breakpoint
CREATE TABLE `permission` (
`project_id` text PRIMARY KEY,
`time_created` integer NOT NULL,
`time_updated` integer NOT NULL,
`data` text NOT NULL,
CONSTRAINT `fk_permission_project_id_project_id_fk` FOREIGN KEY (`project_id`) REFERENCES `project`(`id`) ON DELETE CASCADE
);
--> statement-breakpoint
CREATE TABLE `session` (
`id` text PRIMARY KEY,
`project_id` text NOT NULL,
`parent_id` text,
`slug` text NOT NULL,
`directory` text NOT NULL,
`title` text NOT NULL,
`version` text NOT NULL,
`share_url` text,
`summary_additions` integer,
`summary_deletions` integer,
`summary_files` integer,
`summary_diffs` text,
`revert` text,
`permission` text,
`time_created` integer NOT NULL,
`time_updated` integer NOT NULL,
`time_compacting` integer,
`time_archived` integer,
CONSTRAINT `fk_session_project_id_project_id_fk` FOREIGN KEY (`project_id`) REFERENCES `project`(`id`) ON DELETE CASCADE
);
--> statement-breakpoint
CREATE TABLE `todo` (
`session_id` text NOT NULL,
`content` text NOT NULL,
`status` text NOT NULL,
`priority` text NOT NULL,
`position` integer NOT NULL,
`time_created` integer NOT NULL,
`time_updated` integer NOT NULL,
CONSTRAINT `todo_pk` PRIMARY KEY(`session_id`, `position`),
CONSTRAINT `fk_todo_session_id_session_id_fk` FOREIGN KEY (`session_id`) REFERENCES `session`(`id`) ON DELETE CASCADE
);
--> statement-breakpoint
CREATE TABLE `session_share` (
`session_id` text PRIMARY KEY,
`id` text NOT NULL,
`secret` text NOT NULL,
`url` text NOT NULL,
`time_created` integer NOT NULL,
`time_updated` integer NOT NULL,
CONSTRAINT `fk_session_share_session_id_session_id_fk` FOREIGN KEY (`session_id`) REFERENCES `session`(`id`) ON DELETE CASCADE
);
--> statement-breakpoint
CREATE INDEX `message_session_idx` ON `message` (`session_id`);--> statement-breakpoint
CREATE INDEX `part_message_idx` ON `part` (`message_id`);--> statement-breakpoint
CREATE INDEX `part_session_idx` ON `part` (`session_id`);--> statement-breakpoint
CREATE INDEX `session_project_idx` ON `session` (`project_id`);--> statement-breakpoint
CREATE INDEX `session_parent_idx` ON `session` (`parent_id`);--> statement-breakpoint
CREATE INDEX `todo_session_idx` ON `todo` (`session_id`);

View File

@@ -0,0 +1,796 @@
{
"version": "7",
"dialect": "sqlite",
"id": "068758ed-a97a-46f6-8a59-6c639ae7c20c",
"prevIds": ["00000000-0000-0000-0000-000000000000"],
"ddl": [
{
"name": "project",
"entityType": "tables"
},
{
"name": "message",
"entityType": "tables"
},
{
"name": "part",
"entityType": "tables"
},
{
"name": "permission",
"entityType": "tables"
},
{
"name": "session",
"entityType": "tables"
},
{
"name": "todo",
"entityType": "tables"
},
{
"name": "session_share",
"entityType": "tables"
},
{
"type": "text",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "id",
"entityType": "columns",
"table": "project"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "worktree",
"entityType": "columns",
"table": "project"
},
{
"type": "text",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "vcs",
"entityType": "columns",
"table": "project"
},
{
"type": "text",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "name",
"entityType": "columns",
"table": "project"
},
{
"type": "text",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "icon_url",
"entityType": "columns",
"table": "project"
},
{
"type": "text",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "icon_color",
"entityType": "columns",
"table": "project"
},
{
"type": "integer",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "time_created",
"entityType": "columns",
"table": "project"
},
{
"type": "integer",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "time_updated",
"entityType": "columns",
"table": "project"
},
{
"type": "integer",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "time_initialized",
"entityType": "columns",
"table": "project"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "sandboxes",
"entityType": "columns",
"table": "project"
},
{
"type": "text",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "id",
"entityType": "columns",
"table": "message"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "session_id",
"entityType": "columns",
"table": "message"
},
{
"type": "integer",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "time_created",
"entityType": "columns",
"table": "message"
},
{
"type": "integer",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "time_updated",
"entityType": "columns",
"table": "message"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "data",
"entityType": "columns",
"table": "message"
},
{
"type": "text",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "id",
"entityType": "columns",
"table": "part"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "message_id",
"entityType": "columns",
"table": "part"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "session_id",
"entityType": "columns",
"table": "part"
},
{
"type": "integer",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "time_created",
"entityType": "columns",
"table": "part"
},
{
"type": "integer",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "time_updated",
"entityType": "columns",
"table": "part"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "data",
"entityType": "columns",
"table": "part"
},
{
"type": "text",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "project_id",
"entityType": "columns",
"table": "permission"
},
{
"type": "integer",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "time_created",
"entityType": "columns",
"table": "permission"
},
{
"type": "integer",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "time_updated",
"entityType": "columns",
"table": "permission"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "data",
"entityType": "columns",
"table": "permission"
},
{
"type": "text",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "id",
"entityType": "columns",
"table": "session"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "project_id",
"entityType": "columns",
"table": "session"
},
{
"type": "text",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "parent_id",
"entityType": "columns",
"table": "session"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "slug",
"entityType": "columns",
"table": "session"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "directory",
"entityType": "columns",
"table": "session"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "title",
"entityType": "columns",
"table": "session"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "version",
"entityType": "columns",
"table": "session"
},
{
"type": "text",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "share_url",
"entityType": "columns",
"table": "session"
},
{
"type": "integer",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "summary_additions",
"entityType": "columns",
"table": "session"
},
{
"type": "integer",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "summary_deletions",
"entityType": "columns",
"table": "session"
},
{
"type": "integer",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "summary_files",
"entityType": "columns",
"table": "session"
},
{
"type": "text",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "summary_diffs",
"entityType": "columns",
"table": "session"
},
{
"type": "text",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "revert",
"entityType": "columns",
"table": "session"
},
{
"type": "text",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "permission",
"entityType": "columns",
"table": "session"
},
{
"type": "integer",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "time_created",
"entityType": "columns",
"table": "session"
},
{
"type": "integer",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "time_updated",
"entityType": "columns",
"table": "session"
},
{
"type": "integer",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "time_compacting",
"entityType": "columns",
"table": "session"
},
{
"type": "integer",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "time_archived",
"entityType": "columns",
"table": "session"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "session_id",
"entityType": "columns",
"table": "todo"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "content",
"entityType": "columns",
"table": "todo"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "status",
"entityType": "columns",
"table": "todo"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "priority",
"entityType": "columns",
"table": "todo"
},
{
"type": "integer",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "position",
"entityType": "columns",
"table": "todo"
},
{
"type": "integer",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "time_created",
"entityType": "columns",
"table": "todo"
},
{
"type": "integer",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "time_updated",
"entityType": "columns",
"table": "todo"
},
{
"type": "text",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "session_id",
"entityType": "columns",
"table": "session_share"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "id",
"entityType": "columns",
"table": "session_share"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "secret",
"entityType": "columns",
"table": "session_share"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "url",
"entityType": "columns",
"table": "session_share"
},
{
"type": "integer",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "time_created",
"entityType": "columns",
"table": "session_share"
},
{
"type": "integer",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "time_updated",
"entityType": "columns",
"table": "session_share"
},
{
"columns": ["session_id"],
"tableTo": "session",
"columnsTo": ["id"],
"onUpdate": "NO ACTION",
"onDelete": "CASCADE",
"nameExplicit": false,
"name": "fk_message_session_id_session_id_fk",
"entityType": "fks",
"table": "message"
},
{
"columns": ["message_id"],
"tableTo": "message",
"columnsTo": ["id"],
"onUpdate": "NO ACTION",
"onDelete": "CASCADE",
"nameExplicit": false,
"name": "fk_part_message_id_message_id_fk",
"entityType": "fks",
"table": "part"
},
{
"columns": ["project_id"],
"tableTo": "project",
"columnsTo": ["id"],
"onUpdate": "NO ACTION",
"onDelete": "CASCADE",
"nameExplicit": false,
"name": "fk_permission_project_id_project_id_fk",
"entityType": "fks",
"table": "permission"
},
{
"columns": ["project_id"],
"tableTo": "project",
"columnsTo": ["id"],
"onUpdate": "NO ACTION",
"onDelete": "CASCADE",
"nameExplicit": false,
"name": "fk_session_project_id_project_id_fk",
"entityType": "fks",
"table": "session"
},
{
"columns": ["session_id"],
"tableTo": "session",
"columnsTo": ["id"],
"onUpdate": "NO ACTION",
"onDelete": "CASCADE",
"nameExplicit": false,
"name": "fk_todo_session_id_session_id_fk",
"entityType": "fks",
"table": "todo"
},
{
"columns": ["session_id"],
"tableTo": "session",
"columnsTo": ["id"],
"onUpdate": "NO ACTION",
"onDelete": "CASCADE",
"nameExplicit": false,
"name": "fk_session_share_session_id_session_id_fk",
"entityType": "fks",
"table": "session_share"
},
{
"columns": ["session_id", "position"],
"nameExplicit": false,
"name": "todo_pk",
"entityType": "pks",
"table": "todo"
},
{
"columns": ["id"],
"nameExplicit": false,
"name": "project_pk",
"table": "project",
"entityType": "pks"
},
{
"columns": ["id"],
"nameExplicit": false,
"name": "message_pk",
"table": "message",
"entityType": "pks"
},
{
"columns": ["id"],
"nameExplicit": false,
"name": "part_pk",
"table": "part",
"entityType": "pks"
},
{
"columns": ["project_id"],
"nameExplicit": false,
"name": "permission_pk",
"table": "permission",
"entityType": "pks"
},
{
"columns": ["id"],
"nameExplicit": false,
"name": "session_pk",
"table": "session",
"entityType": "pks"
},
{
"columns": ["session_id"],
"nameExplicit": false,
"name": "session_share_pk",
"table": "session_share",
"entityType": "pks"
},
{
"columns": [
{
"value": "session_id",
"isExpression": false
}
],
"isUnique": false,
"where": null,
"origin": "manual",
"name": "message_session_idx",
"entityType": "indexes",
"table": "message"
},
{
"columns": [
{
"value": "message_id",
"isExpression": false
}
],
"isUnique": false,
"where": null,
"origin": "manual",
"name": "part_message_idx",
"entityType": "indexes",
"table": "part"
},
{
"columns": [
{
"value": "session_id",
"isExpression": false
}
],
"isUnique": false,
"where": null,
"origin": "manual",
"name": "part_session_idx",
"entityType": "indexes",
"table": "part"
},
{
"columns": [
{
"value": "project_id",
"isExpression": false
}
],
"isUnique": false,
"where": null,
"origin": "manual",
"name": "session_project_idx",
"entityType": "indexes",
"table": "session"
},
{
"columns": [
{
"value": "parent_id",
"isExpression": false
}
],
"isUnique": false,
"where": null,
"origin": "manual",
"name": "session_parent_idx",
"entityType": "indexes",
"table": "session"
},
{
"columns": [
{
"value": "session_id",
"isExpression": false
}
],
"isUnique": false,
"where": null,
"origin": "manual",
"name": "todo_session_idx",
"entityType": "indexes",
"table": "todo"
}
],
"renames": []
}

View File

@@ -0,0 +1 @@
ALTER TABLE `project` ADD `commands` text;

View File

@@ -0,0 +1,847 @@
{
"version": "7",
"dialect": "sqlite",
"id": "8bc2d11d-97fa-4ba8-8bfa-6c5956c49aeb",
"prevIds": [
"068758ed-a97a-46f6-8a59-6c639ae7c20c"
],
"ddl": [
{
"name": "project",
"entityType": "tables"
},
{
"name": "message",
"entityType": "tables"
},
{
"name": "part",
"entityType": "tables"
},
{
"name": "permission",
"entityType": "tables"
},
{
"name": "session",
"entityType": "tables"
},
{
"name": "todo",
"entityType": "tables"
},
{
"name": "session_share",
"entityType": "tables"
},
{
"type": "text",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "id",
"entityType": "columns",
"table": "project"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "worktree",
"entityType": "columns",
"table": "project"
},
{
"type": "text",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "vcs",
"entityType": "columns",
"table": "project"
},
{
"type": "text",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "name",
"entityType": "columns",
"table": "project"
},
{
"type": "text",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "icon_url",
"entityType": "columns",
"table": "project"
},
{
"type": "text",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "icon_color",
"entityType": "columns",
"table": "project"
},
{
"type": "integer",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "time_created",
"entityType": "columns",
"table": "project"
},
{
"type": "integer",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "time_updated",
"entityType": "columns",
"table": "project"
},
{
"type": "integer",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "time_initialized",
"entityType": "columns",
"table": "project"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "sandboxes",
"entityType": "columns",
"table": "project"
},
{
"type": "text",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "commands",
"entityType": "columns",
"table": "project"
},
{
"type": "text",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "id",
"entityType": "columns",
"table": "message"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "session_id",
"entityType": "columns",
"table": "message"
},
{
"type": "integer",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "time_created",
"entityType": "columns",
"table": "message"
},
{
"type": "integer",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "time_updated",
"entityType": "columns",
"table": "message"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "data",
"entityType": "columns",
"table": "message"
},
{
"type": "text",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "id",
"entityType": "columns",
"table": "part"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "message_id",
"entityType": "columns",
"table": "part"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "session_id",
"entityType": "columns",
"table": "part"
},
{
"type": "integer",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "time_created",
"entityType": "columns",
"table": "part"
},
{
"type": "integer",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "time_updated",
"entityType": "columns",
"table": "part"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "data",
"entityType": "columns",
"table": "part"
},
{
"type": "text",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "project_id",
"entityType": "columns",
"table": "permission"
},
{
"type": "integer",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "time_created",
"entityType": "columns",
"table": "permission"
},
{
"type": "integer",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "time_updated",
"entityType": "columns",
"table": "permission"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "data",
"entityType": "columns",
"table": "permission"
},
{
"type": "text",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "id",
"entityType": "columns",
"table": "session"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "project_id",
"entityType": "columns",
"table": "session"
},
{
"type": "text",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "parent_id",
"entityType": "columns",
"table": "session"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "slug",
"entityType": "columns",
"table": "session"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "directory",
"entityType": "columns",
"table": "session"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "title",
"entityType": "columns",
"table": "session"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "version",
"entityType": "columns",
"table": "session"
},
{
"type": "text",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "share_url",
"entityType": "columns",
"table": "session"
},
{
"type": "integer",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "summary_additions",
"entityType": "columns",
"table": "session"
},
{
"type": "integer",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "summary_deletions",
"entityType": "columns",
"table": "session"
},
{
"type": "integer",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "summary_files",
"entityType": "columns",
"table": "session"
},
{
"type": "text",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "summary_diffs",
"entityType": "columns",
"table": "session"
},
{
"type": "text",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "revert",
"entityType": "columns",
"table": "session"
},
{
"type": "text",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "permission",
"entityType": "columns",
"table": "session"
},
{
"type": "integer",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "time_created",
"entityType": "columns",
"table": "session"
},
{
"type": "integer",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "time_updated",
"entityType": "columns",
"table": "session"
},
{
"type": "integer",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "time_compacting",
"entityType": "columns",
"table": "session"
},
{
"type": "integer",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "time_archived",
"entityType": "columns",
"table": "session"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "session_id",
"entityType": "columns",
"table": "todo"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "content",
"entityType": "columns",
"table": "todo"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "status",
"entityType": "columns",
"table": "todo"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "priority",
"entityType": "columns",
"table": "todo"
},
{
"type": "integer",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "position",
"entityType": "columns",
"table": "todo"
},
{
"type": "integer",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "time_created",
"entityType": "columns",
"table": "todo"
},
{
"type": "integer",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "time_updated",
"entityType": "columns",
"table": "todo"
},
{
"type": "text",
"notNull": false,
"autoincrement": false,
"default": null,
"generated": null,
"name": "session_id",
"entityType": "columns",
"table": "session_share"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "id",
"entityType": "columns",
"table": "session_share"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "secret",
"entityType": "columns",
"table": "session_share"
},
{
"type": "text",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "url",
"entityType": "columns",
"table": "session_share"
},
{
"type": "integer",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "time_created",
"entityType": "columns",
"table": "session_share"
},
{
"type": "integer",
"notNull": true,
"autoincrement": false,
"default": null,
"generated": null,
"name": "time_updated",
"entityType": "columns",
"table": "session_share"
},
{
"columns": [
"session_id"
],
"tableTo": "session",
"columnsTo": [
"id"
],
"onUpdate": "NO ACTION",
"onDelete": "CASCADE",
"nameExplicit": false,
"name": "fk_message_session_id_session_id_fk",
"entityType": "fks",
"table": "message"
},
{
"columns": [
"message_id"
],
"tableTo": "message",
"columnsTo": [
"id"
],
"onUpdate": "NO ACTION",
"onDelete": "CASCADE",
"nameExplicit": false,
"name": "fk_part_message_id_message_id_fk",
"entityType": "fks",
"table": "part"
},
{
"columns": [
"project_id"
],
"tableTo": "project",
"columnsTo": [
"id"
],
"onUpdate": "NO ACTION",
"onDelete": "CASCADE",
"nameExplicit": false,
"name": "fk_permission_project_id_project_id_fk",
"entityType": "fks",
"table": "permission"
},
{
"columns": [
"project_id"
],
"tableTo": "project",
"columnsTo": [
"id"
],
"onUpdate": "NO ACTION",
"onDelete": "CASCADE",
"nameExplicit": false,
"name": "fk_session_project_id_project_id_fk",
"entityType": "fks",
"table": "session"
},
{
"columns": [
"session_id"
],
"tableTo": "session",
"columnsTo": [
"id"
],
"onUpdate": "NO ACTION",
"onDelete": "CASCADE",
"nameExplicit": false,
"name": "fk_todo_session_id_session_id_fk",
"entityType": "fks",
"table": "todo"
},
{
"columns": [
"session_id"
],
"tableTo": "session",
"columnsTo": [
"id"
],
"onUpdate": "NO ACTION",
"onDelete": "CASCADE",
"nameExplicit": false,
"name": "fk_session_share_session_id_session_id_fk",
"entityType": "fks",
"table": "session_share"
},
{
"columns": [
"session_id",
"position"
],
"nameExplicit": false,
"name": "todo_pk",
"entityType": "pks",
"table": "todo"
},
{
"columns": [
"id"
],
"nameExplicit": false,
"name": "project_pk",
"table": "project",
"entityType": "pks"
},
{
"columns": [
"id"
],
"nameExplicit": false,
"name": "message_pk",
"table": "message",
"entityType": "pks"
},
{
"columns": [
"id"
],
"nameExplicit": false,
"name": "part_pk",
"table": "part",
"entityType": "pks"
},
{
"columns": [
"project_id"
],
"nameExplicit": false,
"name": "permission_pk",
"table": "permission",
"entityType": "pks"
},
{
"columns": [
"id"
],
"nameExplicit": false,
"name": "session_pk",
"table": "session",
"entityType": "pks"
},
{
"columns": [
"session_id"
],
"nameExplicit": false,
"name": "session_share_pk",
"table": "session_share",
"entityType": "pks"
},
{
"columns": [
{
"value": "session_id",
"isExpression": false
}
],
"isUnique": false,
"where": null,
"origin": "manual",
"name": "message_session_idx",
"entityType": "indexes",
"table": "message"
},
{
"columns": [
{
"value": "message_id",
"isExpression": false
}
],
"isUnique": false,
"where": null,
"origin": "manual",
"name": "part_message_idx",
"entityType": "indexes",
"table": "part"
},
{
"columns": [
{
"value": "session_id",
"isExpression": false
}
],
"isUnique": false,
"where": null,
"origin": "manual",
"name": "part_session_idx",
"entityType": "indexes",
"table": "part"
},
{
"columns": [
{
"value": "project_id",
"isExpression": false
}
],
"isUnique": false,
"where": null,
"origin": "manual",
"name": "session_project_idx",
"entityType": "indexes",
"table": "session"
},
{
"columns": [
{
"value": "parent_id",
"isExpression": false
}
],
"isUnique": false,
"where": null,
"origin": "manual",
"name": "session_parent_idx",
"entityType": "indexes",
"table": "session"
},
{
"columns": [
{
"value": "session_id",
"isExpression": false
}
],
"isUnique": false,
"where": null,
"origin": "manual",
"name": "todo_session_idx",
"entityType": "indexes",
"table": "todo"
}
],
"renames": []
}

View File

@@ -15,7 +15,8 @@
"lint": "echo 'Running lint checks...' && bun test --coverage",
"format": "echo 'Formatting code...' && bun run --prettier --write src/**/*.ts",
"docs": "echo 'Generating documentation...' && find src -name '*.ts' -exec echo 'Processing: {}' \\;",
"deploy": "echo 'Deploying application...' && bun run build && echo 'Deployment completed successfully'"
"deploy": "echo 'Deploying application...' && bun run build && echo 'Deployment completed successfully'",
"db": "bun drizzle-kit"
},
"bin": {
"opencode": "./bin/opencode"
@@ -42,6 +43,8 @@
"@types/turndown": "5.0.5",
"@types/yargs": "17.0.33",
"@typescript/native-preview": "catalog:",
"drizzle-kit": "1.0.0-beta.12-a5629fb",
"drizzle-orm": "1.0.0-beta.12-a5629fb",
"typescript": "catalog:",
"vscode-languageserver-types": "3.17.5",
"why-is-node-running": "3.2.2",
@@ -100,6 +103,7 @@
"clipboardy": "4.0.0",
"decimal.js": "10.5.0",
"diff": "catalog:",
"drizzle-orm": "1.0.0-beta.12-a5629fb",
"fuzzysort": "3.1.0",
"gray-matter": "4.0.3",
"hono": "catalog:",
@@ -122,5 +126,8 @@
"yargs": "18.0.0",
"zod": "catalog:",
"zod-to-json-schema": "3.24.5"
},
"overrides": {
"drizzle-orm": "1.0.0-beta.12-a5629fb"
}
}

View File

@@ -25,6 +25,32 @@ await Bun.write(
)
console.log("Generated models-snapshot.ts")
// Load migrations from migration directories
const migrationDirs = (await fs.promises.readdir(path.join(dir, "migration"), { withFileTypes: true }))
.filter((entry) => entry.isDirectory() && /^\d{4}\d{2}\d{2}\d{2}\d{2}\d{2}/.test(entry.name))
.map((entry) => entry.name)
.sort()
const migrations = await Promise.all(
migrationDirs.map(async (name) => {
const file = path.join(dir, "migration", name, "migration.sql")
const sql = await Bun.file(file).text()
const match = /^(\d{4})(\d{2})(\d{2})(\d{2})(\d{2})(\d{2})/.exec(name)
const timestamp = match
? Date.UTC(
Number(match[1]),
Number(match[2]) - 1,
Number(match[3]),
Number(match[4]),
Number(match[5]),
Number(match[6]),
)
: 0
return { sql, timestamp }
}),
)
console.log(`Loaded ${migrations.length} migrations`)
const singleFlag = process.argv.includes("--single")
const baselineFlag = process.argv.includes("--baseline")
const skipInstall = process.argv.includes("--skip-install")
@@ -156,6 +182,7 @@ for (const item of targets) {
entrypoints: ["./src/index.ts", parserWorker, workerPath],
define: {
OPENCODE_VERSION: `'${Script.version}'`,
OPENCODE_MIGRATIONS: JSON.stringify(migrations),
OTUI_TREE_SITTER_WORKER_PATH: bunfsRoot + workerRelativePath,
OPENCODE_WORKER_PATH: workerPath,
OPENCODE_CHANNEL: `'${Script.channel}'`,
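
Downstream, the bundle reads the migrations back from that compile-time define; a hedged sketch of a consumer (the apply/tracking logic is not shown in this compare):

declare const OPENCODE_MIGRATIONS: { sql: string; timestamp: number }[] // shape built above
declare const db: import("bun:sqlite").Database

for (const migration of [...OPENCODE_MIGRATIONS].sort((a, b) => a.timestamp - b.timestamp)) {
  db.exec(migration.sql) // real code would also record which migrations have already run
}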

View File

@@ -0,0 +1,16 @@
#!/usr/bin/env bun
import { $ } from "bun"
// drizzle-kit check compares schema to migrations, exits non-zero if drift
const result = await $`bun drizzle-kit check`.quiet().nothrow()
if (result.exitCode !== 0) {
console.error("Schema has changes not captured in migrations!")
console.error("Run: bun drizzle-kit generate")
console.error("")
console.error(result.stderr.toString())
process.exit(1)
}
console.log("Migrations are up to date")

View File

@@ -435,46 +435,68 @@ export namespace ACP {
return
}
}
return
}
if (part.type === "text") {
const delta = props.delta
if (delta && part.ignored !== true) {
await this.connection
.sessionUpdate({
sessionId,
update: {
sessionUpdate: "agent_message_chunk",
content: {
type: "text",
text: delta,
},
case "message.part.delta": {
const props = event.properties
const session = this.sessionManager.tryGet(props.sessionID)
if (!session) return
const sessionId = session.id
const message = await this.sdk.session
.message(
{
sessionID: props.sessionID,
messageID: props.messageID,
directory: session.cwd,
},
{ throwOnError: true },
)
.then((x) => x.data)
.catch((error) => {
log.error("unexpected error when fetching message", { error })
return undefined
})
if (!message || message.info.role !== "assistant") return
const part = message.parts.find((p) => p.id === props.partID)
if (!part) return
if (part.type === "text" && props.field === "text" && part.ignored !== true) {
await this.connection
.sessionUpdate({
sessionId,
update: {
sessionUpdate: "agent_message_chunk",
content: {
type: "text",
text: props.delta,
},
})
.catch((error) => {
log.error("failed to send text to ACP", { error })
})
}
},
})
.catch((error) => {
log.error("failed to send text delta to ACP", { error })
})
return
}
if (part.type === "reasoning") {
const delta = props.delta
if (delta) {
await this.connection
.sessionUpdate({
sessionId,
update: {
sessionUpdate: "agent_thought_chunk",
content: {
type: "text",
text: delta,
},
if (part.type === "reasoning" && props.field === "text") {
await this.connection
.sessionUpdate({
sessionId,
update: {
sessionUpdate: "agent_thought_chunk",
content: {
type: "text",
text: props.delta,
},
})
.catch((error) => {
log.error("failed to send reasoning to ACP", { error })
})
}
},
})
.catch((error) => {
log.error("failed to send reasoning delta to ACP", { error })
})
}
return
}

View File

@@ -184,6 +184,18 @@ export namespace Agent {
),
prompt: PROMPT_TITLE,
},
handoff: {
name: "handoff",
mode: "primary",
options: {},
native: true,
hidden: true,
temperature: 0.5,
permission: PermissionNext.fromConfig({
"*": "allow",
}),
prompt: "none",
},
summary: {
name: "summary",
mode: "primary",

View File

@@ -3,7 +3,8 @@ import type { Session as SDKSession, Message, Part } from "@opencode-ai/sdk/v2"
import { Session } from "../../session"
import { cmd } from "./cmd"
import { bootstrap } from "../bootstrap"
import { Storage } from "../../storage/storage"
import { Database } from "../../storage/db"
import { SessionTable, MessageTable, PartTable } from "../../session/session.sql"
import { Instance } from "../../project/instance"
import { ShareNext } from "../../share/share-next"
import { EOL } from "os"
@@ -130,13 +131,35 @@ export const ImportCommand = cmd({
return
}
await Storage.write(["session", Instance.project.id, exportData.info.id], exportData.info)
Database.use((db) => db.insert(SessionTable).values(Session.toRow(exportData.info)).onConflictDoNothing().run())
for (const msg of exportData.messages) {
await Storage.write(["message", exportData.info.id, msg.info.id], msg.info)
Database.use((db) =>
db
.insert(MessageTable)
.values({
id: msg.info.id,
session_id: exportData.info.id,
time_created: msg.info.time?.created ?? Date.now(),
data: msg.info,
})
.onConflictDoNothing()
.run(),
)
for (const part of msg.parts) {
await Storage.write(["part", msg.info.id, part.id], part)
Database.use((db) =>
db
.insert(PartTable)
.values({
id: part.id,
message_id: msg.info.id,
session_id: exportData.info.id,
data: part,
})
.onConflictDoNothing()
.run(),
)
}
}

View File

@@ -2,7 +2,8 @@ import type { Argv } from "yargs"
import { cmd } from "./cmd"
import { Session } from "../../session"
import { bootstrap } from "../bootstrap"
import { Storage } from "../../storage/storage"
import { Database } from "../../storage/db"
import { SessionTable } from "../../session/session.sql"
import { Project } from "../../project/project"
import { Instance } from "../../project/instance"
@@ -87,25 +88,8 @@ async function getCurrentProject(): Promise<Project.Info> {
}
async function getAllSessions(): Promise<Session.Info[]> {
const sessions: Session.Info[] = []
const projectKeys = await Storage.list(["project"])
const projects = await Promise.all(projectKeys.map((key) => Storage.read<Project.Info>(key)))
for (const project of projects) {
if (!project) continue
const sessionKeys = await Storage.list(["session", project.id])
const projectSessions = await Promise.all(sessionKeys.map((key) => Storage.read<Session.Info>(key)))
for (const session of projectSessions) {
if (session) {
sessions.push(session)
}
}
}
return sessions
const rows = Database.use((db) => db.select().from(SessionTable).all())
return rows.map((row) => Session.fromRow(row))
}
export async function aggregateSessionStats(days?: number, projectFilter?: string): Promise<SessionStats> {

View File

@@ -3,6 +3,7 @@ import { Clipboard } from "@tui/util/clipboard"
import { TextAttributes } from "@opentui/core"
import { RouteProvider, useRoute } from "@tui/context/route"
import { Switch, Match, createEffect, untrack, ErrorBoundary, createSignal, onMount, batch, Show, on } from "solid-js"
import { win32DisableProcessedInput, win32FlushInputBuffer, win32InstallCtrlCGuard } from "./win32"
import { Installation } from "@/installation"
import { Flag } from "@/flag/flag"
import { DialogProvider, useDialog } from "@tui/ui/dialog"
@@ -110,8 +111,17 @@ export function tui(input: {
}) {
// promise to prevent immediate exit
return new Promise<void>(async (resolve) => {
const unguard = win32InstallCtrlCGuard()
win32DisableProcessedInput()
const mode = await getTerminalBackgroundColor()
// Re-clear after getTerminalBackgroundColor() — setRawMode(false) restores
// the original console mode which re-enables ENABLE_PROCESSED_INPUT.
win32DisableProcessedInput()
const onExit = async () => {
unguard?.()
await input.onExit?.()
resolve()
}
@@ -730,7 +740,8 @@ function ErrorComponent(props: {
const handleExit = async () => {
renderer.setTerminalTitle("")
renderer.destroy()
props.onExit()
win32FlushInputBuffer()
await props.onExit()
}
useKeyboard((evt) => {

View File

@@ -1,5 +1,6 @@
import { cmd } from "../cmd"
import { tui } from "./app"
import { win32DisableProcessedInput, win32InstallCtrlCGuard } from "./win32"
export const AttachCommand = cmd({
command: "attach <url>",
@@ -26,27 +27,34 @@ export const AttachCommand = cmd({
describe: "basic auth password (defaults to OPENCODE_SERVER_PASSWORD)",
}),
handler: async (args) => {
const directory = (() => {
if (!args.dir) return undefined
try {
process.chdir(args.dir)
return process.cwd()
} catch {
// If the directory doesn't exist locally (remote attach), pass it through.
return args.dir
}
})()
const headers = (() => {
const password = args.password ?? process.env.OPENCODE_SERVER_PASSWORD
if (!password) return undefined
const auth = `Basic ${Buffer.from(`opencode:${password}`).toString("base64")}`
return { Authorization: auth }
})()
await tui({
url: args.url,
args: { sessionID: args.session },
directory,
headers,
})
const unguard = win32InstallCtrlCGuard()
try {
win32DisableProcessedInput()
const directory = (() => {
if (!args.dir) return undefined
try {
process.chdir(args.dir)
return process.cwd()
} catch {
// If the directory doesn't exist locally (remote attach), pass it through.
return args.dir
}
})()
const headers = (() => {
const password = args.password ?? process.env.OPENCODE_SERVER_PASSWORD
if (!password) return undefined
const auth = `Basic ${Buffer.from(`opencode:${password}`).toString("base64")}`
return { Authorization: auth }
})()
await tui({
url: args.url,
args: { sessionID: args.session },
directory,
headers,
})
} finally {
unguard?.()
}
},
})

View File

@@ -83,6 +83,7 @@ function init() {
},
slashes() {
return visibleOptions().flatMap((option) => {
if (option.disabled) return []
const slash = option.slash
if (!slash) return []
return {

View File

@@ -7,6 +7,27 @@ import { useDialog } from "@tui/ui/dialog"
import { createDialogProviderOptions, DialogProvider } from "./dialog-provider"
import { useKeybind } from "../context/keybind"
import * as fuzzysort from "fuzzysort"
import type { Provider } from "@opencode-ai/sdk/v2"
function pickLatest(models: [string, Provider["models"][string]][]) {
const picks: Record<string, [string, Provider["models"][string]]> = {}
for (const item of models) {
const model = item[0]
const info = item[1]
const key = info.family ?? model
const prev = picks[key]
if (!prev) {
picks[key] = item
continue
}
if (info.release_date !== prev[1].release_date) {
if (info.release_date > prev[1].release_date) picks[key] = item
continue
}
if (model > prev[0]) picks[key] = item
}
return Object.values(picks)
}
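// Illustration with hypothetical entries: two models share the "claude-sonnet"
// family so only the newer release survives, while a model without a family
// falls back to its own id as the grouping key:
//   pickLatest([
//     ["claude-sonnet-4", { family: "claude-sonnet", release_date: "2025-05-01", ... }],
//     ["claude-sonnet-3-7", { family: "claude-sonnet", release_date: "2025-02-24", ... }],
//     ["gpt-4.1", { release_date: "2025-04-14", ... }],
//   ])
//   // -> [["claude-sonnet-4", ...], ["gpt-4.1", ...]]
// Ties on release_date fall through to a lexicographic comparison of the
// model ids, so the result is deterministic.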
export function useConnected() {
const sync = useSync()
@@ -21,6 +42,7 @@ export function DialogModel(props: { providerID?: string }) {
const dialog = useDialog()
const keybind = useKeybind()
const [query, setQuery] = createSignal("")
const [all, setAll] = createSignal(false)
const connected = useConnected()
const providers = createDialogProviderOptions()
@@ -72,8 +94,8 @@ export function DialogModel(props: { providerID?: string }) {
(provider) => provider.id !== "opencode",
(provider) => provider.name,
),
flatMap((provider) =>
pipe(
flatMap((provider) => {
const items = pipe(
provider.models,
entries(),
filter(([_, info]) => info.status !== "deprecated"),
@@ -104,8 +126,9 @@ export function DialogModel(props: { providerID?: string }) {
(x) => x.footer !== "Free",
(x) => x.title,
),
),
),
)
return items
}),
)
const popularProviders = !connected()
@@ -154,6 +177,13 @@ export function DialogModel(props: { providerID?: string }) {
local.model.toggleFavorite(option.value as { providerID: string; modelID: string })
},
},
{
keybind: keybind.all.model_show_all_toggle?.[0],
title: all() ? "Show latest only" : "Show all models",
onTrigger: () => {
setAll((value) => !value)
},
},
]}
onFilter={setQuery}
flat={true}

View File

@@ -9,7 +9,7 @@ import type { AgentPart, FilePart, TextPart } from "@opencode-ai/sdk/v2"
export type PromptInfo = {
input: string
mode?: "normal" | "shell"
mode?: "normal" | "shell" | "handoff"
parts: (
| Omit<FilePart, "id" | "messageID" | "sessionID">
| Omit<AgentPart, "id" | "messageID" | "sessionID">

View File

@@ -120,7 +120,7 @@ export function Prompt(props: PromptProps) {
const [store, setStore] = createStore<{
prompt: PromptInfo
mode: "normal" | "shell"
mode: "normal" | "shell" | "handoff"
extmarkToPartIndex: Map<number, number>
interrupt: number
placeholder: number
@@ -349,6 +349,20 @@ export function Prompt(props: PromptProps) {
))
},
},
{
title: "Handoff",
value: "prompt.handoff",
disabled: props.sessionID === undefined,
category: "Prompt",
slash: {
name: "handoff",
},
onSelect: () => {
input.clear()
setStore("mode", "handoff")
setStore("prompt", { input: "", parts: [] })
},
},
]
})
@@ -526,17 +540,45 @@ export function Prompt(props: PromptProps) {
async function submit() {
if (props.disabled) return
if (autocomplete?.visible) return
const selectedModel = local.model.current()
if (!selectedModel) {
promptModelWarning()
return
}
if (store.mode === "handoff") {
const result = await sdk.client.session.handoff({
sessionID: props.sessionID!,
goal: store.prompt.input,
model: {
providerID: selectedModel.providerID,
modelID: selectedModel.modelID,
},
})
if (result.data) {
route.navigate({
type: "home",
initialPrompt: {
input: result.data.text,
              parts: (result.data.files ?? []).map((file) => ({
                type: "file",
                url: file,
                filename: file,
                mime: "text/plain",
              })),
},
})
}
return
}
if (!store.prompt.input) return
const trimmed = store.prompt.input.trim()
if (trimmed === "exit" || trimmed === "quit" || trimmed === ":q") {
exit()
return
}
const selectedModel = local.model.current()
if (!selectedModel) {
promptModelWarning()
return
}
const sessionID = props.sessionID
? props.sessionID
: await (async () => {
@@ -737,6 +779,7 @@ export function Prompt(props: PromptProps) {
const highlight = createMemo(() => {
if (keybind.leader) return theme.border
if (store.mode === "shell") return theme.primary
if (store.mode === "handoff") return theme.warning
return local.agent.color(local.agent.current().name)
})
@@ -748,6 +791,7 @@ export function Prompt(props: PromptProps) {
})
const placeholderText = createMemo(() => {
if (store.mode === "handoff") return "Goal for the new session"
if (props.sessionID) return undefined
if (store.mode === "shell") {
const example = SHELL_PLACEHOLDERS[store.placeholder % SHELL_PLACEHOLDERS.length]
@@ -875,7 +919,7 @@ export function Prompt(props: PromptProps) {
e.preventDefault()
return
}
if (store.mode === "shell") {
if (store.mode === "shell" || store.mode === "handoff") {
if ((e.name === "backspace" && input.visualCursor.offset === 0) || e.name === "escape") {
setStore("mode", "normal")
e.preventDefault()
@@ -996,7 +1040,11 @@ export function Prompt(props: PromptProps) {
/>
<box flexDirection="row" flexShrink={0} paddingTop={1} gap={1}>
<text fg={highlight()}>
{store.mode === "shell" ? "Shell" : Locale.titlecase(local.agent.current().name)}{" "}
<Switch>
<Match when={store.mode === "normal"}>{Locale.titlecase(local.agent.current().name)}</Match>
<Match when={store.mode === "shell"}>Shell</Match>
<Match when={store.mode === "handoff"}>Handoff</Match>
</Switch>
</text>
<Show when={store.mode === "normal"}>
<box flexDirection="row" gap={1}>
@@ -1143,6 +1191,11 @@ export function Prompt(props: PromptProps) {
esc <span style={{ fg: theme.textMuted }}>exit shell mode</span>
</text>
</Match>
<Match when={store.mode === "handoff"}>
<text fg={theme.text}>
esc <span style={{ fg: theme.textMuted }}>exit handoff mode</span>
</text>
</Match>
</Switch>
</box>
</Show>

View File

@@ -1,6 +1,7 @@
import { useRenderer } from "@opentui/solid"
import { createSimpleContext } from "./helper"
import { FormatError, FormatUnknownError } from "@/cli/error"
import { win32FlushInputBuffer } from "../win32"
type Exit = ((reason?: unknown) => Promise<void>) & {
message: {
set: (value?: string) => () => void
@@ -32,6 +33,7 @@ export const { use: useExit, provider: ExitProvider } = createSimpleContext({
// Reset window title before destroying renderer
renderer.setTerminalTitle("")
renderer.destroy()
win32FlushInputBuffer()
await input.onExit?.()
if (reason) {
const formatted = FormatError(reason) ?? FormatUnknownError(reason)

View File

@@ -1,4 +1,4 @@
import { createStore } from "solid-js/store"
import { createStore, reconcile } from "solid-js/store"
import { createSimpleContext } from "./helper"
import type { PromptInfo } from "../component/prompt/history"
@@ -32,7 +32,7 @@ export const { use: useRoute, provider: RouteProvider } = createSimpleContext({
},
navigate(route: Route) {
console.log("navigate", route)
setStore(route)
setStore(reconcile(route))
},
}
},
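`reconcile` matters here because the bare store setter merges shallowly, so keys from the previous route would survive navigation. A minimal sketch of the difference, with hypothetical route values:

// Plain merge: navigating away leaves stale keys behind.
setStore({ type: "session", sessionID: "ses_1" })
setStore({ type: "home" }) // store is { type: "home", sessionID: "ses_1" }

// reconcile diffs against the new value and deletes keys that are absent.
setStore(reconcile({ type: "home" })) // store is { type: "home" }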

View File

@@ -299,6 +299,24 @@ export const { use: useSync, provider: SyncProvider } = createSimpleContext({
break
}
case "message.part.delta": {
const parts = store.part[event.properties.messageID]
if (!parts) break
const result = Binary.search(parts, event.properties.partID, (p) => p.id)
if (!result.found) break
setStore(
"part",
event.properties.messageID,
produce((draft) => {
const part = draft[result.index]
const field = event.properties.field as keyof typeof part
const existing = part[field] as string | undefined
;(part[field] as string) = (existing ?? "") + event.properties.delta
}),
)
break
}
case "message.part.removed": {
const parts = store.part[event.properties.messageID]
const result = Binary.search(parts, event.properties.partID, (p) => p.id)

View File

@@ -2042,8 +2042,8 @@ function ApplyPatch(props: ToolProps<typeof ApplyPatchTool>) {
</For>
</Match>
<Match when={true}>
<InlineTool icon="%" pending="Preparing apply_patch..." complete={false} part={props.part}>
apply_patch
<InlineTool icon="%" pending="Preparing patch..." complete={false} part={props.part}>
Patch
</InlineTool>
</Match>
</Switch>

View File

@@ -9,6 +9,7 @@ import { Log } from "@/util/log"
import { withNetworkOptions, resolveNetworkOptions } from "@/cli/network"
import type { Event } from "@opencode-ai/sdk/v2"
import type { EventSource } from "./context/sdk"
import { win32DisableProcessedInput, win32InstallCtrlCGuard } from "./win32"
declare global {
const OPENCODE_WORKER_PATH: string
@@ -77,99 +78,111 @@ export const TuiThreadCommand = cmd({
describe: "agent to use",
}),
handler: async (args) => {
if (args.fork && !args.continue && !args.session) {
UI.error("--fork requires --continue or --session")
process.exit(1)
}
// Resolve relative paths against PWD to preserve behavior when using --cwd flag
const baseCwd = process.env.PWD ?? process.cwd()
const cwd = args.project ? path.resolve(baseCwd, args.project) : process.cwd()
const localWorker = new URL("./worker.ts", import.meta.url)
const distWorker = new URL("./cli/cmd/tui/worker.js", import.meta.url)
const workerPath = await iife(async () => {
if (typeof OPENCODE_WORKER_PATH !== "undefined") return OPENCODE_WORKER_PATH
if (await Bun.file(distWorker).exists()) return distWorker
return localWorker
})
// Keep ENABLE_PROCESSED_INPUT cleared even if other code flips it.
// (Important when running under `bun run` wrappers on Windows.)
const unguard = win32InstallCtrlCGuard()
try {
process.chdir(cwd)
} catch (e) {
UI.error("Failed to change directory to " + cwd)
return
// Must be the very first thing — disables CTRL_C_EVENT before any Worker
// spawn or async work so the OS cannot kill the process group.
win32DisableProcessedInput()
if (args.fork && !args.continue && !args.session) {
UI.error("--fork requires --continue or --session")
process.exitCode = 1
return
}
// Resolve relative paths against PWD to preserve behavior when using --cwd flag
const baseCwd = process.env.PWD ?? process.cwd()
const cwd = args.project ? path.resolve(baseCwd, args.project) : process.cwd()
const localWorker = new URL("./worker.ts", import.meta.url)
const distWorker = new URL("./cli/cmd/tui/worker.js", import.meta.url)
const workerPath = await iife(async () => {
if (typeof OPENCODE_WORKER_PATH !== "undefined") return OPENCODE_WORKER_PATH
if (await Bun.file(distWorker).exists()) return distWorker
return localWorker
})
try {
process.chdir(cwd)
} catch (e) {
UI.error("Failed to change directory to " + cwd)
return
}
const worker = new Worker(workerPath, {
env: Object.fromEntries(
Object.entries(process.env).filter((entry): entry is [string, string] => entry[1] !== undefined),
),
})
worker.onerror = (e) => {
Log.Default.error(e)
}
const client = Rpc.client<typeof rpc>(worker)
process.on("uncaughtException", (e) => {
Log.Default.error(e)
})
process.on("unhandledRejection", (e) => {
Log.Default.error(e)
})
process.on("SIGUSR2", async () => {
await client.call("reload", undefined)
})
const prompt = await iife(async () => {
const piped = !process.stdin.isTTY ? await Bun.stdin.text() : undefined
if (!args.prompt) return piped
return piped ? piped + "\n" + args.prompt : args.prompt
})
// Check if server should be started (port or hostname explicitly set in CLI or config)
const networkOpts = await resolveNetworkOptions(args)
const shouldStartServer =
process.argv.includes("--port") ||
process.argv.includes("--hostname") ||
process.argv.includes("--mdns") ||
networkOpts.mdns ||
networkOpts.port !== 0 ||
networkOpts.hostname !== "127.0.0.1"
let url: string
let customFetch: typeof fetch | undefined
let events: EventSource | undefined
if (shouldStartServer) {
// Start HTTP server for external access
const server = await client.call("server", networkOpts)
url = server.url
} else {
// Use direct RPC communication (no HTTP)
url = "http://opencode.internal"
customFetch = createWorkerFetch(client)
events = createEventSource(client)
}
const tuiPromise = tui({
url,
fetch: customFetch,
events,
args: {
continue: args.continue,
sessionID: args.session,
agent: args.agent,
model: args.model,
prompt,
fork: args.fork,
},
onExit: async () => {
await client.call("shutdown", undefined)
},
})
setTimeout(() => {
client.call("checkUpgrade", { directory: cwd }).catch(() => {})
}, 1000)
await tuiPromise
} finally {
unguard?.()
}
const worker = new Worker(workerPath, {
env: Object.fromEntries(
Object.entries(process.env).filter((entry): entry is [string, string] => entry[1] !== undefined),
),
})
worker.onerror = (e) => {
Log.Default.error(e)
}
const client = Rpc.client<typeof rpc>(worker)
process.on("uncaughtException", (e) => {
Log.Default.error(e)
})
process.on("unhandledRejection", (e) => {
Log.Default.error(e)
})
process.on("SIGUSR2", async () => {
await client.call("reload", undefined)
})
const prompt = await iife(async () => {
const piped = !process.stdin.isTTY ? await Bun.stdin.text() : undefined
if (!args.prompt) return piped
return piped ? piped + "\n" + args.prompt : args.prompt
})
// Check if server should be started (port or hostname explicitly set in CLI or config)
const networkOpts = await resolveNetworkOptions(args)
const shouldStartServer =
process.argv.includes("--port") ||
process.argv.includes("--hostname") ||
process.argv.includes("--mdns") ||
networkOpts.mdns ||
networkOpts.port !== 0 ||
networkOpts.hostname !== "127.0.0.1"
let url: string
let customFetch: typeof fetch | undefined
let events: EventSource | undefined
if (shouldStartServer) {
// Start HTTP server for external access
const server = await client.call("server", networkOpts)
url = server.url
} else {
// Use direct RPC communication (no HTTP)
url = "http://opencode.internal"
customFetch = createWorkerFetch(client)
events = createEventSource(client)
}
const tuiPromise = tui({
url,
fetch: customFetch,
events,
args: {
continue: args.continue,
sessionID: args.session,
agent: args.agent,
model: args.model,
prompt,
fork: args.fork,
},
onExit: async () => {
await client.call("shutdown", undefined)
},
})
setTimeout(() => {
client.call("checkUpgrade", { directory: cwd }).catch(() => {})
}, 1000)
await tuiPromise
},
})

View File

@@ -0,0 +1,129 @@
import { dlopen, ptr } from "bun:ffi"
const STD_INPUT_HANDLE = -10
const ENABLE_PROCESSED_INPUT = 0x0001
const kernel = () =>
dlopen("kernel32.dll", {
GetStdHandle: { args: ["i32"], returns: "ptr" },
GetConsoleMode: { args: ["ptr", "ptr"], returns: "i32" },
SetConsoleMode: { args: ["ptr", "u32"], returns: "i32" },
FlushConsoleInputBuffer: { args: ["ptr"], returns: "i32" },
})
let k32: ReturnType<typeof kernel> | undefined
function load() {
if (process.platform !== "win32") return false
try {
k32 ??= kernel()
return true
} catch {
return false
}
}
/**
* Clear ENABLE_PROCESSED_INPUT on the console stdin handle.
*/
export function win32DisableProcessedInput() {
if (process.platform !== "win32") return
if (!process.stdin.isTTY) return
if (!load()) return
const handle = k32!.symbols.GetStdHandle(STD_INPUT_HANDLE)
const buf = new Uint32Array(1)
if (k32!.symbols.GetConsoleMode(handle, ptr(buf)) === 0) return
const mode = buf[0]!
if ((mode & ENABLE_PROCESSED_INPUT) === 0) return
k32!.symbols.SetConsoleMode(handle, mode & ~ENABLE_PROCESSED_INPUT)
}
/**
* Discard any queued console input (mouse events, key presses, etc.).
*/
export function win32FlushInputBuffer() {
if (process.platform !== "win32") return
if (!process.stdin.isTTY) return
if (!load()) return
const handle = k32!.symbols.GetStdHandle(STD_INPUT_HANDLE)
k32!.symbols.FlushConsoleInputBuffer(handle)
}
let unhook: (() => void) | undefined
/**
* Keep ENABLE_PROCESSED_INPUT disabled.
*
* On Windows, Ctrl+C becomes a CTRL_C_EVENT (instead of stdin input) when
* ENABLE_PROCESSED_INPUT is set. Various runtimes can re-apply console modes
* (sometimes on a later tick), and the flag is console-global, not per-process.
*
* We combine:
* - A `setRawMode(...)` hook to re-clear after known raw-mode toggles.
* - A low-frequency poll as a backstop for native/external mode changes.
*/
export function win32InstallCtrlCGuard() {
if (process.platform !== "win32") return
if (!process.stdin.isTTY) return
if (!load()) return
if (unhook) return unhook
const stdin = process.stdin as any
const original = stdin.setRawMode
const handle = k32!.symbols.GetStdHandle(STD_INPUT_HANDLE)
const buf = new Uint32Array(1)
if (k32!.symbols.GetConsoleMode(handle, ptr(buf)) === 0) return
const initial = buf[0]!
const enforce = () => {
if (k32!.symbols.GetConsoleMode(handle, ptr(buf)) === 0) return
const mode = buf[0]!
if ((mode & ENABLE_PROCESSED_INPUT) === 0) return
k32!.symbols.SetConsoleMode(handle, mode & ~ENABLE_PROCESSED_INPUT)
}
// Some runtimes can re-apply console modes on the next tick; enforce twice.
const later = () => {
enforce()
setImmediate(enforce)
}
let wrapped: ((mode: boolean) => unknown) | undefined
if (typeof original === "function") {
wrapped = (mode: boolean) => {
const result = original.call(stdin, mode)
later()
return result
}
stdin.setRawMode = wrapped
}
// Ensure it's cleared immediately too (covers any earlier mode changes).
later()
const interval = setInterval(enforce, 100)
interval.unref()
let done = false
unhook = () => {
if (done) return
done = true
clearInterval(interval)
if (wrapped && stdin.setRawMode === wrapped) {
stdin.setRawMode = original
}
k32!.symbols.SetConsoleMode(handle, initial)
unhook = undefined
}
return unhook
}
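Callers pair the guard with try/finally so the initial console mode is restored however the TUI exits; a minimal usage sketch mirroring the command handlers above:

const unguard = win32InstallCtrlCGuard() // undefined off Windows or without a TTY
try {
  win32DisableProcessedInput() // Ctrl+C now arrives as stdin bytes, not CTRL_C_EVENT
  // ... run the TUI ...
} finally {
  unguard?.() // restore the console mode captured at install time
  win32FlushInputBuffer() // drop queued input before handing back to the shell
}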

View File

@@ -778,6 +778,7 @@ export namespace Config {
stash_delete: z.string().optional().default("ctrl+d").describe("Delete stash entry"),
model_provider_list: z.string().optional().default("ctrl+a").describe("Open provider list from model dialog"),
model_favorite_toggle: z.string().optional().default("ctrl+f").describe("Toggle model favorite status"),
model_show_all_toggle: z.string().optional().default("ctrl+o").describe("Toggle showing all models"),
session_share: z.string().optional().default("none").describe("Share current session"),
session_unshare: z.string().optional().default("none").describe("Unshare current session"),
session_interrupt: z.string().optional().default("escape").describe("Interrupt current session"),

View File

@@ -46,7 +46,7 @@ export namespace Flag {
export const OPENCODE_EXPERIMENTAL_LSP_TY = truthy("OPENCODE_EXPERIMENTAL_LSP_TY")
export const OPENCODE_EXPERIMENTAL_LSP_TOOL = OPENCODE_EXPERIMENTAL || truthy("OPENCODE_EXPERIMENTAL_LSP_TOOL")
export const OPENCODE_DISABLE_FILETIME_CHECK = truthy("OPENCODE_DISABLE_FILETIME_CHECK")
export const OPENCODE_EXPERIMENTAL_PLAN_MODE = OPENCODE_EXPERIMENTAL || truthy("OPENCODE_EXPERIMENTAL_PLAN_MODE")
export const OPENCODE_EXPERIMENTAL_MARKDOWN = truthy("OPENCODE_EXPERIMENTAL_MARKDOWN")
export const OPENCODE_MODELS_URL = process.env["OPENCODE_MODELS_URL"]
export const OPENCODE_MODELS_PATH = process.env["OPENCODE_MODELS_PATH"]

View File

@@ -26,6 +26,10 @@ import { EOL } from "os"
import { WebCommand } from "./cli/cmd/web"
import { PrCommand } from "./cli/cmd/pr"
import { SessionCommand } from "./cli/cmd/session"
import path from "path"
import { Global } from "./global"
import { JsonMigration } from "./storage/json-migration"
import { Database } from "./storage/db"
process.on("unhandledRejection", (e) => {
Log.Default.error("rejection", {
@@ -74,6 +78,37 @@ const cli = yargs(hideBin(process.argv))
version: Installation.VERSION,
args: process.argv.slice(2),
})
const marker = path.join(Global.Path.data, "opencode.db")
if (!(await Bun.file(marker).exists())) {
console.log("Performing one time database migration, may take a few minutes...")
const tty = process.stdout.isTTY
const width = 36
const orange = "\x1b[38;5;214m"
const muted = "\x1b[0;2m"
const reset = "\x1b[0m"
let last = -1
if (tty) process.stdout.write("\x1b[?25l")
try {
await JsonMigration.run(Database.Client().$client, {
progress: (event) => {
if (!tty) return
const percent = Math.floor((event.current / event.total) * 100)
if (percent === last && event.current !== event.total) return
last = percent
const fill = Math.round((percent / 100) * width)
const bar = `${"■".repeat(fill)}${"・".repeat(width - fill)}`
process.stdout.write(
`\r${orange}${bar} ${percent.toString().padStart(3)}%${reset} ${muted}${event.label.padEnd(12)} ${event.current}/${event.total}${reset}`,
)
if (event.current === event.total) process.stdout.write("\n")
},
})
} finally {
if (tty) process.stdout.write("\x1b[?25h")
}
console.log("Database migration complete.")
}
})
.usage("\n" + UI.logo())
.completion("completion", "generate shell completion script")
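The progress callback's event shape is `{ label, current, total }`, inferred from its usage above. The bar math, distilled into a standalone sketch:

// Sketch only; width, glyphs, and the event shape follow the callback above.
function renderBar(event: { label: string; current: number; total: number }, width = 36) {
  const percent = Math.floor((event.current / event.total) * 100)
  const fill = Math.round((percent / 100) * width)
  return `${"■".repeat(fill)}${"・".repeat(width - fill)} ${percent}% ${event.label}`
}
// renderBar({ label: "sessions", current: 18, total: 36 }) // 50% -> 18 filled cells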

View File

@@ -3,7 +3,8 @@ import { BusEvent } from "@/bus/bus-event"
import { Config } from "@/config/config"
import { Identifier } from "@/id/id"
import { Instance } from "@/project/instance"
import { Storage } from "@/storage/storage"
import { Database, eq } from "@/storage/db"
import { PermissionTable } from "@/session/session.sql"
import { fn } from "@/util/fn"
import { Log } from "@/util/log"
import { Wildcard } from "@/util/wildcard"
@@ -105,9 +106,12 @@ export namespace PermissionNext {
),
}
const state = Instance.state(async () => {
const state = Instance.state(() => {
const projectID = Instance.project.id
const stored = await Storage.read<Ruleset>(["permission", projectID]).catch(() => [] as Ruleset)
const row = Database.use((db) =>
db.select().from(PermissionTable).where(eq(PermissionTable.project_id, projectID)).get(),
)
const stored = row?.data ?? ([] as Ruleset)
const pending: Record<
string,
@@ -222,7 +226,8 @@ export namespace PermissionNext {
// TODO: we don't save the permission ruleset to disk yet until there's
// UI to manage it
// await Storage.write(["permission", Instance.project.id], s.approved)
// db().insert(PermissionTable).values({ projectID: Instance.project.id, data: s.approved })
// .onConflictDoUpdate({ target: PermissionTable.projectID, set: { data: s.approved } }).run()
return
}
},
@@ -275,6 +280,7 @@ export namespace PermissionNext {
}
export async function list() {
return state().then((x) => Object.values(x.pending).map((x) => x.info))
const s = await state()
return Object.values(s.pending).map((x) => x.info)
}
}

View File

@@ -1,5 +1,4 @@
import { Plugin } from "../plugin"
import { Share } from "../share/share"
import { Format } from "../format"
import { LSP } from "../lsp"
import { FileWatcher } from "../file/watcher"
@@ -17,7 +16,6 @@ import { Truncate } from "../tool/truncation"
export async function InstanceBootstrap() {
Log.Default.info("bootstrapping", { directory: Instance.directory })
await Plugin.init()
Share.init()
ShareNext.init()
Format.init()
await LSP.init()

View File

@@ -0,0 +1,15 @@
import { sqliteTable, text, integer } from "drizzle-orm/sqlite-core"
import { Timestamps } from "@/storage/schema.sql"
export const ProjectTable = sqliteTable("project", {
id: text().primaryKey(),
worktree: text().notNull(),
vcs: text(),
name: text(),
icon_url: text(),
icon_color: text(),
...Timestamps,
time_initialized: integer(),
sandboxes: text({ mode: "json" }).notNull().$type<string[]>(),
commands: text({ mode: "json" }).$type<{ start?: string }>(),
})
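Both `sandboxes` and `commands` use drizzle's `text({ mode: "json" })`, which serializes on write and parses on read, so callers get typed values back. A hedged read sketch using the helpers introduced elsewhere in this change (the "global" id is illustrative):

import { Database, eq } from "@/storage/db"
import { ProjectTable } from "./project.sql"

const row = Database.use((db) => db.select().from(ProjectTable).where(eq(ProjectTable.id, "global")).get())
if (row) {
  const dirs: string[] = row.sandboxes // parsed from the JSON text column
  const start: string | undefined = row.commands?.start
}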

View File

@@ -1,18 +1,17 @@
import z from "zod"
import fs from "fs/promises"
import { Filesystem } from "../util/filesystem"
import path from "path"
import { $ } from "bun"
import { Storage } from "../storage/storage"
import { Database, eq } from "../storage/db"
import { ProjectTable } from "./project.sql"
import { SessionTable } from "../session/session.sql"
import { Log } from "../util/log"
import { Flag } from "@/flag/flag"
import { Session } from "../session"
import { work } from "../util/queue"
import { fn } from "@opencode-ai/util/fn"
import { BusEvent } from "@/bus/bus-event"
import { iife } from "@/util/iife"
import { GlobalBus } from "@/bus/global"
import { existsSync } from "fs"
export namespace Project {
const log = Log.create({ service: "project" })
@@ -50,66 +49,86 @@ export namespace Project {
Updated: BusEvent.define("project.updated", Info),
}
type Row = typeof ProjectTable.$inferSelect
export function fromRow(row: Row): Info {
const icon =
row.icon_url || row.icon_color
? { url: row.icon_url ?? undefined, color: row.icon_color ?? undefined }
: undefined
return {
id: row.id,
worktree: row.worktree,
vcs: row.vcs ? Info.shape.vcs.parse(row.vcs) : undefined,
name: row.name ?? undefined,
icon,
time: {
created: row.time_created,
updated: row.time_updated,
initialized: row.time_initialized ?? undefined,
},
sandboxes: row.sandboxes,
commands: row.commands ?? undefined,
}
}
export async function fromDirectory(directory: string) {
log.info("fromDirectory", { directory })
const { id, sandbox, worktree, vcs } = await iife(async () => {
const data = await iife(async () => {
const matches = Filesystem.up({ targets: [".git"], start: directory })
const git = await matches.next().then((x) => x.value)
await matches.return()
if (git) {
let sandbox = path.dirname(git)
const sandbox = path.dirname(git)
const bin = Bun.which("git")
const gitBinary = Bun.which("git")
// cached id calculation
let id = await Bun.file(path.join(git, "opencode"))
const cached = await Bun.file(path.join(git, "opencode"))
.text()
.then((x) => x.trim())
.catch(() => undefined)
if (!gitBinary) {
if (!bin) {
return {
id: id ?? "global",
id: cached ?? "global",
worktree: sandbox,
sandbox: sandbox,
vcs: Info.shape.vcs.parse(Flag.OPENCODE_FAKE_VCS),
}
}
// generate id from root commit
if (!id) {
const roots = await $`git rev-list --max-parents=0 --all`
.quiet()
.nothrow()
.cwd(sandbox)
.text()
.then((x) =>
x
.split("\n")
.filter(Boolean)
.map((x) => x.trim())
.toSorted(),
)
.catch(() => undefined)
if (!roots) {
return {
id: "global",
worktree: sandbox,
sandbox: sandbox,
vcs: Info.shape.vcs.parse(Flag.OPENCODE_FAKE_VCS),
}
}
id = roots[0]
if (id) {
void Bun.file(path.join(git, "opencode"))
.write(id)
const roots = cached
? undefined
: await $`git rev-list --max-parents=0 --all`
.quiet()
.nothrow()
.cwd(sandbox)
.text()
.then((x) =>
x
.split("\n")
.filter(Boolean)
.map((x) => x.trim())
.toSorted(),
)
.catch(() => undefined)
if (!cached && !roots) {
return {
id: "global",
worktree: sandbox,
sandbox: sandbox,
vcs: Info.shape.vcs.parse(Flag.OPENCODE_FAKE_VCS),
}
}
const id = cached ?? roots?.[0]
if (!cached && id) {
void Bun.file(path.join(git, "opencode"))
.write(id)
.catch(() => undefined)
}
if (!id) {
return {
id: "global",
@@ -136,33 +155,31 @@ export namespace Project {
}
}
sandbox = top
const worktree = await $`git rev-parse --git-common-dir`
const tree = await $`git rev-parse --git-common-dir`
.quiet()
.nothrow()
.cwd(sandbox)
.cwd(top)
.text()
.then((x) => {
const dirname = path.dirname(x.trim())
if (dirname === ".") return sandbox
if (dirname === ".") return top
return dirname
})
.catch(() => undefined)
if (!worktree) {
if (!tree) {
return {
id,
sandbox,
worktree: sandbox,
sandbox: top,
worktree: top,
vcs: Info.shape.vcs.parse(Flag.OPENCODE_FAKE_VCS),
}
}
return {
id,
sandbox,
worktree,
sandbox: top,
worktree: tree,
vcs: "git",
}
}
@@ -175,47 +192,80 @@ export namespace Project {
}
})
let existing = await Storage.read<Info>(["project", id]).catch(() => undefined)
if (!existing) {
existing = {
id,
worktree,
vcs: vcs as Info["vcs"],
const row = Database.use((db) => db.select().from(ProjectTable).where(eq(ProjectTable.id, data.id)).get())
const existing = await iife(async () => {
if (row) return fromRow(row)
const fresh: Info = {
id: data.id,
worktree: data.worktree,
vcs: data.vcs as Info["vcs"],
sandboxes: [],
time: {
created: Date.now(),
updated: Date.now(),
},
}
if (id !== "global") {
await migrateFromGlobal(id, worktree)
if (data.id !== "global") {
await migrateFromGlobal(data.id, data.worktree)
}
}
// migrate old projects before sandboxes
if (!existing.sandboxes) existing.sandboxes = []
return fresh
})
if (Flag.OPENCODE_EXPERIMENTAL_ICON_DISCOVERY) discover(existing)
const result: Info = {
...existing,
worktree,
vcs: vcs as Info["vcs"],
worktree: data.worktree,
vcs: data.vcs as Info["vcs"],
time: {
...existing.time,
updated: Date.now(),
},
}
if (sandbox !== result.worktree && !result.sandboxes.includes(sandbox)) result.sandboxes.push(sandbox)
result.sandboxes = result.sandboxes.filter((x) => existsSync(x))
await Storage.write<Info>(["project", id], result)
if (data.sandbox !== result.worktree && !result.sandboxes.includes(data.sandbox))
result.sandboxes.push(data.sandbox)
const sandboxes: string[] = []
for (const x of result.sandboxes) {
const stat = await Bun.file(x)
.stat()
.catch(() => undefined)
if (stat) sandboxes.push(x)
}
result.sandboxes = sandboxes
const insert = {
id: result.id,
worktree: result.worktree,
vcs: result.vcs ?? null,
name: result.name,
icon_url: result.icon?.url,
icon_color: result.icon?.color,
time_created: result.time.created,
time_updated: result.time.updated,
time_initialized: result.time.initialized,
sandboxes: result.sandboxes,
commands: result.commands,
}
const updateSet = {
worktree: result.worktree,
vcs: result.vcs ?? null,
name: result.name,
icon_url: result.icon?.url,
icon_color: result.icon?.color,
time_updated: result.time.updated,
time_initialized: result.time.initialized,
sandboxes: result.sandboxes,
commands: result.commands,
}
Database.use((db) =>
db.insert(ProjectTable).values(insert).onConflictDoUpdate({ target: ProjectTable.id, set: updateSet }).run(),
)
GlobalBus.emit("event", {
payload: {
type: Event.Updated.type,
properties: result,
},
})
return { project: result, sandbox }
return { project: result, sandbox: data.sandbox }
}
export async function discover(input: Info) {
@@ -248,43 +298,54 @@ export namespace Project {
return
}
async function migrateFromGlobal(newProjectID: string, worktree: string) {
const globalProject = await Storage.read<Info>(["project", "global"]).catch(() => undefined)
if (!globalProject) return
async function migrateFromGlobal(id: string, worktree: string) {
const row = Database.use((db) => db.select().from(ProjectTable).where(eq(ProjectTable.id, "global")).get())
if (!row) return
const globalSessions = await Storage.list(["session", "global"]).catch(() => [])
if (globalSessions.length === 0) return
const sessions = Database.use((db) =>
db.select().from(SessionTable).where(eq(SessionTable.project_id, "global")).all(),
)
if (sessions.length === 0) return
log.info("migrating sessions from global", { newProjectID, worktree, count: globalSessions.length })
log.info("migrating sessions from global", { newProjectID: id, worktree, count: sessions.length })
await work(10, globalSessions, async (key) => {
const sessionID = key[key.length - 1]
const session = await Storage.read<Session.Info>(key).catch(() => undefined)
if (!session) return
if (session.directory && session.directory !== worktree) return
await work(10, sessions, async (row) => {
// Skip sessions that belong to a different directory
if (row.directory && row.directory !== worktree) return
session.projectID = newProjectID
log.info("migrating session", { sessionID, from: "global", to: newProjectID })
await Storage.write(["session", newProjectID, sessionID], session)
await Storage.remove(key)
log.info("migrating session", { sessionID: row.id, from: "global", to: id })
Database.use((db) => db.update(SessionTable).set({ project_id: id }).where(eq(SessionTable.id, row.id)).run())
}).catch((error) => {
log.error("failed to migrate sessions from global to project", { error, projectId: newProjectID })
log.error("failed to migrate sessions from global to project", { error, projectId: id })
})
}
export async function setInitialized(projectID: string) {
await Storage.update<Info>(["project", projectID], (draft) => {
draft.time.initialized = Date.now()
})
export function setInitialized(id: string) {
Database.use((db) =>
db
.update(ProjectTable)
.set({
time_initialized: Date.now(),
})
.where(eq(ProjectTable.id, id))
.run(),
)
}
export async function list() {
const keys = await Storage.list(["project"])
const projects = await Promise.all(keys.map((x) => Storage.read<Info>(x)))
return projects.map((project) => ({
...project,
sandboxes: project.sandboxes?.filter((x) => existsSync(x)),
}))
export function list() {
return Database.use((db) =>
db
.select()
.from(ProjectTable)
.all()
.map((row) => fromRow(row)),
)
}
export function get(id: string): Info | undefined {
const row = Database.use((db) => db.select().from(ProjectTable).where(eq(ProjectTable.id, id)).get())
if (!row) return undefined
return fromRow(row)
}
export const update = fn(
@@ -295,77 +356,90 @@ export namespace Project {
commands: Info.shape.commands.optional(),
}),
async (input) => {
const result = await Storage.update<Info>(["project", input.projectID], (draft) => {
if (input.name !== undefined) draft.name = input.name
if (input.icon !== undefined) {
draft.icon = {
...draft.icon,
}
if (input.icon.url !== undefined) draft.icon.url = input.icon.url
if (input.icon.override !== undefined) draft.icon.override = input.icon.override || undefined
if (input.icon.color !== undefined) draft.icon.color = input.icon.color
}
if (input.commands?.start !== undefined) {
const start = input.commands.start || undefined
draft.commands = {
...(draft.commands ?? {}),
}
draft.commands.start = start
if (!draft.commands.start) draft.commands = undefined
}
draft.time.updated = Date.now()
})
const result = Database.use((db) =>
db
.update(ProjectTable)
.set({
name: input.name,
icon_url: input.icon?.url,
icon_color: input.icon?.color,
commands: input.commands,
time_updated: Date.now(),
})
.where(eq(ProjectTable.id, input.projectID))
.returning()
.get(),
)
if (!result) throw new Error(`Project not found: ${input.projectID}`)
const data = fromRow(result)
GlobalBus.emit("event", {
payload: {
type: Event.Updated.type,
properties: result,
properties: data,
},
})
return result
return data
},
)
export async function sandboxes(projectID: string) {
const project = await Storage.read<Info>(["project", projectID]).catch(() => undefined)
if (!project?.sandboxes) return []
export async function sandboxes(id: string) {
const row = Database.use((db) => db.select().from(ProjectTable).where(eq(ProjectTable.id, id)).get())
if (!row) return []
const data = fromRow(row)
const valid: string[] = []
for (const dir of project.sandboxes) {
const stat = await fs.stat(dir).catch(() => undefined)
for (const dir of data.sandboxes) {
const stat = await Bun.file(dir)
.stat()
.catch(() => undefined)
if (stat?.isDirectory()) valid.push(dir)
}
return valid
}
export async function addSandbox(projectID: string, directory: string) {
const result = await Storage.update<Info>(["project", projectID], (draft) => {
const sandboxes = draft.sandboxes ?? []
if (!sandboxes.includes(directory)) sandboxes.push(directory)
draft.sandboxes = sandboxes
draft.time.updated = Date.now()
})
export async function addSandbox(id: string, directory: string) {
const row = Database.use((db) => db.select().from(ProjectTable).where(eq(ProjectTable.id, id)).get())
if (!row) throw new Error(`Project not found: ${id}`)
const sandboxes = [...row.sandboxes]
if (!sandboxes.includes(directory)) sandboxes.push(directory)
const result = Database.use((db) =>
db
.update(ProjectTable)
.set({ sandboxes, time_updated: Date.now() })
.where(eq(ProjectTable.id, id))
.returning()
.get(),
)
if (!result) throw new Error(`Project not found: ${id}`)
const data = fromRow(result)
GlobalBus.emit("event", {
payload: {
type: Event.Updated.type,
properties: result,
properties: data,
},
})
return result
return data
}
export async function removeSandbox(projectID: string, directory: string) {
const result = await Storage.update<Info>(["project", projectID], (draft) => {
const sandboxes = draft.sandboxes ?? []
draft.sandboxes = sandboxes.filter((sandbox) => sandbox !== directory)
draft.time.updated = Date.now()
})
export async function removeSandbox(id: string, directory: string) {
const row = Database.use((db) => db.select().from(ProjectTable).where(eq(ProjectTable.id, id)).get())
if (!row) throw new Error(`Project not found: ${id}`)
const sandboxes = row.sandboxes.filter((s) => s !== directory)
const result = Database.use((db) =>
db
.update(ProjectTable)
.set({ sandboxes, time_updated: Date.now() })
.where(eq(ProjectTable.id, id))
.returning()
.get(),
)
if (!result) throw new Error(`Project not found: ${id}`)
const data = fromRow(result)
GlobalBus.emit("event", {
payload: {
type: Event.Updated.type,
properties: result,
properties: data,
},
})
return result
return data
}
}
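The write in `fromDirectory` is a single-statement upsert: `insert` carries the full row, while `updateSet` deliberately omits `id` and `time_created` so an existing project keeps its original creation time. The shape, distilled (the `insert` and `updateSet` objects are the ones defined in `fromDirectory` above):

Database.use((db) =>
  db
    .insert(ProjectTable)
    .values(insert) // full row, including time_created
    .onConflictDoUpdate({ target: ProjectTable.id, set: updateSet }) // everything mutable
    .run(),
)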

View File

@@ -1,6 +1,6 @@
import { resolver } from "hono-openapi"
import z from "zod"
import { Storage } from "../storage/storage"
import { NotFoundError } from "../storage/db"
export const ERRORS = {
400: {
@@ -25,7 +25,7 @@ export const ERRORS = {
description: "Not found",
content: {
"application/json": {
schema: resolver(Storage.NotFoundError.Schema),
schema: resolver(NotFoundError.Schema),
},
},
},

View File

@@ -3,7 +3,7 @@ import { describeRoute, validator, resolver } from "hono-openapi"
import { upgradeWebSocket } from "hono/bun"
import z from "zod"
import { Pty } from "@/pty"
import { Storage } from "../../storage/storage"
import { NotFoundError } from "../../storage/db"
import { errors } from "../error"
import { lazy } from "../../util/lazy"
@@ -76,7 +76,7 @@ export const PtyRoutes = lazy(() =>
async (c) => {
const info = Pty.get(c.req.valid("param").ptyID)
if (!info) {
throw new Storage.NotFoundError({ message: "Session not found" })
throw new NotFoundError({ message: "Session not found" })
}
return c.json(info)
},

View File

@@ -7,6 +7,7 @@ import { MessageV2 } from "../../session/message-v2"
import { SessionPrompt } from "../../session/prompt"
import { SessionCompaction } from "../../session/compaction"
import { SessionRevert } from "../../session/revert"
import { SessionHandoff } from "../../session/handoff"
import { SessionStatus } from "@/session/status"
import { SessionSummary } from "@/session/summary"
import { Todo } from "../../session/todo"
@@ -276,18 +277,15 @@ export const SessionRoutes = lazy(() =>
const sessionID = c.req.valid("param").sessionID
const updates = c.req.valid("json")
const updatedSession = await Session.update(
sessionID,
(session) => {
if (updates.title !== undefined) {
session.title = updates.title
}
if (updates.time?.archived !== undefined) session.time.archived = updates.time.archived
},
{ touch: false },
)
let session = await Session.get(sessionID)
if (updates.title !== undefined) {
session = await Session.setTitle({ sessionID, title: updates.title })
}
if (updates.time?.archived !== undefined) {
session = await Session.setArchived({ sessionID, time: updates.time.archived })
}
return c.json(updatedSession)
return c.json(session)
},
)
.post(
@@ -935,5 +933,41 @@ export const SessionRoutes = lazy(() =>
})
return c.json(true)
},
)
.post(
"/:sessionID/handoff",
describeRoute({
summary: "Handoff session",
description: "Extract context and relevant files for another agent to continue the conversation.",
operationId: "session.handoff",
responses: {
200: {
description: "Handoff data extracted",
content: {
"application/json": {
schema: resolver(z.object({ text: z.string(), files: z.string().array() })),
},
},
},
...errors(400, 404),
},
}),
validator(
"param",
z.object({
sessionID: z.string().meta({ description: "Session ID" }),
}),
),
validator("json", SessionHandoff.handoff.schema.omit({ sessionID: true })),
async (c) => {
const params = c.req.valid("param")
const body = c.req.valid("json")
const result = await SessionHandoff.handoff({
sessionID: params.sessionID,
model: body.model,
goal: body.goal,
})
return c.json(result)
},
),
)
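From a client's perspective the new endpoint is a plain POST; a hypothetical invocation against a local server (host, port, session id, and model are all placeholders):

const res = await fetch("http://127.0.0.1:4096/session/ses_123/handoff", {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({
    goal: "continue the sqlite migration",
    model: { providerID: "anthropic", modelID: "claude-sonnet-4" },
  }),
})
const { text, files } = (await res.json()) as { text: string; files: string[] }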

View File

@@ -31,7 +31,7 @@ import { ExperimentalRoutes } from "./routes/experimental"
import { ProviderRoutes } from "./routes/provider"
import { lazy } from "../util/lazy"
import { InstanceBootstrap } from "../project/bootstrap"
import { Storage } from "../storage/storage"
import { NotFoundError } from "../storage/db"
import type { ContentfulStatusCode } from "hono/utils/http-status"
import { websocket } from "hono/bun"
import { HTTPException } from "hono/http-exception"
@@ -65,7 +65,7 @@ export namespace Server {
})
if (err instanceof NamedError) {
let status: ContentfulStatusCode
if (err instanceof Storage.NotFoundError) status = 404
if (err instanceof NotFoundError) status = 404
else if (err instanceof Provider.ModelNotFoundError) status = 400
else if (err.name.startsWith("Worktree")) status = 400
else status = 500

View File

@@ -0,0 +1,105 @@
import { fn } from "@/util/fn"
import z from "zod"
import { MessageV2 } from "./message-v2"
import { LLM } from "./llm"
import { Agent } from "@/agent/agent"
import { Provider } from "@/provider/provider"
import { iife } from "@/util/iife"
import { Identifier } from "@/id/id"
import PROMPT_HANDOFF from "./prompt/handoff.txt"
import { type Tool } from "ai"
import { SessionStatus } from "./status"
import { defer } from "@/util/defer"
export namespace SessionHandoff {
const HandoffTool: Tool = {
description:
"A tool to extract relevant information from the thread and select relevant files for another agent to continue the conversation. Use this tool to identify the most important context and files needed.",
inputSchema: z.object({
text: z.string().describe(PROMPT_HANDOFF),
files: z
.string()
.array()
.describe(
[
"An array of file or directory paths (workspace-relative) that are relevant to accomplishing the goal.",
"",
'IMPORTANT: Return as a JSON array of strings, e.g., ["packages/core/src/session/message-v2.ts", "packages/core/src/session/prompt/handoff.txt"]',
"",
"Rules:",
"- Maximum 10 files. Only include the most critical files needed for the task.",
"- You can include directories if multiple files from that directory are needed",
"- Prioritize by importance and relevance. PUT THE MOST IMPORTANT FILES FIRST.",
'- Return workspace-relative paths (e.g., "packages/core/src/session/message-v2.ts")',
"- Do not use absolute paths or invent files",
].join("\n"),
),
}),
async execute(_args, _ctx) {
return {}
},
}
export const handoff = fn(
z.object({
sessionID: z.string(),
model: z.object({ providerID: z.string(), modelID: z.string() }),
goal: z.string().optional(),
}),
async (input) => {
SessionStatus.set(input.sessionID, { type: "busy" })
using _ = defer(() => SessionStatus.set(input.sessionID, { type: "idle" }))
const messages = await MessageV2.filterCompacted(MessageV2.stream(input.sessionID))
const agent = await Agent.get("handoff")
const model = await iife(async () => {
if (agent.model) return Provider.getModel(agent.model.providerID, agent.model.modelID)
const small = await Provider.getSmallModel(input.model.providerID)
if (small) return small
return Provider.getModel(input.model.providerID, input.model.modelID)
})
const user = {
info: {
model: {
providerID: model.providerID,
modelID: model.id,
},
agent: agent.name,
sessionID: input.sessionID,
id: Identifier.ascending("user"),
role: "user",
time: {
created: Date.now(),
},
} satisfies MessageV2.User,
parts: [
{
type: "text",
text: PROMPT_HANDOFF + "\n\nMy request:\n" + (input.goal ?? "general summarization"),
id: Identifier.ascending("part"),
sessionID: input.sessionID,
messageID: Identifier.ascending("message"),
},
] satisfies MessageV2.TextPart[],
} satisfies MessageV2.WithParts
const abort = new AbortController()
const stream = await LLM.stream({
agent,
messages: MessageV2.toModelMessages([...messages, user], model),
sessionID: input.sessionID,
abort: abort.signal,
model,
system: [],
small: true,
user: user.info,
output: "tool",
tools: {
handoff: HandoffTool,
},
})
const [result] = await stream.toolCalls
if (!result) throw new Error("Handoff tool did not return a result")
return result.input
},
)
}
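`output: "tool"` plus a single tool forces the model to answer with a structured tool call, which is how `handoff` gets typed `{ text, files }` back without parsing prose. The same pattern sketched directly against the `ai` SDK (the model instance is a placeholder):

import { generateText, tool, type LanguageModel } from "ai"
import z from "zod"

const extract = tool({
  description: "Return the handoff summary",
  inputSchema: z.object({ text: z.string(), files: z.string().array() }),
  execute: async () => ({}),
})

async function sketch(model: LanguageModel) {
  const result = await generateText({
    model,
    prompt: "Summarize this thread for handoff",
    tools: { extract },
    toolChoice: "required", // the model must call a tool; free-text answers are rejected
  })
  return result.toolCalls[0]?.input // { text, files }, validated against inputSchema
}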

View File

@@ -10,7 +10,9 @@ import { Flag } from "../flag/flag"
import { Identifier } from "../id/id"
import { Installation } from "../installation"
import { Storage } from "../storage/storage"
import { Database, NotFoundError, eq, and, or, like } from "../storage/db"
import { SessionTable, MessageTable, PartTable } from "./session.sql"
import { Storage } from "@/storage/storage"
import { Log } from "../util/log"
import { MessageV2 } from "./message-v2"
import { Instance } from "../project/instance"
@@ -41,6 +43,64 @@ export namespace Session {
).test(title)
}
type SessionRow = typeof SessionTable.$inferSelect
export function fromRow(row: SessionRow): Info {
const summary =
row.summary_additions !== null || row.summary_deletions !== null || row.summary_files !== null
? {
additions: row.summary_additions ?? 0,
deletions: row.summary_deletions ?? 0,
files: row.summary_files ?? 0,
diffs: row.summary_diffs ?? undefined,
}
: undefined
const share = row.share_url ? { url: row.share_url } : undefined
const revert = row.revert ?? undefined
return {
id: row.id,
slug: row.slug,
projectID: row.project_id,
directory: row.directory,
parentID: row.parent_id ?? undefined,
title: row.title,
version: row.version,
summary,
share,
revert,
permission: row.permission ?? undefined,
time: {
created: row.time_created,
updated: row.time_updated,
compacting: row.time_compacting ?? undefined,
archived: row.time_archived ?? undefined,
},
}
}
export function toRow(info: Info) {
return {
id: info.id,
project_id: info.projectID,
parent_id: info.parentID,
slug: info.slug,
directory: info.directory,
title: info.title,
version: info.version,
share_url: info.share?.url,
summary_additions: info.summary?.additions,
summary_deletions: info.summary?.deletions,
summary_files: info.summary?.files,
summary_diffs: info.summary?.diffs,
revert: info.revert ?? null,
permission: info.permission,
time_created: info.time.created,
time_updated: info.time.updated,
time_compacting: info.time.compacting,
time_archived: info.time.archived,
}
}
function getForkedTitle(title: string): string {
const match = title.match(/^(.+) \(fork #(\d+)\)$/)
if (match) {
@@ -94,16 +154,6 @@ export namespace Session {
})
export type Info = z.output<typeof Info>
export const ShareInfo = z
.object({
secret: z.string(),
url: z.string(),
})
.meta({
ref: "SessionShare",
})
export type ShareInfo = z.output<typeof ShareInfo>
export const Event = {
Created: BusEvent.define(
"session.created",
@@ -200,8 +250,17 @@ export namespace Session {
)
export const touch = fn(Identifier.schema("session"), async (sessionID) => {
await update(sessionID, (draft) => {
draft.time.updated = Date.now()
const now = Date.now()
Database.use((db) => {
const row = db
.update(SessionTable)
.set({ time_updated: now })
.where(eq(SessionTable.id, sessionID))
.returning()
.get()
if (!row) throw new NotFoundError({ message: `Session not found: ${sessionID}` })
const info = fromRow(row)
Database.effect(() => Bus.publish(Event.Updated, { info }))
})
})
@@ -227,21 +286,19 @@ export namespace Session {
},
}
log.info("created", result)
await Storage.write(["session", Instance.project.id, result.id], result)
Bus.publish(Event.Created, {
info: result,
Database.use((db) => {
db.insert(SessionTable).values(toRow(result)).run()
Database.effect(() =>
Bus.publish(Event.Created, {
info: result,
}),
)
})
const cfg = await Config.get()
if (!result.parentID && (Flag.OPENCODE_AUTO_SHARE || cfg.share === "auto"))
share(result.id)
.then((share) => {
update(result.id, (draft) => {
draft.share = share
})
})
.catch(() => {
// Silently ignore sharing errors during session creation
})
share(result.id).catch(() => {
// Silently ignore sharing errors during session creation
})
Bus.publish(Event.Updated, {
info: result,
})
@@ -256,12 +313,9 @@ export namespace Session {
}
export const get = fn(Identifier.schema("session"), async (id) => {
const read = await Storage.read<Info>(["session", Instance.project.id, id])
return read as Info
})
export const getShare = fn(Identifier.schema("session"), async (id) => {
return Storage.read<ShareInfo>(["share", id])
const row = Database.use((db) => db.select().from(SessionTable).where(eq(SessionTable.id, id)).get())
if (!row) throw new NotFoundError({ message: `Session not found: ${id}` })
return fromRow(row)
})
export const share = fn(Identifier.schema("session"), async (id) => {
@@ -271,15 +325,12 @@ export namespace Session {
}
const { ShareNext } = await import("@/share/share-next")
const share = await ShareNext.create(id)
await update(
id,
(draft) => {
draft.share = {
url: share.url,
}
},
{ touch: false },
)
Database.use((db) => {
const row = db.update(SessionTable).set({ share_url: share.url }).where(eq(SessionTable.id, id)).returning().get()
if (!row) throw new NotFoundError({ message: `Session not found: ${id}` })
const info = fromRow(row)
Database.effect(() => Bus.publish(Event.Updated, { info }))
})
return share
})
@@ -287,32 +338,155 @@ export namespace Session {
// Use ShareNext to remove the share (same as share function uses ShareNext to create)
const { ShareNext } = await import("@/share/share-next")
await ShareNext.remove(id)
await update(
id,
(draft) => {
draft.share = undefined
},
{ touch: false },
)
Database.use((db) => {
const row = db.update(SessionTable).set({ share_url: null }).where(eq(SessionTable.id, id)).returning().get()
if (!row) throw new NotFoundError({ message: `Session not found: ${id}` })
const info = fromRow(row)
Database.effect(() => Bus.publish(Event.Updated, { info }))
})
})
export async function update(id: string, editor: (session: Info) => void, options?: { touch?: boolean }) {
const project = Instance.project
const result = await Storage.update<Info>(["session", project.id, id], (draft) => {
editor(draft)
if (options?.touch !== false) {
draft.time.updated = Date.now()
}
export const setTitle = fn(
z.object({
sessionID: Identifier.schema("session"),
title: z.string(),
}),
async (input) => {
return Database.use((db) => {
const row = db
.update(SessionTable)
.set({ title: input.title })
.where(eq(SessionTable.id, input.sessionID))
.returning()
.get()
if (!row) throw new NotFoundError({ message: `Session not found: ${input.sessionID}` })
const info = fromRow(row)
Database.effect(() => Bus.publish(Event.Updated, { info }))
return info
})
},
)
export const setArchived = fn(
z.object({
sessionID: Identifier.schema("session"),
time: z.number().optional(),
}),
async (input) => {
return Database.use((db) => {
const row = db
.update(SessionTable)
.set({ time_archived: input.time })
.where(eq(SessionTable.id, input.sessionID))
.returning()
.get()
if (!row) throw new NotFoundError({ message: `Session not found: ${input.sessionID}` })
const info = fromRow(row)
Database.effect(() => Bus.publish(Event.Updated, { info }))
return info
})
},
)
export const setPermission = fn(
z.object({
sessionID: Identifier.schema("session"),
permission: PermissionNext.Ruleset,
}),
async (input) => {
return Database.use((db) => {
const row = db
.update(SessionTable)
.set({ permission: input.permission, time_updated: Date.now() })
.where(eq(SessionTable.id, input.sessionID))
.returning()
.get()
if (!row) throw new NotFoundError({ message: `Session not found: ${input.sessionID}` })
const info = fromRow(row)
Database.effect(() => Bus.publish(Event.Updated, { info }))
return info
})
},
)
export const setRevert = fn(
z.object({
sessionID: Identifier.schema("session"),
revert: Info.shape.revert,
summary: Info.shape.summary,
}),
async (input) => {
return Database.use((db) => {
const row = db
.update(SessionTable)
.set({
revert: input.revert ?? null,
summary_additions: input.summary?.additions,
summary_deletions: input.summary?.deletions,
summary_files: input.summary?.files,
time_updated: Date.now(),
})
.where(eq(SessionTable.id, input.sessionID))
.returning()
.get()
if (!row) throw new NotFoundError({ message: `Session not found: ${input.sessionID}` })
const info = fromRow(row)
Database.effect(() => Bus.publish(Event.Updated, { info }))
return info
})
},
)
export const clearRevert = fn(Identifier.schema("session"), async (sessionID) => {
return Database.use((db) => {
const row = db
.update(SessionTable)
.set({
revert: null,
time_updated: Date.now(),
})
.where(eq(SessionTable.id, sessionID))
.returning()
.get()
if (!row) throw new NotFoundError({ message: `Session not found: ${sessionID}` })
const info = fromRow(row)
Database.effect(() => Bus.publish(Event.Updated, { info }))
return info
})
Bus.publish(Event.Updated, {
info: result,
})
return result
}
})
export const setSummary = fn(
z.object({
sessionID: Identifier.schema("session"),
summary: Info.shape.summary,
}),
async (input) => {
return Database.use((db) => {
const row = db
.update(SessionTable)
.set({
summary_additions: input.summary?.additions,
summary_deletions: input.summary?.deletions,
summary_files: input.summary?.files,
time_updated: Date.now(),
})
.where(eq(SessionTable.id, input.sessionID))
.returning()
.get()
if (!row) throw new NotFoundError({ message: `Session not found: ${input.sessionID}` })
const info = fromRow(row)
Database.effect(() => Bus.publish(Event.Updated, { info }))
return info
})
},
)
export const diff = fn(Identifier.schema("session"), async (sessionID) => {
const diffs = await Storage.read<Snapshot.FileDiff[]>(["session_diff", sessionID])
return diffs ?? []
try {
return await Storage.read<Snapshot.FileDiff[]>(["session_diff", sessionID])
} catch {
return []
}
})
export const messages = fn(
@@ -331,25 +505,37 @@ export namespace Session {
},
)
export async function* list() {
export function* list() {
const project = Instance.project
for (const item of await Storage.list(["session", project.id])) {
const session = await Storage.read<Info>(item).catch(() => undefined)
if (!session) continue
yield session
const rel = path.relative(Instance.worktree, Instance.directory)
const suffix = path.sep + rel
const rows = Database.use((db) =>
db
.select()
.from(SessionTable)
.where(
and(
eq(SessionTable.project_id, project.id),
or(eq(SessionTable.directory, Instance.directory), like(SessionTable.directory, `%${suffix}`)),
),
)
.all(),
)
for (const row of rows) {
yield fromRow(row)
}
}
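  // Worked example of the directory filter above (POSIX separators assumed):
  // with Instance.worktree = "/repo" and Instance.directory = "/repo/packages/core",
  // rel = "packages/core" and suffix = "/packages/core". The query then matches
  // sessions whose directory is exactly "/repo/packages/core" or ends with
  // "/packages/core" (e.g. the same package in another checkout), but not
  // "/repo/packages/core-utils", since the leading separator anchors the match.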
export const children = fn(Identifier.schema("session"), async (parentID) => {
const project = Instance.project
const result = [] as Session.Info[]
for (const item of await Storage.list(["session", project.id])) {
const session = await Storage.read<Info>(item).catch(() => undefined)
if (!session) continue
if (session.parentID !== parentID) continue
result.push(session)
}
return result
const rows = Database.use((db) =>
db
.select()
.from(SessionTable)
.where(and(eq(SessionTable.project_id, project.id), eq(SessionTable.parent_id, parentID)))
.all(),
)
return rows.map(fromRow)
})
export const remove = fn(Identifier.schema("session"), async (sessionID) => {
@@ -360,15 +546,14 @@ export namespace Session {
await remove(child.id)
}
await unshare(sessionID).catch(() => {})
for (const msg of await Storage.list(["message", sessionID])) {
for (const part of await Storage.list(["part", msg.at(-1)!])) {
await Storage.remove(part)
}
await Storage.remove(msg)
}
await Storage.remove(["session", project.id, sessionID])
Bus.publish(Event.Deleted, {
info: session,
// CASCADE delete handles messages and parts automatically
Database.use((db) => {
db.delete(SessionTable).where(eq(SessionTable.id, sessionID)).run()
Database.effect(() =>
Bus.publish(Event.Deleted, {
info: session,
}),
)
})
} catch (e) {
log.error(e)
@@ -376,9 +561,23 @@ export namespace Session {
})
export const updateMessage = fn(MessageV2.Info, async (msg) => {
await Storage.write(["message", msg.sessionID, msg.id], msg)
Bus.publish(MessageV2.Event.Updated, {
info: msg,
    const time_created = msg.time.created
const { id, sessionID, ...data } = msg
Database.use((db) => {
db.insert(MessageTable)
.values({
id,
session_id: sessionID,
time_created,
data,
})
.onConflictDoUpdate({ target: MessageTable.id, set: { data } })
.run()
Database.effect(() =>
Bus.publish(MessageV2.Event.Updated, {
info: msg,
}),
)
})
return msg
})
@@ -389,10 +588,15 @@ export namespace Session {
messageID: Identifier.schema("message"),
}),
async (input) => {
await Storage.remove(["message", input.sessionID, input.messageID])
Bus.publish(MessageV2.Event.Removed, {
sessionID: input.sessionID,
messageID: input.messageID,
// CASCADE delete handles parts automatically
Database.use((db) => {
db.delete(MessageTable).where(eq(MessageTable.id, input.messageID)).run()
Database.effect(() =>
Bus.publish(MessageV2.Event.Removed, {
sessionID: input.sessionID,
messageID: input.messageID,
}),
)
})
return input.messageID
},
@@ -405,39 +609,58 @@ export namespace Session {
partID: Identifier.schema("part"),
}),
async (input) => {
await Storage.remove(["part", input.messageID, input.partID])
Bus.publish(MessageV2.Event.PartRemoved, {
sessionID: input.sessionID,
messageID: input.messageID,
partID: input.partID,
Database.use((db) => {
db.delete(PartTable).where(eq(PartTable.id, input.partID)).run()
Database.effect(() =>
Bus.publish(MessageV2.Event.PartRemoved, {
sessionID: input.sessionID,
messageID: input.messageID,
partID: input.partID,
}),
)
})
return input.partID
},
)
const UpdatePartInput = z.union([
MessageV2.Part,
z.object({
part: MessageV2.TextPart,
delta: z.string(),
}),
z.object({
part: MessageV2.ReasoningPart,
delta: z.string(),
}),
])
const UpdatePartInput = MessageV2.Part
export const updatePart = fn(UpdatePartInput, async (input) => {
const part = "delta" in input ? input.part : input
const delta = "delta" in input ? input.delta : undefined
await Storage.write(["part", part.messageID, part.id], part)
Bus.publish(MessageV2.Event.PartUpdated, {
part,
delta,
export const updatePart = fn(UpdatePartInput, async (part) => {
const { id, messageID, sessionID, ...data } = part
const time = Date.now()
Database.use((db) => {
db.insert(PartTable)
.values({
id,
message_id: messageID,
session_id: sessionID,
time_created: time,
data,
})
.onConflictDoUpdate({ target: PartTable.id, set: { data } })
.run()
Database.effect(() =>
Bus.publish(MessageV2.Event.PartUpdated, {
part,
}),
)
})
return part
})
export const updatePartDelta = fn(
z.object({
sessionID: z.string(),
messageID: z.string(),
partID: z.string(),
field: z.string(),
delta: z.string(),
}),
async (input) => {
Bus.publish(MessageV2.Event.PartDelta, input)
},
)
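`updatePartDelta` only publishes; the full part row is still persisted by `updatePart`, so consumers are expected to apply deltas to their own copy. A hedged subscriber sketch (the cache below is illustrative, not an API from this diff):

```ts
// Hypothetical subscriber: apply a message.part.delta event to an in-memory part.
type PartDelta = { sessionID: string; messageID: string; partID: string; field: string; delta: string }

const cache = new Map<string, Record<string, unknown>>()

function applyDelta(event: PartDelta) {
  const part = cache.get(event.partID)
  if (!part) return
  const current = part[event.field]
  // Streaming deltas append to string fields such as "text".
  part[event.field] = typeof current === "string" ? current + event.delta : event.delta
}
```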
export const getUsage = fn(
z.object({
model: z.custom<Provider.Model>(),

View File

@@ -38,6 +38,7 @@ export namespace LLM {
small?: boolean
tools: Record<string, Tool>
retries?: number
output?: "tool"
}
export type StreamOutput = StreamTextResult<ToolSet, unknown>
@@ -207,6 +208,7 @@ export namespace LLM {
tools,
maxOutputTokens,
abortSignal: input.abort,
toolChoice: input.output === "tool" ? "required" : undefined,
headers: {
...(input.model.providerID.startsWith("opencode")
? {
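The new `output: "tool"` flag maps directly onto the AI SDK's `toolChoice: "required"`, forcing the model to answer with a tool call rather than free text. A sketch of the underlying SDK behavior only, assuming nothing about this repo's wrapper beyond what the hunk shows:

```ts
// Sketch of the AI SDK layer; model/tool setup is illustrative and
// tool-definition fields vary across ai-sdk versions.
import { streamText } from "ai"

declare const model: any // a LanguageModel instance, provider setup omitted
declare const tools: any // a ToolSet built by the caller

const result = streamText({
  model,
  tools,
  toolChoice: "required", // what input.output === "tool" selects above
  prompt: "Extract the handoff context as a tool call",
})
```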

View File

@@ -6,6 +6,10 @@ import { Identifier } from "../id/id"
import { LSP } from "../lsp"
import { Snapshot } from "@/snapshot"
import { fn } from "@/util/fn"
import { Database, eq, desc, inArray } from "@/storage/db"
import { MessageTable, PartTable } from "./session.sql"
import { ProviderTransform } from "@/provider/transform"
import { STATUS_CODES } from "http"
import { Storage } from "@/storage/storage"
import { ProviderError } from "@/provider/error"
import { iife } from "@/util/iife"
@@ -423,7 +427,16 @@ export namespace MessageV2 {
"message.part.updated",
z.object({
part: Part,
delta: z.string().optional(),
}),
),
PartDelta: BusEvent.define(
"message.part.delta",
z.object({
sessionID: z.string(),
messageID: z.string(),
partID: z.string(),
field: z.string(),
delta: z.string(),
}),
),
PartRemoved: BusEvent.define(
@@ -668,23 +681,65 @@ export namespace MessageV2 {
}
export const stream = fn(Identifier.schema("session"), async function* (sessionID) {
const list = await Array.fromAsync(await Storage.list(["message", sessionID]))
for (let i = list.length - 1; i >= 0; i--) {
yield await get({
sessionID,
messageID: list[i][2],
})
const size = 50
let offset = 0
while (true) {
const rows = Database.use((db) =>
db
.select()
.from(MessageTable)
.where(eq(MessageTable.session_id, sessionID))
.orderBy(desc(MessageTable.time_created))
.limit(size)
.offset(offset)
.all(),
)
if (rows.length === 0) break
const ids = rows.map((row) => row.id)
const partsByMessage = new Map<string, MessageV2.Part[]>()
if (ids.length > 0) {
const partRows = Database.use((db) =>
db
.select()
.from(PartTable)
.where(inArray(PartTable.message_id, ids))
.orderBy(PartTable.message_id, PartTable.id)
.all(),
)
for (const row of partRows) {
const part = {
...row.data,
id: row.id,
sessionID: row.session_id,
messageID: row.message_id,
} as MessageV2.Part
const list = partsByMessage.get(row.message_id)
if (list) list.push(part)
else partsByMessage.set(row.message_id, [part])
}
}
for (const row of rows) {
const info = { ...row.data, id: row.id, sessionID: row.session_id } as MessageV2.Info
yield {
info,
parts: partsByMessage.get(row.id) ?? [],
}
}
offset += rows.length
if (rows.length < size) break
}
})
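`stream` now pages through SQLite newest-first, 50 messages at a time, and batches the part lookups per page with `inArray`. Because it is a generator, a caller that stops early never triggers the later queries. Usage sketch (session ID is made up):

```ts
// Usage sketch: iterate the paginated generator, newest message first.
const recent: MessageV2.WithParts[] = []
const messages = await MessageV2.stream("ses_example")
for await (const message of messages) {
  recent.push(message)
  if (recent.length >= 10) break // later pages are never queried
}
```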
export const parts = fn(Identifier.schema("message"), async (messageID) => {
const result = [] as MessageV2.Part[]
for (const item of await Storage.list(["part", messageID])) {
const read = await Storage.read<MessageV2.Part>(item)
result.push(read)
}
result.sort((a, b) => (a.id > b.id ? 1 : -1))
return result
export const parts = fn(Identifier.schema("message"), async (messageID) => {
const rows = Database.use((db) =>
db.select().from(PartTable).where(eq(PartTable.message_id, messageID)).orderBy(PartTable.id).all(),
)
return rows.map(
(row) => ({ ...row.data, id: row.id, sessionID: row.session_id, messageID: row.message_id }) as MessageV2.Part,
)
})
export const get = fn(
@@ -693,8 +748,11 @@ export namespace MessageV2 {
messageID: Identifier.schema("message"),
}),
async (input): Promise<WithParts> => {
const row = Database.use((db) => db.select().from(MessageTable).where(eq(MessageTable.id, input.messageID)).get())
if (!row) throw new Error(`Message not found: ${input.messageID}`)
const info = { ...row.data, id: row.id, sessionID: row.session_id } as MessageV2.Info
return {
info: await Storage.read<MessageV2.Info>(["message", input.sessionID, input.messageID]),
info,
parts: await parts(input.messageID),
}
},

View File

@@ -63,17 +63,19 @@ export namespace SessionProcessor {
if (value.id in reasoningMap) {
continue
}
reasoningMap[value.id] = {
const reasoningPart = {
id: Identifier.ascending("part"),
messageID: input.assistantMessage.id,
sessionID: input.assistantMessage.sessionID,
type: "reasoning",
type: "reasoning" as const,
text: "",
time: {
start: Date.now(),
},
metadata: value.providerMetadata,
}
reasoningMap[value.id] = reasoningPart
await Session.updatePart(reasoningPart)
break
case "reasoning-delta":
@@ -81,7 +83,13 @@ export namespace SessionProcessor {
const part = reasoningMap[value.id]
part.text += value.text
if (value.providerMetadata) part.metadata = value.providerMetadata
if (part.text) await Session.updatePart({ part, delta: value.text })
await Session.updatePartDelta({
sessionID: part.sessionID,
messageID: part.messageID,
partID: part.id,
field: "text",
delta: value.text,
})
}
break
@@ -288,17 +296,20 @@ export namespace SessionProcessor {
},
metadata: value.providerMetadata,
}
await Session.updatePart(currentText)
break
case "text-delta":
if (currentText) {
currentText.text += value.text
if (value.providerMetadata) currentText.metadata = value.providerMetadata
if (currentText.text)
await Session.updatePart({
part: currentText,
delta: value.text,
})
await Session.updatePartDelta({
sessionID: currentText.sessionID,
messageID: currentText.messageID,
partID: currentText.id,
field: "text",
delta: value.text,
})
}
break

View File

@@ -164,9 +164,7 @@ export namespace SessionPrompt {
}
if (permissions.length > 0) {
session.permission = permissions
await Session.update(session.id, (draft) => {
draft.permission = permissions
})
await Session.setPermission({ sessionID: session.id, permission: permissions })
}
if (input.noReply === true) {
@@ -1235,33 +1233,7 @@ export namespace SessionPrompt {
const userMessage = input.messages.findLast((msg) => msg.info.role === "user")
if (!userMessage) return input.messages
// Original logic when experimental plan mode is disabled
if (!Flag.OPENCODE_EXPERIMENTAL_PLAN_MODE) {
if (input.agent.name === "plan") {
userMessage.parts.push({
id: Identifier.ascending("part"),
messageID: userMessage.info.id,
sessionID: userMessage.info.sessionID,
type: "text",
text: PROMPT_PLAN,
synthetic: true,
})
}
const wasPlan = input.messages.some((msg) => msg.info.role === "assistant" && msg.info.agent === "plan")
if (wasPlan && input.agent.name === "build") {
userMessage.parts.push({
id: Identifier.ascending("part"),
messageID: userMessage.info.id,
sessionID: userMessage.info.sessionID,
type: "text",
text: BUILD_SWITCH,
synthetic: true,
})
}
return input.messages
}
// New plan mode logic when flag is enabled
// Plan mode logic
const assistantMessage = input.messages.findLast((msg) => msg.info.role === "assistant")
// Switching from plan mode to build mode
@@ -1853,21 +1825,16 @@ NOTE: At any point in time through this workflow you should feel free to ask the
],
})
const text = await result.text.catch((err) => log.error("failed to generate title", { error: err }))
if (text)
return Session.update(
input.session.id,
(draft) => {
const cleaned = text
.replace(/<think>[\s\S]*?<\/think>\s*/g, "")
.split("\n")
.map((line) => line.trim())
.find((line) => line.length > 0)
if (!cleaned) return
if (text) {
const cleaned = text
.replace(/<think>[\s\S]*?<\/think>\s*/g, "")
.split("\n")
.map((line) => line.trim())
.find((line) => line.length > 0)
if (!cleaned) return
const title = cleaned.length > 100 ? cleaned.substring(0, 97) + "..." : cleaned
draft.title = title
},
{ touch: false },
)
const title = cleaned.length > 100 ? cleaned.substring(0, 97) + "..." : cleaned
return Session.setTitle({ sessionID: input.session.id, title })
}
}
}

View File

@@ -0,0 +1,17 @@
Extract relevant context from the conversation above for continuing this work. Write from my perspective (first person: "I did...", "I told you...").
Consider what would be useful to know based on my request below. Questions that might be relevant:
- What did I just do or implement?
- What instructions did I already give you which are still relevant (e.g. follow patterns in the codebase)?
- What files did I already tell you are important or that I am working on (and should continue working on)?
- Did I provide a plan or spec that should be included?
- What did I already tell you that's important (certain libraries, patterns, constraints, preferences)?
- What important technical details did I discover (APIs, methods, patterns)?
- What caveats, limitations, or open questions did I find?
Extract what matters for the specific request below. Don't answer questions that aren't relevant. Pick an appropriate length based on the complexity of the request.
Focus on capabilities and behavior, not file-by-file changes. Avoid excessive implementation details (variable names, storage keys, constants) unless critical.
Format: Plain text with bullets. No markdown headers, no bold/italic, no code fences. Use workspace-relative paths for files.

View File

@@ -4,8 +4,9 @@ import { Snapshot } from "../snapshot"
import { MessageV2 } from "./message-v2"
import { Session } from "."
import { Log } from "../util/log"
import { splitWhen } from "remeda"
import { Storage } from "../storage/storage"
import { Database, eq } from "../storage/db"
import { MessageTable, PartTable } from "./session.sql"
import { Storage } from "@/storage/storage"
import { Bus } from "../bus"
import { SessionPrompt } from "./prompt"
import { SessionSummary } from "./summary"
@@ -65,13 +66,14 @@ export namespace SessionRevert {
sessionID: input.sessionID,
diff: diffs,
})
return Session.update(input.sessionID, (draft) => {
draft.revert = revert
draft.summary = {
return Session.setRevert({
sessionID: input.sessionID,
revert,
summary: {
additions: diffs.reduce((sum, x) => sum + x.additions, 0),
deletions: diffs.reduce((sum, x) => sum + x.deletions, 0),
files: diffs.length,
}
},
})
}
return session
@@ -83,39 +85,54 @@ export namespace SessionRevert {
const session = await Session.get(input.sessionID)
if (!session.revert) return session
if (session.revert.snapshot) await Snapshot.restore(session.revert.snapshot)
const next = await Session.update(input.sessionID, (draft) => {
draft.revert = undefined
})
return next
return Session.clearRevert(input.sessionID)
}
export async function cleanup(session: Session.Info) {
if (!session.revert) return
const sessionID = session.id
let msgs = await Session.messages({ sessionID })
const msgs = await Session.messages({ sessionID })
const messageID = session.revert.messageID
const [preserve, remove] = splitWhen(msgs, (x) => x.info.id === messageID)
msgs = preserve
const preserve = [] as MessageV2.WithParts[]
const remove = [] as MessageV2.WithParts[]
let target: MessageV2.WithParts | undefined
for (const msg of msgs) {
if (msg.info.id < messageID) {
preserve.push(msg)
continue
}
if (msg.info.id > messageID) {
remove.push(msg)
continue
}
if (session.revert.partID) {
preserve.push(msg)
target = msg
continue
}
remove.push(msg)
}
for (const msg of remove) {
await Storage.remove(["message", sessionID, msg.info.id])
Database.use((db) => db.delete(MessageTable).where(eq(MessageTable.id, msg.info.id)).run())
await Bus.publish(MessageV2.Event.Removed, { sessionID: sessionID, messageID: msg.info.id })
}
const last = preserve.at(-1)
if (session.revert.partID && last) {
if (session.revert.partID && target) {
const partID = session.revert.partID
const [preserveParts, removeParts] = splitWhen(last.parts, (x) => x.id === partID)
last.parts = preserveParts
for (const part of removeParts) {
await Storage.remove(["part", last.info.id, part.id])
await Bus.publish(MessageV2.Event.PartRemoved, {
sessionID: sessionID,
messageID: last.info.id,
partID: part.id,
})
const removeStart = target.parts.findIndex((part) => part.id === partID)
if (removeStart >= 0) {
const preserveParts = target.parts.slice(0, removeStart)
const removeParts = target.parts.slice(removeStart)
target.parts = preserveParts
for (const part of removeParts) {
Database.use((db) => db.delete(PartTable).where(eq(PartTable.id, part.id)).run())
await Bus.publish(MessageV2.Event.PartRemoved, {
sessionID: sessionID,
messageID: target.info.id,
partID: part.id,
})
}
}
}
await Session.update(sessionID, (draft) => {
draft.revert = undefined
})
await Session.clearRevert(sessionID)
}
}
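The preserve/remove split in `cleanup` leans on message IDs being lexicographically ascending (they come from `Identifier.ascending`), so a plain string comparison doubles as a time comparison. Toy illustration with made-up IDs:

```ts
// Illustration only: string order == creation order for ascending IDs.
const revertID = "msg_0003"
const ids = ["msg_0001", "msg_0002", "msg_0003", "msg_0004"]
const preserve = ids.filter((id) => id < revertID) // ["msg_0001", "msg_0002"]
const remove = ids.filter((id) => id > revertID) // ["msg_0004"]
// The revert target itself is kept only when revert.partID points inside it,
// exactly as the loop above does.
console.log(preserve, remove)
```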

View File

@@ -0,0 +1,88 @@
import { sqliteTable, text, integer, index, primaryKey } from "drizzle-orm/sqlite-core"
import { ProjectTable } from "../project/project.sql"
import type { MessageV2 } from "./message-v2"
import type { Snapshot } from "@/snapshot"
import type { PermissionNext } from "@/permission/next"
import { Timestamps } from "@/storage/schema.sql"
type PartData = Omit<MessageV2.Part, "id" | "sessionID" | "messageID">
type InfoData = Omit<MessageV2.Info, "id" | "sessionID">
export const SessionTable = sqliteTable(
"session",
{
id: text().primaryKey(),
project_id: text()
.notNull()
.references(() => ProjectTable.id, { onDelete: "cascade" }),
parent_id: text(),
slug: text().notNull(),
directory: text().notNull(),
title: text().notNull(),
version: text().notNull(),
share_url: text(),
summary_additions: integer(),
summary_deletions: integer(),
summary_files: integer(),
summary_diffs: text({ mode: "json" }).$type<Snapshot.FileDiff[]>(),
revert: text({ mode: "json" }).$type<{ messageID: string; partID?: string; snapshot?: string; diff?: string }>(),
permission: text({ mode: "json" }).$type<PermissionNext.Ruleset>(),
...Timestamps,
time_compacting: integer(),
time_archived: integer(),
},
(table) => [index("session_project_idx").on(table.project_id), index("session_parent_idx").on(table.parent_id)],
)
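The session code earlier in the diff calls a `fromRow` helper that this compare view doesn't show. A plausible reconstruction from the columns above, purely for orientation (hypothetical; the real mapping may differ):

```ts
// Hypothetical fromRow sketch: fold the extracted columns back into Session.Info.
type SessionRow = typeof SessionTable.$inferSelect

function fromRow(row: SessionRow) {
  return {
    id: row.id,
    projectID: row.project_id,
    parentID: row.parent_id ?? undefined,
    slug: row.slug,
    directory: row.directory,
    title: row.title,
    version: row.version,
    share: row.share_url ? { url: row.share_url } : undefined,
    summary:
      row.summary_additions != null
        ? { additions: row.summary_additions, deletions: row.summary_deletions!, files: row.summary_files! }
        : undefined,
    revert: row.revert ?? undefined,
    permission: row.permission ?? undefined,
    time: { created: row.time_created, updated: row.time_updated },
  }
}
```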
export const MessageTable = sqliteTable(
"message",
{
id: text().primaryKey(),
session_id: text()
.notNull()
.references(() => SessionTable.id, { onDelete: "cascade" }),
...Timestamps,
data: text({ mode: "json" }).notNull().$type<InfoData>(),
},
(table) => [index("message_session_idx").on(table.session_id)],
)
export const PartTable = sqliteTable(
"part",
{
id: text().primaryKey(),
message_id: text()
.notNull()
.references(() => MessageTable.id, { onDelete: "cascade" }),
session_id: text().notNull(),
...Timestamps,
data: text({ mode: "json" }).notNull().$type<PartData>(),
},
(table) => [index("part_message_idx").on(table.message_id), index("part_session_idx").on(table.session_id)],
)
export const TodoTable = sqliteTable(
"todo",
{
session_id: text()
.notNull()
.references(() => SessionTable.id, { onDelete: "cascade" }),
content: text().notNull(),
status: text().notNull(),
priority: text().notNull(),
position: integer().notNull(),
...Timestamps,
},
(table) => [
primaryKey({ columns: [table.session_id, table.position] }),
index("todo_session_idx").on(table.session_id),
],
)
export const PermissionTable = sqliteTable("permission", {
project_id: text()
.primaryKey()
.references(() => ProjectTable.id, { onDelete: "cascade" }),
...Timestamps,
data: text({ mode: "json" }).notNull().$type<PermissionNext.Ruleset>(),
})
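All child tables reference their parent with `onDelete: "cascade"`, which is why the session `remove` path above issues a single delete; note this only works because `Database.Client` enables `PRAGMA foreign_keys = ON`. Sketch:

```ts
// With foreign_keys = ON, one delete removes the session plus its messages,
// parts, todos, and share row (session ID is illustrative).
Database.use((db) => db.delete(SessionTable).where(eq(SessionTable.id, "ses_example")).run())
```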

View File

@@ -90,12 +90,13 @@ export namespace SessionSummary {
async function summarizeSession(input: { sessionID: string; messages: MessageV2.WithParts[] }) {
const diffs = await computeDiff({ messages: input.messages })
await Session.update(input.sessionID, (draft) => {
draft.summary = {
await Session.setSummary({
sessionID: input.sessionID,
summary: {
additions: diffs.reduce((sum, x) => sum + x.additions, 0),
deletions: diffs.reduce((sum, x) => sum + x.deletions, 0),
files: diffs.length,
}
},
})
await Storage.write(["session_diff", input.sessionID], diffs)
Bus.publish(Session.Event.Diff, {

View File

@@ -1,7 +1,8 @@
import { BusEvent } from "@/bus/bus-event"
import { Bus } from "@/bus"
import z from "zod"
import { Storage } from "../storage/storage"
import { Database, eq, asc } from "../storage/db"
import { TodoTable } from "./session.sql"
export namespace Todo {
export const Info = z
@@ -9,7 +10,6 @@ export namespace Todo {
content: z.string().describe("Brief description of the task"),
status: z.string().describe("Current status of the task: pending, in_progress, completed, cancelled"),
priority: z.string().describe("Priority level of the task: high, medium, low"),
id: z.string().describe("Unique identifier for the todo item"),
})
.meta({ ref: "Todo" })
export type Info = z.infer<typeof Info>
@@ -24,14 +24,33 @@ export namespace Todo {
),
}
export async function update(input: { sessionID: string; todos: Info[] }) {
await Storage.write(["todo", input.sessionID], input.todos)
export function update(input: { sessionID: string; todos: Info[] }) {
Database.transaction((db) => {
db.delete(TodoTable).where(eq(TodoTable.session_id, input.sessionID)).run()
if (input.todos.length === 0) return
db.insert(TodoTable)
.values(
input.todos.map((todo, position) => ({
session_id: input.sessionID,
content: todo.content,
status: todo.status,
priority: todo.priority,
position,
})),
)
.run()
})
Bus.publish(Event.Updated, input)
}
export async function get(sessionID: string) {
return Storage.read<Info[]>(["todo", sessionID])
.then((x) => x || [])
.catch(() => [])
export function get(sessionID: string) {
const rows = Database.use((db) =>
db.select().from(TodoTable).where(eq(TodoTable.session_id, sessionID)).orderBy(asc(TodoTable.position)).all(),
)
return rows.map((row) => ({
content: row.content,
status: row.status,
priority: row.priority,
}))
}
}
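`Todo.update` replaces the whole list transactionally (delete, then bulk insert keyed by `position`), and `get` reads back in position order, so list ordering survives without the old `id` field. Usage sketch:

```ts
// Usage sketch: replace then read back a session's todos (IDs are gone;
// position preserves ordering).
Todo.update({
  sessionID: "ses_example",
  todos: [
    { content: "write migration", status: "completed", priority: "high" },
    { content: "update tests", status: "in_progress", priority: "medium" },
  ],
})
console.log(Todo.get("ses_example").map((todo) => todo.content))
// -> ["write migration", "update tests"]
```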

View File

@@ -4,7 +4,8 @@ import { ulid } from "ulid"
import { Provider } from "@/provider/provider"
import { Session } from "@/session"
import { MessageV2 } from "@/session/message-v2"
import { Storage } from "@/storage/storage"
import { Database, eq } from "@/storage/db"
import { SessionShareTable } from "./share.sql"
import { Log } from "@/util/log"
import type * as SDK from "@opencode-ai/sdk/v2"
@@ -77,17 +78,26 @@ export namespace ShareNext {
})
.then((x) => x.json())
.then((x) => x as { id: string; url: string; secret: string })
await Storage.write(["session_share", sessionID], result)
Database.use((db) =>
db
.insert(SessionShareTable)
.values({ session_id: sessionID, id: result.id, secret: result.secret, url: result.url })
.onConflictDoUpdate({
target: SessionShareTable.session_id,
set: { id: result.id, secret: result.secret, url: result.url },
})
.run(),
)
fullSync(sessionID)
return result
}
function get(sessionID: string) {
return Storage.read<{
id: string
secret: string
url: string
}>(["session_share", sessionID])
const row = Database.use((db) =>
db.select().from(SessionShareTable).where(eq(SessionShareTable.session_id, sessionID)).get(),
)
if (!row) return
return { id: row.id, secret: row.secret, url: row.url }
}
type Data =
@@ -132,7 +142,7 @@ export namespace ShareNext {
const queued = queue.get(sessionID)
if (!queued) return
queue.delete(sessionID)
const share = await get(sessionID).catch(() => undefined)
const share = get(sessionID)
if (!share) return
await fetch(`${await url()}/api/share/${share.id}/sync`, {
@@ -152,7 +162,7 @@ export namespace ShareNext {
export async function remove(sessionID: string) {
if (disabled) return
log.info("removing share", { sessionID })
const share = await get(sessionID)
const share = get(sessionID)
if (!share) return
await fetch(`${await url()}/api/share/${share.id}`, {
method: "DELETE",
@@ -163,7 +173,7 @@ export namespace ShareNext {
secret: share.secret,
}),
})
await Storage.remove(["session_share", sessionID])
Database.use((db) => db.delete(SessionShareTable).where(eq(SessionShareTable.session_id, sessionID)).run())
}
async function fullSync(sessionID: string) {

View File

@@ -0,0 +1,13 @@
import { sqliteTable, text } from "drizzle-orm/sqlite-core"
import { SessionTable } from "../session/session.sql"
import { Timestamps } from "@/storage/schema.sql"
export const SessionShareTable = sqliteTable("session_share", {
session_id: text()
.primaryKey()
.references(() => SessionTable.id, { onDelete: "cascade" }),
id: text().notNull(),
secret: text().notNull(),
url: text().notNull(),
...Timestamps,
})

View File

@@ -1,92 +0,0 @@
import { Bus } from "../bus"
import { Installation } from "../installation"
import { Session } from "../session"
import { MessageV2 } from "../session/message-v2"
import { Log } from "../util/log"
export namespace Share {
const log = Log.create({ service: "share" })
let queue: Promise<void> = Promise.resolve()
const pending = new Map<string, any>()
export async function sync(key: string, content: any) {
if (disabled) return
const [root, ...splits] = key.split("/")
if (root !== "session") return
const [sub, sessionID] = splits
if (sub === "share") return
const share = await Session.getShare(sessionID).catch(() => {})
if (!share) return
const { secret } = share
pending.set(key, content)
queue = queue
.then(async () => {
const content = pending.get(key)
if (content === undefined) return
pending.delete(key)
return fetch(`${URL}/share_sync`, {
method: "POST",
body: JSON.stringify({
sessionID: sessionID,
secret,
key: key,
content,
}),
})
})
.then((x) => {
if (x) {
log.info("synced", {
key: key,
status: x.status,
})
}
})
}
export function init() {
Bus.subscribe(Session.Event.Updated, async (evt) => {
await sync("session/info/" + evt.properties.info.id, evt.properties.info)
})
Bus.subscribe(MessageV2.Event.Updated, async (evt) => {
await sync("session/message/" + evt.properties.info.sessionID + "/" + evt.properties.info.id, evt.properties.info)
})
Bus.subscribe(MessageV2.Event.PartUpdated, async (evt) => {
await sync(
"session/part/" +
evt.properties.part.sessionID +
"/" +
evt.properties.part.messageID +
"/" +
evt.properties.part.id,
evt.properties.part,
)
})
}
export const URL =
process.env["OPENCODE_API"] ??
(Installation.isPreview() || Installation.isLocal() ? "https://api.dev.opencode.ai" : "https://api.opencode.ai")
const disabled = process.env["OPENCODE_DISABLE_SHARE"] === "true" || process.env["OPENCODE_DISABLE_SHARE"] === "1"
export async function create(sessionID: string) {
if (disabled) return { url: "", secret: "" }
return fetch(`${URL}/share_create`, {
method: "POST",
body: JSON.stringify({ sessionID: sessionID }),
})
.then((x) => x.json())
.then((x) => x as { url: string; secret: string })
}
export async function remove(sessionID: string, secret: string) {
if (disabled) return {}
return fetch(`${URL}/share_delete`, {
method: "POST",
body: JSON.stringify({ sessionID, secret }),
}).then((x) => x.json())
}
}

packages/opencode/src/sql.d.ts (vendored, new file, 4 lines)
View File

@@ -0,0 +1,4 @@
declare module "*.sql" {
const content: string
export default content
}
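This ambient declaration gives `.sql` imports a string type; the string content itself comes from the bundler (the bundled `OPENCODE_MIGRATIONS` path in the db module). Sketch, with a made-up migration path:

```ts
// Usage sketch: a raw migration file imported as a string (path is made up;
// the runtime loader must also be configured to serve .sql files as text).
import migration from "../migration/20260201000000_init/migration.sql"

console.log(typeof migration) // "string"
```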

View File

@@ -0,0 +1,140 @@
import { Database as BunDatabase } from "bun:sqlite"
import { drizzle, type SQLiteBunDatabase } from "drizzle-orm/bun-sqlite"
import { migrate } from "drizzle-orm/bun-sqlite/migrator"
import { type SQLiteTransaction } from "drizzle-orm/sqlite-core"
export * from "drizzle-orm"
import { Context } from "../util/context"
import { lazy } from "../util/lazy"
import { Global } from "../global"
import { Log } from "../util/log"
import { NamedError } from "@opencode-ai/util/error"
import z from "zod"
import path from "path"
import { readFileSync, readdirSync } from "fs"
import fs from "fs/promises"
import { Instance } from "@/project/instance"
declare const OPENCODE_MIGRATIONS: { sql: string; timestamp: number }[] | undefined
export const NotFoundError = NamedError.create(
"NotFoundError",
z.object({
message: z.string(),
}),
)
const log = Log.create({ service: "db" })
export namespace Database {
export type Transaction = SQLiteTransaction<"sync", void, Record<string, never>, Record<string, never>>
type Client = SQLiteBunDatabase
type Journal = { sql: string; timestamp: number }[]
function time(tag: string) {
const match = /^(\d{4})(\d{2})(\d{2})(\d{2})(\d{2})(\d{2})/.exec(tag)
if (!match) return 0
return Date.UTC(
Number(match[1]),
Number(match[2]) - 1,
Number(match[3]),
Number(match[4]),
Number(match[5]),
Number(match[6]),
)
}
function migrations(dir: string): Journal {
const dirs = readdirSync(dir, { withFileTypes: true })
.filter((entry) => entry.isDirectory())
.map((entry) => entry.name)
const sql = dirs
.map((name) => {
const file = path.join(dir, name, "migration.sql")
if (!Bun.file(file).size) return
return {
sql: readFileSync(file, "utf-8"),
timestamp: time(name),
}
})
.filter(Boolean) as Journal
return sql.sort((a, b) => a.timestamp - b.timestamp)
}
export const Client = lazy(() => {
log.info("opening database", { path: path.join(Global.Path.data, "opencode.db") })
const sqlite = new BunDatabase(path.join(Global.Path.data, "opencode.db"), { create: true })
sqlite.run("PRAGMA journal_mode = WAL")
sqlite.run("PRAGMA synchronous = NORMAL")
sqlite.run("PRAGMA busy_timeout = 5000")
sqlite.run("PRAGMA cache_size = -64000")
sqlite.run("PRAGMA foreign_keys = ON")
const db = drizzle({ client: sqlite })
// Apply schema migrations
const entries =
typeof OPENCODE_MIGRATIONS !== "undefined"
? OPENCODE_MIGRATIONS
: migrations(path.join(import.meta.dirname, "../../migration"))
if (entries.length > 0) {
log.info("applying migrations", {
count: entries.length,
mode: typeof OPENCODE_MIGRATIONS !== "undefined" ? "bundled" : "dev",
})
migrate(db, entries)
}
return db
})
export type TxOrDb = Transaction | Client
const ctx = Context.create<{
tx: TxOrDb
effects: (() => void | Promise<void>)[]
}>("database")
export function use<T>(callback: (trx: TxOrDb) => T): T {
try {
return callback(ctx.use().tx)
} catch (err) {
if (err instanceof Context.NotFound) {
const effects: (() => void | Promise<void>)[] = []
const result = ctx.provide({ effects, tx: Client() }, () => callback(Client()))
for (const effect of effects) effect()
return result
}
throw err
}
}
export function effect(fn: () => any | Promise<any>) {
try {
ctx.use().effects.push(fn)
} catch {
fn()
}
}
export function transaction<T>(callback: (tx: TxOrDb) => T): T {
try {
return callback(ctx.use().tx)
} catch (err) {
if (err instanceof Context.NotFound) {
const effects: (() => void | Promise<void>)[] = []
const result = Client().transaction((tx) => {
return ctx.provide({ tx, effects }, () => callback(tx))
})
for (const effect of effects) effect()
return result
}
throw err
}
}
}
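The Context-based design means nested `use`/`transaction` calls share one ambient handle, and `effect` callbacks (mostly the `Bus.publish` calls above) are deferred until the outermost scope finishes, i.e. after commit inside a transaction. Behavior sketch:

```ts
// Behavior sketch using the namespace above: effects registered inside a
// transaction run once, after the commit.
Database.transaction(() => {
  Database.use(() => {
    /* nested call reuses the ambient tx; no second transaction is opened */
  })
  Database.effect(() => console.log("published after commit"))
  console.log("still inside the transaction")
})
// Prints "still inside the transaction", then "published after commit".
```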

View File

@@ -0,0 +1,437 @@
import { Database } from "bun:sqlite"
import { drizzle } from "drizzle-orm/bun-sqlite"
import { Global } from "../global"
import { Log } from "../util/log"
import { ProjectTable } from "../project/project.sql"
import { SessionTable, MessageTable, PartTable, TodoTable, PermissionTable } from "../session/session.sql"
import { SessionShareTable } from "../share/share.sql"
import path from "path"
import { existsSync } from "fs"
export namespace JsonMigration {
const log = Log.create({ service: "json-migration" })
export type Progress = {
current: number
total: number
label: string
}
type Options = {
progress?: (event: Progress) => void
}
export async function run(sqlite: Database, options?: Options) {
const storageDir = path.join(Global.Path.data, "storage")
if (!existsSync(storageDir)) {
log.info("storage directory does not exist, skipping migration")
return {
projects: 0,
sessions: 0,
messages: 0,
parts: 0,
todos: 0,
permissions: 0,
shares: 0,
errors: [] as string[],
}
}
log.info("starting json to sqlite migration", { storageDir })
const start = performance.now()
const db = drizzle({ client: sqlite })
// Optimize SQLite for bulk inserts
sqlite.exec("PRAGMA journal_mode = WAL")
sqlite.exec("PRAGMA synchronous = OFF")
sqlite.exec("PRAGMA cache_size = 10000")
sqlite.exec("PRAGMA temp_store = MEMORY")
const stats = {
projects: 0,
sessions: 0,
messages: 0,
parts: 0,
todos: 0,
permissions: 0,
shares: 0,
errors: [] as string[],
}
const orphans = {
sessions: 0,
todos: 0,
permissions: 0,
shares: 0,
}
const errs = stats.errors
const batchSize = 1000
const now = Date.now()
async function list(pattern: string) {
const items: string[] = []
const scan = new Bun.Glob(pattern)
for await (const file of scan.scan({ cwd: storageDir, absolute: true })) {
items.push(file)
}
return items
}
async function read(files: string[], start: number, end: number) {
const count = end - start
const tasks = new Array(count)
for (let i = 0; i < count; i++) {
tasks[i] = Bun.file(files[start + i]).json()
}
const results = await Promise.allSettled(tasks)
const items = new Array(count)
for (let i = 0; i < results.length; i++) {
const result = results[i]
if (result.status === "fulfilled") {
items[i] = result.value
continue
}
errs.push(`failed to read ${files[start + i]}: ${result.reason}`)
}
return items
}
function insert(values: any[], table: any, label: string) {
if (values.length === 0) return 0
try {
db.insert(table).values(values).onConflictDoNothing().run()
return values.length
} catch (e) {
errs.push(`failed to migrate ${label} batch: ${e}`)
return 0
}
}
// Pre-scan all files upfront to avoid repeated glob operations
log.info("scanning files...")
const [projectFiles, sessionFiles, messageFiles, partFiles, todoFiles, permFiles, shareFiles] = await Promise.all([
list("project/*.json"),
list("session/*/*.json"),
list("message/*/*.json"),
list("part/*/*.json"),
list("todo/*.json"),
list("permission/*.json"),
list("session_share/*.json"),
])
log.info("file scan complete", {
projects: projectFiles.length,
sessions: sessionFiles.length,
messages: messageFiles.length,
parts: partFiles.length,
todos: todoFiles.length,
permissions: permFiles.length,
shares: shareFiles.length,
})
const total = Math.max(
1,
projectFiles.length +
sessionFiles.length +
messageFiles.length +
partFiles.length +
todoFiles.length +
permFiles.length +
shareFiles.length,
)
const progress = options?.progress
let current = 0
const step = (label: string, count: number) => {
current = Math.min(total, current + count)
progress?.({ current, total, label })
}
progress?.({ current, total, label: "starting" })
sqlite.exec("BEGIN TRANSACTION")
// Migrate projects first (no FK deps)
const projectIds = new Set<string>()
const projectValues = [] as any[]
for (let i = 0; i < projectFiles.length; i += batchSize) {
const end = Math.min(i + batchSize, projectFiles.length)
const batch = await read(projectFiles, i, end)
projectValues.length = 0
for (let j = 0; j < batch.length; j++) {
const data = batch[j]
if (!data) continue
if (!data?.id) {
errs.push(`project missing id: ${projectFiles[i + j]}`)
continue
}
projectIds.add(data.id)
projectValues.push({
id: data.id,
worktree: data.worktree ?? "/",
vcs: data.vcs,
name: data.name ?? undefined,
icon_url: data.icon?.url,
icon_color: data.icon?.color,
time_created: data.time?.created ?? now,
time_updated: data.time?.updated ?? now,
time_initialized: data.time?.initialized,
sandboxes: data.sandboxes ?? [],
commands: data.commands,
})
}
stats.projects += insert(projectValues, ProjectTable, "project")
step("projects", end - i)
}
log.info("migrated projects", { count: stats.projects, duration: Math.round(performance.now() - start) })
// Migrate sessions (depends on projects)
const sessionIds = new Set<string>()
const sessionValues = [] as any[]
for (let i = 0; i < sessionFiles.length; i += batchSize) {
const end = Math.min(i + batchSize, sessionFiles.length)
const batch = await read(sessionFiles, i, end)
sessionValues.length = 0
for (let j = 0; j < batch.length; j++) {
const data = batch[j]
if (!data) continue
if (!data?.id || !data?.projectID) {
errs.push(`session missing id or projectID: ${sessionFiles[i + j]}`)
continue
}
if (!projectIds.has(data.projectID)) {
orphans.sessions++
continue
}
sessionIds.add(data.id)
sessionValues.push({
id: data.id,
project_id: data.projectID,
parent_id: data.parentID ?? null,
slug: data.slug ?? "",
directory: data.directory ?? "",
title: data.title ?? "",
version: data.version ?? "",
share_url: data.share?.url ?? null,
summary_additions: data.summary?.additions ?? null,
summary_deletions: data.summary?.deletions ?? null,
summary_files: data.summary?.files ?? null,
summary_diffs: data.summary?.diffs ?? null,
revert: data.revert ?? null,
permission: data.permission ?? null,
time_created: data.time?.created ?? now,
time_updated: data.time?.updated ?? now,
time_compacting: data.time?.compacting ?? null,
time_archived: data.time?.archived ?? null,
})
}
stats.sessions += insert(sessionValues, SessionTable, "session")
step("sessions", end - i)
}
log.info("migrated sessions", { count: stats.sessions })
if (orphans.sessions > 0) {
log.warn("skipped orphaned sessions", { count: orphans.sessions })
}
// Migrate messages using pre-scanned file map
const allMessageFiles = [] as string[]
const allMessageSessions = [] as string[]
const messageSessions = new Map<string, string>()
for (const file of messageFiles) {
const sessionID = path.basename(path.dirname(file))
if (!sessionIds.has(sessionID)) continue
allMessageFiles.push(file)
allMessageSessions.push(sessionID)
}
for (let i = 0; i < allMessageFiles.length; i += batchSize) {
const end = Math.min(i + batchSize, allMessageFiles.length)
const batch = await read(allMessageFiles, i, end)
const values = new Array(batch.length)
let count = 0
for (let j = 0; j < batch.length; j++) {
const data = batch[j]
if (!data) continue
const file = allMessageFiles[i + j]
const id = data.id ?? path.basename(file, ".json")
if (!id) {
errs.push(`message missing id: ${file}`)
continue
}
const sessionID = allMessageSessions[i + j]
messageSessions.set(id, sessionID)
const rest = data
delete rest.id
delete rest.sessionID
values[count++] = {
id,
session_id: sessionID,
time_created: data.time?.created ?? now,
time_updated: data.time?.updated ?? now,
data: rest,
}
}
values.length = count
stats.messages += insert(values, MessageTable, "message")
step("messages", end - i)
}
log.info("migrated messages", { count: stats.messages })
// Migrate parts using pre-scanned file map
for (let i = 0; i < partFiles.length; i += batchSize) {
const end = Math.min(i + batchSize, partFiles.length)
const batch = await read(partFiles, i, end)
const values = new Array(batch.length)
let count = 0
for (let j = 0; j < batch.length; j++) {
const data = batch[j]
if (!data) continue
const file = partFiles[i + j]
const id = data.id ?? path.basename(file, ".json")
const messageID = data.messageID ?? path.basename(path.dirname(file))
if (!id || !messageID) {
errs.push(`part missing id/messageID: ${file}`)
continue
}
const sessionID = messageSessions.get(messageID)
if (!sessionID) {
errs.push(`part missing message session: ${file}`)
continue
}
if (!sessionIds.has(sessionID)) continue
const rest = data
delete rest.id
delete rest.messageID
delete rest.sessionID
values[count++] = {
id,
message_id: messageID,
session_id: sessionID,
time_created: data.time?.created ?? now,
time_updated: data.time?.updated ?? now,
data: rest,
}
}
values.length = count
stats.parts += insert(values, PartTable, "part")
step("parts", end - i)
}
log.info("migrated parts", { count: stats.parts })
// Migrate todos
const todoSessions = todoFiles.map((file) => path.basename(file, ".json"))
for (let i = 0; i < todoFiles.length; i += batchSize) {
const end = Math.min(i + batchSize, todoFiles.length)
const batch = await read(todoFiles, i, end)
const values = [] as any[]
for (let j = 0; j < batch.length; j++) {
const data = batch[j]
if (!data) continue
const sessionID = todoSessions[i + j]
if (!sessionIds.has(sessionID)) {
orphans.todos++
continue
}
if (!Array.isArray(data)) {
errs.push(`todo not an array: ${todoFiles[i + j]}`)
continue
}
for (let position = 0; position < data.length; position++) {
const todo = data[position]
if (!todo?.content || !todo?.status || !todo?.priority) continue
values.push({
session_id: sessionID,
content: todo.content,
status: todo.status,
priority: todo.priority,
position,
time_created: now,
time_updated: now,
})
}
}
stats.todos += insert(values, TodoTable, "todo")
step("todos", end - i)
}
log.info("migrated todos", { count: stats.todos })
if (orphans.todos > 0) {
log.warn("skipped orphaned todos", { count: orphans.todos })
}
// Migrate permissions
const permProjects = permFiles.map((file) => path.basename(file, ".json"))
const permValues = [] as any[]
for (let i = 0; i < permFiles.length; i += batchSize) {
const end = Math.min(i + batchSize, permFiles.length)
const batch = await read(permFiles, i, end)
permValues.length = 0
for (let j = 0; j < batch.length; j++) {
const data = batch[j]
if (!data) continue
const projectID = permProjects[i + j]
if (!projectIds.has(projectID)) {
orphans.permissions++
continue
}
permValues.push({ project_id: projectID, data })
}
stats.permissions += insert(permValues, PermissionTable, "permission")
step("permissions", end - i)
}
log.info("migrated permissions", { count: stats.permissions })
if (orphans.permissions > 0) {
log.warn("skipped orphaned permissions", { count: orphans.permissions })
}
// Migrate session shares
const shareSessions = shareFiles.map((file) => path.basename(file, ".json"))
const shareValues = [] as any[]
for (let i = 0; i < shareFiles.length; i += batchSize) {
const end = Math.min(i + batchSize, shareFiles.length)
const batch = await read(shareFiles, i, end)
shareValues.length = 0
for (let j = 0; j < batch.length; j++) {
const data = batch[j]
if (!data) continue
const sessionID = shareSessions[i + j]
if (!sessionIds.has(sessionID)) {
orphans.shares++
continue
}
if (!data?.id || !data?.secret || !data?.url) {
errs.push(`session_share missing id/secret/url: ${shareFiles[i + j]}`)
continue
}
shareValues.push({ session_id: sessionID, id: data.id, secret: data.secret, url: data.url })
}
stats.shares += insert(shareValues, SessionShareTable, "session_share")
step("shares", end - i)
}
log.info("migrated session shares", { count: stats.shares })
if (orphans.shares > 0) {
log.warn("skipped orphaned session shares", { count: orphans.shares })
}
sqlite.exec("COMMIT")
log.info("json migration complete", {
projects: stats.projects,
sessions: stats.sessions,
messages: stats.messages,
parts: stats.parts,
todos: stats.todos,
permissions: stats.permissions,
shares: stats.shares,
errorCount: stats.errors.length,
duration: Math.round(performance.now() - start),
})
if (stats.errors.length > 0) {
log.warn("migration errors", { errors: stats.errors.slice(0, 20) })
}
progress?.({ current: total, total, label: "complete" })
return stats
}
}
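`JsonMigration.run` expects an already-open `bun:sqlite` handle with the drizzle schema applied, wraps the whole import in one transaction, and reports progress per batch. Usage sketch (the db path is illustrative; in the app this runs at startup):

```ts
import { Database } from "bun:sqlite"

const sqlite = new Database("/tmp/opencode.db", { create: true })
// Schema migrations must have been applied to this handle first.
const stats = await JsonMigration.run(sqlite, {
  progress: ({ current, total, label }) => console.log(`${label}: ${current}/${total}`),
})
console.log(`${stats.sessions} sessions, ${stats.errors.length} errors`)
```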

View File

@@ -0,0 +1,10 @@
import { integer } from "drizzle-orm/sqlite-core"
export const Timestamps = {
time_created: integer()
.notNull()
.$default(() => Date.now()),
time_updated: integer()
.notNull()
.$onUpdate(() => Date.now()),
}

View File

@@ -114,7 +114,7 @@ export namespace ToolRegistry {
ApplyPatchTool,
...(Flag.OPENCODE_EXPERIMENTAL_LSP_TOOL ? [LspTool] : []),
...(config.experimental?.batch_tool === true ? [BatchTool] : []),
...(Flag.OPENCODE_EXPERIMENTAL_PLAN_MODE && Flag.OPENCODE_CLIENT === "cli" ? [PlanExitTool, PlanEnterTool] : []),
...(Flag.OPENCODE_CLIENT === "cli" ? [PlanExitTool, PlanEnterTool] : []),
...custom,
]
}

View File

@@ -4,9 +4,14 @@ export function lazy<T>(fn: () => T) {
const result = (): T => {
if (loaded) return value as T
loaded = true
value = fn()
return value as T
try {
value = fn()
loaded = true
return value as T
} catch (e) {
// Don't mark as loaded if initialization failed
throw e
}
}
result.reset = () => {
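Before this fix a throwing initializer left `loaded = true` with an undefined value, so every later call returned `undefined`; now failure leaves the lazy unset and the next call retries. Behavior sketch:

```ts
// Behavior sketch for the fix above: a failing initializer no longer
// poisons the cached value.
let attempts = 0
const db = lazy(() => {
  attempts++
  if (attempts === 1) throw new Error("disk locked")
  return "connected"
})

try { db() } catch {} // first call throws; loaded stays false
console.log(db()) // "connected" -- the initializer ran again
```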

View File

@@ -7,7 +7,8 @@ import { Global } from "../global"
import { Instance } from "../project/instance"
import { InstanceBootstrap } from "../project/bootstrap"
import { Project } from "../project/project"
import { Storage } from "../storage/storage"
import { Database, eq } from "../storage/db"
import { ProjectTable } from "../project/project.sql"
import { fn } from "../util/fn"
import { Log } from "../util/log"
import { BusEvent } from "@/bus/bus-event"
@@ -307,7 +308,8 @@ export namespace Worktree {
}
async function runStartScripts(directory: string, input: { projectID: string; extra?: string }) {
const project = await Storage.read<Project.Info>(["project", input.projectID]).catch(() => undefined)
const row = Database.use((db) => db.select().from(ProjectTable).where(eq(ProjectTable.id, input.projectID)).get())
const project = row ? Project.fromRow(row) : undefined
const startup = project?.commands?.start?.trim() ?? ""
const ok = await runStartScript(directory, startup, "project")
if (!ok) return false

View File

@@ -122,12 +122,20 @@ function createFakeAgent() {
messages: async () => {
return { data: [] }
},
message: async () => {
message: async (params?: any) => {
// Return a message with parts that can be looked up by partID
return {
data: {
info: {
role: "assistant",
},
parts: [
{
id: params?.messageID ? `${params.messageID}_part` : "part_1",
type: "text",
text: "",
},
],
},
}
},
@@ -193,7 +201,7 @@ function createFakeAgent() {
}
describe("acp.agent event subscription", () => {
test("routes message.part.updated by the event sessionID (no cross-session pollution)", async () => {
test("routes message.part.delta by the event sessionID (no cross-session pollution)", async () => {
await using tmp = await tmpdir()
await Instance.provide({
directory: tmp.path,
@@ -207,14 +215,12 @@ describe("acp.agent event subscription", () => {
controller.push({
directory: cwd,
payload: {
type: "message.part.updated",
type: "message.part.delta",
properties: {
part: {
sessionID: sessionB,
messageID: "msg_1",
type: "text",
synthetic: false,
},
sessionID: sessionB,
messageID: "msg_1",
partID: "msg_1_part",
field: "text",
delta: "hello",
},
},
@@ -230,7 +236,7 @@ describe("acp.agent event subscription", () => {
})
})
test("keeps concurrent sessions isolated when message.part.updated events are interleaved", async () => {
test("keeps concurrent sessions isolated when message.part.delta events are interleaved", async () => {
await using tmp = await tmpdir()
await Instance.provide({
directory: tmp.path,
@@ -248,14 +254,12 @@ describe("acp.agent event subscription", () => {
controller.push({
directory: cwd,
payload: {
type: "message.part.updated",
type: "message.part.delta",
properties: {
part: {
sessionID: sessionId,
messageID,
type: "text",
synthetic: false,
},
sessionID: sessionId,
messageID,
partID: `${messageID}_part`,
field: "text",
delta,
},
},
@@ -402,14 +406,12 @@ describe("acp.agent event subscription", () => {
controller.push({
directory: cwd,
payload: {
type: "message.part.updated",
type: "message.part.delta",
properties: {
part: {
sessionID: sessionB,
messageID: "msg_b",
type: "text",
synthetic: false,
},
sessionID: sessionB,
messageID: "msg_b",
partID: "msg_b_part",
field: "text",
delta: "session_b_message",
},
},

View File

@@ -2,7 +2,6 @@ import { test, expect } from "bun:test"
import os from "os"
import { PermissionNext } from "../../src/permission/next"
import { Instance } from "../../src/project/instance"
import { Storage } from "../../src/storage/storage"
import { tmpdir } from "../fixture/fixture"
// fromConfig tests

View File

@@ -1,63 +1,70 @@
// IMPORTANT: Set env vars BEFORE any imports from src/ directory
// xdg-basedir reads env vars at import time, so we must set these first
import os from "os"
import path from "path"
import fs from "fs/promises"
import fsSync from "fs"
import { afterAll } from "bun:test"
import os from "os";
import path from "path";
import fs from "fs/promises";
import fsSync from "fs";
import { afterAll } from "bun:test";
const dir = path.join(os.tmpdir(), "opencode-test-data-" + process.pid)
await fs.mkdir(dir, { recursive: true })
// Set XDG env vars FIRST, before any src/ imports
const dir = path.join(os.tmpdir(), "opencode-test-data-" + process.pid);
await fs.mkdir(dir, { recursive: true });
afterAll(() => {
fsSync.rmSync(dir, { recursive: true, force: true })
})
fsSync.rmSync(dir, { recursive: true, force: true });
});
process.env["XDG_DATA_HOME"] = path.join(dir, "share");
process.env["XDG_CACHE_HOME"] = path.join(dir, "cache");
process.env["XDG_CONFIG_HOME"] = path.join(dir, "config");
process.env["XDG_STATE_HOME"] = path.join(dir, "state");
process.env["OPENCODE_MODELS_PATH"] = path.join(
import.meta.dir,
"tool",
"fixtures",
"models-api.json",
);
// Set test home directory to isolate tests from user's actual home directory
// This prevents tests from picking up real user configs/skills from ~/.claude/skills
const testHome = path.join(dir, "home")
await fs.mkdir(testHome, { recursive: true })
process.env["OPENCODE_TEST_HOME"] = testHome
const testHome = path.join(dir, "home");
await fs.mkdir(testHome, { recursive: true });
process.env["OPENCODE_TEST_HOME"] = testHome;
// Set test managed config directory to isolate tests from system managed settings
const testManagedConfigDir = path.join(dir, "managed")
process.env["OPENCODE_TEST_MANAGED_CONFIG_DIR"] = testManagedConfigDir
process.env["XDG_DATA_HOME"] = path.join(dir, "share")
process.env["XDG_CACHE_HOME"] = path.join(dir, "cache")
process.env["XDG_CONFIG_HOME"] = path.join(dir, "config")
process.env["XDG_STATE_HOME"] = path.join(dir, "state")
process.env["OPENCODE_MODELS_PATH"] = path.join(import.meta.dir, "tool", "fixtures", "models-api.json")
const testManagedConfigDir = path.join(dir, "managed");
process.env["OPENCODE_TEST_MANAGED_CONFIG_DIR"] = testManagedConfigDir;
// Write the cache version file to prevent global/index.ts from clearing the cache
const cacheDir = path.join(dir, "cache", "opencode")
await fs.mkdir(cacheDir, { recursive: true })
await fs.writeFile(path.join(cacheDir, "version"), "14")
const cacheDir = path.join(dir, "cache", "opencode");
await fs.mkdir(cacheDir, { recursive: true });
await fs.writeFile(path.join(cacheDir, "version"), "14");
// Clear provider env vars to ensure clean test state
delete process.env["ANTHROPIC_API_KEY"]
delete process.env["OPENAI_API_KEY"]
delete process.env["GOOGLE_API_KEY"]
delete process.env["GOOGLE_GENERATIVE_AI_API_KEY"]
delete process.env["AZURE_OPENAI_API_KEY"]
delete process.env["AWS_ACCESS_KEY_ID"]
delete process.env["AWS_PROFILE"]
delete process.env["AWS_REGION"]
delete process.env["AWS_BEARER_TOKEN_BEDROCK"]
delete process.env["OPENROUTER_API_KEY"]
delete process.env["GROQ_API_KEY"]
delete process.env["MISTRAL_API_KEY"]
delete process.env["PERPLEXITY_API_KEY"]
delete process.env["TOGETHER_API_KEY"]
delete process.env["XAI_API_KEY"]
delete process.env["DEEPSEEK_API_KEY"]
delete process.env["FIREWORKS_API_KEY"]
delete process.env["CEREBRAS_API_KEY"]
delete process.env["SAMBANOVA_API_KEY"]
delete process.env["ANTHROPIC_API_KEY"];
delete process.env["OPENAI_API_KEY"];
delete process.env["GOOGLE_API_KEY"];
delete process.env["GOOGLE_GENERATIVE_AI_API_KEY"];
delete process.env["AZURE_OPENAI_API_KEY"];
delete process.env["AWS_ACCESS_KEY_ID"];
delete process.env["AWS_PROFILE"];
delete process.env["AWS_REGION"];
delete process.env["AWS_BEARER_TOKEN_BEDROCK"];
delete process.env["OPENROUTER_API_KEY"];
delete process.env["GROQ_API_KEY"];
delete process.env["MISTRAL_API_KEY"];
delete process.env["PERPLEXITY_API_KEY"];
delete process.env["TOGETHER_API_KEY"];
delete process.env["XAI_API_KEY"];
delete process.env["DEEPSEEK_API_KEY"];
delete process.env["FIREWORKS_API_KEY"];
delete process.env["CEREBRAS_API_KEY"];
delete process.env["SAMBANOVA_API_KEY"];
// Now safe to import from src/
const { Log } = await import("../src/util/log")
const { Log } = await import("../src/util/log");
Log.init({
print: false,
dev: true,
level: "DEBUG",
})
});

View File

@@ -1,7 +1,6 @@
import { describe, expect, test } from "bun:test"
import { Project } from "../../src/project/project"
import { Log } from "../../src/util/log"
import { Storage } from "../../src/storage/storage"
import { $ } from "bun"
import path from "path"
import { tmpdir } from "../fixture/fixture"
@@ -55,37 +54,50 @@ describe("Project.fromDirectory with worktrees", () => {
test("should set worktree to root when called from a worktree", async () => {
await using tmp = await tmpdir({ git: true })
const worktreePath = path.join(tmp.path, "..", "worktree-test")
await $`git worktree add ${worktreePath} -b test-branch`.cwd(tmp.path).quiet()
const worktreePath = path.join(tmp.path, "..", path.basename(tmp.path) + "-worktree")
try {
await $`git worktree add ${worktreePath} -b test-branch-${Date.now()}`.cwd(tmp.path).quiet()
const { project, sandbox } = await Project.fromDirectory(worktreePath)
const { project, sandbox } = await Project.fromDirectory(worktreePath)
expect(project.worktree).toBe(tmp.path)
expect(sandbox).toBe(worktreePath)
expect(project.sandboxes).toContain(worktreePath)
expect(project.sandboxes).not.toContain(tmp.path)
await $`git worktree remove ${worktreePath}`.cwd(tmp.path).quiet()
expect(project.worktree).toBe(tmp.path)
expect(sandbox).toBe(worktreePath)
expect(project.sandboxes).toContain(worktreePath)
expect(project.sandboxes).not.toContain(tmp.path)
} finally {
await $`git worktree remove ${worktreePath}`
.cwd(tmp.path)
.quiet()
.catch(() => {})
}
})
test("should accumulate multiple worktrees in sandboxes", async () => {
await using tmp = await tmpdir({ git: true })
const worktree1 = path.join(tmp.path, "..", "worktree-1")
const worktree2 = path.join(tmp.path, "..", "worktree-2")
await $`git worktree add ${worktree1} -b branch-1`.cwd(tmp.path).quiet()
await $`git worktree add ${worktree2} -b branch-2`.cwd(tmp.path).quiet()
const worktree1 = path.join(tmp.path, "..", path.basename(tmp.path) + "-wt1")
const worktree2 = path.join(tmp.path, "..", path.basename(tmp.path) + "-wt2")
try {
await $`git worktree add ${worktree1} -b branch-${Date.now()}`.cwd(tmp.path).quiet()
await $`git worktree add ${worktree2} -b branch-${Date.now() + 1}`.cwd(tmp.path).quiet()
await Project.fromDirectory(worktree1)
const { project } = await Project.fromDirectory(worktree2)
await Project.fromDirectory(worktree1)
const { project } = await Project.fromDirectory(worktree2)
expect(project.worktree).toBe(tmp.path)
expect(project.sandboxes).toContain(worktree1)
expect(project.sandboxes).toContain(worktree2)
expect(project.sandboxes).not.toContain(tmp.path)
await $`git worktree remove ${worktree1}`.cwd(tmp.path).quiet()
await $`git worktree remove ${worktree2}`.cwd(tmp.path).quiet()
expect(project.worktree).toBe(tmp.path)
expect(project.sandboxes).toContain(worktree1)
expect(project.sandboxes).toContain(worktree2)
expect(project.sandboxes).not.toContain(tmp.path)
} finally {
await $`git worktree remove ${worktree1}`
.cwd(tmp.path)
.quiet()
.catch(() => {})
await $`git worktree remove ${worktree2}`
.cwd(tmp.path)
.quiet()
.catch(() => {})
}
})
})
@@ -99,11 +111,12 @@ describe("Project.discover", () => {
await Project.discover(project)
const updated = await Storage.read<Project.Info>(["project", project.id])
expect(updated.icon).toBeDefined()
expect(updated.icon?.url).toStartWith("data:")
expect(updated.icon?.url).toContain("base64")
expect(updated.icon?.color).toBeUndefined()
const updated = Project.get(project.id)
expect(updated).toBeDefined()
expect(updated!.icon).toBeDefined()
expect(updated!.icon?.url).toStartWith("data:")
expect(updated!.icon?.url).toContain("base64")
expect(updated!.icon?.color).toBeUndefined()
})
test("should not discover non-image files", async () => {
@@ -114,7 +127,8 @@ describe("Project.discover", () => {
await Project.discover(project)
const updated = await Storage.read<Project.Info>(["project", project.id])
expect(updated.icon).toBeUndefined()
const updated = Project.get(project.id)
expect(updated).toBeDefined()
expect(updated!.icon).toBeUndefined()
})
})

View File

@@ -0,0 +1,687 @@
import { describe, test, expect, beforeEach, afterEach } from "bun:test"
import { Database } from "bun:sqlite"
import { drizzle } from "drizzle-orm/bun-sqlite"
import { migrate } from "drizzle-orm/bun-sqlite/migrator"
import path from "path"
import fs from "fs/promises"
import { readFileSync, readdirSync } from "fs"
import { JsonMigration } from "../../src/storage/json-migration"
import { Global } from "../../src/global"
import { ProjectTable } from "../../src/project/project.sql"
import { SessionTable, MessageTable, PartTable, TodoTable, PermissionTable } from "../../src/session/session.sql"
import { SessionShareTable } from "../../src/share/share.sql"
// Test fixtures
const fixtures = {
project: {
id: "proj_test123abc",
name: "Test Project",
worktree: "/test/path",
vcs: "git" as const,
sandboxes: [],
},
session: {
id: "ses_test456def",
projectID: "proj_test123abc",
slug: "test-session",
directory: "/test/path",
title: "Test Session",
version: "1.0.0",
time: { created: 1700000000000, updated: 1700000001000 },
},
message: {
id: "msg_test789ghi",
sessionID: "ses_test456def",
role: "user" as const,
agent: "default",
model: { providerID: "openai", modelID: "gpt-4" },
time: { created: 1700000000000 },
},
part: {
id: "prt_testabc123",
messageID: "msg_test789ghi",
sessionID: "ses_test456def",
type: "text" as const,
text: "Hello, world!",
},
}
// Helper to create test storage directory structure
async function setupStorageDir() {
const storageDir = path.join(Global.Path.data, "storage")
await fs.rm(storageDir, { recursive: true, force: true })
await fs.mkdir(path.join(storageDir, "project"), { recursive: true })
await fs.mkdir(path.join(storageDir, "session", "proj_test123abc"), { recursive: true })
await fs.mkdir(path.join(storageDir, "message", "ses_test456def"), { recursive: true })
await fs.mkdir(path.join(storageDir, "part", "msg_test789ghi"), { recursive: true })
await fs.mkdir(path.join(storageDir, "session_diff"), { recursive: true })
await fs.mkdir(path.join(storageDir, "todo"), { recursive: true })
await fs.mkdir(path.join(storageDir, "permission"), { recursive: true })
await fs.mkdir(path.join(storageDir, "session_share"), { recursive: true })
// Create legacy marker to indicate JSON storage exists
await Bun.write(path.join(storageDir, "migration"), "1")
return storageDir
}
async function writeProject(storageDir: string, project: Record<string, unknown>) {
await Bun.write(path.join(storageDir, "project", `${project.id}.json`), JSON.stringify(project))
}
async function writeSession(storageDir: string, projectID: string, session: Record<string, unknown>) {
await Bun.write(path.join(storageDir, "session", projectID, `${session.id}.json`), JSON.stringify(session))
}
// Helper to create in-memory test database with schema
function createTestDb() {
const sqlite = new Database(":memory:")
sqlite.exec("PRAGMA foreign_keys = ON")
// Apply schema migrations using drizzle migrate
const dir = path.join(import.meta.dirname, "../../migration")
const entries = readdirSync(dir, { withFileTypes: true })
const migrations = entries
.filter((entry) => entry.isDirectory())
.map((entry) => ({
sql: readFileSync(path.join(dir, entry.name, "migration.sql"), "utf-8"),
timestamp: Number(entry.name.split("_")[0]),
}))
.sort((a, b) => a.timestamp - b.timestamp)
migrate(drizzle({ client: sqlite }), migrations)
return sqlite
}
describe("JSON to SQLite migration", () => {
let storageDir: string
let sqlite: Database
beforeEach(async () => {
storageDir = await setupStorageDir()
sqlite = createTestDb()
})
afterEach(async () => {
sqlite.close()
await fs.rm(storageDir, { recursive: true, force: true })
})
test("migrates project", async () => {
await writeProject(storageDir, {
id: "proj_test123abc",
worktree: "/test/path",
vcs: "git",
name: "Test Project",
time: { created: 1700000000000, updated: 1700000001000 },
sandboxes: ["/test/sandbox"],
})
const stats = await JsonMigration.run(sqlite)
expect(stats?.projects).toBe(1)
const db = drizzle({ client: sqlite })
const projects = db.select().from(ProjectTable).all()
expect(projects.length).toBe(1)
expect(projects[0].id).toBe("proj_test123abc")
expect(projects[0].worktree).toBe("/test/path")
expect(projects[0].name).toBe("Test Project")
expect(projects[0].sandboxes).toEqual(["/test/sandbox"])
})
test("migrates project with commands", async () => {
await writeProject(storageDir, {
id: "proj_with_commands",
worktree: "/test/path",
vcs: "git",
name: "Project With Commands",
time: { created: 1700000000000, updated: 1700000001000 },
sandboxes: ["/test/sandbox"],
commands: { start: "npm run dev" },
})
const stats = await JsonMigration.run(sqlite)
expect(stats?.projects).toBe(1)
const db = drizzle({ client: sqlite })
const projects = db.select().from(ProjectTable).all()
expect(projects.length).toBe(1)
expect(projects[0].id).toBe("proj_with_commands")
expect(projects[0].commands).toEqual({ start: "npm run dev" })
})
test("migrates project without commands field", async () => {
await writeProject(storageDir, {
id: "proj_no_commands",
worktree: "/test/path",
vcs: "git",
name: "Project Without Commands",
time: { created: 1700000000000, updated: 1700000001000 },
sandboxes: [],
})
const stats = await JsonMigration.run(sqlite)
expect(stats?.projects).toBe(1)
const db = drizzle({ client: sqlite })
const projects = db.select().from(ProjectTable).all()
expect(projects.length).toBe(1)
expect(projects[0].id).toBe("proj_no_commands")
expect(projects[0].commands).toBeNull()
})
test("migrates session with individual columns", async () => {
await writeProject(storageDir, {
id: "proj_test123abc",
worktree: "/test/path",
time: { created: Date.now(), updated: Date.now() },
sandboxes: [],
})
await writeSession(storageDir, "proj_test123abc", {
id: "ses_test456def",
projectID: "proj_test123abc",
slug: "test-session",
directory: "/test/dir",
title: "Test Session Title",
version: "1.0.0",
time: { created: 1700000000000, updated: 1700000001000 },
summary: { additions: 10, deletions: 5, files: 3 },
share: { url: "https://example.com/share" },
})
await JsonMigration.run(sqlite)
const db = drizzle({ client: sqlite })
const sessions = db.select().from(SessionTable).all()
expect(sessions.length).toBe(1)
expect(sessions[0].id).toBe("ses_test456def")
expect(sessions[0].project_id).toBe("proj_test123abc")
expect(sessions[0].slug).toBe("test-session")
expect(sessions[0].title).toBe("Test Session Title")
expect(sessions[0].summary_additions).toBe(10)
expect(sessions[0].summary_deletions).toBe(5)
expect(sessions[0].share_url).toBe("https://example.com/share")
})
test("migrates messages and parts", async () => {
await writeProject(storageDir, {
id: "proj_test123abc",
worktree: "/",
time: { created: Date.now(), updated: Date.now() },
sandboxes: [],
})
await writeSession(storageDir, "proj_test123abc", { ...fixtures.session })
await Bun.write(
path.join(storageDir, "message", "ses_test456def", "msg_test789ghi.json"),
JSON.stringify({ ...fixtures.message }),
)
await Bun.write(
path.join(storageDir, "part", "msg_test789ghi", "prt_testabc123.json"),
JSON.stringify({ ...fixtures.part }),
)
const stats = await JsonMigration.run(sqlite)
expect(stats?.messages).toBe(1)
expect(stats?.parts).toBe(1)
const db = drizzle({ client: sqlite })
const messages = db.select().from(MessageTable).all()
expect(messages.length).toBe(1)
expect(messages[0].id).toBe("msg_test789ghi")
const parts = db.select().from(PartTable).all()
expect(parts.length).toBe(1)
expect(parts[0].id).toBe("prt_testabc123")
})
test("migrates legacy parts without ids in body", async () => {
await writeProject(storageDir, {
id: "proj_test123abc",
worktree: "/",
time: { created: Date.now(), updated: Date.now() },
sandboxes: [],
})
await writeSession(storageDir, "proj_test123abc", { ...fixtures.session })
await Bun.write(
path.join(storageDir, "message", "ses_test456def", "msg_test789ghi.json"),
JSON.stringify({
role: "user",
agent: "default",
model: { providerID: "openai", modelID: "gpt-4" },
time: { created: 1700000000000 },
}),
)
await Bun.write(
path.join(storageDir, "part", "msg_test789ghi", "prt_testabc123.json"),
JSON.stringify({
type: "text",
text: "Hello, world!",
}),
)
const stats = await JsonMigration.run(sqlite)
expect(stats?.messages).toBe(1)
expect(stats?.parts).toBe(1)
const db = drizzle({ client: sqlite })
const messages = db.select().from(MessageTable).all()
expect(messages.length).toBe(1)
expect(messages[0].id).toBe("msg_test789ghi")
expect(messages[0].session_id).toBe("ses_test456def")
expect(messages[0].data).not.toHaveProperty("id")
expect(messages[0].data).not.toHaveProperty("sessionID")
const parts = db.select().from(PartTable).all()
expect(parts.length).toBe(1)
expect(parts[0].id).toBe("prt_testabc123")
expect(parts[0].message_id).toBe("msg_test789ghi")
expect(parts[0].session_id).toBe("ses_test456def")
expect(parts[0].data).not.toHaveProperty("id")
expect(parts[0].data).not.toHaveProperty("messageID")
expect(parts[0].data).not.toHaveProperty("sessionID")
})
test("skips orphaned sessions (no parent project)", async () => {
await Bun.write(
path.join(storageDir, "session", "proj_test123abc", "ses_orphan.json"),
JSON.stringify({
id: "ses_orphan",
projectID: "proj_nonexistent",
slug: "orphan",
directory: "/",
title: "Orphan",
version: "1.0.0",
time: { created: Date.now(), updated: Date.now() },
}),
)
const stats = await JsonMigration.run(sqlite)
expect(stats?.sessions).toBe(0)
})
test("is idempotent (running twice doesn't duplicate)", async () => {
await writeProject(storageDir, {
id: "proj_test123abc",
worktree: "/",
time: { created: Date.now(), updated: Date.now() },
sandboxes: [],
})
await JsonMigration.run(sqlite)
await JsonMigration.run(sqlite)
const db = drizzle({ client: sqlite })
const projects = db.select().from(ProjectTable).all()
expect(projects.length).toBe(1) // Still only 1 due to onConflictDoNothing
})
test("migrates todos", async () => {
await writeProject(storageDir, {
id: "proj_test123abc",
worktree: "/",
time: { created: Date.now(), updated: Date.now() },
sandboxes: [],
})
await writeSession(storageDir, "proj_test123abc", { ...fixtures.session })
// Create todo file (named by sessionID, contains array of todos)
await Bun.write(
path.join(storageDir, "todo", "ses_test456def.json"),
JSON.stringify([
{
id: "todo_1",
content: "First todo",
status: "pending",
priority: "high",
},
{
id: "todo_2",
content: "Second todo",
status: "completed",
priority: "medium",
},
]),
)
const stats = await JsonMigration.run(sqlite)
expect(stats?.todos).toBe(2)
const db = drizzle({ client: sqlite })
const todos = db.select().from(TodoTable).orderBy(TodoTable.position).all()
expect(todos.length).toBe(2)
expect(todos[0].content).toBe("First todo")
expect(todos[0].status).toBe("pending")
expect(todos[0].priority).toBe("high")
expect(todos[0].position).toBe(0)
expect(todos[1].content).toBe("Second todo")
expect(todos[1].position).toBe(1)
})
test("todos are ordered by position", async () => {
await writeProject(storageDir, {
id: "proj_test123abc",
worktree: "/",
time: { created: Date.now(), updated: Date.now() },
sandboxes: [],
})
await writeSession(storageDir, "proj_test123abc", { ...fixtures.session })
await Bun.write(
path.join(storageDir, "todo", "ses_test456def.json"),
JSON.stringify([
{ content: "Third", status: "pending", priority: "low" },
{ content: "First", status: "pending", priority: "high" },
{ content: "Second", status: "in_progress", priority: "medium" },
]),
)
await JsonMigration.run(sqlite)
const db = drizzle({ client: sqlite })
const todos = db.select().from(TodoTable).orderBy(TodoTable.position).all()
expect(todos.length).toBe(3)
expect(todos[0].content).toBe("Third")
expect(todos[0].position).toBe(0)
expect(todos[1].content).toBe("First")
expect(todos[1].position).toBe(1)
expect(todos[2].content).toBe("Second")
expect(todos[2].position).toBe(2)
})
test("migrates permissions", async () => {
await writeProject(storageDir, {
id: "proj_test123abc",
worktree: "/",
time: { created: Date.now(), updated: Date.now() },
sandboxes: [],
})
// Create permission file (named by projectID, contains array of rules)
const permissionData = [
{ permission: "file.read", pattern: "/test/file1.ts", action: "allow" as const },
{ permission: "file.write", pattern: "/test/file2.ts", action: "ask" as const },
{ permission: "command.run", pattern: "npm install", action: "deny" as const },
]
await Bun.write(path.join(storageDir, "permission", "proj_test123abc.json"), JSON.stringify(permissionData))
const stats = await JsonMigration.run(sqlite)
expect(stats?.permissions).toBe(1)
const db = drizzle({ client: sqlite })
const permissions = db.select().from(PermissionTable).all()
expect(permissions.length).toBe(1)
expect(permissions[0].project_id).toBe("proj_test123abc")
expect(permissions[0].data).toEqual(permissionData)
})
test("migrates session shares", async () => {
await writeProject(storageDir, {
id: "proj_test123abc",
worktree: "/",
time: { created: Date.now(), updated: Date.now() },
sandboxes: [],
})
await writeSession(storageDir, "proj_test123abc", { ...fixtures.session })
// Create session share file (named by sessionID)
await Bun.write(
path.join(storageDir, "session_share", "ses_test456def.json"),
JSON.stringify({
id: "share_123",
secret: "supersecretkey",
url: "https://share.example.com/ses_test456def",
}),
)
const stats = await JsonMigration.run(sqlite)
expect(stats?.shares).toBe(1)
const db = drizzle({ client: sqlite })
const shares = db.select().from(SessionShareTable).all()
expect(shares.length).toBe(1)
expect(shares[0].session_id).toBe("ses_test456def")
expect(shares[0].id).toBe("share_123")
expect(shares[0].secret).toBe("supersecretkey")
expect(shares[0].url).toBe("https://share.example.com/ses_test456def")
})
test("returns empty stats when storage directory does not exist", async () => {
await fs.rm(storageDir, { recursive: true, force: true })
const stats = await JsonMigration.run(sqlite)
expect(stats.projects).toBe(0)
expect(stats.sessions).toBe(0)
expect(stats.messages).toBe(0)
expect(stats.parts).toBe(0)
expect(stats.todos).toBe(0)
expect(stats.permissions).toBe(0)
expect(stats.shares).toBe(0)
expect(stats.errors).toEqual([])
})
test("continues when a JSON file is unreadable and records an error", async () => {
await writeProject(storageDir, {
id: "proj_test123abc",
worktree: "/",
time: { created: Date.now(), updated: Date.now() },
sandboxes: [],
})
await Bun.write(path.join(storageDir, "project", "broken.json"), "{ invalid json")
const stats = await JsonMigration.run(sqlite)
expect(stats.projects).toBe(1)
expect(stats.errors.some((x) => x.includes("failed to read") && x.includes("broken.json"))).toBe(true)
const db = drizzle({ client: sqlite })
const projects = db.select().from(ProjectTable).all()
expect(projects.length).toBe(1)
expect(projects[0].id).toBe("proj_test123abc")
})
test("skips invalid todo entries while preserving source positions", async () => {
await writeProject(storageDir, {
id: "proj_test123abc",
worktree: "/",
time: { created: Date.now(), updated: Date.now() },
sandboxes: [],
})
await writeSession(storageDir, "proj_test123abc", { ...fixtures.session })
await Bun.write(
path.join(storageDir, "todo", "ses_test456def.json"),
JSON.stringify([
{ content: "keep-0", status: "pending", priority: "high" },
{ content: "drop-1", priority: "low" },
{ content: "keep-2", status: "completed", priority: "medium" },
]),
)
const stats = await JsonMigration.run(sqlite)
expect(stats.todos).toBe(2)
const db = drizzle({ client: sqlite })
const todos = db.select().from(TodoTable).orderBy(TodoTable.position).all()
expect(todos.length).toBe(2)
expect(todos[0].content).toBe("keep-0")
expect(todos[0].position).toBe(0)
expect(todos[1].content).toBe("keep-2")
expect(todos[1].position).toBe(2)
})
test("skips orphaned todos, permissions, and shares", async () => {
await writeProject(storageDir, {
id: "proj_test123abc",
worktree: "/",
time: { created: Date.now(), updated: Date.now() },
sandboxes: [],
})
await writeSession(storageDir, "proj_test123abc", { ...fixtures.session })
await Bun.write(
path.join(storageDir, "todo", "ses_test456def.json"),
JSON.stringify([{ content: "valid", status: "pending", priority: "high" }]),
)
await Bun.write(
path.join(storageDir, "todo", "ses_missing.json"),
JSON.stringify([{ content: "orphan", status: "pending", priority: "high" }]),
)
await Bun.write(
path.join(storageDir, "permission", "proj_test123abc.json"),
JSON.stringify([{ permission: "file.read" }]),
)
await Bun.write(
path.join(storageDir, "permission", "proj_missing.json"),
JSON.stringify([{ permission: "file.write" }]),
)
await Bun.write(
path.join(storageDir, "session_share", "ses_test456def.json"),
JSON.stringify({ id: "share_ok", secret: "secret", url: "https://ok.example.com" }),
)
await Bun.write(
path.join(storageDir, "session_share", "ses_missing.json"),
JSON.stringify({ id: "share_missing", secret: "secret", url: "https://missing.example.com" }),
)
const stats = await JsonMigration.run(sqlite)
expect(stats.todos).toBe(1)
expect(stats.permissions).toBe(1)
expect(stats.shares).toBe(1)
const db = drizzle({ client: sqlite })
expect(db.select().from(TodoTable).all().length).toBe(1)
expect(db.select().from(PermissionTable).all().length).toBe(1)
expect(db.select().from(SessionShareTable).all().length).toBe(1)
})
test("handles mixed corruption and partial validity in one migration run", async () => {
await writeProject(storageDir, {
id: "proj_test123abc",
worktree: "/ok",
time: { created: 1700000000000, updated: 1700000001000 },
sandboxes: [],
})
await Bun.write(
path.join(storageDir, "project", "proj_missing_id.json"),
JSON.stringify({ worktree: "/bad", sandboxes: [] }),
)
await Bun.write(path.join(storageDir, "project", "proj_broken.json"), "{ nope")
await writeSession(storageDir, "proj_test123abc", {
id: "ses_test456def",
projectID: "proj_test123abc",
slug: "ok",
directory: "/ok",
title: "Ok",
version: "1",
time: { created: 1700000000000, updated: 1700000001000 },
})
await Bun.write(
path.join(storageDir, "session", "proj_test123abc", "ses_missing_project.json"),
JSON.stringify({
id: "ses_missing_project",
slug: "bad",
directory: "/bad",
title: "Bad",
version: "1",
}),
)
await Bun.write(
path.join(storageDir, "session", "proj_test123abc", "ses_orphan.json"),
JSON.stringify({
id: "ses_orphan",
projectID: "proj_missing",
slug: "orphan",
directory: "/bad",
title: "Orphan",
version: "1",
}),
)
await Bun.write(
path.join(storageDir, "message", "ses_test456def", "msg_ok.json"),
JSON.stringify({ role: "user", time: { created: 1700000000000 } }),
)
await Bun.write(path.join(storageDir, "message", "ses_test456def", "msg_broken.json"), "{ nope")
await Bun.write(
path.join(storageDir, "message", "ses_missing", "msg_orphan.json"),
JSON.stringify({ role: "user", time: { created: 1700000000000 } }),
)
await Bun.write(
path.join(storageDir, "part", "msg_ok", "part_ok.json"),
JSON.stringify({ type: "text", text: "ok" }),
)
await Bun.write(
path.join(storageDir, "part", "msg_missing", "part_missing_message.json"),
JSON.stringify({ type: "text", text: "bad" }),
)
await Bun.write(path.join(storageDir, "part", "msg_ok", "part_broken.json"), "{ nope")
await Bun.write(
path.join(storageDir, "todo", "ses_test456def.json"),
JSON.stringify([
{ content: "ok", status: "pending", priority: "high" },
{ content: "skip", status: "pending" },
]),
)
await Bun.write(
path.join(storageDir, "todo", "ses_missing.json"),
JSON.stringify([{ content: "orphan", status: "pending", priority: "high" }]),
)
await Bun.write(path.join(storageDir, "todo", "ses_broken.json"), "{ nope")
await Bun.write(
path.join(storageDir, "permission", "proj_test123abc.json"),
JSON.stringify([{ permission: "file.read" }]),
)
await Bun.write(
path.join(storageDir, "permission", "proj_missing.json"),
JSON.stringify([{ permission: "file.write" }]),
)
await Bun.write(path.join(storageDir, "permission", "proj_broken.json"), "{ nope")
await Bun.write(
path.join(storageDir, "session_share", "ses_test456def.json"),
JSON.stringify({ id: "share_ok", secret: "secret", url: "https://ok.example.com" }),
)
await Bun.write(
path.join(storageDir, "session_share", "ses_missing.json"),
JSON.stringify({ id: "share_orphan", secret: "secret", url: "https://missing.example.com" }),
)
await Bun.write(path.join(storageDir, "session_share", "ses_broken.json"), "{ nope")
const stats = await JsonMigration.run(sqlite)
expect(stats.projects).toBe(1)
expect(stats.sessions).toBe(1)
expect(stats.messages).toBe(1)
expect(stats.parts).toBe(1)
expect(stats.todos).toBe(1)
expect(stats.permissions).toBe(1)
expect(stats.shares).toBe(1)
expect(stats.errors.length).toBeGreaterThanOrEqual(6)
const db = drizzle({ client: sqlite })
expect(db.select().from(ProjectTable).all().length).toBe(1)
expect(db.select().from(SessionTable).all().length).toBe(1)
expect(db.select().from(MessageTable).all().length).toBe(1)
expect(db.select().from(PartTable).all().length).toBe(1)
expect(db.select().from(TodoTable).all().length).toBe(1)
expect(db.select().from(PermissionTable).all().length).toBe(1)
expect(db.select().from(SessionShareTable).all().length).toBe(1)
})
})

View File

@@ -110,6 +110,8 @@ import type {
SessionForkResponses,
SessionGetErrors,
SessionGetResponses,
SessionHandoffErrors,
SessionHandoffResponses,
SessionInitErrors,
SessionInitResponses,
SessionListResponses,
@@ -1766,6 +1768,48 @@ export class Session extends HeyApiClient {
...params,
})
}
/**
* Handoff session
*
* Extract context and relevant files for another agent to continue the conversation.
*/
public handoff<ThrowOnError extends boolean = false>(
parameters: {
sessionID: string
directory?: string
model?: {
providerID: string
modelID: string
}
goal?: string
},
options?: Options<never, ThrowOnError>,
) {
const params = buildClientParams(
[parameters],
[
{
args: [
{ in: "path", key: "sessionID" },
{ in: "query", key: "directory" },
{ in: "body", key: "model" },
{ in: "body", key: "goal" },
],
},
],
)
return (options?.client ?? this.client).post<SessionHandoffResponses, SessionHandoffErrors, ThrowOnError>({
url: "/session/{sessionID}/handoff",
...options,
...params,
headers: {
"Content-Type": "application/json",
...options?.headers,
...params.headers,
},
})
}
}
export class Part extends HeyApiClient {

View File

@@ -498,7 +498,17 @@ export type EventMessagePartUpdated = {
type: "message.part.updated"
properties: {
part: Part
delta?: string
}
}
export type EventMessagePartDelta = {
type: "message.part.delta"
properties: {
sessionID: string
messageID: string
partID: string
field: string
delta: string
}
}
@@ -668,10 +678,6 @@ export type Todo = {
* Priority level of the task: high, medium, low
*/
priority: string
/**
* Unique identifier for the todo item
*/
id: string
}
export type EventTodoUpdated = {
@@ -920,6 +926,7 @@ export type Event =
| EventMessageUpdated
| EventMessageRemoved
| EventMessagePartUpdated
| EventMessagePartDelta
| EventMessagePartRemoved
| EventPermissionAsked
| EventPermissionReplied
@@ -1032,6 +1039,10 @@ export type KeybindsConfig = {
* Toggle model favorite status
*/
model_favorite_toggle?: string
/**
* Toggle showing all models
*/
model_show_all_toggle?: string
/**
* Share current session
*/
@@ -3830,6 +3841,51 @@ export type PermissionRespondResponses = {
export type PermissionRespondResponse = PermissionRespondResponses[keyof PermissionRespondResponses]
export type SessionHandoffData = {
body?: {
model: {
providerID: string
modelID: string
}
goal?: string
}
path: {
/**
* Session ID
*/
sessionID: string
}
query?: {
directory?: string
}
url: "/session/{sessionID}/handoff"
}
export type SessionHandoffErrors = {
/**
* Bad request
*/
400: BadRequestError
/**
* Not found
*/
404: NotFoundError
}
export type SessionHandoffError = SessionHandoffErrors[keyof SessionHandoffErrors]
export type SessionHandoffResponses = {
/**
* Handoff data extracted
*/
200: {
text: string
files: Array<string>
}
}
export type SessionHandoffResponse = SessionHandoffResponses[keyof SessionHandoffResponses]
export type PermissionReplyData = {
body?: {
reply: "once" | "always" | "reject"

View File

@@ -3297,6 +3297,108 @@
]
}
},
"/session/{sessionID}/handoff": {
"post": {
"operationId": "session.handoff",
"parameters": [
{
"in": "query",
"name": "directory",
"schema": {
"type": "string"
}
},
{
"in": "path",
"name": "sessionID",
"schema": {
"type": "string"
},
"required": true,
"description": "Session ID"
}
],
"summary": "Handoff session",
"description": "Extract context and relevant files for another agent to continue the conversation.",
"responses": {
"200": {
"description": "Handoff data extracted",
"content": {
"application/json": {
"schema": {
"type": "object",
"properties": {
"text": {
"type": "string"
},
"files": {
"type": "array",
"items": {
"type": "string"
}
}
},
"required": ["text", "files"]
}
}
}
},
"400": {
"description": "Bad request",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/BadRequestError"
}
}
}
},
"404": {
"description": "Not found",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/NotFoundError"
}
}
}
}
},
"requestBody": {
"content": {
"application/json": {
"schema": {
"type": "object",
"properties": {
"model": {
"type": "object",
"properties": {
"providerID": {
"type": "string"
},
"modelID": {
"type": "string"
}
},
"required": ["providerID", "modelID"]
},
"goal": {
"type": "string"
}
},
"required": ["model"]
}
}
}
},
"x-codeSamples": [
{
"lang": "js",
"source": "import { createOpencodeClient } from \"@opencode-ai/sdk\n\nconst client = createOpencodeClient()\nawait client.session.handoff({\n ...\n})"
}
]
}
},
"/permission/{requestID}/reply": {
"post": {
"operationId": "permission.reply",
@@ -7258,12 +7360,40 @@
"properties": {
"part": {
"$ref": "#/components/schemas/Part"
}
},
"required": ["part"]
}
},
"required": ["type", "properties"]
},
"Event.message.part.delta": {
"type": "object",
"properties": {
"type": {
"type": "string",
"const": "message.part.delta"
},
"properties": {
"type": "object",
"properties": {
"sessionID": {
"type": "string"
},
"messageID": {
"type": "string"
},
"partID": {
"type": "string"
},
"field": {
"type": "string"
},
"delta": {
"type": "string"
}
},
"required": ["part"]
"required": ["sessionID", "messageID", "partID", "field", "delta"]
}
},
"required": ["type", "properties"]
@@ -7677,13 +7807,9 @@
"priority": {
"description": "Priority level of the task: high, medium, low",
"type": "string"
},
"id": {
"description": "Unique identifier for the todo item",
"type": "string"
}
},
"required": ["content", "status", "priority", "id"]
"required": ["content", "status", "priority"]
},
"Event.todo.updated": {
"type": "object",
@@ -8351,6 +8477,9 @@
{
"$ref": "#/components/schemas/Event.message.part.updated"
},
{
"$ref": "#/components/schemas/Event.message.part.delta"
},
{
"$ref": "#/components/schemas/Event.message.part.removed"
},

View File

@@ -600,4 +600,3 @@ These environment variables enable experimental features that may change or be r
| `OPENCODE_EXPERIMENTAL_EXA` | boolean | Enable experimental Exa features |
| `OPENCODE_EXPERIMENTAL_LSP_TY` | boolean | Enable experimental LSP type checking |
| `OPENCODE_EXPERIMENTAL_MARKDOWN` | boolean | Enable experimental markdown features |
| `OPENCODE_EXPERIMENTAL_PLAN_MODE` | boolean | Enable plan mode |

View File

@@ -0,0 +1,206 @@
## Payload limits
Prevent blocking storage writes and runaway persisted size
---
### Summary
Large payloads (base64 images, terminal buffers) are currently persisted inside key-value stores:
- web: `localStorage` (sync, blocks the main thread)
- desktop: Tauri Store-backed async storage files (still expensive when values are huge)
We'll introduce size-aware persistence policies plus a dedicated “blob store” for large/binary data (IndexedDB on web; separate files on desktop). Prompt/history state will persist only lightweight references to blobs and load them on demand.
---
### Goals
- Stop persisting image `dataUrl` blobs inside web `localStorage`
- Stop persisting image `dataUrl` blobs inside desktop store `.dat` files
- Store image payloads out-of-band (blob store) and load lazily when needed (e.g. when restoring a history item)
- Prevent terminal buffer persistence from exceeding safe size limits
- Keep persistence behavior predictable across web (sync) and desktop (async)
- Provide escape hatches via flags and per-key size caps
---
### Non-goals
- Cross-device sync of images or terminal buffers
- Lossless persistence of full terminal scrollback on web
- Perfect blob deduplication or a complex reference-counting system on day one
---
### Current state
- `packages/app/src/utils/persist.ts` uses `localStorage` (sync) on web and async storage only on desktop.
- Desktop storage is implemented via `@tauri-apps/plugin-store` and writes to named `.dat` files (see `packages/desktop/src/index.tsx`). Large values bloat these files and increase flush costs.
- Prompt history persists under `Persist.global("prompt-history")` (`packages/app/src/components/prompt-input.tsx`) and can include image parts (`dataUrl`).
- Prompt draft persistence uses `packages/app/src/context/prompt.tsx` and can also include image parts (`dataUrl`).
- Terminal buffer is serialized in `packages/app/src/components/terminal.tsx` and persisted in `packages/app/src/context/terminal.tsx`.
---
### Proposed approach
#### 1) Add per-key persistence policies (KV store guardrails)
In `packages/app/src/utils/persist.ts`, add policy hooks for each persisted key:
- `warnBytes` (soft warning threshold)
- `maxBytes` (hard cap)
- `transformIn` / `transformOut` for lossy persistence (e.g. strip or refactor fields)
- `onOversize` strategy: `drop`, `truncate`, or `migrateToBlobRef`
This protects both:
- web (`localStorage` is sync)
- desktop (async, but still expensive to store/flush giant values)
#### 2) Add a dedicated blob store for large data
Introduce a small blob-store abstraction used by the app layer:
- web backend: IndexedDB (store `Blob` values keyed by `id`)
- desktop backend: filesystem directory under the app data directory (store one file per blob)
Store _references_ to blobs inside the persisted JSON instead of the blob contents.
#### 3) Persist image parts as references (not base64 payloads)
Update the prompt image model so the in-memory shape can still use a `dataUrl` for UI, but the persisted representation is reference-based.
Suggested approach:
- Keep `ImageAttachmentPart` with:
- required: `id`, `filename`, `mime`
- optional/ephemeral: `dataUrl?: string`
- new: `blobID?: string` (or `ref: string`)
Persistence rules:
- When writing persisted prompt/history state:
- ensure each image part is stored in blob store (`blobID`)
- persist only metadata + `blobID` (no `dataUrl`)
- When reading persisted prompt/history state:
- do not eagerly load blob payloads
- hydrate `dataUrl` only when needed:
- when applying a history entry into the editor
- before submission (ensure all image parts have usable `dataUrl`)
- when rendering an attachment preview, if required
---
### Phased implementation steps
1. Add guardrails in `persist.ts`
- Implement size estimation in `packages/app/src/utils/persist.ts` using `TextEncoder` byte length on JSON strings.
- Add a policy registry keyed by persist name (e.g. `"prompt-history"`, `"prompt"`, `"terminal"`).
- Add a feature flag (e.g. `persist.payloadLimits`) to enable enforcement gradually.
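A minimal sketch of the registry, assuming `TextEncoder`-based sizing; the thresholds and `onOversize` values are illustrative, not final:
```ts
type PersistPolicy = {
  warnBytes?: number
  maxBytes?: number
  onOversize?: "drop" | "truncate" | "migrateToBlobRef"
}
// Illustrative thresholds; real values should come from profiling.
const policies: Record<string, PersistPolicy> = {
  "prompt-history": { warnBytes: 256 * 1024, maxBytes: 1024 * 1024, onOversize: "migrateToBlobRef" },
  terminal: { maxBytes: 512 * 1024, onOversize: "truncate" },
}
// TextEncoder gives the UTF-8 byte length of the JSON string.
function estimateBytes(value: unknown): number {
  return new TextEncoder().encode(JSON.stringify(value)).length
}
function checkPolicy(name: string, value: unknown) {
  const policy = policies[name]
  if (!policy) return { ok: true as const }
  const bytes = estimateBytes(value)
  if (policy.warnBytes && bytes > policy.warnBytes) console.warn(`persist "${name}" is ${bytes} bytes`)
  if (policy.maxBytes && bytes > policy.maxBytes) return { ok: false as const, action: policy.onOversize ?? "drop" }
  return { ok: true as const }
}
```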
2. Add blob-store abstraction + platform hooks
- Add a new app-level module (e.g. `packages/app/src/utils/blob.ts`) defining:
- `put(id, bytes|Blob)`
- `get(id)`
- `remove(id)`
- Extend the `Platform` interface (`packages/app/src/context/platform.tsx`) with optional blob methods, or provide a default web implementation and override on desktop:
- web: implement via IndexedDB
- desktop: implement via filesystem files (requires adding a Tauri fs plugin or `invoke` wrappers)
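A sketch of the web backend under these assumptions; the database and store names are placeholders:
```ts
type BlobStore = {
  put(id: string, data: Blob): Promise<void>
  get(id: string): Promise<Blob | undefined>
  remove(id: string): Promise<void>
}
function openDb(): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open("opencode-blobs", 1) // placeholder database name
    req.onupgradeneeded = () => req.result.createObjectStore("blobs") // out-of-line keys
    req.onsuccess = () => resolve(req.result)
    req.onerror = () => reject(req.error)
  })
}
async function createWebBlobStore(): Promise<BlobStore> {
  const db = await openDb()
  const store = (mode: IDBTransactionMode) => db.transaction("blobs", mode).objectStore("blobs")
  const wrap = <T>(req: IDBRequest<T>) =>
    new Promise<T>((resolve, reject) => {
      req.onsuccess = () => resolve(req.result)
      req.onerror = () => reject(req.error)
    })
  return {
    put: (id, data) => wrap(store("readwrite").put(data, id)).then(() => {}),
    get: (id) => wrap<Blob | undefined>(store("readonly").get(id)),
    remove: (id) => wrap(store("readwrite").delete(id)).then(() => {}),
  }
}
```
Storing `Blob` values directly avoids base64 overhead, though that choice is still listed as an open question below.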
3. Update prompt history + prompt draft persistence to use blob refs
- Update prompt/history serialization paths to ensure image parts are stored as blob refs:
- Prompt history: `packages/app/src/components/prompt-input.tsx`
- Prompt draft: `packages/app/src/context/prompt.tsx`
- Ensure “apply history prompt” hydrates image blobs only when applying the prompt (not during background load).
4. One-time migration for existing persisted base64 images
- On read, detect legacy persisted image parts that include `dataUrl`.
- If a `dataUrl` is found:
- write it into the blob store (convert dataUrl → bytes)
- replace persisted payload with `{ blobID, filename, mime, id }` only
- re-save the reduced version
- If migration fails (missing permissions, quota, etc.), fall back to:
- keep the prompt entry but drop the image payload and mark as unavailable
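A sketch of the per-entry migration, reusing the `BlobStore` shape from step 2; field names follow the suggested schema above:
```ts
type PersistedImagePart = { id: string; filename: string; mime: string; dataUrl?: string; blobID?: string }
// Convert a legacy data URL into a stored blob and return the reduced persisted shape.
async function migrateImagePart(part: PersistedImagePart, blobs: BlobStore): Promise<PersistedImagePart> {
  if (!part.dataUrl || part.blobID) return part // already migrated or nothing to migrate
  const base64 = part.dataUrl.split(",")[1] ?? ""
  const bytes = Uint8Array.from(atob(base64), (c) => c.charCodeAt(0))
  const blobID = crypto.randomUUID()
  await blobs.put(blobID, new Blob([bytes], { type: part.mime }))
  return { id: part.id, filename: part.filename, mime: part.mime, blobID } // dataUrl intentionally dropped
}
```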
5. Fix terminal persistence (bounded snapshot)
- In `packages/app/src/context/terminal.tsx`, persist only:
- last `maxLines` and/or
- last `maxBytes` of combined text
- In `packages/app/src/components/terminal.tsx`, keep the full in-memory buffer unchanged.
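A sketch of the bounded snapshot; both caps are illustrative defaults:
```ts
// Keep only the tail of the serialized buffer, bounded by line count and UTF-8 byte size.
function boundedSnapshot(text: string, maxLines = 2000, maxBytes = 256 * 1024): string {
  let snapshot = text.split("\n").slice(-maxLines).join("\n")
  // Trim from the front until the byte cap is met; as a sketch, this accepts cutting mid-line.
  while (new TextEncoder().encode(snapshot).length > maxBytes) {
    snapshot = snapshot.slice(Math.ceil(snapshot.length / 4))
  }
  return snapshot
}
```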
6. Add basic blob lifecycle cleanup
To avoid “blob directory grows forever”, add one of:
- TTL-based cleanup: store `lastAccessed` per blob and delete blobs older than N days
- Reference scan cleanup: periodically scan prompt-history + prompt drafts, build a set of referenced `blobID`s, and delete unreferenced blobs
Start with TTL-based cleanup (simpler, fewer cross-store dependencies), then consider scan-based cleanup if needed.
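A sketch of the TTL sweep, assuming a small `lastAccessed` metadata record is kept per blob (that record is not specified yet):
```ts
async function sweepBlobs(meta: Map<string, { lastAccessed: number }>, blobs: BlobStore, maxAgeMs: number) {
  const cutoff = Date.now() - maxAgeMs
  for (const [id, entry] of meta) {
    if (entry.lastAccessed >= cutoff) continue
    await blobs.remove(id) // delete blobs untouched for longer than maxAgeMs
    meta.delete(id)
  }
}
```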
---
### Data migration / backward compatibility
- KV store data:
- policies should be tolerant of missing fields (e.g. `dataUrl` missing)
- Image parts:
- treat missing `dataUrl` as “not hydrated yet”
- treat missing `blobID` (legacy) as “not persisted” or “needs migration”
- Desktop:
- blob files should be namespaced (e.g. `opencode/blobs/<blobID>`) to avoid collisions
---
### Risk + mitigations
- Risk: blob store is unavailable (IndexedDB disabled, desktop fs permissions).
- Mitigation: keep base state functional; persist prompts without image payloads and show a clear placeholder.
- Risk: lazy hydration introduces edge cases when submitting.
- Mitigation: add a pre-submit “ensure images hydrated” step; if hydration fails, block submission with a clear error or submit without images.
- Risk: dataUrl→bytes conversion cost during migration.
- Mitigation: migrate incrementally (only when reading an entry) and/or use `requestIdleCallback` on web.
- Risk: blob cleanup deletes blobs still needed.
- Mitigation: TTL default should be conservative; scan-based cleanup should only delete blobs unreferenced by current persisted state.
---
### Validation plan
- Unit-level:
- size estimation + policy enforcement in `persist.ts`
- blob store put/get/remove round trips (web + desktop backends)
- Manual scenarios:
- attach multiple images, reload, and confirm:
- KV store files do not balloon
- images can be restored when selecting history items
- open terminal with large output and confirm reload restores bounded snapshot quickly
- confirm prompt draft persistence still works in `packages/app/src/context/prompt.tsx`
---
### Rollout plan
- Phase 1: ship with `persist.payloadLimits` off; log oversize detections in dev.
- Phase 2: enable image blob refs behind `persist.imageBlobs` (web + desktop).
- Phase 3: enable terminal truncation and enforce hard caps for known hot keys.
- Phase 4: enable blob cleanup behind `persist.blobGc` (TTL first).
- Provide quick kill switches by disabling each flag independently.
---
### Open questions
- What should the canonical persisted image schema be (`blobID` field name, placeholder shape, etc.)?
- Desktop implementation detail:
- add `@tauri-apps/plugin-fs` vs custom `invoke()` commands for blob read/write?
- where should blob files live (appDataDir) and what retention policy is acceptable?
- Web implementation detail:
- do we store `Blob` directly in IndexedDB, or store base64 strings?
- Should prompt-history images be retained indefinitely, or only for the last `MAX_HISTORY` entries?

specs/02-cache-eviction.md Normal file
View File

@@ -0,0 +1,141 @@
## Cache eviction
Add explicit bounds for long-lived in-memory state
---
### Summary
Several in-memory caches grow without limits during long sessions. We'll introduce explicit eviction (LRU + TTL + size caps) for sessions/messages/file contents and global per-directory sync stores.
---
### Goals
- Prevent unbounded memory growth from caches that survive navigation
- Add consistent eviction primitives shared across contexts
- Keep UI responsive under heavy usage (many sessions, large files)
---
### Non-goals
- Perfect cache hit rates or prefetch strategies
- Changing server APIs or adding background jobs
- Persisting caches for offline use
---
### Current state
- Global sync uses per-directory child stores without eviction in `packages/app/src/context/global-sync.tsx`.
- File contents cached in `packages/app/src/context/file.tsx` with no cap.
- Session-heavy pages include `packages/app/src/pages/session.tsx` and `packages/app/src/pages/layout.tsx`.
---
### Proposed approach
- Introduce a shared cache utility that supports:
- `maxEntries`, `maxBytes` (approx), and `ttlMs`
- LRU ordering with explicit `touch(key)` on access
- deterministic `evict()` and `clear()` APIs
- Apply the utility to:
- global-sync per-directory child stores (cap number of directories kept “hot”)
- file contents cache (cap by entries + bytes, with TTL)
- session/message caches (cap by session count, and optionally message count)
- Add feature flags per cache domain to allow partial rollout (e.g. `cache.eviction.files`).
---
### Phased implementation steps
1. Add a generic cache helper
- Create `packages/app/src/utils/cache.ts` with a small, dependency-free LRU+TTL.
- Keep it framework-agnostic and usable from Solid contexts.
Sketch:
```ts
type CacheOpts<T> = { maxEntries: number; ttlMs?: number; maxBytes?: number; sizeOf?: (value: T) => number }
function createLruCache<T>(opts: CacheOpts<T>) {
  // Map preserves insertion order, so re-inserting on access yields LRU ordering
  const entries = new Map<string, { value: T; at: number }>()
  const get = (key: string): T | undefined => {
    const hit = entries.get(key)
    if (!hit || (opts.ttlMs && Date.now() - hit.at > opts.ttlMs)) return void entries.delete(key)
    entries.delete(key), entries.set(key, hit)
    return hit.value
  }
  const set = (key: string, value: T) => {
    entries.delete(key), entries.set(key, { value, at: Date.now() })
    while (entries.size > opts.maxEntries) entries.delete(entries.keys().next().value!)
  }
  return { get, set, delete: (key: string) => entries.delete(key), clear: () => entries.clear() }
}
```
2. Apply eviction to file contents
- In `packages/app/src/context/file.tsx`:
- wrap the existing file-content map in the LRU helper
- approximate size via `TextEncoder` length of content strings
- evict on `set` and periodically via `requestIdleCallback` when available
- Add a small TTL (e.g. 10-30 minutes) to discard stale contents.
3. Apply eviction to global-sync child stores
- In `packages/app/src/context/global-sync.tsx`:
- track child stores by directory key in an LRU with `maxEntries`
- call a `dispose()` hook on eviction to release subscriptions and listeners
- Ensure “currently active directory” is always `touch()`d to avoid surprise evictions.
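A wiring sketch; `createChildStore` and the cap are stand-ins for whatever global-sync actually constructs:
```ts
type ChildStore = unknown // stands in for the per-directory store global-sync creates
declare function createChildStore(dir: string): { store: ChildStore; dispose: () => void }
const MAX_HOT_DIRECTORIES = 8 // illustrative cap
const stores = new Map<string, { store: ChildStore; dispose: () => void }>()
function getDirectoryStore(dir: string): ChildStore {
  const hit = stores.get(dir)
  if (hit) {
    stores.delete(dir), stores.set(dir, hit) // touch: move to most-recently-used position
    return hit.store
  }
  const entry = createChildStore(dir)
  stores.set(dir, entry)
  while (stores.size > MAX_HOT_DIRECTORIES) {
    const oldest = stores.keys().next().value!
    stores.get(oldest)!.dispose() // release subscriptions and listeners before dropping the reference
    stores.delete(oldest)
  }
  return entry.store
}
```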
4. Apply eviction to session/message caches
- Identify the session/message caching touchpoints used by `packages/app/src/pages/session.tsx`.
- Add caps that reflect UI needs (e.g. last 10-20 sessions kept, last N messages per session if cached).
5. Add developer tooling
- Add a debug-only stats readout (console or dev panel) for cache sizes and eviction counts.
- Add a one-click “clear caches” action for troubleshooting.
---
### Data migration / backward compatibility
- No persisted schema changes are required since this targets in-memory caches.
- If any cache is currently mirrored into persistence, keep keys stable and only change in-memory retention.
---
### Risk + mitigations
- Risk: evicting content still needed causes extra refetches and flicker.
- Mitigation: always pin “active” entities and evict least-recently-used first.
- Risk: disposing global-sync child stores could leak listeners if not cleaned up correctly.
- Mitigation: require an explicit `dispose()` contract and add dev assertions for listener counts.
- Risk: approximate byte sizing is imprecise.
- Mitigation: combine entry caps with byte caps and keep thresholds conservative.
---
### Validation plan
- Add tests for `createLruCache` covering TTL expiry, LRU ordering, and eviction triggers.
- Manual scenarios:
- open many files and confirm memory stabilizes and UI remains responsive
- switch across many directories and confirm global-sync does not continuously grow
- long session navigation loop and confirm caches plateau
---
### Rollout plan
- Land cache utility first with flags default off.
- Enable file cache eviction first (lowest behavioral risk).
- Enable global-sync eviction next with conservative caps and strong logging in dev.
- Enable session/message eviction last after observing real usage patterns.
---
### Open questions
- What are the current session/message cache structures and their ownership boundaries?
- Which child stores in `global-sync.tsx` have resources that must be disposed explicitly?
- What caps are acceptable for typical workflows (files open, directories visited, sessions viewed)?

View File

@@ -0,0 +1,145 @@
## Request throttling
Debounce and cancel high-frequency server calls
---
### Summary
Some user interactions trigger bursts of server requests that can overlap and return out of order. We'll debounce frequent triggers and cancel in-flight requests (or ignore stale results) for file search and LSP refresh.
---
### Goals
- Reduce redundant calls from file search and LSP refresh
- Prevent stale responses from overwriting newer UI state
- Preserve responsive typing and scrolling during high activity
---
### Non-goals
- Changing server-side behavior or adding new endpoints
- Implementing global request queues for all SDK calls
- Persisting search results across reloads
---
### Current state
- File search calls `sdk.client.find.files` via `files.searchFilesAndDirectories`.
- LSP refresh is triggered frequently (exact call sites vary, but the refresh behavior is high-frequency).
- Large UI modules involved include `packages/app/src/pages/layout.tsx` and `packages/app/src/components/prompt-input.tsx`.
---
### Proposed approach
- Add a small request coordinator utility:
- debounced triggering (leading/trailing configurable)
- cancellation via `AbortController` when supported
- stale-result protection via monotonic request ids when abort is not supported
- Integrate coordinator into:
- `files.searchFilesAndDirectories` (wrap `sdk.client.find.files`)
- LSP refresh call path (wrap refresh invocation and ensure only latest applies)
---
### Phased implementation steps
1. Add a debounced + cancellable helper
- Create `packages/app/src/utils/requests.ts` with:
- `createDebouncedAsync(fn, delayMs)`
- `createLatestOnlyAsync(fn)` that drops stale responses
- Prefer explicit, readable primitives over a single complex abstraction.
Sketch:
```ts
function createLatestOnlyAsync<TArgs extends unknown[], TResult>(
  fn: (args: { input: TArgs; signal?: AbortSignal }) => Promise<TResult>,
) {
  let id = 0
  let controller: AbortController | undefined
  return async (...input: TArgs): Promise<TResult | undefined> => {
    id += 1
    const current = id
    controller?.abort() // cancel any in-flight request
    controller = new AbortController()
    try {
      const result = await fn({ input, signal: controller.signal })
      if (current !== id) return // a newer call started while awaiting; drop the stale result
      return result
    } catch (error) {
      // aborts are expected when a newer request supersedes this one; do not treat as failure
      if (error instanceof DOMException && error.name === "AbortError") return
      throw error
    }
  }
}
```
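A companion sketch for the debounced trigger; trailing-edge only, with the leading/trailing configuration mentioned above left out for brevity:
```ts
function createDebouncedAsync<TArgs extends unknown[]>(fn: (...args: TArgs) => unknown, delayMs: number) {
  let timer: ReturnType<typeof setTimeout> | undefined
  return (...args: TArgs) => {
    clearTimeout(timer) // restart the window on every call; only the last call in a burst fires
    timer = setTimeout(() => fn(...args), delayMs)
  }
}
```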
2. Apply to file search
- Update `files.searchFilesAndDirectories` to:
- debounce input changes (e.g. 150-300 ms)
- abort prior request when a new query begins
- ignore results if they are stale
- Ensure “empty query” is handled locally without calling the server.
3. Apply to LSP refresh
- Identify the refresh trigger points used during typing and file switching.
- Add:
- debounce for rapid triggers (e.g. 250-500 ms)
- cancellation for in-flight refresh if supported
- last-write-wins behavior for applying diagnostics/results
4. Add feature flags and metrics
- Add flags:
- `requests.debounce.fileSearch`
- `requests.latestOnly.lspRefresh`
- Add simple dev-only counters for “requests started / aborted / applied”.
---
### Data migration / backward compatibility
- No persisted data changes.
- Behavior is compatible as long as UI state updates only when the “latest” request resolves.
---
### Risk + mitigations
- Risk: aggressive debounce makes UI feel laggy.
- Mitigation: keep delays small and tune separately for search vs refresh.
- Risk: aborting requests may surface as errors in logs.
- Mitigation: treat `AbortError` as expected and do not log it as a failure.
- Risk: SDK method may not accept `AbortSignal`.
- Mitigation: use request-id stale protection even without true cancellation.
---
### Validation plan
- Manual scenarios:
- type quickly in file search and confirm requests collapse and results stay correct
- trigger LSP refresh repeatedly and confirm diagnostics do not flicker backward
- Add a small unit test for latest-only behavior (stale results are ignored).
---
### Rollout plan
- Ship helpers behind flags default off.
- Enable file search debounce first (high impact, easy to validate).
- Enable LSP latest-only next, then add cancellation if SDK supports signals.
- Keep a quick rollback by disabling the flags.
---
### Open questions
- Does `sdk.client.find.files` accept an abort signal today, or do we need stale-result protection only?
- Where is LSP refresh initiated, and does it have a single chokepoint we can wrap?
- What debounce values feel best for common repos and slower machines?

View File

@@ -0,0 +1,125 @@
## Spy acceleration
Replace O(N) DOM scans in session view
---
### Summary
The session scroll-spy currently scans the DOM with `querySelectorAll` and walks message nodes, which becomes expensive as message count grows. We'll replace the scan with an observer-based or indexed approach that scales smoothly.
---
### Goals
- Remove repeated full DOM scans during scroll in the session view
- Keep “current message” tracking accurate during streaming and layout shifts
- Provide a safe fallback path for older browsers and edge cases
---
### Non-goals
- Visual redesign of the session page
- Changing message rendering structure or IDs
- Perfect accuracy during extreme layout thrash
---
### Current state
- `packages/app/src/pages/session.tsx` uses `querySelectorAll('[data-message-id]')` for scroll-spy.
- The page is large and handles many responsibilities, increasing the chance of perf regressions.
---
### Proposed approach
Implement a two-tier scroll-spy:
- Primary: `IntersectionObserver` to track which message elements are visible, updated incrementally.
- Secondary: binary search over precomputed offsets when observer is unavailable or insufficient.
- Use `ResizeObserver` (and a lightweight “dirty” flag) to refresh offsets only when layout changes.
---
### Phased implementation steps
1. Extract a dedicated scroll-spy module
- Create `packages/app/src/pages/session/scroll-spy.ts` (or similar) that exposes:
- `register(el, id)` and `unregister(id)`
- `getActiveId()` signal/store
- Keep DOM operations centralized and easy to profile.
2. Add IntersectionObserver tracking
- Observe each `[data-message-id]` element once, on mount.
- Maintain a small map of `id -> intersectionRatio` (or visible boolean).
- Pick the active id by:
- highest intersection ratio, then
- nearest to top of viewport as a tiebreaker
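A sketch of the observer tier; `onActive` stands in for a Solid signal setter, and the tiebreak is simplified to registration order:
```ts
const ratios = new Map<string, number>()
const elements = new Map<string, HTMLElement>()
let onActive: (id: string | undefined) => void = () => {} // wire to a signal in practice
const observer = new IntersectionObserver(
  (entries) => {
    for (const entry of entries) {
      const id = (entry.target as HTMLElement).dataset.messageId // reads data-message-id
      if (id) ratios.set(id, entry.isIntersecting ? entry.intersectionRatio : 0)
    }
    // Active message = highest visible ratio; earlier registration wins ties (Map keeps insertion order).
    let active: string | undefined
    for (const [id, ratio] of ratios) if (ratio > 0 && (!active || ratio > ratios.get(active)!)) active = id
    onActive(active)
  },
  { threshold: [0, 0.25, 0.5, 0.75, 1] },
)
function register(el: HTMLElement, id: string) {
  elements.set(id, el)
  observer.observe(el)
}
function unregister(id: string) {
  const el = elements.get(id)
  if (el) observer.unobserve(el)
  elements.delete(id), ratios.delete(id)
}
```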
3. Add binary search fallback
- Maintain an ordered list of `{ id, top }` positions.
- On scroll (throttled via `requestAnimationFrame`), compute target Y and binary search to find nearest message.
- Refresh the positions list on:
- message list mutations (new messages)
- container resize events (ResizeObserver)
- explicit “layout changed” events after streaming completes
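A sketch of the fallback lookup over the precomputed `{ id, top }` list (sorted ascending by `top`):
```ts
// Find the last message whose top edge sits at or above the target scroll line.
function activeAt(positions: { id: string; top: number }[], scrollY: number): string | undefined {
  let lo = 0
  let hi = positions.length - 1
  let best = -1
  while (lo <= hi) {
    const mid = (lo + hi) >> 1
    if (positions[mid].top <= scrollY) (best = mid), (lo = mid + 1)
    else hi = mid - 1
  }
  return best >= 0 ? positions[best].id : undefined
}
```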
4. Remove `querySelectorAll` hot path
- Keep a one-time initial query only as a bridge during rollout, then remove it.
- Ensure newly rendered messages are registered via refs rather than scanning the whole DOM.
5. Add a feature flag and fallback
- Add `session.scrollSpyOptimized` flag.
- If observer setup fails, fall back to the existing scan behavior temporarily.
---
### Data migration / backward compatibility
- No persisted data changes.
- IDs remain sourced from existing `data-message-id` attributes.
---
### Risk + mitigations
- Risk: observer ordering differs from previous “active message” logic.
- Mitigation: keep selection rules simple, document them, and add a small tolerance for tie cases.
- Risk: layout shifts cause incorrect offset indexing.
- Mitigation: refresh offsets with ResizeObserver and after message streaming batches.
- Risk: performance regressions from observing too many nodes.
- Mitigation: prefer one observer instance and avoid per-node observers.
---
### Validation plan
- Manual scenarios:
- very long sessions (hundreds of messages) and continuous scrolling
- streaming responses that append content and change heights
- resizing the window and toggling side panels
- Add a dev-only profiler hook to log time spent in scroll-spy updates per second.
---
### Rollout plan
- Land extracted module first, still using the old scan internally.
- Add observer implementation behind `session.scrollSpyOptimized` off by default.
- Enable flag for internal testing, then default on after stability.
- Keep fallback code for one release cycle, then remove scan path.
---
### Open questions
- What is the exact definition of “active” used elsewhere (URL hash, sidebar highlight, breadcrumb)?
- Are messages virtualized today, or are all DOM nodes mounted at once?
- Which container is the scroll root (window vs an inner div), and does it change by layout mode?

View File

@@ -0,0 +1,153 @@
## Component modularity
Split mega-components and dedupe scoped caches
---
### Summary
Several large UI files combine rendering, state, persistence, and caching patterns, including repeated “scoped session cache” infrastructure. We'll extract reusable primitives and break large components into smaller units without changing user-facing behavior.
---
### Goals
- Reduce complexity in:
- `packages/app/src/pages/session.tsx`
- `packages/app/src/pages/layout.tsx`
- `packages/app/src/components/prompt-input.tsx`
- Deduplicate “scoped session cache” logic into a shared utility
- Make performance fixes (eviction, throttling) easier to implement safely
---
### Non-goals
- Large redesign of routing or page structure
- Moving to a different state management approach
- Rewriting all contexts in one pass
---
### Current state
- Session page is large and mixes concerns (`packages/app/src/pages/session.tsx`).
- Layout is also large and likely coordinates multiple global concerns (`packages/app/src/pages/layout.tsx`).
- Prompt input is large and includes persistence and interaction logic (`packages/app/src/components/prompt-input.tsx`).
- Similar “scoped cache” patterns appear in multiple places (session-bound maps, per-session stores, ad hoc memoization).
---
### Proposed approach
- Introduce a shared “scoped store” utility to standardize session-bound caches:
- keyed by `sessionId`
- automatic cleanup via TTL or explicit `dispose(sessionId)`
- optional LRU cap for many sessions
- Break mega-components into focused modules with clear boundaries:
- “view” components (pure rendering)
- “controller” hooks (state + effects)
- “services” (SDK calls, persistence adapters)
---
### Phased implementation steps
1. Inventory and name the repeated pattern
- Identify the repeated “scoped session cache” usage sites in:
- `packages/app/src/pages/session.tsx`
- `packages/app/src/pages/layout.tsx`
- `packages/app/src/components/prompt-input.tsx`
- Write down the common operations (get-or-create, clear-on-session-change, dispose).
2. Add a shared scoped-cache utility
- Create `packages/app/src/utils/scoped-cache.ts`:
- `createScopedCache(createValue, opts)` returning `get(key)`, `peek(key)`, `delete(key)`, `clear()`
- optional TTL + LRU caps to avoid leak-by-design
- Keep the API tiny and explicit so call sites stay readable.
Sketch:
```ts
type ScopedOpts<T> = { maxEntries?: number; ttlMs?: number; onEvict?: (value: T) => void }
function createScopedCache<T>(createValue: (key: string) => T, opts: ScopedOpts<T> = {}) {
  const store = new Map<string, T>() // TTL/LRU enforcement omitted here; see the cache helper in spec 02
  const get = (key: string) => store.get(key) ?? store.set(key, createValue(key)).get(key)!
  const remove = (key: string) => { const v = store.get(key); if (v !== undefined) opts.onEvict?.(v); store.delete(key) }
  return { get, peek: (key: string) => store.get(key), delete: remove, clear: () => store.clear() }
}
```
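A call site might then look like this (the session ID and value factory are illustrative):
```ts
const diffCache = createScopedCache(() => new Map<string, string>())
const diffs = diffCache.get("ses_123") // created on first access, reused until deleted or cleared
```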
3. Extract session page submodules
- Split `packages/app/src/pages/session.tsx` into:
- `session/view.tsx` for rendering layout
- `session/messages.tsx` for message list
- `session/composer.tsx` for input wiring
- `session/scroll-spy.ts` for active message tracking
- Keep exports stable so routing code changes minimally.
4. Extract layout coordination logic
- Split `packages/app/src/pages/layout.tsx` into:
- shell layout view
- navigation/controller logic
- global keyboard shortcuts (if present)
- Ensure each extracted piece has a narrow prop surface and no hidden globals.
5. Extract prompt-input state machine
- Split `packages/app/src/components/prompt-input.tsx` into:
- `usePromptComposer()` hook (draft, submission, attachments)
- presentational input component
- Route persistence through existing `packages/app/src/context/prompt.tsx`, but isolate wiring code.
6. Replace ad hoc scoped caches with the shared utility
- Swap one call site at a time and keep behavior identical.
- Add a flag `scopedCache.shared` to fall back to the old implementation if needed.
---
### Data migration / backward compatibility
- No persisted schema changes are required by modularization alone.
- If any cache keys change due to refactors, keep a compatibility reader for one release cycle.
---
### Risk + mitigations
- Risk: refactors cause subtle behavior changes (focus, keyboard shortcuts, scroll position).
- Mitigation: extract without logic changes first, then improve behavior in later diffs.
- Risk: new shared cache introduces lifecycle bugs.
- Mitigation: require explicit cleanup hooks and add dev assertions for retained keys.
- Risk: increased file count makes navigation harder temporarily.
- Mitigation: use consistent naming and keep the folder structure shallow.
---
### Validation plan
- Manual regression checklist:
- compose, attach images, submit, and reload draft
- navigate between sessions and confirm caches don't bleed across IDs
- verify terminal, file search, and scroll-spy still behave normally
- Add lightweight unit tests for `createScopedCache` eviction and disposal behavior.
---
### Rollout plan
- Phase 1: introduce `createScopedCache` unused, then adopt in one low-risk area.
- Phase 2: extract session submodules with no behavior changes.
- Phase 3: flip remaining scoped caches to shared utility behind `scopedCache.shared`.
- Phase 4: remove old duplicated implementations after confidence.
---
### Open questions
- Where exactly is “scoped session cache” duplicated today, and what are the differing lifecycle rules?
- Which extracted modules must remain synchronous for Solid reactivity to behave correctly?
- Are there implicit dependencies in the large files (module-level state) that need special handling?

specs/06-app-i18n-audit.md Normal file
View File

@@ -0,0 +1,237 @@
# App i18n Audit (Remaining Work)
Scope: `packages/app/`
Date: 2026-01-20
This report documents the remaining user-facing strings in `packages/app/src` that are still hardcoded (not routed through `useLanguage().t(...)` / translation keys), plus i18n-adjacent issues like locale-sensitive formatting.
## Current State
- The app uses `useLanguage().t("...")` with dictionaries in `packages/app/src/i18n/en.ts` and `packages/app/src/i18n/zh.ts`.
- Recent progress (already translated): `packages/app/src/pages/home.tsx`, `packages/app/src/pages/layout.tsx`, `packages/app/src/pages/session.tsx`, `packages/app/src/components/prompt-input.tsx`, `packages/app/src/components/dialog-connect-provider.tsx`, `packages/app/src/components/session/session-header.tsx`, `packages/app/src/pages/error.tsx`, `packages/app/src/components/session/session-new-view.tsx`, `packages/app/src/components/session-context-usage.tsx`, `packages/app/src/components/session/session-context-tab.tsx`, `packages/app/src/components/session-lsp-indicator.tsx`, `packages/app/src/components/session/session-sortable-tab.tsx`, `packages/app/src/components/titlebar.tsx`, `packages/app/src/components/dialog-select-model.tsx`, `packages/app/src/context/notification.tsx`, `packages/app/src/context/global-sync.tsx`, `packages/app/src/context/file.tsx`, `packages/app/src/context/local.tsx`, `packages/app/src/utils/prompt.ts`, `packages/app/src/context/terminal.tsx`, `packages/app/src/components/session/session-sortable-terminal-tab.tsx` (plus new keys added in both dictionaries).
- Dictionary parity check: `en.ts` and `zh.ts` currently contain the same key set (373 keys each; no missing or extra keys).
## Methodology
- Scanned `packages/app/src` (excluding `packages/app/src/i18n/*` and tests).
- Grepped for:
- Hardcoded JSX text nodes (e.g. `>Some text<`)
- Hardcoded prop strings (e.g. `title="..."`, `placeholder="..."`, `label="..."`, `description="..."`, `Tooltip value="..."`)
- Toast/notification strings, default fallbacks, and error message templates.
- Manually reviewed top hits to distinguish:
- User-facing UI copy (needs translation)
- Developer-only logs (`console.*`) (typically does not need translation)
- Technical identifiers (e.g. `MCP`, `LSP`, URLs) (may remain untranslated by choice).
## Highest Priority: Pages
### 1) Error Page
File: `packages/app/src/pages/error.tsx`
Completed (2026-01-20):
- Localized page UI copy via `error.page.*` keys (title, description, buttons, report text, version label).
- Localized error chain framing and common init error templates via `error.chain.*` keys.
- Kept raw server/provider error messages as-is when provided (only localizing labels and structure).
## Highest Priority: Components
### 2) Prompt Input
File: `packages/app/src/components/prompt-input.tsx`
Completed (2026-01-20):
- Localized placeholder examples by replacing the hardcoded `PLACEHOLDERS` list with `prompt.example.*` keys.
- Localized toast titles/descriptions via `prompt.toast.*` and reused `common.requestFailed` for fallback error text.
- Localized popover empty states and drag/drop overlay copy (`prompt.popover.*`, `prompt.dropzone.label`).
- Localized smaller labels (slash "custom" badge, attach button tooltip, Send/Stop tooltip labels).
- Kept the `ESC` keycap itself untranslated (key label).
### 3) Provider Connection / Auth Flow
File: `packages/app/src/components/dialog-connect-provider.tsx`
Completed (2026-01-20):
- Localized all user-visible copy via `provider.connect.*` keys (titles, statuses, validations, instructions, OpenCode Zen onboarding).
- Added `common.submit` and used it for both API + OAuth submit buttons.
- Localized the success toast via `provider.connect.toast.connected.*`.
### 4) Session Header (Share/Publish UI)
File: `packages/app/src/components/session/session-header.tsx`
Completed (2026-01-20):
- Localized search placeholder via `session.header.search.placeholder`.
- Localized share/publish UI via `session.share.*` keys (popover title/description, button states, copy tooltip).
- Reused existing command keys for toggle/share tooltips (`command.review.toggle`, `command.terminal.toggle`, `command.session.share`).
## Medium Priority: Components
### 5) New Session View
File: `packages/app/src/components/session/session-new-view.tsx`
Completed (2026-01-20):
- Reused existing `command.session.new` for the heading.
- Localized worktree labels via `session.new.worktree.*` (main branch, main branch w/ branch name, create worktree).
- Localized "Last modified" via `session.new.lastModified` and used `language.locale()` for Luxon relative time.
### 6) Context Usage Tooltip
File: `packages/app/src/components/session-context-usage.tsx`
Completed (2026-01-20):
- Localized tooltip labels + CTA via `context.usage.*` keys.
- Switched currency and number formatting to the active locale (`language.locale()`).
### 7) Session Context Tab (Formatting)
File: `packages/app/src/components/session/session-context-tab.tsx`
Completed (2026-01-20):
- Switched currency formatting to the active locale (`language.locale()`).
- Also used `language.locale()` for number/date formatting.
- Note: "—" placeholders remain hardcoded; optional to localize.
### 8) LSP Indicator
File: `packages/app/src/components/session-lsp-indicator.tsx`
Completed (2026-01-20):
- Localized tooltip/label framing via `lsp.*` keys (kept the acronym itself).
### 9) Session Tab Close Tooltip
File: `packages/app/src/components/session/session-sortable-tab.tsx`
Completed (2026-01-20):
- Reused `common.closeTab` for the close tooltip.
### 10) Titlebar Tooltip
File: `packages/app/src/components/titlebar.tsx`
Completed (2026-01-20):
- Reused `command.sidebar.toggle` for the tooltip title.
### 11) Model Selection "Recent" Group
File: `packages/app/src/components/dialog-select-model.tsx`
Completed (2026-01-20):
- Removed the unused hardcoded "Recent" group comparisons to avoid locale-coupled sorting.
### 12) Select Server Dialog Placeholder (Optional)
File: `packages/app/src/components/dialog-select-server.tsx`
Completed (2026-01-20):
- Moved the placeholder example URL behind `dialog.server.add.placeholder` (value unchanged).
## Medium Priority: Context Modules
### 13) OS/Desktop Notifications
File: `packages/app/src/context/notification.tsx`
Completed (2026-01-20):
- Localized OS notification titles/fallback copy via `notification.session.*` keys.
### 14) Global Sync (Bootstrap Errors + Toast)
File: `packages/app/src/context/global-sync.tsx`
Completed (2026-01-20):
- Localized the sessions list failure toast via `toast.session.listFailed.title`.
- Localized the bootstrap connection error via `error.globalSync.connectFailed`.
### 15) File Load Failure Toast (Duplicate)
Files:
- `packages/app/src/context/file.tsx`
- `packages/app/src/context/local.tsx`
Completed (2026-01-20):
- Introduced `toast.file.loadFailed.title` and reused it in both contexts.
### 16) Terminal Naming (Tricky)
File: `packages/app/src/context/terminal.tsx`
Completed (2026-01-20):
- Terminal display labels are now rendered from a stable numeric `titleNumber` and localized via `terminal.title.*`.
- Added a one-time migration to backfill missing `titleNumber` by parsing the stored title string.
## Low Priority: Utils / Dev-Only Copy
### 17) Default Attachment Filename
File: `packages/app/src/utils/prompt.ts`
Completed (2026-01-20):
- Added `common.attachment` and plumbed it into `extractPromptFromParts(...)` as `opts.attachmentName`.
### 18) Dev-only Root Mount Error
File: `packages/app/src/entry.tsx`
Completed (2026-01-20):
- Localized the DEV-only root mount error via `error.dev.rootNotFound`.
- Selected locale using `navigator.languages` to match the app's default detection.
## Prioritized Implementation Plan
No remaining work in `packages/app/` as of 2026-01-20.
## Suggested Key Naming Conventions
To keep the dictionaries navigable, prefer grouping by surface:
- `error.page.*`, `error.chain.*`
- `prompt.*` (including examples, tooltips, empty states, toasts)
- `provider.connect.*` (auth flow UI + validation + success)
- `session.share.*` (publish/unpublish/copy link)
- `context.usage.*` (Tokens/Usage/Cost + call to action)
- `lsp.*` (and potentially `mcp.*` if expanded)
- `notification.session.*`
- `toast.file.*`, `toast.session.*`
Also reuse existing command keys for tooltip titles whenever possible (e.g. `command.sidebar.toggle`, `command.review.toggle`, `command.terminal.toggle`).
## Appendix: Remaining Files At-a-Glance
Pages:
- (none)
Components:
- (none)
Context:
- (none)
Utils:
- (none)

specs/07-ui-i18n-audit.md Normal file

@@ -0,0 +1,156 @@
# UI i18n Audit (Remaining Work)
Scope: `packages/ui/` (and consumers: `packages/app/`, `packages/enterprise/`)
Date: 2026-01-20
This report documents the remaining user-facing strings in `packages/ui/src` that are still hardcoded (not routed through a translation function), and proposes an i18n architecture that works long-term across multiple packages.
## Current State
- `packages/app/` already has i18n via `useLanguage().t("...")` with dictionaries in `packages/app/src/i18n/en.ts` and `packages/app/src/i18n/zh.ts`.
- `packages/ui/` is a shared component library used by:
- `packages/app/src/pages/session.tsx` (Session UI)
- `packages/enterprise/src/routes/share/[shareID].tsx` (shared session rendering)
- `packages/ui/` currently has **hardcoded English UI copy** in several components (notably `session-turn.tsx`, `session-review.tsx`, `message-part.tsx`).
- `packages/enterprise/` does not currently have an i18n system, so any i18n approach must be usable without depending on `packages/app/`.
## Decision: How We Should Add i18n To `@opencode-ai/ui`
Introduce a small, app-agnostic i18n interface in `packages/ui/` and keep UI-owned strings in UI-owned dictionaries.
Why this is the best long-term shape:
- Keeps dependency direction clean: `packages/enterprise/` (and any future consumer) can translate UI without importing `packages/app/` dictionaries.
- Avoids prop-drilling strings through shared components.
- Allows each package to own its strings while still rendering a single, coherent locale in the product.
### Proposed Architecture
1. **UI provides an i18n context (no persistence)** (see the sketch after this list)
- Add `packages/ui/src/context/i18n.tsx`:
- Exports `I18nProvider` and `useI18n()`.
- Context value includes:
- `t(key, params?)` translation function (template interpolation supported by the consumer).
- `locale()` accessor for locale-sensitive formatting (Luxon/Intl).
- Context should have a safe default (English) so UI components can render even if a consumer forgets the provider.
2. **UI owns UI strings (dictionaries live in UI)**
- Add `packages/ui/src/i18n/en.ts` and `packages/ui/src/i18n/zh.ts`.
- Export them from `@opencode-ai/ui` via `packages/ui/package.json` exports (e.g. `"./i18n/*": "./src/i18n/*.ts"`).
- Use a clear namespace prefix for all UI keys to avoid collisions:
- Recommended: `ui.*` (e.g. `ui.sessionReview.title`).
3. **Consumers merge dictionaries and provide `t`/`locale` once**
- `packages/app/`:
- Keep `packages/app/src/context/language.tsx` as the source of truth for locale selection/persistence.
- Extend it to merge UI dictionaries into its translation table.
- Add a tiny bridge provider in `packages/app/src/app.tsx` to feed `useLanguage()` into `@opencode-ai/ui`'s `I18nProvider`.
- `packages/enterprise/`:
- Add a lightweight locale detector (similar to `packages/app/src/context/language.tsx`), likely based on `Accept-Language` on the server and/or `navigator.languages` on the client.
- Merge `@opencode-ai/ui` dictionaries and (optionally) enterprise-local dictionaries.
- Wrap the share route in `I18nProvider`.
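A minimal sketch of the proposed `packages/ui/src/context/i18n.tsx` (items 1-3 above), assuming Solid's context API and a flat `Record<string, string>` dictionary shape; names and the `{{param}}` interpolation format are illustrative:

```tsx
import { createContext, useContext, type ParentProps } from "solid-js"
// Assumed: the UI dictionary is a flat Record<string, string>.
import en from "../i18n/en"

export interface I18n {
  t: (key: string, params?: Record<string, string | number>) => string
  locale: () => string
}

// {{param}} interpolation; consumers may swap in their own implementation.
function interpolate(template: string, params?: Record<string, string | number>) {
  if (!params) return template
  return template.replace(/\{\{(\w+)\}\}/g, (_, name: string) => String(params[name] ?? ""))
}

// Safe English default so UI components render even without a provider.
const fallback: I18n = {
  t: (key, params) => interpolate((en as Record<string, string>)[key] ?? key, params),
  locale: () => "en",
}

const I18nContext = createContext<I18n>(fallback)

export function I18nProvider(props: ParentProps<{ value: I18n }>) {
  return <I18nContext.Provider value={props.value}>{props.children}</I18nContext.Provider>
}

export const useI18n = () => useContext(I18nContext)
```

Consumers would build the `I18n` value once (from `useLanguage()` in `packages/app/`, or an `Accept-Language`-based detector in `packages/enterprise/`) and pass it to `I18nProvider` at the root.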
### Key Naming Conventions (UI)
- Prefer component + semantic grouping:
- `ui.sessionReview.title`
- `ui.sessionReview.diffStyle.unified`
- `ui.sessionReview.diffStyle.split`
- `ui.sessionReview.expandAll`
- `ui.sessionReview.collapseAll`
- For `SessionTurn`:
- `ui.sessionTurn.steps.show`
- `ui.sessionTurn.steps.hide`
- `ui.sessionTurn.summary.response`
- `ui.sessionTurn.diff.more` (use templating: `Show more changes ({{count}})`)
- `ui.sessionTurn.retry.retrying` / `ui.sessionTurn.retry.inSeconds` / etc (avoid string concatenation that is English-order dependent)
- Status text:
- `ui.sessionTurn.status.delegating`
- `ui.sessionTurn.status.planning`
- `ui.sessionTurn.status.gatheringContext`
- `ui.sessionTurn.status.searchingCode`
- `ui.sessionTurn.status.searchingWeb`
- `ui.sessionTurn.status.makingEdits`
- `ui.sessionTurn.status.runningCommands`
- `ui.sessionTurn.status.thinking`
- `ui.sessionTurn.status.thinkingWithTopic` (template: `Thinking - {{topic}}`)
- `ui.sessionTurn.status.gatheringThoughts`
- `ui.sessionTurn.status.consideringNextSteps` (fallback)
## Locale-Sensitive Formatting (UI)
`SessionTurn` currently formats durations via Luxon `Interval.toDuration(...).toHuman(...)` without an explicit locale.
When i18n is added:
- Use `useI18n().locale()` and pass locale explicitly:
- Luxon: reconfigure the duration with the active locale before formatting, e.g. `duration.reconfigure({ locale: locale() }).toHuman(...)` (or `.setLocale(locale())` on a `DateTime` where applicable); see the sketch below.
- Intl numbers/currency (if added later): `new Intl.NumberFormat(locale(), ...)`.
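For example, a small sketch (assuming a Luxon version where `Duration.reconfigure` and `toHuman` are available; the `locale` argument would come from `useI18n()`):

```ts
import { DateTime, Interval } from "luxon"

// Hypothetical SessionTurn helper: format an elapsed duration in an explicit
// locale rather than Luxon's process-wide default.
function formatElapsed(start: DateTime, end: DateTime, locale: string) {
  return Interval.fromDateTimes(start, end)
    .toDuration(["minutes", "seconds"])
    .reconfigure({ locale })
    .toHuman({ unitDisplay: "narrow" })
}
```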
## Initial Hardcoded Strings (Audit Findings)
These are the highest-impact UI surfaces to translate first.
### 1) `packages/ui/src/components/session-review.tsx`
- `Session changes`
- `Unified` / `Split`
- `Collapse all` / `Expand all`
### 2) `packages/ui/src/components/session-turn.tsx`
- Tool/task status strings (e.g. `Delegating work`, `Searching the codebase`)
- Steps toggle labels: `Show steps` / `Hide steps`
- Summary section title: `Response`
- Pagination CTA: `Show more changes ({{count}})`
### 3) `packages/ui/src/components/message-part.tsx`
Examples (non-exhaustive):
- `Error`
- `Edit`
- `Write`
- `Type your own answer`
- `Review your answers`
### 4) Additional Hardcoded Strings (Full Audit)
Found during a full `packages/ui/src/components` + `packages/ui/src/context` sweep:
- `packages/ui/src/components/list.tsx`
- `Loading`
- `No results`
- `No results for "{{filter}}"`
- `packages/ui/src/components/message-nav.tsx`
- `New message`
- `packages/ui/src/components/text-field.tsx`
- `Copied`
- `Copy to clipboard`
- `packages/ui/src/components/image-preview.tsx`
- `Image preview` (alt text)
## Prioritized Implementation Plan
1. Completed (2026-01-20): Add `@opencode-ai/ui` i18n context (`packages/ui/src/context/i18n.tsx`) + export it.
2. Completed (2026-01-20): Add UI dictionaries (`packages/ui/src/i18n/en.ts`, `packages/ui/src/i18n/zh.ts`) + export them.
3. Completed (2026-01-20): Wire `I18nProvider` into:
- `packages/app/src/app.tsx`
- `packages/enterprise/src/app.tsx`
4. Completed (2026-01-20): Convert `packages/ui/src/components/session-review.tsx` and `packages/ui/src/components/session-turn.tsx` to use `useI18n().t(...)`.
5. Completed (2026-01-20): Convert `packages/ui/src/components/message-part.tsx`.
6. Completed (2026-01-20): Do a full `packages/ui/src/components` + `packages/ui/src/context` audit for additional hardcoded copy.
## Notes / Risks
- **SSR:** Enterprise share pages render on the server. Ensure the i18n provider works in SSR and does not assume `window`/`navigator`.
- **Key collisions:** Use a consistent `ui.*` prefix to avoid clashing with app keys.
- **Fallback behavior:** Decide whether missing keys should:
- fall back to English, or
- render the key (useful for catching missing translations).


@@ -0,0 +1,255 @@
## App E2E Smoke Suite (CI)
Implement a small set of high-signal, low-flake Playwright tests to run in CI.
These tests are intended to catch regressions in the “core shell” of the app (navigation, dialogs, prompt UX, file viewer, terminal), without relying on model output.
---
### Summary
Add 6 smoke tests to `packages/app/e2e/`:
- Settings dialog: open, switch tabs, close
- Prompt slash command: `/open` opens the file picker dialog
- Prompt @mention: `@<file>` inserts a file pill token
- Model picker: open model selection and choose a model
- File viewer: open a known file and assert contents render
- Terminal: open terminal, verify Ghostty mounts, create a second terminal
---
### Progress
- [x] 1. Settings dialog open / switch / close (`packages/app/e2e/settings.spec.ts`)
- [x] 2. Prompt slash command path: `/open` opens file picker (`packages/app/e2e/prompt-slash-open.spec.ts`)
- [x] 3. Prompt @mention inserts a file pill token (`packages/app/e2e/prompt-mention.spec.ts`)
- [x] 4. Model selection UI works end-to-end (`packages/app/e2e/model-picker.spec.ts`)
- [x] 5. File viewer renders real file content (`packages/app/e2e/file-viewer.spec.ts`)
- [x] 8. Terminal init + create new terminal (`packages/app/e2e/terminal-init.spec.ts`)
---
### Goals
- Tests run reliably in CI using the existing local runner (`packages/app/script/e2e-local.ts`).
- Cover “wiring” regressions across UI + backend APIs:
- dialogs + command routing
- prompt contenteditable parsing
- file search + file read + code viewer render
- terminal open + pty creation + Ghostty mount
- Avoid assertions that depend on LLM output.
- Keep runtime low (these should be “smoke”, not full workflows).
---
### Non-goals
- Verifying complex model behavior, streaming correctness, or tool call semantics.
- Testing provider auth flows (CI has no secrets).
- Testing share, MCP, or LSP download flows (disabled in the e2e runner).
---
### Current State
Existing tests in `packages/app/e2e/` already cover:
- Home renders + server picker opens
- Directory route redirects to `/session`
- Sidebar collapse/expand
- Command palette opens/closes
- Basic session open + prompt input + (optional) prompt/reply flow
- File open via palette (but shallow assertion: tab exists)
- Terminal panel toggles (but doesn't assert Ghostty mounted)
- Context panel open
We want to add a focused smoke layer that increases coverage of the most regression-prone UI paths.
---
### Proposed Tests
All tests should use the shared fixtures in:
- `packages/app/e2e/fixtures.ts` (for `sdk`, `directory`, `gotoSession`)
- `packages/app/e2e/utils.ts` (for `modKey`, `promptSelector`, `terminalToggleKey`)
Prefer creating new spec files rather than overloading existing ones, so it's easy to run these tests as a group via grep.
Suggested file layout:
- `packages/app/e2e/settings.spec.ts`
- `packages/app/e2e/prompt-slash-open.spec.ts`
- `packages/app/e2e/prompt-mention.spec.ts`
- `packages/app/e2e/model-picker.spec.ts`
- `packages/app/e2e/file-viewer.spec.ts`
- `packages/app/e2e/terminal-init.spec.ts`
Name each test with a “smoke” prefix so CI can run only this suite if needed.
#### 1) Settings dialog open / switch / close
Purpose: catch regressions in dialog infra, settings rendering, tabs.
Steps:
1. `await gotoSession()`.
2. Open settings via keybind (preferred for stability): ``await page.keyboard.press(`${modKey}+Comma`)``.
3. Assert dialog visible (`page.getByRole('dialog')`).
4. Click the "Shortcuts" tab (role `tab`, name "Shortcuts").
5. Assert shortcuts view renders (e.g. the search field placeholder or reset button exists).
6. Close with `Escape` and assert dialog removed.
Notes:
- If `Meta+Comma` / `Control+Comma` key name is flaky, fall back to clicking the sidebar settings icon.
- Favor role-based selectors over brittle class selectors.
- If `Escape` doesn't dismiss reliably (tooltips can intercept), fall back to clicking the dialog overlay.
Implementation: `packages/app/e2e/settings.spec.ts` (a sketch follows the acceptance criteria below).
Acceptance criteria:
- Settings dialog opens reliably.
- Switching to Shortcuts tab works.
- Escape closes the dialog.
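A sketch of this test, assuming `fixtures.ts` exposes `gotoSession` as a Playwright fixture and `modKey` is a plain string; selectors follow the steps above and may need adjustment to the real DOM:

```ts
import { expect } from "@playwright/test"
import { test } from "./fixtures" // assumed to expose gotoSession as a fixture
import { modKey } from "./utils"

test("smoke: settings dialog opens, switches tabs, and closes", async ({ page, gotoSession }) => {
  await gotoSession()
  await page.keyboard.press(`${modKey}+Comma`)

  const dialog = page.getByRole("dialog")
  await expect(dialog).toBeVisible()

  await dialog.getByRole("tab", { name: "Shortcuts" }).click()
  await expect(dialog.getByRole("textbox")).toBeVisible() // shortcuts search field

  await page.keyboard.press("Escape")
  await expect(dialog).not.toBeVisible()
})
```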
#### 2) Prompt slash command path: `/open` opens file picker
Purpose: validate contenteditable parsing + slash popover + builtin command dispatch (distinct from `mod+p`).
Steps:
1. `await gotoSession()`.
2. Click prompt (`promptSelector`).
3. Type `/open`.
4. Press `Enter` (while slash popover is active).
5. Assert a dialog appears and contains a textbox (the file picker search input).
6. Close dialog with `Escape`.
Acceptance criteria:
- `/open` triggers `file.open` and opens `DialogSelectFile`.
#### 3) Prompt @mention inserts a file pill token
Purpose: validate the most fragile prompt behavior: structured tokens inside contenteditable.
Steps:
1. `await gotoSession()`.
2. Focus the prompt.
3. Type `@packages/app/package.json`.
4. Press `Tab` to accept the active @mention suggestion.
5. Assert a pill element is inserted:
- `page.locator('[data-component="prompt-input"] [data-type="file"][data-path="packages/app/package.json"]')` exists.
Acceptance criteria:
- A file pill is inserted and has the expected `data-*` attributes.
- Prompt editor remains interactable (e.g. typing a trailing space works).
#### 4) Model selection UI works end-to-end
Purpose: validate model list rendering, selection wiring, and prompt footer updating.
Implementation approach:
- Use `/model` to open the model selection dialog (builtin command).
Steps:
1. `await gotoSession()`.
2. Focus prompt, type `/model`, press `Enter`.
3. In the model dialog, pick a visible model that is not the current selection (if available).
4. Use the search field to filter to that model (use its id from the list item's `data-key` to avoid time-based model visibility drift).
5. Select the filtered model.
6. Assert dialog closed.
7. Assert the prompt footer now shows the chosen model name.
Acceptance criteria:
- A model can be selected without requiring provider auth.
- The prompt footer reflects the new selection.
#### 5) File viewer renders real file content
Purpose: ensure file search + open + file.read + code viewer render all work.
Steps:
1. `await gotoSession()`.
2. Open file picker (either `mod+p` or `/open`).
3. Search for `packages/app/package.json`.
4. Click the matching file result.
5. Ensure the new file tab is active (click the `package.json` tab if needed so the viewer mounts).
6. Assert the code viewer contains a known substring:
- `"name": "@opencode-ai/app"`.
7. Optionally assert the file tab is active and visible.
Acceptance criteria:
- Code view shows expected content (not just “tab exists”).
#### 8) Terminal init + create new terminal
Purpose: ensure terminal isn't only “visible”, but actually mounted and functional.
Steps:
1. `await gotoSession()`.
2. Open terminal with `terminalToggleKey` (currently `Control+Backquote`).
3. Assert terminal container exists and is visible: `[data-component="terminal"]`.
4. Assert Ghostty textarea exists: `[data-component="terminal"] textarea`.
5. Create a new terminal via keybind (`terminal.new` is `ctrl+alt+t`).
6. Assert terminal tab count increases to 2.
Acceptance criteria:
- Ghostty mounts (textarea present).
- Creating a new terminal results in a second tab.
---
### CI Stability + Flake Avoidance
These tests run with `fullyParallel: true` in `packages/app/playwright.config.ts`. Keep them isolated and deterministic.
- Avoid ordering-based assertions: never assume a “first” session/project/file is stable unless you filtered by unique text.
- Prefer deterministic targets:
- use `packages/app/package.json` rather than bare `package.json` (multiple hits possible)
- for models, avoid hardcoding a single model id; pick from the visible list and filter by its `data-key` instead
- Prefer robust selectors:
- role selectors: `getByRole('dialog')`, `getByRole('textbox')`, `getByRole('tab')`
- stable data attributes already present: `promptSelector`, `[data-component="terminal"]`
- Keep tests local and fast:
- do not submit prompts that require real model replies
- avoid `page.waitForTimeout`; use `expect(...).toBeVisible()` and `expect.poll` when needed
- Watch for silent UI failures:
- capture `page.on('pageerror')` and fail the test if any are emitted (see the fixture sketch after this list)
- optionally capture console errors (`page.on('console', ...)`) and fail on `type==='error'`
- Cleanup:
- these tests should not need to create sessions
- if a test ever creates sessions or PTYs directly, clean up with SDK calls in `finally`
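A sketch of the page-error guard, written as a shared fixture override so every spec gets it for free; the assertion message and teardown placement are illustrative:

```ts
import { test as base, expect } from "@playwright/test"

// Extend the base test so any uncaught page error fails the test at teardown,
// catching silent UI failures that assertions alone would miss.
export const test = base.extend({
  page: async ({ page }, use) => {
    const errors: Error[] = []
    page.on("pageerror", (error) => errors.push(error))
    await use(page)
    expect(errors, "uncaught page errors").toHaveLength(0)
  },
})
```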
---
### Validation Plan
Run locally:
- `cd packages/app`
- `bun run test:e2e:local -- --grep smoke`
Verify:
- all new tests pass consistently across multiple runs
- overall e2e suite time does not increase significantly
---
### Open Questions
- Should we add a small helper in `packages/app/e2e/utils.ts` for “type into prompt contenteditable” to reduce duplication?
- Do we want to gate these smoke tests with a dedicated `@smoke` naming convention (or `test.describe('smoke', ...)`) so CI can target them explicitly?


@@ -0,0 +1,113 @@
## Session page decomposition
Split `pages/session.tsx` into focused modules without behavior changes.
---
### Summary
`packages/app/src/pages/session.tsx` is still a large (~3,655 LOC) route coordinator. Recent refactoring already extracted `packages/app/src/pages/session/helpers.ts` and `packages/app/src/pages/session/scroll-spy.ts`, but review-panel wiring, message timeline orchestration, file-tab rendering, and terminal coordination remain tightly coupled. This spec continues the decomposition from that updated baseline.
---
### Goals
- Reduce complexity in `packages/app/src/pages/session.tsx`.
- Isolate major concerns into dedicated modules under `packages/app/src/pages/session/`.
- Keep behavior and route/API contracts unchanged.
- Preserve current keyboard, scroll, hash, and review interactions.
---
### Non-goals
- No redesign of session UX.
- No changes to SDK contracts.
- No refactor of `context/global-sync.tsx`, `context/file.tsx`, or `components/prompt-input.tsx` in this workstream.
---
### Parallel ownership (important)
This workstream owns:
- `packages/app/src/pages/session.tsx`
- New files under `packages/app/src/pages/session/**`
This workstream must not edit:
- `packages/app/src/pages/layout.tsx` (owned by spec 10)
- `packages/app/src/components/prompt-input.tsx` (owned by spec 11)
- `packages/app/src/context/global-sync.tsx` (owned by spec 12)
- `packages/app/src/context/file.tsx` (owned by spec 13)
---
### Current state
- File size: ~3,655 LOC.
- Existing extracted modules:
- `packages/app/src/pages/session/helpers.ts` (terminal focus and shared handlers)
- `packages/app/src/pages/session/scroll-spy.ts` (message visibility + active-section tracking)
- High effect density (`createEffect`) and local-state density (`createStore` + `createSignal`) remain in `session.tsx`.
- Remaining interleaved responsibilities:
- review panel state + scrolling integration
- message timeline + hash navigation wiring
- file tab renderers + per-tab scroll sync
- terminal panel and tab coordination
---
### Proposed module split
Build on the existing `packages/app/src/pages/session/` directory and keep current extracted helpers in place. Add modules such as:
- `review-panel.tsx` - review tab rendering and focused diff logic.
- `message-timeline.tsx` - session turn rendering and active message tracking UI wiring.
- `file-tabs.tsx` - file tab content rendering, file scroll persistence, and line-comment overlays.
- `terminal-panel.tsx` - terminal tabs and focus behavior.
- `use-session-page-state.ts` - page-level derived state and imperative handlers.
`packages/app/src/pages/session.tsx` remains the route entry and orchestrator only.
---
### Phased steps
1. Keep `helpers.ts` and `scroll-spy.ts` as baseline; extract any additional pure helpers first (no behavior changes).
2. Extract review panel subtree and related handlers.
3. Extract file-tab subtree and scroll synchronization logic.
4. Extract terminal panel subtree.
5. Move page-level state/effects into `use-session-page-state.ts`.
6. Reduce `session.tsx` to composition and routing glue.
---
### Acceptance criteria
- `packages/app/src/pages/session.tsx` is reduced substantially (target: under 1,400 LOC).
- No user-facing behavior changes in session, review, file tabs, or terminal tabs.
- Event listeners and observers are still correctly cleaned up.
- New modules have clear prop boundaries and minimal hidden coupling.
---
### Validation plan
- Typecheck: `bun run typecheck` (from `packages/app`).
- Targeted e2e checks:
- `e2e/session/session.spec.ts`
- `e2e/files/file-viewer.spec.ts`
- `e2e/terminal/terminal.spec.ts`
- Manual checks:
- message hash navigation
- review diff focus + open-file action
- terminal tab create/reorder/focus behavior
---
### Handoff notes
- Keep module interfaces narrow and data-oriented.
- Prefer extracting code unchanged before doing any cleanup refactors.
- If a helper is useful to other specs, place it under `pages/session/` for now; cross-spec shared utilities can be unified later.


@@ -0,0 +1,105 @@
## Session hot paths
Reduce render work and duplication in `session.tsx`
---
### Summary
`packages/app/src/pages/session.tsx` mixes routing, commands, tab rendering, review panel wiring, terminal focus logic, and message scrolling. This spec targets hot-path performance + local code quality improvements that can ship together in one session-page-focused PR. It should follow the keyed command-registration pattern introduced in `packages/app/src/context/command.tsx`.
---
### Goals
- Render heavy file-tab content only for the active tab
- Deduplicate review-panel wiring used in desktop and mobile paths
- Centralize terminal-focus DOM logic into one helper
- Reduce churn in command registration setup
---
### Non-goals
- Scroll-spy rewrite (covered by `specs/04-scroll-spy-optimization.md`)
- Large routing/layout redesign
- Behavior changes to prompt submission or session history
---
### Parallel execution contract
This spec owns:
- `packages/app/src/pages/session.tsx`
- New files under `packages/app/src/pages/session/*` (if extracted)
This spec should not modify:
- `packages/app/src/context/*`
- `packages/app/src/components/prompt-input.tsx`
- `packages/app/src/components/file-tree.tsx`
---
### Implementation plan
1. Add shared helpers for repeated session-page actions
- Extract `openReviewFile(path)` helper to replace repeated inline `onViewFile` bodies.
- Extract `focusTerminalById(id)` helper and reuse in both:
- terminal active change effect
- terminal drag-end focus restoration
2. Deduplicate review panel construction
- Build a shared review props factory (or local render helper) so desktop/mobile paths do not duplicate comment wiring, `onViewFile`, and classes glue.
- Keep per-surface differences limited to layout classes and diff style.
3. Gate heavy file-tab rendering by active tab (sketched after this list)
- Keep tab trigger list rendered for all opened tabs.
- Render `Tabs.Content` body only for `activeTab()`, plus lightweight placeholders as needed.
- Ensure per-tab scroll state restore still works when reactivating a tab.
4. Reduce command registry reallocation
- Register session commands with a stable key (`command.register("session", ...)`) so remounts replace prior session command entries.
- Move large command-array construction into smaller memoized blocks:
- stable command definitions
- dynamic state fields (`disabled`, titles) as narrow computed closures
- Keep command IDs, keybinds, and behavior identical.
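A rough sketch of the gating pattern from step 3, with assumed accessor names; the real code would live inside the existing tab elements in `session.tsx`:

```tsx
import { For, Show, type Accessor, type JSX } from "solid-js"

interface FileTab {
  id: string
  path: string
}

// Keep a lightweight wrapper per opened tab, but mount the heavy viewer body
// only for the active tab; inactive tabs render a cheap placeholder.
export function FileTabBodies(props: {
  tabs: Accessor<FileTab[]>
  active: Accessor<string>
  children: (tab: FileTab) => JSX.Element
}) {
  return (
    <For each={props.tabs()}>
      {(tab) => (
        <div role="tabpanel" hidden={props.active() !== tab.id}>
          <Show when={props.active() === tab.id} fallback={<div data-placeholder />}>
            {props.children(tab)}
          </Show>
        </div>
      )}
    </For>
  )
}
```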
---
### Acceptance criteria
- File tab bodies are not all mounted at once for large open-tab sets.
- `onViewFile` review behavior is defined in one shared helper.
- Terminal focus query/dispatch logic lives in one function and is reused.
- Session command registration uses a stable key (`"session"`) and `command.register` no longer contains one monolithic inline array with repeated inline handlers for shared actions.
- Session UX remains unchanged for:
- opening files from review
- drag-reordering terminal tabs
- keyboard command execution
---
### Validation plan
- Manual:
- Open 12+ file tabs, switch quickly, verify active tab restore and no blank states.
- Open review panel (desktop and mobile), use "view file" from diffs, verify same behavior as before.
- Drag terminal tab, ensure terminal input focus is restored.
- Run key commands: `mod+p`, `mod+w`, `mod+shift+r`, ``ctrl+` ``.
- Perf sanity:
- Compare CPU usage while switching tabs with many opened files before/after.
---
### Risks and mitigations
- Risk: unmounted tab content loses transient editor state.
- Mitigation: keep persisted scroll/selection restore path intact and verify reactivation behavior.
- Risk: command refactor subtly changes command ordering.
- Mitigation: keep IDs and registration order stable, diff against current command list in dev.


@@ -0,0 +1,99 @@
## File cache accounting
Make file-content eviction bookkeeping O(1)
---
### Summary
`packages/app/src/context/file.tsx` currently recomputes total cached bytes by reducing the entire LRU map inside the eviction loop. This creates avoidable overhead on large file sets. We will switch to incremental byte accounting while keeping LRU behavior unchanged.
---
### Goals
- Remove repeated full-map reductions from eviction path
- Maintain accurate total byte tracking incrementally
- Preserve existing eviction semantics (entry count + byte cap)
---
### Non-goals
- Changing cache limits
- Changing file loading API behavior
- Introducing cross-session shared caches
---
### Parallel execution contract
This spec owns:
- `packages/app/src/context/file.tsx`
- Optional tests in `packages/app/src/context/*file*.test.ts`
This spec should not modify:
- `packages/app/src/pages/session.tsx`
- `packages/app/src/components/file-tree.tsx`
---
### Implementation plan
1. Introduce incremental byte counters (sketched after this list)
- Add module-level `contentBytesTotal`.
- Add helper(s):
- `setContentBytes(path, nextBytes)`
- `removeContentBytes(path)`
- `resetContentBytes()`
2. Refactor LRU touch/update path
- Keep `contentLru` as LRU order map.
- Update byte total only when a path is inserted/updated/removed.
- Ensure replacing existing byte value updates total correctly.
3. Refactor eviction loop
- Use `contentBytesTotal` in loop condition instead of `Array.from(...).reduce(...)`.
- On eviction, remove from both `contentLru` and byte counter.
4. Keep scope reset correct
- On directory scope change, clear inflight maps + `contentLru` + byte counter.
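A sketch of the incremental accounting, assuming the LRU map keeps insertion order with the oldest entries first; helper names mirror the plan above:

```ts
const contentBytes = new Map<string, number>()
let contentBytesTotal = 0

function setContentBytes(path: string, nextBytes: number) {
  // Replacing an existing entry adjusts the total by the delta only.
  contentBytesTotal += nextBytes - (contentBytes.get(path) ?? 0)
  contentBytes.set(path, nextBytes)
}

function removeContentBytes(path: string) {
  contentBytesTotal -= contentBytes.get(path) ?? 0
  contentBytes.delete(path)
}

function resetContentBytes() {
  contentBytes.clear()
  contentBytesTotal = 0
}

// The eviction loop reads the counter instead of reducing the whole map.
function evictContent(lru: Map<string, string>, maxEntries: number, maxBytes: number) {
  for (const path of lru.keys()) {
    if (lru.size <= maxEntries && contentBytesTotal <= maxBytes) break
    lru.delete(path)
    removeContentBytes(path)
  }
}
```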
---
### Acceptance criteria
- `evictContent` performs no full-map reduction per iteration.
- Total bytes remain accurate after:
- loading file A
- loading file B
- force-reloading file A with a different size
- evicting entries
- scope reset
- Existing caps (`MAX_FILE_CONTENT_ENTRIES`, `MAX_FILE_CONTENT_BYTES`) continue to enforce correctly.
---
### Validation plan
- Manual:
- Open many files with mixed sizes and verify old files still evict as before.
- Switch directory scope and verify cache clears safely.
- Optional unit coverage:
- size counter updates on overwrite + delete.
- eviction condition uses count and bytes as expected.
---
### Risks and mitigations
- Risk: byte counter drifts from map contents.
- Mitigation: route all updates through centralized helpers.
- Risk: stale bytes retained on early returns.
- Mitigation: assert cleanup paths in `finally`/scope reset still execute.


@@ -0,0 +1,109 @@
## Layout page decomposition
Split `pages/layout.tsx` into composable layout modules with stable behavior.
---
### Summary
`packages/app/src/pages/layout.tsx` is a 3,000+ line coordinator for sidebar navigation, project/workspace controls, deep-link handling, dialogs, drag/drop overlays, and global shell interactions. This spec decomposes it into focused modules to improve maintainability and reduce merge risk for future features.
---
### Goals
- Break up `packages/app/src/pages/layout.tsx` into smaller units.
- Separate rendering concerns from orchestration/state concerns.
- Keep existing URL/navigation semantics and sidebar behavior.
- Preserve all current command and dialog entry points.
---
### Non-goals
- No major UX redesign of the sidebar or project/workspace UI.
- No changes to server/global-sync contracts.
- No refactor of `pages/session.tsx` in this workstream.
---
### Parallel ownership (important)
This workstream owns:
- `packages/app/src/pages/layout.tsx`
- New files under `packages/app/src/pages/layout/**`
This workstream must not edit:
- `packages/app/src/pages/session.tsx` (spec 09)
- `packages/app/src/components/prompt-input.tsx` (spec 11)
- `packages/app/src/context/global-sync.tsx` (spec 12)
---
### Current state
- File size: ~3,004 LOC.
- Contains mixed concerns:
- app-shell rendering
- sidebar/project/workspace UI + drag/drop
- deep-link handling and startup flows
- workspace reset/delete actions and toasts
---
### Proposed module split
Create `packages/app/src/pages/layout/` modules such as:
- `use-layout-page-state.ts` - orchestration state and handlers.
- `sidebar-panel.tsx` - sidebar shell and root interactions.
- `project-item.tsx` - project-level row and actions.
- `workspace-item.tsx` - workspace row, sessions list, and workspace actions.
- `deep-links.ts` - deep-link parsing/draining/handler utilities.
Keep `packages/app/src/pages/layout.tsx` as route-level composition and provider wiring.
---
### Phased steps
1. Extract pure helpers first (deep-link parse, shared label helpers, small utility functions).
2. Extract workspace subtree and action handlers.
3. Extract project subtree and menu actions.
4. Extract sidebar shell and drag overlay components.
5. Move orchestration logic into `use-layout-page-state.ts`.
6. Reduce `layout.tsx` to composition-only entry.
---
### Acceptance criteria
- `packages/app/src/pages/layout.tsx` is significantly smaller (target: under 1,200 LOC).
- Behavior parity for:
- project open/close/rename
- workspace expand/collapse/reset/delete
- deep-link handling
- drag/drop ordering
- No regressions in keyboard navigation and dialog actions.
---
### Validation plan
- Typecheck: `bun run typecheck` (from `packages/app`).
- Targeted e2e checks:
- `e2e/sidebar/sidebar.spec.ts`
- `e2e/projects/workspaces.spec.ts`
- `e2e/projects/project-edit.spec.ts`
- `e2e/app/navigation.spec.ts`
- Manual check: deep-link open-project flow still opens and navigates correctly.
---
### Handoff notes
- Keep action handlers close to their domain module.
- Do not merge in behavior cleanups during extraction; preserve semantics first.
- If shared components are needed, add them under `pages/layout/` for now to avoid cross-spec conflicts.


@@ -0,0 +1,92 @@
## Layout reactivity
Reduce per-call reactive overhead in `useLayout`
---
### Summary
`packages/app/src/context/layout.tsx` creates reactive effects inside `view(sessionKey)` and `tabs(sessionKey)` each time these helpers are called. Multiple consumers for the same key can accumulate duplicate watchers. This spec simplifies the API internals so calls stay lightweight while preserving behavior.
---
### Goals
- Remove avoidable per-call `createEffect` allocations in `view()` and `tabs()`
- Preserve scroll seeding, pruning, and touch semantics
- Keep external `useLayout` API stable
---
### Non-goals
- Persistence schema migration
- Session tab behavior redesign
- New layout features
---
### Parallel execution contract
This spec owns:
- `packages/app/src/context/layout.tsx`
- `packages/app/src/context/layout-scroll.test.ts` (if updates needed)
This spec should not modify:
- `packages/app/src/pages/session.tsx`
- `packages/app/src/components/session/*`
---
### Implementation plan
1. Consolidate key-touch logic
- Introduce shared internal helper, e.g. `ensureSessionKey(key)` that performs:
- `touch(key)`
- `scroll.seed(key)`
2. Remove per-call effects in `view()` / `tabs()` (see the sketch after this list)
- Replace internal `createEffect(on(key, ...))` usage with lazy key reads inside accessors/memos.
- Ensure reads still invoke `ensureSessionKey` at safe points.
3. Keep return API stable
- Preserve current method names and behavior:
- `view(...).scroll`, `setScroll`, `terminal`, `reviewPanel`, `review`
- `tabs(...).active`, `all`, `open`, `close`, `move`, etc.
4. Verify pruning behavior
- Ensure session-key pruning still runs when key set grows and active key changes.
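A simplified sketch of the coupling; plain data structures stand in for the real Solid store, and the point is the lazy `ensureSessionKey` call inside accessors rather than a per-call `createEffect`:

```ts
const touched = new Set<string>()
const scrollState: Record<string, number> = {}

function ensureSessionKey(key: string) {
  touched.add(key)                                // touch(key): feeds pruning
  if (!(key in scrollState)) scrollState[key] = 0 // scroll.seed(key)
}

// view() reads the key lazily in each accessor instead of registering a
// key-change effect every time it is called.
export function view(key: () => string) {
  const ensure = () => {
    const k = key()
    ensureSessionKey(k)
    return k
  }
  return {
    scroll: () => scrollState[ensure()],
    setScroll: (value: number) => {
      scrollState[ensure()] = value
    },
  }
}
```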
---
### Acceptance criteria
- `view()` and `tabs()` no longer instantiate per-call key-change effects.
- Existing callers do not require API changes.
- Scroll restore and tab persistence still work across session navigation.
- No regressions in handoff/pending-message behavior.
---
### Validation plan
- Manual:
- Navigate across multiple sessions; verify tabs + review open state + scroll positions restore.
- Toggle terminal/review panels and confirm persisted state remains consistent.
- Tests:
- Update/add targeted tests for key seeding/pruning if behavior changed.
---
### Risks and mitigations
- Risk: subtle key-touch ordering changes affect prune timing.
- Mitigation: keep `touch` and `seed` coupled through one helper and verify prune boundaries.
- Risk: removing effects misses updates for dynamic accessor keys.
- Mitigation: ensure every public accessor path reads current key and calls helper.


@@ -0,0 +1,121 @@
## Prompt input and optimistic-state consolidation
Decompose prompt-input and unify optimistic message mutations.
---
### Summary
`packages/app/src/components/prompt-input.tsx` has already been partially decomposed and is now ~1,391 LOC. Editor DOM helpers, attachments, history, and submit flow were extracted into `packages/app/src/components/prompt-input/*.ts`, but optimistic mutation ownership and some UI/controller responsibilities are still split across call sites. This spec continues from that refactored baseline.
---
### Goals
- Split `prompt-input.tsx` into modular UI + controller pieces.
- Centralize optimistic message add/remove behavior behind sync-context APIs.
- Remove unsafe cast path around optimistic parts (`as unknown as Part[]`).
- Keep existing prompt UX and submission semantics unchanged.
---
### Non-goals
- No redesign of prompt input visuals.
- No changes to session protocol or backend APIs.
- No changes to unrelated page modules (`pages/session.tsx`, `pages/layout.tsx`).
---
### Parallel ownership (important)
This workstream owns:
- `packages/app/src/components/prompt-input.tsx`
- New files under `packages/app/src/components/prompt-input/**`
- `packages/app/src/context/sync.tsx` (optimistic API surface only)
This workstream must not edit:
- `packages/app/src/pages/session.tsx` (spec 09)
- `packages/app/src/pages/layout.tsx` (spec 10)
- `packages/app/src/context/global-sync.tsx` (spec 12)
- `packages/app/src/context/file.tsx` (spec 13)
---
### Current state
- File size: ~1,391 LOC for `prompt-input.tsx`.
- Existing extracted modules:
- `prompt-input/editor-dom.ts`
- `prompt-input/attachments.ts`
- `prompt-input/history.ts`
- `prompt-input/submit.ts`
- Optimistic mutation and request-part casting still need consolidation (including remaining `as unknown as Part[]` in submit path).
- Remaining concerns still tightly coupled in `prompt-input.tsx`:
- slash/mention UI rendering and keyboard orchestration
- context pill interactions and focus behavior
- composition glue across history/attachments/submit
---
### Proposed structure
Build on the existing `packages/app/src/components/prompt-input/` modules by adding/further splitting modules such as:
- `use-prompt-composer.ts` - state machine for submit/abort/history.
- `build-request-parts.ts` - typed request-part construction.
- `slash-popover.tsx` - slash command list rendering.
- `context-items.tsx` - context pills and interactions.
Keep existing lower-level modules (`attachments.ts`, `editor-dom.ts`, `history.ts`, `submit.ts`) and narrow their responsibilities where needed.
Add sync-level optimistic APIs (in `context/sync.tsx` or `context/sync-optimistic.ts`):
- `session.optimistic.add(...)`
- `session.optimistic.remove(...)`
Prompt input should call these APIs instead of directly mutating message/part stores.
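A sketch of the API surface, with stand-in message/part shapes; the `store` argument abstracts the real Solid store wiring:

```ts
interface OptimisticMessage {
  id: string
  role: "user"
}
interface OptimisticPart {
  id: string
  messageID: string
  type: string
}

export function createOptimistic(store: {
  insert: (message: OptimisticMessage, parts: OptimisticPart[]) => void
  remove: (messageID: string) => void
}) {
  return {
    // Typed all the way through: no `as unknown as Part[]` at the call site.
    add(message: OptimisticMessage, parts: OptimisticPart[]) {
      store.insert(message, parts)
      return () => store.remove(message.id) // rollback handle for failed sends
    },
    remove(messageID: string) {
      store.remove(messageID)
    },
  }
}
```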
---
### Phased steps
1. Extract typed request-part builder (likely from `prompt-input/submit.ts`) to remove ad hoc casting.
2. Introduce sync optimistic APIs with current behavior.
3. Replace remaining direct `produce(...)` optimistic mutations with optimistic APIs.
4. Extract remaining UI subtrees (slash popover, context items, toolbar controls).
5. Extract controller hook and keep route component as composition shell.
---
### Acceptance criteria
- Optimistic update logic exists in one place only.
- `prompt-input.tsx` is significantly smaller (target: under 1,200 LOC).
- Prompt submit/abort/history behavior remains unchanged.
- No `as unknown as Part[]` in optimistic request construction path.
---
### Validation plan
- Typecheck: `bun run typecheck` (from `packages/app`).
- Targeted e2e checks:
- `e2e/prompt/prompt.spec.ts`
- `e2e/prompt/context.spec.ts`
- `e2e/prompt/prompt-slash-open.spec.ts`
- `e2e/prompt/prompt-mention.spec.ts`
- Manual check:
- submit with file/image/context attachments
- abort in-flight turn
- history up/down restore behavior
---
### Handoff notes
- Preserve sequence semantics around optimistic insert, worktree wait, send, and rollback.
- Keep sync optimistic API data-oriented and reusable by future callers.
- Do not mix this with broader sync/global-sync refactors in the same diff.


@@ -0,0 +1,105 @@
## Global sync domain split
Refactor `context/global-sync.tsx` into domain modules while preserving behavior.
---
### Summary
`packages/app/src/context/global-sync.tsx` is a large multi-domain module (1,000+ LOC) that currently owns queue scheduling, bootstrap, child store creation, persistence bridges, session trimming, and event reduction. This workstream splits it into clear domains without changing runtime behavior.
---
### Goals
- Decompose global sync internals into maintainable modules.
- Keep `useGlobalSync()` public API unchanged.
- Isolate pure logic (session trimming, ordering, grouping) from side effects.
- Keep event handling deterministic and easier to test.
---
### Non-goals
- No protocol/API changes to server events.
- No behavior changes in session ordering, trimming, or cache semantics.
- No changes to page-level UI logic.
---
### Parallel ownership (important)
This workstream owns:
- `packages/app/src/context/global-sync.tsx`
- New files under `packages/app/src/context/global-sync/**`
This workstream must not edit:
- `packages/app/src/context/file.tsx` (spec 13)
- `packages/app/src/components/prompt-input.tsx` (spec 11)
- `packages/app/src/pages/session.tsx` and `packages/app/src/pages/layout.tsx` (specs 09/10)
---
### Current state
- Single large module with many responsibilities.
- Event reducer is embedded in component lifecycle code.
- Queue/scheduler, bootstrap, and child-store lifecycle are tightly interwoven.
---
### Proposed module split
Create `packages/app/src/context/global-sync/` modules like:
- `types.ts` - shared types.
- `queue.ts` - refresh queue and drain scheduler.
- `child-store.ts` - child store creation, persistence wiring, cache maps.
- `session-trim.ts` - pure session sorting/trimming helpers.
- `bootstrap.ts` - global and per-directory bootstrap flows.
- `event-reducer.ts` - event handlers for SDK event stream.
Keep `global-sync.tsx` as provider/composition entry point.
---
### Phased steps
1. Extract pure helpers (`cmp`, session trim/recent logic) first.
2. Extract queue/drain scheduler.
3. Extract child-store creation and persisted cache wiring.
4. Extract bootstrap flows.
5. Extract event reducer and wire into existing listener.
6. Keep API surface stable and documented.
---
### Acceptance criteria
- Public API of `useGlobalSync()` remains backward compatible.
- `global-sync.tsx` is substantially reduced (target: under 500 LOC).
- Event handling logic is isolated and easier to trace.
- No behavior regressions in project/session/provider sync.
---
### Validation plan
- Typecheck: `bun run typecheck` (from `packages/app`).
- Targeted e2e checks:
- `e2e/app/session.spec.ts`
- `e2e/sidebar/sidebar-session-links.spec.ts`
- `e2e/projects/projects-switch.spec.ts`
- Manual checks:
- switching directories/projects still hydrates child stores correctly
- session list/pagination behavior remains stable
---
### Handoff notes
- Favor function extraction with unchanged code first.
- Keep event handler ordering explicit; avoid implicit fallthrough behaviors.
- Add focused tests only for extracted pure helpers if practical, but avoid broad test-suite changes here.


@@ -0,0 +1,96 @@
## Context metrics shared
Unify duplicate session usage calculations
---
### Summary
`session-context-tab.tsx` and `session-context-usage.tsx` both compute overlapping session metrics (cost, last assistant token totals, provider/model context usage). This creates duplicate loops and raises drift risk. We will centralize shared calculations in one helper module and have both components consume it.
---
### Goals
- Compute shared session usage metrics in one place
- Remove duplicate loops for cost and latest-token context usage
- Keep UI output unchanged in both components
---
### Non-goals
- Rewriting the detailed context breakdown estimator logic
- Changing translations or labels
- Moving metrics into backend API responses
---
### Parallel execution contract
This spec owns:
- `packages/app/src/components/session/session-context-tab.tsx`
- `packages/app/src/components/session-context-usage.tsx`
- New helper in `packages/app/src/components/session/*` or `packages/app/src/utils/*`
This spec should not modify:
- `packages/app/src/pages/session.tsx`
- `packages/app/src/context/sync.tsx`
---
### Implementation plan
1. Add shared metrics helper (sketched after this list)
- Create helper for raw metrics from message list + provider map, e.g.:
- `totalCost`
- `lastAssistantWithTokens`
- `tokenTotal`
- `tokenUsagePercent`
- provider/model labels
- Return raw numeric values; keep locale formatting in consumers.
2. Add memoization guard
- Use reference-based memoization (e.g. by message-array identity) inside helper or component-level memo to avoid duplicate recalculation on unchanged arrays.
3. Migrate both components
- Replace duplicated loops in:
- `session-context-tab.tsx`
- `session-context-usage.tsx`
- Keep existing UI structure and i18n keys unchanged.
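A sketch of the helper, using simplified stand-ins for the SDK message/token shapes; the real field names should be taken from the existing components:

```ts
interface TokenUsage {
  input: number
  output: number
  reasoning: number
  cache: { read: number; write: number }
}
interface AssistantMessage {
  role: "assistant"
  cost?: number
  tokens?: TokenUsage
  providerID: string
  modelID: string
}

// One pass over the message list replaces the duplicated loops.
export function sessionUsageMetrics(messages: AssistantMessage[], contextLimit?: number) {
  let totalCost = 0
  let lastAssistantWithTokens: AssistantMessage | undefined
  for (const message of messages) {
    totalCost += message.cost ?? 0
    if (message.tokens) lastAssistantWithTokens = message
  }
  const tokens = lastAssistantWithTokens?.tokens
  const tokenTotal = tokens
    ? tokens.input + tokens.output + tokens.reasoning + tokens.cache.read + tokens.cache.write
    : undefined
  return {
    totalCost,
    lastAssistantWithTokens,
    tokenTotal,
    // Raw ratio; consumers format the percentage per locale.
    tokenUsagePercent: tokenTotal !== undefined && contextLimit ? tokenTotal / contextLimit : undefined,
  }
}
```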
---
### Acceptance criteria
- Shared cost + token calculations are defined in one module.
- Both components read from the shared helper.
- Rendered values remain identical for:
- total cost
- token totals
- usage percentage
- provider/model fallback labels
---
### Validation plan
- Manual:
- Open session context tab and compare values with header/context indicator tooltip.
- Verify values update correctly while new assistant messages stream in.
- Regression:
- locale change still formats numbers/currency correctly.
---
### Risks and mitigations
- Risk: helper changes semantic edge cases (no provider, no model, missing token fields).
- Mitigation: preserve existing fallback behavior (`"—"`, null percent).
- Risk: memoization over-caches stale values.
- Mitigation: key cache by message-array reference and dependent IDs only.


@@ -0,0 +1,111 @@
## File context domain split
Refactor `context/file.tsx` into focused modules with unchanged API.
---
### Summary
`packages/app/src/context/file.tsx` still combines path normalization, file-content caching/eviction, file-tree loading, watcher event handling, and file-view persistence orchestration. Recent refactoring extracted generic scoped-cache primitives to `packages/app/src/utils/scoped-cache.ts`, but most file-domain behavior remains in one module. This spec separates those concerns while preserving the existing `useFile()` interface.
---
### Goals
- Keep `useFile()` API stable for all callers.
- Extract independent domains into dedicated modules.
- Improve readability and lower risk for future file-tree/perf changes.
- Preserve current caching and watcher semantics.
---
### Non-goals
- No redesign of file tree UI.
- No change to backend file APIs.
- No simultaneous refactor of `components/file-tree.tsx` in this workstream.
---
### Parallel ownership (important)
This workstream owns:
- `packages/app/src/context/file.tsx`
- New files under `packages/app/src/context/file/**`
- `packages/app/src/utils/scoped-cache.ts` (only when required for file-view cache extraction)
This workstream must not edit:
- `packages/app/src/context/global-sync.tsx` (spec 12)
- `packages/app/src/pages/session.tsx` (spec 09)
- `packages/app/src/components/prompt-input.tsx` (spec 11)
---
### Current state
- File size: ~751 LOC.
- `packages/app/src/utils/scoped-cache.ts` now exists as a shared cache primitive used by file view persistence.
- Multiple domains in one module:
- path normalization/parsing
- LRU content memory management
- tree node/directory state management
- event-driven watcher invalidation
- per-session view cache bootstrapping
---
### Proposed module split
Create `packages/app/src/context/file/` modules such as:
- `path.ts` - normalize/strip helpers.
- `content-cache.ts` - content LRU + byte caps.
- `view-cache.ts` - per-session file view persistence cache (building on `createScopedCache`).
- `tree-store.ts` - directory/node store and list/expand/collapse actions.
- `watcher.ts` - watcher event handling and invalidation routines.
`file.tsx` remains the provider entry that composes these modules.
---
### Phased steps
1. Extract path helper functions with no behavior changes.
2. Extract content cache and eviction logic.
3. Extract file-specific view-cache loading/pruning logic on top of `createScopedCache`.
4. Extract tree-store list/refresh/toggle actions.
5. Extract watcher update handler and wire cleanup.
6. Keep `useFile()` return shape unchanged.
---
### Acceptance criteria
- `useFile()` API remains backward compatible.
- `context/file.tsx` is reduced significantly (target: under 350 LOC).
- Tree loading/refresh and content eviction behavior remain unchanged.
- Watcher-driven reload behavior still works for changed/added/deleted files.
---
### Validation plan
- Typecheck: `bun run typecheck` (from `packages/app`).
- Targeted e2e checks:
- `e2e/files/file-tree.spec.ts`
- `e2e/files/file-viewer.spec.ts`
- `e2e/files/file-open.spec.ts`
- Manual checks:
- directory expand/collapse and refresh
- large file navigation and cache reuse
- watcher-driven updates in active file tabs
---
### Handoff notes
- Keep tree/data stores colocated with their mutation helpers.
- Avoid changing persisted key names or cache key shapes in this pass.
- Save broader API cleanups for a follow-up once modules are stable.


@@ -0,0 +1,88 @@
## File tree fetches
Make directory listing triggers explicit and minimal
---
### Summary
`packages/app/src/components/file-tree.tsx` currently invokes `file.tree.list(path)` from a generic effect in each tree instance. Even with inflight guards, this pattern causes avoidable list calls and makes load behavior harder to reason about. This spec tightens fetch triggers.
---
### Goals
- Avoid redundant list invocations from passive rerenders
- Fetch directory data only when needed (mount + expansion + explicit refresh)
- Keep tree behavior unchanged for users
---
### Non-goals
- Replacing recursive tree rendering with virtualization
- Changing file-tree visual design
- Backend/API changes for file listing
---
### Parallel execution contract
This spec owns:
- `packages/app/src/components/file-tree.tsx`
This spec should not modify:
- `packages/app/src/context/file.tsx`
- `packages/app/src/pages/session.tsx`
---
### Implementation plan
1. Replace broad list effect with explicit triggers (see the sketch after this list)
- Load root path on mount.
- For nested directories, list only when:
- node is expanded, or
- parent explicitly requests refresh.
2. Guard expansion-driven fetches
- Keep `file.tree.expand(path)` as the primary source of truth for expansion fetches.
- Ensure passive rerenders do not retrigger `list(path)` calls for already loaded dirs.
3. Keep filter auto-expand behavior
- Preserve existing "allowed filter" directory auto-expansion.
- Ensure auto-expanded directories still fetch exactly once unless force refresh occurs.
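A sketch of the trigger discipline, with an assumed `expandedPaths()` accessor; the loaded-set guard is the key piece:

```ts
import { createEffect, onMount } from "solid-js"

interface TreeApi {
  list: (path: string) => Promise<void>
  expandedPaths: () => string[] // assumed accessor for expanded directories
}

// Root loads once on mount; children load only when a directory is expanded,
// and the guard keeps passive rerenders from re-listing loaded directories.
export function useTreeFetches(tree: TreeApi, rootPath: string) {
  const loaded = new Set<string>()
  const listOnce = (path: string) => {
    if (loaded.has(path)) return
    loaded.add(path)
    void tree.list(path)
  }
  onMount(() => listOnce(rootPath))
  createEffect(() => {
    for (const path of tree.expandedPaths()) listOnce(path)
  })
}
```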
---
### Acceptance criteria
- `file-tree.tsx` no longer calls `file.tree.list(path)` from an unscoped rerender effect.
- Expanding a folder still loads its children correctly.
- Filtering by `allowed` still opens and shows required parent directories.
- No regressions in change/all tabs where `FileTree` is used.
---
### Validation plan
- Manual:
- Expand/collapse deep directory trees repeatedly.
- Switch between "changes" and "all" tree tabs.
- Open review, click files, verify tree stays responsive.
- Optional instrumentation:
- count list calls per user action and compare before/after.
---
### Risks and mitigations
- Risk: directories fail to load when expansion timing changes.
- Mitigation: rely on `expand()` path and verify for root + nested nodes.
- Risk: filter-driven auto-expand misses one level.
- Mitigation: keep existing auto-expand iteration and add regression checks.


@@ -0,0 +1,87 @@
## Comments indexing
Avoid repeated flatten+sort for comment aggregates
---
### Summary
`packages/app/src/context/comments.tsx` derives `all` by flattening all file comment arrays and sorting on every change. This is simple but can become expensive with many comments. We will maintain an indexed aggregate structure incrementally.
---
### Goals
- Keep `comments.list(file)` behavior unchanged
- Make `comments.all()` retrieval near O(1) for reads
- Preserve chronological ordering guarantees
---
### Non-goals
- Persisting comments in a new schema
- Adding new comment metadata fields
- UI changes for comment display
---
### Parallel execution contract
This spec owns:
- `packages/app/src/context/comments.tsx`
- Optional tests for comments context
This spec should not modify:
- `packages/app/src/pages/session.tsx`
- `packages/ui/src/components/line-comment.tsx`
---
### Implementation plan
1. Add aggregate index state (sketched after this list)
- Maintain `commentsByFile` (existing) plus an `allComments` array in chronological order.
- Keep both updated through the same mutator paths.
2. Update mutators
- `add`: append new comment to file list and aggregate list.
- `remove`: remove from file list and aggregate list by id/file.
- `clear`: reset both structures and focus/active state.
3. Simplify selectors
- `list(file)` reads file list directly.
- `all()` returns pre-indexed aggregate list without per-read flatten+sort.
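A sketch of the dual-structure mutators, with a stand-in `Comment` shape; the real context uses a Solid store, but the bookkeeping is the same:

```ts
interface Comment {
  id: string
  file: string
  time: number
}

const commentsByFile = new Map<string, Comment[]>()
let allComments: Comment[] = []

function add(comment: Comment) {
  const list = commentsByFile.get(comment.file) ?? []
  commentsByFile.set(comment.file, [...list, comment])
  // Append preserves chronological order while `time` is monotonic;
  // otherwise insert at the correct index here.
  allComments = [...allComments, comment]
}

function remove(file: string, id: string) {
  commentsByFile.set(file, (commentsByFile.get(file) ?? []).filter((c) => c.id !== id))
  allComments = allComments.filter((c) => !(c.file === file && c.id === id))
}

function clear() {
  commentsByFile.clear()
  allComments = []
}

// all() is now a plain read: no per-call flatten + sort.
const all = () => allComments
```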
---
### Acceptance criteria
- `comments.all()` no longer flattens and sorts every reactive run.
- Comment order stays chronological by `time`.
- `add/remove/clear/focus/active` semantics remain unchanged.
---
### Validation plan
- Manual:
- Add multiple comments across different files.
- Remove one comment and verify both file-level and global views update correctly.
- Submit prompt (which clears comments) and verify reset behavior.
- Optional unit test:
- add/remove/clear keeps aggregate ordering and integrity.
---
### Risks and mitigations
- Risk: aggregate list and per-file lists diverge.
- Mitigation: funnel all writes through centralized mutators; avoid direct store writes elsewhere.
- Risk: ID collision edge cases.
- Mitigation: keep UUID creation unchanged and remove by `file + id` pair.


@@ -0,0 +1,108 @@
## Server health and row dedupe
Unify server health checks and deduplicate server-row UI logic.
---
### Summary
Server health logic is duplicated across multiple files, and server row rendering/truncation logic is repeated in both the status popover and server dialog. This creates drift risk and inconsistent behavior. This spec centralizes health checks and row rendering while preserving existing UX.
---
### Goals
- Introduce one shared server-health checker.
- Use consistent timeout and error semantics in all server health call sites.
- Deduplicate repeated server row truncation/tooltip behavior.
- Keep current polling interval and status semantics unless explicitly changed.
---
### Non-goals
- No redesign of the status popover or server dialog.
- No changes to server persistence model.
- No broad refactor of unrelated status tabs (MCP/LSP/plugins).
---
### Parallel ownership (important)
This workstream owns:
- `packages/app/src/components/dialog-select-server.tsx`
- `packages/app/src/components/status-popover.tsx`
- `packages/app/src/context/server.tsx`
- New files under `packages/app/src/components/server/**` and/or `packages/app/src/utils/server-health.ts`
This workstream must not edit:
- `packages/app/src/components/terminal.tsx` (spec 15)
- `packages/app/src/pages/session.tsx` and `packages/app/src/pages/layout.tsx` (specs 09/10)
---
### Current state
- Duplicate `checkHealth` implementation in:
- `components/dialog-select-server.tsx`
- `components/status-popover.tsx`
- Similar health check logic in `context/server.tsx`.
- Duplicate row truncation + resize listener logic in status and dialog server lists.
---
### Proposed approach
1. Add shared health utility (sketched after this list):
- `checkServerHealth(url, fetch, opts)`
- one timeout strategy
- one return shape: `{ healthy: boolean, version?: string }`
2. Add shared server row primitive:
- common rendering for status dot, truncated name/version handling, tooltip content
- optional action slots for per-screen controls
3. Adopt utility and row primitive in both consumers.
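A sketch of the shared checker; the endpoint path, response shape, and default timeout are assumptions to be aligned with the existing implementations:

```ts
export interface ServerHealth {
  healthy: boolean
  version?: string
}

export async function checkServerHealth(
  url: string,
  fetcher: typeof fetch = fetch,
  opts: { timeoutMs?: number } = {},
): Promise<ServerHealth> {
  const controller = new AbortController()
  const timer = setTimeout(() => controller.abort(), opts.timeoutMs ?? 3_000)
  try {
    // "/health" is an assumed endpoint; reuse whatever the current checks hit.
    const response = await fetcher(new URL("/health", url), { signal: controller.signal })
    if (!response.ok) return { healthy: false }
    const body = (await response.json()) as { version?: string }
    return { healthy: true, version: body.version }
  } catch {
    return { healthy: false }
  } finally {
    clearTimeout(timer)
  }
}
```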
---
### Phased steps
1. Create `utils/server-health.ts` and migrate all health call sites.
2. Create shared row component (`components/server/server-row.tsx`).
3. Replace duplicated row logic in server dialog and status popover.
4. Confirm polling and active/default server behavior still match existing UX.
---
### Acceptance criteria
- Exactly one app-level server health check implementation remains.
- Server row truncation/tooltip behavior is shared, not duplicated.
- No regressions when switching active/default server.
- Existing status dot semantics are preserved.
---
### Validation plan
- Typecheck: `bun run typecheck` (from `packages/app`).
- Targeted e2e checks:
- `e2e/status/status-popover.spec.ts`
- `e2e/app/server-default.spec.ts`
- Manual checks:
- add/edit/remove server
- blocked unhealthy server behavior
- default server toggles and persistence
---
### Handoff notes
- Keep shared server row API minimal and composable.
- Avoid introducing new global state for this refactor.
- Prefer deterministic helper behavior over UI-specific branching inside the utility.


@@ -0,0 +1,104 @@
## Prompt input split
Modularize `prompt-input.tsx` without behavior changes
---
### Summary
`packages/app/src/components/prompt-input.tsx` is a very large component that combines editor DOM parsing, popovers, history, drag/drop + paste uploads, worktree/session creation, optimistic messages, and send/abort flow. This spec splits it into focused modules so future changes are safer.
---
### Goals
- Reduce `prompt-input.tsx` complexity and file size
- Extract cohesive logic into testable hooks/helpers
- Keep runtime behavior and UX unchanged
---
### Non-goals
- Replacing contenteditable editor approach
- Major UX redesign of composer controls
- API contract changes for prompt submission
---
### Parallel execution contract
This spec owns:
- `packages/app/src/components/prompt-input.tsx`
- New files under `packages/app/src/components/prompt-input/*`
This spec should not modify:
- `packages/app/src/pages/session.tsx`
- `packages/app/src/context/prompt.tsx` (except minor type-only imports if needed)
---
### Implementation plan
1. Extract editor DOM helpers
- Move pure DOM/selection helpers into `prompt-input/editor-dom.ts` (sketched after this list):
- `createTextFragment`
- `getNodeLength`
- `getTextLength`
- cursor get/set helpers
2. Extract history controller
- Move prompt history read/write/navigation logic into `prompt-input/history.ts` hook.
- Keep existing persisted keys and history semantics unchanged.
3. Extract attachment interactions
- Move image/file paste + drag/drop + file-input attachment flows to `prompt-input/attachments.ts` hook.
4. Extract submit pipeline
- Move send/abort/optimistic message pipeline to `prompt-input/submit.ts` service/hook.
- Keep existing error toasts, worktree handling, and rollback behavior.
5. Keep composition shell stable
- `PromptInput` component remains the integration shell that wires hooks + JSX.
- Preserve exported component API and props.
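
To make step 1 concrete, `prompt-input/editor-dom.ts` could export shapes like the following. The bodies here are illustrative guesses at the named helpers; the real implementations should be moved over unchanged:
```ts
// Illustrative stand-ins for the helpers named in step 1.
export function createTextFragment(text: string): DocumentFragment {
  const fragment = document.createDocumentFragment()
  fragment.append(document.createTextNode(text))
  return fragment
}

export function getNodeLength(node: Node): number {
  // DOM Range offsets count characters in text nodes and children elsewhere.
  return node.nodeType === Node.TEXT_NODE
    ? (node.textContent ?? "").length
    : node.childNodes.length
}

export function getTextLength(root: Node): number {
  return (root.textContent ?? "").length
}
```
Because these are pure functions over DOM nodes, they can also be unit-tested under the happy-dom harness without mounting the component.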
---
### Acceptance criteria
- `prompt-input.tsx` becomes primarily orchestration + view code.
- Extracted modules contain the heavy imperative logic.
- All existing behaviors remain intact:
- slash and @ popovers
- history up/down navigation
- image attach/paste/drag-drop
- shell mode submit/abort
- optimistic message + rollback on failure
---
### Validation plan
- Manual regression checklist:
- type prompt, submit, stop, retry
- use `/` command selection and `@` selector
- history navigation with arrows
- paste image, drag image, remove attachment
- start in new session + worktree create path
- failure path restores prompt and context comments
---
### Risks and mitigations
- Risk: subtle ordering changes in submit rollback logic.
- Mitigation: migrate logic mechanically first, then cleanup.
- Risk: editor selection bugs after helper extraction.
- Mitigation: keep existing cursor helpers unchanged and add focused manual checks.


@@ -0,0 +1,106 @@
## Runtime adapter type safety
Reduce unsafe casts at browser and third-party integration boundaries.
---
### Summary
Several integration points rely on `as any` or `unknown as` casts (terminal internals, speech recognition, add-on internals, generic trigger props). This spec introduces typed adapters and narrow interfaces to improve maintainability and make type errors actionable.
---
### Goals
- Remove or significantly reduce unsafe casts in scoped files.
- Introduce explicit adapter interfaces around unstable third-party APIs.
- Preserve behavior with no UX changes.
- Improve maintainability of terminal and speech integrations.
---
### Non-goals
- No server health dedupe work (owned by spec 14).
- No large architectural changes to terminal or speech subsystems.
- No changes to business logic semantics.
---
### Parallel ownership (important)
This workstream owns:
- `packages/app/src/components/terminal.tsx`
- `packages/app/src/utils/speech.ts`
- `packages/app/src/addons/serialize.ts`
- `packages/app/src/components/dialog-select-model.tsx`
- New utility files under `packages/app/src/utils/**` related to adapter typing
This workstream must not edit:
- `components/dialog-select-server.tsx`, `components/status-popover.tsx`, `context/server.tsx` (spec 14)
- `components/prompt-input.tsx` (spec 11)
---
### Current state
- Explicit `as any` appears in `serialize.ts` and `speech.ts`.
- Multiple `unknown as` casts in `terminal.tsx` for option/disposable access.
- Generic trigger props in `dialog-select-model.tsx` are spread with an `as any` cast.
---
### Proposed approach
1. Add narrow adapter types for third-party internals:
- terminal option setter/disposable handles
- speech recognition constructor on `window`
- serialize addon internal terminal buffer access
2. Introduce tiny helper guards/utilities (sketched after this list):
- `isDisposable(value): value is { dispose(): void }`
- `hasSetOption(value): value is { setOption(...): void }`
3. Replace broad casts with adapter functions and runtime checks.
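
The guard helpers in step 2 could look like this; a sketch, with narrowed shapes drawn from the names above rather than xterm.js's actual typings:
```ts
export function isDisposable(value: unknown): value is { dispose(): void } {
  return (
    typeof value === "object" &&
    value !== null &&
    typeof (value as { dispose?: unknown }).dispose === "function"
  )
}

export function hasSetOption(value: unknown): value is { setOption(key: string, val: unknown): void } {
  return (
    typeof value === "object" &&
    value !== null &&
    typeof (value as { setOption?: unknown }).setOption === "function"
  )
}
```
Call sites then read `if (isDisposable(handle)) handle.dispose()` instead of casting through `unknown`.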
---
### Phased steps
1. Refactor terminal helpers (`setOption`, disposal cleanups) to typed guards.
2. Refactor speech recognition window access to a typed constructor lookup (sketched after this list).
3. Replace `serialize.ts` `as any` internals with explicit local interface.
4. Remove `dialog-select-model.tsx` `as any` trigger props cast via stricter generic typing.
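
For step 2, a typed window lookup might look like the following. `MinimalSpeechRecognition` is a deliberately narrow local interface (an assumption listing only plausible members), since the full Web Speech API typings are not in TypeScript's default DOM lib:
```ts
// Only the members the integration actually uses; widen as needed.
interface MinimalSpeechRecognition {
  start(): void
  stop(): void
  onresult: ((event: unknown) => void) | null
}

type SpeechRecognitionCtor = new () => MinimalSpeechRecognition

export function getSpeechRecognitionCtor(): SpeechRecognitionCtor | undefined {
  const w = window as typeof window & {
    SpeechRecognition?: SpeechRecognitionCtor
    webkitSpeechRecognition?: SpeechRecognitionCtor
  }
  // Prefer the unprefixed constructor; fall back to the WebKit-prefixed one.
  return w.SpeechRecognition ?? w.webkitSpeechRecognition
}
```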
---
### Acceptance criteria
- No `as any` remains in the scoped files (any unavoidable cases are documented inline).
- `unknown as` usage in scoped files is minimized and justified.
- Typecheck passes with no new suppression comments.
- Runtime behavior remains unchanged.
---
### Validation plan
- Typecheck: `bun run typecheck` (from `packages/app`).
- Targeted e2e checks:
- `e2e/terminal/terminal.spec.ts`
- `e2e/models/model-picker.spec.ts`
- Manual checks:
- terminal open/connect/resize/cleanup
- speech start/stop and interim/final behavior
---
### Handoff notes
- Prefer small typed wrapper functions over inline complex narrowing.
- Keep adapter names explicit and local to their integration point.
- If a cast cannot be removed safely, add a short comment describing why.


@@ -0,0 +1,107 @@
## i18n hardening and parity
Strengthen locale correctness and remove remaining hardcoded copy.
---
### Summary
The app has broad translation coverage but still has maintainability gaps: locale dictionaries are typed as `Partial`, some non-English dictionaries contain English values for specific keys, and a few user-facing strings are still hardcoded in components/pages. This spec hardens i18n guarantees and cleans up remaining drift.
---
### Goals
- Enforce stricter dictionary key parity across all app locales.
- Remove known English fallback strings from non-English locale files.
- Localize remaining hardcoded user-facing strings in scoped files.
- Keep existing localization architecture (`useLanguage().t(...)`) intact.
---
### Non-goals
- No translation quality rewrite for all strings.
- No locale expansion beyond existing languages.
- No changes to non-user-facing log/diagnostic strings.
---
### Parallel ownership (important)
This workstream owns:
- `packages/app/src/context/language.tsx`
- `packages/app/src/i18n/*.ts`
- `packages/app/src/components/dialog-custom-provider.tsx`
- `packages/app/src/pages/directory-layout.tsx`
This workstream must not edit:
- `pages/session.tsx`, `pages/layout.tsx`, `components/prompt-input.tsx`
- server/terminal integration files owned by specs 14/15
---
### Current state
- Locale files are large and manually maintained.
- Non-English locales are typed with `Partial<Record<Keys, string>>`, which allows keys to go missing silently.
- Known untranslated strings exist for keys like:
- `command.session.previous.unseen`
- `command.session.next.unseen`
- Some user-facing strings remain hardcoded in scoped files.
---
### Proposed approach
1. Tighten locale typing (see the sketch after this list):
- Move from `Partial<Record<Keys, string>>` to stricter parity enforcement.
- Keep `en.ts` as source-of-truth key set.
2. Fix known untranslated key values in non-English dictionaries.
3. Localize scoped hardcoded strings by adding translation keys and using `language.t(...)`.
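
One way to get compile-time parity is the `satisfies` pattern below, shown with a toy key set. The keys are the ones named in this spec, but the English and German values are placeholders, and the real `en.ts` is far larger:
```ts
const en = {
  "command.session.previous.unseen": "Previous unseen session",
  "command.session.next.unseen": "Next unseen session",
} as const

export type TranslationKey = keyof typeof en

// Unlike Partial<Record<TranslationKey, string>>, `satisfies` turns a
// missing or extra key into a compile-time error.
export const de = {
  "command.session.previous.unseen": "Vorherige ungelesene Sitzung",
  "command.session.next.unseen": "Nächste ungelesene Sitzung",
} satisfies Record<TranslationKey, string>
```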
---
### Phased steps
1. Add/adjust shared locale typing pattern for parity safety.
2. Update all locale files to satisfy stricter typing.
3. Translate known English carry-over keys in non-English dictionaries.
4. Replace hardcoded copy in:
- `components/dialog-custom-provider.tsx`
- `pages/directory-layout.tsx`
5. Run typecheck and parity checks.
---
### Acceptance criteria
- Locale files enforce full key parity against `en` (compile-time).
- No known English carry-over values remain for the targeted keys in non-English locales.
- Scoped hardcoded user-facing strings are replaced with translation keys.
- Typecheck passes.
---
### Validation plan
- Typecheck: `bun run typecheck` (from `packages/app`).
- Grep sanity checks:
- targeted keys no longer English in non-English locales
- scoped files no longer contain hardcoded user-facing copy
- Manual spot checks in at least 2 locales (for example: `de`, `zh`).
---
### Handoff notes
- Keep key naming consistent with existing conventions.
- Avoid broad copy changes outside scoped files to reduce review surface.
- If translation wording is uncertain, keep it simple and literal for now; quality passes can follow.


@@ -0,0 +1,82 @@
## Terminal cache scope
Clarify workspace-only terminal cache semantics
---
### Summary
`packages/app/src/context/terminal.tsx` accepts `(dir, session)` but currently keys cache entries as `${dir}:${WORKSPACE_KEY}`. The behavior is workspace-scoped, but the API shape suggests session-scoped caching. This spec aligns naming and implementation to avoid confusion and future bugs.
---
### Goals
- Make terminal cache scope explicit (workspace-scoped)
- Remove misleading unused session-keying surface
- Preserve existing runtime behavior
---
### Non-goals
- Changing terminal persistence behavior
- Moving terminals to per-session isolation
- UI changes to terminal tabs
---
### Parallel execution contract
This spec owns:
- `packages/app/src/context/terminal.tsx`
This spec should not modify:
- `packages/app/src/pages/session.tsx`
- `packages/app/src/components/session/session-sortable-terminal-tab.tsx`
---
### Implementation plan
1. Rename internals for clarity
- Update internal function names/variables from session-oriented to workspace-oriented where applicable.
2. Remove the unused session cache-key parameter (see the sketch after this list)
- Simplify `load`/factory signatures so keying intent is explicit.
- Keep key format workspace-only by directory.
3. Add inline documentation
- Add a short comment near cache-key creation clarifying why terminals are shared across sessions in the same workspace.
4. Keep behavior stable
- Ensure active terminal, tab order, clone/new/close behavior remain unchanged.
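
A sketch of the clarified keying; the `WORKSPACE_KEY` constant comes from the summary above, while the helper name is an assumption:
```ts
const WORKSPACE_KEY = "workspace"

// Terminals are intentionally shared by all sessions in a workspace:
// only the directory participates in the cache key.
export function terminalCacheKey(dir: string): string {
  return `${dir}:${WORKSPACE_KEY}`
}
```
Dropping the session parameter from the factory makes the workspace scope visible at every call site.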
---
### Acceptance criteria
- No unused session-derived cache key logic remains.
- Code communicates workspace-scoped terminal lifecycle clearly.
- No functional changes to terminal operations.
---
### Validation plan
- Manual:
- Create multiple terminals, navigate between sessions in the same workspace, and confirm state continuity.
- Switch workspace directory, confirm separate terminal state.
---
### Risks and mitigations
- Risk: accidental behavior change to session-scoped terminals.
- Mitigation: keep cache key unchanged; refactor naming/signatures only.


@@ -0,0 +1,101 @@
## Unit test foundation
Establish reliable unit coverage for core app logic.
---
### Summary
`packages/app` is still e2e-first, but recent refactoring added a first wave of active source-unit tests (session helpers/scroll spy, prompt-input modules, file-tree, comments/layout/terminal/file context, and scoped-cache). This spec turns that momentum into a stable, explicit unit-test baseline for CI and local runs, and unblocks the remaining skipped legacy suites.
---
### Goals
- Add a clear unit-test command for app source tests.
- Unskip and stabilize existing skipped unit tests.
- Add fast tests for high-value pure logic.
- Keep unit suite independent of full e2e environment.
---
### Non-goals
- No replacement of e2e tests.
- No broad product-code refactors unless required to make logic testable.
- No flaky browser-automation tests added here.
---
### Parallel ownership (important)
This workstream owns:
- `packages/app/package.json` (test scripts only)
- `packages/app/happydom.ts` (if harness tweaks are needed)
- `packages/app/src/**/*.test.ts`
- `packages/app/src/**/*.test.tsx`
This workstream should avoid editing product code files owned by other specs, unless a tiny testability export is strictly required.
---
### Current state
- Active unit coverage now exists across several `src/**/*.test.*` files (including context, pages/session, components/prompt-input, and utils).
- Remaining skipped legacy suites:
- `src/context/layout-scroll.test.ts` (`test.skip`)
- `src/addons/serialize.test.ts` (`describe.skip`)
- `package.json` scripts still focus on Playwright e2e and do not expose a dedicated `test:unit` entrypoint.
---
### Proposed approach
1. Add dedicated unit-test script(s), for example:
- `test:unit` using Bun test + happydom preload where needed.
2. Unskip and stabilize remaining skipped legacy tests:
- make `layout-scroll.test.ts` deterministic
- enable a reliable subset of `serialize.test.ts` (or split smoke vs heavy integration cases)
3. Add/expand fast unit tests for high-value pure logic not yet covered (example sketch after this list):
- keybind parsing/formatting/matching (`context/command.tsx` exports)
- worktree state machine (`utils/worktree.ts`)
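
As a shape reference for step 3, a standalone Bun unit test could look like this. `formatKeybind` is a hypothetical helper used for illustration; real tests would import the actual exports from `context/command.tsx` or `utils/worktree.ts`:
```ts
import { describe, expect, test } from "bun:test"

// Hypothetical pure helper standing in for the real keybind exports.
function formatKeybind(parts: string[]): string {
  return parts.map((part) => (part.length === 1 ? part.toUpperCase() : part)).join("+")
}

describe("keybind formatting", () => {
  test("uppercases single-character keys and joins with +", () => {
    expect(formatKeybind(["ctrl", "k"])).toBe("ctrl+K")
  })
})
```
Because nothing here touches the DOM or a server, the file runs under plain `bun test` with no Playwright setup.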
---
### Phased steps
1. Wire `test:unit` in `package.json`.
2. Make existing skipped tests runnable and stable.
3. Add at least 2 new unit test files for core pure logic.
4. Ensure unit suite can run standalone without Playwright server setup.
---
### Acceptance criteria
- `bun run test:unit` exists and passes locally.
- No full-file `describe.skip`/`test.skip` remains in `packages/app/src/**/*.test.*` (unless documented as intentionally quarantined with reason).
- Unit suite includes meaningful assertions for keybind + worktree logic.
- Runtime for unit suite remains fast (target: under 15 seconds locally, excluding first install).
---
### Validation plan
- Run: `bun run test:unit`.
- Run: `bun run typecheck`.
- Verify unit tests can execute without starting full app/backend servers.
---
### Handoff notes
- Keep tests focused on the real implementation; do not duplicate business logic inside the tests.
- Avoid mocks where practical; prefer real small-scope code paths.
- If integration-heavy serialize cases remain flaky, separate them into a clearly named non-default test target.


@@ -0,0 +1,51 @@
## Parallel workstream map
Use this as the assignment sheet for running multiple agents at once.
---
### Workstreams
1. `specs/09-session-page-decomposition.md`
2. `specs/10-layout-page-decomposition.md`
3. `specs/11-prompt-input-and-optimistic-state.md`
4. `specs/12-global-sync-domain-split.md`
5. `specs/13-file-context-domain-split.md`
6. `specs/14-server-health-and-row-dedupe.md`
7. `specs/15-runtime-adapter-type-safety.md`
8. `specs/16-i18n-hardening-and-parity.md`
9. `specs/17-unit-test-foundation.md`
---
### File-ownership matrix
| Spec | Primary ownership | Avoid editing |
| ---- | ----------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------ |
| 09 | `pages/session.tsx`, `pages/session/**` | `pages/layout.tsx`, `components/prompt-input.tsx`, `context/global-sync.tsx`, `context/file.tsx` |
| 10 | `pages/layout.tsx`, `pages/layout/**` | `pages/session.tsx`, `components/prompt-input.tsx`, `context/global-sync.tsx` |
| 11 | `components/prompt-input.tsx`, `components/prompt-input/**`, `context/sync.tsx` (optimistic API only) | `pages/session.tsx`, `pages/layout.tsx`, `context/global-sync.tsx`, `context/file.tsx` |
| 12 | `context/global-sync.tsx`, `context/global-sync/**` | `context/file.tsx`, `components/prompt-input.tsx`, page files |
| 13 | `context/file.tsx`, `context/file/**`, `utils/scoped-cache.ts` (only when file-view cache extraction needs it) | `context/global-sync.tsx`, `components/prompt-input.tsx`, page files |
| 14 | `components/dialog-select-server.tsx`, `components/status-popover.tsx`, `context/server.tsx`, shared server utility/component | terminal/speech/serialize files |
| 15 | `components/terminal.tsx`, `utils/speech.ts`, `addons/serialize.ts`, `components/dialog-select-model.tsx`, adapter utilities | server status/dialog/context files |
| 16 | `context/language.tsx`, `i18n/*.ts`, `components/dialog-custom-provider.tsx`, `pages/directory-layout.tsx` | major page/context refactors |
| 17 | `package.json` (test scripts), `happydom.ts`, `src/**/*.test.*` | product code files in other specs unless strictly needed |
---
### Recommended execution order (if all start together)
- Start all 9 in parallel.
- Merge low-conflict streams first: 12, 13, 14, 15, 16, 17.
- Then merge 09, 10, 11 (largest diff sizes and highest rebase probability).
---
### Integration checkpoint
After all streams merge, run a full verification pass in `packages/app`:
- `bun run typecheck`
- `bun run test:unit` (from spec 17)
- targeted e2e smoke for session/layout/prompt/server/terminal flows
