mirror of https://github.com/anomalyco/opencode.git
synced 2026-03-04 05:33:55 +00:00

Compare commits

14 Commits

- dc2c9c7673
- 07025ef5b9
- 6dd903c348
- 37fdbe7fa6
- c27779e1b2
- 8d232d35eb
- 98309fdd58
- 50cb49fa6c
- ec56e95b9c
- ce9900dca0
- 5527b4ea4d
- 1f3ee037aa
- e5c5c1df12
- 0a451c73c9
Deleted file (Turkish locale glossary):

```diff
@@ -1,38 +0,0 @@
-# tr Glossary
-
-## Sources
-
-- PR #15835: https://github.com/anomalyco/opencode/pull/15835
-
-## Do Not Translate (Locale Additions)
-
-- `OpenCode` (preserve casing in prose, docs, and UI copy)
-- Keep lowercase `opencode` in commands, package names, paths, URLs, and other exact identifiers
-- `<TAB>` stays the literal key token in code blocks; use `Tab` for the nearby explanatory label in prose
-- Commands, flags, file paths, and code literals (keep exactly as written)
-
-## Preferred Terms
-
-These are PR-backed wording preferences and may evolve.
-
-| English / Context | Preferred | Notes |
-| ------------------------- | --------------------------------------- | ------------------------------------------------------------- |
-| available in beta | `beta olarak mevcut` | Prefer this over `beta olarak kullanılabilir` |
-| privacy-first | `Gizlilik öncelikli tasarlandı` | Prefer this over `Önce gizlilik için tasarlandı` |
-| connect your local models | `yerel modellerinizi bağlayabilirsiniz` | Use the fuller, more direct action phrase |
-| `<TAB>` key label | `Tab` | Use `Tab` in prose; keep `<TAB>` in literal UI or code blocks |
-| cross-platform | `cross-platform (tüm platformlarda)` | Keep the English term, add a short clarification when helpful |
-
-## Guidance
-
-- Prefer natural Turkish phrasing over literal translation
-- Merge broken sentence fragments into one clear sentence when the source is a single thought
-- Keep product naming consistent: `OpenCode` in prose, `opencode` only for exact technical identifiers
-- When an English technical term is intentionally kept, add a short Turkish clarification only if it improves readability
-
-## Avoid
-
-- Avoid `beta olarak kullanılabilir` when `beta olarak mevcut` fits
-- Avoid `Önce gizlilik için tasarlandı`; use the more natural reviewed wording instead
-- Avoid `Sekme` for the translated key label in prose when referring to `<TAB>`
-- Avoid changing `opencode` to `OpenCode` inside commands, URLs, package names, or code literals
```
AGENTS.md (11 lines changed)
````diff
@@ -20,17 +20,6 @@
 Prefer single word names for variables and functions. Only use multiple words if necessary.
 
-### Naming Enforcement (Read This)
-
-THIS RULE IS MANDATORY FOR AGENT WRITTEN CODE.
-
-- Use single word names by default for new locals, params, and helper functions.
-- Multi-word names are allowed only when a single word would be unclear or ambiguous.
-- Do not introduce new camelCase compounds when a short single-word alternative is clear.
-- Before finishing edits, review touched lines and shorten newly introduced identifiers where possible.
-- Good short names to prefer: `pid`, `cfg`, `err`, `opts`, `dir`, `root`, `child`, `state`, `timeout`.
-- Examples to avoid unless truly required: `inputPID`, `existingClient`, `connectTimeout`, `workerPath`.
-
 ```ts
 // Good
 const foo = 1
````
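The single-word naming rule in the removed section can be sketched as follows (hypothetical code, not from the repository; it only illustrates the convention):

```typescript
// Sketch of the naming convention: single-word identifiers by default,
// multi-word only where a single word would be ambiguous.
function spawn(cmd: string, timeout: number) {
  const opts = { cmd, timeout } // not `spawnOptions`
  const dir = "/tmp"            // not `workingDirectory`
  const root = "/"              // not `projectRootPath`
  return { opts, dir, root }
}
```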
bun.lock (50 lines changed)
```diff
@@ -25,7 +25,7 @@
     },
     "packages/app": {
       "name": "@opencode-ai/app",
-      "version": "1.2.16",
+      "version": "1.2.15",
       "dependencies": {
         "@kobalte/core": "catalog:",
         "@opencode-ai/sdk": "workspace:*",
@@ -75,7 +75,7 @@
     },
     "packages/console/app": {
       "name": "@opencode-ai/console-app",
-      "version": "1.2.16",
+      "version": "1.2.15",
       "dependencies": {
         "@cloudflare/vite-plugin": "1.15.2",
         "@ibm/plex": "6.4.1",
@@ -109,7 +109,7 @@
     },
     "packages/console/core": {
       "name": "@opencode-ai/console-core",
-      "version": "1.2.16",
+      "version": "1.2.15",
       "dependencies": {
         "@aws-sdk/client-sts": "3.782.0",
         "@jsx-email/render": "1.1.1",
@@ -136,7 +136,7 @@
     },
     "packages/console/function": {
       "name": "@opencode-ai/console-function",
-      "version": "1.2.16",
+      "version": "1.2.15",
       "dependencies": {
         "@ai-sdk/anthropic": "2.0.0",
         "@ai-sdk/openai": "2.0.2",
@@ -160,7 +160,7 @@
     },
     "packages/console/mail": {
       "name": "@opencode-ai/console-mail",
-      "version": "1.2.16",
+      "version": "1.2.15",
       "dependencies": {
         "@jsx-email/all": "2.2.3",
         "@jsx-email/cli": "1.4.3",
@@ -184,7 +184,7 @@
     },
     "packages/desktop": {
       "name": "@opencode-ai/desktop",
-      "version": "1.2.16",
+      "version": "1.2.15",
      "dependencies": {
         "@opencode-ai/app": "workspace:*",
         "@opencode-ai/ui": "workspace:*",
@@ -217,7 +217,7 @@
     },
     "packages/enterprise": {
       "name": "@opencode-ai/enterprise",
-      "version": "1.2.16",
+      "version": "1.2.15",
       "dependencies": {
         "@opencode-ai/ui": "workspace:*",
         "@opencode-ai/util": "workspace:*",
@@ -246,7 +246,7 @@
     },
     "packages/function": {
       "name": "@opencode-ai/function",
-      "version": "1.2.16",
+      "version": "1.2.15",
       "dependencies": {
         "@octokit/auth-app": "8.0.1",
         "@octokit/rest": "catalog:",
@@ -262,7 +262,7 @@
     },
     "packages/opencode": {
       "name": "opencode",
-      "version": "1.2.16",
+      "version": "1.2.15",
       "bin": {
         "opencode": "./bin/opencode",
       },
@@ -304,8 +304,8 @@
         "@opencode-ai/sdk": "workspace:*",
         "@opencode-ai/util": "workspace:*",
         "@openrouter/ai-sdk-provider": "1.5.4",
-        "@opentui/core": "0.1.86",
-        "@opentui/solid": "0.1.86",
+        "@opentui/core": "0.1.81",
+        "@opentui/solid": "0.1.81",
         "@parcel/watcher": "2.5.1",
         "@pierre/diffs": "catalog:",
         "@solid-primitives/event-bus": "1.1.2",
@@ -376,7 +376,7 @@
     },
     "packages/plugin": {
       "name": "@opencode-ai/plugin",
-      "version": "1.2.16",
+      "version": "1.2.15",
       "dependencies": {
         "@opencode-ai/sdk": "workspace:*",
         "zod": "catalog:",
@@ -396,7 +396,7 @@
     },
     "packages/sdk/js": {
       "name": "@opencode-ai/sdk",
-      "version": "1.2.16",
+      "version": "1.2.15",
       "devDependencies": {
         "@hey-api/openapi-ts": "0.90.10",
         "@tsconfig/node22": "catalog:",
@@ -407,7 +407,7 @@
     },
     "packages/slack": {
       "name": "@opencode-ai/slack",
-      "version": "1.2.16",
+      "version": "1.2.15",
       "dependencies": {
         "@opencode-ai/sdk": "workspace:*",
         "@slack/bolt": "^3.17.1",
@@ -442,7 +442,7 @@
     },
     "packages/ui": {
       "name": "@opencode-ai/ui",
-      "version": "1.2.16",
+      "version": "1.2.15",
       "dependencies": {
         "@kobalte/core": "catalog:",
         "@opencode-ai/sdk": "workspace:*",
@@ -487,7 +487,7 @@
     },
     "packages/util": {
       "name": "@opencode-ai/util",
-      "version": "1.2.16",
+      "version": "1.2.15",
       "dependencies": {
         "zod": "catalog:",
       },
@@ -498,7 +498,7 @@
     },
     "packages/web": {
       "name": "@opencode-ai/web",
-      "version": "1.2.16",
+      "version": "1.2.15",
       "dependencies": {
         "@astrojs/cloudflare": "12.6.3",
         "@astrojs/markdown-remark": "6.3.1",
@@ -1345,21 +1345,21 @@
 
     "@opentelemetry/api": ["@opentelemetry/api@1.9.0", "", {}, "sha512-3giAOQvZiH5F9bMlMiv8+GSPMeqg0dbaeo58/0SlA9sxSqZhnUtxzX9/2FzyhS9sWQf5S0GJE0AKBrFqjpeYcg=="],
 
-    "@opentui/core": ["@opentui/core@0.1.86", "", { "dependencies": { "bun-ffi-structs": "0.1.2", "diff": "8.0.2", "jimp": "1.6.0", "marked": "17.0.1", "yoga-layout": "3.2.1" }, "optionalDependencies": { "@dimforge/rapier2d-simd-compat": "^0.17.3", "@opentui/core-darwin-arm64": "0.1.86", "@opentui/core-darwin-x64": "0.1.86", "@opentui/core-linux-arm64": "0.1.86", "@opentui/core-linux-x64": "0.1.86", "@opentui/core-win32-arm64": "0.1.86", "@opentui/core-win32-x64": "0.1.86", "bun-webgpu": "0.1.5", "planck": "^1.4.2", "three": "0.177.0" }, "peerDependencies": { "web-tree-sitter": "0.25.10" } }, "sha512-3tRLbI9ADrQE1jEEn4x2aJexEOQZkv9Emk2BixMZqxfVhz2zr2SxtpimDAX0vmZK3+GnWAwBWxuaCAsxZpY4+w=="],
+    "@opentui/core": ["@opentui/core@0.1.81", "", { "dependencies": { "bun-ffi-structs": "0.1.2", "diff": "8.0.2", "jimp": "1.6.0", "marked": "17.0.1", "yoga-layout": "3.2.1" }, "optionalDependencies": { "@dimforge/rapier2d-simd-compat": "^0.17.3", "@opentui/core-darwin-arm64": "0.1.81", "@opentui/core-darwin-x64": "0.1.81", "@opentui/core-linux-arm64": "0.1.81", "@opentui/core-linux-x64": "0.1.81", "@opentui/core-win32-arm64": "0.1.81", "@opentui/core-win32-x64": "0.1.81", "bun-webgpu": "0.1.5", "planck": "^1.4.2", "three": "0.177.0" }, "peerDependencies": { "web-tree-sitter": "0.25.10" } }, "sha512-ooFjkkQ80DDC4X5eLvH8dBcLAtWwGp9RTaWsaeWet3GOv4N0SDcN8mi1XGhYnUlTuxmofby5eQrPegjtWHODlA=="],
 
-    "@opentui/core-darwin-arm64": ["@opentui/core-darwin-arm64@0.1.86", "", { "os": "darwin", "cpu": "arm64" }, "sha512-Zp7q64+d+Dcx6YrH3mRcnHq8EOBnrfc1RvjgSWLhpXr49hY6LzuhqpfZM57aGErPYlR+ff8QM6e5FUkFnDfyjw=="],
+    "@opentui/core-darwin-arm64": ["@opentui/core-darwin-arm64@0.1.81", "", { "os": "darwin", "cpu": "arm64" }, "sha512-I3Ry5JbkSQXs2g1me8yYr0v3CUcIIfLHzbWz9WMFla8kQDSa+HOr8IpZbqZDeIFgOVzolAXBmZhg0VJI3bZ7MA=="],
 
-    "@opentui/core-darwin-x64": ["@opentui/core-darwin-x64@0.1.86", "", { "os": "darwin", "cpu": "x64" }, "sha512-NcxfjCJm1kLnTMVOpAPdRYNi8W8XdAXNa6N7i9khiVFrl2v5KRQfUjbrSOUYVxFJNc3jKFG6rsn3jEApvn92qA=="],
+    "@opentui/core-darwin-x64": ["@opentui/core-darwin-x64@0.1.81", "", { "os": "darwin", "cpu": "x64" }, "sha512-CrtNKu41D6+bOQdUOmDX4Q3hTL6p+sT55wugPzbDq7cdqFZabCeguBAyOlvRl2g2aJ93kmOWW6MXG0bPPklEFg=="],
 
-    "@opentui/core-linux-arm64": ["@opentui/core-linux-arm64@0.1.86", "", { "os": "linux", "cpu": "arm64" }, "sha512-EDHAvqSOr8CXzbDvo1aE5blJ6wu1aSbR2LqoXtoeXHemr2T2W42D2TdIWewG6K+/BuRbzZnqt9wnYFBksLW6lw=="],
+    "@opentui/core-linux-arm64": ["@opentui/core-linux-arm64@0.1.81", "", { "os": "linux", "cpu": "arm64" }, "sha512-FJw9zmJop9WiMvtT07nSrfBLPLqskxL6xfV3GNft0mSYV+C3hdJ0qkiczGSHUX/6V7fmouM84RWwmY53Rb6hYQ=="],
 
-    "@opentui/core-linux-x64": ["@opentui/core-linux-x64@0.1.86", "", { "os": "linux", "cpu": "x64" }, "sha512-VBaBkVdQDxYV4WcKjb+jgyMS5PiVHepvfaoKWpz1Bq+J01xXW4XPcXyPGkgR1+2R93KzaugEnLscTW4mWtLHlQ=="],
+    "@opentui/core-linux-x64": ["@opentui/core-linux-x64@0.1.81", "", { "os": "linux", "cpu": "x64" }, "sha512-Rj2AFIiuWI0BEMIvh/Jeuxty9Gp5ZhLuQU7ZHJJhojKo/mpBpMs9X+5kwZPZya/tyR8uVDAVyB6AOLkhdRW5lw=="],
 
-    "@opentui/core-win32-arm64": ["@opentui/core-win32-arm64@0.1.86", "", { "os": "win32", "cpu": "arm64" }, "sha512-xKbT7sEKYKGwUPkoqmLfHjbJU+vwHPDwf/r/mIunL41JXQBB35CSZ3/QgIwpp2kkteu7oE1tdBdg15ogUU4OMg=="],
+    "@opentui/core-win32-arm64": ["@opentui/core-win32-arm64@0.1.81", "", { "os": "win32", "cpu": "arm64" }, "sha512-AiZB+mZ1cVr8plAPrPT98e3kw6D0OdOSe2CQYLgJRbfRlPqq3jl26lHPzDb3ZO2OR0oVGRPJvXraus939mvoiQ=="],
 
-    "@opentui/core-win32-x64": ["@opentui/core-win32-x64@0.1.86", "", { "os": "win32", "cpu": "x64" }, "sha512-HRfgAUlcu71/MrtgfX4Gj7PsDtfXZiuC506Pkn1OnRN1Xomcu10BVRDweUa0/g8ldU9i9kLjMGGnpw6/NjaBFg=="],
+    "@opentui/core-win32-x64": ["@opentui/core-win32-x64@0.1.81", "", { "os": "win32", "cpu": "x64" }, "sha512-l8R2Ni1CR4eHi3DTmSkEL/EjHAtOZ/sndYs3VVw+Ej2esL3Mf0W7qSO5S0YNBanz2VXZhbkmM6ERm9keH8RD3w=="],
 
-    "@opentui/solid": ["@opentui/solid@0.1.86", "", { "dependencies": { "@babel/core": "7.28.0", "@babel/preset-typescript": "7.27.1", "@opentui/core": "0.1.86", "babel-plugin-module-resolver": "5.0.2", "babel-preset-solid": "1.9.9", "s-js": "^0.4.9" }, "peerDependencies": { "solid-js": "1.9.9" } }, "sha512-pOZC9dlZIH+bpstVVZ2AvYukBnslZTKSl/y5H8FWcMTHGv/BzpGxXBxstL65E/IQASqPFbvFcs7yMRzdLhynmA=="],
+    "@opentui/solid": ["@opentui/solid@0.1.81", "", { "dependencies": { "@babel/core": "7.28.0", "@babel/preset-typescript": "7.27.1", "@opentui/core": "0.1.81", "babel-plugin-module-resolver": "5.0.2", "babel-preset-solid": "1.9.9", "s-js": "^0.4.9" }, "peerDependencies": { "solid-js": "1.9.9" } }, "sha512-QRjS0wPuIhBRdY8tpG3yprCM4ZnOxWWHTuaZ4hhia2wFZygf7Ome6EuZnLXmtuOQjkjCwu0if8Yik6toc6QylA=="],
 
     "@oslojs/asn1": ["@oslojs/asn1@1.0.0", "", { "dependencies": { "@oslojs/binary": "1.0.0" } }, "sha512-zw/wn0sj0j0QKbIXfIlnEcTviaCzYOY3V5rAyjR6YtOByFtJiT574+8p9Wlach0lZH9fddD4yb9laEAIl4vXQA=="],
```
```diff
@@ -1,8 +1,8 @@
 {
   "nodeModules": {
-    "x86_64-linux": "sha256-8jEwsY7X7N/vKKbVZ0L8Djj2SfH9HCY+2jKSlaCrm9o=",
-    "aarch64-linux": "sha256-L0G7mSzzR+sZW0uACosJGsE2y/Uh3Vi4piXL4UJOmCw=",
-    "aarch64-darwin": "sha256-1S/g/51MSHjDfsL+U8wlt9Rl50hFf7I3fHgbhSqBIP4=",
-    "x86_64-darwin": "sha256-cveFpKVwcrUOzomU4J3wgYEKRwmJQF0KQiRqKgLJqWs="
+    "x86_64-linux": "sha256-bhX2N9wtE5Uy+oUHkaYBXovB8JmnCnDXvWEbM/xHF0E=",
+    "aarch64-linux": "sha256-WmwpHpV1fNbfm0EDD523XZZdifLAbhUQabiP5sEBRa0=",
+    "aarch64-darwin": "sha256-8LgZ3d1g6dKfFSi/ZMEaD45GWseY0KpAtkxhzMBaOdY=",
+    "x86_64-darwin": "sha256-DqM5DePr4f3Lr3p+KKXkzJwUisjipfsSSL7ORDxPgfY="
   }
 }
```
```diff
@@ -1,6 +1,6 @@
 {
   "name": "@opencode-ai/app",
-  "version": "1.2.16",
+  "version": "1.2.15",
   "description": "",
   "type": "module",
   "exports": {
```
```diff
@@ -138,12 +138,12 @@ function useSessionShare(args: {
   globalSDK: ReturnType<typeof useGlobalSDK>
+  currentSession: () =>
+    | {
+        id: string
+        share?: {
+          url?: string
+        }
+      }
+    | undefined
-  sessionID: () => string | undefined
   projectDirectory: () => string
   platform: ReturnType<typeof usePlatform>
 }) {
@@ -167,11 +167,11 @@ function useSessionShare(args: {
   })
 
   const shareSession = () => {
-    const sessionID = args.sessionID()
-    if (!sessionID || state.share) return
+    const session = args.currentSession()
+    if (!session || state.share) return
     setState("share", true)
     args.globalSDK.client.session
-      .share({ sessionID, directory: args.projectDirectory() })
+      .share({ sessionID: session.id, directory: args.projectDirectory() })
       .catch((error) => {
         console.error("Failed to share session", error)
       })
@@ -181,11 +181,11 @@ function useSessionShare(args: {
   }
 
   const unshareSession = () => {
-    const sessionID = args.sessionID()
-    if (!sessionID || state.unshare) return
+    const session = args.currentSession()
+    if (!session || state.unshare) return
     setState("unshare", true)
     args.globalSDK.client.session
-      .unshare({ sessionID, directory: args.projectDirectory() })
+      .unshare({ sessionID: session.id, directory: args.projectDirectory() })
       .catch((error) => {
         console.error("Failed to unshare session", error)
       })
@@ -243,9 +243,9 @@ export function SessionHeader() {
   })
   const hotkey = createMemo(() => command.keybind("file.open"))
 
-  const currentSession = createMemo(() => (params.id ? sync.session.get(params.id) : undefined))
+  const currentSession = createMemo(() => sync.data.session.find((s) => s.id === params.id))
   const shareEnabled = createMemo(() => sync.data.config.share !== "disabled")
-  const showShare = createMemo(() => shareEnabled() && !!params.id)
+  const showShare = createMemo(() => shareEnabled() && !!currentSession())
   const sessionKey = createMemo(() => `${params.dir}${params.id ? "/" + params.id : ""}`)
   const view = createMemo(() => layout.view(sessionKey))
   const os = createMemo(() => detectOS(platform))
@@ -346,7 +346,6 @@ export function SessionHeader() {
   const share = useSessionShare({
     globalSDK,
     currentSession,
-    sessionID: () => params.id,
     projectDirectory,
     platform,
   })
```
```diff
@@ -202,26 +202,29 @@ export function StatusPopover() {
       triggerAs={Button}
       triggerProps={{
         variant: "ghost",
-        class: "titlebar-icon w-6 h-6 p-0 box-border",
+        "aria-label": language.t("status.popover.trigger"),
+        class:
+          "rounded-md h-[24px] pr-3 pl-0.5 gap-2 border border-border-weak-base bg-surface-panel shadow-none data-[expanded]:bg-surface-base-active",
+        style: { scale: 1 },
       }}
       trigger={
-        <div class="flex size-4 items-center justify-center">
-          <div
-            classList={{
-              "size-1.5 rounded-full": true,
-              "bg-icon-success-base": overallHealthy(),
-              "bg-icon-critical-base": !overallHealthy() && server.healthy() !== undefined,
-              "bg-border-weak-base": server.healthy() === undefined,
-            }}
-          />
+        <div class="flex items-center gap-0.5">
+          <div class="size-4 flex items-center justify-center">
+            <div
+              classList={{
+                "size-1.5 rounded-full": true,
+                "bg-icon-success-base": overallHealthy(),
+                "bg-icon-critical-base": !overallHealthy() && server.healthy() !== undefined,
+                "bg-border-weak-base": server.healthy() === undefined,
+              }}
+            />
+          </div>
+          <span class="text-12-regular text-text-strong">{language.t("status.popover.trigger")}</span>
         </div>
       }
       class="[&_[data-slot=popover-body]]:p-0 w-[360px] max-w-[calc(100vw-40px)] bg-transparent border-0 shadow-none rounded-xl"
       gutter={4}
       placement="bottom-end"
-      shift={-168}
+      shift={-136}
     >
       <div class="flex items-center gap-1 w-[360px] rounded-xl shadow-[var(--shadow-lg-border-base)]">
         <Tabs
```
```diff
@@ -1,16 +1,4 @@
-import {
-  onCleanup,
-  Show,
-  Match,
-  Switch,
-  createMemo,
-  createEffect,
-  createComputed,
-  on,
-  onMount,
-  untrack,
-  createSignal,
-} from "solid-js"
+import { onCleanup, Show, Match, Switch, createMemo, createEffect, on, onMount, untrack } from "solid-js"
 import { createMediaQuery } from "@solid-primitives/media"
 import { createResizeObserver } from "@solid-primitives/resize-observer"
 import { useLocal } from "@/context/local"
@@ -416,20 +416,8 @@ export default function Page() {
     mobileTab: "session" as "session" | "changes",
     changes: "session" as "session" | "turn",
     newSessionWorktree: "main",
-    deferRender: false,
   })
 
-  createComputed((prev) => {
-    const key = sessionKey()
-    if (key !== prev) {
-      setStore("deferRender", true)
-      requestAnimationFrame(() => {
-        setTimeout(() => setStore("deferRender", false), 0)
-      })
-    }
-    return key
-  }, sessionKey())
-
   const turnDiffs = createMemo(() => lastUserMessage()?.summary?.diffs ?? [])
   const reviewDiffs = createMemo(() => (store.changes === "session" ? diffs() : turnDiffs()))
@@ -736,12 +712,35 @@ export default function Page() {
     loadingClass: string
     emptyClass: string
   }) => (
-    <Show when={!store.deferRender}>
-      <Switch>
-        <Match when={store.changes === "turn" && !!params.id}>
+    <Switch>
+      <Match when={store.changes === "turn" && !!params.id}>
+        <SessionReviewTab
+          title={changesTitle()}
+          empty={emptyTurn()}
+          diffs={reviewDiffs}
+          view={view}
+          diffStyle={input.diffStyle}
+          onDiffStyleChange={input.onDiffStyleChange}
+          onScrollRef={(el) => setTree("reviewScroll", el)}
+          focusedFile={tree.activeDiff}
+          onLineComment={(comment) => addCommentToContext({ ...comment, origin: "review" })}
+          onLineCommentUpdate={updateCommentInContext}
+          onLineCommentDelete={removeCommentFromContext}
+          lineCommentActions={reviewCommentActions()}
+          comments={comments.all()}
+          focusedComment={comments.focus()}
+          onFocusedCommentChange={comments.setFocus}
+          onViewFile={openReviewFile}
+          classes={input.classes}
+        />
+      </Match>
+      <Match when={hasReview()}>
+        <Show
+          when={diffsReady()}
+          fallback={<div class={input.loadingClass}>{language.t("session.review.loadingChanges")}</div>}
+        >
+          <SessionReviewTab
+            title={changesTitle()}
+            empty={emptyTurn()}
+            diffs={reviewDiffs}
+            view={view}
+            diffStyle={input.diffStyle}
@@ -758,64 +757,39 @@ export default function Page() {
             onViewFile={openReviewFile}
             classes={input.classes}
           />
         </Match>
-        <Match when={hasReview()}>
-          <Show
-            when={diffsReady()}
-            fallback={<div class={input.loadingClass}>{language.t("session.review.loadingChanges")}</div>}
-          >
-            <SessionReviewTab
-              title={changesTitle()}
-              diffs={reviewDiffs}
-              view={view}
-              diffStyle={input.diffStyle}
-              onDiffStyleChange={input.onDiffStyleChange}
-              onScrollRef={(el) => setTree("reviewScroll", el)}
-              focusedFile={tree.activeDiff}
-              onLineComment={(comment) => addCommentToContext({ ...comment, origin: "review" })}
-              onLineCommentUpdate={updateCommentInContext}
-              onLineCommentDelete={removeCommentFromContext}
-              lineCommentActions={reviewCommentActions()}
-              comments={comments.all()}
-              focusedComment={comments.focus()}
-              onFocusedCommentChange={comments.setFocus}
-              onViewFile={openReviewFile}
-              classes={input.classes}
-            />
-          </Show>
-        </Match>
-        <Match when={true}>
-          <SessionReviewTab
-            title={changesTitle()}
-            empty={
-              store.changes === "turn" ? (
-                emptyTurn()
-              ) : (
-                <div class={input.emptyClass}>
-                  <Mark class="w-14 opacity-10" />
-                  <div class="text-14-regular text-text-weak max-w-56">{language.t(reviewEmptyKey())}</div>
-                </div>
-              )
-            }
-            diffs={reviewDiffs}
-            view={view}
-            diffStyle={input.diffStyle}
-            onDiffStyleChange={input.onDiffStyleChange}
-            onScrollRef={(el) => setTree("reviewScroll", el)}
-            focusedFile={tree.activeDiff}
-            onLineComment={(comment) => addCommentToContext({ ...comment, origin: "review" })}
-            onLineCommentUpdate={updateCommentInContext}
-            onLineCommentDelete={removeCommentFromContext}
-            lineCommentActions={reviewCommentActions()}
-            comments={comments.all()}
-            focusedComment={comments.focus()}
-            onFocusedCommentChange={comments.setFocus}
-            onViewFile={openReviewFile}
-            classes={input.classes}
-          />
-        </Match>
-      </Switch>
-    </Show>
-  </Show>
-</Match>
+      <Match when={true}>
+        <SessionReviewTab
+          title={changesTitle()}
+          empty={
+            store.changes === "turn" ? (
+              emptyTurn()
+            ) : (
+              <div class={input.emptyClass}>
+                <Mark class="w-14 opacity-10" />
+                <div class="text-14-regular text-text-weak max-w-56">{language.t(reviewEmptyKey())}</div>
+              </div>
+            )
+          }
+          diffs={reviewDiffs}
+          view={view}
+          diffStyle={input.diffStyle}
+          onDiffStyleChange={input.onDiffStyleChange}
+          onScrollRef={(el) => setTree("reviewScroll", el)}
+          focusedFile={tree.activeDiff}
+          onLineComment={(comment) => addCommentToContext({ ...comment, origin: "review" })}
+          onLineCommentUpdate={updateCommentInContext}
+          onLineCommentDelete={removeCommentFromContext}
+          lineCommentActions={reviewCommentActions()}
+          comments={comments.all()}
+          focusedComment={comments.focus()}
+          onFocusedCommentChange={comments.setFocus}
+          onViewFile={openReviewFile}
+          classes={input.classes}
+        />
+      </Match>
+    </Switch>
   )
 
   const reviewPanel = () => (
```
```diff
@@ -230,30 +230,21 @@ export function MessageTimeline(props: {
       (item): item is AssistantMessage => item.role === "assistant" && typeof item.time.completed !== "number",
     ),
   )
+  const activeMessageID = createMemo(() => {
+    const parentID = pending()?.parentID
+    if (!parentID) return
+
+    const messages = sessionMessages()
+    const result = Binary.search(messages, parentID, (message) => message.id)
+    const message = result.found ? messages[result.index] : messages.find((item) => item.id === parentID)
+    if (!message || message.role !== "user") return
+    return message.id
+  })
   const sessionStatus = createMemo(() => {
     const id = sessionID()
     if (!id) return idle
     return sync.data.session_status[id] ?? idle
   })
-  const activeMessageID = createMemo(() => {
-    const parentID = pending()?.parentID
-    if (parentID) {
-      const messages = sessionMessages()
-      const result = Binary.search(messages, parentID, (message) => message.id)
-      const message = result.found ? messages[result.index] : messages.find((item) => item.id === parentID)
-      if (message && message.role === "user") return message.id
-    }
-
-    const status = sessionStatus()
-    if (status.type !== "idle") {
-      const messages = sessionMessages()
-      for (let i = messages.length - 1; i >= 0; i--) {
-        if (messages[i].role === "user") return messages[i].id
-      }
-    }
-
-    return undefined
-  })
   const info = createMemo(() => {
     const id = sessionID()
     if (!id) return
@@ -694,10 +685,9 @@ export function MessageTimeline(props: {
           {(messageID) => {
             const active = createMemo(() => activeMessageID() === messageID)
             const queued = createMemo(() => {
-              if (active()) return false
-              const activeID = activeMessageID()
-              if (activeID) return messageID > activeID
-              return false
+              const item = pending()
+              if (!item || active()) return false
+              return messageID > item.id
             })
             const comments = createMemo(() => messageComments(sync.data.part[messageID] ?? []), [], {
               equals: (a, b) => JSON.stringify(a) === JSON.stringify(b),
@@ -715,6 +705,7 @@ export function MessageTimeline(props: {
                 "min-w-0 w-full max-w-full": true,
                 "md:max-w-200 2xl:max-w-[1000px]": props.centered,
               }}
+              style={{ "content-visibility": "auto", "contain-intrinsic-size": "auto 500px" }}
             >
               <Show when={commentCount() > 0}>
                 <div class="w-full px-4 md:px-5 pb-2">
```
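The `activeMessageID` lookup above pairs a binary search over ID-ordered messages with a linear fallback. A self-contained sketch of that pattern (a hypothetical `search` helper standing in for the repository's `Binary` namespace):

```typescript
type Msg = { id: string; role: "user" | "assistant" }

// Binary search over messages sorted by id; returns the index and whether it hit.
function search(items: Msg[], id: string): { found: boolean; index: number } {
  let lo = 0
  let hi = items.length - 1
  while (lo <= hi) {
    const mid = (lo + hi) >> 1
    if (items[mid].id === id) return { found: true, index: mid }
    if (items[mid].id < id) lo = mid + 1
    else hi = mid - 1
  }
  return { found: false, index: lo }
}

// Fast path first, then a linear scan in case the list is not fully sorted.
function lookup(items: Msg[], id: string): Msg | undefined {
  const result = search(items, id)
  return result.found ? items[result.index] : items.find((m) => m.id === id)
}
```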
```diff
@@ -168,10 +168,6 @@ export const useSessionHashScroll = (input: {
   })
 
   onMount(() => {
-    if (typeof window !== "undefined" && "scrollRestoration" in window.history) {
-      window.history.scrollRestoration = "manual"
-    }
-
     const handler = () => {
       if (!input.sessionID() || !input.messagesReady()) return
       requestAnimationFrame(() => applyHash("auto"))
```
```diff
@@ -1,6 +1,6 @@
 {
   "name": "@opencode-ai/console-app",
-  "version": "1.2.16",
+  "version": "1.2.15",
   "type": "module",
   "license": "MIT",
   "scripts": {
```
```diff
@@ -43,7 +43,7 @@ export const anthropicHelper: ProviderHelper = ({ reqModel, providerModel }) =>
     ...(isBedrock
       ? {
           anthropic_version: "bedrock-2023-05-31",
-          anthropic_beta: supports1m ? ["context-1m-2025-08-07"] : undefined,
+          anthropic_beta: supports1m ? "context-1m-2025-08-07" : undefined,
           model: undefined,
           stream: undefined,
         }
```
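The conditional above gates the 1M-context beta flag on Bedrock requests. A sketch of the shape being toggled, using the list form of the field (the helper name and surrounding structure are assumptions for illustration):

```typescript
// Build the Bedrock-specific body fields; anthropic_beta is only set when the
// model supports the 1M-token context window.
function bedrockFields(supports1m: boolean) {
  return {
    anthropic_version: "bedrock-2023-05-31",
    anthropic_beta: supports1m ? ["context-1m-2025-08-07"] : undefined,
  }
}
```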
```diff
@@ -1,7 +1,7 @@
 {
   "$schema": "https://json.schemastore.org/package.json",
   "name": "@opencode-ai/console-core",
-  "version": "1.2.16",
+  "version": "1.2.15",
   "private": true,
   "type": "module",
   "license": "MIT",
```

```diff
@@ -1,6 +1,6 @@
 {
   "name": "@opencode-ai/console-function",
-  "version": "1.2.16",
+  "version": "1.2.15",
   "$schema": "https://json.schemastore.org/package.json",
   "private": true,
   "type": "module",
```

```diff
@@ -1,6 +1,6 @@
 {
   "name": "@opencode-ai/console-mail",
-  "version": "1.2.16",
+  "version": "1.2.15",
   "dependencies": {
     "@jsx-email/all": "2.2.3",
     "@jsx-email/cli": "1.4.3",
```

```diff
@@ -1,7 +1,7 @@
 {
   "name": "@opencode-ai/desktop",
   "private": true,
-  "version": "1.2.16",
+  "version": "1.2.15",
   "type": "module",
   "license": "MIT",
   "scripts": {
```

```diff
@@ -1,6 +1,6 @@
 {
   "name": "@opencode-ai/enterprise",
-  "version": "1.2.16",
+  "version": "1.2.15",
   "private": true,
   "type": "module",
   "license": "MIT",
```
```diff
@@ -1,7 +1,7 @@
 id = "opencode"
 name = "OpenCode"
 description = "The open source coding agent."
-version = "1.2.16"
+version = "1.2.15"
 schema_version = 1
 authors = ["Anomaly"]
 repository = "https://github.com/anomalyco/opencode"
@@ -11,26 +11,26 @@ name = "OpenCode"
 icon = "./icons/opencode.svg"
 
 [agent_servers.opencode.targets.darwin-aarch64]
-archive = "https://github.com/anomalyco/opencode/releases/download/v1.2.16/opencode-darwin-arm64.zip"
+archive = "https://github.com/anomalyco/opencode/releases/download/v1.2.15/opencode-darwin-arm64.zip"
 cmd = "./opencode"
 args = ["acp"]
 
 [agent_servers.opencode.targets.darwin-x86_64]
-archive = "https://github.com/anomalyco/opencode/releases/download/v1.2.16/opencode-darwin-x64.zip"
+archive = "https://github.com/anomalyco/opencode/releases/download/v1.2.15/opencode-darwin-x64.zip"
 cmd = "./opencode"
 args = ["acp"]
 
 [agent_servers.opencode.targets.linux-aarch64]
-archive = "https://github.com/anomalyco/opencode/releases/download/v1.2.16/opencode-linux-arm64.tar.gz"
+archive = "https://github.com/anomalyco/opencode/releases/download/v1.2.15/opencode-linux-arm64.tar.gz"
 cmd = "./opencode"
 args = ["acp"]
 
 [agent_servers.opencode.targets.linux-x86_64]
-archive = "https://github.com/anomalyco/opencode/releases/download/v1.2.16/opencode-linux-x64.tar.gz"
+archive = "https://github.com/anomalyco/opencode/releases/download/v1.2.15/opencode-linux-x64.tar.gz"
 cmd = "./opencode"
 args = ["acp"]
 
 [agent_servers.opencode.targets.windows-x86_64]
-archive = "https://github.com/anomalyco/opencode/releases/download/v1.2.16/opencode-windows-x64.zip"
+archive = "https://github.com/anomalyco/opencode/releases/download/v1.2.15/opencode-windows-x64.zip"
 cmd = "./opencode.exe"
 args = ["acp"]
```
```diff
@@ -1,6 +1,6 @@
 {
   "name": "@opencode-ai/function",
-  "version": "1.2.16",
+  "version": "1.2.15",
   "$schema": "https://json.schemastore.org/package.json",
   "private": true,
   "type": "module",
```
Deleted migration file:

```diff
@@ -1,5 +0,0 @@
-ALTER TABLE `workspace` ADD `type` text NOT NULL;--> statement-breakpoint
-ALTER TABLE `workspace` ADD `name` text;--> statement-breakpoint
-ALTER TABLE `workspace` ADD `directory` text;--> statement-breakpoint
-ALTER TABLE `workspace` ADD `extra` text;--> statement-breakpoint
-ALTER TABLE `workspace` DROP COLUMN `config`;
```
(File diff suppressed because it is too large.)
@@ -1,6 +1,6 @@
{
"$schema": "https://json.schemastore.org/package.json",
"version": "1.2.16",
"version": "1.2.15",
"name": "opencode",
"type": "module",
"license": "MIT",
@@ -89,8 +89,8 @@
"@opencode-ai/sdk": "workspace:*",
"@opencode-ai/util": "workspace:*",
"@openrouter/ai-sdk-provider": "1.5.4",
"@opentui/core": "0.1.86",
"@opentui/solid": "0.1.86",
"@opentui/core": "0.1.81",
"@opentui/solid": "0.1.81",
"@parcel/watcher": "2.5.1",
"@pierre/diffs": "catalog:",
"@solid-primitives/event-bus": "1.1.2",
@@ -56,18 +56,13 @@ export namespace Auth {
}

export async function set(key: string, info: Info) {
const normalized = key.replace(/\/+$/, "")
const data = await all()
if (normalized !== key) delete data[key]
delete data[normalized + "/"]
await Filesystem.writeJson(filepath, { ...data, [normalized]: info }, 0o600)
await Filesystem.writeJson(filepath, { ...data, [key]: info }, 0o600)
}

export async function remove(key: string) {
const normalized = key.replace(/\/+$/, "")
const data = await all()
delete data[key]
delete data[normalized]
await Filesystem.writeJson(filepath, data, 0o600)
}
}
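The `Auth.set` change above stores credentials under a key with trailing slashes stripped, so `https://host/` and `https://host` collapse to one entry and stale slash-suffixed entries are cleaned up. A minimal standalone sketch of that normalization (hypothetical helper names, not the actual `Auth` module):

```typescript
// Sketch of the trailing-slash key normalization from the Auth.set hunk above.
function normalizeKey(key: string): string {
  return key.replace(/\/+$/, "")
}

function setCredential(store: Record<string, string>, key: string, token: string): Record<string, string> {
  const normalized = normalizeKey(key)
  const next = { ...store }
  // Drop any stale entries stored under slash-suffixed variants of the key.
  if (normalized !== key) delete next[key]
  delete next[normalized + "/"]
  next[normalized] = token
  return next
}
```

The net effect is that repeated logins against the same host, with or without trailing slashes, always update a single credential entry.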
@@ -263,8 +263,7 @@ export const AuthLoginCommand = cmd({
UI.empty()
prompts.intro("Add credential")
if (args.url) {
const url = args.url.replace(/\/+$/, "")
const wellknown = await fetch(`${url}/.well-known/opencode`).then((x) => x.json() as any)
const wellknown = await fetch(`${args.url}/.well-known/opencode`).then((x) => x.json() as any)
prompts.log.info(`Running \`${wellknown.auth.command.join(" ")}\``)
const proc = Process.spawn(wellknown.auth.command, {
stdout: "pipe",
@@ -280,12 +279,12 @@ export const AuthLoginCommand = cmd({
prompts.outro("Done")
return
}
await Auth.set(url, {
await Auth.set(args.url, {
type: "wellknown",
key: wellknown.auth.env,
token: token.trim(),
})
prompts.log.success("Logged into " + url)
prompts.log.success("Logged into " + args.url)
prompts.outro("Done")
return
}
@@ -555,45 +555,6 @@ export const RunCommand = cmd({
// Validate agent if specified
const agent = await (async () => {
if (!args.agent) return undefined

// When attaching, validate against the running server instead of local Instance state.
if (args.attach) {
const modes = await sdk.app
.agents(undefined, { throwOnError: true })
.then((x) => x.data ?? [])
.catch(() => undefined)

if (!modes) {
UI.println(
UI.Style.TEXT_WARNING_BOLD + "!",
UI.Style.TEXT_NORMAL,
`failed to list agents from ${args.attach}. Falling back to default agent`,
)
return undefined
}

const agent = modes.find((a) => a.name === args.agent)
if (!agent) {
UI.println(
UI.Style.TEXT_WARNING_BOLD + "!",
UI.Style.TEXT_NORMAL,
`agent "${args.agent}" not found. Falling back to default agent`,
)
return undefined
}

if (agent.mode === "subagent") {
UI.println(
UI.Style.TEXT_WARNING_BOLD + "!",
UI.Style.TEXT_NORMAL,
`agent "${args.agent}" is a subagent, not a primary agent. Falling back to default agent`,
)
return undefined
}

return args.agent
}

const entry = await Agent.get(args.agent)
if (!entry) {
UI.println(
@@ -18,7 +18,14 @@ export const ServeCommand = cmd({
const server = Server.listen(opts)
console.log(`opencode server listening on http://${server.hostname}:${server.port}`)

let workspaceSync: Array<ReturnType<typeof Workspace.startSyncing>> = []
// Only available in development right now
if (Installation.isLocal()) {
workspaceSync = Project.list().map((project) => Workspace.startSyncing(project))
}

await new Promise(() => {})
await server.stop()
await Promise.all(workspaceSync.map((item) => item.stop()))
},
})
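The serve hunk above starts one background sync loop per project (gated on `Installation.isLocal()`) and awaits every loop's `stop()` during shutdown. A minimal sketch of that start/stop lifecycle shape (hypothetical names, not the actual `Workspace` API):

```typescript
// Hypothetical sketch of the serve command's sync lifecycle: each started
// sync returns a handle whose stop() is awaited on shutdown.
type SyncHandle = { stop: () => Promise<void> }

function startSyncing(project: string, log: string[]): SyncHandle {
  log.push(`start:${project}`)
  return {
    stop: async () => {
      log.push(`stop:${project}`)
    },
  }
}

async function serve(projects: string[], gated: boolean, log: string[]) {
  // Only start sync loops when the development gate is on
  // (mirrors the Installation.isLocal() check above).
  const handles = gated ? projects.map((p) => startSyncing(p, log)) : []
  // ... server would run here ...
  await Promise.all(handles.map((h) => h.stop()))
}
```

Collecting the handles up front is what lets shutdown stop every loop, even when the gate left the array empty.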
@@ -1979,8 +1979,8 @@ function Task(props: ToolProps<typeof TaskTool>) {

if (isRunning() && tools().length > 0) {
// content[0] += ` · ${tools().length} toolcalls`
if (current()) content.push(`↳ ${Locale.titlecase(current()!.tool)} ${(current()!.state as any).title}`)
else content.push(`↳ ${tools().length} toolcalls`)
if (current()) content.push(`⤷ ${Locale.titlecase(current()!.tool)} ${(current()!.state as any).title}`)
else content.push(`⤷ ${tools().length} toolcalls`)
}

if (props.part.state.status === "completed") {
@@ -5,8 +5,8 @@ import { type rpc } from "./worker"
import path from "path"
import { fileURLToPath } from "url"
import { UI } from "@/cli/ui"
import { iife } from "@/util/iife"
import { Log } from "@/util/log"
import { withTimeout } from "@/util/timeout"
import { withNetworkOptions, resolveNetworkOptions } from "@/cli/network"
import { Filesystem } from "@/util/filesystem"
import type { Event } from "@opencode-ai/sdk/v2"
@@ -45,20 +45,6 @@ function createEventSource(client: RpcClient): EventSource {
}
}

async function target() {
if (typeof OPENCODE_WORKER_PATH !== "undefined") return OPENCODE_WORKER_PATH
const dist = new URL("./cli/cmd/tui/worker.js", import.meta.url)
if (await Filesystem.exists(fileURLToPath(dist))) return dist
return new URL("./worker.ts", import.meta.url)
}

async function input(value?: string) {
const piped = process.stdin.isTTY ? undefined : await Bun.stdin.text()
if (!value) return piped
if (!piped) return value
return piped + "\n" + value
}

export const TuiThreadCommand = cmd({
command: "$0 [project]",
describe: "start opencode tui",
@@ -111,17 +97,23 @@ export const TuiThreadCommand = cmd({
}

// Resolve relative paths against PWD to preserve behavior when using --cwd flag
const root = process.env.PWD ?? process.cwd()
const cwd = args.project ? path.resolve(root, args.project) : process.cwd()
const file = await target()
const baseCwd = process.env.PWD ?? process.cwd()
const cwd = args.project ? path.resolve(baseCwd, args.project) : process.cwd()
const localWorker = new URL("./worker.ts", import.meta.url)
const distWorker = new URL("./cli/cmd/tui/worker.js", import.meta.url)
const workerPath = await iife(async () => {
if (typeof OPENCODE_WORKER_PATH !== "undefined") return OPENCODE_WORKER_PATH
if (await Filesystem.exists(fileURLToPath(distWorker))) return distWorker
return localWorker
})
try {
process.chdir(cwd)
} catch {
} catch (e) {
UI.error("Failed to change directory to " + cwd)
return
}

const worker = new Worker(file, {
const worker = new Worker(workerPath, {
env: Object.fromEntries(
Object.entries(process.env).filter((entry): entry is [string, string] => entry[1] !== undefined),
),
@@ -129,88 +121,76 @@ export const TuiThreadCommand = cmd({
worker.onerror = (e) => {
Log.Default.error(e)
}

const client = Rpc.client<typeof rpc>(worker)
const error = (e: unknown) => {
process.on("uncaughtException", (e) => {
Log.Default.error(e)
}
const reload = () => {
client.call("reload", undefined).catch((err) => {
Log.Default.warn("worker reload failed", {
error: err instanceof Error ? err.message : String(err),
})
})
}
process.on("uncaughtException", error)
process.on("unhandledRejection", error)
process.on("SIGUSR2", reload)
})
process.on("unhandledRejection", (e) => {
Log.Default.error(e)
})
process.on("SIGUSR2", async () => {
await client.call("reload", undefined)
})

let stopped = false
const stop = async () => {
if (stopped) return
stopped = true
process.off("uncaughtException", error)
process.off("unhandledRejection", error)
process.off("SIGUSR2", reload)
await withTimeout(client.call("shutdown", undefined), 5000).catch((error) => {
Log.Default.warn("worker shutdown failed", {
error: error instanceof Error ? error.message : String(error),
})
})
worker.terminate()
}

const prompt = await input(args.prompt)
const prompt = await iife(async () => {
const piped = !process.stdin.isTTY ? await Bun.stdin.text() : undefined
if (!args.prompt) return piped
return piped ? piped + "\n" + args.prompt : args.prompt
})
const config = await Instance.provide({
directory: cwd,
fn: () => TuiConfig.get(),
})

const network = await resolveNetworkOptions(args)
const external =
// Check if server should be started (port or hostname explicitly set in CLI or config)
const networkOpts = await resolveNetworkOptions(args)
const shouldStartServer =
process.argv.includes("--port") ||
process.argv.includes("--hostname") ||
process.argv.includes("--mdns") ||
network.mdns ||
network.port !== 0 ||
network.hostname !== "127.0.0.1"
networkOpts.mdns ||
networkOpts.port !== 0 ||
networkOpts.hostname !== "127.0.0.1"

const transport = external
? {
url: (await client.call("server", network)).url,
fetch: undefined,
events: undefined,
}
: {
url: "http://opencode.internal",
fetch: createWorkerFetch(client),
events: createEventSource(client),
}
let url: string
let customFetch: typeof fetch | undefined
let events: EventSource | undefined

if (shouldStartServer) {
// Start HTTP server for external access
const server = await client.call("server", networkOpts)
url = server.url
} else {
// Use direct RPC communication (no HTTP)
url = "http://opencode.internal"
customFetch = createWorkerFetch(client)
events = createEventSource(client)
}

const tuiPromise = tui({
url,
config,
directory: cwd,
fetch: customFetch,
events,
args: {
continue: args.continue,
sessionID: args.session,
agent: args.agent,
model: args.model,
prompt,
fork: args.fork,
},
onExit: async () => {
await client.call("shutdown", undefined)
},
})

setTimeout(() => {
client.call("checkUpgrade", { directory: cwd }).catch(() => {})
}, 1000).unref?.()
}, 1000)

try {
await tui({
url: transport.url,
config,
directory: cwd,
fetch: transport.fetch,
events: transport.events,
args: {
continue: args.continue,
sessionID: args.session,
agent: args.agent,
model: args.model,
prompt,
fork: args.fork,
},
onExit: stop,
})
} finally {
await stop()
}
await tuiPromise
} finally {
unguard?.()
}
@@ -137,7 +137,12 @@ export const rpc = {
async shutdown() {
Log.Default.info("worker shutting down")
if (eventStream.abort) eventStream.abort.abort()
await Instance.disposeAll()
await Promise.race([
Instance.disposeAll(),
new Promise((resolve) => {
setTimeout(resolve, 5000)
}),
])
if (server) server.stop(true)
},
}
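The shutdown change above bounds `Instance.disposeAll()` with a 5-second timer via `Promise.race`, so a hung disposal can no longer block worker shutdown. The pattern in isolation (standalone sketch, not the actual worker code):

```typescript
// Timeout-bounded cleanup: resolve when the work finishes or when the
// timer fires, whichever comes first, so shutdown can never hang.
async function disposeWithTimeout(dispose: () => Promise<void>, ms: number): Promise<"done" | "timeout"> {
  return Promise.race([
    dispose().then(() => "done" as const),
    new Promise<"timeout">((resolve) => {
      setTimeout(() => resolve("timeout"), ms)
    }),
  ])
}
```

Note that `Promise.race` does not cancel the losing branch: if the timer wins, disposal keeps running in the background, which is acceptable here because the process exits right after.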
@@ -86,12 +86,11 @@ export namespace Config {
let result: Info = {}
for (const [key, value] of Object.entries(auth)) {
if (value.type === "wellknown") {
const url = key.replace(/\/+$/, "")
process.env[value.key] = value.token
log.debug("fetching remote config", { url: `${url}/.well-known/opencode` })
const response = await fetch(`${url}/.well-known/opencode`)
log.debug("fetching remote config", { url: `${key}/.well-known/opencode` })
const response = await fetch(`${key}/.well-known/opencode`)
if (!response.ok) {
throw new Error(`failed to fetch remote config from ${url}: ${response.status}`)
throw new Error(`failed to fetch remote config from ${key}: ${response.status}`)
}
const wellknown = (await response.json()) as any
const remoteConfig = wellknown.config ?? {}
@@ -100,11 +99,11 @@ export namespace Config {
result = mergeConfigConcatArrays(
result,
await load(JSON.stringify(remoteConfig), {
dir: path.dirname(`${url}/.well-known/opencode`),
source: `${url}/.well-known/opencode`,
dir: path.dirname(`${key}/.well-known/opencode`),
source: `${key}/.well-known/opencode`,
}),
)
log.debug("loaded remote config from well-known", { url })
log.debug("loaded remote config from well-known", { url: key })
}
}

@@ -1,20 +1,10 @@
import { lazy } from "@/util/lazy"
import type { Adaptor } from "../types"
import { WorktreeAdaptor } from "./worktree"
import type { Config } from "../config"
import type { Adaptor } from "./types"

const ADAPTORS: Record<string, () => Promise<Adaptor>> = {
worktree: lazy(async () => (await import("./worktree")).WorktreeAdaptor),
}

export function getAdaptor(type: string): Promise<Adaptor> {
return ADAPTORS[type]()
}

export function installAdaptor(type: string, adaptor: Adaptor) {
// This is experimental: mostly used for testing right now, but we
// will likely allow this in the future. Need to figure out the
// TypeScript story

// @ts-expect-error we force the builtin types right now, but we
// will implement a way to extend the types for custom adaptors
ADAPTORS[type] = () => adaptor
export function getAdaptor(config: Config): Adaptor {
switch (config.type) {
case "worktree":
return WorktreeAdaptor
}
}
packages/opencode/src/control-plane/adaptors/types.ts (new file, 7 lines)
@@ -0,0 +1,7 @@
import type { Config } from "../config"

export type Adaptor<T extends Config = Config> = {
create(from: T, branch?: string | null): Promise<{ config: T; init: () => Promise<void> }>
remove(from: T): Promise<void>
request(from: T, method: string, url: string, data?: BodyInit, signal?: AbortSignal): Promise<Response | undefined>
}
@@ -1,46 +1,26 @@
import z from "zod"
import { Worktree } from "@/worktree"
import { type Adaptor, WorkspaceInfo } from "../types"
import type { Config } from "../config"
import type { Adaptor } from "./types"

const Config = WorkspaceInfo.extend({
name: WorkspaceInfo.shape.name.unwrap(),
branch: WorkspaceInfo.shape.branch.unwrap(),
directory: WorkspaceInfo.shape.directory.unwrap(),
})
type WorktreeConfig = Extract<Config, { type: "worktree" }>

type Config = z.infer<typeof Config>

export const WorktreeAdaptor: Adaptor = {
async configure(info) {
const worktree = await Worktree.makeWorktreeInfo(info.name ?? undefined)
export const WorktreeAdaptor: Adaptor<WorktreeConfig> = {
async create(_from: WorktreeConfig, _branch: string) {
const next = await Worktree.create(undefined)
return {
...info,
name: worktree.name,
branch: worktree.branch,
directory: worktree.directory,
config: {
type: "worktree",
directory: next.directory,
},
// Hack for now: `Worktree.create` puts all its async code in a
// `setTimeout` so it doesn't use this, but we should change that
init: async () => {},
}
},
async create(info) {
const config = Config.parse(info)
const bootstrap = await Worktree.createFromInfo({
name: config.name,
directory: config.directory,
branch: config.branch,
})
return bootstrap()
},
async remove(info) {
const config = Config.parse(info)
async remove(config: WorktreeConfig) {
await Worktree.remove({ directory: config.directory })
},
async fetch(info, input: RequestInfo | URL, init?: RequestInit) {
const config = Config.parse(info)
const { WorkspaceServer } = await import("../workspace-server/server")
const url = input instanceof Request || input instanceof URL ? input : new URL(input, "http://opencode.internal")
const headers = new Headers(init?.headers ?? (input instanceof Request ? input.headers : undefined))
headers.set("x-opencode-directory", config.directory)

const request = new Request(url, { ...init, headers })
return WorkspaceServer.App().fetch(request)
async request(_from: WorktreeConfig, _method: string, _url: string, _data?: BodyInit, _signal?: AbortSignal) {
throw new Error("worktree does not support request")
},
}
packages/opencode/src/control-plane/config.ts (new file, 10 lines)
@@ -0,0 +1,10 @@
import z from "zod"

export const Config = z.discriminatedUnion("type", [
z.object({
directory: z.string(),
type: z.literal("worktree"),
}),
])

export type Config = z.infer<typeof Config>
@@ -0,0 +1,46 @@
import { Instance } from "@/project/instance"
import type { MiddlewareHandler } from "hono"
import { Installation } from "../installation"
import { getAdaptor } from "./adaptors"
import { Workspace } from "./workspace"

// This middleware forwards all non-GET requests if the workspace is a
// remote. The remote workspace needs to handle session mutations
async function proxySessionRequest(req: Request) {
if (req.method === "GET") return
if (!Instance.directory.startsWith("wrk_")) return

const workspace = await Workspace.get(Instance.directory)
if (!workspace) {
return new Response(`Workspace not found: ${Instance.directory}`, {
status: 500,
headers: {
"content-type": "text/plain; charset=utf-8",
},
})
}
if (workspace.config.type === "worktree") return

const url = new URL(req.url)
const body = req.method === "HEAD" ? undefined : await req.arrayBuffer()
return getAdaptor(workspace.config).request(
workspace.config,
req.method,
`${url.pathname}${url.search}`,
body,
req.signal,
)
}

export const SessionProxyMiddleware: MiddlewareHandler = async (c, next) => {
// Only available in development for now
if (!Installation.isLocal()) {
return next()
}

const response = await proxySessionRequest(c.req.raw)
if (response) {
return response
}
return next()
}
@@ -1,20 +0,0 @@
import z from "zod"
import { Identifier } from "@/id/id"

export const WorkspaceInfo = z.object({
id: Identifier.schema("workspace"),
type: z.string(),
branch: z.string().nullable(),
name: z.string().nullable(),
directory: z.string().nullable(),
extra: z.unknown().nullable(),
projectID: z.string(),
})
export type WorkspaceInfo = z.infer<typeof WorkspaceInfo>

export type Adaptor = {
configure(input: WorkspaceInfo): WorkspaceInfo | Promise<WorkspaceInfo>
create(input: WorkspaceInfo, from?: WorkspaceInfo): Promise<void>
remove(config: WorkspaceInfo): Promise<void>
fetch(config: WorkspaceInfo, input: RequestInfo | URL, init?: RequestInit): Promise<Response>
}
@@ -1,50 +0,0 @@
import { Instance } from "@/project/instance"
import type { MiddlewareHandler } from "hono"
import { Installation } from "../installation"
import { getAdaptor } from "./adaptors"
import { Workspace } from "./workspace"
import { WorkspaceContext } from "./workspace-context"

// This middleware forwards all non-GET requests if the workspace is a
// remote. The remote workspace needs to handle session mutations
async function routeRequest(req: Request) {
// Right now, we need to forward all requests to the workspace
// because we don't have syncing. In the future all GET requests
// which don't mutate anything will be handled locally
//
// if (req.method === "GET") return

if (!WorkspaceContext.workspaceID) return

const workspace = await Workspace.get(WorkspaceContext.workspaceID)
if (!workspace) {
return new Response(`Workspace not found: ${WorkspaceContext.workspaceID}`, {
status: 500,
headers: {
"content-type": "text/plain; charset=utf-8",
},
})
}

const adaptor = await getAdaptor(workspace.type)

return adaptor.fetch(workspace, `${new URL(req.url).pathname}${new URL(req.url).search}`, {
method: req.method,
body: req.method === "GET" || req.method === "HEAD" ? undefined : await req.arrayBuffer(),
signal: req.signal,
headers: req.headers,
})
}

export const WorkspaceRouterMiddleware: MiddlewareHandler = async (c, next) => {
// Only available in development for now
if (!Installation.isLocal()) {
return next()
}

const response = await routeRequest(c.req.raw)
if (response) {
return response
}
return next()
}
@@ -1,57 +1,17 @@
import { Hono } from "hono"
import { Instance } from "../../project/instance"
import { InstanceBootstrap } from "../../project/bootstrap"
import { SessionRoutes } from "../../server/routes/session"
import { WorkspaceServerRoutes } from "./routes"
import { WorkspaceContext } from "../workspace-context"

export namespace WorkspaceServer {
export function App() {
const session = new Hono()
.use(async (c, next) => {
// Right now, we need handle all requests because we don't
// have syncing. In the future all GET requests will handled
// by the control plane
//
// if (c.req.method === "GET") return c.notFound()
.use("*", async (c, next) => {
if (c.req.method === "GET") return c.notFound()
await next()
})
.route("/", SessionRoutes())

return new Hono()
.use(async (c, next) => {
const workspaceID = c.req.query("workspace") || c.req.header("x-opencode-workspace")
const raw = c.req.query("directory") || c.req.header("x-opencode-directory")
if (workspaceID == null) {
throw new Error("workspaceID parameter is required")
}
if (raw == null) {
throw new Error("directory parameter is required")
}

const directory = (() => {
try {
return decodeURIComponent(raw)
} catch {
return raw
}
})()

return WorkspaceContext.provide({
workspaceID,
async fn() {
return Instance.provide({
directory,
init: InstanceBootstrap,
async fn() {
return next()
},
})
},
})
})
.route("/session", session)
.route("/", WorkspaceServerRoutes())
return new Hono().route("/session", session).route("/", WorkspaceServerRoutes())
}

export function Listen(opts: { hostname: string; port: number }) {
@@ -1,14 +1,12 @@
import { sqliteTable, text } from "drizzle-orm/sqlite-core"
import { ProjectTable } from "@/project/project.sql"
import type { Config } from "./config"

export const WorkspaceTable = sqliteTable("workspace", {
id: text().primaryKey(),
type: text().notNull(),
branch: text(),
name: text(),
directory: text(),
extra: text({ mode: "json" }),
project_id: text()
.notNull()
.references(() => ProjectTable.id, { onDelete: "cascade" }),
config: text({ mode: "json" }).notNull().$type<Config>(),
})
@@ -7,8 +7,8 @@ import { BusEvent } from "@/bus/bus-event"
import { GlobalBus } from "@/bus/global"
import { Log } from "@/util/log"
import { WorkspaceTable } from "./workspace.sql"
import { Config } from "./config"
import { getAdaptor } from "./adaptors"
import { WorkspaceInfo } from "./types"
import { parseSSE } from "./sse"

export namespace Workspace {
@@ -27,64 +27,72 @@ export namespace Workspace {
),
}

export const Info = WorkspaceInfo.meta({
ref: "Workspace",
})
export const Info = z
.object({
id: Identifier.schema("workspace"),
branch: z.string().nullable(),
projectID: z.string(),
config: Config,
})
.meta({
ref: "Workspace",
})
export type Info = z.infer<typeof Info>

function fromRow(row: typeof WorkspaceTable.$inferSelect): Info {
return {
id: row.id,
type: row.type,
branch: row.branch,
name: row.name,
directory: row.directory,
extra: row.extra,
projectID: row.project_id,
config: row.config,
}
}

const CreateInput = z.object({
id: Identifier.schema("workspace").optional(),
type: Info.shape.type,
branch: Info.shape.branch,
projectID: Info.shape.projectID,
extra: Info.shape.extra,
})
export const create = fn(
z.object({
id: Identifier.schema("workspace").optional(),
projectID: Info.shape.projectID,
branch: Info.shape.branch,
config: Info.shape.config,
}),
async (input) => {
const id = Identifier.ascending("workspace", input.id)

export const create = fn(CreateInput, async (input) => {
const id = Identifier.ascending("workspace", input.id)
const adaptor = await getAdaptor(input.type)
const { config, init } = await getAdaptor(input.config).create(input.config, input.branch)

const config = await adaptor.configure({ ...input, id, name: null, directory: null })
const info: Info = {
id,
projectID: input.projectID,
branch: input.branch,
config,
}

const info: Info = {
id,
type: config.type,
branch: config.branch ?? null,
name: config.name ?? null,
directory: config.directory ?? null,
extra: config.extra ?? null,
projectID: input.projectID,
}
setTimeout(async () => {
await init()

Database.use((db) => {
db.insert(WorkspaceTable)
.values({
id: info.id,
type: info.type,
branch: info.branch,
name: info.name,
directory: info.directory,
extra: info.extra,
project_id: info.projectID,
Database.use((db) => {
db.insert(WorkspaceTable)
.values({
id: info.id,
branch: info.branch,
project_id: info.projectID,
config: info.config,
})
.run()
})
.run()
})

await adaptor.create(config)
return info
})
GlobalBus.emit("event", {
directory: id,
payload: {
type: Event.Ready.type,
properties: {},
},
})
}, 0)

return info
},
)

export function list(project: Project.Info) {
const rows = Database.use((db) =>
@@ -103,8 +111,7 @@ export namespace Workspace {
const row = Database.use((db) => db.select().from(WorkspaceTable).where(eq(WorkspaceTable.id, id)).get())
if (row) {
const info = fromRow(row)
const adaptor = await getAdaptor(row.type)
adaptor.remove(info)
await getAdaptor(info.config).remove(info.config)
Database.use((db) => db.delete(WorkspaceTable).where(eq(WorkspaceTable.id, id)).run())
return info
}
@@ -113,8 +120,9 @@ export namespace Workspace {

async function workspaceEventLoop(space: Info, stop: AbortSignal) {
while (!stop.aborted) {
const adaptor = await getAdaptor(space.type)
const res = await adaptor.fetch(space, "/event", { method: "GET", signal: stop }).catch(() => undefined)
const res = await getAdaptor(space.config)
.request(space.config, "GET", "/event", undefined, stop)
.catch(() => undefined)
if (!res || !res.ok || !res.body) {
await Bun.sleep(1000)
continue
@@ -132,7 +140,7 @@ export namespace Workspace {

export function startSyncing(project: Project.Info) {
const stop = new AbortController()
const spaces = list(project).filter((space) => space.type !== "worktree")
const spaces = list(project).filter((space) => space.config.type !== "worktree")

spaces.forEach((space) => {
void workspaceEventLoop(space, stop.signal).catch((error) => {
@@ -3,11 +3,6 @@ function truthy(key: string) {
return value === "true" || value === "1"
}

function falsy(key: string) {
const value = process.env[key]?.toLowerCase()
return value === "false" || value === "0"
}

export namespace Flag {
export const OPENCODE_AUTO_SHARE = truthy("OPENCODE_AUTO_SHARE")
export const OPENCODE_GIT_BASH_PATH = process.env["OPENCODE_GIT_BASH_PATH"]
@@ -57,7 +52,7 @@ export namespace Flag {
export const OPENCODE_EXPERIMENTAL_LSP_TOOL = OPENCODE_EXPERIMENTAL || truthy("OPENCODE_EXPERIMENTAL_LSP_TOOL")
export const OPENCODE_DISABLE_FILETIME_CHECK = truthy("OPENCODE_DISABLE_FILETIME_CHECK")
export const OPENCODE_EXPERIMENTAL_PLAN_MODE = OPENCODE_EXPERIMENTAL || truthy("OPENCODE_EXPERIMENTAL_PLAN_MODE")
export const OPENCODE_EXPERIMENTAL_MARKDOWN = !falsy("OPENCODE_EXPERIMENTAL_MARKDOWN")
export const OPENCODE_EXPERIMENTAL_MARKDOWN = truthy("OPENCODE_EXPERIMENTAL_MARKDOWN")
export const OPENCODE_MODELS_URL = process.env["OPENCODE_MODELS_URL"]
export const OPENCODE_MODELS_PATH = process.env["OPENCODE_MODELS_PATH"]
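The flag hunk above flips `OPENCODE_EXPERIMENTAL_MARKDOWN` from default-on (`!falsy(...)`: enabled unless explicitly set to `false`/`0`) to opt-in (`truthy(...)`: enabled only when set to `true`/`1`), and removes the now-unused `falsy` helper. The two helpers side by side, with the env passed in so the difference in defaults is easy to exercise:

```typescript
// Env-flag helpers matching the definitions in flag.ts; the env map is a
// parameter here purely so the default behavior can be demonstrated.
function truthy(env: Record<string, string | undefined>, key: string): boolean {
  const value = env[key]?.toLowerCase()
  return value === "true" || value === "1"
}

function falsy(env: Record<string, string | undefined>, key: string): boolean {
  const value = env[key]?.toLowerCase()
  return value === "false" || value === "0"
}
```

With the variable unset, `truthy(...)` yields `false` (feature off by default) while `!falsy(...)` yields `true` (feature on by default), which is exactly the behavior change in the diff.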
@@ -88,7 +88,6 @@ export const ExperimentalRoutes = lazy(() =>
)
},
)
.route("/workspace", WorkspaceRoutes())
.post(
"/worktree",
describeRoute({
@@ -114,6 +113,7 @@ export const ExperimentalRoutes = lazy(() =>
return c.json(worktree)
},
)
.route("/workspace", WorkspaceRoutes())
.get(
"/worktree",
describeRoute({
@@ -16,11 +16,13 @@ import { Log } from "../../util/log"
 import { PermissionNext } from "@/permission/next"
 import { errors } from "../error"
 import { lazy } from "../../util/lazy"
+import { SessionProxyMiddleware } from "../../control-plane/session-proxy-middleware"
 
 const log = Log.create({ service: "server" })
 
 export const SessionRoutes = lazy(() =>
   new Hono()
+    .use(SessionProxyMiddleware)
     .get(
       "/",
       describeRoute({
@@ -9,7 +9,7 @@ import { lazy } from "../../util/lazy"
 export const WorkspaceRoutes = lazy(() =>
   new Hono()
     .post(
-      "/",
+      "/:id",
       describeRoute({
         summary: "Create workspace",
         description: "Create a workspace for the current project.",
@@ -26,17 +26,27 @@ export const WorkspaceRoutes = lazy(() =>
           ...errors(400),
         },
       }),
+      validator(
+        "param",
+        z.object({
+          id: Workspace.Info.shape.id,
+        }),
+      ),
       validator(
         "json",
-        Workspace.create.schema.omit({
-          projectID: true,
+        z.object({
+          branch: Workspace.Info.shape.branch,
+          config: Workspace.Info.shape.config,
         }),
       ),
       async (c) => {
+        const { id } = c.req.valid("param")
         const body = c.req.valid("json")
         const workspace = await Workspace.create({
+          id,
           projectID: Instance.project.id,
-          ...body,
+          branch: body.branch,
+          config: body.config,
         })
         return c.json(workspace)
       },
@@ -22,7 +22,6 @@ import { Flag } from "../flag/flag"
 import { Command } from "../command"
 import { Global } from "../global"
 import { WorkspaceContext } from "../control-plane/workspace-context"
-import { WorkspaceRouterMiddleware } from "../control-plane/workspace-router-middleware"
 import { ProjectRoutes } from "./routes/project"
 import { SessionRoutes } from "./routes/session"
 import { PtyRoutes } from "./routes/pty"
@@ -219,7 +218,6 @@ export namespace Server {
         },
       })
     })
-    .use(WorkspaceRouterMiddleware)
     .get(
       "/doc",
       openAPIRouteHandler(app, {
@@ -24,7 +24,7 @@ Usage notes:
 - The command argument is required.
 - You can specify an optional timeout in milliseconds. If not specified, commands will time out after 120000ms (2 minutes).
 - It is very helpful if you write a clear, concise description of what this command does in 5-10 words.
-- If the output exceeds ${maxLines} lines or ${maxBytes} bytes, it will be truncated and the full output will be written to a file. You can use Read with offset/limit to read specific sections or Grep to search the full content. Do NOT use `head`, `tail`, or other truncation commands to limit output; the full output will already be captured to a file for more precise searching.
+- If the output exceeds ${maxLines} lines or ${maxBytes} bytes, it will be truncated and the full output will be written to a file. You can use Read with offset/limit to read specific sections or Grep to search the full content. Because of this, you do NOT need to use `head`, `tail`, or other truncation commands to limit output - just run the command directly.
 - Avoid using Bash with the `find`, `grep`, `cat`, `head`, `tail`, `sed`, `awk`, or `echo` commands, unless explicitly instructed or when these commands are truly necessary for the task. Instead, always prefer using the dedicated tools for these commands:
   - File search: Use Glob (NOT find or ls)
@@ -331,7 +331,7 @@ export namespace Worktree {
     }, 0)
   }
 
-  export async function makeWorktreeInfo(name?: string): Promise<Info> {
+  export const create = fn(CreateInput.optional(), async (input) => {
     if (Instance.project.vcs !== "git") {
       throw new NotGitError({ message: "Worktrees are only supported for git projects" })
     }
@@ -339,11 +339,9 @@ export namespace Worktree {
     const root = path.join(Global.Path.data, "worktree", Instance.project.id)
     await fs.mkdir(root, { recursive: true })
 
-    const base = name ? slug(name) : ""
-    return candidate(root, base || undefined)
-  }
+    const base = input?.name ? slug(input.name) : ""
+    const info = await candidate(root, base || undefined)
 
-  export async function createFromInfo(info: Info, startCommand?: string) {
     const created = await $`git worktree add --no-checkout -b ${info.branch} ${info.directory}`
       .quiet()
       .nothrow()
@@ -355,9 +353,8 @@ export namespace Worktree {
     await Project.addSandbox(Instance.project.id, info.directory).catch(() => undefined)
 
-    const projectID = Instance.project.id
-    const extra = startCommand?.trim()
+    const extra = input?.startCommand?.trim()
 
-    return () => {
+    setTimeout(() => {
       const start = async () => {
         const populated = await $`git reset --hard`.quiet().nothrow().cwd(info.directory)
         if (populated.exitCode !== 0) {
@@ -414,17 +411,8 @@ export namespace Worktree {
       void start().catch((error) => {
         log.error("worktree start task failed", { directory: info.directory, error })
       })
     }
-  }
-
-  export const create = fn(CreateInput.optional(), async (input) => {
-    const info = await makeWorktreeInfo(input?.name)
-    const bootstrap = await createFromInfo(info, input?.startCommand)
-    // This is needed due to how worktrees currently work in the
-    // desktop app
-    setTimeout(() => {
-      bootstrap()
-    }, 0)
+    }, 0)
 
     return info
   })
@@ -1,58 +0,0 @@
|
||||
import { test, expect } from "bun:test"
|
||||
import { Auth } from "../../src/auth"
|
||||
|
||||
test("set normalizes trailing slashes in keys", async () => {
|
||||
await Auth.set("https://example.com/", {
|
||||
type: "wellknown",
|
||||
key: "TOKEN",
|
||||
token: "abc",
|
||||
})
|
||||
const data = await Auth.all()
|
||||
expect(data["https://example.com"]).toBeDefined()
|
||||
expect(data["https://example.com/"]).toBeUndefined()
|
||||
})
|
||||
|
||||
test("set cleans up pre-existing trailing-slash entry", async () => {
|
||||
// Simulate a pre-fix entry with trailing slash
|
||||
await Auth.set("https://example.com/", {
|
||||
type: "wellknown",
|
||||
key: "TOKEN",
|
||||
token: "old",
|
||||
})
|
||||
// Re-login with normalized key (as the CLI does post-fix)
|
||||
await Auth.set("https://example.com", {
|
||||
type: "wellknown",
|
||||
key: "TOKEN",
|
||||
token: "new",
|
||||
})
|
||||
const data = await Auth.all()
|
||||
const keys = Object.keys(data).filter((k) => k.includes("example.com"))
|
||||
expect(keys).toEqual(["https://example.com"])
|
||||
const entry = data["https://example.com"]!
|
||||
expect(entry.type).toBe("wellknown")
|
||||
if (entry.type === "wellknown") expect(entry.token).toBe("new")
|
||||
})
|
||||
|
||||
test("remove deletes both trailing-slash and normalized keys", async () => {
|
||||
await Auth.set("https://example.com", {
|
||||
type: "wellknown",
|
||||
key: "TOKEN",
|
||||
token: "abc",
|
||||
})
|
||||
await Auth.remove("https://example.com/")
|
||||
const data = await Auth.all()
|
||||
expect(data["https://example.com"]).toBeUndefined()
|
||||
expect(data["https://example.com/"]).toBeUndefined()
|
||||
})
|
||||
|
||||
test("set and remove are no-ops on keys without trailing slashes", async () => {
|
||||
await Auth.set("anthropic", {
|
||||
type: "api",
|
||||
key: "sk-test",
|
||||
})
|
||||
const data = await Auth.all()
|
||||
expect(data["anthropic"]).toBeDefined()
|
||||
await Auth.remove("anthropic")
|
||||
const after = await Auth.all()
|
||||
expect(after["anthropic"]).toBeUndefined()
|
||||
})
|
||||
@@ -1535,71 +1535,6 @@ test("project config overrides remote well-known config", async () => {
   }
 })
 
-test("wellknown URL with trailing slash is normalized", async () => {
-  const originalFetch = globalThis.fetch
-  let fetchedUrl: string | undefined
-  const mockFetch = mock((url: string | URL | Request) => {
-    const urlStr = url.toString()
-    if (urlStr.includes(".well-known/opencode")) {
-      fetchedUrl = urlStr
-      return Promise.resolve(
-        new Response(
-          JSON.stringify({
-            config: {
-              mcp: {
-                slack: {
-                  type: "remote",
-                  url: "https://slack.example.com/mcp",
-                  enabled: true,
-                },
-              },
-            },
-          }),
-          { status: 200 },
-        ),
-      )
-    }
-    return originalFetch(url)
-  })
-  globalThis.fetch = mockFetch as unknown as typeof fetch
-
-  const originalAuthAll = Auth.all
-  Auth.all = mock(() =>
-    Promise.resolve({
-      "https://example.com/": {
-        type: "wellknown" as const,
-        key: "TEST_TOKEN",
-        token: "test-token",
-      },
-    }),
-  )
-
-  try {
-    await using tmp = await tmpdir({
-      git: true,
-      init: async (dir) => {
-        await Filesystem.write(
-          path.join(dir, "opencode.json"),
-          JSON.stringify({
-            $schema: "https://opencode.ai/config.json",
-          }),
-        )
-      },
-    })
-    await Instance.provide({
-      directory: tmp.path,
-      fn: async () => {
-        await Config.get()
-        // Trailing slash should be stripped — no double slash in the fetch URL
-        expect(fetchedUrl).toBe("https://example.com/.well-known/opencode")
-      },
-    })
-  } finally {
-    globalThis.fetch = originalFetch
-    Auth.all = originalAuthAll
-  }
-})
-
 describe("getPluginName", () => {
   test("extracts name from file:// URL", () => {
     expect(Config.getPluginName("file:///path/to/plugin/foo.js")).toBe("foo")
@@ -5,11 +5,8 @@ import { tmpdir } from "../fixture/fixture"
 import { Project } from "../../src/project/project"
 import { WorkspaceTable } from "../../src/control-plane/workspace.sql"
 import { Instance } from "../../src/project/instance"
-import { WorkspaceContext } from "../../src/control-plane/workspace-context"
 import { Database } from "../../src/storage/db"
 import { resetDatabase } from "../fixture/db"
-import * as adaptors from "../../src/control-plane/adaptors"
-import type { Adaptor } from "../../src/control-plane/types"
 
 afterEach(async () => {
   mock.restore()
@@ -21,35 +18,18 @@ type State = {
   calls: Array<{ method: string; url: string; body?: string }>
 }
 
-const remote = { type: "testing", name: "remote-a" } as unknown as typeof WorkspaceTable.$inferInsert
+const remote = { type: "testing", name: "remote-a" } as unknown as typeof WorkspaceTable.$inferInsert.config
 
 async function setup(state: State) {
-  const TestAdaptor: Adaptor = {
-    configure(config) {
-      return config
-    },
-    async create() {
-      throw new Error("not used")
-    },
-    async remove() {},
-
-    async fetch(_config: unknown, input: RequestInfo | URL, init?: RequestInit) {
-      const url =
-        input instanceof Request || input instanceof URL
-          ? input.toString()
-          : new URL(input, "http://workspace.test").toString()
-      const request = new Request(url, init)
-      const body = request.method === "GET" || request.method === "HEAD" ? undefined : await request.text()
-      state.calls.push({
-        method: request.method,
-        url: `${new URL(request.url).pathname}${new URL(request.url).search}`,
-        body,
-      })
-      return new Response("proxied", { status: 202 })
-    },
-  }
-
-  adaptors.installAdaptor("testing", TestAdaptor)
+  mock.module("../../src/control-plane/adaptors", () => ({
+    getAdaptor: () => ({
+      request: async (_config: unknown, method: string, url: string, data?: BodyInit) => {
+        const body = data ? await new Response(data).text() : undefined
+        state.calls.push({ method, url, body })
+        return new Response("proxied", { status: 202 })
+      },
+    }),
+  }))
 
   await using tmp = await tmpdir({ git: true })
   const { project } = await Project.fromDirectory(tmp.path)
@@ -65,23 +45,20 @@ async function setup(state: State) {
         id: id1,
         branch: "main",
         project_id: project.id,
-        type: remote.type,
-        name: remote.name,
+        config: remote,
       },
       {
         id: id2,
         branch: "main",
         project_id: project.id,
-        type: "worktree",
-        directory: tmp.path,
-        name: "local",
+        config: { type: "worktree", directory: tmp.path },
       },
     ])
     .run(),
   )
 
-  const { WorkspaceRouterMiddleware } = await import("../../src/control-plane/workspace-router-middleware")
-  const app = new Hono().use(WorkspaceRouterMiddleware)
+  const { SessionProxyMiddleware } = await import("../../src/control-plane/session-proxy-middleware")
+  const app = new Hono().use(SessionProxyMiddleware)
 
   return {
     id1,
@@ -89,19 +66,15 @@ async function setup(state: State) {
     app,
     async request(input: RequestInfo | URL, init?: RequestInit) {
       return Instance.provide({
-        directory: tmp.path,
-        fn: async () =>
-          WorkspaceContext.provide({
-            workspaceID: state.workspace === "first" ? id1 : id2,
-            fn: () => app.request(input, init),
-          }),
+        directory: state.workspace === "first" ? id1 : id2,
+        fn: async () => app.request(input, init),
       })
     },
   }
 }
 
 describe("control-plane/session-proxy-middleware", () => {
-  test("forwards non-GET session requests for workspaces", async () => {
+  test("forwards non-GET session requests for remote workspaces", async () => {
     const state: State = {
       workspace: "first",
       calls: [],
@@ -129,21 +102,46 @@ describe("control-plane/session-proxy-middleware", () => {
     ])
   })
 
-  // It will behave this way when we have syncing
-  //
-  // test("does not forward GET requests", async () => {
-  //   const state: State = {
-  //     workspace: "first",
-  //     calls: [],
-  //   }
+  test("does not forward GET requests", async () => {
+    const state: State = {
+      workspace: "first",
+      calls: [],
+    }
 
-  //   const ctx = await setup(state)
+    const ctx = await setup(state)
 
-  //   ctx.app.get("/session/foo", (c) => c.text("local", 200))
-  //   const response = await ctx.request("http://workspace.test/session/foo?x=1")
+    ctx.app.get("/session/foo", (c) => c.text("local", 200))
+    const response = await ctx.request("http://workspace.test/session/foo?x=1")
 
-  //   expect(response.status).toBe(200)
-  //   expect(await response.text()).toBe("local")
-  //   expect(state.calls).toEqual([])
-  // })
+    expect(response.status).toBe(200)
+    expect(await response.text()).toBe("local")
+    expect(state.calls).toEqual([])
+  })
+
+  test("does not forward GET or POST requests for worktree workspaces", async () => {
+    const state: State = {
+      workspace: "second",
+      calls: [],
+    }
+
+    const ctx = await setup(state)
+
+    ctx.app.get("/session/foo", (c) => c.text("local-get", 200))
+    ctx.app.post("/session/foo", (c) => c.text("local-post", 200))
+
+    const getResponse = await ctx.request("http://workspace.test/session/foo?x=1")
+    const postResponse = await ctx.request("http://workspace.test/session/foo?x=1", {
+      method: "POST",
+      body: JSON.stringify({ hello: "world" }),
+      headers: {
+        "content-type": "application/json",
+      },
+    })
+
+    expect(getResponse.status).toBe(200)
+    expect(await getResponse.text()).toBe("local-get")
+    expect(postResponse.status).toBe(200)
+    expect(await postResponse.text()).toBe("local-post")
+    expect(state.calls).toEqual([])
+  })
 })
@@ -4,7 +4,6 @@ import { WorkspaceServer } from "../../src/control-plane/workspace-server/server"
 import { parseSSE } from "../../src/control-plane/sse"
 import { GlobalBus } from "../../src/bus/global"
 import { resetDatabase } from "../fixture/db"
-import { tmpdir } from "../fixture/fixture"
 
 afterEach(async () => {
   await resetDatabase()
@@ -14,17 +13,13 @@ Log.init({ print: false })
 
 describe("control-plane/workspace-server SSE", () => {
   test("streams GlobalBus events and parseSSE reads them", async () => {
-    await using tmp = await tmpdir({ git: true })
     const app = WorkspaceServer.App()
     const stop = new AbortController()
     const seen: unknown[] = []
 
     try {
       const response = await app.request("/event", {
         signal: stop.signal,
-        headers: {
-          "x-opencode-workspace": "wrk_test_workspace",
-          "x-opencode-directory": tmp.path,
-        },
       })
 
       expect(response.status).toBe(200)
@@ -7,8 +7,6 @@ import { Database } from "../../src/storage/db"
 import { WorkspaceTable } from "../../src/control-plane/workspace.sql"
 import { GlobalBus } from "../../src/bus/global"
 import { resetDatabase } from "../fixture/db"
-import * as adaptors from "../../src/control-plane/adaptors"
-import type { Adaptor } from "../../src/control-plane/types"
 
 afterEach(async () => {
   mock.restore()
@@ -17,34 +15,35 @@ afterEach(async () => {
 
 Log.init({ print: false })
 
-const remote = { type: "testing", name: "remote-a" } as unknown as typeof WorkspaceTable.$inferInsert
-
-const TestAdaptor: Adaptor = {
-  configure(config) {
-    return config
-  },
-  async create() {
-    throw new Error("not used")
-  },
-  async remove() {},
-  async fetch(_config: unknown, _input: RequestInfo | URL, _init?: RequestInit) {
-    const body = new ReadableStream<Uint8Array>({
-      start(controller) {
-        const encoder = new TextEncoder()
-        controller.enqueue(encoder.encode('data: {"type":"remote.ready","properties":{}}\n\n'))
-        controller.close()
-      },
-    })
-    return new Response(body, {
-      status: 200,
-      headers: {
-        "content-type": "text/event-stream",
-      },
-    })
-  },
-}
-
-adaptors.installAdaptor("testing", TestAdaptor)
+const seen: string[] = []
+const remote = { type: "testing", name: "remote-a" } as unknown as typeof WorkspaceTable.$inferInsert.config
+
+mock.module("../../src/control-plane/adaptors", () => ({
+  getAdaptor: (config: { type: string }) => {
+    seen.push(config.type)
+    return {
+      async create() {
+        throw new Error("not used")
+      },
+      async remove() {},
+      async request() {
+        const body = new ReadableStream<Uint8Array>({
+          start(controller) {
+            const encoder = new TextEncoder()
+            controller.enqueue(encoder.encode('data: {"type":"remote.ready","properties":{}}\n\n'))
+            controller.close()
+          },
+        })
+        return new Response(body, {
+          status: 200,
+          headers: {
+            "content-type": "text/event-stream",
+          },
+        })
+      },
+    }
+  },
+}))
 
 describe("control-plane/workspace.startSyncing", () => {
   test("syncs only remote workspaces and emits remote SSE events", async () => {
@@ -63,16 +62,13 @@ describe("control-plane/workspace.startSyncing", () => {
         id: id1,
         branch: "main",
         project_id: project.id,
-        type: remote.type,
-        name: remote.name,
+        config: remote,
       },
       {
         id: id2,
         branch: "main",
         project_id: project.id,
-        type: "worktree",
-        directory: tmp.path,
-        name: "local",
+        config: { type: "worktree", directory: tmp.path },
       },
     ])
     .run(),
@@ -95,5 +91,7 @@ describe("control-plane/workspace.startSyncing", () => {
     ])
 
     await sync.stop()
+    expect(seen).toContain("testing")
+    expect(seen).not.toContain("worktree")
   })
 })
@@ -1,7 +1,7 @@
 {
   "$schema": "https://json.schemastore.org/package.json",
   "name": "@opencode-ai/plugin",
-  "version": "1.2.16",
+  "version": "1.2.15",
   "type": "module",
   "license": "MIT",
   "scripts": {
@@ -1,7 +1,7 @@
 {
   "$schema": "https://json.schemastore.org/package.json",
   "name": "@opencode-ai/sdk",
-  "version": "1.2.16",
+  "version": "1.2.15",
   "type": "module",
   "license": "MIT",
   "scripts": {
@@ -862,214 +862,6 @@ export class Tool extends HeyApiClient {
   }
 }
 
-export class Workspace extends HeyApiClient {
-  /**
-   * List workspaces
-   *
-   * List all workspaces.
-   */
-  public list<ThrowOnError extends boolean = false>(
-    parameters?: {
-      directory?: string
-      workspace?: string
-    },
-    options?: Options<never, ThrowOnError>,
-  ) {
-    const params = buildClientParams(
-      [parameters],
-      [
-        {
-          args: [
-            { in: "query", key: "directory" },
-            { in: "query", key: "workspace" },
-          ],
-        },
-      ],
-    )
-    return (options?.client ?? this.client).get<ExperimentalWorkspaceListResponses, unknown, ThrowOnError>({
-      url: "/experimental/workspace",
-      ...options,
-      ...params,
-    })
-  }
-
-  /**
-   * Create workspace
-   *
-   * Create a workspace for the current project.
-   */
-  public create<ThrowOnError extends boolean = false>(
-    parameters?: {
-      directory?: string
-      workspace?: string
-      id?: string
-      type?: string
-      branch?: string | null
-      extra?: unknown | null
-    },
-    options?: Options<never, ThrowOnError>,
-  ) {
-    const params = buildClientParams(
-      [parameters],
-      [
-        {
-          args: [
-            { in: "query", key: "directory" },
-            { in: "query", key: "workspace" },
-            { in: "body", key: "id" },
-            { in: "body", key: "type" },
-            { in: "body", key: "branch" },
-            { in: "body", key: "extra" },
-          ],
-        },
-      ],
-    )
-    return (options?.client ?? this.client).post<
-      ExperimentalWorkspaceCreateResponses,
-      ExperimentalWorkspaceCreateErrors,
-      ThrowOnError
-    >({
-      url: "/experimental/workspace",
-      ...options,
-      ...params,
-      headers: {
-        "Content-Type": "application/json",
-        ...options?.headers,
-        ...params.headers,
-      },
-    })
-  }
-
-  /**
-   * Remove workspace
-   *
-   * Remove an existing workspace.
-   */
-  public remove<ThrowOnError extends boolean = false>(
-    parameters: {
-      id: string
-      directory?: string
-      workspace?: string
-    },
-    options?: Options<never, ThrowOnError>,
-  ) {
-    const params = buildClientParams(
-      [parameters],
-      [
-        {
-          args: [
-            { in: "path", key: "id" },
-            { in: "query", key: "directory" },
-            { in: "query", key: "workspace" },
-          ],
-        },
-      ],
-    )
-    return (options?.client ?? this.client).delete<
-      ExperimentalWorkspaceRemoveResponses,
-      ExperimentalWorkspaceRemoveErrors,
-      ThrowOnError
-    >({
-      url: "/experimental/workspace/{id}",
-      ...options,
-      ...params,
-    })
-  }
-}
-
-export class Session extends HeyApiClient {
-  /**
-   * List sessions
-   *
-   * Get a list of all OpenCode sessions across projects, sorted by most recently updated. Archived sessions are excluded by default.
-   */
-  public list<ThrowOnError extends boolean = false>(
-    parameters?: {
-      directory?: string
-      workspace?: string
-      roots?: boolean
-      start?: number
-      cursor?: number
-      search?: string
-      limit?: number
-      archived?: boolean
-    },
-    options?: Options<never, ThrowOnError>,
-  ) {
-    const params = buildClientParams(
-      [parameters],
-      [
-        {
-          args: [
-            { in: "query", key: "directory" },
-            { in: "query", key: "workspace" },
-            { in: "query", key: "roots" },
-            { in: "query", key: "start" },
-            { in: "query", key: "cursor" },
-            { in: "query", key: "search" },
-            { in: "query", key: "limit" },
-            { in: "query", key: "archived" },
-          ],
-        },
-      ],
-    )
-    return (options?.client ?? this.client).get<ExperimentalSessionListResponses, unknown, ThrowOnError>({
-      url: "/experimental/session",
-      ...options,
-      ...params,
-    })
-  }
-}
-
-export class Resource extends HeyApiClient {
-  /**
-   * Get MCP resources
-   *
-   * Get all available MCP resources from connected servers. Optionally filter by name.
-   */
-  public list<ThrowOnError extends boolean = false>(
-    parameters?: {
-      directory?: string
-      workspace?: string
-    },
-    options?: Options<never, ThrowOnError>,
-  ) {
-    const params = buildClientParams(
-      [parameters],
-      [
-        {
-          args: [
-            { in: "query", key: "directory" },
-            { in: "query", key: "workspace" },
-          ],
-        },
-      ],
-    )
-    return (options?.client ?? this.client).get<ExperimentalResourceListResponses, unknown, ThrowOnError>({
-      url: "/experimental/resource",
-      ...options,
-      ...params,
-    })
-  }
-}
-
-export class Experimental extends HeyApiClient {
-  private _workspace?: Workspace
-  get workspace(): Workspace {
-    return (this._workspace ??= new Workspace({ client: this.client }))
-  }
-
-  private _session?: Session
-  get session(): Session {
-    return (this._session ??= new Session({ client: this.client }))
-  }
-
-  private _resource?: Resource
-  get resource(): Resource {
-    return (this._resource ??= new Resource({ client: this.client }))
-  }
-}
-
 export class Worktree extends HeyApiClient {
   /**
    * Remove worktree
@@ -1213,6 +1005,215 @@ export class Worktree extends HeyApiClient {
   }
 }
 
+export class Workspace extends HeyApiClient {
+  /**
+   * Remove workspace
+   *
+   * Remove an existing workspace.
+   */
+  public remove<ThrowOnError extends boolean = false>(
+    parameters: {
+      id: string
+      directory?: string
+      workspace?: string
+    },
+    options?: Options<never, ThrowOnError>,
+  ) {
+    const params = buildClientParams(
+      [parameters],
+      [
+        {
+          args: [
+            { in: "path", key: "id" },
+            { in: "query", key: "directory" },
+            { in: "query", key: "workspace" },
+          ],
+        },
+      ],
+    )
+    return (options?.client ?? this.client).delete<
+      ExperimentalWorkspaceRemoveResponses,
+      ExperimentalWorkspaceRemoveErrors,
+      ThrowOnError
+    >({
+      url: "/experimental/workspace/{id}",
+      ...options,
+      ...params,
+    })
+  }
+
+  /**
+   * Create workspace
+   *
+   * Create a workspace for the current project.
+   */
+  public create<ThrowOnError extends boolean = false>(
+    parameters: {
+      id: string
+      directory?: string
+      workspace?: string
+      branch?: string | null
+      config?: {
+        directory: string
+        type: "worktree"
+      }
+    },
+    options?: Options<never, ThrowOnError>,
+  ) {
+    const params = buildClientParams(
+      [parameters],
+      [
+        {
+          args: [
+            { in: "path", key: "id" },
+            { in: "query", key: "directory" },
+            { in: "query", key: "workspace" },
+            { in: "body", key: "branch" },
+            { in: "body", key: "config" },
+          ],
+        },
+      ],
+    )
+    return (options?.client ?? this.client).post<
+      ExperimentalWorkspaceCreateResponses,
+      ExperimentalWorkspaceCreateErrors,
+      ThrowOnError
+    >({
+      url: "/experimental/workspace/{id}",
+      ...options,
+      ...params,
+      headers: {
+        "Content-Type": "application/json",
+        ...options?.headers,
+        ...params.headers,
+      },
+    })
+  }
+
+  /**
+   * List workspaces
+   *
+   * List all workspaces.
+   */
+  public list<ThrowOnError extends boolean = false>(
+    parameters?: {
+      directory?: string
+      workspace?: string
+    },
+    options?: Options<never, ThrowOnError>,
+  ) {
+    const params = buildClientParams(
+      [parameters],
+      [
+        {
+          args: [
+            { in: "query", key: "directory" },
+            { in: "query", key: "workspace" },
+          ],
+        },
+      ],
+    )
+    return (options?.client ?? this.client).get<ExperimentalWorkspaceListResponses, unknown, ThrowOnError>({
+      url: "/experimental/workspace",
+      ...options,
+      ...params,
+    })
+  }
+}
+
+export class Session extends HeyApiClient {
+  /**
+   * List sessions
+   *
+   * Get a list of all OpenCode sessions across projects, sorted by most recently updated. Archived sessions are excluded by default.
+   */
+  public list<ThrowOnError extends boolean = false>(
+    parameters?: {
+      directory?: string
+      workspace?: string
+      roots?: boolean
+      start?: number
+      cursor?: number
+      search?: string
+      limit?: number
+      archived?: boolean
+    },
+    options?: Options<never, ThrowOnError>,
+  ) {
+    const params = buildClientParams(
+      [parameters],
+      [
+        {
+          args: [
+            { in: "query", key: "directory" },
+            { in: "query", key: "workspace" },
+            { in: "query", key: "roots" },
+            { in: "query", key: "start" },
+            { in: "query", key: "cursor" },
+            { in: "query", key: "search" },
+            { in: "query", key: "limit" },
+            { in: "query", key: "archived" },
+          ],
+        },
+      ],
+    )
+    return (options?.client ?? this.client).get<ExperimentalSessionListResponses, unknown, ThrowOnError>({
+      url: "/experimental/session",
+      ...options,
+      ...params,
+    })
+  }
+}
+
+export class Resource extends HeyApiClient {
+  /**
+   * Get MCP resources
+   *
+   * Get all available MCP resources from connected servers. Optionally filter by name.
+   */
+  public list<ThrowOnError extends boolean = false>(
+    parameters?: {
+      directory?: string
+      workspace?: string
+    },
+    options?: Options<never, ThrowOnError>,
+  ) {
+    const params = buildClientParams(
+      [parameters],
+      [
+        {
+          args: [
+            { in: "query", key: "directory" },
+            { in: "query", key: "workspace" },
+          ],
+        },
+      ],
+    )
+    return (options?.client ?? this.client).get<ExperimentalResourceListResponses, unknown, ThrowOnError>({
+      url: "/experimental/resource",
+      ...options,
+      ...params,
+    })
+  }
+}
+
+export class Experimental extends HeyApiClient {
+  private _workspace?: Workspace
+  get workspace(): Workspace {
+    return (this._workspace ??= new Workspace({ client: this.client }))
+  }
+
+  private _session?: Session
+  get session(): Session {
+    return (this._session ??= new Session({ client: this.client }))
+  }
+
+  private _resource?: Resource
+  get resource(): Resource {
+    return (this._resource ??= new Resource({ client: this.client }))
+  }
+}
+
 export class Session2 extends HeyApiClient {
   /**
    * List sessions
@@ -3897,16 +3898,16 @@ export class OpencodeClient extends HeyApiClient {
|
||||
return (this._tool ??= new Tool({ client: this.client }))
|
||||
}
|
||||
|
||||
private _experimental?: Experimental
|
||||
get experimental(): Experimental {
|
||||
return (this._experimental ??= new Experimental({ client: this.client }))
|
||||
}
|
||||
|
||||
private _worktree?: Worktree
|
||||
get worktree(): Worktree {
|
||||
return (this._worktree ??= new Worktree({ client: this.client }))
|
||||
}
|
||||
|
||||
private _experimental?: Experimental
|
||||
get experimental(): Experimental {
|
||||
return (this._experimental ??= new Experimental({ client: this.client }))
|
||||
}
|
||||
|
||||
private _session?: Session2
|
||||
get session(): Session2 {
|
||||
return (this._session ??= new Session2({ client: this.client }))
|
||||

@@ -889,6 +889,21 @@ export type EventVcsBranchUpdated = {
  }
}

export type EventWorktreeReady = {
  type: "worktree.ready"
  properties: {
    name: string
    branch: string
  }
}

export type EventWorktreeFailed = {
  type: "worktree.failed"
  properties: {
    message: string
  }
}

export type EventWorkspaceReady = {
  type: "workspace.ready"
  properties: {
@@ -942,21 +957,6 @@ export type EventPtyDeleted = {
  }
}

export type EventWorktreeReady = {
  type: "worktree.ready"
  properties: {
    name: string
    branch: string
  }
}

export type EventWorktreeFailed = {
  type: "worktree.failed"
  properties: {
    message: string
  }
}

export type Event =
  | EventInstallationUpdated
  | EventInstallationUpdateAvailable
@@ -995,14 +995,14 @@ export type Event =
  | EventSessionDiff
  | EventSessionError
  | EventVcsBranchUpdated
  | EventWorktreeReady
  | EventWorktreeFailed
  | EventWorkspaceReady
  | EventWorkspaceFailed
  | EventPtyCreated
  | EventPtyUpdated
  | EventPtyExited
  | EventPtyDeleted
  | EventWorktreeReady
  | EventWorktreeFailed

export type GlobalEvent = {
  directory: string
@@ -1631,16 +1631,6 @@ export type ToolListItem = {

export type ToolList = Array<ToolListItem>

export type Workspace = {
  id: string
  type: string
  branch: string | null
  name: string | null
  directory: string | null
  extra: unknown | null
  projectID: string
}

export type Worktree = {
  name: string
  branch: string
@@ -1655,6 +1645,16 @@ export type WorktreeCreateInput = {
  startCommand?: string
}

export type Workspace = {
  id: string
  branch: string | null
  projectID: string
  config: {
    directory: string
    type: "worktree"
  }
}

export type WorktreeRemoveInput = {
  directory: string
}
@@ -2444,93 +2444,6 @@ export type ToolListResponses = {

export type ToolListResponse = ToolListResponses[keyof ToolListResponses]

export type ExperimentalWorkspaceListData = {
  body?: never
  path?: never
  query?: {
    directory?: string
    workspace?: string
  }
  url: "/experimental/workspace"
}

export type ExperimentalWorkspaceListResponses = {
  /**
   * Workspaces
   */
  200: Array<Workspace>
}

export type ExperimentalWorkspaceListResponse =
  ExperimentalWorkspaceListResponses[keyof ExperimentalWorkspaceListResponses]

export type ExperimentalWorkspaceCreateData = {
  body?: {
    id?: string
    type: string
    branch: string | null
    extra: unknown | null
  }
  path?: never
  query?: {
    directory?: string
    workspace?: string
  }
  url: "/experimental/workspace"
}

export type ExperimentalWorkspaceCreateErrors = {
  /**
   * Bad request
   */
  400: BadRequestError
}

export type ExperimentalWorkspaceCreateError =
  ExperimentalWorkspaceCreateErrors[keyof ExperimentalWorkspaceCreateErrors]

export type ExperimentalWorkspaceCreateResponses = {
  /**
   * Workspace created
   */
  200: Workspace
}

export type ExperimentalWorkspaceCreateResponse =
  ExperimentalWorkspaceCreateResponses[keyof ExperimentalWorkspaceCreateResponses]

export type ExperimentalWorkspaceRemoveData = {
  body?: never
  path: {
    id: string
  }
  query?: {
    directory?: string
    workspace?: string
  }
  url: "/experimental/workspace/{id}"
}

export type ExperimentalWorkspaceRemoveErrors = {
  /**
   * Bad request
   */
  400: BadRequestError
}

export type ExperimentalWorkspaceRemoveError =
  ExperimentalWorkspaceRemoveErrors[keyof ExperimentalWorkspaceRemoveErrors]

export type ExperimentalWorkspaceRemoveResponses = {
  /**
   * Workspace removed
   */
  200: Workspace
}

export type ExperimentalWorkspaceRemoveResponse =
  ExperimentalWorkspaceRemoveResponses[keyof ExperimentalWorkspaceRemoveResponses]

export type WorktreeRemoveData = {
  body?: WorktreeRemoveInput
  path?: never
@@ -2606,6 +2519,96 @@ export type WorktreeCreateResponses = {

export type WorktreeCreateResponse = WorktreeCreateResponses[keyof WorktreeCreateResponses]

export type ExperimentalWorkspaceRemoveData = {
  body?: never
  path: {
    id: string
  }
  query?: {
    directory?: string
    workspace?: string
  }
  url: "/experimental/workspace/{id}"
}

export type ExperimentalWorkspaceRemoveErrors = {
  /**
   * Bad request
   */
  400: BadRequestError
}

export type ExperimentalWorkspaceRemoveError =
  ExperimentalWorkspaceRemoveErrors[keyof ExperimentalWorkspaceRemoveErrors]

export type ExperimentalWorkspaceRemoveResponses = {
  /**
   * Workspace removed
   */
  200: Workspace
}

export type ExperimentalWorkspaceRemoveResponse =
  ExperimentalWorkspaceRemoveResponses[keyof ExperimentalWorkspaceRemoveResponses]

export type ExperimentalWorkspaceCreateData = {
  body?: {
    branch: string | null
    config: {
      directory: string
      type: "worktree"
    }
  }
  path: {
    id: string
  }
  query?: {
    directory?: string
    workspace?: string
  }
  url: "/experimental/workspace/{id}"
}

export type ExperimentalWorkspaceCreateErrors = {
  /**
   * Bad request
   */
  400: BadRequestError
}

export type ExperimentalWorkspaceCreateError =
  ExperimentalWorkspaceCreateErrors[keyof ExperimentalWorkspaceCreateErrors]

export type ExperimentalWorkspaceCreateResponses = {
  /**
   * Workspace created
   */
  200: Workspace
}

export type ExperimentalWorkspaceCreateResponse =
  ExperimentalWorkspaceCreateResponses[keyof ExperimentalWorkspaceCreateResponses]

export type ExperimentalWorkspaceListData = {
  body?: never
  path?: never
  query?: {
    directory?: string
    workspace?: string
  }
  url: "/experimental/workspace"
}

export type ExperimentalWorkspaceListResponses = {
  /**
   * Workspaces
   */
  200: Array<Workspace>
}

export type ExperimentalWorkspaceListResponse =
  ExperimentalWorkspaceListResponses[keyof ExperimentalWorkspaceListResponses]

export type WorktreeResetData = {
  body?: WorktreeResetInput
  path?: never

@@ -1108,196 +1108,6 @@
        ]
      }
    },
    "/experimental/workspace": {
      "post": {
        "operationId": "experimental.workspace.create",
        "parameters": [
          {
            "in": "query",
            "name": "directory",
            "schema": {
              "type": "string"
            }
          },
          {
            "in": "query",
            "name": "workspace",
            "schema": {
              "type": "string"
            }
          }
        ],
        "summary": "Create workspace",
        "description": "Create a workspace for the current project.",
        "responses": {
          "200": {
            "description": "Workspace created",
            "content": {
              "application/json": {
                "schema": {
                  "$ref": "#/components/schemas/Workspace"
                }
              }
            }
          },
          "400": {
            "description": "Bad request",
            "content": {
              "application/json": {
                "schema": {
                  "$ref": "#/components/schemas/BadRequestError"
                }
              }
            }
          }
        },
        "requestBody": {
          "content": {
            "application/json": {
              "schema": {
                "type": "object",
                "properties": {
                  "id": {
                    "type": "string",
                    "pattern": "^wrk.*"
                  },
                  "type": {
                    "type": "string"
                  },
                  "branch": {
                    "anyOf": [
                      {
                        "type": "string"
                      },
                      {
                        "type": "null"
                      }
                    ]
                  },
                  "extra": {
                    "anyOf": [
                      {},
                      {
                        "type": "null"
                      }
                    ]
                  }
                },
                "required": ["type", "branch", "extra"]
              }
            }
          }
        },
        "x-codeSamples": [
          {
            "lang": "js",
            "source": "import { createOpencodeClient } from \"@opencode-ai/sdk\n\nconst client = createOpencodeClient()\nawait client.experimental.workspace.create({\n ...\n})"
          }
        ]
      },
      "get": {
        "operationId": "experimental.workspace.list",
        "parameters": [
          {
            "in": "query",
            "name": "directory",
            "schema": {
              "type": "string"
            }
          },
          {
            "in": "query",
            "name": "workspace",
            "schema": {
              "type": "string"
            }
          }
        ],
        "summary": "List workspaces",
        "description": "List all workspaces.",
        "responses": {
          "200": {
            "description": "Workspaces",
            "content": {
              "application/json": {
                "schema": {
                  "type": "array",
                  "items": {
                    "$ref": "#/components/schemas/Workspace"
                  }
                }
              }
            }
          }
        },
        "x-codeSamples": [
          {
            "lang": "js",
            "source": "import { createOpencodeClient } from \"@opencode-ai/sdk\n\nconst client = createOpencodeClient()\nawait client.experimental.workspace.list({\n ...\n})"
          }
        ]
      }
    },
    "/experimental/workspace/{id}": {
      "delete": {
        "operationId": "experimental.workspace.remove",
        "parameters": [
          {
            "in": "query",
            "name": "directory",
            "schema": {
              "type": "string"
            }
          },
          {
            "in": "query",
            "name": "workspace",
            "schema": {
              "type": "string"
            }
          },
          {
            "in": "path",
            "name": "id",
            "schema": {
              "type": "string",
              "pattern": "^wrk.*"
            },
            "required": true
          }
        ],
        "summary": "Remove workspace",
        "description": "Remove an existing workspace.",
        "responses": {
          "200": {
            "description": "Workspace removed",
            "content": {
              "application/json": {
                "schema": {
                  "$ref": "#/components/schemas/Workspace"
                }
              }
            }
          },
          "400": {
            "description": "Bad request",
            "content": {
              "application/json": {
                "schema": {
                  "$ref": "#/components/schemas/BadRequestError"
                }
              }
            }
          }
        },
        "x-codeSamples": [
          {
            "lang": "js",
            "source": "import { createOpencodeClient } from \"@opencode-ai/sdk\n\nconst client = createOpencodeClient()\nawait client.experimental.workspace.remove({\n ...\n})"
          }
        ]
      }
    },
    "/experimental/worktree": {
      "post": {
        "operationId": "worktree.create",
@@ -1458,6 +1268,207 @@
        ]
      }
    },
    "/experimental/workspace/{id}": {
      "post": {
        "operationId": "experimental.workspace.create",
        "parameters": [
          {
            "in": "query",
            "name": "directory",
            "schema": {
              "type": "string"
            }
          },
          {
            "in": "query",
            "name": "workspace",
            "schema": {
              "type": "string"
            }
          },
          {
            "in": "path",
            "name": "id",
            "schema": {
              "type": "string",
              "pattern": "^wrk.*"
            },
            "required": true
          }
        ],
        "summary": "Create workspace",
        "description": "Create a workspace for the current project.",
        "responses": {
          "200": {
            "description": "Workspace created",
            "content": {
              "application/json": {
                "schema": {
                  "$ref": "#/components/schemas/Workspace"
                }
              }
            }
          },
          "400": {
            "description": "Bad request",
            "content": {
              "application/json": {
                "schema": {
                  "$ref": "#/components/schemas/BadRequestError"
                }
              }
            }
          }
        },
        "requestBody": {
          "content": {
            "application/json": {
              "schema": {
                "type": "object",
                "properties": {
                  "branch": {
                    "anyOf": [
                      {
                        "type": "string"
                      },
                      {
                        "type": "null"
                      }
                    ]
                  },
                  "config": {
                    "anyOf": [
                      {
                        "type": "object",
                        "properties": {
                          "directory": {
                            "type": "string"
                          },
                          "type": {
                            "type": "string",
                            "const": "worktree"
                          }
                        },
                        "required": ["directory", "type"]
                      }
                    ]
                  }
                },
                "required": ["branch", "config"]
              }
            }
          }
        },
        "x-codeSamples": [
          {
            "lang": "js",
            "source": "import { createOpencodeClient } from \"@opencode-ai/sdk\n\nconst client = createOpencodeClient()\nawait client.experimental.workspace.create({\n ...\n})"
          }
        ]
      },
      "delete": {
        "operationId": "experimental.workspace.remove",
        "parameters": [
          {
            "in": "query",
            "name": "directory",
            "schema": {
              "type": "string"
            }
          },
          {
            "in": "query",
            "name": "workspace",
            "schema": {
              "type": "string"
            }
          },
          {
            "in": "path",
            "name": "id",
            "schema": {
              "type": "string",
              "pattern": "^wrk.*"
            },
            "required": true
          }
        ],
        "summary": "Remove workspace",
        "description": "Remove an existing workspace.",
        "responses": {
          "200": {
            "description": "Workspace removed",
            "content": {
              "application/json": {
                "schema": {
                  "$ref": "#/components/schemas/Workspace"
                }
              }
            }
          },
          "400": {
            "description": "Bad request",
            "content": {
              "application/json": {
                "schema": {
                  "$ref": "#/components/schemas/BadRequestError"
                }
              }
            }
          }
        },
        "x-codeSamples": [
          {
            "lang": "js",
            "source": "import { createOpencodeClient } from \"@opencode-ai/sdk\n\nconst client = createOpencodeClient()\nawait client.experimental.workspace.remove({\n ...\n})"
          }
        ]
      }
    },
    "/experimental/workspace": {
      "get": {
        "operationId": "experimental.workspace.list",
        "parameters": [
          {
            "in": "query",
            "name": "directory",
            "schema": {
              "type": "string"
            }
          },
          {
            "in": "query",
            "name": "workspace",
            "schema": {
              "type": "string"
            }
          }
        ],
        "summary": "List workspaces",
        "description": "List all workspaces.",
        "responses": {
          "200": {
            "description": "Workspaces",
            "content": {
              "application/json": {
                "schema": {
                  "type": "array",
                  "items": {
                    "$ref": "#/components/schemas/Workspace"
                  }
                }
              }
            }
          }
        },
        "x-codeSamples": [
          {
            "lang": "js",
            "source": "import { createOpencodeClient } from \"@opencode-ai/sdk\n\nconst client = createOpencodeClient()\nawait client.experimental.workspace.list({\n ...\n})"
          }
        ]
      }
    },
    "/experimental/worktree/reset": {
      "post": {
        "operationId": "worktree.reset",
@@ -9269,6 +9280,47 @@
        },
        "required": ["type", "properties"]
      },
      "Event.worktree.ready": {
        "type": "object",
        "properties": {
          "type": {
            "type": "string",
            "const": "worktree.ready"
          },
          "properties": {
            "type": "object",
            "properties": {
              "name": {
                "type": "string"
              },
              "branch": {
                "type": "string"
              }
            },
            "required": ["name", "branch"]
          }
        },
        "required": ["type", "properties"]
      },
      "Event.worktree.failed": {
        "type": "object",
        "properties": {
          "type": {
            "type": "string",
            "const": "worktree.failed"
          },
          "properties": {
            "type": "object",
            "properties": {
              "message": {
                "type": "string"
              }
            },
            "required": ["message"]
          }
        },
        "required": ["type", "properties"]
      },
      "Event.workspace.ready": {
        "type": "object",
        "properties": {
@@ -9420,47 +9472,6 @@
        },
        "required": ["type", "properties"]
      },
      "Event.worktree.ready": {
        "type": "object",
        "properties": {
          "type": {
            "type": "string",
            "const": "worktree.ready"
          },
          "properties": {
            "type": "object",
            "properties": {
              "name": {
                "type": "string"
              },
              "branch": {
                "type": "string"
              }
            },
            "required": ["name", "branch"]
          }
        },
        "required": ["type", "properties"]
      },
      "Event.worktree.failed": {
        "type": "object",
        "properties": {
          "type": {
            "type": "string",
            "const": "worktree.failed"
          },
          "properties": {
            "type": "object",
            "properties": {
              "message": {
                "type": "string"
              }
            },
            "required": ["message"]
          }
        },
        "required": ["type", "properties"]
      },
      "Event": {
        "anyOf": [
          {
@@ -9574,6 +9585,12 @@
          {
            "$ref": "#/components/schemas/Event.vcs.branch.updated"
          },
          {
            "$ref": "#/components/schemas/Event.worktree.ready"
          },
          {
            "$ref": "#/components/schemas/Event.worktree.failed"
          },
          {
            "$ref": "#/components/schemas/Event.workspace.ready"
          },
@@ -9591,12 +9608,6 @@
          },
          {
            "$ref": "#/components/schemas/Event.pty.deleted"
          },
          {
            "$ref": "#/components/schemas/Event.worktree.ready"
          },
          {
            "$ref": "#/components/schemas/Event.worktree.failed"
          }
        ]
      },
@@ -10990,60 +11001,6 @@
          "$ref": "#/components/schemas/ToolListItem"
        }
      },
      "Workspace": {
        "type": "object",
        "properties": {
          "id": {
            "type": "string",
            "pattern": "^wrk.*"
          },
          "type": {
            "type": "string"
          },
          "branch": {
            "anyOf": [
              {
                "type": "string"
              },
              {
                "type": "null"
              }
            ]
          },
          "name": {
            "anyOf": [
              {
                "type": "string"
              },
              {
                "type": "null"
              }
            ]
          },
          "directory": {
            "anyOf": [
              {
                "type": "string"
              },
              {
                "type": "null"
              }
            ]
          },
          "extra": {
            "anyOf": [
              {},
              {
                "type": "null"
              }
            ]
          },
          "projectID": {
            "type": "string"
          }
        },
        "required": ["id", "type", "branch", "name", "directory", "extra", "projectID"]
      },
      "Worktree": {
        "type": "object",
        "properties": {
@@ -11071,6 +11028,46 @@
          }
        }
      },
      "Workspace": {
        "type": "object",
        "properties": {
          "id": {
            "type": "string",
            "pattern": "^wrk.*"
          },
          "branch": {
            "anyOf": [
              {
                "type": "string"
              },
              {
                "type": "null"
              }
            ]
          },
          "projectID": {
            "type": "string"
          },
          "config": {
            "anyOf": [
              {
                "type": "object",
                "properties": {
                  "directory": {
                    "type": "string"
                  },
                  "type": {
                    "type": "string",
                    "const": "worktree"
                  }
                },
                "required": ["directory", "type"]
              }
            ]
          }
        },
        "required": ["id", "branch", "projectID", "config"]
      },
      "WorktreeRemoveInput": {
        "type": "object",
        "properties": {

@@ -1,6 +1,6 @@
{
  "name": "@opencode-ai/slack",
  "version": "1.2.16",
  "version": "1.2.15",
  "type": "module",
  "license": "MIT",
  "scripts": {

@@ -1,6 +1,6 @@
{
  "name": "@opencode-ai/ui",
  "version": "1.2.16",
  "version": "1.2.15",
  "type": "module",
  "license": "MIT",
  "exports": {

@@ -57,11 +57,6 @@ export function TextReveal(props: {
      () => props.text,
      (next, prev) => {
        if (next === prev) return
        if (typeof next === "string" && typeof prev === "string" && next.startsWith(prev)) {
          setCur(next)
          widen(win())
          return
        }
        setSwapping(true)
        setOld(prev)
        setCur(next)

@@ -78,13 +78,11 @@ export function createAutoScroll(options: AutoScrollOptions) {

  const scrollToBottom = (force: boolean) => {
    if (!force && !active()) return

    if (force && store.userScrolled) setStore("userScrolled", false)

    const el = scroll
    if (!el) return

    if (!force && store.userScrolled) return
    if (force && store.userScrolled) setStore("userScrolled", false)

    const distance = distanceFromBottom(el)
    if (distance < 2) {

@@ -1,6 +1,6 @@
{
  "name": "@opencode-ai/util",
  "version": "1.2.16",
  "version": "1.2.15",
  "private": true,
  "type": "module",
  "license": "MIT",

@@ -2,7 +2,7 @@
  "name": "@opencode-ai/web",
  "type": "module",
  "license": "MIT",
  "version": "1.2.16",
  "version": "1.2.15",
  "scripts": {
    "dev": "astro dev",
    "dev:remote": "VITE_API_URL=https://api.opencode.ai astro dev",

@@ -1,145 +0,0 @@
---
title: Go
description: اشتراك منخفض التكلفة لنماذج البرمجة المفتوحة.
---

import config from "../../../../config.mjs"
export const console = config.console
export const email = `mailto:${config.email}`

OpenCode Go هو اشتراك منخفض التكلفة بقيمة **10 دولارات شهرياً** يمنحك وصولاً موثوقاً إلى نماذج البرمجة المفتوحة الشائعة.

:::note
OpenCode Go حالياً في مرحلة تجريبية (beta).
:::

يعمل Go مثل أي مزود آخر في OpenCode. تشترك في OpenCode Go وتحصل على مفتاح API الخاص بك. إنه **اختياري تماماً** ولا تحتاج إلى استخدامه لاستخدام OpenCode.

صُمم بشكل أساسي للمستخدمين الدوليين، مع استضافة النماذج في الولايات المتحدة والاتحاد الأوروبي وسنغافورة لضمان وصول عالمي مستقر.

---

## الخلفية (Background)

أصبحت النماذج المفتوحة جيدة حقاً. فهي تصل الآن إلى أداء قريب من النماذج المملوكة لمهام البرمجة. ولأن العديد من المزودين يمكنهم تقديمها بشكل تنافسي، فهي عادة ما تكون أرخص بكثير.

ومع ذلك، فإن الحصول على وصول موثوق ومنخفض الكمون (low latency) إليها قد يكون صعباً. يختلف المزودون في الجودة والتوافر.

:::tip
قمنا باختبار مجموعة مختارة من النماذج والمزودين الذين يعملون بشكل جيد مع OpenCode.
:::

لإصلاح ذلك، قمنا ببعض الأشياء:

1. اختبرنا مجموعة مختارة من النماذج المفتوحة وتحدثنا مع فرقهم حول أفضل طريقة لتشغيلها.
2. عملنا بعد ذلك مع عدد قليل من المزودين للتأكد من تقديمها بشكل صحيح.
3. أخيراً، قمنا بقياس أداء (benchmark) مزيج النموذج/المزود وتوصلنا إلى قائمة نشعر بالراحة في التوصية بها.

يمنحك OpenCode Go الوصول إلى هذه النماذج مقابل **10 دولارات شهرياً**.

---

## كيف يعمل (How it works)

يعمل OpenCode Go مثل أي مزود آخر في OpenCode.

1. قم بتسجيل الدخول إلى **<a href={console}>OpenCode Zen</a>**، واشترك في Go، وانسخ مفتاح API الخاص بك.
2. قم بتشغيل الأمر `/connect` في واجهة TUI، وحدد `OpenCode Go`، والصق مفتاح API الخاص بك.
3. قم بتشغيل `/models` في واجهة TUI لرؤية قائمة النماذج المتاحة من خلال Go.

:::note
يمكن لعضو واحد فقط لكل مساحة عمل (workspace) الاشتراك في OpenCode Go.
:::

تتضمن القائمة الحالية للنماذج:

- **GLM-5**
- **Kimi K2.5**
- **MiniMax M2.5**

قد تتغير قائمة النماذج ونحن نختبر ونضيف نماذج جديدة.

---

## حدود الاستخدام (Usage limits)

يتضمن OpenCode Go الحدود التالية:

- **حد 5 ساعات** — 12 دولاراً من الاستخدام
- **حد أسبوعي** — 30 دولاراً من الاستخدام
- **حد شهري** — 60 دولاراً من الاستخدام

يتم تعريف الحدود بقيمة الدولار. هذا يعني أن عدد طلباتك الفعلي يعتمد على النموذج الذي تستخدمه. تسمح النماذج الأرخص مثل MiniMax M2.5 بمزيد من الطلبات، بينما تسمح النماذج الأعلى تكلفة مثل GLM-5 بعدد أقل.

يوفر الجدول أدناه عدداً تقديرياً للطلبات بناءً على أنماط استخدام Go النموذجية:

| | GLM-5 | Kimi K2.5 | MiniMax M2.5 |
| ---------------- | ----- | --------- | ------------ |
| طلبات كل 5 ساعات | 1,150 | 1,850 | 30,000 |
| طلبات أسبوعياً | 2,880 | 4,630 | 75,000 |
| طلبات شهرياً | 5,750 | 9,250 | 150,000 |

تعتمد التقديرات على أنماط الطلبات المتوسطة الملحوظة:

- GLM-5 — 700 input, 52,000 cached, 150 output tokens per request
- Kimi K2.5 — 870 input, 55,000 cached, 200 output tokens per request
- MiniMax M2.5 — 300 input, 55,000 cached, 125 output tokens per request

يمكنك تتبع استخدامك الحالي في **<a href={console}>console</a>**.

:::tip
إذا وصلت إلى حد الاستخدام، يمكنك الاستمرار في استخدام النماذج المجانية.
:::

قد تتغير حدود الاستخدام ونحن نتعلم من الاستخدام المبكر والملاحظات.

---

### التسعير (Pricing)

OpenCode Go هي خطة اشتراك بقيمة **10 دولارات شهرياً**. أدناه الأسعار **لكل 1 مليون رمز (token)**.

| Model | Input | Output | Cached Read |
| ------------ | ----- | ------ | ----------- |
| GLM-5 | $1.00 | $3.20 | $0.20 |
| Kimi K2.5 | $0.60 | $3.00 | $0.10 |
| MiniMax M2.5 | $0.30 | $1.20 | $0.03 |

---

### الاستخدام خارج الحدود (Usage beyond limits)

إذا كان لديك أيضاً أرصدة في رصيد Zen الخاص بك، يمكنك تمكين خيار **Use balance** في الـ console. عند التمكين، سيعود Go لاستخدام رصيد Zen الخاص بك بعد وصولك إلى حدود الاستخدام بدلاً من حظر الطلبات.

---

## نقاط النهاية (Endpoints)

يمكنك أيضاً الوصول إلى نماذج Go من خلال نقاط نهاية API التالية.

| Model | Model ID | Endpoint | AI SDK Package |
| ------------ | ------------ | ------------------------------------------------ | --------------------------- |
| GLM-5 | glm-5 | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| Kimi K2.5 | kimi-k2.5 | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| MiniMax M2.5 | minimax-m2.5 | `https://opencode.ai/zen/go/v1/messages` | `@ai-sdk/anthropic` |

يستخدم [model id](/docs/config/#models) في تكوين OpenCode الخاص بك التنسيق `opencode-go/<model-id>`. على سبيل المثال، بالنسبة لـ Kimi K2.5، ستستخدم `opencode-go/kimi-k2.5` في التكوين الخاص بك.

---

## الخصوصية (Privacy)

تم تصميم الخطة بشكل أساسي للمستخدمين الدوليين، مع استضافة النماذج في الولايات المتحدة والاتحاد الأوروبي وسنغافورة لضمان وصول عالمي مستقر.

<a href={email}>تواصل معنا</a> إذا كان لديك أي أسئلة.

---

## الأهداف (Goals)

أنشأنا OpenCode Go لـ:

1. جعل برمجة الذكاء الاصطناعي **في المتناول** لمزيد من الناس باشتراك منخفض التكلفة.
2. توفير وصول **موثوق** لأفضل نماذج البرمجة المفتوحة.
3. انتقاء نماذج **مختبرة وتم قياس أدائها** لاستخدام وكيل البرمجة.
4. عدم وجود **قيود (lock-in)** من خلال السماح لك باستخدام أي مزود آخر مع OpenCode أيضاً.
@@ -1,159 +0,0 @@
---
title: Go
description: Povoljna pretplata za otvorene modele kodiranja.
---

import config from "../../../../config.mjs"
export const console = config.console
export const email = `mailto:${config.email}`

OpenCode Go je povoljna pretplata od **$10/mjesečno** koja vam daje pouzdan pristup popularnim otvorenim modelima kodiranja.

:::note
OpenCode Go je trenutno u beta fazi.
:::

Go funkcioniše kao bilo koji drugi pružatelj usluga u OpenCode-u. Pretplatite se na OpenCode Go i
dobijete svoj API ključ. To je **potpuno opcionalno** i ne morate ga koristiti da biste
koristili OpenCode.

Dizajniran je prvenstveno za međunarodne korisnike, sa modelima hostovanim u SAD-u, EU i Singapuru za stabilan globalni pristup.

---

## Pozadina

Otvoreni modeli su postali zaista dobri. Sada dostižu performanse bliske
vlasničkim modelima za zadatke kodiranja. A budući da ih mnogi pružatelji usluga mogu posluživati
konkurentno, obično su daleko jeftiniji.

Međutim, dobivanje pouzdanog pristupa s malim kašnjenjem može biti teško. Pružatelji usluga
variraju u kvaliteti i dostupnosti.

:::tip
Testirali smo odabranu grupu modela i pružatelja usluga koji dobro rade sa OpenCode-om.
:::

Da bismo ovo riješili, uradili smo nekoliko stvari:

1. Testirali smo odabranu grupu otvorenih modela i razgovarali sa njihovim timovima o tome kako ih
   najbolje pokrenuti.
2. Zatim smo radili sa nekoliko pružatelja usluga kako bismo bili sigurni da su ispravno
   posluživani.
3. Konačno, benchmarkirali smo kombinaciju modela/pružatelja usluga i došli do
   liste koju se osjećamo dobro preporučiti.

OpenCode Go vam daje pristup ovim modelima za **$10/mjesečno**.

---

## Kako to funkcioniše

OpenCode Go funkcioniše kao bilo koji drugi pružatelj usluga u OpenCode-u.

1. Prijavite se na **<a href={console}>OpenCode Zen</a>**, pretplatite se na Go i
   kopirajte svoj API ključ.
2. Pokrenite `/connect` komandu u TUI-ju, odaberite `OpenCode Go` i zalijepite
   svoj API ključ.
3. Pokrenite `/models` u TUI-ju da vidite listu modela dostupnih putem Go.

:::note
Samo jedan član po radnom prostoru se može pretplatiti na OpenCode Go.
:::

Trenutna lista modela uključuje:

- **GLM-5**
- **Kimi K2.5**
- **MiniMax M2.5**

Lista modela se može mijenjati kako testiramo i dodajemo nove.

---

## Ograničenja korištenja

OpenCode Go uključuje sljedeća ograničenja:

- **Limit od 5 sati** — $12 korištenja
- **Sedmični limit** — $30 korištenja
- **Mjesečni limit** — $60 korištenja

Limiti su definisani u dolarskoj vrijednosti. To znači da vaš stvarni broj zahtjeva zavisi od modela koji koristite. Jeftiniji modeli kao što je MiniMax M2.5 omogućavaju više zahtjeva, dok skuplji modeli kao što je GLM-5 omogućavaju manje.

Tabela ispod pruža procijenjeni broj zahtjeva na osnovu tipičnih Go obrazaca korištenja:

| | GLM-5 | Kimi K2.5 | MiniMax M2.5 |
| ------------------ | ----- | --------- | ------------ |
| zahtjeva po 5 sati | 1,150 | 1,850 | 30,000 |
| zahtjeva sedmično | 2,880 | 4,630 | 75,000 |
| zahtjeva mjesečno | 5,750 | 9,250 | 150,000 |

Procjene su zasnovane na uočenim prosječnim obrascima zahtjeva:

- GLM-5 — 700 ulaznih, 52,000 keširanih, 150 izlaznih tokena po zahtjevu
- Kimi K2.5 — 870 ulaznih, 55,000 keširanih, 200 izlaznih tokena po zahtjevu
- MiniMax M2.5 — 300 ulaznih, 55,000 keširanih, 125 izlaznih tokena po zahtjevu

Možete pratiti svoje trenutno korištenje u **<a href={console}>konzoli</a>**.

:::tip
Ako dosegnete limit korištenja, možete nastaviti koristiti besplatne modele.
:::

Ograničenja korištenja se mogu mijenjati kako učimo iz rane upotrebe i povratnih informacija.
|
||||
|
||||
---
|
||||
|
||||
### Cijene
|
||||
|
||||
OpenCode Go je pretplatnički plan od **$10/mjesečno**. Ispod su cijene **po 1M tokena**.
|
||||
|
||||
| Model | Ulaz | Izlaz | Keširano čitanje |
|
||||
| ------------ | ----- | ----- | ---------------- |
|
||||
| GLM-5 | $1.00 | $3.20 | $0.20 |
|
||||
| Kimi K2.5 | $0.60 | $3.00 | $0.10 |
|
||||
| MiniMax M2.5 | $0.30 | $1.20 | $0.03 |
|
||||
|
||||
---
|
||||
|
||||
### Korištenje izvan limita
|
||||
|
||||
Ako također imate kredita na svom Zen saldu, možete omogućiti opciju **Use balance**
|
||||
u konzoli. Kada je omogućeno, Go će se prebaciti na vaš Zen saldo
|
||||
nakon što dosegnete limite korištenja umjesto blokiranja zahtjeva.
|
||||
|
||||
---
|
||||
|
||||
## Endpointi
|
||||
|
||||
Također možete pristupiti Go modelima putem sljedećih API endpointa.
|
||||
|
||||
| Model | ID modela | Endpoint | AI SDK Paket |
|
||||
| ------------ | ------------ | ------------------------------------------------ | --------------------------- |
|
||||
| GLM-5 | glm-5 | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
|
||||
| Kimi K2.5 | kimi-k2.5 | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
|
||||
| MiniMax M2.5 | minimax-m2.5 | `https://opencode.ai/zen/go/v1/messages` | `@ai-sdk/anthropic` |
|
||||
|
||||
[Model id](/docs/config/#models) u vašoj OpenCode konfiguraciji
|
||||
koristi format `opencode-go/<model-id>`. Na primjer, za Kimi K2.5, biste
|
||||
koristili `opencode-go/kimi-k2.5` u vašoj konfiguraciji.
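Minimalni isječak `opencode.json` konfiguracije (ilustrativna skica; tačnu šemu potražite u [dokumentaciji konfiguracije](/docs/config/)):

```json
{
  "model": "opencode-go/kimi-k2.5"
}
```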

---

## Privatnost

Plan je dizajniran prvenstveno za međunarodne korisnike, sa modelima hostovanim u SAD-u, EU i Singapuru za stabilan globalni pristup.

<a href={email}>Kontaktirajte nas</a> ako imate bilo kakvih pitanja.

---

## Ciljevi

Kreirali smo OpenCode Go da:

1. Učinimo AI kodiranje **pristupačnim** većem broju ljudi uz povoljnu pretplatu.
2. Pružimo **pouzdan** pristup najboljim otvorenim modelima kodiranja.
3. Kuriramo modele koji su **testirani i benchmarkirani** za upotrebu sa agentima za kodiranje.
4. Nemamo **zaključavanja (lock-in)**, omogućavajući vam da sa OpenCode-om koristite i bilo kojeg drugog pružatelja usluga.
@@ -1,145 +0,0 @@
---
title: Go
description: Lavprisabonnement for åbne kodningsmodeller.
---

import config from "../../../../config.mjs"
export const console = config.console
export const email = `mailto:${config.email}`

OpenCode Go er et billigt **$10/måned** abonnement, der giver dig pålidelig adgang til populære åbne kodningsmodeller.

:::note
OpenCode Go er i øjeblikket i beta.
:::

Go fungerer ligesom enhver anden udbyder i OpenCode. Du abonnerer på OpenCode Go og får din API-nøgle. Det er **helt valgfrit**, og du behøver ikke at bruge det for at bruge OpenCode.

Det er primært designet til internationale brugere, med modeller hostet i USA, EU og Singapore for stabil global adgang.

---

## Baggrund

Åbne modeller er blevet virkelig gode. De når nu en ydeevne tæt på proprietære modeller til kodningsopgaver. Og fordi mange udbydere kan tilbyde dem konkurrencedygtigt, er de normalt langt billigere.

Det kan dog være svært at få pålidelig adgang med lav latenstid til dem. Udbydere varierer i kvalitet og tilgængelighed.

:::tip
Vi testede en udvalgt gruppe af modeller og udbydere, der fungerer godt med OpenCode.
:::

For at løse dette gjorde vi et par ting:

1. Vi testede en udvalgt gruppe af åbne modeller og talte med deres teams om, hvordan man bedst kører dem.
2. Vi arbejdede derefter sammen med nogle få udbydere for at sikre, at disse blev leveret korrekt.
3. Endelig benchmarkede vi kombinationen af model/udbyder og kom frem til en liste, som vi har det godt med at anbefale.

OpenCode Go giver dig adgang til disse modeller for **$10/måned**.

---

## Sådan fungerer det

OpenCode Go fungerer ligesom enhver anden udbyder i OpenCode.

1. Du logger ind på **<a href={console}>OpenCode Zen</a>**, abonnerer på Go, og kopierer din API-nøgle.
2. Du kører kommandoen `/connect` i TUI'en, vælger `OpenCode Go`, og indsætter din API-nøgle.
3. Kør `/models` i TUI'en for at se listen over modeller, der er tilgængelige gennem Go.

:::note
Kun ét medlem pr. workspace kan abonnere på OpenCode Go.
:::

Den nuværende liste over modeller inkluderer:

- **GLM-5**
- **Kimi K2.5**
- **MiniMax M2.5**

Listen over modeller kan ændre sig, efterhånden som vi tester og tilføjer nye.

---

## Forbrugsgrænser

OpenCode Go inkluderer følgende grænser:

- **5 timers grænse** — $12 forbrug
- **Ugentlig grænse** — $30 forbrug
- **Månedlig grænse** — $60 forbrug

Grænser er defineret i dollarværdi. Det betyder, at dit faktiske antal forespørgsler afhænger af den model, du bruger. Billigere modeller som MiniMax M2.5 tillader flere forespørgsler, mens dyrere modeller som GLM-5 tillader færre.

Tabellen nedenfor giver et estimeret antal forespørgsler baseret på typiske Go-brugsmønstre:

|                           | GLM-5 | Kimi K2.5 | MiniMax M2.5 |
| ------------------------- | ----- | --------- | ------------ |
| forespørgsler pr. 5 timer | 1.150 | 1.850     | 30.000       |
| forespørgsler pr. uge     | 2.880 | 4.630     | 75.000       |
| forespørgsler pr. måned   | 5.750 | 9.250     | 150.000      |

Estimater er baseret på observerede gennemsnitlige forespørgselsmønstre:

- GLM-5 — 700 input, 52.000 cached, 150 output tokens pr. forespørgsel
- Kimi K2.5 — 870 input, 55.000 cached, 200 output tokens pr. forespørgsel
- MiniMax M2.5 — 300 input, 55.000 cached, 125 output tokens pr. forespørgsel

Du kan spore dit nuværende forbrug i **<a href={console}>konsollen</a>**.

:::tip
Hvis du når forbrugsgrænsen, kan du fortsætte med at bruge de gratis modeller.
:::

Forbrugsgrænser kan ændre sig, efterhånden som vi lærer fra tidlig brug og feedback.

---

### Priser

OpenCode Go er en **$10/måned** abonnementsplan. Nedenfor er priserne **pr. 1M tokens**.

| Model        | Input | Output | Cached Læsning |
| ------------ | ----- | ------ | -------------- |
| GLM-5        | $1,00 | $3,20  | $0,20          |
| Kimi K2.5    | $0,60 | $3,00  | $0,10          |
| MiniMax M2.5 | $0,30 | $1,20  | $0,03          |

---

### Forbrug ud over grænser

Hvis du også har kreditter på din Zen-saldo, kan du aktivere **Brug saldo**-indstillingen i konsollen. Når den er aktiveret, vil Go falde tilbage på din Zen-saldo, efter du har nået dine forbrugsgrænser, i stedet for at blokere forespørgsler.

---

## Endepunkter

Du kan også få adgang til Go-modeller gennem følgende API-endepunkter.

| Model        | Model ID     | Endpoint                                         | AI SDK Pakke                |
| ------------ | ------------ | ------------------------------------------------ | --------------------------- |
| GLM-5        | glm-5        | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| Kimi K2.5    | kimi-k2.5    | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| MiniMax M2.5 | minimax-m2.5 | `https://opencode.ai/zen/go/v1/messages`         | `@ai-sdk/anthropic`         |

[Model-id'et](/docs/config/#models) i din OpenCode-konfiguration bruger formatet `opencode-go/<model-id>`. For eksempel, for Kimi K2.5, ville du bruge `opencode-go/kimi-k2.5` i din konfiguration.

---

## Privatliv

Planen er primært designet til internationale brugere, med modeller hostet i USA, EU og Singapore for stabil global adgang.

<a href={email}>Kontakt os</a> hvis du har spørgsmål.

---

## Mål

Vi skabte OpenCode Go for at:

1. Gøre AI-kodning **tilgængelig** for flere mennesker med et billigt abonnement.
2. Tilbyde **pålidelig** adgang til de bedste åbne kodningsmodeller.
3. Udvælge modeller, der er **testet og benchmarked** til brug med kodningsagenter.
4. Have **ingen lock-in** ved at tillade dig også at bruge enhver anden udbyder med OpenCode.
@@ -1,145 +0,0 @@
---
title: Go
description: Kostengünstiges Abonnement für Open-Coding-Modelle.
---

import config from "../../../../config.mjs"
export const console = config.console
export const email = `mailto:${config.email}`

OpenCode Go ist ein kostengünstiges Abonnement für **10 $/Monat**, das dir zuverlässigen Zugriff auf beliebte Open-Coding-Modelle bietet.

:::note
OpenCode Go befindet sich derzeit in der Beta-Phase.
:::

Go funktioniert wie jeder andere Anbieter in OpenCode. Du abonnierst OpenCode Go und erhältst deinen API-Schlüssel. Es ist **völlig optional** und du musst es nicht nutzen, um OpenCode zu verwenden.

Es wurde primär für internationale Nutzer entwickelt, mit Modellen, die in den USA, der EU und Singapur gehostet werden, um einen stabilen weltweiten Zugriff zu gewährleisten.

---

## Hintergrund

Offene Modelle sind wirklich gut geworden. Sie erreichen bei Coding-Aufgaben mittlerweile eine Leistung, die der von proprietären Modellen nahekommt. Und da viele Anbieter sie wettbewerbsfähig bereitstellen können, sind sie in der Regel deutlich günstiger.

Es kann jedoch schwierig sein, einen zuverlässigen Zugang mit niedriger Latenz zu erhalten. Die Anbieter variieren in Qualität und Verfügbarkeit.

:::tip
Wir haben eine ausgewählte Gruppe von Modellen und Anbietern getestet, die gut mit OpenCode funktionieren.
:::

Um dies zu lösen, haben wir einige Dinge getan:

1. Wir haben eine ausgewählte Gruppe offener Modelle getestet und mit deren Teams darüber gesprochen, wie man sie am besten betreibt.
2. Anschließend haben wir mit einigen Anbietern zusammengearbeitet, um sicherzustellen, dass diese korrekt bereitgestellt werden.
3. Schließlich haben wir die Kombination aus Modell und Anbieter einem Benchmark unterzogen und eine Liste erstellt, die wir guten Gewissens empfehlen können.

OpenCode Go gibt dir Zugriff auf diese Modelle für **10 $/Monat**.

---

## Wie es funktioniert

OpenCode Go funktioniert wie jeder andere Anbieter in OpenCode.

1. Du meldest dich bei **<a href={console}>OpenCode Zen</a>** an, abonnierst Go und kopierst deinen API-Schlüssel.
2. Du führst den Befehl `/connect` in der TUI aus, wählst `OpenCode Go` und fügst deinen API-Schlüssel ein.
3. Führe `/models` in der TUI aus, um die Liste der über Go verfügbaren Modelle zu sehen.

:::note
Nur ein Mitglied pro Workspace kann OpenCode Go abonnieren.
:::

Die aktuelle Liste der Modelle umfasst:

- **GLM-5**
- **Kimi K2.5**
- **MiniMax M2.5**

Die Liste der Modelle kann sich ändern, wenn wir neue testen und hinzufügen.

---

## Nutzungslimits

OpenCode Go beinhaltet die folgenden Limits:

- **5-Stunden-Limit** — 12 $ Nutzung
- **Wöchentliches Limit** — 30 $ Nutzung
- **Monatliches Limit** — 60 $ Nutzung

Die Limits sind in Dollarwerten definiert. Das bedeutet, dass deine tatsächliche Anzahl an Anfragen von dem verwendeten Modell abhängt. Günstigere Modelle wie MiniMax M2.5 ermöglichen mehr Anfragen, während kostenintensivere Modelle wie GLM-5 weniger zulassen.

Die untenstehende Tabelle bietet eine geschätzte Anzahl an Anfragen basierend auf typischen Go-Nutzungsmustern:

|                     | GLM-5 | Kimi K2.5 | MiniMax M2.5 |
| ------------------- | ----- | --------- | ------------ |
| Anfragen pro 5 Std. | 1.150 | 1.850     | 30.000       |
| Anfragen pro Woche  | 2.880 | 4.630     | 75.000       |
| Anfragen pro Monat  | 5.750 | 9.250     | 150.000      |

Die Schätzungen basieren auf beobachteten durchschnittlichen Nutzungsmustern:

- GLM-5 — 700 Input-, 52.000 Cached-, 150 Output-Token pro Anfrage
- Kimi K2.5 — 870 Input-, 55.000 Cached-, 200 Output-Token pro Anfrage
- MiniMax M2.5 — 300 Input-, 55.000 Cached-, 125 Output-Token pro Anfrage

Du kannst deine aktuelle Nutzung in der **<a href={console}>Konsole</a>** verfolgen.

:::tip
Wenn du das Nutzungslimit erreichst, kannst du weiterhin die kostenlosen Modelle verwenden.
:::

Nutzungslimits können sich ändern, da wir aus der frühen Nutzung und dem Feedback lernen.

---

### Preise

OpenCode Go ist ein Abonnementplan für **10 $/Monat**. Unten stehen die Preise **pro 1 Mio. Token**.

| Modell       | Input  | Output | Cached Read |
| ------------ | ------ | ------ | ----------- |
| GLM-5        | 1,00 $ | 3,20 $ | 0,20 $      |
| Kimi K2.5    | 0,60 $ | 3,00 $ | 0,10 $      |
| MiniMax M2.5 | 0,30 $ | 1,20 $ | 0,03 $      |

---

### Nutzung über Limits hinaus

Wenn du auch Guthaben auf deinem Zen-Konto hast, kannst du die Option **Use balance** in der Konsole aktivieren. Wenn aktiviert, greift Go auf dein Zen-Guthaben zurück, nachdem du deine Nutzungslimits erreicht hast, anstatt Anfragen zu blockieren.

---

## Endpunkte

Du kannst auch über die folgenden API-Endpunkte auf Go-Modelle zugreifen.

| Modell       | Modell-ID    | Endpunkt                                         | AI SDK Paket                |
| ------------ | ------------ | ------------------------------------------------ | --------------------------- |
| GLM-5        | glm-5        | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| Kimi K2.5    | kimi-k2.5    | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| MiniMax M2.5 | minimax-m2.5 | `https://opencode.ai/zen/go/v1/messages`         | `@ai-sdk/anthropic`         |

Die [Modell-ID](/docs/config/#models) in deiner OpenCode-Konfiguration verwendet das Format `opencode-go/<model-id>`. Zum Beispiel würdest du für Kimi K2.5 `opencode-go/kimi-k2.5` in deiner Konfiguration verwenden.

---

## Datenschutz

Der Plan wurde primär für internationale Nutzer entwickelt, mit Modellen, die in den USA, der EU und Singapur gehostet werden, um einen stabilen weltweiten Zugriff zu gewährleisten.

<a href={email}>Kontaktiere uns</a>, wenn du Fragen hast.

---

## Ziele

Wir haben OpenCode Go erstellt, um:

1. AI-Coding für mehr Menschen durch ein kostengünstiges Abonnement **zugänglich** zu machen.
2. **Zuverlässigen** Zugriff auf die besten Open-Coding-Modelle zu bieten.
3. Modelle zu kuratieren, die für den Einsatz von Coding-Agents **getestet und gebenchmarkt** sind.
4. **Keinen Lock-in** zu haben, indem wir dir ermöglichen, jeden anderen Anbieter ebenfalls mit OpenCode zu nutzen.
@@ -1,145 +0,0 @@
---
title: Go
description: Suscripción de bajo coste para modelos de código abiertos.
---

import config from "../../../../config.mjs"
export const console = config.console
export const email = `mailto:${config.email}`

OpenCode Go es una suscripción de bajo coste de **10 $/mes** que te ofrece acceso fiable a modelos populares de código abierto.

:::note
OpenCode Go está actualmente en beta.
:::

Go funciona como cualquier otro proveedor en OpenCode. Te suscribes a OpenCode Go y obtienes tu clave API. Es **completamente opcional** y no necesitas usarlo para utilizar OpenCode.

Está diseñado principalmente para usuarios internacionales, con modelos alojados en EE. UU., la UE y Singapur para un acceso global estable.

---

## Contexto

Los modelos abiertos han mejorado mucho. Ahora alcanzan un rendimiento cercano al de los modelos propietarios para tareas de programación. Y como muchos proveedores pueden servirlos de forma competitiva, suelen ser mucho más baratos.

Sin embargo, conseguir un acceso fiable y de baja latencia a ellos puede ser difícil. Los proveedores varían en calidad y disponibilidad.

:::tip
Hemos probado un grupo selecto de modelos y proveedores que funcionan bien con OpenCode.
:::

Para solucionar esto, hicimos un par de cosas:

1. Probamos un grupo selecto de modelos abiertos y hablamos con sus equipos sobre la mejor manera de ejecutarlos.
2. Luego trabajamos con algunos proveedores para asegurarnos de que se sirvieran correctamente.
3. Finalmente, evaluamos la combinación de modelo/proveedor y elaboramos una lista que nos sentimos cómodos recomendando.

OpenCode Go te da acceso a estos modelos por **10 $/mes**.

---

## Cómo funciona

OpenCode Go funciona como cualquier otro proveedor en OpenCode.

1. Inicias sesión en **<a href={console}>OpenCode Zen</a>**, te suscribes a Go y copias tu clave API.
2. Ejecutas el comando `/connect` en la TUI, seleccionas `OpenCode Go` y pegas tu clave API.
3. Ejecuta `/models` en la TUI para ver la lista de modelos disponibles a través de Go.

:::note
Solo un miembro por espacio de trabajo puede suscribirse a OpenCode Go.
:::

La lista actual de modelos incluye:

- **GLM-5**
- **Kimi K2.5**
- **MiniMax M2.5**

La lista de modelos puede cambiar a medida que probamos y añadimos nuevos.

---

## Límites de uso

OpenCode Go incluye los siguientes límites:

- **Límite de 5 horas** — 12 $ de uso
- **Límite semanal** — 30 $ de uso
- **Límite mensual** — 60 $ de uso

Los límites se definen en valor monetario. Esto significa que tu recuento real de solicitudes depende del modelo que uses. Los modelos más baratos como MiniMax M2.5 permiten más solicitudes, mientras que los modelos de mayor coste como GLM-5 permiten menos.

La siguiente tabla proporciona una estimación del recuento de solicitudes basada en patrones de uso típicos de Go:

|                         | GLM-5 | Kimi K2.5 | MiniMax M2.5 |
| ----------------------- | ----- | --------- | ------------ |
| solicitudes por 5 horas | 1.150 | 1.850     | 30.000       |
| solicitudes por semana  | 2.880 | 4.630     | 75.000       |
| solicitudes por mes     | 5.750 | 9.250     | 150.000      |

Las estimaciones se basan en patrones de solicitud promedio observados:

- GLM-5 — 700 de entrada, 52.000 en caché, 150 tokens de salida por solicitud
- Kimi K2.5 — 870 de entrada, 55.000 en caché, 200 tokens de salida por solicitud
- MiniMax M2.5 — 300 de entrada, 55.000 en caché, 125 tokens de salida por solicitud

Puedes realizar un seguimiento de tu uso actual en la **<a href={console}>consola</a>**.

:::tip
Si alcanzas el límite de uso, puedes seguir usando los modelos gratuitos.
:::

Los límites de uso pueden cambiar a medida que aprendamos del uso temprano y los comentarios.

---

### Precios

OpenCode Go es un plan de suscripción de **10 $/mes**. A continuación se muestran los precios **por 1M de tokens**.

| Modelo       | Entrada | Salida | Lectura en caché |
| ------------ | ------- | ------ | ---------------- |
| GLM-5        | 1,00 $  | 3,20 $ | 0,20 $           |
| Kimi K2.5    | 0,60 $  | 3,00 $ | 0,10 $           |
| MiniMax M2.5 | 0,30 $  | 1,20 $ | 0,03 $           |

---

### Uso más allá de los límites

Si también tienes créditos en tu saldo de Zen, puedes habilitar la opción **Usar saldo** en la consola. Cuando está habilitada, Go recurrirá a tu saldo de Zen después de que hayas alcanzado tus límites de uso en lugar de bloquear las solicitudes.

---

## Endpoints

También puedes acceder a los modelos de Go a través de los siguientes endpoints de API.

| Modelo       | ID del modelo | Endpoint                                         | Paquete AI SDK              |
| ------------ | ------------- | ------------------------------------------------ | --------------------------- |
| GLM-5        | glm-5         | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| Kimi K2.5    | kimi-k2.5     | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| MiniMax M2.5 | minimax-m2.5  | `https://opencode.ai/zen/go/v1/messages`         | `@ai-sdk/anthropic`         |

El [model id](/docs/config/#models) en tu configuración de OpenCode usa el formato `opencode-go/<model-id>`. Por ejemplo, para Kimi K2.5, usarías `opencode-go/kimi-k2.5` en tu configuración.

---

## Privacidad

El plan está diseñado principalmente para usuarios internacionales, con modelos alojados en EE. UU., la UE y Singapur para un acceso global estable.

<a href={email}>Contáctanos</a> si tienes alguna pregunta.

---

## Objetivos

Creamos OpenCode Go para:

1. Hacer que la programación con IA sea **accesible** a más personas con una suscripción de bajo coste.
2. Proporcionar acceso **fiable** a los mejores modelos de código abierto.
3. Seleccionar modelos que han sido **probados y evaluados** para su uso con agentes de programación.
4. **Sin ataduras**, permitiéndote usar cualquier otro proveedor con OpenCode también.
@@ -1,145 +0,0 @@
---
title: Go
description: Abonnement à bas coût pour les modèles de code ouverts.
---

import config from "../../../../config.mjs"
export const console = config.console
export const email = `mailto:${config.email}`

OpenCode Go est un abonnement à bas coût de **10 $/mois** qui vous donne un accès fiable aux modèles de code ouverts populaires.

:::note
OpenCode Go est actuellement en bêta.
:::

Go fonctionne comme tout autre fournisseur dans OpenCode. Vous vous abonnez à OpenCode Go et obtenez votre clé API. C'est **complètement optionnel** et vous n'avez pas besoin de l'utiliser pour utiliser OpenCode.

Il est conçu principalement pour les utilisateurs internationaux, avec des modèles hébergés aux États-Unis, en UE et à Singapour pour un accès mondial stable.

---

## Contexte

Les modèles ouverts sont devenus vraiment bons. Ils atteignent maintenant des performances proches des modèles propriétaires pour les tâches de codage. Et parce que de nombreux fournisseurs peuvent les servir de manière compétitive, ils sont généralement beaucoup moins chers.

Cependant, obtenir un accès fiable et à faible latence à ces modèles peut être difficile. Les fournisseurs varient en qualité et en disponibilité.

:::tip
Nous avons testé un groupe sélectionné de modèles et de fournisseurs qui fonctionnent bien avec OpenCode.
:::

Pour remédier à cela, nous avons fait plusieurs choses :

1. Nous avons testé un groupe sélectionné de modèles ouverts et discuté avec leurs équipes de la meilleure façon de les exécuter.
2. Nous avons ensuite travaillé avec quelques fournisseurs pour nous assurer qu'ils étaient servis correctement.
3. Enfin, nous avons évalué la combinaison modèle/fournisseur et établi une liste que nous nous sentons à l'aise de recommander.

OpenCode Go vous donne accès à ces modèles pour **10 $/mois**.

---

## Comment ça marche

OpenCode Go fonctionne comme tout autre fournisseur dans OpenCode.

1. Vous vous connectez à **<a href={console}>OpenCode Zen</a>**, vous vous abonnez à Go et copiez votre clé API.
2. Vous exécutez la commande `/connect` dans la TUI, sélectionnez `OpenCode Go`, et collez votre clé API.
3. Exécutez `/models` dans la TUI pour voir la liste des modèles disponibles via Go.

:::note
Un seul membre par espace de travail peut s'abonner à OpenCode Go.
:::

La liste actuelle des modèles inclut :

- **GLM-5**
- **Kimi K2.5**
- **MiniMax M2.5**

La liste des modèles peut changer à mesure que nous testons et ajoutons de nouveaux modèles.

---

## Limites d'utilisation

OpenCode Go inclut les limites suivantes :

- **Limite de 5 heures** — 12 $ d'utilisation
- **Limite hebdomadaire** — 30 $ d'utilisation
- **Limite mensuelle** — 60 $ d'utilisation

Les limites sont définies en valeur monétaire. Cela signifie que votre nombre réel de requêtes dépend du modèle que vous utilisez. Les modèles moins chers comme MiniMax M2.5 permettent plus de requêtes, tandis que les modèles plus coûteux comme GLM-5 en permettent moins.

Le tableau ci-dessous fournit une estimation du nombre de requêtes basée sur des modèles d'utilisation typiques de Go :

|                       | GLM-5 | Kimi K2.5 | MiniMax M2.5 |
| --------------------- | ----- | --------- | ------------ |
| requêtes par 5 heures | 1 150 | 1 850     | 30 000       |
| requêtes par semaine  | 2 880 | 4 630     | 75 000       |
| requêtes par mois     | 5 750 | 9 250     | 150 000      |

Les estimations sont basées sur des modèles de requêtes moyens observés :

- GLM-5 — 700 tokens d'entrée, 52 000 en cache, 150 de sortie par requête
- Kimi K2.5 — 870 tokens d'entrée, 55 000 en cache, 200 de sortie par requête
- MiniMax M2.5 — 300 tokens d'entrée, 55 000 en cache, 125 de sortie par requête

Vous pouvez suivre votre utilisation actuelle dans la **<a href={console}>console</a>**.

:::tip
Si vous atteignez la limite d'utilisation, vous pouvez continuer à utiliser les modèles gratuits.
:::

Les limites d'utilisation peuvent changer à mesure que nous apprenons des premiers usages et retours.

---

### Tarification

OpenCode Go est un plan d'abonnement à **10 $/mois**. Ci-dessous se trouvent les prix **par 1M de tokens**.

| Modèle       | Entrée | Sortie | Lecture en cache |
| ------------ | ------ | ------ | ---------------- |
| GLM-5        | 1,00 $ | 3,20 $ | 0,20 $           |
| Kimi K2.5    | 0,60 $ | 3,00 $ | 0,10 $           |
| MiniMax M2.5 | 0,30 $ | 1,20 $ | 0,03 $           |

---

### Utilisation au-delà des limites

Si vous avez aussi des crédits sur votre solde Zen, vous pouvez activer l'option **Use balance** dans la console. Lorsqu'elle est activée, Go basculera sur votre solde Zen après que vous ayez atteint vos limites d'utilisation au lieu de bloquer les requêtes.

---

## Endpoints

Vous pouvez également accéder aux modèles Go via les endpoints API suivants.

| Modèle       | ID du modèle | Endpoint                                         | Package AI SDK              |
| ------------ | ------------ | ------------------------------------------------ | --------------------------- |
| GLM-5        | glm-5        | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| Kimi K2.5    | kimi-k2.5    | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| MiniMax M2.5 | minimax-m2.5 | `https://opencode.ai/zen/go/v1/messages`         | `@ai-sdk/anthropic`         |

L'[ID du modèle](/docs/config/#models) dans votre configuration OpenCode utilise le format `opencode-go/<model-id>`. Par exemple, pour Kimi K2.5, vous utiliseriez `opencode-go/kimi-k2.5` dans votre configuration.

---

## Confidentialité

Le plan est conçu principalement pour les utilisateurs internationaux, avec des modèles hébergés aux États-Unis, en UE et à Singapour pour un accès mondial stable.

<a href={email}>Contactez-nous</a> si vous avez des questions.

---

## Objectifs

Nous avons créé OpenCode Go pour :

1. Rendre le codage par IA **accessible** à plus de personnes avec un abonnement à bas coût.
2. Fournir un accès **fiable** aux meilleurs modèles de code ouverts.
3. Sélectionner des modèles qui sont **testés et évalués** pour l'utilisation d'agents de codage.
4. N'avoir **aucun verrouillage** en vous permettant d'utiliser tout autre fournisseur avec OpenCode également.
@@ -63,8 +63,8 @@ Only one member per workspace can subscribe to OpenCode Go.
 
 The current list of models includes:
 
-- **GLM-5**
 - **Kimi K2.5**
+- **GLM-5**
 - **MiniMax M2.5**
 
 The list of models may change as we test and add new ones.
@@ -83,17 +83,17 @@ Limits are defined in dollar value. This means your actual request count depends
|
||||
|
||||
The table below provides an estimated request count based on typical Go usage patterns:
|
||||
|
||||
| | GLM-5 | Kimi K2.5 | MiniMax M2.5 |
|
||||
| ------------------- | ----- | --------- | ------------ |
|
||||
| requests per 5 hour | 1,150 | 1,850 | 20,000 |
|
||||
| requests per week | 2,880 | 4,630 | 50,000 |
|
||||
| requests per month | 5,750 | 9,250 | 100,000 |
|
||||
| Usage limit | Dollar | MiniMax M2.5 | Kimi K2.5 | GLM-5 |
|
||||
| ------------- | ------ | ------------ | --------- | ----- |
|
||||
| 5 hour limit | $12 | 30,000 | 1,850 | 1,150 |
|
||||
| Weekly limit | $30 | 75,000 | 4,630 | 2,880 |
|
||||
| Monthly limit | $60 | 150,000 | 9,250 | 5,750 |
|
||||
|
||||
Estimates are based on observed average request patterns:
|
||||
|
||||
- GLM-5 — 700 input, 52,000 cached, 150 output tokens per request
|
||||
- Kimi K2.5 — 870 input, 55,000 cached, 200 output tokens per request
|
||||
- MiniMax M2.5 — 300 input, 55,000 cached, 125 output tokens per request
|
||||
- Kimi K2.5 — 870 input, 55,000 cached, 200 output tokens per request
|
||||
- GLM-5 — 700 input, 52,000 cached, 150 output tokens per request
|
||||
|
||||
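As a rough sketch of how the dollar limits translate into request counts, the per-request cost can be computed from the per-1M-token prices and the token patterns above. This is a hypothetical back-of-the-envelope calculation; the published request counts may include rounding or Go-specific adjustments beyond this simple formula.

```typescript
// Rough sketch: per-request cost from the published per-1M-token prices
// and the observed average token pattern per request. The figures are
// the ones stated in the tables above; actual billing may differ.
const PRICES: Record<string, { input: number; output: number; cached: number }> = {
  "glm-5": { input: 1.0, output: 3.2, cached: 0.2 },
  "kimi-k2.5": { input: 0.6, output: 3.0, cached: 0.1 },
  "minimax-m2.5": { input: 0.3, output: 1.2, cached: 0.03 },
};

const PATTERN: Record<string, { input: number; cached: number; output: number }> = {
  "glm-5": { input: 700, cached: 52_000, output: 150 },
  "kimi-k2.5": { input: 870, cached: 55_000, output: 200 },
  "minimax-m2.5": { input: 300, cached: 55_000, output: 125 },
};

function costPerRequest(model: string): number {
  const p = PRICES[model];
  const t = PATTERN[model];
  // Prices are per 1M tokens, so divide the weighted sum by 1,000,000.
  return (t.input * p.input + t.cached * p.cached + t.output * p.output) / 1_000_000;
}

for (const model of Object.keys(PRICES)) {
  console.log(`${model}: $${costPerRequest(model).toFixed(5)} per request`);
}
```

For example, a Kimi K2.5 request under this pattern costs roughly $0.0066, so a $12 five-hour limit allows on the order of 1,800 such requests.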
You can track your current usage in the **<a href={console}>console</a>**.

Usage limits may change as we learn from early usage and feedback.

---

### Pricing

OpenCode Go is a **$10/month** subscription plan. Below are the prices **per 1M tokens**.

| Model        | Input | Output | Cached Read |
| ------------ | ----- | ------ | ----------- |
| GLM-5        | $1.00 | $3.20  | $0.20       |
| Kimi K2.5    | $0.60 | $3.00  | $0.10       |
| MiniMax M2.5 | $0.30 | $1.20  | $0.03       |

---

### Usage beyond limits

If you also have credits on your Zen balance, you can enable the **Use balance** option in the console. When enabled, Go will fall back to your Zen balance after you hit your usage limits instead of blocking requests.
---
title: Go
description: Abbonamento a basso costo per modelli di coding open source.
---

import config from "../../../../config.mjs"

export const console = config.console
export const email = `mailto:${config.email}`

OpenCode Go è un abbonamento a basso costo di **$10/mese** che ti offre un accesso affidabile ai modelli di coding open source più popolari.

:::note
OpenCode Go è attualmente in beta.
:::

Go funziona come qualsiasi altro provider in OpenCode. Ti abboni a OpenCode Go e ottieni la tua chiave API. È **completamente opzionale** e non è necessario utilizzarlo per usare OpenCode.

È progettato principalmente per utenti internazionali, con modelli ospitati negli Stati Uniti, UE e Singapore per un accesso globale stabile.

---

## Contesto

I modelli open source sono diventati davvero validi. Ora raggiungono prestazioni vicine ai modelli proprietari per le attività di coding. E poiché molti provider possono servirli in modo competitivo, sono solitamente molto più economici.

Tuttavia, ottenere un accesso affidabile e a bassa latenza può essere difficile. I provider variano in termini di qualità e disponibilità.

:::tip
Abbiamo testato un gruppo selezionato di modelli e provider che funzionano bene con OpenCode.
:::

Per risolvere questo problema, abbiamo fatto un paio di cose:

1. Abbiamo testato un gruppo selezionato di modelli open source e parlato con i loro team su come eseguirli al meglio.
2. Abbiamo poi lavorato con alcuni provider per assicurarci che questi venissero serviti correttamente.
3. Infine, abbiamo effettuato benchmark sulla combinazione modello/provider e abbiamo stilato un elenco che ci sentiamo di raccomandare.

OpenCode Go ti dà accesso a questi modelli per **$10/mese**.

---

## Come funziona

OpenCode Go funziona come qualsiasi altro provider in OpenCode.

1. Accedi a **<a href={console}>OpenCode Zen</a>**, abbonati a Go e copia la tua chiave API.
2. Esegui il comando `/connect` nella TUI, seleziona `OpenCode Go` e incolla la tua chiave API.
3. Esegui `/models` nella TUI per vedere l'elenco dei modelli disponibili tramite Go.

:::note
Solo un membro per workspace può abbonarsi a OpenCode Go.
:::

L'elenco attuale dei modelli include:

- **GLM-5**
- **Kimi K2.5**
- **MiniMax M2.5**

L'elenco dei modelli potrebbe cambiare man mano che ne testiamo e aggiungiamo di nuovi.

---

## Limiti di utilizzo

OpenCode Go include i seguenti limiti:

- **Limite di 5 ore** — $12 di utilizzo
- **Limite settimanale** — $30 di utilizzo
- **Limite mensile** — $60 di utilizzo

I limiti sono definiti in valore monetario. Ciò significa che il conteggio effettivo delle richieste dipende dal modello utilizzato. Modelli più economici come MiniMax M2.5 consentono più richieste, mentre modelli più costosi come GLM-5 ne consentono meno.

La tabella seguente fornisce una stima del conteggio delle richieste basata su tipici modelli di utilizzo di Go:

|                       | GLM-5 | Kimi K2.5 | MiniMax M2.5 |
| --------------------- | ----- | --------- | ------------ |
| richieste ogni 5 ore  | 1.150 | 1.850     | 30.000       |
| richieste a settimana | 2.880 | 4.630     | 75.000       |
| richieste al mese     | 5.750 | 9.250     | 150.000      |

Le stime si basano sui modelli di richiesta medi osservati:

- GLM-5 — 700 input, 52.000 cached, 150 output tokens per richiesta
- Kimi K2.5 — 870 input, 55.000 cached, 200 output tokens per richiesta
- MiniMax M2.5 — 300 input, 55.000 cached, 125 output tokens per richiesta

Puoi monitorare il tuo utilizzo attuale nella **<a href={console}>console</a>**.

:::tip
Se raggiungi il limite di utilizzo, puoi continuare a utilizzare i modelli gratuiti.
:::

I limiti di utilizzo potrebbero cambiare man mano che impariamo dall'utilizzo iniziale e dai feedback.

---

### Prezzi

OpenCode Go è un piano di abbonamento da **$10/mese**. Di seguito sono riportati i prezzi **per 1M di token**.

| Modello      | Input | Output | Lettura Cached |
| ------------ | ----- | ------ | -------------- |
| GLM-5        | $1.00 | $3.20  | $0.20          |
| Kimi K2.5    | $0.60 | $3.00  | $0.10          |
| MiniMax M2.5 | $0.30 | $1.20  | $0.03          |

---

### Utilizzo oltre i limiti

Se hai anche crediti sul tuo saldo Zen, puoi abilitare l'opzione **Use balance** nella console. Quando abilitata, Go utilizzerà il tuo saldo Zen dopo aver raggiunto i limiti di utilizzo invece di bloccare le richieste.

---

## Endpoint

Puoi anche accedere ai modelli Go tramite i seguenti endpoint API.

| Modello      | ID Modello   | Endpoint                                         | Pacchetto AI SDK            |
| ------------ | ------------ | ------------------------------------------------ | --------------------------- |
| GLM-5        | glm-5        | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| Kimi K2.5    | kimi-k2.5    | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| MiniMax M2.5 | minimax-m2.5 | `https://opencode.ai/zen/go/v1/messages`         | `@ai-sdk/anthropic`         |

Il [model id](/docs/config/#models) nella tua configurazione OpenCode utilizza il formato `opencode-go/<model-id>`. Ad esempio, per Kimi K2.5, useresti `opencode-go/kimi-k2.5` nella tua configurazione.
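Ad esempio, una piccola funzione (ipotetica, solo a scopo illustrativo) che mappa un model ID sull'endpoint corrispondente della tabella sopra:

```typescript
// Schizzo minimale: restituisce l'endpoint Go per un dato model ID.
// I valori sono quelli della tabella sopra.
type GoModel = "glm-5" | "kimi-k2.5" | "minimax-m2.5";

function goEndpoint(model: GoModel): string {
  // MiniMax M2.5 usa l'API Messages (formato Anthropic);
  // gli altri modelli usano l'API Chat Completions (formato OpenAI).
  return model === "minimax-m2.5"
    ? "https://opencode.ai/zen/go/v1/messages"
    : "https://opencode.ai/zen/go/v1/chat/completions";
}

console.log(goEndpoint("kimi-k2.5"));
// → https://opencode.ai/zen/go/v1/chat/completions
```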
---

## Privacy

Il piano è progettato principalmente per utenti internazionali, con modelli ospitati negli Stati Uniti, UE e Singapore per un accesso globale stabile.

<a href={email}>Contattaci</a> se hai domande.

---

## Obiettivi

Abbiamo creato OpenCode Go per:

1. Rendere l'AI per il coding **accessibile** a più persone con un abbonamento a basso costo.
2. Fornire un accesso **affidabile** ai migliori modelli di coding open source.
3. Curare modelli che sono **testati e benchmarked** per l'uso con agenti di coding.
4. Non avere **alcun lock-in** permettendoti di utilizzare qualsiasi altro provider con OpenCode.
---
title: Go
description: オープンコーディングモデル向けの低価格サブスクリプション。
---

import config from "../../../../config.mjs"

export const console = config.console
export const email = `mailto:${config.email}`

OpenCode Goは、人気のあるオープンコーディングモデルへの信頼性の高いアクセスを提供する、低価格な**月額10ドル**のサブスクリプションです。

:::note
OpenCode Goは現在ベータ版です。
:::

GoはOpenCodeの他のプロバイダーと同様に機能します。OpenCode Goに登録してAPIキーを取得します。これは**完全にオプション**であり、OpenCodeを使用するために必須ではありません。

主に海外ユーザー向けに設計されており、安定したグローバルアクセスのためにモデルは米国、EU、シンガポールでホストされています。

---

## 背景

オープンモデルは非常に高性能になりました。現在では、コーディングタスクにおいてプロプライエタリモデルに近いパフォーマンスを発揮します。また、多くのプロバイダーが競争力のある価格で提供できるため、通常はずっと安価です。

しかし、信頼性が高く低遅延なアクセスを得ることは難しい場合があります。プロバイダーによって品質や可用性が異なるためです。

:::tip
OpenCodeとうまく連携する厳選されたモデルとプロバイダーをテストしました。
:::

これを解決するために、私たちはいくつかのことを行いました。

1. 厳選されたオープンモデルをテストし、それらを最適に実行する方法についてチームと話し合いました。
2. 次に、いくつかのプロバイダーと協力して、これらが正しく提供されていることを確認しました。
3. 最後に、モデルとプロバイダーの組み合わせをベンチマークし、自信を持って推奨できるリストを作成しました。

OpenCode Goでは、これらのモデルに**月額10ドル**でアクセスできます。

---

## 仕組み

OpenCode GoはOpenCodeの他のプロバイダーと同様に機能します。

1. **<a href={console}>OpenCode Zen</a>**にサインインし、Goに登録してAPIキーをコピーします。
2. TUIで`/connect`コマンドを実行し、`OpenCode Go`を選択してAPIキーを貼り付けます。
3. TUIで`/models`を実行して、Go経由で利用可能なモデルのリストを確認します。

:::note
ワークスペースごとに1人のメンバーのみがOpenCode Goに登録できます。
:::

現在のモデルリストには以下が含まれます:

- **GLM-5**
- **Kimi K2.5**
- **MiniMax M2.5**

モデルのリストは、テストや新しいモデルの追加に伴い変更される可能性があります。

---

## 利用制限

OpenCode Goには以下の制限が含まれます:

- **5時間制限** — 12ドル分の利用
- **週間制限** — 30ドル分の利用
- **月間制限** — 60ドル分の利用

制限は金額で定義されています。つまり、実際のリクエスト数は使用するモデルによって異なります。MiniMax M2.5のような安価なモデルではより多くのリクエストが可能ですが、GLM-5のような高価なモデルでは少なくなります。

下の表は、典型的なGoの使用パターンに基づいた推定リクエスト数を示しています:

|                            | GLM-5 | Kimi K2.5 | MiniMax M2.5 |
| -------------------------- | ----- | --------- | ------------ |
| 5時間あたりのリクエスト数  | 1,150 | 1,850     | 30,000       |
| 週間リクエスト数           | 2,880 | 4,630     | 75,000       |
| 月間リクエスト数           | 5,750 | 9,250     | 150,000      |

推定値は、観測された平均的なリクエストパターンに基づいています:

- GLM-5 — 1リクエストあたり入力700、キャッシュ52,000、出力150トークン
- Kimi K2.5 — 1リクエストあたり入力870、キャッシュ55,000、出力200トークン
- MiniMax M2.5 — 1リクエストあたり入力300、キャッシュ55,000、出力125トークン

現在の使用状況は**<a href={console}>コンソール</a>**で確認できます。

:::tip
利用制限に達した場合でも、無料モデルを引き続き使用できます。
:::

利用制限は、初期の使用状況やフィードバックに基づいて変更される可能性があります。

---

### 価格

OpenCode Goは**月額10ドル**のサブスクリプションプランです。以下は**100万トークンあたり**の価格です。

| モデル       | 入力  | 出力  | キャッシュ読み込み |
| ------------ | ----- | ----- | ------------------ |
| GLM-5        | $1.00 | $3.20 | $0.20              |
| Kimi K2.5    | $0.60 | $3.00 | $0.10              |
| MiniMax M2.5 | $0.30 | $1.20 | $0.03              |

---

### 制限を超えた利用

Zen残高にクレジットがある場合、コンソールで**残高を使用 (Use balance)**オプションを有効にできます。有効にすると、利用制限に達した後、リクエストをブロックする代わりにZen残高が使用されます。

---

## エンドポイント

以下のAPIエンドポイントを通じてGoモデルにアクセスすることもできます。

| モデル       | モデルID     | エンドポイント                                   | AI SDKパッケージ            |
| ------------ | ------------ | ------------------------------------------------ | --------------------------- |
| GLM-5        | glm-5        | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| Kimi K2.5    | kimi-k2.5    | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| MiniMax M2.5 | minimax-m2.5 | `https://opencode.ai/zen/go/v1/messages`         | `@ai-sdk/anthropic`         |

OpenCode設定の[モデルID](/docs/config/#models)は、`opencode-go/<model-id>`という形式を使用します。たとえば、Kimi K2.5の場合、設定で`opencode-go/kimi-k2.5`を使用します。

---

## プライバシー

このプランは主に海外ユーザー向けに設計されており、安定したグローバルアクセスのためにモデルは米国、EU、シンガポールでホストされています。

ご質問がある場合は<a href={email}>お問い合わせください</a>。

---

## 目標

OpenCode Goを作成した目的は以下の通りです:

1. 低価格のサブスクリプションで、より多くの人々がAIコーディングに**アクセス**できるようにすること。
2. 最高のオープンコーディングモデルへの**信頼性の高い**アクセスを提供すること。
3. コーディングエージェントでの使用向けに**テストおよびベンチマーク**されたモデルを厳選すること。
4. OpenCodeで他のプロバイダーも使用できるようにすることで、**ロックインを排除**すること。
---
title: Go
description: 오픈 코딩 모델을 위한 저렴한 구독 서비스입니다.
---

import config from "../../../../config.mjs"

export const console = config.console
export const email = `mailto:${config.email}`

OpenCode Go는 인기 있는 오픈 코딩 모델에 안정적으로 액세스할 수 있는 저렴한 **월 $10** 구독 서비스입니다.

:::note
OpenCode Go는 현재 베타 버전입니다.
:::

Go는 OpenCode의 다른 제공자처럼 작동합니다. OpenCode Go를 구독하고 API 키를 받으세요. 이는 **완전히 선택 사항**이며 OpenCode를 사용하기 위해 반드시 사용할 필요는 없습니다.

주로 해외 사용자를 위해 설계되었으며, 안정적인 글로벌 액세스를 위해 미국, EU, 싱가포르에서 모델이 호스팅됩니다.

---

## 배경

오픈 모델은 정말 좋아졌습니다. 이제 코딩 작업에서 독점 모델에 가까운 성능을 발휘합니다. 그리고 많은 제공자가 경쟁적으로 서비스할 수 있기 때문에 일반적으로 훨씬 저렴합니다.

하지만 안정적이고 지연 시간이 짧은 액세스를 얻기는 어려울 수 있습니다. 제공자마다 품질과 가용성이 다릅니다.

:::tip
OpenCode와 잘 작동하는 엄선된 모델 및 제공자 그룹을 테스트했습니다.
:::

이를 해결하기 위해 몇 가지 작업을 수행했습니다.

1. 엄선된 오픈 모델 그룹을 테스트하고 해당 팀과 최적의 실행 방법에 대해 논의했습니다.
2. 그런 다음 몇몇 제공자와 협력하여 이것들이 올바르게 서비스되고 있는지 확인했습니다.
3. 마지막으로 모델/제공자 조합을 벤치마킹하여 추천할 만한 목록을 만들었습니다.

OpenCode Go를 사용하면 **월 $10**에 이러한 모델에 액세스할 수 있습니다.

---

## 작동 방식

OpenCode Go는 OpenCode의 다른 제공자처럼 작동합니다.

1. **<a href={console}>OpenCode Zen</a>**에 로그인하고 Go를 구독한 다음 API 키를 복사합니다.
2. TUI에서 `/connect` 명령을 실행하고 `OpenCode Go`를 선택한 다음 API 키를 붙여넣습니다.
3. TUI에서 `/models`를 실행하여 Go를 통해 사용할 수 있는 모델 목록을 확인합니다.

:::note
워크스페이스당 한 명의 멤버만 OpenCode Go를 구독할 수 있습니다.
:::

현재 모델 목록은 다음과 같습니다.

- **GLM-5**
- **Kimi K2.5**
- **MiniMax M2.5**

모델 목록은 테스트하고 새로운 모델을 추가함에 따라 변경될 수 있습니다.

---

## 사용 한도

OpenCode Go에는 다음과 같은 한도가 포함됩니다.

- **5시간 한도** — $12 사용량
- **주간 한도** — $30 사용량
- **월간 한도** — $60 사용량

한도는 달러 가치로 정의됩니다. 즉, 실제 요청 수는 사용하는 모델에 따라 다릅니다. MiniMax M2.5와 같은 저렴한 모델은 더 많은 요청을 허용하는 반면, GLM-5와 같은 고비용 모델은 더 적은 요청을 허용합니다.

아래 표는 일반적인 Go 사용 패턴을 기반으로 한 예상 요청 수를 제공합니다.

|                 | GLM-5 | Kimi K2.5 | MiniMax M2.5 |
| --------------- | ----- | --------- | ------------ |
| 5시간당 요청 수 | 1,150 | 1,850     | 30,000       |
| 주당 요청 수    | 2,880 | 4,630     | 75,000       |
| 월당 요청 수    | 5,750 | 9,250     | 150,000      |

추정치는 관찰된 평균 요청 패턴을 기반으로 합니다.

- GLM-5 — 요청당 입력 700, 캐시 52,000, 출력 150 토큰
- Kimi K2.5 — 요청당 입력 870, 캐시 55,000, 출력 200 토큰
- MiniMax M2.5 — 요청당 입력 300, 캐시 55,000, 출력 125 토큰

**<a href={console}>콘솔</a>**에서 현재 사용량을 추적할 수 있습니다.

:::tip
사용 한도에 도달하면 무료 모델을 계속 사용할 수 있습니다.
:::

사용 한도는 초기 사용 및 피드백을 통해 학습함에 따라 변경될 수 있습니다.

---

### 가격

OpenCode Go는 **월 $10** 구독 요금제입니다. 아래는 **100만 토큰당** 가격입니다.

| Model        | Input | Output | Cached Read |
| ------------ | ----- | ------ | ----------- |
| GLM-5        | $1.00 | $3.20  | $0.20       |
| Kimi K2.5    | $0.60 | $3.00  | $0.10       |
| MiniMax M2.5 | $0.30 | $1.20  | $0.03       |

---

### 한도 초과 사용

Zen 잔액에 크레딧이 있는 경우 콘솔에서 **잔액 사용(Use balance)** 옵션을 활성화할 수 있습니다. 활성화하면 사용 한도에 도달했을 때 요청을 차단하는 대신 Zen 잔액을 사용하게 됩니다.

---

## 엔드포인트

다음 API 엔드포인트를 통해 Go 모델에 액세스할 수도 있습니다.

| Model        | Model ID     | Endpoint                                         | AI SDK Package              |
| ------------ | ------------ | ------------------------------------------------ | --------------------------- |
| GLM-5        | glm-5        | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| Kimi K2.5    | kimi-k2.5    | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| MiniMax M2.5 | minimax-m2.5 | `https://opencode.ai/zen/go/v1/messages`         | `@ai-sdk/anthropic`         |

OpenCode 설정의 [모델 ID](/docs/config/#models)는 `opencode-go/<model-id>` 형식을 사용합니다. 예를 들어 Kimi K2.5의 경우 설정에서 `opencode-go/kimi-k2.5`를 사용합니다.

---

## 개인정보 보호

이 플랜은 주로 해외 사용자를 위해 설계되었으며, 안정적인 글로벌 액세스를 위해 미국, EU, 싱가포르에서 모델이 호스팅됩니다.

질문이 있으시면 <a href={email}>문의해 주세요</a>.

---

## 목표

우리는 다음을 위해 OpenCode Go를 만들었습니다.

1. 저렴한 구독으로 더 많은 사람들이 AI 코딩에 **접근할 수 있도록** 합니다.
2. 최고의 오픈 코딩 모델에 **안정적으로** 액세스할 수 있도록 합니다.
3. 코딩 에이전트 사용을 위해 **테스트 및 벤치마킹된** 모델을 큐레이팅합니다.
4. OpenCode와 함께 다른 제공자도 사용할 수 있도록 하여 **락인(lock-in)이 없도록** 합니다.
---
title: Go
description: Lavkostnadsabonnement for åpne kodemodeller.
---

import config from "../../../../config.mjs"

export const console = config.console
export const email = `mailto:${config.email}`

OpenCode Go er et lavkostnadsabonnement til **$10/måned** som gir deg pålitelig tilgang til populære åpne kodemodeller.

:::note
OpenCode Go er for tiden i beta.
:::

Go fungerer som enhver annen leverandør i OpenCode. Du abonnerer på OpenCode Go og får din API-nøkkel. Det er **helt valgfritt** og du trenger ikke bruke det for å bruke OpenCode.

Det er designet primært for internasjonale brukere, med modeller driftet i USA, EU og Singapore for stabil global tilgang.

---

## Bakgrunn

Åpne modeller har blitt veldig bra. De når nå ytelse nær proprietære modeller for kodeoppgaver. Og fordi mange leverandører kan servere dem konkurransedyktig, er de vanligvis mye billigere.

Imidlertid kan det være vanskelig å få pålitelig tilgang med lav ventetid. Leverandører varierer i kvalitet og tilgjengelighet.

:::tip
Vi testet en utvalgt gruppe modeller og leverandører som fungerer bra med OpenCode.
:::

For å fikse dette gjorde vi et par ting:

1. Vi testet en utvalgt gruppe åpne modeller og snakket med teamene deres om hvordan man best kjører dem.
2. Vi jobbet deretter med noen få leverandører for å sikre at disse ble servert riktig.
3. Til slutt ytelsestestet vi kombinasjonen av modell/leverandør og kom opp med en liste som vi føler oss trygge på å anbefale.

OpenCode Go gir deg tilgang til disse modellene for **$10/måned**.

---

## Hvordan det fungerer

OpenCode Go fungerer som enhver annen leverandør i OpenCode.

1. Du logger deg inn på **<a href={console}>OpenCode Zen</a>**, abonnerer på Go, og kopierer API-nøkkelen din.
2. Du kjører kommandoen `/connect` i TUI-en, velger `OpenCode Go`, og limer inn API-nøkkelen din.
3. Kjør `/models` i TUI-en for å se listen over modeller tilgjengelig gjennom Go.

:::note
Bare ett medlem per arbeidsområde kan abonnere på OpenCode Go.
:::

Den nåværende listen over modeller inkluderer:

- **GLM-5**
- **Kimi K2.5**
- **MiniMax M2.5**

Listen over modeller kan endres etter hvert som vi tester og legger til nye.

---

## Bruksgrenser

OpenCode Go inkluderer følgende grenser:

- **5 timers grense** — $12 i bruk
- **Ukentlig grense** — $30 i bruk
- **Månedlig grense** — $60 i bruk

Grensene er definert i dollarverdi. Dette betyr at ditt faktiske antall forespørsler avhenger av modellen du bruker. Billigere modeller som MiniMax M2.5 tillater flere forespørsler, mens dyrere modeller som GLM-5 tillater færre.

Tabellen nedenfor gir et estimert antall forespørsler basert på typiske Go-bruksmønstre:

|                          | GLM-5 | Kimi K2.5 | MiniMax M2.5 |
| ------------------------ | ----- | --------- | ------------ |
| forespørsler per 5 timer | 1,150 | 1,850     | 30,000       |
| forespørsler per uke     | 2,880 | 4,630     | 75,000       |
| forespørsler per måned   | 5,750 | 9,250     | 150,000      |

Estimater er basert på observerte gjennomsnittlige forespørselsmønstre:

- GLM-5 — 700 input, 52,000 cached, 150 output tokens per forespørsel
- Kimi K2.5 — 870 input, 55,000 cached, 200 output tokens per forespørsel
- MiniMax M2.5 — 300 input, 55,000 cached, 125 output tokens per forespørsel

Du kan spore din nåværende bruk i **<a href={console}>konsollen</a>**.

:::tip
Hvis du når bruksgrensen, kan du fortsette å bruke de gratis modellene.
:::

Bruksgrenser kan endres etter hvert som vi lærer fra tidlig bruk og tilbakemeldinger.

---

### Priser

OpenCode Go er et **$10/måned** abonnementsplan. Nedenfor er prisene **per 1M tokens**.

| Modell       | Input | Output | Bufret lesing |
| ------------ | ----- | ------ | ------------- |
| GLM-5        | $1.00 | $3.20  | $0.20         |
| Kimi K2.5    | $0.60 | $3.00  | $0.10         |
| MiniMax M2.5 | $0.30 | $1.20  | $0.03         |

---

### Bruk utover grensene

Hvis du også har kreditter på din Zen-saldo, kan du aktivere alternativet **Bruk saldo** i konsollen. Når aktivert, vil Go falle tilbake til Zen-saldoen din etter at du har nådd bruksgrensene dine i stedet for å blokkere forespørsler.

---

## Endepunkter

Du kan også få tilgang til Go-modeller gjennom følgende API-endepunkter.

| Modell       | Modell-ID    | Endepunkt                                        | AI SDK Pakke                |
| ------------ | ------------ | ------------------------------------------------ | --------------------------- |
| GLM-5        | glm-5        | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| Kimi K2.5    | kimi-k2.5    | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| MiniMax M2.5 | minimax-m2.5 | `https://opencode.ai/zen/go/v1/messages`         | `@ai-sdk/anthropic`         |

[Modell-ID-en](/docs/config/#models) i din OpenCode-konfigurasjon bruker formatet `opencode-go/<model-id>`. For eksempel, for Kimi K2.5, ville du bruke `opencode-go/kimi-k2.5` i konfigurasjonen din.
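Som illustrasjon, en liten (hypotetisk) hjelpefunksjon som bygger konfigurasjons-ID-en fra en modell-ID:

```typescript
// Minimal skisse: bygger konfigurasjons-ID-en `opencode-go/<model-id>`.
function toConfigId(modelId: string): string {
  return `opencode-go/${modelId}`;
}

console.log(toConfigId("kimi-k2.5")); // → opencode-go/kimi-k2.5
```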
---

## Personvern

Planen er designet primært for internasjonale brukere, med modeller driftet i USA, EU og Singapore for stabil global tilgang.

<a href={email}>Kontakt oss</a> hvis du har noen spørsmål.

---

## Mål

Vi opprettet OpenCode Go for å:

1. Gjøre AI-koding **tilgjengelig** for flere mennesker med et lavkostnadsabonnement.
2. Gi **pålitelig** tilgang til de beste åpne kodemodellene.
3. Kurere modeller som er **testet og ytelsestestet** for bruk av kodeagenter.
4. Ha **ingen innlåsing** ved å tillate deg å bruke hvilken som helst annen leverandør med OpenCode også.
---
title: Go
description: Tani abonament na otwarte modele kodowania.
---

import config from "../../../../config.mjs"

export const console = config.console
export const email = `mailto:${config.email}`

OpenCode Go to tania subskrypcja za **10 USD miesięcznie**, która zapewnia niezawodny dostęp do popularnych otwartych modeli kodowania.

:::note
OpenCode Go jest obecnie w fazie beta.
:::

Go działa jak każdy inny dostawca w OpenCode. Subskrybujesz OpenCode Go i otrzymujesz swój klucz API. Jest to **całkowicie opcjonalne** i nie musisz z tego korzystać, aby używać OpenCode.

Jest przeznaczony głównie dla użytkowników międzynarodowych, z modelami hostowanymi w USA, UE i Singapurze dla stabilnego dostępu globalnego.

---

## Tło

Otwarte modele stały się naprawdę dobre. Osiągają teraz wydajność zbliżoną do modeli komercyjnych w zadaniach związanych z kodowaniem. A ponieważ wielu dostawców może je obsługiwać konkurencyjnie, są zazwyczaj znacznie tańsze.

Jednak uzyskanie niezawodnego dostępu o niskim opóźnieniu może być trudne. Dostawcy różnią się jakością i dostępnością.

:::tip
Przetestowaliśmy wybraną grupę modeli i dostawców, którzy dobrze współpracują z OpenCode.
:::

Aby to naprawić, zrobiliśmy kilka rzeczy:

1. Przetestowaliśmy wybraną grupę otwartych modeli i rozmawialiśmy z ich zespołami o tym, jak najlepiej je uruchamiać.
2. Następnie współpracowaliśmy z kilkoma dostawcami, aby upewnić się, że są one obsługiwane poprawnie.
3. Na koniec przeprowadziliśmy testy porównawcze kombinacji modelu/dostawcy i stworzyliśmy listę, którą z czystym sumieniem polecamy.

OpenCode Go daje dostęp do tych modeli za **10 USD miesięcznie**.

---

## Jak to działa

OpenCode Go działa jak każdy inny dostawca w OpenCode.

1. Logujesz się do **<a href={console}>OpenCode Zen</a>**, subskrybujesz Go i kopiujesz swój klucz API.
2. Uruchamiasz polecenie `/connect` w TUI, wybierasz `OpenCode Go` i wklejasz swój klucz API.
3. Uruchom `/models` w TUI, aby zobaczyć listę modeli dostępnych przez Go.

:::note
Tylko jeden członek na obszar roboczy może subskrybować OpenCode Go.
:::

Obecna lista modeli obejmuje:

- **GLM-5**
- **Kimi K2.5**
- **MiniMax M2.5**

Lista modeli może ulec zmianie w miarę testowania i dodawania nowych.

---

## Limity użycia

OpenCode Go obejmuje następujące limity:

- **Limit 5-godzinny** — zużycie o wartości 12 USD
- **Limit tygodniowy** — zużycie o wartości 30 USD
- **Limit miesięczny** — zużycie o wartości 60 USD

Limity są definiowane w wartości dolarowej. Oznacza to, że rzeczywista liczba żądań zależy od używanego modelu. Tańsze modele, takie jak MiniMax M2.5, pozwalają na więcej żądań, podczas gdy droższe modele, takie jak GLM-5, na mniej.

Poniższa tabela przedstawia szacunkową liczbę żądań w oparciu o typowe wzorce użytkowania Go:

|                     | GLM-5 | Kimi K2.5 | MiniMax M2.5 |
| ------------------- | ----- | --------- | ------------ |
| żądania na 5 godzin | 1 150 | 1 850     | 30 000       |
| żądania na tydzień  | 2 880 | 4 630     | 75 000       |
| żądania na miesiąc  | 5 750 | 9 250     | 150 000      |

Szacunki opierają się na zaobserwowanych średnich wzorcach żądań:

- GLM-5 — 700 wejściowych, 52 000 zbuforowanych, 150 wyjściowych tokenów na żądanie
- Kimi K2.5 — 870 wejściowych, 55 000 zbuforowanych, 200 wyjściowych tokenów na żądanie
- MiniMax M2.5 — 300 wejściowych, 55 000 zbuforowanych, 125 wyjściowych tokenów na żądanie

Możesz śledzić swoje bieżące zużycie w **<a href={console}>konsoli</a>**.

:::tip
Jeśli osiągniesz limit użycia, możesz kontynuować korzystanie z darmowych modeli.
:::

Limity użycia mogą ulec zmianie, gdy będziemy uczyć się na podstawie wczesnego użytkowania i opinii.

---

### Cennik

OpenCode Go to plan subskrypcji za **10 USD miesięcznie**. Poniżej znajdują się ceny **za 1 mln tokenów**.

| Model        | Wejście | Wyjście | Odczyt cache |
| ------------ | ------- | ------- | ------------ |
| GLM-5        | 1,00 $  | 3,20 $  | 0,20 $       |
| Kimi K2.5    | 0,60 $  | 3,00 $  | 0,10 $       |
| MiniMax M2.5 | 0,30 $  | 1,20 $  | 0,03 $       |

---

### Użycie poza limitami

Jeśli posiadasz również środki na swoim saldzie Zen, możesz włączyć opcję **Use balance** (Użyj salda) w konsoli. Po włączeniu, Go przełączy się na twoje saldo Zen po osiągnięciu limitów użycia, zamiast blokować żądania.

---

## Punkty końcowe

Możesz również uzyskać dostęp do modeli Go poprzez następujące punkty końcowe API.

| Model        | Identyfikator modelu | Endpoint                                         | Pakiet AI SDK               |
| ------------ | -------------------- | ------------------------------------------------ | --------------------------- |
| GLM-5        | glm-5                | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| Kimi K2.5    | kimi-k2.5            | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| MiniMax M2.5 | minimax-m2.5         | `https://opencode.ai/zen/go/v1/messages`         | `@ai-sdk/anthropic`         |

[Identyfikator modelu](/docs/config/#models) w twojej konfiguracji OpenCode używa formatu `opencode-go/<model-id>`. Na przykład dla Kimi K2.5 użyłbyś `opencode-go/kimi-k2.5` w swojej konfiguracji.

---

## Prywatność

Plan jest przeznaczony głównie dla użytkowników międzynarodowych, z modelami hostowanymi w USA, UE i Singapurze dla stabilnego dostępu globalnego.

<a href={email}>Skontaktuj się z nami</a>, jeśli masz jakiekolwiek pytania.

---

## Cele

Stworzyliśmy OpenCode Go, aby:

1. Uczynić kodowanie z AI **dostępnym** dla większej liczby osób dzięki taniej subskrypcji.
2. Zapewnić **niezawodny** dostęp do najlepszych otwartych modeli kodowania.
3. Wyselekcjonować modele, które są **przetestowane i sprawdzone** pod kątem użycia z agentami kodującymi.
4. Nie wprowadzać **żadnych blokad (lock-in)**, pozwalając na korzystanie z dowolnego innego dostawcy w OpenCode.
@@ -1,145 +0,0 @@
---
title: Go
description: Assinatura de baixo custo para modelos de codificação abertos.
---

import config from "../../../../config.mjs"
export const console = config.console
export const email = `mailto:${config.email}`

O OpenCode Go é uma assinatura de baixo custo de **$10/mês** que oferece acesso confiável a modelos de codificação abertos populares.

:::note
O OpenCode Go está atualmente em beta.
:::

O Go funciona como qualquer outro provedor no OpenCode. Você assina o OpenCode Go e obtém sua chave de API. É **totalmente opcional** e você não precisa usá-lo para usar o OpenCode.

Ele é projetado principalmente para usuários internacionais, com modelos hospedados nos EUA, UE e Singapura para acesso global estável.

---

## Contexto

Modelos abertos ficaram realmente bons. Eles agora alcançam desempenho próximo aos modelos proprietários para tarefas de codificação. E como muitos provedores podem servi-los competitivamente, eles geralmente são muito mais baratos.

No entanto, obter acesso confiável e de baixa latência a eles pode ser difícil. Os provedores variam em qualidade e disponibilidade.

:::tip
Testamos um grupo selecionado de modelos e provedores que funcionam bem com o OpenCode.
:::

Para corrigir isso, fizemos algumas coisas:

1. Testamos um grupo selecionado de modelos abertos e conversamos com suas equipes sobre a melhor forma de executá-los.
2. Trabalhamos com alguns provedores para garantir que eles estivessem sendo servidos corretamente.
3. Finalmente, fizemos benchmarks da combinação modelo/provedor e chegamos a uma lista que nos sentimos bem em recomendar.

O OpenCode Go oferece acesso a esses modelos por **$10/mês**.

---

## Como funciona

O OpenCode Go funciona como qualquer outro provedor no OpenCode.

1. Você faz login no **<a href={console}>OpenCode Zen</a>**, assina o Go e copia sua chave de API.
2. Você executa o comando `/connect` na TUI, seleciona `OpenCode Go` e cola sua chave de API.
3. Execute `/models` na TUI para ver a lista de modelos disponíveis através do Go.

:::note
Apenas um membro por workspace pode assinar o OpenCode Go.
:::

A lista atual de modelos inclui:

- **GLM-5**
- **Kimi K2.5**
- **MiniMax M2.5**

A lista de modelos pode mudar conforme testamos e adicionamos novos.

---

## Limites de uso

O OpenCode Go inclui os seguintes limites:

- **Limite de 5 horas** — $12 de uso
- **Limite semanal** — $30 de uso
- **Limite mensal** — $60 de uso

Os limites são definidos em valor monetário. Isso significa que sua contagem real de requisições depende do modelo que você usa. Modelos mais baratos como o MiniMax M2.5 permitem mais requisições, enquanto modelos de custo mais alto como o GLM-5 permitem menos.

A tabela abaixo fornece uma estimativa de contagem de requisições baseada em padrões típicos de uso do Go:

|                         | GLM-5 | Kimi K2.5 | MiniMax M2.5 |
| ----------------------- | ----- | --------- | ------------ |
| requisições por 5 horas | 1.150 | 1.850     | 30.000       |
| requisições por semana  | 2.880 | 4.630     | 75.000       |
| requisições por mês     | 5.750 | 9.250     | 150.000      |

As estimativas são baseadas em padrões médios de requisição observados:

- GLM-5 — 700 tokens de entrada, 52.000 em cache, 150 tokens de saída por requisição
- Kimi K2.5 — 870 tokens de entrada, 55.000 em cache, 200 tokens de saída por requisição
- MiniMax M2.5 — 300 tokens de entrada, 55.000 em cache, 125 tokens de saída por requisição

Você pode acompanhar seu uso atual no **<a href={console}>console</a>**.

:::tip
Se você atingir o limite de uso, pode continuar usando os modelos gratuitos.
:::

Os limites de uso podem mudar conforme aprendemos com o uso inicial e feedback.

---

### Preços

O OpenCode Go é um plano de assinatura de **$10/mês**. Abaixo estão os preços **por 1M de tokens**.

| Modelo       | Entrada | Saída | Leitura em Cache |
| ------------ | ------- | ----- | ---------------- |
| GLM-5        | $1.00   | $3.20 | $0.20            |
| Kimi K2.5    | $0.60   | $3.00 | $0.10            |
| MiniMax M2.5 | $0.30   | $1.20 | $0.03            |

---

### Uso além dos limites

Se você também tiver créditos em seu saldo Zen, pode ativar a opção **Use balance** (Usar saldo) no console. Quando ativada, o Go recorrerá ao seu saldo Zen depois que você atingir seus limites de uso, em vez de bloquear as requisições.

---

## Endpoints

Você também pode acessar os modelos Go através dos seguintes endpoints de API.

| Modelo       | ID do Modelo | Endpoint                                         | Pacote AI SDK               |
| ------------ | ------------ | ------------------------------------------------ | --------------------------- |
| GLM-5        | glm-5        | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| Kimi K2.5    | kimi-k2.5    | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| MiniMax M2.5 | minimax-m2.5 | `https://opencode.ai/zen/go/v1/messages`         | `@ai-sdk/anthropic`         |
O [model id](/docs/config/#models) (ID do modelo) na sua configuração do OpenCode usa o formato `opencode-go/<model-id>`. Por exemplo, para o Kimi K2.5, você usaria `opencode-go/kimi-k2.5` na sua configuração.
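Como esboço hipotético (não um exemplo oficial), a requisição para o endpoint compatível com OpenAI pode ser montada assim; o cabeçalho `Authorization: Bearer` segue a convenção padrão da API da OpenAI e é uma suposição aqui:

```python
import json

# Endpoint compatível com OpenAI, conforme a tabela acima.
BASE_URL = "https://opencode.ai/zen/go/v1/chat/completions"

def build_request(api_key: str, model_id: str, prompt: str) -> tuple[dict, str]:
    # "Authorization: Bearer" é uma suposição baseada na convenção OpenAI.
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model_id,  # ex.: "kimi-k2.5", conforme a coluna "ID do Modelo"
        "messages": [{"role": "user", "content": prompt}],
    })
    return headers, body

headers, body = build_request("SUA_CHAVE_DE_API", "kimi-k2.5", "Olá")
```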
---

## Privacidade

O plano é projetado principalmente para usuários internacionais, com modelos hospedados nos EUA, UE e Singapura para acesso global estável.

<a href={email}>Entre em contato conosco</a> se tiver alguma dúvida.

---

## Objetivos

Criamos o OpenCode Go para:

1. Tornar a IA de codificação **acessível** a mais pessoas com uma assinatura de baixo custo.
2. Fornecer acesso **confiável** aos melhores modelos de codificação abertos.
3. Curar modelos que são **testados e avaliados** para uso em agentes de codificação.
4. Não ter **nenhum bloqueio (lock-in)**, permitindo que você use qualquer outro provedor com o OpenCode também.
@@ -1,145 +0,0 @@
---
title: Go
description: Недорогая подписка на открытые модели для кодинга.
---

import config from "../../../../config.mjs"
export const console = config.console
export const email = `mailto:${config.email}`

OpenCode Go — это недорогая подписка за **$10/месяц**, которая предоставляет надежный доступ к популярным открытым моделям для кодинга.

:::note
OpenCode Go в настоящее время находится в бета-версии.
:::

Go работает как любой другой провайдер в OpenCode. Вы подписываетесь на OpenCode Go и получаете свой API ключ. Это **полностью опционально**, и вам не нужно использовать его, чтобы пользоваться OpenCode.

Он разработан в первую очередь для международных пользователей, с моделями, размещенными в США, ЕС и Сингапуре для стабильного глобального доступа.

---

## Предыстория

Открытые модели стали действительно хорошими. Теперь они достигают производительности, близкой к проприетарным моделям для задач кодинга. И поскольку многие провайдеры могут обслуживать их на конкурентной основе, они обычно намного дешевле.

Однако получение надежного доступа к ним с низкой задержкой может быть сложным. Качество и доступность провайдеров варьируются.

:::tip
Мы протестировали избранную группу моделей и провайдеров, которые хорошо работают с OpenCode.
:::

Чтобы исправить это, мы сделали пару вещей:

1. Мы протестировали избранную группу открытых моделей и поговорили с их командами о том, как лучше всего их запускать.
2. Затем мы работали с несколькими провайдерами, чтобы убедиться, что они обслуживаются правильно.
3. Наконец, мы провели бенчмаркинг комбинации модели/провайдера и составили список, который мы можем смело рекомендовать.

OpenCode Go дает вам доступ к этим моделям за **$10/месяц**.

---

## Как это работает

OpenCode Go работает как любой другой провайдер в OpenCode.

1. Вы входите в **<a href={console}>OpenCode Zen</a>**, подписываетесь на Go и копируете свой API ключ.
2. Вы запускаете команду `/connect` в TUI, выбираете `OpenCode Go` и вставляете свой API ключ.
3. Запустите `/models` в TUI, чтобы увидеть список моделей, доступных через Go.

:::note
Только один участник рабочей области может подписаться на OpenCode Go.
:::

Текущий список моделей включает:

- **GLM-5**
- **Kimi K2.5**
- **MiniMax M2.5**

Список моделей может меняться по мере того, как мы тестируем и добавляем новые.

---

## Лимиты использования

OpenCode Go включает следующие лимиты:

- **5-часовой лимит** — $12 использования
- **Недельный лимит** — $30 использования
- **Месячный лимит** — $60 использования

Лимиты определены в денежном выражении. Это означает, что ваше фактическое количество запросов зависит от модели, которую вы используете. Более дешевые модели, такие как MiniMax M2.5, позволяют делать больше запросов, в то время как более дорогие модели, такие как GLM-5, позволяют меньше.

Таблица ниже предоставляет примерное количество запросов на основе типичных паттернов использования Go:

|                     | GLM-5 | Kimi K2.5 | MiniMax M2.5 |
| ------------------- | ----- | --------- | ------------ |
| запросов за 5 часов | 1,150 | 1,850     | 30,000       |
| запросов в неделю   | 2,880 | 4,630     | 75,000       |
| запросов в месяц    | 5,750 | 9,250     | 150,000      |

Оценки основаны на наблюдаемых средних паттернах запросов:

- GLM-5 — 700 входных, 52,000 кэшированных, 150 выходных токенов на запрос
- Kimi K2.5 — 870 входных, 55,000 кэшированных, 200 выходных токенов на запрос
- MiniMax M2.5 — 300 входных, 55,000 кэшированных, 125 выходных токенов на запрос
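Набросок того, как можно грубо прикинуть стоимость одного запроса из цен за 1 млн токенов (см. раздел «Ценообразование» ниже) и средних паттернов выше. Это предположение, а не официальная методика: реальные оценки в таблице могут учитывать дополнительные факторы.

```python
# Грубая прикидка: стоимость запроса = сумма (токены * цена за 1 млн токенов).
def cost_per_request(inp: int, cached: int, out: int,
                     p_in: float, p_cache: float, p_out: float) -> float:
    per_m = 1_000_000
    return (inp * p_in + cached * p_cache + out * p_out) / per_m

# Kimi K2.5: 870 входных, 55,000 кэшированных, 200 выходных токенов;
# цены $0.60 / $0.10 / $3.00 за 1 млн токенов.
kimi = cost_per_request(870, 55_000, 200, 0.60, 0.10, 3.00)
print(round(kimi, 6))  # примерно $0.0066 за запрос
```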
Вы можете отслеживать свое текущее использование в **<a href={console}>консоли</a>**.

:::tip
Если вы достигнете лимита использования, вы можете продолжить использовать бесплатные модели.
:::

Лимиты использования могут меняться по мере того, как мы учимся на раннем использовании и отзывах.

---

### Ценообразование

OpenCode Go — это план подписки за **$10/месяц**. Ниже приведены цены **за 1 млн токенов**.

| Модель       | Ввод  | Вывод | Кэшированное чтение |
| ------------ | ----- | ----- | ------------------- |
| GLM-5        | $1.00 | $3.20 | $0.20               |
| Kimi K2.5    | $0.60 | $3.00 | $0.10               |
| MiniMax M2.5 | $0.30 | $1.20 | $0.03               |

---

### Использование сверх лимитов

Если у вас также есть кредиты на балансе Zen, вы можете включить опцию **Use balance** (Использовать баланс) в консоли. Когда она включена, Go переключится на ваш баланс Zen после того, как вы исчерпаете свои лимиты использования, вместо блокировки запросов.

---

## Эндпоинты

Вы также можете получить доступ к моделям Go через следующие API эндпоинты.

| Модель       | ID модели    | Эндпоинт                                         | Пакет AI SDK                |
| ------------ | ------------ | ------------------------------------------------ | --------------------------- |
| GLM-5        | glm-5        | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| Kimi K2.5    | kimi-k2.5    | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| MiniMax M2.5 | minimax-m2.5 | `https://opencode.ai/zen/go/v1/messages`         | `@ai-sdk/anthropic`         |

[Model id](/docs/config/#models) в вашей конфигурации OpenCode использует формат `opencode-go/<model-id>`. Например, для Kimi K2.5 вы бы использовали `opencode-go/kimi-k2.5` в вашей конфигурации.

---

## Конфиденциальность

План разработан в первую очередь для международных пользователей, с моделями, размещенными в США, ЕС и Сингапуре для стабильного глобального доступа.

<a href={email}>Свяжитесь с нами</a>, если у вас есть вопросы.

---

## Цели

Мы создали OpenCode Go, чтобы:

1. Сделать ИИ-кодинг **доступным** большему количеству людей с недорогой подпиской.
2. Обеспечить **надежный** доступ к лучшим открытым моделям для кодинга.
3. Отобрать модели, которые **протестированы и проверены** для использования агентами кодинга.
4. Не иметь **привязки к поставщику** (no lock-in), позволяя вам использовать любого другого провайдера с OpenCode.
@@ -1,145 +0,0 @@
---
title: Go
description: การสมัครสมาชิกราคาประหยัดสำหรับโมเดลการเขียนโค้ดแบบเปิด
---

import config from "../../../../config.mjs"
export const console = config.console
export const email = `mailto:${config.email}`

OpenCode Go คือการสมัครสมาชิกราคาประหยัดเพียง **$10/เดือน** ที่ให้คุณเข้าถึงโมเดลการเขียนโค้ดแบบเปิดยอดนิยมได้อย่างน่าเชื่อถือ

:::note
ขณะนี้ OpenCode Go อยู่ในช่วงเบต้า
:::

Go ทำงานเหมือนกับผู้ให้บริการรายอื่นใน OpenCode คุณสมัครสมาชิก OpenCode Go และรับคีย์ API ของคุณ มันเป็น**ตัวเลือกเสริมทั้งหมด** และคุณไม่จำเป็นต้องใช้มันเพื่อใช้งาน OpenCode

มันถูกออกแบบมาสำหรับผู้ใช้งานระดับนานาชาติเป็นหลัก โดยมีโมเดลโฮสต์อยู่ในสหรัฐอเมริกา สหภาพยุโรป และสิงคโปร์ เพื่อการเข้าถึงที่เสถียรทั่วโลก

---

## ความเป็นมา

โมเดลแบบเปิดมีคุณภาพดีขึ้นมาก ปัจจุบันมีประสิทธิภาพใกล้เคียงกับโมเดลที่เป็นกรรมสิทธิ์สำหรับงานเขียนโค้ด และเนื่องจากผู้ให้บริการหลายรายสามารถให้บริการโมเดลเหล่านี้ได้อย่างแข่งขันกัน จึงมักจะมีราคาถูกกว่ามาก

อย่างไรก็ตาม การเข้าถึงโมเดลเหล่านี้อย่างน่าเชื่อถือและมีความหน่วงต่ำอาจเป็นเรื่องยาก ผู้ให้บริการมีคุณภาพและความพร้อมใช้งานที่แตกต่างกัน

:::tip
เราได้ทดสอบกลุ่มโมเดลและผู้ให้บริการที่เลือกสรรแล้วซึ่งทำงานได้ดีกับ OpenCode
:::

เพื่อแก้ไขปัญหานี้ เราได้ทำสิ่งต่อไปนี้:

1. เราทดสอบกลุ่มโมเดลแบบเปิดที่เลือกสรรและพูดคุยกับทีมของพวกเขาเกี่ยวกับวิธีการรันโมเดลให้ดีที่สุด
2. จากนั้นเราทำงานร่วมกับผู้ให้บริการบางรายเพื่อให้แน่ใจว่าโมเดลเหล่านี้ได้รับการให้บริการอย่างถูกต้อง
3. สุดท้าย เราทำการทดสอบประสิทธิภาพ (Benchmark) การรวมกันของโมเดล/ผู้ให้บริการ และได้รายชื่อที่เรารู้สึกดีที่จะแนะนำ

OpenCode Go ให้คุณเข้าถึงโมเดลเหล่านี้ในราคา **$10/เดือน**

---

## วิธีการทำงาน

OpenCode Go ทำงานเหมือนกับผู้ให้บริการรายอื่นใน OpenCode

1. ลงชื่อเข้าใช้ **<a href={console}>OpenCode Zen</a>** สมัครสมาชิก Go และคัดลอกคีย์ API ของคุณ
2. รันคำสั่ง `/connect` ใน TUI เลือก `OpenCode Go` และวางคีย์ API ของคุณ
3. รัน `/models` ใน TUI เพื่อดูรายชื่อโมเดลที่สามารถใช้งานได้ผ่าน Go

:::note
สมาชิกเพียงหนึ่งคนต่อพื้นที่ทำงาน (Workspace) เท่านั้นที่สามารถสมัครสมาชิก OpenCode Go ได้
:::

รายชื่อโมเดลปัจจุบันประกอบด้วย:

- **GLM-5**
- **Kimi K2.5**
- **MiniMax M2.5**

รายชื่อโมเดลอาจมีการเปลี่ยนแปลงเมื่อเราทดสอบและเพิ่มโมเดลใหม่

---

## ขีดจำกัดการใช้งาน

OpenCode Go มีขีดจำกัดดังต่อไปนี้:

- **ขีดจำกัด 5 ชั่วโมง** — การใช้งานมูลค่า $12
- **ขีดจำกัดรายสัปดาห์** — การใช้งานมูลค่า $30
- **ขีดจำกัดรายเดือน** — การใช้งานมูลค่า $60

ขีดจำกัดถูกกำหนดเป็นมูลค่าดอลลาร์ ซึ่งหมายความว่าจำนวนคำขอจริงของคุณจะขึ้นอยู่กับโมเดลที่คุณใช้ โมเดลที่ถูกกว่าเช่น MiniMax M2.5 อนุญาตให้ส่งคำขอได้มากกว่า ในขณะที่โมเดลที่มีราคาสูงกว่าเช่น GLM-5 จะอนุญาตให้ส่งคำขอได้น้อยกว่า

ตารางด้านล่างแสดงจำนวนคำขอโดยประมาณตามรูปแบบการใช้งาน Go ทั่วไป:

|                   | GLM-5 | Kimi K2.5 | MiniMax M2.5 |
| ----------------- | ----- | --------- | ------------ |
| คำขอต่อ 5 ชั่วโมง | 1,150 | 1,850     | 30,000       |
| คำขอต่อสัปดาห์    | 2,880 | 4,630     | 75,000       |
| คำขอต่อเดือน      | 5,750 | 9,250     | 150,000      |

การประมาณการขึ้นอยู่กับรูปแบบคำขอเฉลี่ยที่สังเกตได้:

- GLM-5 — 700 input, 52,000 cached, 150 output tokens ต่อคำขอ
- Kimi K2.5 — 870 input, 55,000 cached, 200 output tokens ต่อคำขอ
- MiniMax M2.5 — 300 input, 55,000 cached, 125 output tokens ต่อคำขอ

คุณสามารถติดตามการใช้งานปัจจุบันของคุณได้ใน **<a href={console}>คอนโซล</a>**

:::tip
หากคุณใช้งานจนถึงขีดจำกัด คุณสามารถใช้โมเดลฟรีต่อไปได้
:::

ขีดจำกัดการใช้งานอาจมีการเปลี่ยนแปลงเมื่อเราเรียนรู้จากการใช้งานและข้อเสนอแนะในช่วงแรก

---

### ราคา

OpenCode Go เป็นแผนการสมัครสมาชิกราคา **$10/เดือน** ด้านล่างคือราคา**ต่อ 1 ล้านโทเค็น**

| Model        | Input | Output | Cached Read |
| ------------ | ----- | ------ | ----------- |
| GLM-5        | $1.00 | $3.20  | $0.20       |
| Kimi K2.5    | $0.60 | $3.00  | $0.10       |
| MiniMax M2.5 | $0.30 | $1.20  | $0.03       |

---

### การใช้งานเกินขีดจำกัด

หากคุณมีเครดิตในยอดคงเหลือ Zen ของคุณ คุณสามารถเปิดใช้งานตัวเลือก **Use balance** ในคอนโซล เมื่อเปิดใช้งาน Go จะเปลี่ยนไปใช้ยอดคงเหลือ Zen ของคุณหลังจากที่คุณใช้งานถึงขีดจำกัดแล้ว แทนที่จะบล็อกคำขอ

---

## Endpoints

คุณยังสามารถเข้าถึงโมเดล Go ผ่าน API endpoints ต่อไปนี้

| Model        | Model ID     | Endpoint                                         | AI SDK Package              |
| ------------ | ------------ | ------------------------------------------------ | --------------------------- |
| GLM-5        | glm-5        | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| Kimi K2.5    | kimi-k2.5    | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| MiniMax M2.5 | minimax-m2.5 | `https://opencode.ai/zen/go/v1/messages`         | `@ai-sdk/anthropic`         |

[รหัสโมเดล](/docs/config/#models) ในการกำหนดค่า OpenCode ของคุณใช้รูปแบบ `opencode-go/<model-id>` ตัวอย่างเช่น สำหรับ Kimi K2.5 คุณจะใช้ `opencode-go/kimi-k2.5` ในการกำหนดค่าของคุณ

---

## ความเป็นส่วนตัว

แผนนี้ออกแบบมาสำหรับผู้ใช้ระดับนานาชาติเป็นหลัก โดยมีโมเดลโฮสต์อยู่ในสหรัฐอเมริกา สหภาพยุโรป และสิงคโปร์ เพื่อการเข้าถึงที่เสถียรทั่วโลก

<a href={email}>ติดต่อเรา</a> หากคุณมีข้อสงสัยใดๆ

---

## เป้าหมาย

เราสร้าง OpenCode Go เพื่อ:

1. ทำให้การเขียนโค้ดด้วย AI **เข้าถึงได้** สำหรับผู้คนมากขึ้นด้วยการสมัครสมาชิกราคาประหยัด
2. ให้การเข้าถึงโมเดลการเขียนโค้ดแบบเปิดที่ดีที่สุดอย่าง **น่าเชื่อถือ**
3. คัดสรรโมเดลที่ผ่านการ **ทดสอบและวัดประสิทธิภาพ** สำหรับการใช้งานตัวแทน (Agent) เขียนโค้ด
4. **ไม่มีการผูกมัด** โดยอนุญาตให้คุณใช้ผู้ให้บริการรายอื่นกับ OpenCode ได้เช่นกัน
@@ -1,145 +0,0 @@
---
title: Go
description: Açık kodlama modelleri için düşük maliyetli abonelik.
---

import config from "../../../../config.mjs"
export const console = config.console
export const email = `mailto:${config.email}`

OpenCode Go, popüler açık kodlama modellerine güvenilir erişim sağlayan **aylık 10$** tutarında düşük maliyetli bir aboneliktir.

:::note
OpenCode Go şu anda beta aşamasındadır.
:::

Go, OpenCode içindeki diğer sağlayıcılar gibi çalışır. OpenCode Go'ya abone olur ve API anahtarınızı alırsınız. Bu **tamamen isteğe bağlıdır** ve OpenCode'u kullanmak için buna ihtiyacınız yoktur.

Kararlı küresel erişim için ABD, AB ve Singapur'da barındırılan modellerle, öncelikli olarak uluslararası kullanıcılar için tasarlanmıştır.

---

## Arka Plan

Açık modeller gerçekten iyi hale geldi. Artık kodlama görevleri için tescilli modellere yakın performans sunuyorlar. Ve birçok sağlayıcı bunları rekabetçi bir şekilde sunabildiği için genellikle çok daha ucuzlar.

Ancak, bunlara güvenilir ve düşük gecikmeli erişim sağlamak zor olabilir. Sağlayıcılar kalite ve kullanılabilirlik açısından farklılık gösterir.

:::tip
OpenCode ile iyi çalışan seçkin bir model ve sağlayıcı grubunu test ettik.
:::

Bunu düzeltmek için birkaç şey yaptık:

1. Seçkin bir açık model grubunu test ettik ve bunları en iyi nasıl çalıştıracakları konusunda ekipleriyle görüştük.
2. Daha sonra bunların doğru şekilde sunulduğundan emin olmak için birkaç sağlayıcıyla çalıştık.
3. Son olarak, model/sağlayıcı kombinasyonunu kıyasladık ve önermekten memnuniyet duyduğumuz bir liste oluşturduk.

OpenCode Go, bu modellere **aylık 10$** karşılığında erişmenizi sağlar.

---

## Nasıl çalışır

OpenCode Go, OpenCode'daki diğer herhangi bir sağlayıcı gibi çalışır.

1. **<a href={console}>OpenCode Zen</a>**'de oturum açın, Go'ya abone olun ve API anahtarınızı kopyalayın.
2. TUI'de `/connect` komutunu çalıştırın, `OpenCode Go`yu seçin ve API anahtarınızı yapıştırın.
3. Go üzerinden kullanılabilen modellerin listesini görmek için TUI'de `/models` komutunu çalıştırın.

:::note
Çalışma alanı başına yalnızca bir üye OpenCode Go'ya abone olabilir.
:::

Mevcut model listesi şunları içerir:

- **GLM-5**
- **Kimi K2.5**
- **MiniMax M2.5**

Test ettikçe ve yenilerini ekledikçe model listesi değişebilir.

---

## Kullanım sınırları

OpenCode Go aşağıdaki sınırları içerir:

- **5 saatlik sınır** — 12$ kullanım
- **Haftalık sınır** — 30$ kullanım
- **Aylık sınır** — 60$ kullanım

Sınırlar dolar değeri üzerinden tanımlanmıştır. Bu, gerçek istek sayınızın kullandığınız modele bağlı olduğu anlamına gelir. MiniMax M2.5 gibi daha ucuz modeller daha fazla isteğe izin verirken, GLM-5 gibi daha yüksek maliyetli modeller daha azına izin verir.

Aşağıdaki tablo, tipik Go kullanım modellerine dayalı tahmini bir istek sayısı sunmaktadır:

|                     | GLM-5 | Kimi K2.5 | MiniMax M2.5 |
| ------------------- | ----- | --------- | ------------ |
| 5 saat başına istek | 1.150 | 1.850     | 30.000       |
| haftalık istek      | 2.880 | 4.630     | 75.000       |
| aylık istek         | 5.750 | 9.250     | 150.000      |

Tahminler gözlemlenen ortalama istek modellerine dayanmaktadır:

- GLM-5 — İstek başına 700 girdi, 52.000 önbelleğe alınmış, 150 çıktı token'ı
- Kimi K2.5 — İstek başına 870 girdi, 55.000 önbelleğe alınmış, 200 çıktı token'ı
- MiniMax M2.5 — İstek başına 300 girdi, 55.000 önbelleğe alınmış, 125 çıktı token'ı

Mevcut kullanımınızı **<a href={console}>konsoldan</a>** takip edebilirsiniz.

:::tip
Kullanım sınırına ulaşırsanız, ücretsiz modelleri kullanmaya devam edebilirsiniz.
:::

Erken kullanım ve geri bildirimlerden öğrendiklerimize göre kullanım sınırları değişebilir.

---

### Fiyatlandırma

OpenCode Go, **aylık 10$** tutarında bir abonelik planıdır. Aşağıda **1M token başına** fiyatlar yer almaktadır.

| Model        | Girdi | Çıktı | Önbelleğe Alınmış Okuma |
| ------------ | ----- | ----- | ----------------------- |
| GLM-5        | $1.00 | $3.20 | $0.20                   |
| Kimi K2.5    | $0.60 | $3.00 | $0.10                   |
| MiniMax M2.5 | $0.30 | $1.20 | $0.03                   |

---

### Sınırların ötesinde kullanım

Zen bakiyenizde krediniz de varsa, konsoldaki **Bakiyeyi kullan** (Use balance) seçeneğini etkinleştirebilirsiniz. Etkinleştirildiğinde, kullanım sınırlarınıza ulaştıktan sonra Go, istekleri engellemek yerine Zen bakiyenize geri dönecektir.

---

## Uç Noktalar

Go modellerine aşağıdaki API uç noktaları üzerinden de erişebilirsiniz.

| Model        | Model ID     | Endpoint                                         | AI SDK Paketi               |
| ------------ | ------------ | ------------------------------------------------ | --------------------------- |
| GLM-5        | glm-5        | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| Kimi K2.5    | kimi-k2.5    | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| MiniMax M2.5 | minimax-m2.5 | `https://opencode.ai/zen/go/v1/messages`         | `@ai-sdk/anthropic`         |

OpenCode yapılandırmanızdaki [model kimliği](/docs/config/#models), `opencode-go/<model-id>` biçimini kullanır. Örneğin, Kimi K2.5 için yapılandırmanızda `opencode-go/kimi-k2.5` kullanırsınız.

---

## Gizlilik

Plan öncelikli olarak uluslararası kullanıcılar için tasarlanmıştır; modeller kararlı küresel erişim için ABD, AB ve Singapur'da barındırılmaktadır.

Herhangi bir sorunuz varsa <a href={email}>bizimle iletişime geçin</a>.

---

## Hedefler

OpenCode Go'yu şu amaçlarla oluşturduk:

1. Düşük maliyetli bir abonelikle yapay zeka kodlamasını daha fazla insan için **erişilebilir** kılmak.
2. En iyi açık kodlama modellerine **güvenilir** erişim sağlamak.
3. Kodlama ajanı kullanımı için **test edilmiş ve kıyaslanmış** modelleri seçmek.
4. OpenCode ile başka herhangi bir sağlayıcıyı kullanmanıza da izin vererek **kilitlenmeyi önlemek**.
@@ -121,12 +121,12 @@ We support a pay-as-you-go model. Below are the prices **per 1M tokens**.
| --------------------------------- | ------ | ------ | ----------- | ------------ |
| Big Pickle                        | Free   | Free   | Free        | -            |
| MiniMax M2.5 Free                 | Free   | Free   | Free        | -            |
| MiniMax M2.5                      | $0.30  | $1.20  | $0.06       | $0.375       |
| MiniMax M2.5                      | $0.30  | $1.20  | $0.06       | -            |
| MiniMax M2.1                      | $0.30  | $1.20  | $0.10       | -            |
| GLM 5                             | $1.00  | $3.20  | $0.20       | -            |
| GLM 4.7                           | $0.60  | $2.20  | $0.10       | -            |
| GLM 4.6                           | $0.60  | $2.20  | $0.10       | -            |
| Kimi K2.5                         | $0.60  | $3.00  | $0.10       | -            |
| Kimi K2.5                         | $0.60  | $3.00  | $0.08       | -            |
| Kimi K2 Thinking                  | $0.40  | $2.50  | -           | -            |
| Kimi K2                           | $0.40  | $2.50  | -           | -            |
| Qwen3 Coder 480B                  | $0.45  | $1.50  | -           | -            |
@@ -1,145 +0,0 @@
---
title: Go
description: 低成本的开源编程模型订阅服务。
---

import config from "../../../../config.mjs"
export const console = config.console
export const email = `mailto:${config.email}`

OpenCode Go 是一项低成本的订阅服务(**$10/月**),为您提供对流行开源编程模型的可靠访问。

:::note
OpenCode Go 目前处于测试阶段。
:::

Go 的工作方式与 OpenCode 中的其他提供商一样。您订阅 OpenCode Go 并获取 API 密钥。这是**完全可选的**,您不需要它也能使用 OpenCode。

它主要为国际用户设计,模型托管在美国、欧盟和新加坡,以确保稳定的全球访问。

---

## 背景

开源模型已经变得非常出色。它们现在在编程任务上的表现接近专有模型。而且因为许多提供商可以竞争性地提供服务,它们通常要便宜得多。

然而,获得可靠、低延迟的访问可能很困难。提供商的质量和可用性各不相同。

:::tip
我们测试了一组精选的模型和提供商,它们与 OpenCode 配合良好。
:::

为了解决这个问题,我们做了一些事情:

1. 我们测试了一组精选的开源模型,并与他们的团队讨论了如何最好地运行它们。
2. 然后,我们与几家提供商合作,确保这些模型得到正确的服务。
3. 最后,我们对模型/提供商的组合进行了基准测试,并得出了一个我们乐于推荐的列表。

OpenCode Go 让您可以以 **$10/月** 的价格访问这些模型。

---

## 工作原理

OpenCode Go 的工作方式与 OpenCode 中的其他提供商一样。

1. 登录 **<a href={console}>OpenCode Zen</a>**,订阅 Go,并复制您的 API 密钥。
2. 在 TUI 中运行 `/connect` 命令,选择 `OpenCode Go`,然后粘贴您的 API 密钥。
3. 在 TUI 中运行 `/models` 以查看通过 Go 可用的模型列表。

:::note
每个工作区只有一名成员可以订阅 OpenCode Go。
:::

目前的模型列表包括:

- **GLM-5**
- **Kimi K2.5**
- **MiniMax M2.5**

随着我们测试和添加新模型,模型列表可能会发生变化。

---

## 使用限制

OpenCode Go 包含以下限制:

- **5 小时限制** — $12 的使用量
- **每周限制** — $30 的使用量
- **每月限制** — $60 的使用量

限制是以美元价值定义的。这意味着您的实际请求数量取决于您使用的模型。像 MiniMax M2.5 这样更便宜的模型允许更多的请求,而像 GLM-5 这样成本更高的模型允许的请求较少。

下表提供了基于典型 Go 使用模式的估计请求数:

|                 | GLM-5 | Kimi K2.5 | MiniMax M2.5 |
| --------------- | ----- | --------- | ------------ |
| 每 5 小时请求数 | 1,150 | 1,850     | 30,000       |
| 每周请求数      | 2,880 | 4,630     | 75,000       |
| 每月请求数      | 5,750 | 9,250     | 150,000      |

估计值基于观察到的平均请求模式:

- GLM-5 — 每次请求 700 输入,52,000 缓存,150 输出 token
- Kimi K2.5 — 每次请求 870 输入,55,000 缓存,200 输出 token
- MiniMax M2.5 — 每次请求 300 输入,55,000 缓存,125 输出 token

您可以在 **<a href={console}>console</a>** 中跟踪当前的用量。

:::tip
如果您达到使用限制,您可以继续使用免费模型。
:::

随着我们从早期使用和反馈中学习,使用限制可能会发生变化。

---

### 定价

OpenCode Go 是一个 **$10/月** 的订阅计划。以下是**每 1M token** 的价格。

| Model        | Input | Output | Cached Read |
| ------------ | ----- | ------ | ----------- |
| GLM-5        | $1.00 | $3.20  | $0.20       |
| Kimi K2.5    | $0.60 | $3.00  | $0.10       |
| MiniMax M2.5 | $0.30 | $1.20  | $0.03       |

---

### 超出限制的使用

如果您的 Zen 余额中还有信用点数,您可以在控制台中启用 **Use balance**(使用余额)选项。启用后,当您达到使用限制时,Go 将回退到您的 Zen 余额,而不是阻止请求。

---

## 端点

您也可以通过以下 API 端点访问 Go 模型。

| Model        | Model ID     | Endpoint                                         | AI SDK Package              |
| ------------ | ------------ | ------------------------------------------------ | --------------------------- |
| GLM-5        | glm-5        | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| Kimi K2.5    | kimi-k2.5    | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| MiniMax M2.5 | minimax-m2.5 | `https://opencode.ai/zen/go/v1/messages`         | `@ai-sdk/anthropic`         |

OpenCode 配置中的 [model id](/docs/config/#models) 使用 `opencode-go/<model-id>` 格式。例如,对于 Kimi K2.5,您将在配置中使用 `opencode-go/kimi-k2.5`。
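下面是一个假设性的示意(非官方示例):为 MiniMax M2.5 的 `/v1/messages` 端点构造 Anthropic 风格的请求体。其中 `x-api-key` 请求头与 `max_tokens` 字段是基于 Anthropic Messages API 惯例的假设。

```python
import json

# 上表中 MiniMax M2.5 对应的 Anthropic 风格端点。
ENDPOINT = "https://opencode.ai/zen/go/v1/messages"

def build_messages_request(api_key: str, prompt: str) -> tuple[dict, str]:
    # "x-api-key" 请求头为基于 Anthropic API 惯例的假设
    headers = {
        "x-api-key": api_key,
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": "minimax-m2.5",  # 上表中的 Model ID
        "max_tokens": 1024,       # Messages API 通常要求的字段(假设)
        "messages": [{"role": "user", "content": prompt}],
    })
    return headers, body

headers, body = build_messages_request("YOUR_API_KEY", "你好")
```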
---

## 隐私

该计划主要为国际用户设计,模型托管在美国、欧盟和新加坡,以确保稳定的全球访问。

如有任何问题,请 <a href={email}>联系我们</a>。

---

## 目标

我们创建 OpenCode Go 是为了:

1. 通过低成本订阅让更多人**获得** AI 编程能力。
2. 提供对最佳开源编程模型的**可靠**访问。
3. 策划经过**测试和基准测试**的、适合编程代理使用的模型。
4. **无锁定**,允许您在 OpenCode 中使用任何其他提供商。
@@ -1,145 +0,0 @@
---
title: Go
description: A low-cost subscription for open-source coding models.
---

import config from "../../../../config.mjs"
export const console = config.console
export const email = `mailto:${config.email}`

OpenCode Go is a low-cost **$10/month** subscription that gives you reliable access to popular open-source coding models.

:::note
OpenCode Go is currently in beta.
:::

Go works like any other provider in OpenCode. You subscribe to OpenCode Go and get your API key. It is **completely optional**; you don't need it to use OpenCode.

It is designed primarily for international users, with models hosted in the US, EU, and Singapore to provide reliable global access.

---

## Background

Open models have become very good. They now perform close to proprietary models on coding tasks. And because many providers can serve them competitively, they are often much cheaper.

However, getting reliable, low-latency access to them can be difficult. Providers vary in quality and availability.

:::tip
We test a curated set of models and providers that work well with OpenCode.
:::

To address this, we did a few things:

1. We tested a curated set of open models and talked to their teams about how best to run them.
2. We then worked with several providers to make sure these models are served correctly.
3. Finally, we benchmarked the model/provider combinations and arrived at a list we feel comfortable recommending.

OpenCode Go gives you access to these models for **$10/month**.

---

## How it works

OpenCode Go works like any other provider in OpenCode.

1. Sign in to **<a href={console}>OpenCode Zen</a>**, subscribe to Go, and copy your API key.
2. Run the `/connect` command in the TUI, select `OpenCode Go`, and paste your API key.
3. Run `/models` in the TUI to see the list of models available through Go.

:::note
Only one member per workspace can subscribe to OpenCode Go.
:::

The current model list includes:

- **GLM-5**
- **Kimi K2.5**
- **MiniMax M2.5**

The model list may change as we test and add models.

---

## Usage limits

OpenCode Go includes the following limits:

- **5-hour limit** — $12 of usage
- **Weekly limit** — $30 of usage
- **Monthly limit** — $60 of usage

Limits are defined in dollar value. This means the actual number of requests you get depends on the model you use. Cheaper models like MiniMax M2.5 allow more requests, while higher-cost models like GLM-5 allow fewer.

The table below gives estimated request counts based on typical Go usage patterns:

|                      | GLM-5 | Kimi K2.5 | MiniMax M2.5 |
| -------------------- | ----- | --------- | ------------ |
| Requests per 5 hours | 1,150 | 1,850     | 30,000       |
| Requests per week    | 2,880 | 4,630     | 75,000       |
| Requests per month   | 5,750 | 9,250     | 150,000      |

Estimates are based on observed average request patterns:

- GLM-5 — 700 input, 52,000 cached, 150 output tokens per request
- Kimi K2.5 — 870 input, 55,000 cached, 200 output tokens per request
- MiniMax M2.5 — 300 input, 55,000 cached, 125 output tokens per request

You can track your current usage in the **<a href={console}>console</a>**.

:::tip
If you hit a usage limit, you can keep working with the free models.
:::

Usage limits may change as we learn from early usage and feedback.

---

### Pricing

OpenCode Go is a **$10/month** subscription. Below are the prices **per 1M tokens**.

| Model        | Input | Output | Cache read |
| ------------ | ----- | ------ | ---------- |
| GLM-5        | $1.00 | $3.20  | $0.20      |
| Kimi K2.5    | $0.60 | $3.00  | $0.10      |
| MiniMax M2.5 | $0.30 | $1.20  | $0.03     |
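As a rough sanity check, the dollar limits, list prices, and average request patterns above can be combined into a per-request cost, which lands in the same ballpark as the estimated request counts. This is a minimal sketch; the published table also reflects internal accounting and rounding, so the numbers won't match exactly.

```python
# Estimate requests per $12 (5-hour) window from list prices and
# observed average request patterns. All figures are taken from the
# tables in this page; results are ballpark, not exact.

PRICES = {  # $ per 1M tokens: (input, output, cache read)
    "GLM-5": (1.00, 3.20, 0.20),
    "Kimi K2.5": (0.60, 3.00, 0.10),
}

PATTERNS = {  # average tokens per request: (input, cached, output)
    "GLM-5": (700, 52_000, 150),
    "Kimi K2.5": (870, 55_000, 200),
}

def per_request_cost(model: str) -> float:
    inp_price, out_price, cache_price = PRICES[model]
    inp, cached, out = PATTERNS[model]
    return (inp * inp_price + cached * cache_price + out * out_price) / 1_000_000

for model in PRICES:
    cost = per_request_cost(model)
    print(f"{model}: ${cost:.5f}/request, ~{12 / cost:,.0f} requests per $12 window")
```

For GLM-5 this works out to roughly $0.0116 per request, or about a thousand requests per 5-hour window, consistent with the ~1,150 in the table above.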

---

### Usage beyond the limits

If you also have credits in your Zen balance, you can enable the **Use balance** option in the console. With it enabled, when you hit a usage limit, Go will draw on your Zen balance instead of blocking requests.

---

## Endpoints

You can also access the Go models through the following API endpoints.

| Model        | Model ID     | Endpoint                                         | AI SDK package              |
| ------------ | ------------ | ------------------------------------------------ | --------------------------- |
| GLM-5        | glm-5        | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| Kimi K2.5    | kimi-k2.5    | `https://opencode.ai/zen/go/v1/chat/completions` | `@ai-sdk/openai-compatible` |
| MiniMax M2.5 | minimax-m2.5 | `https://opencode.ai/zen/go/v1/messages`         | `@ai-sdk/anthropic`         |

The [model id](/docs/config/#models) in your OpenCode config uses the `opencode-go/<model-id>` format. For example, for Kimi K2.5 you would use `opencode-go/kimi-k2.5` in your config.
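For instance, setting Kimi K2.5 as the default model in an `opencode.json` config might look like this (a minimal sketch; see the config docs linked above for the full format):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "opencode-go/kimi-k2.5"
}
```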

---

## Privacy

This plan is designed primarily for international users, with models hosted in the US, EU, and Singapore to provide reliable global access.

If you have any questions, <a href={email}>contact us</a>.

---

## Goals

We built OpenCode Go to:

1. Make AI coding **accessible** to more people through a low-cost subscription.
2. Provide **reliable** access to the best open coding models.
3. Curate models that are **tested and benchmarked** for coding agents.
4. **No lock-in**, allowing you to use any other provider alongside it in OpenCode.
@@ -2,7 +2,7 @@
"name": "opencode",
"displayName": "opencode",
"description": "opencode for VS Code",
"version": "1.2.16",
"version": "1.2.15",
"publisher": "sst-dev",
"repository": {
"type": "git",

@@ -3,7 +2,9 @@
"globalEnv": ["CI", "OPENCODE_DISABLE_SHARE"],
"globalPassThroughEnv": ["CI", "OPENCODE_DISABLE_SHARE"],
"tasks": {
"typecheck": {},
"typecheck": {
"dependsOn": ["^build"]
},
"build": {
"dependsOn": ["^build"],
"outputs": ["dist/**"]
Block a user