Desktop Db Worker Node Backend Implementation Plan
Goal: Switch the Electron desktop app graph database backend from OPFS plus periodic disk export to db-worker-node with direct disk SQLite access, so the desktop app and logseq-cli can safely use the same data-dir at the same time.
Architecture: Reuse existing logseq.cli.server daemon orchestration and lock semantics, and add an Electron main process graph-scoped daemon manager.
Architecture: Replace the Electron renderer PersistentDB implementation, moving from frontend.persist-db.browser to an HTTP plus SSE remote client that talks to /v1/invoke and /v1/events on db-worker-node.
Tech Stack: ClojureScript, Electron main plus renderer, Node child_process, db-worker-node HTTP plus SSE API, Electron IPC with transit-json payloads, SQLite files under data-dir, lock files.
Related: Builds on docs/agent-guide/003-db-worker-node-cli-orchestration.md.
Related: Relates to docs/agent-guide/012-logseq-cli-graph-storage.md.
Related: Relates to docs/agent-guide/030-logseq-cli-db-graph-default-dir-locking.md.
Related: Owner-aware lifecycle follow-up is documented in docs/agent-guide/034-db-worker-node-owner-process-management.md.
Problem statement
The current desktop app uses an OPFS-backed SQLite worker in the renderer and periodically exports to disk through persist-db/run-export-periodically!.
The current logseq-cli starts and connects to db-worker-node, and directly reads and writes SQLite files in data-dir.
This split creates two write paths with eventual synchronization, and desktop plus CLI concurrent usage depends on export timing instead of one shared lock-governed write path.
The goal is to make desktop and CLI share db-worker-node semantics and lock behavior so disk SQLite becomes the single source of truth.
The desktop default DB graphs directory is ~/logseq/graphs, defined in deps/cli/src/logseq/cli/common/graph.cljs, and used by Electron DB file operations in src/electron/electron/db.cljs.
This plan focuses on backend access flow and lifecycle management, and does not change business-level thread-api semantics or SQLite schema.
Testing Plan
I will follow @test-driven-development for every phase and write failing tests before implementation changes.
I will prioritize pure-function and dependency-injected tests so core behavior can be validated without launching a full Electron GUI.
I will add Electron main db-worker manager lifecycle tests for first graph open, multi-window reuse, last-window close, and app shutdown cleanup.
I will add remote client transport tests for invoke success, invoke failure propagation, SSE disconnect and reconnect, and missing auth token handling.
I will extend db-worker-node tests with desktop plus CLI coexistence cases, including lock contention and stale lock recovery.
I will run targeted tests first and then run bb dev:lint-and-test, and I will apply the review checklist in @prompts/review.md.
NOTE: I will write all tests before I add any implementation behavior.
Current behavior map
| Area | Current implementation | Target implementation |
|---|---|---|
| Desktop graph DB runtime | Renderer uses frontend.persist-db.browser and an OPFS worker. | Renderer uses a remote PersistentDB client against db-worker-node. |
| Desktop persistence model | OPFS acts as source of truth and is periodically exported to disk through Electron IPC. | Disk SQLite in data-dir is the source of truth with no OPFS periodic export flow. |
| CLI persistence model | CLI starts db-worker-node and calls /v1/invoke. | Keep unchanged and align desktop with the same daemon semantics. |
| Lock and ownership | Desktop OPFS path bypasses db-worker-node lock semantics during in-memory writes. | Desktop and CLI both go through db-worker-node lock and single-writer enforcement. |
| Process lifecycle | Desktop has no graph-scoped daemon manager in main process. | Electron main manages per-graph daemon start, reuse, health checks, and stop. |
Integration sketch
```
Desktop Renderer
  -> requests graph runtime from Electron Main via electron.ipc/ipc (transit-json)
  -> receives {base-url, auth-token, repo} for db-worker-node
  -> calls /v1/invoke and listens on /v1/events through the remote PersistentDB client
Electron Main
  -> on graph open: ensure db-worker-node is started for the graph in data-dir
  -> on graph close or app quit: stop the graph daemon when the last window exits
  -> maintains a graph -> daemon state cache
Logseq CLI
  -> uses the existing logseq.cli.server ensure/start/stop
  -> talks to the same graph data-dir and lock protocol
db-worker-node
  -> provides the single write path to the disk SQLite files
  -> enforces lock ownership and readiness checks
```
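The renderer-to-daemon invoke hop in the sketch above can be illustrated as follows. The `/v1/invoke` path and the `{base-url, auth-token}` runtime config come from this plan; the plain-JSON body and the `buildInvokeRequest` helper name are assumptions for illustration (the real client would encode transit-json).

```javascript
// Illustrative only: builds the request descriptor the remote client would
// send to db-worker-node. Body shape is a placeholder assumption; the actual
// client encodes transit-json rather than plain JSON.
function buildInvokeRequest(runtime, threadApiMethod, args) {
  return {
    url: `${runtime.baseUrl}/v1/invoke`,
    method: "POST",
    headers: {
      authorization: `Bearer ${runtime.authToken}`, // auth-token from Electron main
      "content-type": "application/json",
    },
    body: JSON.stringify({ method: threadApiMethod, args }),
  };
}

const req = buildInvokeRequest(
  { baseUrl: "http://127.0.0.1:45551", authToken: "secret" },
  "thread-api/pull-block",
  ["my-graph", 42]
);
console.log(req.url); // http://127.0.0.1:45551/v1/invoke
```

A browser-safe transport (fetch in the renderer, no Node-only http module) would then perform this request and decode the response.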
Scope and non-goals
In scope are Electron main daemon lifecycle management, renderer persistence client switch, OPFS export-path removal, and required tests plus docs.
Out of scope are thread-api business behavior changes, SQLite schema changes, mobile behavior changes, and broad sync-system redesign.
Implementation plan
Phase 1: Add failing tests for the new desktop backend contract.
- Add a failing test in `/Users/rcmerci/gh-repos/logseq/src/test/logseq/cli/server_test.cljs` that verifies stable lock-conflict error reporting from daemon orchestration.
- Add `/Users/rcmerci/gh-repos/logseq/src/test/electron/db_worker_manager_test.cljs` and write a failing test for `ensure-started!` idempotency.
- Add a failing test in `/Users/rcmerci/gh-repos/logseq/src/test/electron/db_worker_manager_test.cljs` that verifies one daemon is reused across multiple windows for the same graph.
- Add a failing test in `/Users/rcmerci/gh-repos/logseq/src/test/electron/db_worker_manager_test.cljs` that verifies stop on last-window close.
- Add a failing test in `/Users/rcmerci/gh-repos/logseq/src/test/electron/db_worker_manager_test.cljs` that verifies stop-all behavior on app quit.
- Add `/Users/rcmerci/gh-repos/logseq/src/test/frontend/persist_db/remote_test.cljs` and write failing tests for invoke success and invoke error propagation.
- Add failing tests in `/Users/rcmerci/gh-repos/logseq/src/test/frontend/persist_db/remote_test.cljs` for SSE parsing and reconnect behavior.
- Run `bb dev:test -v 'electron.db-worker-manager-test'` and confirm the new assertions fail first.
- Run `bb dev:test -v 'frontend.persist-db.remote-test'` and confirm the new assertions fail first.
Phase 2: Extract shared daemon orchestration for CLI and Electron.
- Extract CLI-output-independent daemon orchestration logic from `/Users/rcmerci/gh-repos/logseq/src/main/logseq/cli/server.cljs`.
- Add `/Users/rcmerci/gh-repos/logseq/src/main/logseq/db_worker/daemon.cljs` and move the `spawn`, `wait-ready`, `read-lock`, and `cleanup-stale-lock` core functions into it.
- Keep the command-facing API shape in `/Users/rcmerci/gh-repos/logseq/src/main/logseq/cli/server.cljs` stable and delegate internally to the new helper.
- Extend `/Users/rcmerci/gh-repos/logseq/src/test/logseq/cli/server_test.cljs` with regression tests for unchanged CLI behavior.
- Run `bb dev:test -v 'logseq.cli.server-test'` and confirm green.
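The `wait-ready` helper that this phase extracts can be sketched as follows, in plain JavaScript for illustration. The `probe` argument is an injected async health check (a hypothetical name), matching the plan's preference for dependency-injected, GUI-free tests; the attempt count and interval are assumptions.

```javascript
// Sketch of a wait-ready poll loop: retries an injected readiness probe
// until it reports true or the attempt budget runs out.
async function waitReady(probe, { attempts = 10, intervalMs = 50 } = {}) {
  for (let i = 0; i < attempts; i++) {
    if (await probe()) return true; // daemon answered its readiness check
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("db-worker-node did not become ready in time");
}

// Example: a probe that succeeds on its third call.
let calls = 0;
waitReady(() => Promise.resolve(++calls >= 3), { intervalMs: 1 })
  .then((ok) => console.log(ok, calls)); // true 3
```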
Phase 3: Implement Electron main db-worker manager.
- Add `/Users/rcmerci/gh-repos/logseq/src/electron/electron/db_worker.cljs` with a graph-to-daemon state cache and reference counting.
- Implement `ensure-started!` in `/Users/rcmerci/gh-repos/logseq/src/electron/electron/db_worker.cljs` using the shared daemon helper and return `base-url` plus `auth-token`.
- Implement `ensure-stopped!` and `stop-all!` in `/Users/rcmerci/gh-repos/logseq/src/electron/electron/db_worker.cljs`.
- Add `electron.ipc/ipc` handlers in `/Users/rcmerci/gh-repos/logseq/src/electron/electron/handler.cljs` for renderer requests for graph runtime configuration, encoded with transit-json.
- Hook `stop-all!` into the app lifecycle in `/Users/rcmerci/gh-repos/logseq/src/electron/electron/core.cljs` at `before-quit` and `window-all-closed`.
- Hook the graph last-window stop logic into the close flow in `/Users/rcmerci/gh-repos/logseq/src/electron/electron/core.cljs`.
- Run `bb dev:test -v 'electron.db-worker-manager-test'` and confirm the lifecycle tests pass.
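The reference-counted manager described above can be sketched like this. `startDaemon` and `stopDaemon` stand in for the shared daemon helper and are hypothetical injected names; the real implementation lives in ClojureScript, so this is only a behavioral sketch of `ensure-started!`, `ensure-stopped!`, and `stop-all!`.

```javascript
// Sketch: graph -> daemon cache with reference counting in Electron main.
function makeDbWorkerManager(startDaemon, stopDaemon) {
  const daemons = new Map(); // graph name -> { runtime, refCount }
  return {
    // First open for a graph starts the daemon; later windows reuse it.
    async ensureStarted(graph) {
      let entry = daemons.get(graph);
      if (!entry) {
        entry = { runtime: await startDaemon(graph), refCount: 0 };
        daemons.set(graph, entry);
      }
      entry.refCount++;
      return entry.runtime; // {base-url, auth-token, repo} in the plan's terms
    },
    // Stops the daemon only when the last window for the graph closes.
    async ensureStopped(graph) {
      const entry = daemons.get(graph);
      if (!entry) return;
      if (--entry.refCount <= 0) {
        daemons.delete(graph);
        await stopDaemon(graph);
      }
    },
    // before-quit / window-all-closed hook: tear everything down.
    async stopAll() {
      for (const graph of [...daemons.keys()]) {
        daemons.delete(graph);
        await stopDaemon(graph);
      }
    },
  };
}

// Usage: two windows open the same graph, then close one by one.
(async () => {
  let stops = 0;
  const mgr = makeDbWorkerManager(async () => ({ port: 1 }), async () => stops++);
  await mgr.ensureStarted("g"); // starts the daemon
  await mgr.ensureStarted("g"); // second window reuses it
  await mgr.ensureStopped("g"); // one window still open, daemon kept
  await mgr.ensureStopped("g"); // last window closed, daemon stopped
  console.log(stops); // 1
})();
```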
Phase 4: Add Electron renderer remote PersistentDB client.
- Add `/Users/rcmerci/gh-repos/logseq/src/main/frontend/persist_db/remote.cljs` implementing `protocol/PersistentDB` with HTTP plus SSE transport.
- Implement a browser-safe invoke transport in `/Users/rcmerci/gh-repos/logseq/src/main/frontend/persist_db/remote.cljs` and avoid a Node-only `http` dependency in the renderer.
- Implement the event subscription and reconnect policy in `/Users/rcmerci/gh-repos/logseq/src/main/frontend/persist_db/remote.cljs` compatible with the current event-handler signatures.
- Extend runtime implementation selection in `/Users/rcmerci/gh-repos/logseq/src/main/frontend/persist_db.cljs` to explicitly use the remote client for Electron.
- Add an initialization flow in `/Users/rcmerci/gh-repos/logseq/src/main/frontend/persist_db.cljs` that fetches the runtime config from `electron.ipc/ipc` via transit-json before creating the remote client.
- Update `/Users/rcmerci/gh-repos/logseq/src/test/frontend/persist_db/remote_test.cljs` to green after implementation.
- Run `bb dev:test -v 'frontend.persist-db.remote-test'` and confirm green.
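The SSE parsing that the remote-test bullets above cover can be sketched as a pure function, which keeps it testable without a live /v1/events stream. This implements only the standard `data:`/blank-line SSE framing; the actual payload format db-worker-node emits is not specified here.

```javascript
// Sketch: split one received chunk of an SSE stream into event payloads.
// Multiple data: lines belong to one event; a blank line terminates it.
function parseSseChunk(chunk) {
  const events = [];
  let data = [];
  for (const line of chunk.split(/\r?\n/)) {
    if (line.startsWith("data:")) {
      data.push(line.slice(5).trimStart());
    } else if (line === "" && data.length > 0) {
      events.push(data.join("\n")); // blank line closes the current event
      data = [];
    }
  }
  return events;
}

console.log(parseSseChunk("data: a\n\ndata: b\ndata: c\n\n"));
// [ 'a', 'b\nc' ]
```

In the real client this parser would feed decoded events into the existing event-handler signatures, with a reconnect wrapper around the transport.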
Phase 5: Replace OPFS startup path and remove periodic export workflow.
- Remove the Electron-path dependency on `frontend.persist-db.browser/start-db-worker!` in `/Users/rcmerci/gh-repos/logseq/src/main/frontend/handler.cljs`.
- Start the remote `PersistentDB` initialization flow in `/Users/rcmerci/gh-repos/logseq/src/main/frontend/handler.cljs`.
- Remove the Electron-path invocation of `persist-db/run-export-periodically!` in `/Users/rcmerci/gh-repos/logseq/src/main/frontend/handler.cljs`.
- Adjust `:graph/save-db-to-disk` behavior in `/Users/rcmerci/gh-repos/logseq/src/main/frontend/handler/events.cljs` to a no-op or a user-guidance path.
- Adjust shortcut behavior for `:graph/db-save` in `/Users/rcmerci/gh-repos/logseq/src/main/frontend/modules/shortcut/config.cljs` so it no longer triggers the legacy export flow.
- Mark the `:db-get` and `:db-export` `electron.ipc/ipc` endpoints in `/Users/rcmerci/gh-repos/logseq/src/electron/electron/handler.cljs` as compatibility-only, or remove the unused entry points.
- Clean up OPFS-export-only logic in `/Users/rcmerci/gh-repos/logseq/src/electron/electron/db.cljs` while preserving the needed utility paths.
- Run `bb dev:test -v 'frontend.handler.route-test'` and `bb dev:test -v 'frontend.handler.common.config-edn-test'` for regression coverage.
Phase 6: Add desktop and CLI coexistence verification.
- Add concurrency tests in `/Users/rcmerci/gh-repos/logseq/src/test/frontend/worker/db_worker_node_test.cljs` covering desktop plus CLI access to the same graph.
- Add stale-lock recovery tests in `/Users/rcmerci/gh-repos/logseq/src/test/frontend/worker/db_worker_node_test.cljs`.
- Add tests in `/Users/rcmerci/gh-repos/logseq/src/test/logseq/cli/server_test.cljs` for CLI reuse or conflict reporting when the desktop has already started the graph daemon.
- Run `bb dev:test -v 'frontend.worker.db-worker-node-test'` and confirm green for the coexistence tests.
- Run `bb dev:test -v 'logseq.cli.server-test'` and confirm green for the coexistence tests.
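The first-start contention behavior those tests target can be sketched as a small decision function: a live lock owner means reuse (or a `:repo-locked` report), a dead owner means stale-lock cleanup followed by a normal start. `resolveLock` and the injected `isAlive` liveness check are hypothetical names, chosen so tests never need a real process table.

```javascript
// Sketch: decide what a client should do when it finds an existing lock file.
// lockInfo is null when no lock exists, otherwise { pid, ... } of the owner.
function resolveLock(lockInfo, isAlive) {
  if (!lockInfo) return { action: "start" };            // no lock: become owner
  if (isAlive(lockInfo.pid)) {
    return { action: "reuse", status: ":repo-locked" }; // live owner holds the graph
  }
  return { action: "cleanup-then-start" };              // stale lock from a crash
}

console.log(resolveLock({ pid: 12345 }, () => false));
// { action: 'cleanup-then-start' }
```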
Phase 7: Docs and rollout safety.
- Update `/Users/rcmerci/gh-repos/logseq/docs/cli/logseq-cli.md` to document the shared db-worker-node semantics between desktop and CLI.
- Add `/Users/rcmerci/gh-repos/logseq/docs/developers/desktop-db-worker-node.md` describing the Electron main lifecycle and the renderer remote-client init ordering.
- Add release notes stating that Electron no longer uses OPFS as the primary database storage path.
- Add rollback notes for a temporary fallback switch in case emergency recovery is required.
- Run `bb dev:lint-and-test` and confirm zero failures and zero errors.
- Run a final review against `@prompts/review.md` and fix findings before merge.
Edge cases
| Scenario | Expected behavior |
|---|---|
| Desktop opens a graph before daemon readiness completes. | Renderer waits for main-process ready runtime or receives a retryable error, and does not silently fall back to OPFS. |
| Desktop and CLI both attempt first start for the same graph daemon. | Exactly one owner acquires lock and the other path reuses the existing daemon or retries after a :repo-locked status check. |
| Graph name contains special characters. | Existing graph-dir encoding resolves to the same on-disk directory for desktop and CLI. |
| SSE connection drops. | Remote client reconnects and keeps event handling consistent, while invoke calls remain independent. |
| App exits abnormally and leaves a stale lock. | Next startup cleans stale lock via existing lock housekeeping logic without manual lock deletion. |
| Version switch from OPFS-backed desktop behavior. | No one-time migration is required because desktop already writes to disk data-dir, and startup verification should check disk DB readability before enabling the new backend. |
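For the "SSE connection drops" and "retryable error" rows above, a capped exponential backoff is one reasonable reconnect policy. The concrete base and cap values below are assumptions for illustration, not values taken from any existing client.

```javascript
// Sketch: reconnect delays that double from a base up to a cap,
// so a flapping daemon is retried quickly at first, then gently.
function backoffDelays(attempts, baseMs = 250, capMs = 4000) {
  return Array.from({ length: attempts }, (_, i) =>
    Math.min(baseMs * 2 ** i, capMs)
  );
}

console.log(backoffDelays(6)); // [ 250, 500, 1000, 2000, 4000, 4000 ]
```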
Verification commands and expected outputs
```
bb dev:test -v 'electron.db-worker-manager-test'
bb dev:test -v 'frontend.persist-db.remote-test'
bb dev:test -v 'logseq.cli.server-test'
bb dev:test -v 'frontend.worker.db-worker-node-test'
clojure -M:cljs compile db-worker-node
bb dev:lint-and-test
```
Each test command should finish with 0 failures, 0 errors.
`clojure -M:cljs compile db-worker-node` should produce a runnable `static/db-worker-node.js`.
In manual smoke tests, reads and writes from the desktop app and the CLI against the same graph should be immediately visible to each other, with no dependency on periodic export.
Rollout strategy
Phase one ships behind a feature flag that is enabled by default in development builds, to gather coexistence and recovery telemetry.
Phase two enables the flag by default in stable builds and keeps a short-lived rollback switch.
Phase three removes the legacy OPFS export-path code along with the rollback switch.
Clarity required before implementation
Confirm product-level messaging for desktop and CLI lock-contention conflicts on the same graph.
Confirm whether :graph/db-save is removed or redefined as a checkpoint action in the new model.
Confirm rollback-switch lifecycle and target removal release.
Testing Details
Tests focus on behavior, not implementation details, by validating daemon lifecycle, invoke responses, and event-stream consistency.
Core coexistence tests validate lock ownership, recovery, and cross-client visibility for the same graph instead of mock call counts.
Regression tests ensure existing CLI behavior stays stable and Electron startup plus shutdown does not leave zombie processes or lock files.
Implementation Details
- Reuse and extract daemon orchestration from `logseq.cli.server` to avoid duplicate process-management logic.
- Add a dedicated Electron main db-worker manager with a graph-scoped daemon state cache.
- Use a renderer remote `PersistentDB` client against the db-worker-node HTTP plus SSE endpoints.
- Use transit-json for frontend and Electron communication through `electron.ipc/ipc` (see also `ldb/write-transit-str`).
- Remove the periodic OPFS export workflow in Electron so disk SQLite is the only source of truth.
- Keep thread-api and the database schema unchanged to limit application-level regressions.
- Keep lock-file and `:repo-locked` semantics identical for desktop and CLI.
- Add reconnect and stale-lock recovery tests to cover availability risks.
- Roll out with feature-flag phases and a short rollback window.
- Follow `@test-driven-development` and `@prompts/review.md` through implementation and verification.