From c29f715c9e7136a3f1dbc981034aa13e4a4b5871 Mon Sep 17 00:00:00 2001
From: Claude
Date: Mon, 27 Apr 2026 07:45:44 +0000
Subject: [PATCH] docs(api): clarify thread-safety bounds and multi-process
 limits
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The Storage trait itself does not declare `Send + Sync` bounds — only the
boxed instance held by `TaskRepository` does. Reword to describe what's
actually required of an implementation, and call out that `FileSystemStorage`
does not coordinate writes across processes outside the `.sync.lock`-protected
WebDAV flow.

https://claude.ai/code/session_01LweYBKMFbnTen7pCTdeQKq
---
 docs/API.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/API.md b/docs/API.md
index dd1ab78..46ed637 100644
--- a/docs/API.md
+++ b/docs/API.md
@@ -523,9 +523,9 @@ Key test areas:
 
 ## Thread Safety
 
-The `Storage` trait requires `Send + Sync`, and `TaskRepository` wraps `Box<dyn Storage>`, so repository instances can be shared across threads behind a `Mutex`. The Tauri GUI uses `Mutex<TaskRepository>` for this purpose.
+`TaskRepository` holds its storage as `Box<dyn Storage + Send + Sync>`, so any concrete storage implementation passed in must be `Send + Sync`. Repository instances can be shared across threads behind a `Mutex` — the Tauri GUI uses `Mutex<TaskRepository>` for this purpose.
 
 For concurrent access:
 1. Wrap `TaskRepository` in `Mutex` or `RwLock` (the Tauri app does this)
-2. Or create separate repository instances per thread (file system handles locking)
+2. Or create separate repository instances per thread. Note that `FileSystemStorage` does not coordinate writes between processes — concurrent multi-process writes to the same workspace are not supported outside the WebDAV sync flow, which uses a `.sync.lock` file.
 
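
The bound placement this patch documents can be sketched as follows. This is a minimal illustration, not the project's actual code: `InMemoryStorage`, the `load`/`list` methods, and the constructor are hypothetical; only the `Box<dyn Storage + Send + Sync>` shape and the Mutex-wrapped sharing pattern come from the patch.

```rust
use std::sync::Mutex;
use std::thread;

// The trait itself carries no `Send + Sync` bounds...
trait Storage {
    fn load(&self) -> Vec<String>;
}

// ...but the boxed trait object inside the repository does, so any
// concrete implementation handed to `TaskRepository` must be Send + Sync.
struct TaskRepository {
    storage: Box<dyn Storage + Send + Sync>,
}

impl TaskRepository {
    fn new(storage: Box<dyn Storage + Send + Sync>) -> Self {
        Self { storage }
    }
    fn list(&self) -> Vec<String> {
        self.storage.load()
    }
}

// Hypothetical implementation; it is Send + Sync because Vec<String> is.
struct InMemoryStorage {
    tasks: Vec<String>,
}

impl Storage for InMemoryStorage {
    fn load(&self) -> Vec<String> {
        self.tasks.clone()
    }
}

fn main() {
    // Shared across threads behind a Mutex, as the Tauri GUI does.
    let repo = Mutex::new(TaskRepository::new(Box::new(InMemoryStorage {
        tasks: vec!["write docs".to_string()],
    })));

    let count = thread::spawn(move || repo.lock().unwrap().list().len())
        .join()
        .unwrap();
    assert_eq!(count, 1);
}
```

Because the bounds live on the box rather than the trait, a storage type that is not thread-safe can still implement `Storage` for single-threaded use; it simply cannot be handed to `TaskRepository`.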