docs(api): clarify thread-safety bounds and multi-process limits
The Storage trait itself does not declare `Send + Sync` bounds — only the boxed instance held by `TaskRepository` does. Reword to describe what's actually required of an implementation, and call out that `FileSystemStorage` does not coordinate writes across processes outside the `.sync.lock`-protected WebDAV flow. https://claude.ai/code/session_01LweYBKMFbnTen7pCTdeQKq
This commit is contained in:
parent c57ffd3f55
commit c29f715c9e
@@ -523,9 +523,9 @@ Key test areas:
 
 ## Thread Safety
 
-The `Storage` trait requires `Send + Sync`, and `TaskRepository` wraps `Box<dyn Storage + Send + Sync>`, so repository instances can be shared across threads behind a `Mutex`. The Tauri GUI uses `Mutex<AppState>` for this purpose.
+`TaskRepository` holds its storage as `Box<dyn Storage + Send + Sync>`, so any concrete storage implementation passed in must be `Send + Sync`. Repository instances can be shared across threads behind a `Mutex` — the Tauri GUI uses `Mutex<AppState>` for this purpose.
 
 For concurrent access:
 
 1. Wrap `TaskRepository` in `Mutex` or `RwLock` (the Tauri app does this)
-2. Or create separate repository instances per thread (file system handles locking)
+2. Or create separate repository instances per thread. Note that `FileSystemStorage` does not coordinate writes between processes — concurrent multi-process writes to the same workspace are not supported outside the WebDAV sync flow, which uses a `.sync.lock` file.
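To make the reworded bound concrete, here is a minimal sketch of the shape the new paragraph describes: the `Storage` trait itself carries no `Send + Sync` bounds, the bounds live on the boxed trait object inside `TaskRepository`, and the repository is shared behind a `Mutex` as in option 1. The names `Storage`, `TaskRepository`, and `FileSystemStorage` come from the docs; the methods `load`/`tasks`/`new` and the `InMemoryStorage` stand-in are hypothetical, not this crate's real API.

```rust
use std::sync::Mutex;

// Assumed trait shape: no `Send + Sync` bounds declared on the trait itself.
trait Storage {
    fn load(&self) -> Vec<String>; // hypothetical method for illustration
}

// The bounds are required at the usage site: any concrete storage passed in
// must be `Send + Sync` to fit this field.
struct TaskRepository {
    storage: Box<dyn Storage + Send + Sync>,
}

impl TaskRepository {
    fn new(storage: Box<dyn Storage + Send + Sync>) -> Self {
        Self { storage }
    }

    fn tasks(&self) -> Vec<String> {
        self.storage.load()
    }
}

// Stand-in backend; the real `FileSystemStorage` is not reproduced here.
struct InMemoryStorage;

impl Storage for InMemoryStorage {
    fn load(&self) -> Vec<String> {
        vec!["task".to_string()]
    }
}

fn main() {
    // Option 1 from the list above: one shared repository behind a Mutex,
    // as the Tauri GUI does with `Mutex<AppState>`.
    let repo = Mutex::new(TaskRepository::new(Box::new(InMemoryStorage)));

    std::thread::scope(|s| {
        for _ in 0..2 {
            s.spawn(|| {
                let guard = repo.lock().unwrap();
                let _ = guard.tasks();
            });
        }
    });
}
```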
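The `.sync.lock` file mentioned in item 2 implies file-based mutual exclusion for the WebDAV sync flow only. A hypothetical sketch of that pattern, assuming the lock is an atomically created file (the commit does not show the actual implementation, and the function names here are invented):

```rust
use std::fs::{self, File, OpenOptions};
use std::io;
use std::path::Path;

// Hypothetical: acquire the lock by creating `.sync.lock` with `create_new`,
// which fails if the file already exists, so only one process proceeds.
fn acquire_sync_lock(workspace: &Path) -> io::Result<File> {
    OpenOptions::new()
        .write(true)
        .create_new(true) // atomic create-or-fail is the basis of the exclusion
        .open(workspace.join(".sync.lock"))
}

// Hypothetical: release by deleting the lock file.
fn release_sync_lock(workspace: &Path) -> io::Result<()> {
    fs::remove_file(workspace.join(".sync.lock"))
}

fn main() -> io::Result<()> {
    let workspace = Path::new(".");
    let _lock = acquire_sync_lock(workspace)?;
    // ... WebDAV sync work would happen here ...
    release_sync_lock(workspace)
}
```

Outside this flow, per the reworded docs, `FileSystemStorage` performs no such cross-process coordination, so separate processes must not write to the same workspace concurrently.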