Compare commits

...

9 commits

Author SHA1 Message Date
SteelDynamite c5a3840aea
Merge pull request #66 from SteelDynamite/claude/gracious-cray-yN12q
docs(api): clarify thread-safety bounds and multi-process limits
2026-04-29 02:45:47 +01:00
Claude c29f715c9e
docs(api): clarify thread-safety bounds and multi-process limits
The Storage trait itself does not declare `Send + Sync` bounds — only the
boxed instance held by `TaskRepository` does. Reword to describe what's
actually required of an implementation, and call out that
`FileSystemStorage` does not coordinate writes across processes outside
the `.sync.lock`-protected WebDAV flow.

https://claude.ai/code/session_01LweYBKMFbnTen7pCTdeQKq
2026-04-27 07:45:44 +00:00
SteelDynamite 6f4d00b912
Merge pull request #65 from SteelDynamite/claude/serene-ride-Gt8lp
audit: 2026-04-27 — sync clone, google metadata errors, dedup invariant
2026-04-27 08:40:13 +01:00
SteelDynamite 39718ef700
Merge pull request #64 from SteelDynamite/claude/dreamy-brown-4XuTd
docs: sync documentation with codebase state
2026-04-27 08:39:21 +01:00
Claude c57ffd3f55
docs(audit): log 2026-04-27 findings
2026-04-27 07:23:34 +00:00
Claude 12adfdc532
refactor(storage): drop unreachable error in dedup loop
The dedup loop wrapped its winner in `Option<Task>` and then mapped the
`None` case to `Error::InvalidData("Empty dedup entries for task")`.
That branch is unreachable: `by_id` is built by pushing every entry of
`file_tasks` into the vector for its UUID, so every group has at least
one entry, and the `len() > 1` branch keeps the first element after
`drain(1..)`.

Replace the spurious error with `expect` calls that document the
invariant and let the dedup loop yield `Task` directly instead of
`Option<Task>`.
2026-04-27 07:23:12 +00:00
Claude 6e161ba819
fix(google_tasks): surface metadata write failures
`sync_google_workspace` silently dropped errors from `.listdata.json`
and `.onyx-workspace.json` atomic writes via `let _ = ...`, so a sync
could report `downloaded: N` while the list/workspace ordering had not
been persisted.  Push those errors into the `errors` vec returned by
`GoogleSyncResult` so callers see the failure.
2026-04-27 07:22:27 +00:00
Claude e8a69a3222
perf(sync): avoid cloning upload payload
`SyncAction::Upload` cloned the file bytes solely so it could later read
`data.len()` for the sync-state record.  Capture the length up front and
move the buffer into `put_file`.
2026-04-27 07:22:01 +00:00
SteelDynamite 0506d44989
Merge pull request #62 from SteelDynamite/claude/serene-ride-JTRND
audit(2026-04-25): O(n²) sync-status + cascade-delete + atomic-write dedup
2026-04-27 01:50:09 +01:00
5 changed files with 40 additions and 13 deletions

View file

@@ -1,5 +1,13 @@
 # Audit Log
+
+## 2026-04-27
+
+Found and fixed 3 issues:
+
+1. **Perf: needless clone of upload payload** (sync.rs:733) — the `SyncAction::Upload` arm read the file into `data`, computed `compute_checksum(&data)`, then called `client.put_file(path, data.clone())`. The clone existed only because the next statement needed `data.len()` for the sync-state record. Captured `data.len() as u64` into `len` first, moved `data` into `put_file`, and used `len` afterwards — one full byte copy avoided per uploaded file.
+2. **Bug: Google Tasks sync silently drops metadata-write failures** (google_tasks.rs:361, 377) — both `.listdata.json` and `.onyx-workspace.json` were written via `if let Ok(meta_content) = serde_json::to_string_pretty(...) { let _ = atomic_write(...); }`, so a serialization or atomic-write error returned `Ok(GoogleSyncResult { downloaded: N, errors: [] })` even though list/workspace ordering was never persisted. Both writes now push their errors into the `errors` vec already returned in `GoogleSyncResult`.
+3. **Code quality: unreachable dead-error path in storage dedup** (storage.rs:447) — the dedup loop computed `Option<Task>` from each `by_id` group and then `ok_or_else(|| Error::InvalidData("Empty dedup entries for task"))?`. `by_id` is only populated by `entry(uuid).or_default().push(entry)`, so every group has ≥1 element and the `None` branch is unreachable. Replaced the `Option`+`?` with direct `expect` calls (one per branch) that document the non-empty invariant; the loop now yields `Task` directly.
 
 ## 2026-04-25
 
 Found and fixed 3 issues:

View file

@@ -358,8 +358,15 @@ pub async fn sync_google_tasks(
         list_meta.task_order = task_order;
         list_meta.updated_at = Utc::now();
-        if let Ok(meta_content) = serde_json::to_string_pretty(&list_meta) {
-            let _ = atomic_write(&listdata_path, meta_content.as_bytes());
+        match serde_json::to_string_pretty(&list_meta) {
+            Ok(meta_content) => {
+                if let Err(e) = atomic_write(&listdata_path, meta_content.as_bytes()) {
+                    errors.push(format!("Failed to write metadata for list '{}': {}", gt_list.title, e));
+                }
+            }
+            Err(e) => {
+                errors.push(format!("Failed to serialize metadata for list '{}': {}", gt_list.title, e));
+            }
         }
     }
@@ -374,8 +381,15 @@ pub async fn sync_google_tasks(
         RootMetadata::default()
     };
     root_meta.list_order = new_list_order;
-    if let Ok(meta_content) = serde_json::to_string_pretty(&root_meta) {
-        let _ = atomic_write(&root_meta_path, meta_content.as_bytes());
+    match serde_json::to_string_pretty(&root_meta) {
+        Ok(meta_content) => {
+            if let Err(e) = atomic_write(&root_meta_path, meta_content.as_bytes()) {
+                errors.push(format!("Failed to write workspace metadata: {}", e));
+            }
+        }
+        Err(e) => {
+            errors.push(format!("Failed to serialize workspace metadata: {}", e));
+        }
     }
 
     Ok(GoogleSyncResult { downloaded, errors })
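
The new error paths above only make sense if `atomic_write` itself returns a `Result`. For readers unfamiliar with the pattern, here is a minimal sketch of a typical write-to-temp-then-rename helper; the signature, error type, and temp-file naming are assumptions for illustration, not necessarily how this repository's `atomic_write` is implemented.

```rust
// Hypothetical sketch of a temp-file-plus-rename atomic write; the real
// `atomic_write` in this repository may differ in signature and error type.
use std::fs;
use std::io::{self, Write};
use std::path::Path;

fn atomic_write(path: &Path, contents: &[u8]) -> io::Result<()> {
    // Write to a temporary sibling first so a crash mid-write never leaves a
    // truncated metadata file behind.
    let tmp = path.with_extension("tmp");
    {
        let mut f = fs::File::create(&tmp)?;
        f.write_all(contents)?;
        f.sync_all()?; // flush to disk before the rename makes the data visible
    }
    // On the same filesystem the rename is atomic, so readers see either the
    // old contents or the new, never a partial write.
    fs::rename(&tmp, path)
}
```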

View file

@@ -454,7 +454,9 @@ impl Storage for FileSystemStorage {
         let mut tasks = Vec::new();
         for (_id, entries) in by_id {
-            let winner = if entries.len() > 1 {
+            // `by_id` only inserts non-empty groups, so each `entries` has at
+            // least one element.
+            let task = if entries.len() > 1 {
                 // Read mtime once per file so sort_by doesn't hit the filesystem
                 // O(n log n) times and can't produce inconsistent orderings if a
                 // file is touched mid-sort.
@@ -479,12 +481,14 @@ impl Storage for FileSystemStorage {
                         eprintln!("Warning: failed to remove stale duplicate task file {:?}: {}", stale_path, e);
                     }
                 }
-                with_mtime.into_iter().next().map(|(_, t, _)| t)
+                let (_, t, _) = with_mtime.into_iter().next()
+                    .expect("dedup group is non-empty after drain(1..)");
+                t
             } else {
-                entries.into_iter().next().map(|(_, t)| t)
+                let (_, t) = entries.into_iter().next()
+                    .expect("dedup group is non-empty");
+                t
             };
-            let task = winner
-                .ok_or_else(|| Error::InvalidData("Empty dedup entries for task".to_string()))?;
             tasks.push(task);
         }
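
The `expect` calls rely on an invariant established where `by_id` is built, which this hunk does not show: every group is created by an insert-then-push, so it can never be empty. A minimal sketch of that grouping pattern, with placeholder types standing in for the real UUID key and `(path, Task)` entries:

```rust
// Sketch of how `by_id` acquires its non-empty-group invariant. `Id` and
// `Entry` are placeholders, not the repository's real types.
use std::collections::HashMap;

type Id = u64;
type Entry = String;

fn group_by_id(file_tasks: Vec<(Id, Entry)>) -> HashMap<Id, Vec<Entry>> {
    let mut by_id: HashMap<Id, Vec<Entry>> = HashMap::new();
    for (id, entry) in file_tasks {
        // A key only ever appears via this insert-then-push, so every Vec in
        // the map holds at least one entry: the invariant the `expect` calls
        // in the dedup loop document.
        by_id.entry(id).or_default().push(entry);
    }
    by_id
}
```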

View file

@@ -724,19 +724,20 @@ async fn execute_action(
                 Err(e) => return Err(e.into()),
             };
             let checksum = compute_checksum(&data);
+            let len = data.len() as u64;
             if let Some(parent) = path_parent(path) {
                 client.ensure_dir(parent).await?;
             }
             report(&format!(" ^ Uploading {}", path));
-            client.put_file(path, data.clone()).await?;
+            client.put_file(path, data).await?;
             // Record in sync state using local file metadata
             let modified = std::fs::metadata(&local_path).ok()
                 .and_then(|m| m.modified().ok())
                 .map(|t| { let dt: DateTime<Utc> = t.into(); dt.to_rfc3339() });
-            sync_state.record_file(path, &checksum, modified.as_deref(), data.len() as u64);
+            sync_state.record_file(path, &checksum, modified.as_deref(), len);
         }
         SyncAction::Conflict { path } => {
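
The hunk is small, but the ordering matters: once `data` is moved into `put_file`, its length can no longer be read, which is why `len` is captured first. A reduced sketch of the same move-then-use-length pattern; `FakeClient` and its `put_file` signature are invented for illustration:

```rust
// Reduced sketch of the move-then-use-length ordering; `FakeClient` and its
// `put_file` signature are made up for this example.
struct FakeClient;

impl FakeClient {
    async fn put_file(&self, _path: &str, _data: Vec<u8>) -> std::io::Result<()> {
        Ok(())
    }
}

async fn upload(client: &FakeClient, path: &str, data: Vec<u8>) -> std::io::Result<u64> {
    // Read everything still needed from `data` before it is given away.
    let len = data.len() as u64;
    // `data` is moved here: no clone, and a later `data.len()` would not compile.
    client.put_file(path, data).await?;
    Ok(len)
}
```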

View file

@@ -523,9 +523,9 @@ Key test areas:
 ## Thread Safety
 
-The `Storage` trait requires `Send + Sync`, and `TaskRepository` wraps `Box<dyn Storage + Send + Sync>`, so repository instances can be shared across threads behind a `Mutex`. The Tauri GUI uses `Mutex<AppState>` for this purpose.
+`TaskRepository` holds its storage as `Box<dyn Storage + Send + Sync>`, so any concrete storage implementation passed in must be `Send + Sync`. Repository instances can be shared across threads behind a `Mutex` — the Tauri GUI uses `Mutex<AppState>` for this purpose.
 
 For concurrent access:
 
 1. Wrap `TaskRepository` in `Mutex` or `RwLock` (the Tauri app does this)
-2. Or create separate repository instances per thread (file system handles locking)
+2. Or create separate repository instances per thread. Note that `FileSystemStorage` does not coordinate writes between processes — concurrent multi-process writes to the same workspace are not supported outside the WebDAV sync flow, which uses a `.sync.lock` file.
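
For readers tracing the reworded doc back to the code, a minimal sketch of the shape it describes follows: the trait itself carries no bounds, the boxed field does, and a repository can then be shared behind a `Mutex`. The trait method, constructor, and `main` body here are placeholders, not the crate's real API.

```rust
use std::sync::Mutex;

// The trait itself carries no `Send + Sync` bounds; the method is a placeholder.
trait Storage {
    fn load_all(&self) -> Vec<String>;
}

// The repository stores a boxed trait object that does carry the bounds, so
// any implementation handed to `new` must be `Send + Sync`.
struct TaskRepository {
    storage: Box<dyn Storage + Send + Sync>,
}

impl TaskRepository {
    fn new(storage: Box<dyn Storage + Send + Sync>) -> Self {
        Self { storage }
    }
}

// Stand-in for the real FileSystemStorage in storage.rs.
struct FileSystemStorage;
impl Storage for FileSystemStorage {
    fn load_all(&self) -> Vec<String> {
        Vec::new()
    }
}

fn main() {
    // Option 1 from the doc: one repository shared across threads behind a Mutex.
    let repo = Mutex::new(TaskRepository::new(Box::new(FileSystemStorage)));
    std::thread::scope(|s| {
        s.spawn(|| {
            let _tasks = repo.lock().unwrap().storage.load_all();
        });
    });
}
```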