Compare commits

..

23 commits

Author SHA1 Message Date
SteelDynamite c5a3840aea
Merge pull request #66 from SteelDynamite/claude/gracious-cray-yN12q
docs(api): clarify thread-safety bounds and multi-process limits
2026-04-29 02:45:47 +01:00
Claude c29f715c9e
docs(api): clarify thread-safety bounds and multi-process limits
The Storage trait itself does not declare `Send + Sync` bounds — only the
boxed instance held by `TaskRepository` does. Reword to describe what's
actually required of an implementation, and call out that
`FileSystemStorage` does not coordinate writes across processes outside
the `.sync.lock`-protected WebDAV flow.

https://claude.ai/code/session_01LweYBKMFbnTen7pCTdeQKq
2026-04-27 07:45:44 +00:00
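
A minimal sketch of the bound placement this commit documents (simplified, illustrative types; the real `Storage` trait has more methods). The trait itself declares no `Send + Sync` bounds; the constraint appears only where the trait object is boxed:

    trait Storage {
        fn task_count(&self) -> usize;
    }

    struct TaskRepository {
        storage: Box<dyn Storage + Send + Sync>,
    }

    impl TaskRepository {
        // Only implementations that are themselves Send + Sync can be boxed here.
        fn new<S: Storage + Send + Sync + 'static>(storage: S) -> Self {
            Self { storage: Box::new(storage) }
        }

        fn task_count(&self) -> usize {
            self.storage.task_count()
        }
    }

    // Usage: any Send + Sync implementation satisfies the constructor bound.
    struct InMemory(usize);

    impl Storage for InMemory {
        fn task_count(&self) -> usize { self.0 }
    }

    fn main() {
        let repo = TaskRepository::new(InMemory(3));
        assert_eq!(repo.task_count(), 3);
    }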
SteelDynamite 6f4d00b912
Merge pull request #65 from SteelDynamite/claude/serene-ride-Gt8lp
audit: 2026-04-27 — sync clone, google metadata errors, dedup invariant
2026-04-27 08:40:13 +01:00
SteelDynamite 39718ef700
Merge pull request #64 from SteelDynamite/claude/dreamy-brown-4XuTd
docs: sync documentation with codebase state
2026-04-27 08:39:21 +01:00
Claude c57ffd3f55
docs(audit): log 2026-04-27 findings
2026-04-27 07:23:34 +00:00
Claude 12adfdc532
refactor(storage): drop unreachable error in dedup loop
The dedup loop wrapped its winner in `Option<Task>` and then mapped the
`None` case to `Error::InvalidData("Empty dedup entries for task")`.
That branch is unreachable: `by_id` is built by pushing every entry of
`file_tasks` into the vector for its UUID, so every group has at least
one entry, and the `len() > 1` branch keeps the first element after
`drain(1..)`.

Replace the spurious error with `expect` calls that document the
invariant and let the dedup loop yield `Task` directly instead of
`Option<Task>`.
2026-04-27 07:23:12 +00:00
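
The unreachability argument can be reproduced in isolation. A standalone sketch under assumed simplified types (u64 ids instead of UUIDs, `String` instead of `Task`):

    use std::collections::HashMap;

    fn dedup(file_tasks: Vec<(u64, String)>) -> Vec<String> {
        let mut by_id: HashMap<u64, Vec<String>> = HashMap::new();
        for (id, task) in file_tasks {
            // A key is only ever inserted together with its first entry,
            // so no group can be empty.
            by_id.entry(id).or_default().push(task);
        }
        by_id
            .into_values()
            .map(|entries| {
                // The expect documents the invariant instead of threading an
                // unreachable error through the loop.
                entries
                    .into_iter()
                    .next()
                    .expect("dedup group is non-empty")
            })
            .collect()
    }

    fn main() {
        let tasks = vec![(1, "a".into()), (1, "a-dup".into()), (2, "b".into())];
        let winners = dedup(tasks);
        assert_eq!(winners.len(), 2); // one winner per id
    }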
Claude 6e161ba819
fix(google_tasks): surface metadata write failures
`sync_google_tasks` silently dropped errors from `.listdata.json`
and `.onyx-workspace.json` atomic writes via `let _ = ...`, so a sync
could report `downloaded: N` while the list/workspace ordering had not
been persisted. Push those errors into the `errors` vec returned in
`GoogleSyncResult` so callers see the failure.
2026-04-27 07:22:27 +00:00
Claude e8a69a3222
perf(sync): avoid cloning upload payload
`SyncAction::Upload` cloned the file bytes solely so it could later read
`data.len()` for the sync-state record.  Capture the length up front and
move the buffer into `put_file`.
2026-04-27 07:22:01 +00:00
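
The fix is the usual read-before-move ownership pattern. A toy sketch with a stand-in `put_file` (not the real WebDAV client API):

    // Stand-in for the WebDAV client's put_file: takes ownership of the buffer.
    fn put_file(_path: &str, data: Vec<u8>) -> usize { data.len() }

    fn main() {
        let data = vec![0u8; 1024];
        let len = data.len() as u64;              // read the metadata first...
        let _ = put_file("notes/task.md", data);  // ...then move the buffer; no clone
        println!("recorded {} bytes in the sync state", len);
    }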
Claude 839b744720
docs: sync documentation with codebase state
- PLAN.md: uncheck push/pull sync mode selector (backend supports it
  via SyncMode enum, but no UI exists in SettingsScreen; always full sync)
- PLAN.md: bump Last Updated to 2026-04-27, Document Version to 4.5
- CLAUDE.md: update Current state date to 2026-04-27

https://claude.ai/code/session_01C7jV6wrzJVhHRKWsq87XwB
2026-04-27 00:55:46 +00:00
SteelDynamite 0506d44989
Merge pull request #62 from SteelDynamite/claude/serene-ride-JTRND
audit(2026-04-25): O(n²) sync-status + cascade-delete + atomic-write dedup
2026-04-27 01:50:09 +01:00
Claude e1c4fd7dfb
docs(audit): log 2026-04-25 findings
2026-04-25 07:28:33 +00:00
Claude 8c8735b2b4
refactor(config): reuse storage::atomic_write for save_to_file
`AppConfig::save_to_file` had its own copy of the temp-file + rename +
cleanup-on-failure dance.  `storage::atomic_write` is already
`pub(crate)` and does exactly that — `google_tasks.rs` was migrated to
use it earlier.  Drop the duplicate so there's one canonical atomic
write path in the crate.
2026-04-25 07:27:25 +00:00
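
For reference, the temp-file-plus-rename dance being consolidated looks roughly like this (an assumed shape for `storage::atomic_write`; the crate's actual signature may differ):

    use std::{fs, io, path::Path};

    fn atomic_write(path: &Path, bytes: &[u8]) -> io::Result<()> {
        let tmp = path.with_extension("tmp");
        fs::write(&tmp, bytes)?; // write the full contents beside the target
        if let Err(e) = fs::rename(&tmp, path) {
            let _ = fs::remove_file(&tmp); // best-effort cleanup of the temp file
            return Err(e);
        }
        Ok(()) // rename is atomic on POSIX, so readers never observe a partial file
    }

    fn main() -> io::Result<()> {
        atomic_write(Path::new("config.json"), b"{}")
    }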
Claude 069afe8d5e
perf(tauri): build child index once for cascade delete
`delete_task`'s descendant walk re-scanned the full task list on every
frontier pop, so the cost was O(n * depth) where n is the list size.
For a list of a few hundred tasks with even moderate nesting that's
already noticeable.

Index `parent_id -> [child_id]` once up-front; the BFS then visits each
descendant in O(1) amortised, dropping the total to O(n).
2026-04-25 07:26:56 +00:00
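
The same indexed walk in self-contained form (u64 ids standing in for `Uuid`; names are illustrative):

    use std::collections::{HashMap, HashSet};

    /// Collect every descendant of `root` from (id, parent_id) pairs.
    fn descendants(tasks: &[(u64, Option<u64>)], root: u64) -> HashSet<u64> {
        // One pass to build the index...
        let mut children_by_parent: HashMap<u64, Vec<u64>> = HashMap::new();
        for &(id, parent) in tasks {
            if let Some(pid) = parent {
                children_by_parent.entry(pid).or_default().push(id);
            }
        }
        // ...then the walk touches each descendant once: O(n) overall.
        let mut to_delete = HashSet::new();
        let mut frontier = vec![root];
        while let Some(parent) = frontier.pop() {
            if let Some(children) = children_by_parent.get(&parent) {
                for &child in children {
                    if to_delete.insert(child) {
                        frontier.push(child);
                    }
                }
            }
        }
        to_delete
    }

    fn main() {
        // 1 -> 2 -> 3, and 4 is unrelated
        let tasks = [(1, None), (2, Some(1)), (3, Some(2)), (4, None)];
        assert_eq!(descendants(&tasks, 1), HashSet::from([2, 3]));
    }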
Claude 1cdf5dff90
perf(sync): hash-set membership check in get_sync_status
The deletion-detection loop in `get_sync_status` scanned `local_files`
linearly for every tracked path in `sync_state.files`, making the cost
quadratic in the file count.  The earlier "pending change" loop just
above already does the inverse direction via `sync_state.files.get`
(O(1)).  Build a `HashSet<&str>` of local paths once and check it
the same way to make the function O(n).

This is called by the GUI status indicator, so the win shows up as
soon as a workspace tracks more than a handful of files.
2026-04-25 07:25:36 +00:00
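
The shape of the change, with illustrative types in place of the real `SyncState` and local-file structs:

    use std::collections::HashSet;

    fn count_locally_deleted(tracked: &[String], local: &[String]) -> usize {
        // Build the set once: O(n) total instead of repeated linear scans.
        let local_paths: HashSet<&str> = local.iter().map(|p| p.as_str()).collect();
        tracked
            .iter()
            .filter(|p| !local_paths.contains(p.as_str())) // O(1) membership test
            .count()
    }

    fn main() {
        let tracked = vec!["a.md".to_string(), "b.md".to_string()];
        let local = vec!["a.md".to_string()];
        assert_eq!(count_locally_deleted(&tracked, &local), 1); // b.md gone
    }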
SteelDynamite 56944360e0
Merge pull request #60 from SteelDynamite/claude/serene-ride-1mX8o
2026-04-24 22:12:58 +01:00
SteelDynamite 16cf409f32
Merge pull request #59 from SteelDynamite/claude/dreamy-brown-Ss931
2026-04-24 22:12:10 +01:00
Claude 8611f55573
docs(audit): log 2026-04-24 findings
2026-04-24 07:38:54 +00:00
Claude a9fac2c1d8
refactor(storage): drop single-caller sanitize_filename wrapper
`FileSystemStorage::sanitize_filename` was a one-line forwarder to
`crate::sanitize_filename` with a single call site in
`task_file_path`. The extra method added a layer of indirection
without value. Inline the crate-level call.
2026-04-24 07:38:18 +00:00
Claude 1fcc6e7f6d
fix(sync): purge orphan base entries when both sides deleted
`compute_sync_actions` emits no action for files that are missing from
both local and remote but still tracked in the sync base (the
`(None, None, Some(_))` arm). Nothing else cleaned those entries, so
`.syncstate.json` grew forever every time a file was deleted both
locally and remotely — and on each subsequent sync the same
no-op match fired again.

Add a `prune_orphan_bases` pass that runs before `compute_sync_actions`
in `sync_workspace_inner`, dropping any base entry whose path is in
neither the local nor remote scan. Unit-tested in isolation.
2026-04-24 07:37:39 +00:00
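
The pruning pass reduces to a `retain` over the tracked map. A simplified sketch (illustrative types; the real code collects local and remote scan paths into one set):

    use std::collections::{HashMap, HashSet};

    fn prune_orphans(tracked: &mut HashMap<String, u64>, local: &[&str], remote: &[&str]) {
        // A path is live if either side still has the file.
        let live: HashSet<&str> = local.iter().chain(remote.iter()).copied().collect();
        tracked.retain(|path, _| live.contains(path.as_str()));
    }

    fn main() {
        let mut tracked = HashMap::from([
            ("kept.md".to_string(), 1),
            ("orphan.md".to_string(), 2), // deleted both locally and remotely
        ]);
        prune_orphans(&mut tracked, &["kept.md"], &[]);
        assert!(tracked.contains_key("kept.md"));
        assert!(!tracked.contains_key("orphan.md"));
    }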
Claude 970210b647
refactor(sync): destructure remote in deleted-local branch
The `(None, Some(_), Some(b))` arm re-checked the already-matched
`remote` via `remote.is_some_and(...)`, which obscures intent and
compiles to redundant None-branch code. Bind `Some(r)` in the match
and use `r` directly.

No behavior change.
2026-04-24 07:36:28 +00:00
SteelDynamite 1bb1b67977
Merge pull request #58 from SteelDynamite/claude/serene-ride-LeiSc
2026-04-23 11:05:17 +01:00
SteelDynamite 4c318705f6
Merge pull request #57 from SteelDynamite/claude/dreamy-brown-nRanS
2026-04-23 11:01:53 +01:00
Claude 6e1921230a
docs: sync markdown files with current codebase state
- Remove BottomSheet.svelte from PLAN.md file structure (deleted in
  efb4cca — NewTaskInput hand-rolls its own sheet)
- Expand workspace path validation description in API.md and CLAUDE.md
  to include filesystem root "/" alongside system directories, matching
  the forbidden list added in fix(tauri): reject "/" root path

https://claude.ai/code/session_015BSAnuhvMBLk7s4g7dSE53
2026-04-19 08:16:47 +00:00
10 changed files with 127 additions and 43 deletions

View file

@@ -1,9 +1,4 @@
 {
-  "sandbox": {
-    "network": {
-      "allowedDomains": ["nx71726.your-storageshare.de"]
-    }
-  },
   "hooks": {
     "PreToolUse": [
       {

View file

@@ -1,5 +1,29 @@
 # Audit Log
 
+## 2026-04-27
+
+Found and fixed 3 issues:
+
+1. **Perf: needless clone of upload payload** (sync.rs:733) — the `SyncAction::Upload` arm read the file into `data`, computed `compute_checksum(&data)`, then called `client.put_file(path, data.clone())`. The clone existed only because the next statement needed `data.len()` for the sync-state record. Captured `data.len() as u64` into `len` first, moved `data` into `put_file`, and used `len` afterwards — one full byte copy avoided per uploaded file.
+2. **Bug: Google Tasks sync silently drops metadata-write failures** (google_tasks.rs:361, 377) — both `.listdata.json` and `.onyx-workspace.json` were written via `if let Ok(meta_content) = serde_json::to_string_pretty(...) { let _ = atomic_write(...); }`, so a serialization or atomic-write error returned `Ok(GoogleSyncResult { downloaded: N, errors: [] })` even though list/workspace ordering was never persisted. Both writes now push their errors into the `errors` vec already returned in `GoogleSyncResult`.
+3. **Code quality: unreachable dead-error path in storage dedup** (storage.rs:447) — the dedup loop computed `Option<Task>` from each `by_id` group and then `ok_or_else(|| Error::InvalidData("Empty dedup entries for task"))?`. `by_id` is only populated by `entry(uuid).or_default().push(entry)`, so every group has ≥1 element and the `None` branch is unreachable. Replaced the `Option`+`?` with direct `expect` calls (one per branch) that document the non-empty invariant; the loop now yields `Task` directly.
+
+## 2026-04-25
+
+Found and fixed 3 issues:
+
+1. **Perf: O(n²) deletion-detection in `get_sync_status`** (sync.rs:918) — for every path tracked in `sync_state.files`, the loop scanned `local_files` linearly via `.any(|f| f.path == *path)` to decide whether to count it as a deleted-locally pending change. The earlier "modified or new" loop already used the inverse direction with `sync_state.files.get(...)` (O(1)), so the second loop was the inconsistent one. Built a `HashSet<&str>` of local paths once and used `contains` for the membership check.
+2. **Perf: cascade delete walks all_tasks per frontier pop** (tauri/lib.rs:460) — `delete_task`'s descendant BFS scanned the full task list on every parent popped from the frontier, making the work O(n × depth). Built a `parent_id -> [child_id]` `HashMap` once, then the BFS visits each descendant in O(1) amortised, dropping total cost to O(n).
+3. **Code quality: duplicate atomic-write in `AppConfig::save_to_file`** (config.rs:114) — the function had its own copy of the temp-file + rename + cleanup-on-failure dance even though `storage::atomic_write` is `pub(crate)` and was already shared by `google_tasks.rs`. Replaced the inline implementation with a call to `crate::storage::atomic_write` so the crate has one canonical atomic write path.
+
+## 2026-04-24
+
+Found and fixed 3 issues:
+
+1. **Bug: orphan base entries never cleaned from sync state** (sync.rs) — when a file was deleted both locally and remotely, `compute_sync_actions` emitted no action (the `(None, None, Some(_))` arm), so the base entry in `.syncstate.json` persisted forever. On each subsequent sync the same no-op case fired and the state file grew. Added `prune_orphan_bases` pass in `sync_workspace_inner` that drops base entries not present in either scan.
+2. **Code quality: redundant is_some_and on already-matched Option** (sync.rs:208) — the `(None, Some(_), Some(b))` arm re-checked `remote` via `remote.is_some_and(|r| ...)` even though the pattern had just proven `remote` is `Some(_)`. Bound the inner value with `Some(r)` in the pattern and used `r` directly.
+3. **Code quality: single-caller sanitize_filename wrapper** (storage.rs) — `FileSystemStorage::sanitize_filename` was a one-line forwarder to `crate::sanitize_filename` with one call site. Inlined the crate call and removed the method.
+
 ## 2026-04-20
 
 Found and fixed 4 issues:

View file

@@ -64,7 +64,7 @@ The GUI uses Svelte 5 runes mode (`$state`, `$derived`, `$effect`, `$props()`).
 
 Pre-alpha. No users, no released builds, no data to migrate. Breaking changes to on-disk formats, config structure, or sync conventions are free — do not add migration logic.
 
-### Current state (2026-04-15)
+### Current state (2026-04-27)
 
 - **Phase 1** (Core + CLI): Complete
 - **Phase 2** (WebDAV sync): Complete — remote folder browsing, checksum-based conflict resolution, auto-sync lifecycle, per-workspace sync interval
@@ -106,7 +106,7 @@ Pre-alpha. No users, no released builds, no data to migrate. Breaking changes to
 - Task deduplication on load (handles sync conflict duplicates)
 - Subtask hierarchy: subtask count shown on parent tasks in list, subtask detail via three-panel slide navigation, inline add at top of subtask list (new subtasks prepend), collapsible completed subtasks section, cascade delete (parent deletion removes all subtasks with confirmation warning)
 - Custom confirmation dialogs (ConfirmDialog component replaces native confirm())
-- Workspace path validation (rejects system directories)
+- Workspace path validation (rejects filesystem root `/` and system directories: `/etc`, `/usr`, `/bin`, `/sbin`, `/var`, `/proc`, `/sys`, `/dev`)
 - Task detail auto-cleanup (taskStack clears when viewed task is deleted or list switches)
 - Swipe gestures on mobile: swipe left/right on a task to toggle completion (swipe direction depends on current status)
 - Accessibility: ARIA labels/roles on interactive components, keyboard handlers, `prefers-reduced-motion` CSS support

View file

@@ -765,7 +765,7 @@ WorkspaceConfig {
 - [x] List rename (inline input via list kebab menu in drawer)
 - [x] Keyboard shortcuts (Escape closes settings → detail → drawer → menus in priority order)
 - [x] Sync status indicators (last-sync time + upload/download counts chip in TasksScreen)
-- [x] Push/pull sync mode selection (session-only sync direction selector in SettingsScreen)
+- [ ] Push/pull sync mode selection (session-only sync direction selector in SettingsScreen)
 - [x] Group-by-date toggle per list (checkmark toggle in list kebab menu)
 - [x] Subtask hierarchy (expand/collapse, inline add, cascade toggle/delete)
 - [ ] Search/filter tasks
@@ -1058,6 +1058,6 @@ This project is free and open-source software licensed under GPL v3.
 ---
 
-**Last Updated**: 2026-04-23
-**Document Version**: 4.4
+**Last Updated**: 2026-04-27
+**Document Version**: 4.5
 **Status**: Ready to Implement - Milestone-Driven Plan

View file

@@ -455,12 +455,23 @@ fn delete_task(
     // so deleting a parent can't leave grandchildren orphaned with a
     // parent_id pointing at a deleted task.
     let all_tasks = repo.list_tasks(lid).map_err(|e| e.to_string())?;
+    // Build a parent -> children index in one pass so the BFS below is O(n)
+    // instead of O(n * depth) scanning all tasks for each frontier pop.
+    let mut children_by_parent: std::collections::HashMap<Uuid, Vec<Uuid>> =
+        std::collections::HashMap::new();
+    for t in &all_tasks {
+        if let Some(pid) = t.parent_id {
+            children_by_parent.entry(pid).or_default().push(t.id);
+        }
+    }
     let mut to_delete: std::collections::HashSet<Uuid> = std::collections::HashSet::new();
     let mut frontier: Vec<Uuid> = vec![tid];
     while let Some(parent) = frontier.pop() {
-        for t in &all_tasks {
-            if t.parent_id == Some(parent) && to_delete.insert(t.id) {
-                frontier.push(t.id);
+        if let Some(children) = children_by_parent.get(&parent) {
+            for &child_id in children {
+                if to_delete.insert(child_id) {
+                    frontier.push(child_id);
+                }
             }
         }
     }

View file

@@ -116,13 +116,7 @@ impl AppConfig {
             std::fs::create_dir_all(parent)?;
         }
         let content = serde_json::to_string_pretty(&self)?;
-        // Atomic write: write to temp file then rename to prevent corruption on crash
-        let temp = path.with_extension("tmp");
-        std::fs::write(&temp, &content)?;
-        if let Err(e) = std::fs::rename(&temp, path) {
-            let _ = std::fs::remove_file(&temp);
-            return Err(e.into());
-        }
+        crate::storage::atomic_write(path, content.as_bytes())?;
         Ok(())
     }

View file

@@ -358,8 +358,15 @@ pub async fn sync_google_tasks(
         list_meta.task_order = task_order;
         list_meta.updated_at = Utc::now();
-        if let Ok(meta_content) = serde_json::to_string_pretty(&list_meta) {
-            let _ = atomic_write(&listdata_path, meta_content.as_bytes());
+        match serde_json::to_string_pretty(&list_meta) {
+            Ok(meta_content) => {
+                if let Err(e) = atomic_write(&listdata_path, meta_content.as_bytes()) {
+                    errors.push(format!("Failed to write metadata for list '{}': {}", gt_list.title, e));
+                }
+            }
+            Err(e) => {
+                errors.push(format!("Failed to serialize metadata for list '{}': {}", gt_list.title, e));
+            }
         }
     }
@@ -374,8 +381,15 @@ pub async fn sync_google_tasks(
         RootMetadata::default()
     };
     root_meta.list_order = new_list_order;
-    if let Ok(meta_content) = serde_json::to_string_pretty(&root_meta) {
-        let _ = atomic_write(&root_meta_path, meta_content.as_bytes());
+    match serde_json::to_string_pretty(&root_meta) {
+        Ok(meta_content) => {
+            if let Err(e) = atomic_write(&root_meta_path, meta_content.as_bytes()) {
+                errors.push(format!("Failed to write workspace metadata: {}", e));
+            }
+        }
+        Err(e) => {
+            errors.push(format!("Failed to serialize workspace metadata: {}", e));
+        }
     }
 
     Ok(GoogleSyncResult { downloaded, errors })

View file

@@ -236,12 +236,8 @@ impl FileSystemStorage {
         Ok(path)
     }
 
-    fn sanitize_filename(name: &str) -> String {
-        crate::sanitize_filename(name)
-    }
-
     fn task_file_path(&self, list_dir: &Path, task: &Task) -> PathBuf {
-        let safe_title = Self::sanitize_filename(&task.title);
+        let safe_title = crate::sanitize_filename(&task.title);
         let filename = if safe_title.is_empty() {
             task.id.to_string()
         } else {
@@ -458,7 +454,9 @@ impl Storage for FileSystemStorage {
         let mut tasks = Vec::new();
         for (_id, entries) in by_id {
-            let winner = if entries.len() > 1 {
+            // `by_id` only inserts non-empty groups, so each `entries` has at
+            // least one element.
+            let task = if entries.len() > 1 {
                 // Read mtime once per file so sort_by doesn't hit the filesystem
                 // O(n log n) times and can't produce inconsistent orderings if a
                 // file is touched mid-sort.
@@ -483,12 +481,14 @@
                         eprintln!("Warning: failed to remove stale duplicate task file {:?}: {}", stale_path, e);
                     }
                 }
-                with_mtime.into_iter().next().map(|(_, t, _)| t)
+                let (_, t, _) = with_mtime.into_iter().next()
+                    .expect("dedup group is non-empty after drain(1..)");
+                t
             } else {
-                entries.into_iter().next().map(|(_, t)| t)
+                let (_, t) = entries.into_iter().next()
+                    .expect("dedup group is non-empty");
+                t
             };
-            let task = winner
-                .ok_or_else(|| Error::InvalidData("Empty dedup entries for task".to_string()))?;
             tasks.push(task);
         }

View file

@@ -204,8 +204,9 @@ pub fn compute_sync_actions(
         }
 
         // Remote present, local gone, base known: local was deleted
-        (None, Some(_), Some(b)) => {
-            let remote_changed = remote.is_some_and(|r| r.size != b.size || !timestamps_equal(r.last_modified.as_deref(), b.modified_at.as_deref()));
+        (None, Some(r), Some(b)) => {
+            let remote_changed = r.size != b.size
+                || !timestamps_equal(r.last_modified.as_deref(), b.modified_at.as_deref());
             if remote_changed {
                 // deleted locally + modified remotely -> download (remote wins)
                 actions.push(SyncAction::Download { path: path.to_string() });
@@ -229,6 +230,22 @@
     actions
 }
 
+/// Remove base entries for files that are gone from both local and remote.
+/// `compute_sync_actions` emits no action for the both-deleted case, so without
+/// this pass those entries would persist in `.syncstate.json` indefinitely.
+fn prune_orphan_bases(
+    sync_state: &mut SyncState,
+    local_files: &[LocalFileInfo],
+    remote_files: &[RemoteFileSnapshot],
+) {
+    let live_paths: std::collections::HashSet<&str> = local_files
+        .iter()
+        .map(|f| f.path.as_str())
+        .chain(remote_files.iter().map(|f| f.path.as_str()))
+        .collect();
+    sync_state.files.retain(|p, _| live_paths.contains(p.as_str()));
+}
+
 /// Compare two timestamps for equality by parsing both, tolerating format differences.
 fn timestamps_equal(a: Option<&str>, b: Option<&str>) -> bool {
     match (a, b) {
@@ -604,6 +621,12 @@ async fn sync_workspace_inner(
        }
    };
 
+    // Purge orphan base entries: files we previously tracked that are now gone
+    // from both local and remote. Without this, `.syncstate.json` accumulates
+    // ghost entries forever because the both-deleted diff case emits no action
+    // and so nothing else would clean them.
+    prune_orphan_bases(&mut sync_state, &local_files, &remote_files);
+
     // Compute actions from three-way diff
     let fresh_actions = compute_sync_actions(&local_files, &remote_files, &sync_state);
@@ -701,19 +724,20 @@ async fn execute_action(
                 Err(e) => return Err(e.into()),
             };
             let checksum = compute_checksum(&data);
+            let len = data.len() as u64;
 
             if let Some(parent) = path_parent(path) {
                 client.ensure_dir(parent).await?;
             }
 
             report(&format!(" ^ Uploading {}", path));
-            client.put_file(path, data.clone()).await?;
+            client.put_file(path, data).await?;
 
             // Record in sync state using local file metadata
             let modified = std::fs::metadata(&local_path).ok()
                 .and_then(|m| m.modified().ok())
                 .map(|t| { let dt: DateTime<Utc> = t.into(); dt.to_rfc3339() });
-            sync_state.record_file(path, &checksum, modified.as_deref(), data.len() as u64);
+            sync_state.record_file(path, &checksum, modified.as_deref(), len);
         }
 
         SyncAction::Conflict { path } => {
@@ -891,9 +915,15 @@ pub fn get_sync_status(workspace_path: &Path) -> Result<SyncStatusInfo> {
         }
     }
 
-    // Count files in base that are now missing locally (deleted)
+    // Count files in base that are now missing locally (deleted).
+    // Build a set of local paths once so the membership check is O(1) per
+    // tracked file instead of scanning local_files linearly each time.
+    let local_paths: std::collections::HashSet<&str> = local_files
+        .iter()
+        .map(|f| f.path.as_str())
+        .collect();
     for path in sync_state.files.keys() {
-        if !local_files.iter().any(|f| f.path == *path) {
+        if !local_paths.contains(path.as_str()) {
             pending_changes += 1;
         }
     }
@@ -1106,6 +1136,22 @@ mod tests {
         assert!(actions.is_empty());
     }
 
+    #[test]
+    fn test_prune_orphan_bases() {
+        let mut state = SyncState::default();
+        state.files.insert("kept_local.md".to_string(), make_base("a"));
+        state.files.insert("kept_remote.md".to_string(), make_base("b"));
+        state.files.insert("orphan.md".to_string(), make_base("c"));
+
+        let local = vec![make_local("kept_local.md", "a")];
+        let remote = vec![make_remote("kept_remote.md")];
+
+        prune_orphan_bases(&mut state, &local, &remote);
+
+        assert!(state.files.contains_key("kept_local.md"));
+        assert!(state.files.contains_key("kept_remote.md"));
+        assert!(!state.files.contains_key("orphan.md"));
+    }
+
     #[test]
     fn test_multiple_files_mixed() {
         let local = vec![

View file

@@ -456,7 +456,7 @@ All metadata and state files use an atomic write pattern (write to `.tmp` then r
 - **List names**: Rejected if they contain `/`, `\`, or `..` components. Canonicalized and verified to stay within workspace root.
 - **Sync paths**: Validated to reject `..` components and backslashes anywhere in the path before any file system operation.
-- **Workspace paths** (Tauri): Rejected if they point to system directories (`/etc`, `/usr`, `/bin`, etc.).
+- **Workspace paths** (Tauri): Rejected if they point to the filesystem root (`/`) or system directories (`/etc`, `/usr`, `/bin`, `/sbin`, `/var`, `/proc`, `/sys`, `/dev`).
 - **Filenames**: Sanitized to replace `/ \ : * ? " < > |` and control characters with `_`.
 
 ## Example: Complete Workflow
@@ -523,9 +523,9 @@ Key test areas:
 ## Thread Safety
 
-The `Storage` trait requires `Send + Sync`, and `TaskRepository` wraps `Box<dyn Storage + Send + Sync>`, so repository instances can be shared across threads behind a `Mutex`. The Tauri GUI uses `Mutex<AppState>` for this purpose.
+`TaskRepository` holds its storage as `Box<dyn Storage + Send + Sync>`, so any concrete storage implementation passed in must be `Send + Sync`. Repository instances can be shared across threads behind a `Mutex` — the Tauri GUI uses `Mutex<AppState>` for this purpose.
 
 For concurrent access:
 1. Wrap `TaskRepository` in `Mutex` or `RwLock` (the Tauri app does this)
-2. Or create separate repository instances per thread (file system handles locking)
+2. Or create separate repository instances per thread. Note that `FileSystemStorage` does not coordinate writes between processes — concurrent multi-process writes to the same workspace are not supported outside the WebDAV sync flow, which uses a `.sync.lock` file.