Compare commits


42 commits

Author SHA1 Message Date
SteelDynamite c5a3840aea
Merge pull request #66 from SteelDynamite/claude/gracious-cray-yN12q
docs(api): clarify thread-safety bounds and multi-process limits
2026-04-29 02:45:47 +01:00
Claude c29f715c9e
docs(api): clarify thread-safety bounds and multi-process limits
The Storage trait itself does not declare `Send + Sync` bounds — only the
boxed instance held by `TaskRepository` does. Reword to describe what's
actually required of an implementation, and call out that
`FileSystemStorage` does not coordinate writes across processes outside
the `.sync.lock`-protected WebDAV flow.

https://claude.ai/code/session_01LweYBKMFbnTen7pCTdeQKq
2026-04-27 07:45:44 +00:00
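The distinction the commit documents can be sketched like this (illustrative types, not the crate's actual definitions): the trait itself carries no `Send + Sync` bounds; only the boxed trait object held by the repository does, so that is what an implementation must satisfy.

```rust
trait Storage {
    fn load(&self) -> Vec<String>;
}

struct MemStorage;

impl Storage for MemStorage {
    fn load(&self) -> Vec<String> {
        vec!["task".to_string()]
    }
}

struct TaskRepository {
    // The bound lives on the boxed instance, not on the trait declaration:
    // anything placed in this box must be Send + Sync.
    storage: Box<dyn Storage + Send + Sync>,
}

fn repo_with_mem() -> TaskRepository {
    TaskRepository { storage: Box::new(MemStorage) }
}
```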
SteelDynamite 6f4d00b912
Merge pull request #65 from SteelDynamite/claude/serene-ride-Gt8lp
audit: 2026-04-27 — sync clone, google metadata errors, dedup invariant
2026-04-27 08:40:13 +01:00
SteelDynamite 39718ef700
Merge pull request #64 from SteelDynamite/claude/dreamy-brown-4XuTd
docs: sync documentation with codebase state
2026-04-27 08:39:21 +01:00
Claude c57ffd3f55
docs(audit): log 2026-04-27 findings 2026-04-27 07:23:34 +00:00
Claude 12adfdc532
refactor(storage): drop unreachable error in dedup loop
The dedup loop wrapped its winner in `Option<Task>` and then mapped the
`None` case to `Error::InvalidData("Empty dedup entries for task")`.
That branch is unreachable: `by_id` is built by pushing every entry of
`file_tasks` into the vector for its UUID, so every group has at least
one entry, and the `len() > 1` branch keeps the first element after
`drain(1..)`.

Replace the spurious error with `expect` calls that document the
invariant and let the dedup loop yield `Task` directly instead of
`Option<Task>`.
2026-04-27 07:23:12 +00:00
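A minimal sketch of the invariant (simplified types, not the real `storage.rs` code): because each group is built with `entry().or_default().push()`, it always holds at least one element, so `expect` documents the invariant instead of routing an unreachable case through an error type.

```rust
use std::collections::HashMap;

fn dedup_first(entries: Vec<(u32, &str)>) -> Vec<(u32, &str)> {
    // Group every entry under its id; entry().or_default().push() guarantees
    // each group holds at least one element.
    let mut by_id: HashMap<u32, Vec<&str>> = HashMap::new();
    for (id, value) in entries {
        by_id.entry(id).or_default().push(value);
    }
    let mut winners: Vec<(u32, &str)> = by_id
        .into_iter()
        .map(|(id, mut group)| {
            if group.len() > 1 {
                group.drain(1..); // keep only the first entry
            }
            // The empty case is unreachable by construction; expect states that.
            let first = *group.first().expect("dedup group is never empty");
            (id, first)
        })
        .collect();
    winners.sort_by_key(|&(id, _)| id); // HashMap iteration order is arbitrary
    winners
}
```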
Claude 6e161ba819
fix(google_tasks): surface metadata write failures
`sync_google_workspace` silently dropped errors from `.listdata.json`
and `.onyx-workspace.json` atomic writes via `let _ = ...`, so a sync
could report `downloaded: N` while the list/workspace ordering had not
been persisted.  Push those errors into the `errors` vec returned by
`GoogleSyncResult` so callers see the failure.
2026-04-27 07:22:27 +00:00
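The shape of the fix, as a hedged sketch (field and function names simplified): rather than discarding the write result with `let _ = ...`, an `Err` is pushed into the errors vec the sync result already returns.

```rust
struct GoogleSyncResult {
    downloaded: u32,
    errors: Vec<String>,
}

// Record a metadata-write failure instead of silently dropping it.
fn record_metadata_write(result: &mut GoogleSyncResult, write: Result<(), String>) {
    if let Err(e) = write {
        result.errors.push(format!("metadata write failed: {}", e));
    }
}
```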
Claude e8a69a3222
perf(sync): avoid cloning upload payload
`SyncAction::Upload` cloned the file bytes solely so it could later read
`data.len()` for the sync-state record.  Capture the length up front and
move the buffer into `put_file`.
2026-04-27 07:22:01 +00:00
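The pattern, in miniature (a stand-in `put_file`, not the actual WebDAV client): read the length before the move, and the clone disappears.

```rust
// Stand-in for the WebDAV call: takes ownership of the buffer.
fn put_file(_path: &str, data: Vec<u8>) -> usize {
    data.len()
}

// Capture the length up front so the sync-state record needs no clone.
fn upload(path: &str, data: Vec<u8>) -> (usize, u64) {
    let len = data.len() as u64; // read before the buffer is moved
    let sent = put_file(path, data); // move, don't clone
    (sent, len)
}
```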
Claude 839b744720
docs: sync documentation with codebase state
- PLAN.md: uncheck push/pull sync mode selector (backend supports it
  via SyncMode enum, but no UI exists in SettingsScreen; always full sync)
- PLAN.md: bump Last Updated to 2026-04-27, Document Version to 4.5
- CLAUDE.md: update Current state date to 2026-04-27

https://claude.ai/code/session_01C7jV6wrzJVhHRKWsq87XwB
2026-04-27 00:55:46 +00:00
SteelDynamite 0506d44989
Merge pull request #62 from SteelDynamite/claude/serene-ride-JTRND
audit(2026-04-25): O(n²) sync-status + cascade-delete + atomic-write dedup
2026-04-27 01:50:09 +01:00
Claude e1c4fd7dfb
docs(audit): log 2026-04-25 findings 2026-04-25 07:28:33 +00:00
Claude 8c8735b2b4
refactor(config): reuse storage::atomic_write for save_to_file
`AppConfig::save_to_file` had its own copy of the temp-file + rename +
cleanup-on-failure dance.  `storage::atomic_write` is already
`pub(crate)` and does exactly that — `google_tasks.rs` was migrated to
use it earlier.  Drop the duplicate so there's one canonical atomic
write path in the crate.
2026-04-25 07:27:25 +00:00
Claude 069afe8d5e
perf(tauri): build child index once for cascade delete
`delete_task`'s descendant walk re-scanned the full task list on every
frontier pop, so the cost was O(n * depth) where n is the list size.
For a list of a few hundred tasks with even moderate nesting that's
already noticeable.

Index `parent_id -> [child_id]` once up-front; the BFS then visits each
descendant in O(1) amortised, dropping the total to O(n).
2026-04-25 07:26:56 +00:00
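The technique can be sketched as follows (toy ids rather than the app's task structs): one pass builds the `parent_id -> [child_id]` index, then the BFS touches each node once.

```rust
use std::collections::{HashMap, VecDeque};

// tasks: (id, optional parent id). Returns all descendants of `root` in BFS order.
fn descendants(tasks: &[(u32, Option<u32>)], root: u32) -> Vec<u32> {
    // Build the child index once, O(n).
    let mut children: HashMap<u32, Vec<u32>> = HashMap::new();
    for &(id, parent) in tasks {
        if let Some(p) = parent {
            children.entry(p).or_default().push(id);
        }
    }
    // BFS over the index: each descendant is visited in O(1) amortised.
    let mut out = Vec::new();
    let mut frontier = VecDeque::from([root]);
    while let Some(id) = frontier.pop_front() {
        if let Some(kids) = children.get(&id) {
            for &k in kids {
                out.push(k);
                frontier.push_back(k);
            }
        }
    }
    out
}
```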
Claude 1cdf5dff90
perf(sync): hash-set membership check in get_sync_status
The deletion-detection loop in `get_sync_status` scanned `local_files`
linearly for every tracked path in `sync_state.files`, making the cost
quadratic in the file count.  The earlier "pending change" loop just
above already does the inverse direction via `sync_state.files.get`
(O(1)).  Build a `HashSet<&str>` of local paths once and check it
the same way to make the function O(n).

This is called by the GUI status indicator, so the win shows up as
soon as a workspace tracks more than a handful of files.
2026-04-25 07:25:36 +00:00
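A sketch of the fix under simplified types (a `HashMap<String, u64>` standing in for `sync_state.files`): build the set of local paths once, then each membership check is O(1).

```rust
use std::collections::{HashMap, HashSet};

// Count tracked files that no longer exist locally.
fn count_deleted_locally(tracked: &HashMap<String, u64>, local_paths: &[String]) -> usize {
    // One O(n) pass to build the set; every check after that is O(1).
    let local: HashSet<&str> = local_paths.iter().map(|s| s.as_str()).collect();
    tracked
        .keys()
        .filter(|path| !local.contains(path.as_str()))
        .count()
}
```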
SteelDynamite 56944360e0
Merge pull request #60 from SteelDynamite/claude/serene-ride-1mX8o 2026-04-24 22:12:58 +01:00
SteelDynamite 16cf409f32
Merge pull request #59 from SteelDynamite/claude/dreamy-brown-Ss931 2026-04-24 22:12:10 +01:00
Claude 8611f55573
docs(audit): log 2026-04-24 findings 2026-04-24 07:38:54 +00:00
Claude a9fac2c1d8
refactor(storage): drop single-caller sanitize_filename wrapper
`FileSystemStorage::sanitize_filename` was a one-line forwarder to
`crate::sanitize_filename` with a single call site in
`task_file_path`. The extra method added a layer of indirection
without value. Inline the crate-level call.
2026-04-24 07:38:18 +00:00
Claude 1fcc6e7f6d
fix(sync): purge orphan base entries when both sides deleted
`compute_sync_actions` emits no action for files that are missing from
both local and remote but still tracked in the sync base (the
`(None, None, Some(_))` arm). Nothing else cleaned those entries, so
`.syncstate.json` grew forever every time a file was deleted both
locally and remotely — and on each subsequent sync the same
no-op match fired again.

Add a `prune_orphan_bases` pass that runs before `compute_sync_actions`
in `sync_workspace_inner`, dropping any base entry whose path is in
neither the local nor remote scan. Unit-tested in isolation.
2026-04-24 07:37:39 +00:00
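The pruning pass reduces to a single `retain` over the base map — a sketch with simplified types (path -> checksum strings rather than the real base-entry struct):

```rust
use std::collections::{HashMap, HashSet};

// Drop any base entry whose path appears in neither the local nor the remote
// scan, so the sync state stops accumulating files deleted on both sides.
fn prune_orphan_bases(
    bases: &mut HashMap<String, String>,
    local: &HashSet<String>,
    remote: &HashSet<String>,
) {
    bases.retain(|path, _| local.contains(path) || remote.contains(path));
}
```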
Claude 970210b647
refactor(sync): destructure remote in deleted-local branch
The `(None, Some(_), Some(b))` arm re-checked the already-matched
`remote` via `remote.is_some_and(...)`, which obscures intent and
compiles to redundant None-branch code. Bind `Some(r)` in the match
and use `r` directly.

No behavior change.
2026-04-24 07:36:28 +00:00
Claude 66513519ab
docs: fix credential return type, add missing test dir, update plan date
- docs/API.md: load_credentials returns Zeroizing<String> (not String)
- docs/DEVELOPMENT.md: add src/test/ directory to project structure
- PLAN.md: update Last Updated to 2026-04-23, bump version to 4.4

https://claude.ai/code/session_01By1aj94LMM7muDV7AT4egk
2026-04-23 10:08:34 +00:00
SteelDynamite 1bb1b67977
Merge pull request #58 from SteelDynamite/claude/serene-ride-LeiSc 2026-04-23 11:05:17 +01:00
SteelDynamite 4c318705f6
Merge pull request #57 from SteelDynamite/claude/dreamy-brown-nRanS 2026-04-23 11:01:53 +01:00
Claude 890f0c2126
docs(audit): log 2026-04-20 findings 2026-04-20 07:37:54 +00:00
Claude f42697f4ed
refactor(tauri): extract parse_uuid helper
17 Tauri commands repeated `Uuid::parse_str(&s).map_err(|e| e.to_string())`
for each UUID argument. Collapse the pattern into a `parse_uuid`
helper so callers read as `let id = parse_uuid(&list_id)?;`.
2026-04-20 07:35:50 +00:00
Claude 7754ea4b45
fix(tauri): surface errors from toggle_task cascade
When a parent task was toggled, `update_task` failures on child tasks
were silently swallowed with `let _ = ...`, leaving subtasks out of
sync with the parent's status and giving the user no feedback. Map the
error and propagate so the UI can show it and the user can retry.
2026-04-20 07:35:12 +00:00
Claude 6abe95692e
perf(tauri): use HashSet for cascade-delete dedup
Descendant walking in delete_task called Vec::contains in the inner
loop, making the traversal O(n^2) in the number of tasks. Swap the
visited set to HashSet so membership tests are O(1); HashSet::insert
also folds the contains-check and record-new steps into one call.
2026-04-20 07:34:52 +00:00
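The folding behaviour the commit relies on, in isolation: `HashSet::insert` returns `false` when the value was already present, so the membership test and the "record as visited" step become one O(1) call.

```rust
use std::collections::HashSet;

// Keep the first occurrence of each id, in order.
fn visit_once(ids: &[u32]) -> Vec<u32> {
    let mut visited = HashSet::new();
    // insert() returns true only for values not seen before.
    ids.iter().copied().filter(|&id| visited.insert(id)).collect()
}
```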
Claude 70fe7420cd
refactor(sync): remove dead .listdata.json guard in conflict path
The `.listdata.json` check was unreachable: the branch is already
gated on `parts[1].ends_with(".md")`, which `.listdata.json` fails.
2026-04-20 07:33:12 +00:00
Claude 6e1921230a
docs: sync markdown files with current codebase state
- Remove BottomSheet.svelte from PLAN.md file structure (deleted in
  efb4cca — NewTaskInput hand-rolls its own sheet)
- Expand workspace path validation description in API.md and CLAUDE.md
  to include filesystem root "/" alongside system directories, matching
  the forbidden list added in fix(tauri): reject "/" root path

https://claude.ai/code/session_015BSAnuhvMBLk7s4g7dSE53
2026-04-19 08:16:47 +00:00
SteelDynamite 6ae1006ab4
Merge pull request #56 from SteelDynamite/claude/serene-ride-XUY3D 2026-04-19 09:12:44 +01:00
SteelDynamite d8c6b9fc8e
Merge pull request #53 from SteelDynamite/claude/dreamy-brown-pFY5T 2026-04-19 09:12:08 +01:00
Claude 9a8a1a9f8e
style(sync): replace stray var with const in restartSyncInterval
Lone var in an otherwise let/const file — promote to const since the
value never gets reassigned. No behavior change.
2026-04-19 07:13:47 +00:00
Claude c952156491
refactor(date-picker): group selected-state declarations up top
selectedYear/selectedMonth were declared below selectDay, which writes
to them, and below isToday, which is declared nearby. Runtime worked
because the assignments only run on user click (after script init), but
the split made the initialization order confusing. Group all $state
fields at the top of the script.
2026-04-19 07:13:29 +00:00
Claude 62cf05480d
refactor(tauri): extract join_remote_path helper
Three call sites repeated the same "empty base -> child, otherwise
trim_end + slash + child" pattern. Pull it into a helper to keep the
join convention consistent across list_remote_folder, inspect, and
create_remote_workspace.
2026-04-19 07:12:37 +00:00
Claude e911ac1d94
refactor(tauri): extract credential_domain helper
Three call sites reproduced the same scheme://host parsing inline. Pull
it into a named helper so the domain-extraction convention lives in one
place.
2026-04-19 07:11:53 +00:00
Claude 937b6c2c7d
refactor(storage): read dedup mtimes once instead of in sort closure
sort_by may call the comparator many times, so the previous tiebreaker
re-read each duplicate file's metadata on every comparison. With N
duplicates that's O(N log N) stat calls, and the ordering could flip
mid-sort if a file was touched concurrently. Snapshot mtime per file up
front and sort on the cached values.
2026-04-19 07:09:49 +00:00
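The standard library has a direct expression of this idea: `sort_by_cached_key` evaluates the key once per element. The sketch below sorts by a cheap key (string length) standing in for the per-file mtime read.

```rust
// The key closure runs once per element, not once per comparison — the same
// shape as snapshotting each file's mtime before the sort instead of
// re-reading metadata inside the comparator.
fn sort_by_name_len(mut paths: Vec<String>) -> Vec<String> {
    paths.sort_by_cached_key(|p| p.len());
    paths
}
```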
Claude 4e8f7c4536
fix(tauri): reject "/" root path in workspace validation
trim_end_matches('/') collapses "/" to "", which then isn't matched by
the forbidden list, so a root-filesystem workspace slipped through. Keep
"/" as the canonical form when the stripped value is empty.
2026-04-19 07:08:42 +00:00
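The edge case in isolation (a sketch, not the validator itself): `trim_end_matches('/')` turns `"/"` into `""`, which would slip past a forbidden-path list, so the empty result is mapped back to `"/"`.

```rust
// Strip trailing slashes, but keep "/" as the canonical form when stripping
// would collapse the path to the empty string.
fn canonical_path(path: &str) -> &str {
    let stripped = path.trim_end_matches('/');
    if stripped.is_empty() { "/" } else { stripped }
}
```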
Claude b977d275ba
docs: sync markdown docs with current codebase state
- CLAUDE.md: add `sync` to the CLI commands list (commands/sync.rs exists)
- PLAN.md: remove BottomSheet.svelte (deleted in efb4cca)
- DEVELOPMENT.md: add grouping.ts and paths.ts to the lib directory listing

https://claude.ai/code/session_01YbcpJqmwpEW5tCJFFkMSPZ
2026-04-18 08:53:10 +00:00
SteelDynamite 065118789f
Merge pull request #52 from SteelDynamite/claude/smoke-test-and-fixes-TwfSh 2026-04-18 09:49:21 +01:00
SteelDynamite 92475483de
Merge pull request #51 from SteelDynamite/claude/dreamy-brown-YlW25 2026-04-17 16:36:33 +01:00
Claude 771e104486
Merge remote-tracking branch 'origin/main' into pr51-merge
# Conflicts:
#	README.md
2026-04-17 15:01:58 +00:00
Claude 7bef6b07bc
docs: sync markdown docs with actual codebase state
- README.md: update Phase 4 status to reflect Android preliminaries done
  (file-watcher gating, tauri-plugin-credentials, safe area insets, Android
  targets configured) but init/build not yet run; add tauri-plugin-credentials
  to project structure; expand docs/ tree; add newer GUI features (workspace
  rename, safe area insets, accessibility); add setup screen screenshot;
  update What's Next to note Phase 4 is in progress
- PLAN.md: fix Phase 4 checkboxes — android init and build-succeeds were
  marked [x] but gen/android/ does not exist; correct cfg gate annotation
  from #[cfg(not(mobile))] to #[cfg(not(target_os = "android"))]; update
  dependency snippet to reflect actual keyring/zeroize/sha2/quick-xml usage;
  bump Last Updated to 2026-04-17
- docs/DEVELOPMENT.md: add WEBKIT_DISABLE_DMABUF_RENDERER=1 Wayland note
  to tauri dev command

https://claude.ai/code/session_01MypN7wPNqeSgw8b5DYpMc1
2026-04-17 14:44:33 +00:00
13 changed files with 241 additions and 112 deletions

@@ -1,5 +1,38 @@
# Audit Log
## 2026-04-27
Found and fixed 3 issues:
1. **Perf: needless clone of upload payload** (sync.rs:733) — the `SyncAction::Upload` arm read the file into `data`, computed `compute_checksum(&data)`, then called `client.put_file(path, data.clone())`. The clone existed only because the next statement needed `data.len()` for the sync-state record. Captured `data.len() as u64` into `len` first, moved `data` into `put_file`, and used `len` afterwards — one full byte copy avoided per uploaded file.
2. **Bug: Google Tasks sync silently drops metadata-write failures** (google_tasks.rs:361, 377) — both `.listdata.json` and `.onyx-workspace.json` were written via `if let Ok(meta_content) = serde_json::to_string_pretty(...) { let _ = atomic_write(...); }`, so a serialization or atomic-write error returned `Ok(GoogleSyncResult { downloaded: N, errors: [] })` even though list/workspace ordering was never persisted. Both writes now push their errors into the `errors` vec already returned in `GoogleSyncResult`.
3. **Code quality: unreachable dead-error path in storage dedup** (storage.rs:447) — the dedup loop computed `Option<Task>` from each `by_id` group and then `ok_or_else(|| Error::InvalidData("Empty dedup entries for task"))?`. `by_id` is only populated by `entry(uuid).or_default().push(entry)`, so every group has ≥1 element and the `None` branch is unreachable. Replaced the `Option`+`?` with direct `expect` calls (one per branch) that document the non-empty invariant; the loop now yields `Task` directly.
## 2026-04-25
Found and fixed 3 issues:
1. **Perf: O(n²) deletion-detection in `get_sync_status`** (sync.rs:918) — for every path tracked in `sync_state.files`, the loop scanned `local_files` linearly via `.any(|f| f.path == *path)` to decide whether to count it as a deleted-locally pending change. The earlier "modified or new" loop already used the inverse direction with `sync_state.files.get(...)` (O(1)), so the second loop was the inconsistent one. Built a `HashSet<&str>` of local paths once and used `contains` for the membership check.
2. **Perf: cascade delete walks all_tasks per frontier pop** (tauri/lib.rs:460) — `delete_task`'s descendant BFS scanned the full task list on every parent popped from the frontier, making the work O(n × depth). Built a `parent_id -> [child_id]` `HashMap` once, then the BFS visits each descendant in O(1) amortised, dropping total cost to O(n).
3. **Code quality: duplicate atomic-write in `AppConfig::save_to_file`** (config.rs:114) — the function had its own copy of the temp-file + rename + cleanup-on-failure dance even though `storage::atomic_write` is `pub(crate)` and was already shared by `google_tasks.rs`. Replaced the inline implementation with a call to `crate::storage::atomic_write` so the crate has one canonical atomic write path.
## 2026-04-24
Found and fixed 3 issues:
1. **Bug: orphan base entries never cleaned from sync state** (sync.rs) — when a file was deleted both locally and remotely, `compute_sync_actions` emitted no action (the `(None, None, Some(_))` arm), so the base entry in `.syncstate.json` persisted forever. On each subsequent sync the same no-op case fired and the state file grew. Added `prune_orphan_bases` pass in `sync_workspace_inner` that drops base entries not present in either scan.
2. **Code quality: redundant is_some_and on already-matched Option** (sync.rs:208) — the `(None, Some(_), Some(b))` arm re-checked `remote` via `remote.is_some_and(|r| ...)` even though the pattern had just proven `remote` is `Some(_)`. Bound the inner value with `Some(r)` in the pattern and used `r` directly.
3. **Code quality: single-caller sanitize_filename wrapper** (storage.rs) — `FileSystemStorage::sanitize_filename` was a one-line forwarder to `crate::sanitize_filename` with one call site. Inlined the crate call and removed the method.
## 2026-04-20
Found and fixed 4 issues:
1. **Dead code in conflict recovery** (sync.rs:756) — `parts[1] != ".listdata.json"` was unreachable because the branch is already gated on `parts[1].ends_with(".md")`, which `.listdata.json` cannot satisfy. Removed the redundant check.
2. **O(n²) cascade delete** (tauri/lib.rs) — descendant traversal in `delete_task` used `Vec::contains` inside the inner loop, making it quadratic in the number of tasks per list. Swapped the visited set to `HashSet`; `HashSet::insert` folds the contains+push into one call.
3. **Silent cascade failure in toggle_task** (tauri/lib.rs) — subtask `update_task` errors were discarded with `let _ = ...`, leaving subtasks stuck at the old status with no UI feedback. Propagate the error so the frontend can surface it.
4. **Duplicated UUID-parse boilerplate** (tauri/lib.rs) — 17 commands repeated `Uuid::parse_str(&x).map_err(|e| e.to_string())?`. Extracted a `parse_uuid` helper so callers read as `let id = parse_uuid(&list_id)?;`.
## 2026-04-15
Found and fixed 4 issues:

@@ -30,7 +30,7 @@ The Tauri dev server runs on port 1422 (`vite.config.ts` and `tauri.conf.json`).
Two-crate workspace (`resolver = "2"`, edition 2021) plus a Tauri app:
- **onyx-core** — Pure Rust library. Storage trait with `FileSystemStorage` implementation, `TaskRepository` (main API), data models, config, error types. No CLI/UI dependencies. `keyring` feature-gated behind `keyring-storage` (default on) for Android compatibility.
-- **onyx-cli** — CLI frontend using clap. Commands are in `src/commands/` (init, workspace, list, task, group). Output formatting in `src/output.rs`.
+- **onyx-cli** — CLI frontend using clap. Commands are in `src/commands/` (init, workspace, list, task, group, sync). Output formatting in `src/output.rs`.
- **apps/tauri/** — Tauri v2 GUI. Svelte 5 frontend in `src/`, Rust backend in `src-tauri/` with Tauri commands that call into `onyx-core`. `notify` crate feature-gated for Android. `tauri-plugin-credentials/` provides cross-platform credential storage (Android Keystore via EncryptedSharedPreferences, desktop via keyring crate).
### Key patterns
@@ -64,7 +64,7 @@ The GUI uses Svelte 5 runes mode (`$state`, `$derived`, `$effect`, `$props()`).
Pre-alpha. No users, no released builds, no data to migrate. Breaking changes to on-disk formats, config structure, or sync conventions are free — do not add migration logic.
-### Current state (2026-04-15)
+### Current state (2026-04-27)
- **Phase 1** (Core + CLI): Complete
- **Phase 2** (WebDAV sync): Complete — remote folder browsing, checksum-based conflict resolution, auto-sync lifecycle, per-workspace sync interval
@@ -106,7 +106,7 @@ Pre-alpha. No users, no released builds, no data to migrate. Breaking changes to
- Task deduplication on load (handles sync conflict duplicates)
- Subtask hierarchy: subtask count shown on parent tasks in list, subtask detail via three-panel slide navigation, inline add at top of subtask list (new subtasks prepend), collapsible completed subtasks section, cascade delete (parent deletion removes all subtasks with confirmation warning)
- Custom confirmation dialogs (ConfirmDialog component replaces native confirm())
-- Workspace path validation (rejects system directories)
+- Workspace path validation (rejects filesystem root `/` and system directories: `/etc`, `/usr`, `/bin`, `/sbin`, `/var`, `/proc`, `/sys`, `/dev`)
- Task detail auto-cleanup (taskStack clears when viewed task is deleted or list switches)
- Swipe gestures on mobile: swipe left/right on a task to toggle completion (swipe direction depends on current status)
- Accessibility: ARIA labels/roles on interactive components, keyboard handlers, `prefers-reduced-motion` CSS support

PLAN.md

@@ -532,8 +532,11 @@ pub fn delete_credentials(domain: &str) -> Result<()>;
Add to `onyx-core/Cargo.toml`:
```toml
reqwest = { version = "0.12", features = ["json", "rustls-tls"] }
-keyring = "3.0"
-# TODO: Evaluate dav-client or implement custom WebDAV
+keyring = { version = "3", features = ["apple-native", "windows-native", "sync-secret-service"], optional = true }
+zeroize = "1"
+sha2 = "0.10"
+quick-xml = "0.36"
+# WebDAV implemented as custom client using reqwest + quick-xml for PROPFIND parsing
```
### Features
@@ -668,7 +671,6 @@ apps/tauri/
│ │ ├── TaskItem.svelte
│ │ ├── NewTaskInput.svelte
│ │ ├── TaskDetailView.svelte
-│ │ ├── BottomSheet.svelte
│ │ ├── ConfirmDialog.svelte
│ │ └── DateTimePicker.svelte
│ └── stores/
@@ -763,7 +765,7 @@ WorkspaceConfig {
- [x] List rename (inline input via list kebab menu in drawer)
- [x] Keyboard shortcuts (Escape closes settings → detail → drawer → menus in priority order)
- [x] Sync status indicators (last-sync time + upload/download counts chip in TasksScreen)
-- [x] Push/pull sync mode selection (session-only sync direction selector in SettingsScreen)
+- [ ] Push/pull sync mode selection (session-only sync direction selector in SettingsScreen)
- [x] Group-by-date toggle per list (checkmark toggle in list kebab menu)
- [x] Subtask hierarchy (expand/collapse, inline add, cascade toggle/delete)
- [ ] Search/filter tasks
@@ -844,11 +846,11 @@ npm run tauri ios build
#### Features
-- [x] Gate file-watcher initialization behind `#[cfg(not(mobile))]`
+- [x] Gate file-watcher initialization behind `#[cfg(not(target_os = "android"))]`
- [x] Install Android Studio + NDK, configure env vars
- [x] Add Android Rust targets
-- [x] `npm run tauri android init` (generates `gen/android/`)
-- [x] Confirm `npm run tauri android build` succeeds
+- [ ] `npm run tauri android init` (generates `gen/android/`)
+- [ ] Confirm `npm run tauri android build` succeeds
- [ ] Basic smoke test: app launches, workspace setup, create a task
- [ ] Set up macOS CI for iOS builds
- [ ] `npm run tauri ios init` (generates `gen/ios/`)
@@ -1056,6 +1058,6 @@ This project is free and open-source software licensed under GPL v3.
---
-**Last Updated**: 2026-04-15
-**Document Version**: 4.3
+**Last Updated**: 2026-04-27
+**Document Version**: 4.5
**Status**: Ready to Implement - Milestone-Driven Plan


@@ -2,6 +2,8 @@
A **local-first, cross-platform tasks application** built with Rust. Inspired by Google Tasks, designed for speed and flexibility.
![Onyx setup screen](screenshot.png)
## Core Principles
- **Local-First**: Your data, your folder, your control
@@ -21,7 +23,10 @@ onyx/
│ └── onyx-cli/ # CLI frontend
├── apps/
│ └── tauri/ # Tauri v2 GUI (Svelte 5 + Tailwind CSS 4)
│ └── tauri-plugin-credentials/ # Cross-platform credential storage plugin
└── docs/
├── API.md # Core library API reference
└── DEVELOPMENT.md # Development guide
```
## Project Status
@@ -29,7 +34,7 @@ onyx/
- **Phase 1** (Core + CLI): Complete
- **Phase 2** (WebDAV Sync): Complete — backend, CLI, and GUI all wired
- **Phase 3** (GUI MVP): Complete
-- **Phase 4** (Mobile): In progress — Android preliminaries done (file-watcher gating, tauri-plugin-credentials, safe area insets, Android targets configured); needs build verification and iOS setup
+- **Phase 4** (Mobile): In progress — Android preliminaries done (file-watcher gating, `tauri-plugin-credentials`, safe area insets, Android targets configured); needs `tauri android init`, build verification, and iOS setup
### Core Library (`onyx-core`)
- Data models (Task, TaskList, AppConfig, WorkspaceConfig)
@@ -59,13 +64,15 @@ onyx/
- Due date picker/editor with optional time
- Subtask hierarchy with three-panel slide navigation
- Move tasks between lists
-- List rename, group-by-date toggle, delete completed tasks
+- List rename, workspace rename, group-by-date toggle, delete completed tasks
- Keyboard shortcuts (Escape priority chain)
- WebDAV setup flow with credential auto-population
- File watcher (auto-reloads on external changes)
- Auto-sync with configurable interval, status indicators
- Swipe gestures on mobile (swipe to toggle completion)
- Custom confirmation dialogs
- Safe area insets for mobile (viewport-fit=cover)
- Accessibility: ARIA labels/roles, keyboard handlers, `prefers-reduced-motion` support
- Desktop packaging (Linux: AppImage + .deb; Windows: MSI)
## Development Setup
@@ -213,8 +220,8 @@ cargo test -- --nocapture
## What's Next?
-- **Phase 4**: Mobile support (iOS & Android via Tauri v2 mobile)
-- **Phase 5**: GUI advanced features (rich markdown editor, search/filter)
+- **Phase 4** (in progress): Complete Android build (`tauri android init` + verification), iOS setup on macOS CI
+- **Phase 5**: GUI advanced features (rich markdown editor, search/filter, change storage folder)
- **Phase 6**: Mobile polish and platform-specific integrations
- **Phase 7**: Google Tasks importer and unique features


@@ -60,6 +60,11 @@ fn lock_state(state: &Mutex<AppState>) -> Result<std::sync::MutexGuard<'_, AppSt
state.lock().map_err(|e| format!("State lock poisoned: {}", e))
}
/// Parse a UUID from a string, converting errors to the String format Tauri commands use.
fn parse_uuid(s: &str) -> Result<Uuid, String> {
Uuid::parse_str(s).map_err(|e| e.to_string())
}
impl AppState {
/// Persist config to disk, converting errors to String for Tauri commands.
fn save_config(&self) -> Result<(), String> {
@@ -67,6 +72,25 @@ impl AppState {
}
}
/// Extract the hostname from a URL (scheme://host/...), used as the credential key.
/// Returns an empty string if the URL has no scheme or host.
fn credential_domain(url: &str) -> String {
url.split("://")
.nth(1)
.and_then(|rest| rest.split('/').next())
.unwrap_or("")
.to_string()
}
/// Join a remote base directory with a child path, handling empty base and trailing slashes.
fn join_remote_path(base: &str, child: &str) -> String {
if base.is_empty() {
child.to_string()
} else {
format!("{}/{}", base.trim_end_matches('/'), child)
}
}
/// Validate that a workspace path is a reasonable directory and not a system path.
fn validate_workspace_path(path: &str) -> Result<(), String> {
let p = PathBuf::from(path);
@@ -79,7 +103,10 @@ fn validate_workspace_path(path: &str) -> Result<(), String> {
#[cfg(unix)]
{
let forbidden = ["/", "/etc", "/usr", "/bin", "/sbin", "/var", "/proc", "/sys", "/dev"];
// Strip trailing slashes, but keep "/" itself — trim_end_matches would
// collapse it to "" and slip past the forbidden check.
let canonical = normalized.trim_end_matches('/');
let canonical = if canonical.is_empty() { "/" } else { canonical };
if forbidden.contains(&canonical) {
return Err(format!("Cannot use system directory as workspace: {}", path));
}
@@ -263,10 +290,7 @@ async fn rename_workspace(
let base_url = webdav_url.as_deref().ok_or("No WebDAV URL configured")?;
let remote_path = webdav_path.as_deref().unwrap_or("");
-let domain = base_url
-.split("://").nth(1)
-.and_then(|rest| rest.split('/').next())
-.unwrap_or("").to_string();
+let domain = credential_domain(base_url);
let creds = app_handle.state::<Credentials<tauri::Wry>>();
let (username, password) = creds.load(&domain)?;
@ -347,7 +371,7 @@ fn delete_list(
let mut s = lock_state(&state)?; let mut s = lock_state(&state)?;
ensure_repo(&mut s)?; ensure_repo(&mut s)?;
mute_watcher(&mut s); mute_watcher(&mut s);
let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?; let id = parse_uuid(&list_id)?;
repo_mut(&mut s)? repo_mut(&mut s)?
.delete_list(id) .delete_list(id)
.map_err(|e| e.to_string()) .map_err(|e| e.to_string())
@ -362,7 +386,7 @@ fn list_tasks(
) -> Result<Vec<Task>, String> { ) -> Result<Vec<Task>, String> {
let mut s = lock_state(&state)?; let mut s = lock_state(&state)?;
ensure_repo(&mut s)?; ensure_repo(&mut s)?;
let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?; let id = parse_uuid(&list_id)?;
repo_ref(&s)? repo_ref(&s)?
.list_tasks(id) .list_tasks(id)
.map_err(|e| e.to_string()) .map_err(|e| e.to_string())
@ -381,13 +405,13 @@ fn create_task(
let mut s = lock_state(&state)?; let mut s = lock_state(&state)?;
ensure_repo(&mut s)?; ensure_repo(&mut s)?;
mute_watcher(&mut s); mute_watcher(&mut s);
let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?; let id = parse_uuid(&list_id)?;
let mut task = Task::new(title); let mut task = Task::new(title);
if let Some(desc) = description.filter(|d| !d.is_empty()) { if let Some(desc) = description.filter(|d| !d.is_empty()) {
task.description = desc; task.description = desc;
} }
if let Some(pid) = parent_id { if let Some(pid) = parent_id {
let parent_uuid = Uuid::parse_str(&pid).map_err(|e| e.to_string())?; let parent_uuid = parse_uuid(&pid)?;
task.parent_id = Some(parent_uuid); task.parent_id = Some(parent_uuid);
} }
// Accept the date fields at creation time so callers don't have to do a // Accept the date fields at creation time so callers don't have to do a
@ -409,7 +433,7 @@ fn update_task(
let mut s = lock_state(&state)?; let mut s = lock_state(&state)?;
ensure_repo(&mut s)?; ensure_repo(&mut s)?;
mute_watcher(&mut s); mute_watcher(&mut s);
let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?; let id = parse_uuid(&list_id)?;
repo_mut(&mut s)? repo_mut(&mut s)?
.update_task(id, task) .update_task(id, task)
.map_err(|e| e.to_string()) .map_err(|e| e.to_string())
@@ -424,20 +448,30 @@ fn delete_task(
     let mut s = lock_state(&state)?;
     ensure_repo(&mut s)?;
     mute_watcher(&mut s);
-    let lid = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
-    let tid = Uuid::parse_str(&task_id).map_err(|e| e.to_string())?;
+    let lid = parse_uuid(&list_id)?;
+    let tid = parse_uuid(&task_id)?;
     let repo = repo_mut(&mut s)?;
     // Cascade-delete the full descendant subtree (not just direct children)
     // so deleting a parent can't leave grandchildren orphaned with a
     // parent_id pointing at a deleted task.
     let all_tasks = repo.list_tasks(lid).map_err(|e| e.to_string())?;
-    let mut to_delete: Vec<Uuid> = Vec::new();
+    // Build a parent -> children index in one pass so the BFS below is O(n)
+    // instead of O(n * depth) scanning all tasks for each frontier pop.
+    let mut children_by_parent: std::collections::HashMap<Uuid, Vec<Uuid>> =
+        std::collections::HashMap::new();
+    for t in &all_tasks {
+        if let Some(pid) = t.parent_id {
+            children_by_parent.entry(pid).or_default().push(t.id);
+        }
+    }
+    let mut to_delete: std::collections::HashSet<Uuid> = std::collections::HashSet::new();
     let mut frontier: Vec<Uuid> = vec![tid];
     while let Some(parent) = frontier.pop() {
-        for t in &all_tasks {
-            if t.parent_id == Some(parent) && !to_delete.contains(&t.id) {
-                to_delete.push(t.id);
-                frontier.push(t.id);
+        if let Some(children) = children_by_parent.get(&parent) {
+            for &child_id in children {
+                if to_delete.insert(child_id) {
+                    frontier.push(child_id);
+                }
             }
         }
     }
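The reworked cascade delete can be exercised in isolation. A minimal sketch of the same index-then-walk shape, using a toy `(id, parent_id)` tuple in place of the crate's `Task` type (all names here are illustrative, not the crate's):

```rust
use std::collections::{HashMap, HashSet};

type Id = u32;

// Collect the full descendant subtree of `root` in O(n): index children by
// parent once, then walk a frontier instead of rescanning all tasks per pop.
fn descendants(tasks: &[(Id, Option<Id>)], root: Id) -> HashSet<Id> {
    let mut children_by_parent: HashMap<Id, Vec<Id>> = HashMap::new();
    for &(id, parent) in tasks {
        if let Some(p) = parent {
            children_by_parent.entry(p).or_default().push(id);
        }
    }
    let mut to_delete: HashSet<Id> = HashSet::new();
    let mut frontier = vec![root];
    while let Some(parent) = frontier.pop() {
        if let Some(children) = children_by_parent.get(&parent) {
            for &child in children {
                // HashSet::insert returns false for already-seen ids, which
                // also guards against cycles in corrupted parent_id data.
                if to_delete.insert(child) {
                    frontier.push(child);
                }
            }
        }
    }
    to_delete
}

fn main() {
    // 1 -> 2 -> 3, 1 -> 4, 5 standalone
    let tasks = [(1, None), (2, Some(1)), (3, Some(2)), (4, Some(1)), (5, None)];
    assert_eq!(descendants(&tasks, 1), HashSet::from([2, 3, 4]));
}
```

The `HashSet` swap matters beyond de-duplication: the old `Vec::contains` check made each visit O(n) on its own, so deep trees paid twice.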
@@ -459,8 +493,8 @@ fn toggle_task(
     let mut s = lock_state(&state)?;
     ensure_repo(&mut s)?;
     mute_watcher(&mut s);
-    let lid = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
-    let tid = Uuid::parse_str(&task_id).map_err(|e| e.to_string())?;
+    let lid = parse_uuid(&list_id)?;
+    let tid = parse_uuid(&task_id)?;
     let repo = repo_mut(&mut s)?;
     let mut task = repo.get_task(lid, tid).map_err(|e| e.to_string())?;
     match task.status {
@@ -477,7 +511,9 @@ fn toggle_task(
                 TaskStatus::Backlog => child.uncomplete(),
                 TaskStatus::Completed => child.complete(),
             }
-            let _ = repo.update_task(lid, child);
+            let child_id = child.id;
+            repo.update_task(lid, child)
+                .map_err(|e| format!("Failed to cascade to subtask {}: {}", child_id, e))?;
         }
     }
     Ok(task)
@@ -493,8 +529,8 @@ fn reorder_task(
     let mut s = lock_state(&state)?;
     ensure_repo(&mut s)?;
     mute_watcher(&mut s);
-    let lid = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
-    let tid = Uuid::parse_str(&task_id).map_err(|e| e.to_string())?;
+    let lid = parse_uuid(&list_id)?;
+    let tid = parse_uuid(&task_id)?;
     repo_mut(&mut s)?
         .reorder_task(lid, tid, new_position)
         .map_err(|e| e.to_string())
@@ -512,9 +548,9 @@ fn move_task(
     let mut s = lock_state(&state)?;
     ensure_repo(&mut s)?;
     mute_watcher(&mut s);
-    let from = Uuid::parse_str(&from_list_id).map_err(|e| e.to_string())?;
-    let to = Uuid::parse_str(&to_list_id).map_err(|e| e.to_string())?;
-    let tid = Uuid::parse_str(&task_id).map_err(|e| e.to_string())?;
+    let from = parse_uuid(&from_list_id)?;
+    let to = parse_uuid(&to_list_id)?;
+    let tid = parse_uuid(&task_id)?;
     repo_mut(&mut s)?
         .move_task(from, to, tid)
         .map_err(|e| e.to_string())
@@ -529,7 +565,7 @@ fn rename_list(
     let mut s = lock_state(&state)?;
     ensure_repo(&mut s)?;
     mute_watcher(&mut s);
-    let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
+    let id = parse_uuid(&list_id)?;
     repo_mut(&mut s)?
         .rename_list(id, new_name)
         .map_err(|e| e.to_string())
@@ -544,7 +580,7 @@ fn set_group_by_date(
     let mut s = lock_state(&state)?;
     ensure_repo(&mut s)?;
     mute_watcher(&mut s);
-    let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
+    let id = parse_uuid(&list_id)?;
     repo_mut(&mut s)?
         .set_group_by_date(id, enabled)
         .map_err(|e| e.to_string())
@@ -557,7 +593,7 @@ fn get_group_by_date(
 ) -> Result<bool, String> {
     let mut s = lock_state(&state)?;
     ensure_repo(&mut s)?;
-    let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
+    let id = parse_uuid(&list_id)?;
     repo_ref(&s)?
         .get_group_by_date(id)
         .map_err(|e| e.to_string())
@@ -645,10 +681,9 @@ async fn list_remote_folder(
     let dir_entries: Vec<_> = entries.into_iter().filter(|e| e.is_dir).collect();
     // Check all subfolders for .onyx-workspace.json in parallel
-    let sub_paths: Vec<_> = dir_entries.iter().map(|entry| {
-        if path.is_empty() { entry.path.clone() }
-        else { format!("{}/{}", path.trim_end_matches('/'), entry.path) }
-    }).collect();
+    let sub_paths: Vec<_> = dir_entries.iter()
+        .map(|entry| join_remote_path(&path, &entry.path))
+        .collect();
     let checks: Vec<_> = sub_paths.iter().map(|sp| {
         client.list_files(sp)
     }).collect();
@@ -680,11 +715,7 @@ async fn inspect_remote_workspace(
     let mut lists = Vec::new();
     for entry in entries {
         if !entry.is_dir { continue; }
-        let list_path = if path.is_empty() {
-            entry.path.clone()
-        } else {
-            format!("{}/{}", path.trim_end_matches('/'), entry.path)
-        };
+        let list_path = join_remote_path(&path, &entry.path);
         let files = client.list_files(&list_path).await.unwrap_or_else(|e| {
             eprintln!("Warning: failed to list remote folder '{}': {}", list_path, e);
             Vec::new()
@@ -720,11 +751,7 @@ async fn create_remote_workspace(
         "list_order": [],
         "last_opened_list": null,
     });
-    let file_path = if path.is_empty() {
-        ".onyx-workspace.json".to_string()
-    } else {
-        format!("{}/{}", path.trim_end_matches('/'), ".onyx-workspace.json")
-    };
+    let file_path = join_remote_path(&path, ".onyx-workspace.json");
     client.put_file(&file_path, serde_json::to_string_pretty(&metadata).map_err(|e| e.to_string())?.into_bytes())
         .await
         .map_err(|e| e.to_string())?;
@@ -758,12 +785,7 @@ fn add_webdav_workspace(
     s.repo = None;
     // Store credentials keyed by hostname
-    let domain = webdav_url
-        .split("://")
-        .nth(1)
-        .and_then(|rest| rest.split('/').next())
-        .unwrap_or("")
-        .to_string();
+    let domain = credential_domain(&webdav_url);
     s.save_config()?;
     drop(s);
     let creds = app_handle.state::<Credentials<tauri::Wry>>();
@@ -826,12 +848,7 @@ async fn sync_workspace(
     };
     // Step 2: load credentials
-    let domain = webdav_url
-        .split("://")
-        .nth(1)
-        .and_then(|rest| rest.split('/').next())
-        .unwrap_or("")
-        .to_string();
+    let domain = credential_domain(&webdav_url);
     let creds = app_handle.state::<Credentials<tauri::Wry>>();
     let (username, password) = creds.load(&domain)?;


@@ -13,6 +13,8 @@
     let viewYear = $state(existing ? existing.getFullYear() : now.getFullYear());
     let viewMonth = $state(existing ? existing.getMonth() : now.getMonth());
     let selectedDay = $state(existing ? existing.getDate() : now.getDate());
+    let selectedYear = $state(existing ? existing.getFullYear() : now.getFullYear());
+    let selectedMonth = $state(existing ? existing.getMonth() : now.getMonth());
     let includeTime = $state(has_time);
     let selectedHour = $state(existing ? existing.getHours() : now.getHours());
     let selectedMinute = $state(existing ? existing.getMinutes() : 0);
@@ -58,9 +60,6 @@
         return `${viewYear}-${viewMonth + 1}-${day}` === todayStr;
     }
-    let selectedYear = $state(existing ? existing.getFullYear() : now.getFullYear());
-    let selectedMonth = $state(existing ? existing.getMonth() : now.getMonth());
-
     function isSelected(day: number): boolean {
         return selectedDay === day && selectedYear === viewYear && selectedMonth === viewMonth;
     }


@@ -418,7 +418,7 @@ function debouncedSync() {
 function restartSyncInterval() {
     if (_syncInterval) clearInterval(_syncInterval);
-    var secs = _appFocused ? syncIntervalSecs : syncIntervalUnfocusedSecs;
+    const secs = _appFocused ? syncIntervalSecs : syncIntervalUnfocusedSecs;
     _syncInterval = setInterval(triggerSync, secs * 1000);
 }


@@ -116,13 +116,7 @@ impl AppConfig {
             std::fs::create_dir_all(parent)?;
         }
         let content = serde_json::to_string_pretty(&self)?;
-        // Atomic write: write to temp file then rename to prevent corruption on crash
-        let temp = path.with_extension("tmp");
-        std::fs::write(&temp, &content)?;
-        if let Err(e) = std::fs::rename(&temp, path) {
-            let _ = std::fs::remove_file(&temp);
-            return Err(e.into());
-        }
+        crate::storage::atomic_write(path, content.as_bytes())?;
         Ok(())
     }
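This call site now delegates to `crate::storage::atomic_write`; the diff does not show that function's body, but the inline code it replaces (and presumably centralizes) is the classic write-temp-then-rename pattern. A self-contained sketch, with the signature assumed from the call site:

```rust
use std::fs;
use std::io;
use std::path::Path;

// Write-temp-then-rename: readers see either the old file or the new file,
// never a half-written one, because rename is atomic within one filesystem.
fn atomic_write(path: &Path, bytes: &[u8]) -> io::Result<()> {
    let temp = path.with_extension("tmp");
    fs::write(&temp, bytes)?;
    if let Err(e) = fs::rename(&temp, path) {
        // Best-effort cleanup of the orphaned temp file; the original error
        // is what the caller needs to see.
        let _ = fs::remove_file(&temp);
        return Err(e);
    }
    Ok(())
}

fn main() -> io::Result<()> {
    let path = std::env::temp_dir().join("onyx_atomic_write_demo.json");
    atomic_write(&path, b"{\"ok\":true}")?;
    assert_eq!(fs::read(&path)?, b"{\"ok\":true}");
    fs::remove_file(&path)
}
```

One caveat worth knowing: `with_extension("tmp")` replaces an existing extension, so `config.json` becomes `config.tmp` rather than `config.json.tmp` — fine as long as no sibling file could collide on that name.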


@@ -358,8 +358,15 @@ pub async fn sync_google_tasks(
         list_meta.task_order = task_order;
         list_meta.updated_at = Utc::now();
-        if let Ok(meta_content) = serde_json::to_string_pretty(&list_meta) {
-            let _ = atomic_write(&listdata_path, meta_content.as_bytes());
+        match serde_json::to_string_pretty(&list_meta) {
+            Ok(meta_content) => {
+                if let Err(e) = atomic_write(&listdata_path, meta_content.as_bytes()) {
+                    errors.push(format!("Failed to write metadata for list '{}': {}", gt_list.title, e));
+                }
+            }
+            Err(e) => {
+                errors.push(format!("Failed to serialize metadata for list '{}': {}", gt_list.title, e));
+            }
         }
     }
@@ -374,8 +381,15 @@ pub async fn sync_google_tasks(
         RootMetadata::default()
     };
     root_meta.list_order = new_list_order;
-    if let Ok(meta_content) = serde_json::to_string_pretty(&root_meta) {
-        let _ = atomic_write(&root_meta_path, meta_content.as_bytes());
+    match serde_json::to_string_pretty(&root_meta) {
+        Ok(meta_content) => {
+            if let Err(e) = atomic_write(&root_meta_path, meta_content.as_bytes()) {
+                errors.push(format!("Failed to write workspace metadata: {}", e));
+            }
+        }
+        Err(e) => {
+            errors.push(format!("Failed to serialize workspace metadata: {}", e));
+        }
     }
     Ok(GoogleSyncResult { downloaded, errors })


@@ -236,12 +236,8 @@ impl FileSystemStorage {
         Ok(path)
     }
-    fn sanitize_filename(name: &str) -> String {
-        crate::sanitize_filename(name)
-    }
-
     fn task_file_path(&self, list_dir: &Path, task: &Task) -> PathBuf {
-        let safe_title = Self::sanitize_filename(&task.title);
+        let safe_title = crate::sanitize_filename(&task.title);
         let filename = if safe_title.is_empty() {
             task.id.to_string()
         } else {
@@ -457,27 +453,42 @@ impl Storage for FileSystemStorage {
         }
         let mut tasks = Vec::new();
-        for (_id, mut entries) in by_id {
-            if entries.len() > 1 {
-                entries.sort_by(|a, b| {
+        for (_id, entries) in by_id {
+            // `by_id` only inserts non-empty groups, so each `entries` has at
+            // least one element.
+            let task = if entries.len() > 1 {
+                // Read mtime once per file so sort_by doesn't hit the filesystem
+                // O(n log n) times and can't produce inconsistent orderings if a
+                // file is touched mid-sort.
+                let mut with_mtime: Vec<(PathBuf, Task, Option<std::time::SystemTime>)> = entries
+                    .into_iter()
+                    .map(|(p, t)| {
+                        let mtime = fs::metadata(&p).and_then(|m| m.modified()).ok();
+                        (p, t, mtime)
+                    })
+                    .collect();
+                with_mtime.sort_by(|a, b| {
                     // Primary: highest version first
                     let version_cmp = b.1.version.cmp(&a.1.version);
                     if version_cmp != std::cmp::Ordering::Equal {
                         return version_cmp;
                     }
                     // Tiebreaker: most recently modified file first
-                    let mtime_a = fs::metadata(&a.0).and_then(|m| m.modified()).ok();
-                    let mtime_b = fs::metadata(&b.0).and_then(|m| m.modified()).ok();
-                    mtime_b.cmp(&mtime_a)
+                    b.2.cmp(&a.2)
                 });
-                for (stale_path, _) in entries.drain(1..) {
+                for (stale_path, _, _) in with_mtime.drain(1..) {
                     if let Err(e) = fs::remove_file(&stale_path) {
                         eprintln!("Warning: failed to remove stale duplicate task file {:?}: {}", stale_path, e);
                     }
                 }
-            }
-            let (_, task) = entries.into_iter().next()
-                .ok_or_else(|| Error::InvalidData("Empty dedup entries for task".to_string()))?;
+                let (_, t, _) = with_mtime.into_iter().next()
+                    .expect("dedup group is non-empty after drain(1..)");
+                t
+            } else {
+                let (_, t) = entries.into_iter().next()
+                    .expect("dedup group is non-empty");
+                t
+            };
             tasks.push(task);
         }
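The mtime caching above is an instance of the decorate-sort-undecorate pattern: compute each element's (expensive or fallible) sort key once, sort on the cached copy, then strip it. A generic sketch of the pattern (names are illustrative; for infallible keys the standard library's `slice::sort_by_cached_key` does the same job):

```rust
// Decorate-sort-undecorate: the comparator only touches the cached key, so
// the ordering stays total and consistent even if recomputing the key
// mid-sort (e.g. re-reading a file's mtime) could give a different answer.
fn sort_by_cached_key<T, K: Ord>(items: Vec<T>, key_of: impl Fn(&T) -> K) -> Vec<T> {
    let mut decorated: Vec<(K, T)> = items.into_iter().map(|t| (key_of(&t), t)).collect();
    decorated.sort_by(|a, b| a.0.cmp(&b.0));
    decorated.into_iter().map(|(_, t)| t).collect()
}

fn main() {
    // Sort strings by length; len() is computed once per element, not once
    // per comparison.
    let sorted = sort_by_cached_key(vec!["ccc", "a", "bb"], |s| s.len());
    assert_eq!(sorted, vec!["a", "bb", "ccc"]);
}
```

The correctness angle matters more than the speed here: `sort_by` requires a consistent total order, and a comparator that re-reads filesystem state on every call can violate that if a file changes mid-sort.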


@@ -204,8 +204,9 @@ pub fn compute_sync_actions(
         }
         // Remote present, local gone, base known: local was deleted
-        (None, Some(_), Some(b)) => {
-            let remote_changed = remote.is_some_and(|r| r.size != b.size || !timestamps_equal(r.last_modified.as_deref(), b.modified_at.as_deref()));
+        (None, Some(r), Some(b)) => {
+            let remote_changed = r.size != b.size
+                || !timestamps_equal(r.last_modified.as_deref(), b.modified_at.as_deref());
             if remote_changed {
                 // deleted locally + modified remotely -> download (remote wins)
                 actions.push(SyncAction::Download { path: path.to_string() });
@@ -229,6 +230,22 @@ pub fn compute_sync_actions(
     actions
 }
+
+/// Remove base entries for files that are gone from both local and remote.
+/// `compute_sync_actions` emits no action for the both-deleted case, so without
+/// this pass those entries would persist in `.syncstate.json` indefinitely.
+fn prune_orphan_bases(
+    sync_state: &mut SyncState,
+    local_files: &[LocalFileInfo],
+    remote_files: &[RemoteFileSnapshot],
+) {
+    let live_paths: std::collections::HashSet<&str> = local_files
+        .iter()
+        .map(|f| f.path.as_str())
+        .chain(remote_files.iter().map(|f| f.path.as_str()))
+        .collect();
+    sync_state.files.retain(|p, _| live_paths.contains(p.as_str()));
+}
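The pruning logic reduces to set membership plus `retain`. A sketch with plain `HashMap`/`HashSet` standing in for the crate's `SyncState` and file-info types (all names here are illustrative):

```rust
use std::collections::{HashMap, HashSet};

// Keep a base entry only while its path is still alive on at least one side.
// The map value (a stand-in for per-file base metadata) is irrelevant to the
// decision, so the retain closure ignores it.
fn prune_orphans(base: &mut HashMap<String, u64>, local: &[&str], remote: &[&str]) {
    let live: HashSet<&str> = local.iter().copied().chain(remote.iter().copied()).collect();
    base.retain(|path, _| live.contains(path.as_str()));
}

fn main() {
    let mut base = HashMap::from([
        ("kept_local.md".to_string(), 1),
        ("kept_remote.md".to_string(), 2),
        ("orphan.md".to_string(), 3),
    ]);
    prune_orphans(&mut base, &["kept_local.md"], &["kept_remote.md"]);
    assert!(base.contains_key("kept_local.md"));
    assert!(base.contains_key("kept_remote.md"));
    assert!(!base.contains_key("orphan.md"));
}
```

Building the live set up front makes the whole pass O(n + m) rather than scanning both file lists once per tracked entry.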
 /// Compare two timestamps for equality by parsing both, tolerating format differences.
 fn timestamps_equal(a: Option<&str>, b: Option<&str>) -> bool {
     match (a, b) {
@@ -604,6 +621,12 @@ async fn sync_workspace_inner(
         }
     };
+    // Purge orphan base entries: files we previously tracked that are now gone
+    // from both local and remote. Without this, `.syncstate.json` accumulates
+    // ghost entries forever because the both-deleted diff case emits no action
+    // and so nothing else would clean them.
+    prune_orphan_bases(&mut sync_state, &local_files, &remote_files);
+
     // Compute actions from three-way diff
     let fresh_actions = compute_sync_actions(&local_files, &remote_files, &sync_state);
@@ -701,19 +724,20 @@ async fn execute_action(
                 Err(e) => return Err(e.into()),
             };
             let checksum = compute_checksum(&data);
+            let len = data.len() as u64;
             if let Some(parent) = path_parent(path) {
                 client.ensure_dir(parent).await?;
             }
             report(&format!(" ^ Uploading {}", path));
-            client.put_file(path, data.clone()).await?;
+            client.put_file(path, data).await?;
             // Record in sync state using local file metadata
             let modified = std::fs::metadata(&local_path).ok()
                 .and_then(|m| m.modified().ok())
                 .map(|t| { let dt: DateTime<Utc> = t.into(); dt.to_rfc3339() });
-            sync_state.record_file(path, &checksum, modified.as_deref(), data.len() as u64);
+            sync_state.record_file(path, &checksum, modified.as_deref(), len);
         }
         SyncAction::Conflict { path } => {
@@ -753,7 +777,7 @@ async fn execute_action(
             // For .md task files inside a list dir, create a duplicate of the local version
             let parts: Vec<&str> = path.split('/').collect();
-            if parts.len() == 2 && parts[1].ends_with(".md") && parts[1] != ".listdata.json" {
+            if parts.len() == 2 && parts[1].ends_with(".md") {
                 let local_content = String::from_utf8_lossy(&local_data);
                 if let Ok((frontmatter, description)) = parse_frontmatter_for_conflict(&local_content) {
                     let original_id = frontmatter.id;
@@ -891,9 +915,15 @@ pub fn get_sync_status(workspace_path: &Path) -> Result<SyncStatusInfo> {
         }
     }
-    // Count files in base that are now missing locally (deleted)
+    // Count files in base that are now missing locally (deleted).
+    // Build a set of local paths once so the membership check is O(1) per
+    // tracked file instead of scanning local_files linearly each time.
+    let local_paths: std::collections::HashSet<&str> = local_files
+        .iter()
+        .map(|f| f.path.as_str())
+        .collect();
     for path in sync_state.files.keys() {
-        if !local_files.iter().any(|f| f.path == *path) {
+        if !local_paths.contains(path.as_str()) {
             pending_changes += 1;
         }
     }
@@ -1106,6 +1136,22 @@ mod tests {
         assert!(actions.is_empty());
     }
+
+    #[test]
+    fn test_prune_orphan_bases() {
+        let mut state = SyncState::default();
+        state.files.insert("kept_local.md".to_string(), make_base("a"));
+        state.files.insert("kept_remote.md".to_string(), make_base("b"));
+        state.files.insert("orphan.md".to_string(), make_base("c"));
+        let local = vec![make_local("kept_local.md", "a")];
+        let remote = vec![make_remote("kept_remote.md")];
+        prune_orphan_bases(&mut state, &local, &remote);
+        assert!(state.files.contains_key("kept_local.md"));
+        assert!(state.files.contains_key("kept_remote.md"));
+        assert!(!state.files.contains_key("orphan.md"));
+    }
+
     #[test]
     fn test_multiple_files_mixed() {
         let local = vec![


@@ -353,12 +353,14 @@ Credentials are stored in the platform keychain (Windows Credential Manager, mac
 ```rust
 use onyx_core::webdav::{store_credentials, load_credentials, delete_credentials};
+use zeroize::Zeroizing;
 
 // Store credentials
 store_credentials("nextcloud.example.com", "username", "password")?;
 
-// Load credentials (returns Zeroizing<String> wrappers that wipe memory on drop)
-let (username, password) = load_credentials("nextcloud.example.com")?;
+// Load credentials — returns Zeroizing<String> wrappers that wipe memory on drop
+let (username, password): (Zeroizing<String>, Zeroizing<String>) =
+    load_credentials("nextcloud.example.com")?;
 
 // Delete credentials
 delete_credentials("nextcloud.example.com")?;
@@ -454,7 +456,7 @@ All metadata and state files use an atomic write pattern (write to `.tmp` then r
 - **List names**: Rejected if they contain `/`, `\`, or `..` components. Canonicalized and verified to stay within workspace root.
 - **Sync paths**: Validated to reject `..` components and backslashes anywhere in the path before any file system operation.
-- **Workspace paths** (Tauri): Rejected if they point to system directories (`/etc`, `/usr`, `/bin`, etc.).
+- **Workspace paths** (Tauri): Rejected if they point to the filesystem root (`/`) or system directories (`/etc`, `/usr`, `/bin`, `/sbin`, `/var`, `/proc`, `/sys`, `/dev`).
 - **Filenames**: Sanitized to replace `/ \ : * ? " < > |` and control characters with `_`.
 
 ## Example: Complete Workflow
@@ -521,9 +523,9 @@ Key test areas:
 ## Thread Safety
 
-The `Storage` trait requires `Send + Sync`, and `TaskRepository` wraps `Box<dyn Storage + Send + Sync>`, so repository instances can be shared across threads behind a `Mutex`. The Tauri GUI uses `Mutex<AppState>` for this purpose.
+`TaskRepository` holds its storage as `Box<dyn Storage + Send + Sync>`, so any concrete storage implementation passed in must be `Send + Sync`. Repository instances can be shared across threads behind a `Mutex` — the Tauri GUI uses `Mutex<AppState>` for this purpose.
 
 For concurrent access:
 1. Wrap `TaskRepository` in `Mutex` or `RwLock` (the Tauri app does this)
-2. Or create separate repository instances per thread (file system handles locking)
+2. Or create separate repository instances per thread. Note that `FileSystemStorage` does not coordinate writes between processes — concurrent multi-process writes to the same workspace are not supported outside the WebDAV sync flow, which uses a `.sync.lock` file.
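A minimal sketch of the bound this doc change describes, using toy types rather than the real `Storage`/`TaskRepository` (all names and bodies here are illustrative): the trait itself declares no `Send + Sync`, but boxing the instance as `Box<dyn Storage + Send + Sync>` is what lets a `Mutex`-wrapped repository cross threads.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Toy trait: note there is no `: Send + Sync` supertrait here.
trait Storage {
    fn put(&mut self, v: u32);
    fn len(&self) -> usize;
}

struct MemStorage(Vec<u32>);
impl Storage for MemStorage {
    fn put(&mut self, v: u32) { self.0.push(v); }
    fn len(&self) -> usize { self.0.len() }
}

// The Send + Sync requirement lives on the boxed trait object, so any
// concrete storage handed in must satisfy both bounds.
struct TaskRepository {
    storage: Box<dyn Storage + Send + Sync>,
}

fn main() {
    let repo = Arc::new(Mutex::new(TaskRepository {
        storage: Box::new(MemStorage(Vec::new())),
    }));
    let handles: Vec<_> = (0u32..4)
        .map(|i| {
            let repo = Arc::clone(&repo);
            // Each thread serializes its writes through the Mutex.
            thread::spawn(move || repo.lock().unwrap().storage.put(i))
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(repo.lock().unwrap().storage.len(), 4);
}
```

Dropping either bound from the box (or using a storage type holding, say, an `Rc`) would make the `thread::spawn` calls fail to compile, which is exactly the guarantee the reworded paragraph is pointing at.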


@@ -27,7 +27,7 @@ cargo run -p onyx-cli -- --help
 
 # Run the Tauri GUI
 cd apps/tauri && npm install
-npm run tauri dev
+npm run tauri dev  # (Wayland: WEBKIT_DISABLE_DMABUF_RENDERER=1 npm run tauri dev)
 ```
 
 ## Project Structure
@@ -72,11 +72,15 @@ onyx/
 │   │   ├── main.ts
 │   │   ├── app.css            # Tailwind CSS 4 + theme
 │   │   ├── App.svelte
+│   │   ├── test/
+│   │   │   └── setup.ts
 │   │   └── lib/
 │   │       ├── screens/       # Full-page views
 │   │       ├── components/    # Reusable UI components
 │   │       ├── stores/        # Svelte state (app.svelte.ts)
 │   │       ├── dateFormat.ts  # Date formatting utilities
+│   │       ├── grouping.ts    # Task grouping logic
+│   │       ├── paths.ts       # Path utilities
 │   │       └── types.ts       # TypeScript type definitions
 │   ├── tauri-plugin-credentials/  # Cross-platform credential storage plugin
 │   │   ├── Cargo.toml