Compare commits

...

72 commits

Author SHA1 Message Date
SteelDynamite c5a3840aea
Merge pull request #66 from SteelDynamite/claude/gracious-cray-yN12q
docs(api): clarify thread-safety bounds and multi-process limits
2026-04-29 02:45:47 +01:00
Claude c29f715c9e
docs(api): clarify thread-safety bounds and multi-process limits
The Storage trait itself does not declare `Send + Sync` bounds — only the
boxed instance held by `TaskRepository` does. Reword to describe what's
actually required of an implementation, and call out that
`FileSystemStorage` does not coordinate writes across processes outside
the `.sync.lock`-protected WebDAV flow.

https://claude.ai/code/session_01LweYBKMFbnTen7pCTdeQKq
2026-04-27 07:45:44 +00:00
SteelDynamite 6f4d00b912
Merge pull request #65 from SteelDynamite/claude/serene-ride-Gt8lp
audit: 2026-04-27 — sync clone, google metadata errors, dedup invariant
2026-04-27 08:40:13 +01:00
SteelDynamite 39718ef700
Merge pull request #64 from SteelDynamite/claude/dreamy-brown-4XuTd
docs: sync documentation with codebase state
2026-04-27 08:39:21 +01:00
Claude c57ffd3f55
docs(audit): log 2026-04-27 findings 2026-04-27 07:23:34 +00:00
Claude 12adfdc532
refactor(storage): drop unreachable error in dedup loop
The dedup loop wrapped its winner in `Option<Task>` and then mapped the
`None` case to `Error::InvalidData("Empty dedup entries for task")`.
That branch is unreachable: `by_id` is built by pushing every entry of
`file_tasks` into the vector for its UUID, so every group has at least
one entry, and the `len() > 1` branch keeps the first element after
`drain(1..)`.

Replace the spurious error with `expect` calls that document the
invariant and let the dedup loop yield `Task` directly instead of
`Option<Task>`.
2026-04-27 07:23:12 +00:00
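The non-empty-group invariant described above can be sketched as follows. Names (`Task`, `dedup`) are illustrative stand-ins for the crate's actual types, not its API:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for the crate's task type.
#[derive(Debug, Clone)]
struct Task { id: u32, title: String }

/// Group tasks by id and keep one winner per group. Every group is built by
/// pushing at least one entry, so taking the first element can never fail --
/// document that invariant with `expect` instead of a dead error branch.
fn dedup(tasks: Vec<Task>) -> Vec<Task> {
    let mut by_id: HashMap<u32, Vec<Task>> = HashMap::new();
    for t in tasks {
        by_id.entry(t.id).or_default().push(t);
    }
    by_id
        .into_values()
        .map(|mut group| {
            if group.len() > 1 {
                group.drain(1..); // keep the first element as the winner
            }
            group
                .into_iter()
                .next()
                .expect("by_id groups are non-empty by construction")
        })
        .collect()
}

fn main() {
    let tasks = vec![
        Task { id: 1, title: "a".into() },
        Task { id: 1, title: "a-dup".into() },
        Task { id: 2, title: "b".into() },
    ];
    assert_eq!(dedup(tasks).len(), 2);
}
```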
Claude 6e161ba819
fix(google_tasks): surface metadata write failures
`sync_google_workspace` silently dropped errors from `.listdata.json`
and `.onyx-workspace.json` atomic writes via `let _ = ...`, so a sync
could report `downloaded: N` while the list/workspace ordering had not
been persisted.  Push those errors into the `errors` vec returned by
`GoogleSyncResult` so callers see the failure.
2026-04-27 07:22:27 +00:00
Claude e8a69a3222
perf(sync): avoid cloning upload payload
`SyncAction::Upload` cloned the file bytes solely so it could later read
`data.len()` for the sync-state record.  Capture the length up front and
move the buffer into `put_file`.
2026-04-27 07:22:01 +00:00
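The clone-avoidance pattern above is simple enough to show in isolation. `put_file` here is a hypothetical stand-in for the WebDAV client's consuming upload call:

```rust
// Record the length before moving the buffer into a consuming call,
// instead of cloning the whole payload just to read `.len()` later.
fn put_file(_path: &str, data: Vec<u8>) -> usize {
    data.len() // consumes `data`
}

fn upload(path: &str, data: Vec<u8>) -> u64 {
    let len = data.len() as u64; // capture before the move
    let _sent = put_file(path, data); // buffer moved, no clone
    len // reused for the sync-state record
}

fn main() {
    assert_eq!(upload("notes.md", vec![0u8; 16]), 16);
}
```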
Claude 839b744720
docs: sync documentation with codebase state
- PLAN.md: uncheck push/pull sync mode selector (backend supports it
  via SyncMode enum, but no UI exists in SettingsScreen; always full sync)
- PLAN.md: bump Last Updated to 2026-04-27, Document Version to 4.5
- CLAUDE.md: update Current state date to 2026-04-27

https://claude.ai/code/session_01C7jV6wrzJVhHRKWsq87XwB
2026-04-27 00:55:46 +00:00
SteelDynamite 0506d44989
Merge pull request #62 from SteelDynamite/claude/serene-ride-JTRND
audit(2026-04-25): O(n²) sync-status + cascade-delete + atomic-write dedup
2026-04-27 01:50:09 +01:00
Claude e1c4fd7dfb
docs(audit): log 2026-04-25 findings 2026-04-25 07:28:33 +00:00
Claude 8c8735b2b4
refactor(config): reuse storage::atomic_write for save_to_file
`AppConfig::save_to_file` had its own copy of the temp-file + rename +
cleanup-on-failure dance.  `storage::atomic_write` is already
`pub(crate)` and does exactly that — `google_tasks.rs` was migrated to
use it earlier.  Drop the duplicate so there's one canonical atomic
write path in the crate.
2026-04-25 07:27:25 +00:00
Claude 069afe8d5e
perf(tauri): build child index once for cascade delete
`delete_task`'s descendant walk re-scanned the full task list on every
frontier pop, so the cost was O(n * depth), where n is the list size.
For a list of a few hundred tasks with even moderate nesting, that's
already noticeable.

Index `parent_id -> [child_id]` once up-front; the BFS then visits each
descendant in O(1) amortised, dropping the total to O(n).
2026-04-25 07:26:56 +00:00
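A minimal sketch of the index-then-BFS shape, with `(id, parent_id)` tuples standing in for the real task records:

```rust
use std::collections::HashMap;

type Id = u32;

/// Collect the full descendant set of `root` in O(n): index
/// parent -> children once, then the BFS pops each node exactly once
/// instead of re-scanning the whole task list per frontier pop.
fn descendants(tasks: &[(Id, Option<Id>)], root: Id) -> Vec<Id> {
    let mut children: HashMap<Id, Vec<Id>> = HashMap::new();
    for &(id, parent) in tasks {
        if let Some(p) = parent {
            children.entry(p).or_default().push(id);
        }
    }
    let mut out = Vec::new();
    let mut frontier = vec![root];
    while let Some(id) = frontier.pop() {
        if let Some(kids) = children.get(&id) {
            out.extend(kids);
            frontier.extend(kids);
        }
    }
    out
}

fn main() {
    // 1 -> 2 -> 3, and 1 -> 4
    let tasks = [(1, None), (2, Some(1)), (3, Some(2)), (4, Some(1))];
    let mut d = descendants(&tasks, 1);
    d.sort();
    assert_eq!(d, vec![2, 3, 4]);
}
```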
Claude 1cdf5dff90
perf(sync): hash-set membership check in get_sync_status
The deletion-detection loop in `get_sync_status` scanned `local_files`
linearly for every tracked path in `sync_state.files`, making the cost
quadratic in the file count.  The earlier "pending change" loop just
above already does the inverse direction via `sync_state.files.get`
(O(1)).  Build a `HashSet<&str>` of local paths once and check it
the same way to make the function O(n).

This is called by the GUI status indicator, so the win shows up as
soon as a workspace tracks more than a handful of files.
2026-04-25 07:25:36 +00:00
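The deletion-detection fix amounts to building the set once and testing membership per tracked path. Types here are illustrative, not the crate's real `sync_state` shapes:

```rust
use std::collections::{HashMap, HashSet};

/// Return the tracked paths that no longer exist locally. Build the set of
/// local paths once, so each tracked path costs an O(1) `contains` instead
/// of a linear scan -- O(n) overall rather than O(n^2).
fn deleted_locally(tracked: &HashMap<String, u64>, local_files: &[String]) -> Vec<String> {
    let local: HashSet<&str> = local_files.iter().map(String::as_str).collect();
    tracked
        .keys()
        .filter(|path| !local.contains(path.as_str()))
        .cloned()
        .collect()
}

fn main() {
    let tracked: HashMap<String, u64> =
        [("a.md".to_string(), 1), ("b.md".to_string(), 2)].into();
    let local = vec!["a.md".to_string()];
    assert_eq!(deleted_locally(&tracked, &local), vec!["b.md".to_string()]);
}
```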
SteelDynamite 56944360e0
Merge pull request #60 from SteelDynamite/claude/serene-ride-1mX8o 2026-04-24 22:12:58 +01:00
SteelDynamite 16cf409f32
Merge pull request #59 from SteelDynamite/claude/dreamy-brown-Ss931 2026-04-24 22:12:10 +01:00
Claude 8611f55573
docs(audit): log 2026-04-24 findings 2026-04-24 07:38:54 +00:00
Claude a9fac2c1d8
refactor(storage): drop single-caller sanitize_filename wrapper
`FileSystemStorage::sanitize_filename` was a one-line forwarder to
`crate::sanitize_filename` with a single call site in
`task_file_path`. The extra method added a layer of indirection
without value. Inline the crate-level call.
2026-04-24 07:38:18 +00:00
Claude 1fcc6e7f6d
fix(sync): purge orphan base entries when both sides deleted
`compute_sync_actions` emits no action for files that are missing from
both local and remote but still tracked in the sync base (the
`(None, None, Some(_))` arm). Nothing else cleaned those entries, so
`.syncstate.json` grew forever every time a file was deleted both
locally and remotely — and on each subsequent sync the same
no-op match fired again.

Add a `prune_orphan_bases` pass that runs before `compute_sync_actions`
in `sync_workspace_inner`, dropping any base entry whose path is in
neither the local nor remote scan. Unit-tested in isolation.
2026-04-24 07:37:39 +00:00
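The pruning pass described above is essentially a `retain` over the base map. The signature below is a sketch under assumed types, not the crate's actual `prune_orphan_bases`:

```rust
use std::collections::{HashMap, HashSet};

/// Drop base entries whose path appears in neither the local nor the remote
/// scan, so the `(None, None, Some(_))` no-op arm stops matching the same
/// dead entries on every sync and `.syncstate.json` stops growing.
fn prune_orphan_bases(
    base: &mut HashMap<String, u64>, // path -> recorded state (illustrative)
    local: &HashSet<String>,
    remote: &HashSet<String>,
) {
    base.retain(|path, _| local.contains(path) || remote.contains(path));
}

fn main() {
    let mut base: HashMap<String, u64> =
        [("a.md".to_string(), 1), ("gone.md".to_string(), 2)].into();
    let local: HashSet<String> = ["a.md".to_string()].into();
    let remote: HashSet<String> = HashSet::new();
    prune_orphan_bases(&mut base, &local, &remote);
    assert!(base.contains_key("a.md"));
    assert!(!base.contains_key("gone.md"));
}
```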
Claude 970210b647
refactor(sync): destructure remote in deleted-local branch
The `(None, Some(_), Some(b))` arm re-checked the already-matched
`remote` via `remote.is_some_and(...)`, which obscures intent and
compiles to redundant None-branch code. Bind `Some(r)` in the match
and use `r` directly.

No behavior change.
2026-04-24 07:36:28 +00:00
Claude 66513519ab
docs: fix credential return type, add missing test dir, update plan date
- docs/API.md: load_credentials returns Zeroizing<String> (not String)
- docs/DEVELOPMENT.md: add src/test/ directory to project structure
- PLAN.md: update Last Updated to 2026-04-23, bump version to 4.4

https://claude.ai/code/session_01By1aj94LMM7muDV7AT4egk
2026-04-23 10:08:34 +00:00
SteelDynamite 1bb1b67977
Merge pull request #58 from SteelDynamite/claude/serene-ride-LeiSc 2026-04-23 11:05:17 +01:00
SteelDynamite 4c318705f6
Merge pull request #57 from SteelDynamite/claude/dreamy-brown-nRanS 2026-04-23 11:01:53 +01:00
Claude 890f0c2126
docs(audit): log 2026-04-20 findings 2026-04-20 07:37:54 +00:00
Claude f42697f4ed
refactor(tauri): extract parse_uuid helper
17 Tauri commands repeated `Uuid::parse_str(&s).map_err(|e| e.to_string())`
for each UUID argument. Collapse the pattern into a `parse_uuid`
helper so callers read as `let id = parse_uuid(&list_id)?;`.
2026-04-20 07:35:50 +00:00
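The extracted helper collapses the repeated `parse` + `map_err(|e| e.to_string())` pattern into one call site per argument. Sketched here with std's integer parsing so it runs without the `uuid` crate; the real helper would call `Uuid::parse_str` instead:

```rust
// Hypothetical stand-in for the parse_uuid helper: one place owns the
// "parse or stringify the error" convention.
fn parse_id(s: &str) -> Result<u32, String> {
    s.parse::<u32>().map_err(|e| e.to_string())
}

// A command body then shrinks to one line per UUID argument.
fn delete_task(list_id: &str) -> Result<u32, String> {
    let id = parse_id(list_id)?;
    Ok(id)
}

fn main() {
    assert_eq!(delete_task("42"), Ok(42));
    assert!(delete_task("not-an-id").is_err());
}
```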
Claude 7754ea4b45
fix(tauri): surface errors from toggle_task cascade
When a parent task was toggled, `update_task` failures on child tasks
were silently swallowed with `let _ = ...`, leaving subtasks out of
sync with the parent's status and giving the user no feedback. Map the
error and propagate so the UI can show it and the user can retry.
2026-04-20 07:35:12 +00:00
Claude 6abe95692e
perf(tauri): use HashSet for cascade-delete dedup
Descendant walking in delete_task called Vec::contains in the inner
loop, making the traversal O(n^2) in the number of tasks. Swap the
visited set to HashSet so membership tests are O(1); HashSet::insert
also folds the contains-check and record-new steps into one call.
2026-04-20 07:34:52 +00:00
Claude 70fe7420cd
refactor(sync): remove dead .listdata.json guard in conflict path
The `.listdata.json` check was unreachable: the branch is already
gated on `parts[1].ends_with(".md")`, which `.listdata.json` fails.
2026-04-20 07:33:12 +00:00
Claude 6e1921230a
docs: sync markdown files with current codebase state
- Remove BottomSheet.svelte from PLAN.md file structure (deleted in
  efb4cca — NewTaskInput hand-rolls its own sheet)
- Expand workspace path validation description in API.md and CLAUDE.md
  to include filesystem root "/" alongside system directories, matching
  the forbidden list added in fix(tauri): reject "/" root path

https://claude.ai/code/session_015BSAnuhvMBLk7s4g7dSE53
2026-04-19 08:16:47 +00:00
SteelDynamite 6ae1006ab4
Merge pull request #56 from SteelDynamite/claude/serene-ride-XUY3D 2026-04-19 09:12:44 +01:00
SteelDynamite d8c6b9fc8e
Merge pull request #53 from SteelDynamite/claude/dreamy-brown-pFY5T 2026-04-19 09:12:08 +01:00
Claude 9a8a1a9f8e
style(sync): replace stray var with const in restartSyncInterval
Lone var in an otherwise let/const file — promote to const since the
value never gets reassigned. No behavior change.
2026-04-19 07:13:47 +00:00
Claude c952156491
refactor(date-picker): group selected-state declarations up top
selectedYear/selectedMonth were declared below selectDay, which writes
to them, and below the nearby isToday declaration. It worked at runtime
because the assignments only run on user click (after script init), but
the split made the initialization order confusing. Group all $state
fields at the top of the script.
Claude 62cf05480d
refactor(tauri): extract join_remote_path helper
Three call sites repeated the same "empty base -> child, otherwise
trim_end + slash + child" pattern. Pull it into a helper to keep the
join convention consistent across list_remote_folder, inspect, and
create_remote_workspace.
2026-04-19 07:12:37 +00:00
Claude e911ac1d94
refactor(tauri): extract credential_domain helper
Three call sites reproduced the same scheme://host parsing inline. Pull
it into a named helper so the domain-extraction convention lives in one
place.
2026-04-19 07:11:53 +00:00
Claude 937b6c2c7d
refactor(storage): read dedup mtimes once instead of in sort closure
sort_by may call the comparator many times, so the previous tiebreaker
re-read each duplicate file's metadata on every comparison. With N
duplicates that's O(N log N) stat calls, and the ordering could flip
mid-sort if a file was touched concurrently. Snapshot mtime per file up
front and sort on the cached values.
2026-04-19 07:09:49 +00:00
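The snapshot-then-sort shape can be sketched as below. `mtime_of` is a stand-in for `fs::metadata(..).modified()`, injected so the sketch runs without touching the filesystem:

```rust
/// Read each file's mtime once into a (mtime, path) pair, then sort on the
/// cached value. The comparator never touches the filesystem, so there is
/// one stat per file instead of O(N log N), and the keys can't change
/// mid-sort.
fn sort_by_cached_mtime(paths: Vec<String>, mtime_of: impl Fn(&str) -> u64) -> Vec<String> {
    let mut keyed: Vec<(u64, String)> =
        paths.into_iter().map(|p| (mtime_of(&p), p)).collect();
    keyed.sort_by_key(|(mtime, _)| *mtime);
    keyed.into_iter().map(|(_, p)| p).collect()
}

fn main() {
    let mtime_of = |p: &str| p.len() as u64; // fake clock for the sketch
    let sorted = sort_by_cached_mtime(vec!["bbb".into(), "a".into(), "cc".into()], mtime_of);
    assert_eq!(sorted, vec!["a".to_string(), "cc".into(), "bbb".into()]);
}
```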
Claude 4e8f7c4536
fix(tauri): reject "/" root path in workspace validation
trim_end_matches('/') collapses "/" to "", which then isn't matched by
the forbidden list, so a root-filesystem workspace slipped through. Keep
"/" as the canonical form when the stripped value is empty.
2026-04-19 07:08:42 +00:00
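The normalization fix is small enough to show directly, under assumed names (`canonical_path` is illustrative):

```rust
/// Stripping trailing slashes collapses "/" to "", which a forbidden-path
/// list keyed on "/" never matches. Restore "/" as the canonical form when
/// the stripped value is empty so the root filesystem can't slip through.
fn canonical_path(raw: &str) -> &str {
    let stripped = raw.trim_end_matches('/');
    if stripped.is_empty() { "/" } else { stripped }
}

fn main() {
    assert_eq!(canonical_path("/"), "/"); // now matches the forbidden list
    assert_eq!(canonical_path("/home/user/tasks/"), "/home/user/tasks");
}
```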
Claude b977d275ba
docs: sync markdown docs with current codebase state
- CLAUDE.md: add `sync` to the CLI commands list (commands/sync.rs exists)
- PLAN.md: remove BottomSheet.svelte (deleted in efb4cca)
- DEVELOPMENT.md: add grouping.ts and paths.ts to the lib directory listing

https://claude.ai/code/session_01YbcpJqmwpEW5tCJFFkMSPZ
2026-04-18 08:53:10 +00:00
SteelDynamite 065118789f
Merge pull request #52 from SteelDynamite/claude/smoke-test-and-fixes-TwfSh 2026-04-18 09:49:21 +01:00
Claude a79dcc4617
test: cover CLI workspace resolver, date picker, saturating version
Add regression tests for the bugs found in this smoke test:

- resolve_workspace: by-name, by-UUID, unknown-identifier, current-fallback,
  actionable no-workspace message.
- DateTimePicker: selected-day highlight must be month-scoped; committing
  after navigating months uses the selected month, not the viewed one.
- create_task: version is saturating_add on u64::MAX (doesn't panic/wrap).

Also fixes the three pre-existing clippy warnings (WorkspaceMode now uses
#[derive(Default)] + #[default], repository test drops unused binding,
sync test uses struct-update syntax instead of field-reassign-default).
2026-04-17 16:32:22 +00:00
Claude efb4ccaaef
chore(cleanup): remove unused BottomSheet component and dead testConnection
BottomSheet.svelte is not imported anywhere — NewTaskInput hand-rolls
its own sheet. SetupScreen had a standalone testConnection() function
that was only ever reachable through connectAndBrowse which calls
test_webdav_connection directly; the standalone variant had no
callers.
2026-04-17 16:29:04 +00:00
Claude f6c8dfc951
fix(cli): create task-edit scratch file with mode 0600 on unix
onyx task edit wrote the task body to /tmp/onyx-<uuid>.md with the
default umask, leaving it world-readable on shared multi-user systems
for the duration of the editor session. Open with O_CREAT|O_TRUNC +
mode 0600 via OpenOptionsExt on unix; Windows keeps the existing
behaviour since unix-style mode bits don't apply.
2026-04-17 16:28:20 +00:00
Claude 3acc4c3f5d
fix(empty-state): replace misleading hint with an actual create button
The no-lists empty state said 'Tap the list name above to create one' —
but there is no list name above, just a static 'Tasks' label. The
actual affordance (+ New list) lives in the drawer, which may not be
open. Add a primary-button shortcut that opens the drawer and puts
focus in the new-list input in one click. Google Tasks workspaces are
read-only so they still get the explanatory text instead.
2026-04-17 16:27:18 +00:00
Claude 391c42aa18
fix(rename): imperatively focus + select rename inputs
Svelte's native autofocus attribute is unreliable for inputs rendered
via conditional blocks (prior smoke-test fixed this for the new-list
input). Apply the same bind:this + $effect pattern to the list-rename
input (TasksScreen) and the workspace-rename input (SettingsScreen),
and select() the existing text so typing replaces the old name
cleanly.
2026-04-17 16:26:29 +00:00
Claude 6283f9ab2c
fix(store): guard fs-changed listener against setup/missing screens
The module-scope fs-changed listener fired unconditionally, calling
loadLists even when the user was on the setup or missing-workspace
screens (where no current workspace exists). The invoke would fail
silently and a WebDAV debounced sync could kick off against an
incomplete state. Bail when there's no active workspace or the tasks
screen isn't mounted.
2026-04-17 16:25:39 +00:00
Claude 5869c305aa
fix(bulk-delete): snapshot targets and bail on first failure
executeDeleteCompleted and executeDeleteCompletedSubtasks iterated over
the reactive completedTasks/completedSubtasks lists with no error
handling: the array shrinks with every successful delete, skipping
subsequent entries, and a failed delete silently left a half-deleted
state. Snapshot the target list up front and abort as soon as a delete
returns false — matching the subtask-cascade path.
2026-04-17 16:25:03 +00:00
Claude d213e523ec
fix(sync): narrow transient-error detection so real errors aren't hidden
The connectivity-vs-real-error classifier tested the message against
/timeout|connect|network|unreachable|refused/i, matching any error
whose text happened to include one of those words. A server-side
permission error like 'network share access refused' was silently
classified as transient, updating only the status dot — the user
never saw the actual problem.

Tighten the regex to well-known connectivity phrases and lowercase
error codes (ENOTFOUND/ECONNREFUSED/etc), using word boundaries so
substrings in unrelated messages don't match.
2026-04-17 16:24:20 +00:00
Claude 0fc1f16c9d
fix(new-task): attach date in a single create_task call to prevent loss
The new-task bottom sheet called createTask then, if a date was set,
made a follow-up updateTask to attach the date. If the update failed
(e.g. filesystem error between the two writes) the user was left with
a dateless task and, because transient sync errors are already
suppressed, often no visible error either.

Extend the create_task Tauri command to accept optional date/has_time
fields and pass them through. The frontend now creates the task in one
round-trip. No separate update path needed.
2026-04-17 16:23:51 +00:00
Claude d01bd9d280
fix(settings): stop clobbering WebDAV edits and save without a successful test
Two coupled issues in workspace settings:

1. The credentials-loading effect re-ran whenever ws.webdav_url changed,
   so any config mutation (e.g. changing sync interval) would trigger a
   re-load of the stored username/password, overwriting whatever the
   user was typing into those fields. Gate with a one-shot credsLoaded
   flag.

2. Save would persist whatever was in the URL input even if the user
   had never tested it — a typo'd host silently pointed the workspace
   at a dead server. Now saveWebdav auto-runs the connection test and
   bails if it fails; any edit to the three inputs clears the "ok"
   status via markDirty() so the next Save is forced to re-verify.

Also replaces the ASCII "Failed -- Retry" with an em dash.
2026-04-17 16:22:31 +00:00
Claude b437b0b7b2
fix(sync): use atomic_write for all payload file writes during sync
Sync's conflict-resolution and download paths wrote the local file with
plain fs::write. A crash or I/O error mid-write left a truncated .md
or .listdata.json that would then fail YAML/JSON parsing on the next
list_tasks. All other callers in this crate use atomic_write; route
the four sync call sites through it for consistency and crash safety.
2026-04-17 16:21:24 +00:00
Claude c134624839
fix(repository): saturating_add for in-memory version bump
create_task used a plain += on the in-memory version returned to the
caller while FileSystemStorage uses saturating_add when serialising
the frontmatter. The two would disagree at u64::MAX, and in debug
builds the + operator would panic on overflow. Match the storage
behaviour.
2026-04-17 16:20:11 +00:00
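The saturating bump that both sides now agree on can be shown in a few lines (`bump_version` is an illustrative name, not the crate's API):

```rust
/// The version bump must saturate so the in-memory value and the
/// serialised frontmatter agree at u64::MAX. A plain `+ 1` would panic
/// in debug builds and wrap to 0 in release builds.
fn bump_version(version: u64) -> u64 {
    version.saturating_add(1)
}

fn main() {
    assert_eq!(bump_version(3), 4);
    assert_eq!(bump_version(u64::MAX), u64::MAX); // pinned, not wrapped
}
```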
Claude f276233be5
fix(tauri): cascade delete must handle the full subtree, not just direct children
delete_task only collected direct children when a parent was deleted,
so grandchildren (and deeper descendants — the data model allows any
depth even though the UI is two-level today) would be left with a
parent_id pointing at a deleted task. Walk the parent-child graph to
collect the full descendant set and delete children before the parent
so a mid-cascade failure can't strand descendants.
2026-04-17 16:19:46 +00:00
Claude df66e7bc98
fix(tauri): add_workspace must initialise the target folder
The frontend currently calls init_workspace before add_workspace, but
the Tauri command itself is trivially breakable by any caller that
skips the pre-step or a future frontend refactor: add_workspace would
save the workspace entry pointing at a non-existent directory, and
every subsequent command would then fail with 'Path does not exist'
via TaskRepository::new. Call TaskRepository::init inside the command
so it is self-contained and idempotent.
2026-04-17 16:19:03 +00:00
Claude 604a6058b8
fix(storage): atomic task-file writes
write_task used plain fs::write for the .md payload even though every
other write path in this module (metadata files, sync state, offline
queue, config) uses atomic_write. A crash mid-write left a truncated
.md file whose malformed YAML frontmatter then failed list_tasks for
the entire list. Route through atomic_write so a failed write either
leaves the old file intact or produces the full new file.
2026-04-17 16:18:09 +00:00
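A minimal sketch of the write-to-temp-then-rename pattern these commits route file writes through. The crate's actual `atomic_write` may differ in detail (fsync, temp-file naming); this shows the crash-safety idea only:

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Write to a sibling temp file, then rename over the target. A crash
/// mid-write only ever corrupts the temp file: the target either keeps
/// its old contents or receives the complete new ones.
fn atomic_write(path: &Path, contents: &[u8]) -> io::Result<()> {
    let tmp = path.with_extension("tmp");
    fs::write(&tmp, contents)?; // partial write only hits the temp file
    if let Err(e) = fs::rename(&tmp, path) {
        let _ = fs::remove_file(&tmp); // clean up on failure
        return Err(e);
    }
    Ok(())
}

fn main() -> io::Result<()> {
    let dir = std::env::temp_dir().join("onyx-atomic-demo");
    fs::create_dir_all(&dir)?;
    let target = dir.join("task.md");
    atomic_write(&target, b"# task\n")?;
    assert_eq!(fs::read(&target)?, b"# task\n");
    fs::remove_dir_all(&dir)?;
    Ok(())
}
```

Note the temp file must live on the same filesystem as the target, since `rename` is only atomic within one filesystem.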
Claude a0e2bb214b
fix(date-picker): don't mark the same day in every month as selected
The day cell class used `selectedDay === day`, ignoring the currently
viewed month/year. After picking e.g. April 15, flipping to May still
painted May 15 as the selected day; committing with Done would shift
the task's date to whatever month the user happened to be viewing.

Track selectedYear/selectedMonth alongside selectedDay, update them
only on actual day click, and construct the committed ISO from the
selection (not the view). The pre-existing isSelected() helper is now
wired into the cell template.
2026-04-17 16:17:36 +00:00
Claude 8a81f05492
fix(cli): print clean error chain instead of anyhow Debug with backtrace
When RUST_BACKTRACE was set in the environment, every user-facing error
dumped a 20-line Rust backtrace at the user — e.g. running 'onyx list
show' with no workspace gave them a stack trace through anyhow, clap,
and libc start. Replace 'fn main() -> Result' with an explicit error
printer that walks the anyhow cause chain using Display, and exits 1.
Programming-bug panics still surface through the default panic handler.
2026-04-17 16:16:52 +00:00
Claude 433a950418
fix(cli): accept workspace name or UUID, auto-select on first add
Three related CLI bugs found during smoke testing:

1. `get_repository` used `config.get_workspace(name)` which expects the
   UUID string, so `onyx list create -w dev` or `onyx task add -w dev`
   always failed with "Workspace 'dev' not found". Unified CLI resolution
   into a single `resolve_workspace()` helper that accepts either the
   display name or the UUID; removed sync.rs's duplicated local copy.

2. `workspace switch`/`remove`/`retarget`/`migrate` only accepted the
   display name — the error message even suggested "Use the workspace ID
   instead" on ambiguous names, but IDs were then rejected. Updated
   `resolve_name` to try the map key first.

3. `onyx workspace add` never set `current_workspace`, so the very next
   command failed with "No workspace set. Use 'onyx init'..." even
   though a workspace was just created. Now sets the new workspace as
   current whenever none was previously selected, and reports the fact.
   Updated the error message to point at the correct `workspace add` /
   `workspace switch` commands instead of `init`.
2026-04-17 16:15:45 +00:00
Claude 855fa46a0e
refactor: simplify forgetMissingWorkspace now that removeWorkspace handles switch
removeWorkspace already switches to the next available workspace (or falls
back to setup). forgetMissingWorkspace can just delegate, dropping the
duplicate branch that previously never ran anyway because current_workspace
was always null after removal.
2026-04-17 16:13:46 +00:00
Claude cdef59fab4
fix: keep user on tasks screen when removing current workspace
When a user deletes the current workspace from settings, the backend
clears current_workspace and the frontend's hasWorkspace derived fell
through to the setup screen — even if the user still had other healthy
workspaces configured. Mirror the forgetMissingWorkspace flow: switch
to the next available workspace automatically.
2026-04-17 16:12:49 +00:00
SteelDynamite 92475483de
Merge pull request #51 from SteelDynamite/claude/dreamy-brown-YlW25 2026-04-17 16:36:33 +01:00
Claude 771e104486
Merge remote-tracking branch 'origin/main' into pr51-merge
# Conflicts:
#	README.md
2026-04-17 15:01:58 +00:00
Claude 7bef6b07bc
docs: sync markdown docs with actual codebase state
- README.md: update Phase 4 status to reflect Android preliminaries done
  (file-watcher gating, tauri-plugin-credentials, safe area insets, Android
  targets configured) but init/build not yet run; add tauri-plugin-credentials
  to project structure; expand docs/ tree; add newer GUI features (workspace
  rename, safe area insets, accessibility); add setup screen screenshot;
  update What's Next to note Phase 4 is in progress
- PLAN.md: fix Phase 4 checkboxes — android init and build-succeeds were
  marked [x] but gen/android/ does not exist; correct cfg gate annotation
  from #[cfg(not(mobile))] to #[cfg(not(target_os = "android"))]; update
  dependency snippet to reflect actual keyring/zeroize/sha2/quick-xml usage;
  bump Last Updated to 2026-04-17
- docs/DEVELOPMENT.md: add WEBKIT_DISABLE_DMABUF_RENDERER=1 Wayland note
  to tauri dev command

https://claude.ai/code/session_01MypN7wPNqeSgw8b5DYpMc1
2026-04-17 14:44:33 +00:00
SteelDynamite 0c2a218260
Merge pull request #50 from SteelDynamite/claude/run-app-screenshot-Z02aY 2026-04-17 15:39:16 +01:00
SteelDynamite 95b89b78e6
Merge pull request #49 from SteelDynamite/claude/dreamy-brown-d12z7 2026-04-17 15:36:21 +01:00
Claude 212e3d43d5
Merge main into claude/dreamy-brown-d12z7
Resolve conflicts against latest main:
- PLAN.md: keep main's updated Settings/theme list (window decorations,
  Black and Gold) while adopting PR's "Move to..." inline phrasing.
- README.md: keep main's theme list including Black and Gold.
- docs/API.md: keep main's atomic move_task documentation.

https://claude.ai/code/session_01NCtJ5PNhaDh21kYnDZXYsN
2026-04-17 14:35:39 +00:00
Claude 67ac43e527
Add Vitest suite covering the smoke-test fixes
Extracts two pure helpers out of Svelte components so they can be
exercised without the reactive runtime, and adds component tests for
ConfirmDialog's Escape-handling behavior.

- apps/tauri/src/lib/grouping.ts (new): `groupTasksByDate` lifted out of
  the `groupedPendingTasks` $derived in the app store.
- apps/tauri/src/lib/paths.ts (new): `workspaceNameFromPath` lifted out
  of SetupScreen.handleOpen.
- apps/tauri/src/lib/grouping.test.ts: 8 cases — "No Date" placed last
  (regression), full bucket ordering, empty input, within-bucket
  stable sort, earlier-today stays in Today, multi-task same-day,
  No Date preserves insertion order.
- apps/tauri/src/lib/paths.test.ts: 8 cases — POSIX/Windows/mixed
  separators, trailing slash regression ("…/Tasks/" → "Tasks"), empty
  and root-only fallback, names with spaces.
- apps/tauri/src/lib/components/ConfirmDialog.test.ts: 6 cases —
  renders message/detail/custom confirm text, Cancel/Confirm fire the
  right callbacks, Escape calls oncancel and does NOT reach an outer
  window listener (regression), non-Escape keys are ignored, and the
  module-level open-count increments/decrements correctly (including
  when two dialogs are mounted at once).

Test harness: Vitest + jsdom + @testing-library/svelte. `npm test`
runs the suite; `resolve.conditions` is set to "browser" under VITEST
so Svelte resolves its client entry and mount() works.

23/23 tests pass. cargo check, cargo test -p onyx-core (162/162),
and npm run build all still green.
2026-04-17 14:33:12 +00:00
Claude 8a04895270
Fix nine GUI bugs found during local-workspace smoke test
- crates/onyx-core/src/webdav.rs: rename `getpassword`/`setpassword`
  (7 call sites) to `get_password`/`set_password` so `cargo build`
  and the CLI compile again under the default `keyring-storage` feature.
- ConfirmDialog.svelte: intercept Escape at window capture phase and
  expose a module-level open-count so TasksScreen's Escape handler can
  defer; previously Escape on a dialog both dismissed the dialog AND
  popped the task-detail view behind it. Cancel is also focused on
  mount for keyboard users.
- TasksScreen.svelte: extend the taskStack cleanup effect to collapse
  back to parent detail when only the subtask is gone (was leaving a
  blank third panel); focus the new-list input when it appears; reset
  the Completed section's expand state when switching lists.
- TaskDetailView.svelte: re-sync local title/description state when
  the task prop's content changes (unless the user is editing), so a
  sync pull doesn't get silently overwritten on next save. Bail out of
  the parent delete if a subtask delete fails instead of orphaning.
- app.svelte.ts: deleteTask now returns a success boolean; move the
  "No Date" group to the end of the grouped-by-date view so Overdue
  and Today surface first.
- SetupScreen.svelte: strip trailing separators before splitting the
  picked folder path so "…/MyTasks/" yields "MyTasks" instead of the
  literal fallback "workspace".

Verified live under Xvfb for the three user-visible cases (ConfirmDialog
Escape, orphan subtask collapse, new-list autofocus). Screenshots in
screenshots/smoke-test/. cargo test --lib -p onyx-core is green
(162/162); npm run build succeeds.
2026-04-17 14:24:59 +00:00
Claude 3b65dc4216
Add smoke-test screenshots demonstrating GUI bugs
Screenshots captured from a seeded local workspace loaded under Xvfb.
Includes working flows (task list, drawer, detail view, group by date)
and four bug demonstrations: Escape on ConfirmDialog pops navigation,
subtask panel orphaned after external delete, new-list input lacks
autofocus.
2026-04-17 13:57:02 +00:00
Claude 9f40061b07
Add screenshot of Tauri app setup screen
Captured by running the Tauri GUI under Xvfb (1024x768x24) with
WEBKIT_DISABLE_DMABUF_RENDERER=1 and WEBKIT_DISABLE_COMPOSITING_MODE=1,
then using ImageMagick `import` against the Onyx X window id.
2026-04-17 12:06:35 +00:00
SteelDynamite aceeac0442
Merge pull request #48 from SteelDynamite/claude/jolly-mendel-Hwl4L 2026-04-16 09:29:57 +01:00
SteelDynamite 707e1ac2e2
Merge pull request #47 from SteelDynamite/claude/dreamy-brown-AVqxJ 2026-04-16 09:28:17 +01:00
Claude 76f5502257
docs: sync all markdown files with current codebase state
- CLAUDE.md: add missing "Black and Gold" theme; add last_sync to WorkspaceConfig description; document swipe-to-toggle gesture
- README.md: replace stale "dark mode" with full theme list; add swipe gestures entry
- PLAN.md: add last_sync to Phase 1 WorkspaceConfig model; update Phase 3 to reflect theme selector/settings/inline move; fix Phase 5 swipe (no delete swipe); fix Phase 7 Google Tasks Tauri command names
- docs/API.md: add rename_list and move_task to TaskRepository API section

https://claude.ai/code/session_01AaJksBkcU94BKzbtPAstmP
2026-04-15 20:51:01 +00:00
50 changed files with 2264 additions and 344 deletions


@@ -1,5 +1,38 @@
# Audit Log
## 2026-04-27
Found and fixed 3 issues:
1. **Perf: needless clone of upload payload** (sync.rs:733) — the `SyncAction::Upload` arm read the file into `data`, computed `compute_checksum(&data)`, then called `client.put_file(path, data.clone())`. The clone existed only because the next statement needed `data.len()` for the sync-state record. Captured `data.len() as u64` into `len` first, moved `data` into `put_file`, and used `len` afterwards — one full byte copy avoided per uploaded file.
2. **Bug: Google Tasks sync silently drops metadata-write failures** (google_tasks.rs:361, 377) — both `.listdata.json` and `.onyx-workspace.json` were written via `if let Ok(meta_content) = serde_json::to_string_pretty(...) { let _ = atomic_write(...); }`, so a serialization or atomic-write error returned `Ok(GoogleSyncResult { downloaded: N, errors: [] })` even though list/workspace ordering was never persisted. Both writes now push their errors into the `errors` vec already returned in `GoogleSyncResult`.
3. **Code quality: unreachable dead-error path in storage dedup** (storage.rs:447) — the dedup loop computed `Option<Task>` from each `by_id` group and then `ok_or_else(|| Error::InvalidData("Empty dedup entries for task"))?`. `by_id` is only populated by `entry(uuid).or_default().push(entry)`, so every group has ≥1 element and the `None` branch is unreachable. Replaced the `Option`+`?` with direct `expect` calls (one per branch) that document the non-empty invariant; the loop now yields `Task` directly.
## 2026-04-25
Found and fixed 3 issues:
1. **Perf: O(n²) deletion-detection in `get_sync_status`** (sync.rs:918) — for every path tracked in `sync_state.files`, the loop scanned `local_files` linearly via `.any(|f| f.path == *path)` to decide whether to count it as a deleted-locally pending change. The earlier "modified or new" loop already used the inverse direction with `sync_state.files.get(...)` (O(1)), so the second loop was the inconsistent one. Built a `HashSet<&str>` of local paths once and used `contains` for the membership check.
2. **Perf: cascade delete walks all_tasks per frontier pop** (tauri/lib.rs:460) — `delete_task`'s descendant BFS scanned the full task list on every parent popped from the frontier, making the work O(n × depth). Built a `parent_id -> [child_id]` `HashMap` once, then the BFS visits each descendant in O(1) amortised, dropping total cost to O(n).
3. **Code quality: duplicate atomic-write in `AppConfig::save_to_file`** (config.rs:114) — the function had its own copy of the temp-file + rename + cleanup-on-failure dance even though `storage::atomic_write` is `pub(crate)` and was already shared by `google_tasks.rs`. Replaced the inline implementation with a call to `crate::storage::atomic_write` so the crate has one canonical atomic write path.
## 2026-04-24
Found and fixed 3 issues:
1. **Bug: orphan base entries never cleaned from sync state** (sync.rs) — when a file was deleted both locally and remotely, `compute_sync_actions` emitted no action (the `(None, None, Some(_))` arm), so the base entry in `.syncstate.json` persisted forever. On each subsequent sync the same no-op case fired and the state file grew. Added `prune_orphan_bases` pass in `sync_workspace_inner` that drops base entries not present in either scan.
2. **Code quality: redundant is_some_and on already-matched Option** (sync.rs:208) — the `(None, Some(_), Some(b))` arm re-checked `remote` via `remote.is_some_and(|r| ...)` even though the pattern had just proven `remote` is `Some(_)`. Bound the inner value with `Some(r)` in the pattern and used `r` directly.
3. **Code quality: single-caller sanitize_filename wrapper** (storage.rs) — `FileSystemStorage::sanitize_filename` was a one-line forwarder to `crate::sanitize_filename` with one call site. Inlined the crate call and removed the method.
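The orphan-base pruning in item 1 can be sketched as a single `retain` pass. The map and set shapes here are assumptions for illustration (the real sync state tracks richer per-file entries), but the rule is the same: a base entry survives only if its path appears in at least one of the two scans.

```rust
use std::collections::{HashMap, HashSet};

/// Drop base entries for paths present in neither the local nor the remote
/// scan — otherwise the (None, None, Some(base)) no-op case leaves stale
/// entries in the sync state forever.
fn prune_orphan_bases(
    bases: &mut HashMap<String, String>, // path -> checksum (illustrative)
    local: &HashSet<String>,
    remote: &HashSet<String>,
) {
    bases.retain(|path, _| local.contains(path) || remote.contains(path));
}
```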
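The pattern-binding cleanup in item 2 looks like this in miniature. The `Option<u32>` triple is a stand-in for the real `(local, remote, base)` sync types: once the arm matches `Some(r)`, the guard can use `r` directly, so the old `remote.is_some_and(|r| ...)` re-check was dead weight.

```rust
/// Illustrative three-way classification; not the real sync arms.
fn classify(local: Option<u32>, remote: Option<u32>, base: Option<u32>) -> &'static str {
    match (local, remote, base) {
        // The guard uses `r` directly because the pattern already proved
        // `remote` is `Some` — no `is_some_and` re-check needed.
        (None, Some(r), Some(b)) if r != b => "deleted-locally, changed-remotely",
        (None, Some(_), Some(_)) => "deleted-locally, remote-unchanged",
        _ => "other",
    }
}
```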
## 2026-04-20
Found and fixed 4 issues:
1. **Dead code in conflict recovery** (sync.rs:756) — `parts[1] != ".listdata.json"` was unreachable because the branch is already gated on `parts[1].ends_with(".md")`, which `.listdata.json` cannot satisfy. Removed the redundant check.
2. **O(n²) cascade delete** (tauri/lib.rs) — descendant traversal in `delete_task` used `Vec::contains` inside the inner loop, making it quadratic in the number of tasks per list. Swapped the visited set to `HashSet`; `HashSet::insert` folds the contains+push into one call.
3. **Silent cascade failure in toggle_task** (tauri/lib.rs) — subtask `update_task` errors were discarded with `let _ = ...`, leaving subtasks stuck at the old status with no UI feedback. Propagated the error so the frontend can surface it.
4. **Duplicated UUID-parse boilerplate** (tauri/lib.rs) — 17 commands repeated `Uuid::parse_str(&x).map_err(|e| e.to_string())?`. Extracted a `parse_uuid` helper so callers read as `let id = parse_uuid(&list_id)?;`.
## 2026-04-15
Found and fixed 4 issues:


@ -30,7 +30,7 @@ The Tauri dev server runs on port 1422 (`vite.config.ts` and `tauri.conf.json`).
Two-crate workspace (`resolver = "2"`, edition 2021) plus a Tauri app:
- **onyx-core** — Pure Rust library. Storage trait with `FileSystemStorage` implementation, `TaskRepository` (main API), data models, config, error types. No CLI/UI dependencies. `keyring` feature-gated behind `keyring-storage` (default on) for Android compatibility.
- **onyx-cli** — CLI frontend using clap. Commands are in `src/commands/` (init, workspace, list, task, group). Output formatting in `src/output.rs`.
- **onyx-cli** — CLI frontend using clap. Commands are in `src/commands/` (init, workspace, list, task, group, sync). Output formatting in `src/output.rs`.
- **apps/tauri/** — Tauri v2 GUI. Svelte 5 frontend in `src/`, Rust backend in `src-tauri/` with Tauri commands that call into `onyx-core`. `notify` crate feature-gated for Android. `tauri-plugin-credentials/` provides cross-platform credential storage (Android Keystore via EncryptedSharedPreferences, desktop via keyring crate).
### Key patterns
@ -64,7 +64,7 @@ The GUI uses Svelte 5 runes mode (`$state`, `$derived`, `$effect`, `$props()`).
Pre-alpha. No users, no released builds, no data to migrate. Breaking changes to on-disk formats, config structure, or sync conventions are free — do not add migration logic.
### Current state (2026-04-15)
### Current state (2026-04-27)
- **Phase 1** (Core + CLI): Complete
- **Phase 2** (WebDAV sync): Complete — remote folder browsing, checksum-based conflict resolution, auto-sync lifecycle, per-workspace sync interval
@ -80,7 +80,7 @@ Pre-alpha. No users, no released builds, no data to migrate. Breaking changes to
- Sliding lists drawer with checkmark selection
- Settings popup overlay
- Workspace switcher drop-up with add/remove
- Per-workspace theme system (System default, Light, Dark, Nord, Dracula, Solarized Dark, Ink) via CSS `data-theme` attribute
- Per-workspace theme system (System default, Light, Dark, Nord, Dracula, Solarized Dark, Black and Gold, Ink) via CSS `data-theme` attribute
- Completed tasks section with animated show/hide
- Date picker/editor (DateTimePicker in new task + task detail); `has_time: bool` field tracks whether time is set
- Move task between lists (inline list in kebab menu, no submenu)
@ -106,8 +106,9 @@ Pre-alpha. No users, no released builds, no data to migrate. Breaking changes to
- Task deduplication on load (handles sync conflict duplicates)
- Subtask hierarchy: subtask count shown on parent tasks in list, subtask detail via three-panel slide navigation, inline add at top of subtask list (new subtasks prepend), collapsible completed subtasks section, cascade delete (parent deletion removes all subtasks with confirmation warning)
- Custom confirmation dialogs (ConfirmDialog component replaces native confirm())
- Workspace path validation (rejects system directories)
- Workspace path validation (rejects filesystem root `/` and system directories: `/etc`, `/usr`, `/bin`, `/sbin`, `/var`, `/proc`, `/sys`, `/dev`)
- Task detail auto-cleanup (taskStack clears when viewed task is deleted or list switches)
- Swipe gestures on mobile: swipe left/right on a task to toggle completion (swipe direction depends on current status)
- Accessibility: ARIA labels/roles on interactive components, keyboard handlers, `prefers-reduced-motion` CSS support
### GUI features NOT yet done

PLAN.md

@ -532,8 +532,11 @@ pub fn delete_credentials(domain: &str) -> Result<()>;
Add to `onyx-core/Cargo.toml`:
```toml
reqwest = { version = "0.12", features = ["json", "rustls-tls"] }
keyring = "3.0"
# TODO: Evaluate dav-client or implement custom WebDAV
keyring = { version = "3", features = ["apple-native", "windows-native", "sync-secret-service"], optional = true }
zeroize = "1"
sha2 = "0.10"
quick-xml = "0.36"
# WebDAV implemented as custom client using reqwest + quick-xml for PROPFIND parsing
```
### Features
@ -668,7 +671,6 @@ apps/tauri/
│ │ ├── TaskItem.svelte
│ │ ├── NewTaskInput.svelte
│ │ ├── TaskDetailView.svelte
│ │ ├── BottomSheet.svelte
│ │ ├── ConfirmDialog.svelte
│ │ └── DateTimePicker.svelte
│ └── stores/
@ -753,8 +755,8 @@ WorkspaceConfig {
- [x] Mark tasks complete/incomplete with animated transitions
- [x] Drag-and-drop task reordering
- [x] Sliding lists drawer (80cqi wide, left side)
- [x] Settings popup overlay (WebDAV config, theme selector)
- [x] Per-workspace theme system (System default, Light, Dark, Nord, Dracula, Solarized Dark, Ink)
- [x] Settings popup overlay (WebDAV config, theme selector, window decorations)
- [x] Per-workspace theme system (System default, Light, Dark, Nord, Dracula, Solarized Dark, Black and Gold, Ink)
- [x] Animated completed section show/hide
- [x] Move task between lists (kebab menu → "Move to..." inline list in task detail view, not a submenu)
- [x] Optional time on due dates (`has_time: bool` field on Task with `#[serde(default)]` for backward compat; replaces the hours==0 heuristic)
@ -763,7 +765,7 @@ WorkspaceConfig {
- [x] List rename (inline input via list kebab menu in drawer)
- [x] Keyboard shortcuts (Escape closes settings → detail → drawer → menus in priority order)
- [x] Sync status indicators (last-sync time + upload/download counts chip in TasksScreen)
- [x] Push/pull sync mode selection (session-only sync direction selector in SettingsScreen)
- [ ] Push/pull sync mode selection (session-only sync direction selector in SettingsScreen)
- [x] Group-by-date toggle per list (checkmark toggle in list kebab menu)
- [x] Subtask hierarchy (expand/collapse, inline add, cascade toggle/delete)
- [ ] Search/filter tasks
@ -844,11 +846,11 @@ npm run tauri ios build
#### Features
- [x] Gate file-watcher initialization behind `#[cfg(not(mobile))]`
- [x] Gate file-watcher initialization behind `#[cfg(not(target_os = "android"))]`
- [x] Install Android Studio + NDK, configure env vars
- [x] Add Android Rust targets
- [x] `npm run tauri android init` (generates `gen/android/`)
- [x] Confirm `npm run tauri android build` succeeds
- [ ] `npm run tauri android init` (generates `gen/android/`)
- [ ] Confirm `npm run tauri android build` succeeds
- [ ] Basic smoke test: app launches, workspace setup, create a task
- [ ] Set up macOS CI for iOS builds
- [ ] `npm run tauri ios init` (generates `gen/ios/`)
@ -911,7 +913,8 @@ npm run tauri ios build
- [ ] Multiple windows (optional)
#### Mobile-Specific
- [x] Swipe gestures (swipe to complete, swipe to delete)
- [x] Swipe gestures (swipe to toggle completion; direction depends on current task status)
- [ ] Swipe to delete
- [ ] Pull-to-refresh
- [ ] Touch-optimized UI elements
- [ ] Larger touch targets
@ -1055,6 +1058,6 @@ This project is free and open-source software licensed under GPL v3.
---
**Last Updated**: 2026-04-15
**Document Version**: 4.3
**Last Updated**: 2026-04-27
**Document Version**: 4.5
**Status**: Ready to Implement - Milestone-Driven Plan


@ -2,6 +2,8 @@
A **local-first, cross-platform tasks application** built with Rust. Inspired by Google Tasks, designed for speed and flexibility.
![Onyx setup screen](screenshot.png)
## Core Principles
- **Local-First**: Your data, your folder, your control
@ -21,7 +23,10 @@ onyx/
│ └── onyx-cli/ # CLI frontend
├── apps/
│ └── tauri/ # Tauri v2 GUI (Svelte 5 + Tailwind CSS 4)
│ └── tauri-plugin-credentials/ # Cross-platform credential storage plugin
└── docs/
├── API.md # Core library API reference
└── DEVELOPMENT.md # Development guide
```
## Project Status
@ -29,7 +34,7 @@ onyx/
- **Phase 1** (Core + CLI): Complete
- **Phase 2** (WebDAV Sync): Complete — backend, CLI, and GUI all wired
- **Phase 3** (GUI MVP): Complete
- **Phase 4** (Mobile): In progress — Android preliminaries done (file-watcher gating, tauri-plugin-credentials, safe area insets, Android targets configured); needs build verification and iOS setup
- **Phase 4** (Mobile): In progress — Android preliminaries done (file-watcher gating, `tauri-plugin-credentials`, safe area insets, Android targets configured); needs `tauri android init`, build verification, and iOS setup
### Core Library (`onyx-core`)
- Data models (Task, TaskList, AppConfig, WorkspaceConfig)
@ -55,16 +60,19 @@ onyx/
- Drag-and-drop reordering
- Sliding lists drawer, settings popup
- Workspace switcher with add/remove
- Per-workspace theme system (System default, Light, Dark, Nord, Dracula, Solarized Dark, Ink)
- Per-workspace theme system (System default, Light, Dark, Nord, Dracula, Solarized Dark, Black and Gold, Ink)
- Due date picker/editor with optional time
- Subtask hierarchy with three-panel slide navigation
- Move tasks between lists
- List rename, group-by-date toggle, delete completed tasks
- List rename, workspace rename, group-by-date toggle, delete completed tasks
- Keyboard shortcuts (Escape priority chain)
- WebDAV setup flow with credential auto-population
- File watcher (auto-reloads on external changes)
- Auto-sync with configurable interval, status indicators
- Swipe gestures on mobile (swipe to toggle completion)
- Custom confirmation dialogs
- Safe area insets for mobile (viewport-fit=cover)
- Accessibility: ARIA labels/roles, keyboard handlers, `prefers-reduced-motion` support
- Desktop packaging (Linux: AppImage + .deb; Windows: MSI)
## Development Setup
@ -212,8 +220,8 @@ cargo test -- --nocapture
## What's Next?
- **Phase 4**: Mobile support (iOS & Android via Tauri v2 mobile)
- **Phase 5**: GUI advanced features (rich markdown editor, search/filter)
- **Phase 4** (in progress): Complete Android build (`tauri android init` + verification), iOS setup on macOS CI
- **Phase 5**: GUI advanced features (rich markdown editor, search/filter, change storage folder)
- **Phase 6**: Mobile polish and platform-specific integrations
- **Phase 7**: Google Tasks importer and unique features

File diff suppressed because it is too large


@ -7,16 +7,23 @@
"dev": "vite",
"build": "vite build",
"preview": "vite preview",
"tauri": "tauri"
"tauri": "tauri",
"test": "vitest run",
"test:watch": "vitest"
},
"devDependencies": {
"@sveltejs/vite-plugin-svelte": "^5.0.0",
"@tailwindcss/vite": "^4.0.0",
"@tauri-apps/cli": "^2.0.0",
"@testing-library/jest-dom": "^6.9.1",
"@testing-library/svelte": "^5.3.1",
"@testing-library/user-event": "^14.6.1",
"jsdom": "^29.0.2",
"svelte": "^5.0.0",
"tailwindcss": "^4.0.0",
"typescript": "^5.6.0",
"vite": "^6.0.0"
"vite": "^6.0.0",
"vitest": "^4.1.4"
},
"dependencies": {
"@tauri-apps/api": "^2.0.0",


@ -60,6 +60,11 @@ fn lock_state(state: &Mutex<AppState>) -> Result<std::sync::MutexGuard<'_, AppSt
state.lock().map_err(|e| format!("State lock poisoned: {}", e))
}
/// Parse a UUID from a string, converting errors to the String format Tauri commands use.
fn parse_uuid(s: &str) -> Result<Uuid, String> {
Uuid::parse_str(s).map_err(|e| e.to_string())
}
impl AppState {
/// Persist config to disk, converting errors to String for Tauri commands.
fn save_config(&self) -> Result<(), String> {
@ -67,6 +72,25 @@ impl AppState {
}
}
/// Extract the hostname from a URL (scheme://host/...), used as the credential key.
/// Returns an empty string if the URL has no scheme or host.
fn credential_domain(url: &str) -> String {
url.split("://")
.nth(1)
.and_then(|rest| rest.split('/').next())
.unwrap_or("")
.to_string()
}
/// Join a remote base directory with a child path, handling empty base and trailing slashes.
fn join_remote_path(base: &str, child: &str) -> String {
if base.is_empty() {
child.to_string()
} else {
format!("{}/{}", base.trim_end_matches('/'), child)
}
}
/// Validate that a workspace path is a reasonable directory and not a system path.
fn validate_workspace_path(path: &str) -> Result<(), String> {
let p = PathBuf::from(path);
@ -79,7 +103,10 @@ fn validate_workspace_path(path: &str) -> Result<(), String> {
#[cfg(unix)]
{
let forbidden = ["/", "/etc", "/usr", "/bin", "/sbin", "/var", "/proc", "/sys", "/dev"];
// Strip trailing slashes, but keep "/" itself — trim_end_matches would
// collapse it to "" and slip past the forbidden check.
let canonical = normalized.trim_end_matches('/');
let canonical = if canonical.is_empty() { "/" } else { canonical };
if forbidden.contains(&canonical) {
return Err(format!("Cannot use system directory as workspace: {}", path));
}
@ -179,6 +206,13 @@ fn add_workspace(
state: State<'_, Mutex<AppState>>,
) -> Result<(), String> {
validate_workspace_path(&path)?;
// Ensure the path exists and is a valid workspace before persisting the
// config. Without this, calling add_workspace directly on a missing
// directory would save the workspace but every subsequent ensure_repo
// call would fail with "Path does not exist".
TaskRepository::init(PathBuf::from(&path))
.map(|_| ())
.map_err(|e| e.to_string())?;
let mut s = lock_state(&state)?;
let ws = WorkspaceConfig::new(name, PathBuf::from(&path));
let id = s.config.add_workspace(ws);
@ -256,10 +290,7 @@ async fn rename_workspace(
let base_url = webdav_url.as_deref().ok_or("No WebDAV URL configured")?;
let remote_path = webdav_path.as_deref().unwrap_or("");
let domain = base_url
.split("://").nth(1)
.and_then(|rest| rest.split('/').next())
.unwrap_or("").to_string();
let domain = credential_domain(base_url);
let creds = app_handle.state::<Credentials<tauri::Wry>>();
let (username, password) = creds.load(&domain)?;
@ -340,7 +371,7 @@ fn delete_list(
let mut s = lock_state(&state)?;
ensure_repo(&mut s)?;
mute_watcher(&mut s);
let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
let id = parse_uuid(&list_id)?;
repo_mut(&mut s)?
.delete_list(id)
.map_err(|e| e.to_string())
@ -355,7 +386,7 @@ fn list_tasks(
) -> Result<Vec<Task>, String> {
let mut s = lock_state(&state)?;
ensure_repo(&mut s)?;
let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
let id = parse_uuid(&list_id)?;
repo_ref(&s)?
.list_tasks(id)
.map_err(|e| e.to_string())
@ -367,20 +398,27 @@ fn create_task(
title: String,
description: Option<String>,
parent_id: Option<String>,
date: Option<chrono::DateTime<chrono::Utc>>,
has_time: Option<bool>,
state: State<'_, Mutex<AppState>>,
) -> Result<Task, String> {
let mut s = lock_state(&state)?;
ensure_repo(&mut s)?;
mute_watcher(&mut s);
let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
let id = parse_uuid(&list_id)?;
let mut task = Task::new(title);
if let Some(desc) = description.filter(|d| !d.is_empty()) {
task.description = desc;
}
if let Some(pid) = parent_id {
let parent_uuid = Uuid::parse_str(&pid).map_err(|e| e.to_string())?;
let parent_uuid = parse_uuid(&pid)?;
task.parent_id = Some(parent_uuid);
}
// Accept the date fields at creation time so callers don't have to do a
// second update() round-trip just to attach a date — which previously
// dropped the date entirely if the follow-up update failed.
task.date = date;
task.has_time = has_time.unwrap_or(false);
repo_mut(&mut s)?
.create_task(id, task)
.map_err(|e| e.to_string())
@ -395,7 +433,7 @@ fn update_task(
let mut s = lock_state(&state)?;
ensure_repo(&mut s)?;
mute_watcher(&mut s);
let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
let id = parse_uuid(&list_id)?;
repo_mut(&mut s)?
.update_task(id, task)
.map_err(|e| e.to_string())
@ -410,17 +448,36 @@ fn delete_task(
let mut s = lock_state(&state)?;
ensure_repo(&mut s)?;
mute_watcher(&mut s);
let lid = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
let tid = Uuid::parse_str(&task_id).map_err(|e| e.to_string())?;
let lid = parse_uuid(&list_id)?;
let tid = parse_uuid(&task_id)?;
let repo = repo_mut(&mut s)?;
// Cascade-delete subtasks first
// Cascade-delete the full descendant subtree (not just direct children)
// so deleting a parent can't leave grandchildren orphaned with a
// parent_id pointing at a deleted task.
let all_tasks = repo.list_tasks(lid).map_err(|e| e.to_string())?;
let child_ids: Vec<Uuid> = all_tasks
.iter()
.filter(|t| t.parent_id == Some(tid))
.map(|t| t.id)
.collect();
for child_id in child_ids {
// Build a parent -> children index in one pass so the BFS below is O(n)
// instead of O(n * depth) scanning all tasks for each frontier pop.
let mut children_by_parent: std::collections::HashMap<Uuid, Vec<Uuid>> =
std::collections::HashMap::new();
for t in &all_tasks {
if let Some(pid) = t.parent_id {
children_by_parent.entry(pid).or_default().push(t.id);
}
}
let mut to_delete: std::collections::HashSet<Uuid> = std::collections::HashSet::new();
let mut frontier: Vec<Uuid> = vec![tid];
while let Some(parent) = frontier.pop() {
if let Some(children) = children_by_parent.get(&parent) {
for &child_id in children {
if to_delete.insert(child_id) {
frontier.push(child_id);
}
}
}
}
// Delete children before the parent so a mid-cascade failure doesn't
// leave the parent removed but descendants stranded.
for child_id in to_delete {
repo.delete_task(lid, child_id).map_err(|e| format!("Failed to delete subtask {}: {}", child_id, e))?;
}
repo.delete_task(lid, tid)
@ -436,8 +493,8 @@ fn toggle_task(
let mut s = lock_state(&state)?;
ensure_repo(&mut s)?;
mute_watcher(&mut s);
let lid = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
let tid = Uuid::parse_str(&task_id).map_err(|e| e.to_string())?;
let lid = parse_uuid(&list_id)?;
let tid = parse_uuid(&task_id)?;
let repo = repo_mut(&mut s)?;
let mut task = repo.get_task(lid, tid).map_err(|e| e.to_string())?;
match task.status {
@ -454,7 +511,9 @@ fn toggle_task(
TaskStatus::Backlog => child.uncomplete(),
TaskStatus::Completed => child.complete(),
}
let _ = repo.update_task(lid, child);
let child_id = child.id;
repo.update_task(lid, child)
.map_err(|e| format!("Failed to cascade to subtask {}: {}", child_id, e))?;
}
}
Ok(task)
@ -470,8 +529,8 @@ fn reorder_task(
let mut s = lock_state(&state)?;
ensure_repo(&mut s)?;
mute_watcher(&mut s);
let lid = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
let tid = Uuid::parse_str(&task_id).map_err(|e| e.to_string())?;
let lid = parse_uuid(&list_id)?;
let tid = parse_uuid(&task_id)?;
repo_mut(&mut s)?
.reorder_task(lid, tid, new_position)
.map_err(|e| e.to_string())
@ -489,9 +548,9 @@ fn move_task(
let mut s = lock_state(&state)?;
ensure_repo(&mut s)?;
mute_watcher(&mut s);
let from = Uuid::parse_str(&from_list_id).map_err(|e| e.to_string())?;
let to = Uuid::parse_str(&to_list_id).map_err(|e| e.to_string())?;
let tid = Uuid::parse_str(&task_id).map_err(|e| e.to_string())?;
let from = parse_uuid(&from_list_id)?;
let to = parse_uuid(&to_list_id)?;
let tid = parse_uuid(&task_id)?;
repo_mut(&mut s)?
.move_task(from, to, tid)
.map_err(|e| e.to_string())
@ -506,7 +565,7 @@ fn rename_list(
let mut s = lock_state(&state)?;
ensure_repo(&mut s)?;
mute_watcher(&mut s);
let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
let id = parse_uuid(&list_id)?;
repo_mut(&mut s)?
.rename_list(id, new_name)
.map_err(|e| e.to_string())
@ -521,7 +580,7 @@ fn set_group_by_date(
let mut s = lock_state(&state)?;
ensure_repo(&mut s)?;
mute_watcher(&mut s);
let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
let id = parse_uuid(&list_id)?;
repo_mut(&mut s)?
.set_group_by_date(id, enabled)
.map_err(|e| e.to_string())
@ -534,7 +593,7 @@ fn get_group_by_date(
) -> Result<bool, String> {
let mut s = lock_state(&state)?;
ensure_repo(&mut s)?;
let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
let id = parse_uuid(&list_id)?;
repo_ref(&s)?
.get_group_by_date(id)
.map_err(|e| e.to_string())
@ -622,10 +681,9 @@ async fn list_remote_folder(
let dir_entries: Vec<_> = entries.into_iter().filter(|e| e.is_dir).collect();
// Check all subfolders for .onyx-workspace.json in parallel
let sub_paths: Vec<_> = dir_entries.iter().map(|entry| {
if path.is_empty() { entry.path.clone() }
else { format!("{}/{}", path.trim_end_matches('/'), entry.path) }
}).collect();
let sub_paths: Vec<_> = dir_entries.iter()
.map(|entry| join_remote_path(&path, &entry.path))
.collect();
let checks: Vec<_> = sub_paths.iter().map(|sp| {
client.list_files(sp)
}).collect();
@ -657,11 +715,7 @@ async fn inspect_remote_workspace(
let mut lists = Vec::new();
for entry in entries {
if !entry.is_dir { continue; }
let list_path = if path.is_empty() {
entry.path.clone()
} else {
format!("{}/{}", path.trim_end_matches('/'), entry.path)
};
let list_path = join_remote_path(&path, &entry.path);
let files = client.list_files(&list_path).await.unwrap_or_else(|e| {
eprintln!("Warning: failed to list remote folder '{}': {}", list_path, e);
Vec::new()
@ -697,11 +751,7 @@ async fn create_remote_workspace(
"list_order": [],
"last_opened_list": null,
});
let file_path = if path.is_empty() {
".onyx-workspace.json".to_string()
} else {
format!("{}/{}", path.trim_end_matches('/'), ".onyx-workspace.json")
};
let file_path = join_remote_path(&path, ".onyx-workspace.json");
client.put_file(&file_path, serde_json::to_string_pretty(&metadata).map_err(|e| e.to_string())?.into_bytes())
.await
.map_err(|e| e.to_string())?;
@ -735,12 +785,7 @@ fn add_webdav_workspace(
s.repo = None;
// Store credentials keyed by hostname
let domain = webdav_url
.split("://")
.nth(1)
.and_then(|rest| rest.split('/').next())
.unwrap_or("")
.to_string();
let domain = credential_domain(&webdav_url);
s.save_config()?;
drop(s);
let creds = app_handle.state::<Credentials<tauri::Wry>>();
@ -803,12 +848,7 @@ async fn sync_workspace(
};
// Step 2: load credentials
let domain = webdav_url
.split("://")
.nth(1)
.and_then(|rest| rest.split('/').next())
.unwrap_or("")
.to_string();
let domain = credential_domain(&webdav_url);
let creds = app_handle.state::<Credentials<tauri::Wry>>();
let (username, password) = creds.load(&domain)?;


@ -1,42 +0,0 @@
<script lang="ts">
import type { Snippet } from "svelte";
let { onclose, children }: { onclose: () => void; children: Snippet } = $props();
</script>
<!-- Backdrop -->
<div
class="fixed inset-0 z-40 bg-black/40"
role="button"
tabindex="-1"
aria-label="Close sheet"
onclick={onclose}
onkeydown={(e) => { if (e.key === "Escape") onclose(); }}
></div>
<!-- Sheet -->
<div
role="dialog"
aria-modal="true"
class="fixed bottom-0 left-0 right-0 z-50 max-h-[70vh] overflow-y-auto rounded-t-2xl bg-surface-light shadow-xl dark:bg-card-dark animate-slide-up"
>
<!-- Drag handle -->
<div class="flex justify-center py-2">
<div class="h-1 w-8 rounded-full bg-gray-300 dark:bg-gray-600"></div>
</div>
{@render children()}
<div class="h-[env(safe-area-inset-bottom)]"></div>
</div>
<style>
@keyframes slide-up {
from {
transform: translateY(100%);
}
to {
transform: translateY(0);
}
}
.animate-slide-up {
animation: slide-up 0.25s ease-out;
}
</style>


@ -1,6 +1,43 @@
<script lang="ts" module>
// Shared counter so sibling Escape handlers (e.g. TasksScreen's svelte:window
// listener) can tell when a ConfirmDialog is open and defer to it instead of
// popping the task-detail view behind the dialog.
let openCount = $state(0);
export function isConfirmDialogOpen(): boolean {
return openCount > 0;
}
</script>
<script lang="ts">
import { onMount, onDestroy, tick } from "svelte";
let { message, detail, confirmText = "Confirm", danger = false, onconfirm, oncancel }:
{ message: string; detail?: string; confirmText?: string; danger?: boolean; onconfirm: () => void; oncancel: () => void } = $props();
let cancelBtn: HTMLButtonElement | undefined = $state();
function handleGlobalKeydown(e: KeyboardEvent) {
if (e.key !== "Escape") return;
e.stopPropagation();
e.stopImmediatePropagation();
e.preventDefault();
oncancel();
}
onMount(() => {
openCount += 1;
// Focus Cancel so Escape/Enter go through the dialog's own keydown handler
// (which cancels) instead of leaking to the global svelte:window listener
// in TasksScreen (which would pop the task detail view).
tick().then(() => cancelBtn?.focus());
// Belt-and-suspenders: capture-phase listener dismisses even if focus
// didn't land on Cancel (e.g. under test harnesses or headless compositors).
window.addEventListener("keydown", handleGlobalKeydown, true);
});
onDestroy(() => {
openCount -= 1;
window.removeEventListener("keydown", handleGlobalKeydown, true);
});
</script>
<div
@ -23,6 +60,7 @@
{/if}
<div class="mt-4 flex justify-end gap-2">
<button
bind:this={cancelBtn}
onclick={oncancel}
class="rounded-lg px-4 py-2 text-sm hover:bg-black/5 dark:hover:bg-white/10"
>


@ -0,0 +1,105 @@
import { describe, it, expect, vi, beforeEach } from "vitest";
import { render, screen, cleanup } from "@testing-library/svelte";
import userEvent from "@testing-library/user-event";
import ConfirmDialog, { isConfirmDialogOpen } from "./ConfirmDialog.svelte";
beforeEach(() => {
cleanup();
});
describe("ConfirmDialog", () => {
it("renders the message, detail and custom confirm label", () => {
render(ConfirmDialog, {
message: "Delete task?",
detail: "This cannot be undone.",
confirmText: "Delete",
onconfirm: vi.fn(),
oncancel: vi.fn(),
});
expect(screen.getByText("Delete task?")).toBeInTheDocument();
expect(screen.getByText("This cannot be undone.")).toBeInTheDocument();
expect(screen.getByRole("button", { name: "Delete" })).toBeInTheDocument();
expect(screen.getByRole("button", { name: "Cancel" })).toBeInTheDocument();
});
it("fires oncancel when Cancel is clicked", async () => {
const user = userEvent.setup();
const oncancel = vi.fn();
render(ConfirmDialog, {
message: "Delete?",
onconfirm: vi.fn(),
oncancel,
});
await user.click(screen.getByRole("button", { name: "Cancel" }));
expect(oncancel).toHaveBeenCalledTimes(1);
});
it("fires onconfirm when Confirm is clicked and not oncancel", async () => {
const user = userEvent.setup();
const onconfirm = vi.fn();
const oncancel = vi.fn();
render(ConfirmDialog, {
message: "Delete?",
confirmText: "Delete",
onconfirm,
oncancel,
});
await user.click(screen.getByRole("button", { name: "Delete" }));
expect(onconfirm).toHaveBeenCalledTimes(1);
expect(oncancel).not.toHaveBeenCalled();
});
it("cancels and stops propagation on Escape (regression: used to bubble and pop task detail)", async () => {
const oncancel = vi.fn();
// An outer bubble-phase listener emulates TasksScreen's svelte:window
// Escape handler. If the dialog leaks Escape, this spy fires too.
const outer = vi.fn();
window.addEventListener("keydown", outer);
try {
render(ConfirmDialog, {
message: "Delete?",
onconfirm: vi.fn(),
oncancel,
});
window.dispatchEvent(new KeyboardEvent("keydown", { key: "Escape", bubbles: true, cancelable: true }));
expect(oncancel).toHaveBeenCalledTimes(1);
expect(outer).not.toHaveBeenCalled();
} finally {
window.removeEventListener("keydown", outer);
}
});
it("ignores non-Escape keydowns", async () => {
const oncancel = vi.fn();
render(ConfirmDialog, {
message: "Delete?",
onconfirm: vi.fn(),
oncancel,
});
window.dispatchEvent(new KeyboardEvent("keydown", { key: "a" }));
window.dispatchEvent(new KeyboardEvent("keydown", { key: "Enter" }));
expect(oncancel).not.toHaveBeenCalled();
});
it("increments the open-count singleton so parent Escape handlers can defer", () => {
expect(isConfirmDialogOpen()).toBe(false);
const { unmount } = render(ConfirmDialog, {
message: "Delete?",
onconfirm: vi.fn(),
oncancel: vi.fn(),
});
expect(isConfirmDialogOpen()).toBe(true);
unmount();
expect(isConfirmDialogOpen()).toBe(false);
});
it("tracks multiple concurrently-mounted dialogs and releases on unmount", () => {
const a = render(ConfirmDialog, { message: "A?", onconfirm: vi.fn(), oncancel: vi.fn() });
const b = render(ConfirmDialog, { message: "B?", onconfirm: vi.fn(), oncancel: vi.fn() });
expect(isConfirmDialogOpen()).toBe(true);
a.unmount();
expect(isConfirmDialogOpen()).toBe(true);
b.unmount();
expect(isConfirmDialogOpen()).toBe(false);
});
});


@ -13,6 +13,8 @@
let viewYear = $state(existing ? existing.getFullYear() : now.getFullYear());
let viewMonth = $state(existing ? existing.getMonth() : now.getMonth());
let selectedDay = $state(existing ? existing.getDate() : now.getDate());
let selectedYear = $state(existing ? existing.getFullYear() : now.getFullYear());
let selectedMonth = $state(existing ? existing.getMonth() : now.getMonth());
let includeTime = $state(has_time);
let selectedHour = $state(existing ? existing.getHours() : now.getHours());
let selectedMinute = $state(existing ? existing.getMinutes() : 0);
@ -50,6 +52,8 @@
function selectDay(day: number) {
selectedDay = day;
selectedYear = viewYear;
selectedMonth = viewMonth;
}
function isToday(day: number): boolean {
@ -57,16 +61,16 @@
}
function isSelected(day: number): boolean {
return selectedDay === day && (!value || (() => {
const v = new Date(value);
return v.getFullYear() === viewYear && v.getMonth() === viewMonth;
})());
return selectedDay === day && selectedYear === viewYear && selectedMonth === viewMonth;
}
function done() {
const h = includeTime ? selectedHour : 0;
const m = includeTime ? selectedMinute : 0;
const iso = new Date(viewYear, viewMonth, selectedDay, h, m).toISOString();
// Commit based on the last-selected year/month, not the currently-viewed
// ones — users can navigate months after selecting a day without
// accidentally shifting the chosen date to the viewed month.
const iso = new Date(selectedYear, selectedMonth, selectedDay, h, m).toISOString();
onchange(iso, includeTime);
dismiss();
}
@ -129,9 +133,9 @@
<button
onclick={() => selectDay(day)}
class="mx-auto flex h-8 w-8 items-center justify-center rounded-full text-sm transition-colors
{selectedDay === day ? 'bg-primary text-white' : ''}
{isToday(day) && selectedDay !== day ? 'font-bold text-primary' : ''}
{selectedDay !== day && !isToday(day) ? 'hover:bg-black/5 dark:hover:bg-white/10' : ''}"
{isSelected(day) ? 'bg-primary text-white' : ''}
{isToday(day) && !isSelected(day) ? 'font-bold text-primary' : ''}
{!isSelected(day) && !isToday(day) ? 'hover:bg-black/5 dark:hover:bg-white/10' : ''}"
>
{day}
</button>


@ -0,0 +1,74 @@
import { describe, it, expect, vi, beforeEach } from "vitest";
import { render, screen, cleanup } from "@testing-library/svelte";
import userEvent from "@testing-library/user-event";
import DateTimePicker from "./DateTimePicker.svelte";
beforeEach(() => {
cleanup();
});
describe("DateTimePicker — selected highlight", () => {
it("only marks the selected day in the month/year that was actually picked", async () => {
const user = userEvent.setup();
// Pick a date in the current month so the component opens on it.
const now = new Date();
const existing = new Date(now.getFullYear(), now.getMonth(), 15, 0, 0, 0).toISOString();
render(DateTimePicker, {
value: existing,
has_time: false,
onchange: vi.fn(),
onclose: vi.fn(),
});
// The "15" button for the current month should be rendered with the
// selected styling (bg-primary).
const day15 = screen.getByRole("button", { name: "15" });
expect(day15.className).toMatch(/bg-primary/);
// Navigate one month forward. The same "15" cell must NOT be marked as
// selected, because the user hasn't picked a day in that month yet.
const nextMonthBtn = screen.getAllByRole("button").find((b) =>
b.querySelector("svg path[d*='M7.21 14.77']"),
) as HTMLElement;
await user.click(nextMonthBtn);
const nextMonth15 = screen.getByRole("button", { name: "15" });
expect(nextMonth15.className).not.toMatch(/bg-primary/);
});
it("commits based on the last-selected month, not the currently-viewed month", async () => {
const user = userEvent.setup();
const onchange = vi.fn();
const onclose = vi.fn();
// Start with April 10 selected (use a fixed month/year so the test is stable).
const existing = new Date(2026, 3, 10, 0, 0, 0).toISOString();
render(DateTimePicker, {
value: existing,
has_time: false,
onchange,
onclose,
});
// Pick the 20th while viewing April.
await user.click(screen.getByRole("button", { name: "20" }));
// Flip to May.
const nextMonthBtn = screen.getAllByRole("button").find((b) =>
b.querySelector("svg path[d*='M7.21 14.77']"),
) as HTMLElement;
await user.click(nextMonthBtn);
// Hit Done.
await user.click(screen.getByRole("button", { name: "Done" }));
expect(onchange).toHaveBeenCalled();
const committed = new Date(onchange.mock.calls[0][0] as string);
// April == month 3 (0-indexed). We navigated to May without reselecting,
// so the committed date must still be April 20.
expect(committed.getMonth()).toBe(3);
expect(committed.getDate()).toBe(20);
expect(committed.getFullYear()).toBe(2026);
});
});

View file

@ -17,10 +17,15 @@
async function handleSubmit() {
if (!title.trim()) return;
const created = await app.createTask(title.trim(), description.trim() || undefined);
if (date && created) {
await app.updateTask({ ...created, date: date, has_time: dateHasTime });
}
// Pass date/has_time into createTask directly so the date can't be lost
// if a second round-trip to update() fails after the create succeeds.
await app.createTask(
title.trim(),
description.trim() || undefined,
undefined,
date,
dateHasTime,
);
title = "";
description = "";
date = null;

View file

@ -25,6 +25,20 @@
return () => clearTimeout(saveTimer);
});
// Re-sync local editor state when the task prop's content changes from elsewhere
// (sync pull, external file edit). Skip the reset while the user is actively
// editing an input so we don't clobber in-progress typing.
$effect(() => {
const incomingTitle = task.title;
const incomingDesc = task.description;
const active = document.activeElement;
const editing = active instanceof HTMLInputElement || active instanceof HTMLTextAreaElement;
if (!editing) {
if (incomingTitle !== title) title = incomingTitle;
if (incomingDesc !== description) description = incomingDesc;
}
});
let otherLists = $derived(app.lists.filter((l) => l.id !== app.activeListId));
function handleHeaderMouseDown(e: MouseEvent) {
@ -64,10 +78,12 @@
async function executeDelete() {
confirmDelete = false;
// Cascade: delete subtasks first
for (const s of subtasks) await app.deleteTask(s.id);
await app.deleteTask(task.id);
onback();
// Cascade: delete subtasks first. Bail out on first failure so we don't
// remove the parent while orphaning subtasks; the error is already surfaced.
for (const s of subtasks) {
if (!(await app.deleteTask(s.id))) return;
}
if (await app.deleteTask(task.id)) onback();
}
function handleMenuClickOutside(e: MouseEvent) {
@ -104,7 +120,12 @@
async function executeDeleteCompletedSubtasks() {
confirmDeleteCompleted = false;
showSubtaskMenu = false;
for (const s of completedSubtasks) await app.deleteTask(s.id);
// Snapshot — completedSubtasks is reactive and shrinks as we delete.
// Bail on first failure so we don't silently leave a partial delete.
const targets = [...completedSubtasks];
for (const s of targets) {
if (!(await app.deleteTask(s.id))) return;
}
}
function handleSubtaskMenuClickOutside(e: MouseEvent) {

View file
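The "snapshot first" comments above guard against the classic mutate-while-iterating hazard: if the collection you walk shrinks as you delete, elements get skipped. A plain-array sketch of both the failure mode and the fix (no Svelte reactivity needed to show the skip; the deleteTask stand-in is hypothetical):

```typescript
// Buggy: index-walk the array while removing from it. After each splice the
// next element slides into the current slot and is never visited.
const live = [1, 2, 3, 4];
for (let i = 0; i < live.length; i++) {
  live.splice(i, 1); // intends to delete everything; actually skips every other item
}
// live is now [2, 4] — half the "deletes" never happened.

// Fixed (the pattern in the diff): snapshot the targets first, then delete.
let store = [1, 2, 3, 4];
const targets = [...store];
for (const t of targets) {
  store = store.filter((x) => x !== t); // stand-in for `await app.deleteTask(...)`
}
// store is now [] — every target was removed.
```

A `$derived` list like `completedSubtasks` re-evaluates after each deletion, which reproduces the shrinking-collection behavior above, hence the `[...completedSubtasks]` copy.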

@ -0,0 +1,97 @@
import { describe, it, expect } from "vitest";
import { groupTasksByDate } from "./grouping";
import type { Task } from "./types";
// 2026-04-17 12:00 local time — "today" in the fixtures below.
const NOW = new Date(2026, 3, 17, 12, 0, 0);
function task(partial: Partial<Task> & { id: string }): Task {
return {
id: partial.id,
title: partial.title ?? partial.id,
description: "",
status: "backlog",
date: partial.date ?? null,
has_time: partial.has_time ?? false,
version: 1,
parent_id: null,
...partial,
};
}
describe("groupTasksByDate", () => {
it("returns an empty array when there are no pending tasks", () => {
expect(groupTasksByDate([], NOW)).toEqual([]);
});
it("puts 'No Date' last — regression: was first, burying urgent tasks", () => {
const tasks = [
task({ id: "overdue", date: "2026-04-15T00:00:00Z" }),
task({ id: "no-date" }),
task({ id: "today", date: "2026-04-17T09:00:00Z" }),
];
const labels = groupTasksByDate(tasks, NOW).map((g) => g.label);
expect(labels).toEqual(["Overdue", "Today", "No Date"]);
});
it("orders dated buckets: Overdue, Today, Tomorrow, future…, then No Date", () => {
const tasks = [
task({ id: "nd1" }),
task({ id: "future", date: "2026-04-20T00:00:00Z" }),
task({ id: "tomorrow", date: "2026-04-18T00:00:00Z" }),
task({ id: "today", date: "2026-04-17T09:00:00Z" }),
task({ id: "overdue", date: "2026-04-10T00:00:00Z" }),
];
const labels = groupTasksByDate(tasks, NOW).map((g) => g.label);
expect(labels[0]).toBe("Overdue");
expect(labels[1]).toBe("Today");
expect(labels[2]).toBe("Tomorrow");
// One future day label between tomorrow and No Date
expect(labels[labels.length - 1]).toBe("No Date");
expect(labels).toHaveLength(5);
});
it("drops empty buckets", () => {
const tasks = [task({ id: "t1", date: "2026-04-17T08:00:00Z" })];
expect(groupTasksByDate(tasks, NOW).map((g) => g.label)).toEqual(["Today"]);
});
it("sorts tasks within a bucket by due time ascending, stable on ties", () => {
const tasks = [
task({ id: "b", date: "2026-04-17T15:00:00Z", has_time: true }),
task({ id: "a", date: "2026-04-17T09:00:00Z", has_time: true }),
task({ id: "c", date: "2026-04-17T15:00:00Z", has_time: true }),
];
const today = groupTasksByDate(tasks, NOW).find((g) => g.label === "Today")!;
expect(today.tasks.map((t) => t.id)).toEqual(["a", "b", "c"]);
});
it("places a task with today's date but time before 'now' in the Today bucket (not Overdue)", () => {
const tasks = [task({ id: "earlier-today", date: "2026-04-17T08:00:00Z" })];
const groups = groupTasksByDate(tasks, NOW);
expect(groups.map((g) => g.label)).toEqual(["Today"]);
});
it("preserves No Date order as given by the caller", () => {
const tasks = [
task({ id: "z" }),
task({ id: "a" }),
task({ id: "m" }),
];
const nd = groupTasksByDate(tasks, NOW).find((g) => g.label === "No Date")!;
expect(nd.tasks.map((t) => t.id)).toEqual(["z", "a", "m"]);
});
it("groups multiple tasks on the same future day under one label", () => {
const tasks = [
task({ id: "f1", date: "2026-04-25T09:00:00Z", has_time: true }),
task({ id: "f2", date: "2026-04-25T14:00:00Z", has_time: true }),
];
const groups = groupTasksByDate(tasks, NOW);
const future = groups.find((g) => g.date?.getDate() === 25);
expect(future).toBeDefined();
expect(future!.tasks.map((t) => t.id)).toEqual(["f1", "f2"]);
// And it comes before No Date (which is absent here).
expect(groups).toHaveLength(1);
});
});

View file

@ -0,0 +1,70 @@
import type { Task } from "./types";
export type TaskGroup = { label: string; tasks: Task[]; date: Date | null };
/**
* Group pending tasks into date buckets for the "group by date" view.
*
* Order:
* Overdue → Today → Tomorrow → future days (chronological) → No Date
*
* Within each dated bucket tasks sort by due date+time ascending, with the
* original `pendingTasks` index as a stable tiebreaker. "No Date" preserves
* the caller-supplied order.
*/
export function groupTasksByDate(pendingTasks: Task[], now: Date = new Date()): TaskGroup[] {
const todayStart = new Date(now.getFullYear(), now.getMonth(), now.getDate());
const tomorrowStart = new Date(todayStart);
tomorrowStart.setDate(todayStart.getDate() + 1);
const overdue: Task[] = [];
const today: Task[] = [];
const tomorrow: Task[] = [];
const futureByDay = new Map<string, { date: Date; tasks: Task[] }>();
const noDate: Task[] = [];
for (const task of pendingTasks) {
if (!task.date) {
noDate.push(task);
} else {
const d = new Date(task.date);
const dayStart = new Date(d.getFullYear(), d.getMonth(), d.getDate());
if (dayStart < todayStart) overdue.push(task);
else if (dayStart.getTime() === todayStart.getTime()) today.push(task);
else if (dayStart.getTime() === tomorrowStart.getTime()) tomorrow.push(task);
else {
const key = dayStart.toISOString();
if (!futureByDay.has(key)) futureByDay.set(key, { date: dayStart, tasks: [] });
futureByDay.get(key)!.tasks.push(task);
}
}
}
const taskOrderIndex = new Map(pendingTasks.map((t, i) => [t.id, i]));
const sortByDue = (a: Task, b: Task) => {
const dateDiff = new Date(a.date!).getTime() - new Date(b.date!).getTime();
if (dateDiff !== 0) return dateDiff;
return (taskOrderIndex.get(a.id) ?? 0) - (taskOrderIndex.get(b.id) ?? 0);
};
overdue.sort(sortByDue);
today.sort(sortByDue);
tomorrow.sort(sortByDue);
const groups: TaskGroup[] = [];
if (overdue.length) groups.push({ label: "Overdue", tasks: overdue, date: null });
if (today.length) groups.push({ label: "Today", tasks: today, date: todayStart });
if (tomorrow.length) groups.push({ label: "Tomorrow", tasks: tomorrow, date: tomorrowStart });
const currentYear = now.getFullYear();
for (const [, { date, tasks }] of [...futureByDay.entries()].sort(([a], [b]) => a.localeCompare(b))) {
tasks.sort(sortByDue);
const opts: Intl.DateTimeFormatOptions = date.getFullYear() !== currentYear
? { weekday: "short", month: "short", day: "numeric", year: "numeric" }
: { weekday: "short", month: "short", day: "numeric" };
groups.push({ label: date.toLocaleDateString(undefined, opts), tasks, date });
}
if (noDate.length) groups.push({ label: "No Date", tasks: noDate, date: null });
return groups;
}

View file
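The bucketing in `groupTasksByDate` hinges on normalizing each due date to local midnight before comparing, so a due time earlier today still lands in Today rather than Overdue. A standalone sketch of just that classification step (`bucketFor` is an illustrative name, not part of the module):

```typescript
// Classify a due date relative to "now" by comparing local-midnight day
// starts, mirroring the loop body in groupTasksByDate.
function dayStart(d: Date): Date {
  return new Date(d.getFullYear(), d.getMonth(), d.getDate());
}

function bucketFor(due: Date, now: Date): "Overdue" | "Today" | "Tomorrow" | "Future" {
  const todayStart = dayStart(now);
  const tomorrowStart = new Date(todayStart);
  tomorrowStart.setDate(todayStart.getDate() + 1);
  const ds = dayStart(due);
  if (ds < todayStart) return "Overdue";
  if (ds.getTime() === todayStart.getTime()) return "Today";
  if (ds.getTime() === tomorrowStart.getTime()) return "Tomorrow";
  return "Future";
}

const NOW = new Date(2026, 3, 17, 12, 0, 0); // 2026-04-17 12:00 local
// A time earlier today is still "Today", not "Overdue" — only the day matters.
const earlierToday = bucketFor(new Date(2026, 3, 17, 8, 0, 0), NOW);
```

Note the `<` comparison works on `Date` objects directly (it coerces via `valueOf()`), while equality must go through `getTime()` because `===` on two `Date`s compares object identity.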

@ -0,0 +1,38 @@
import { describe, it, expect } from "vitest";
import { workspaceNameFromPath } from "./paths";
describe("workspaceNameFromPath", () => {
it("returns the last path component of a POSIX path", () => {
expect(workspaceNameFromPath("/home/me/Tasks")).toBe("Tasks");
});
it("strips a trailing slash (regression: used to fall back to 'workspace')", () => {
expect(workspaceNameFromPath("/home/me/Tasks/")).toBe("Tasks");
});
it("strips multiple trailing slashes", () => {
expect(workspaceNameFromPath("/home/me/Tasks///")).toBe("Tasks");
});
it("handles Windows-style backslash paths", () => {
expect(workspaceNameFromPath("C:\\Users\\me\\Tasks")).toBe("Tasks");
});
it("strips a trailing backslash on Windows paths", () => {
expect(workspaceNameFromPath("C:\\Users\\me\\Tasks\\")).toBe("Tasks");
});
it("handles mixed separators", () => {
expect(workspaceNameFromPath("C:\\Users/me\\Tasks")).toBe("Tasks");
});
it("falls back to 'workspace' when the path has no usable tail", () => {
expect(workspaceNameFromPath("/")).toBe("workspace");
expect(workspaceNameFromPath("\\")).toBe("workspace");
expect(workspaceNameFromPath("")).toBe("workspace");
});
it("preserves names with spaces", () => {
expect(workspaceNameFromPath("/home/me/My Tasks/")).toBe("My Tasks");
});
});

View file

@ -0,0 +1,9 @@
/**
* Derive a workspace display name from a folder path picked via the file
* dialog. Handles both `/` and `\` separators and tolerates trailing
* separators (e.g. `"/home/me/Tasks/"` → `"Tasks"`, not `"workspace"`).
*/
export function workspaceNameFromPath(folder: string): string {
const parts = folder.replace(/[\\/]+$/, "").split(/[\\/]/);
return parts[parts.length - 1] || "workspace";
}

View file

@ -15,14 +15,29 @@
let webdavUser = $state("");
let webdavPass = $state("");
let testStatus = $state<"idle" | "testing" | "ok" | "fail">("idle");
let credsLoaded = $state(false);
let renaming = $state(false);
let renameValue = $state("");
let renameInput = $state<HTMLInputElement | null>(null);
let showKebab = $state(false);
let confirmRename = $state(false);
// Imperative focus — Svelte's native autofocus attribute is unreliable
// for inputs that appear only via conditional blocks.
$effect(() => {
if (!ws?.webdav_url) return;
if (renaming && renameInput) {
renameInput.focus();
renameInput.select();
}
});
// Load stored credentials exactly once for this workspace. Previously this
// ran on every `ws.webdav_url` change, which silently clobbered in-progress
// user edits whenever any other setting updated the config.
$effect(() => {
if (credsLoaded || !ws?.webdav_url) return;
credsLoaded = true;
webdavUrl = ws.webdav_url;
try {
const domain = new URL(ws.webdav_url).hostname;
@ -35,6 +50,12 @@
} catch {}
});
// Any edit invalidates a prior test so users can't Save a config they
// haven't validated since changing it.
function markDirty() {
if (testStatus !== "idle") testStatus = "idle";
}
async function testConnection() {
testStatus = "testing";
try {
@ -51,6 +72,12 @@
async function saveWebdav() {
if (!webdavUrl.trim()) return;
// Require a successful test so a typo'd URL can't silently point the
// workspace at a dead server.
if (testStatus !== "ok") {
await testConnection();
if (testStatus !== "ok") return;
}
await invoke("set_webdav_config", {
workspaceId,
webdavUrl: webdavUrl.trim(),
@ -116,11 +143,11 @@
{#if renaming}
<input
type="text"
bind:this={renameInput}
bind:value={renameValue}
class="w-full bg-transparent text-xl font-bold outline-none"
onkeydown={(e) => { if (e.key === "Enter") handleRename(); if (e.key === "Escape") { renaming = false; } }}
onblur={handleRename}
autofocus
/>
{:else}
<p class="text-xl font-bold">{ws?.name}</p>
@ -172,6 +199,7 @@
<input
type="url"
bind:value={webdavUrl}
oninput={markDirty}
placeholder="https://dav.example.com/tasks/"
class="mb-3 w-full rounded-lg border border-border-light bg-transparent px-3 py-2 text-sm outline-none focus:border-primary dark:border-border-dark"
/>
@ -180,6 +208,7 @@
<input
type="text"
bind:value={webdavUser}
oninput={markDirty}
class="mb-3 w-full rounded-lg border border-border-light bg-transparent px-3 py-2 text-sm outline-none focus:border-primary dark:border-border-dark"
/>
@ -187,6 +216,7 @@
<input
type="password"
bind:value={webdavPass}
oninput={markDirty}
class="mb-4 w-full rounded-lg border border-border-light bg-transparent px-3 py-2 text-sm outline-none focus:border-primary dark:border-border-dark"
/>
@ -196,7 +226,7 @@
disabled={!webdavUrl.trim()}
class="rounded-lg border border-border-light px-4 py-2 text-sm font-medium hover:bg-black/5 disabled:opacity-40 dark:border-border-dark dark:hover:bg-white/10"
>
{testStatus === "testing" ? "Testing..." : testStatus === "ok" ? "Connected" : testStatus === "fail" ? "Failed -- Retry" : "Test Connection"}
{testStatus === "testing" ? "Testing…" : testStatus === "ok" ? "Connected" : testStatus === "fail" ? "Failed — Retry" : "Test Connection"}
</button>
<button
onclick={saveWebdav}

View file

@ -5,6 +5,7 @@
import { app } from "../stores/app.svelte";
import { getCurrentWindow } from "@tauri-apps/api/window";
import { platform } from "@tauri-apps/plugin-os";
import { workspaceNameFromPath } from "../paths";
let { cancellable = false }: { cancellable?: boolean } = $props();
@ -71,27 +72,11 @@
const selected = await open({ directory: true, multiple: false });
if (!selected) return;
const folder = selected as string;
const parts = folder.replace(/\\/g, "/").split("/");
const wsName = parts[parts.length - 1] || "workspace";
await app.addWorkspace(wsName, folder);
await app.addWorkspace(workspaceNameFromPath(folder), folder);
}
// ── WebDAV handlers ───────────────────────────────────────────────
async function testConnection() {
testStatus = "testing";
try {
await invoke("test_webdav_connection", {
url: webdavUrl,
username: webdavUser,
password: webdavPass,
});
testStatus = "ok";
} catch {
testStatus = "fail";
}
}
async function connectAndBrowse() {
testStatus = "testing";
try {

View file

@ -3,7 +3,7 @@
import TaskItem from "../components/TaskItem.svelte";
import TaskDetailView from "../components/TaskDetailView.svelte";
import NewTaskInput, { newTaskState } from "../components/NewTaskInput.svelte";
import ConfirmDialog from "../components/ConfirmDialog.svelte";
import ConfirmDialog, { isConfirmDialogOpen } from "../components/ConfirmDialog.svelte";
import SettingsScreen from "./SettingsScreen.svelte";
import { getCurrentWindow } from "@tauri-apps/api/window";
import { platform } from "@tauri-apps/plugin-os";
@ -18,10 +18,15 @@
let parentTask = $derived(taskStack.length >= 1 ? app.tasks.find(t => t.id === taskStack[0]) ?? null : null);
let subtaskDetail = $derived(taskStack.length >= 2 ? app.tasks.find(t => t.id === taskStack[1]) ?? null : null);
// Clear taskStack when the viewed task no longer exists (e.g. deleted or list switched)
// Clear taskStack when the viewed task no longer exists (e.g. deleted or list switched).
// Handles both the parent-gone case (clear entirely) and the subtask-gone case
// (collapse back to parent detail) so an externally deleted subtask doesn't leave
// the slider parked over a blank third panel.
$effect(() => {
if (taskStack.length > 0 && !parentTask) {
taskStack = [];
} else if (taskStack.length >= 2 && !subtaskDetail) {
taskStack = taskStack.slice(0, 1);
}
});
@ -48,10 +53,12 @@
let showWorkspacePicker = $state(false);
let newListName = $state("");
let newListInput = $state<HTMLInputElement | null>(null);
let showCompleted = $state(false);
let completedVisible = $state(false);
let renamingListId = $state<string | null>(null);
let renameValue = $state("");
let renameListInput = $state<HTMLInputElement | null>(null);
let showListMenu = $state(false);
let showSubtasks = $state(false);
let confirmDeleteList = $state(false);
@ -73,6 +80,20 @@
return () => window.removeEventListener("resize", handleResize);
});
// Focus the new-list input when it appears. Svelte's native `autofocus`
// attribute is unreliable for conditional blocks, so focus imperatively.
$effect(() => {
if (showNewList && newListInput) newListInput.focus();
});
// Same imperative-focus trick for the inline list-rename input.
$effect(() => {
if (renamingListId && renameListInput) {
renameListInput.focus();
renameListInput.select();
}
});
async function handleNewList() {
if (!newListName.trim()) return;
@ -88,7 +109,12 @@
async function executeDeleteCompleted() {
confirmDeleteCompleted = false;
for (var t of app.completedTasks) await app.deleteTask(t.id);
// Snapshot targets first — deletes mutate app.completedTasks reactively.
// Bail on first failure so we don't silently leave a partial delete.
const targets = [...app.completedTasks];
for (const t of targets) {
if (!(await app.deleteTask(t.id))) return;
}
}
function promptDeleteList() {
@ -128,6 +154,9 @@
function handleKeydown(e: KeyboardEvent) {
if (e.key !== "Escape") return;
// Defer to any open ConfirmDialog — it installs a capture-phase listener
// that dismisses itself; we must not also pop the task-detail view behind it.
if (isConfirmDialogOpen()) return;
if (showSettings) { showSettings = false; return; }
if (taskStack.length > 0) { closeDetail(); return; }
if (showListMenu) { showListMenu = false; return; }
@ -367,7 +396,7 @@
<div class="flex-1 overflow-y-auto py-2">
{#each app.lists as list (list.id)}
<button
onclick={() => { app.selectList(list.id); taskStack = []; closeDrawer(); }}
onclick={() => { app.selectList(list.id); taskStack = []; showCompleted = false; completedVisible = false; closeDrawer(); }}
class="group flex w-full items-center gap-2 px-5 py-2.5 text-left text-sm hover:bg-black/5 dark:hover:bg-white/10 {list.id === app.activeListId ? 'font-bold' : ''}"
>
{#if list.id === app.activeListId}
@ -388,6 +417,7 @@
{#if showNewList}
<div class="flex gap-2 px-1">
<input
bind:this={newListInput}
type="text"
bind:value={newListName}
placeholder="List name"
@ -610,11 +640,11 @@
{#if renamingListId === app.activeListId}
<input
type="text"
bind:this={renameListInput}
bind:value={renameValue}
class="w-full bg-transparent text-xl font-bold outline-none"
onkeydown={(e) => { if (e.key === "Enter") handleRenameList(); if (e.key === "Escape") renamingListId = null; }}
onblur={handleRenameList}
autofocus
/>
{:else}
<p class="text-xl font-bold">{app.activeList?.title ?? "Tasks"}</p>
@ -627,7 +657,16 @@
{#if app.lists.length === 0}
<div class="flex h-full flex-col items-center justify-center p-8 text-center">
<p class="text-lg font-medium opacity-60">No lists yet</p>
<p class="mt-1 text-sm opacity-40">Tap the list name above to create one</p>
{#if app.isGoogleTasks}
<p class="mt-1 text-sm opacity-40">Lists will appear after your next sync.</p>
{:else}
<button
onclick={() => { showDrawer = true; showNewList = true; }}
class="mt-4 rounded-lg bg-primary px-4 py-2 text-sm font-medium text-white hover:bg-primary-hover"
>
Create a list
</button>
{/if}
</div>
{:else if !app.activeListId}
<div class="flex h-full items-center justify-center opacity-40">

View file

@ -8,11 +8,15 @@ import type {
Screen,
SyncResult,
} from "../types";
import { groupTasksByDate, type TaskGroup } from "../grouping";
// Listen for file system changes from the backend watcher.
// Listen for file system changes from the backend watcher. Guard against
// firing while the user is on the setup/missing screens — loadLists would
// fail (no workspace) and a debouncedSync against a non-synced workspace
// would be wasted work.
listen("fs-changed", () => {
if (!hasWorkspace || screen !== "tasks") return;
loadLists();
// Debounced sync for WebDAV workspaces on local file changes
if (isSyncedWorkspace) debouncedSync();
});
@ -52,64 +56,9 @@ let activeList = $derived(lists.find((l) => l.id === activeListId) ?? null);
let pendingTasks = $derived(tasks.filter((t) => t.status === "backlog" && !t.parent_id));
let completedTasks = $derived(tasks.filter((t) => t.status === "completed" && !t.parent_id));
type TaskGroup = { label: string; tasks: Task[]; date: Date | null };
let groupedPendingTasks = $derived.by((): TaskGroup[] | null => {
if (!activeList?.group_by_date) return null;
const now = new Date();
const todayStart = new Date(now.getFullYear(), now.getMonth(), now.getDate());
const tomorrowStart = new Date(todayStart);
tomorrowStart.setDate(todayStart.getDate() + 1);
const overdue: Task[] = [];
const today: Task[] = [];
const tomorrow: Task[] = [];
const futureByDay = new Map<string, { date: Date; tasks: Task[] }>();
const noDate: Task[] = [];
for (const task of pendingTasks) {
if (!task.date) {
noDate.push(task);
} else {
const d = new Date(task.date);
const dayStart = new Date(d.getFullYear(), d.getMonth(), d.getDate());
if (dayStart < todayStart) overdue.push(task);
else if (dayStart.getTime() === todayStart.getTime()) today.push(task);
else if (dayStart.getTime() === tomorrowStart.getTime()) tomorrow.push(task);
else {
const key = dayStart.toISOString();
if (!futureByDay.has(key)) futureByDay.set(key, { date: dayStart, tasks: [] });
futureByDay.get(key)!.tasks.push(task);
}
}
}
const taskOrderIndex = new Map(pendingTasks.map((t, i) => [t.id, i]));
const sortByDue = (a: Task, b: Task) => {
const dateDiff = new Date(a.date!).getTime() - new Date(b.date!).getTime();
if (dateDiff !== 0) return dateDiff;
return (taskOrderIndex.get(a.id) ?? 0) - (taskOrderIndex.get(b.id) ?? 0);
};
overdue.sort(sortByDue);
today.sort(sortByDue);
tomorrow.sort(sortByDue);
const groups: TaskGroup[] = [];
if (noDate.length) groups.push({ label: "No Date", tasks: noDate, date: null });
if (overdue.length) groups.push({ label: "Overdue", tasks: overdue, date: null });
if (today.length) groups.push({ label: "Today", tasks: today, date: todayStart });
if (tomorrow.length) groups.push({ label: "Tomorrow", tasks: tomorrow, date: tomorrowStart });
const currentYear = now.getFullYear();
for (const [, { date, tasks }] of [...futureByDay.entries()].sort(([a], [b]) => a.localeCompare(b))) {
tasks.sort(sortByDue);
const opts: Intl.DateTimeFormatOptions = date.getFullYear() !== currentYear
? { weekday: "short", month: "short", day: "numeric", year: "numeric" }
: { weekday: "short", month: "short", day: "numeric" };
groups.push({ label: date.toLocaleDateString(undefined, opts), tasks, date });
}
return groups;
return groupTasksByDate(pendingTasks);
});
// Build a map of parent_id -> children for subtask hierarchy
@ -238,11 +187,17 @@ async function removeWorkspace(id: string) {
try {
await invoke("remove_workspace", { id });
config = await invoke<AppConfig>("get_config");
if (!hasWorkspace) {
activeListId = null;
tasks = [];
lists = [];
// Switch to the next available workspace rather than dumping the user
// to the setup screen when they still have other workspaces.
const remaining = Object.keys(config?.workspaces ?? {});
if (remaining.length > 0) {
await switchWorkspace(remaining[0]);
screen = "tasks";
} else {
screen = "setup";
lists = [];
tasks = [];
activeListId = null;
}
} catch (e) {
error = String(e);
@ -309,7 +264,13 @@ async function deleteList(id: string) {
}
}
async function createTask(title: string, description?: string, parentId?: string): Promise<Task | null> {
async function createTask(
title: string,
description?: string,
parentId?: string,
date?: string | null,
hasTime?: boolean,
): Promise<Task | null> {
if (!activeListId) return null;
try {
const task = await invoke<Task>("create_task", {
@ -317,6 +278,8 @@ async function createTask(title: string, description?: string, parentId?: string
title,
description: description ?? "",
parentId: parentId ?? null,
date: date ?? null,
hasTime: hasTime ?? false,
});
tasks = parentId ? [task, ...tasks] : [...tasks, task];
error = null;
@ -366,13 +329,15 @@ async function reorderTask(taskId: string, newPosition: number) {
}
}
async function deleteTask(taskId: string) {
if (!activeListId) return;
async function deleteTask(taskId: string): Promise<boolean> {
if (!activeListId) return false;
try {
await invoke("delete_task", { listId: activeListId, taskId });
tasks = tasks.filter((t) => t.id !== taskId);
return true;
} catch (e) {
error = String(e);
return false;
}
}
@ -433,7 +398,11 @@ async function triggerSync() {
await loadLists();
} catch (e) {
const msg = String(e);
const isTransient = /timeout|connect|network|unreachable|refused/i.test(msg);
// Narrow phrases so that a legitimate server-side error containing a
// word like "network" or "refused" in its description isn't silently
// swallowed as an offline blip. Only treat obvious connectivity failures
// as transient.
const isTransient = /(^|\W)(timed? out|timeout|connection (refused|reset|timed out|aborted)|connect error|network (is )?unreachable|no route to host|host (not found|is unreachable)|dns|enotfound|econnrefused|etimedout|ehostunreach|enetunreach)(\W|$)/i.test(msg);
syncStatus = isTransient ? "offline" : "error";
// Only show the error banner for non-transient failures; connectivity issues just update the status dot
if (!isTransient) error = msg;
@ -449,7 +418,7 @@ function debouncedSync() {
function restartSyncInterval() {
if (_syncInterval) clearInterval(_syncInterval);
var secs = _appFocused ? syncIntervalSecs : syncIntervalUnfocusedSecs;
const secs = _appFocused ? syncIntervalSecs : syncIntervalUnfocusedSecs;
_syncInterval = setInterval(triggerSync, secs * 1000);
}
@ -571,22 +540,10 @@ async function addGoogleTasksWorkspace(
async function forgetMissingWorkspace() {
if (!missingWorkspace) return;
// removeWorkspace handles switching to the next available workspace (or
// falling back to the setup screen when none remain); just delegate.
await removeWorkspace(missingWorkspace);
missingWorkspace = null;
config = await invoke<AppConfig>("get_config");
if (hasWorkspace) {
// Switch to the next available workspace
const nextName = Object.keys(config!.workspaces)[0];
if (nextName) {
await switchWorkspace(nextName);
screen = "tasks";
return;
}
}
screen = "setup";
lists = [];
tasks = [];
activeListId = null;
}
function setScreen(s: Screen) {

View file
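The narrowed connectivity regex from `triggerSync` can be exercised in isolation. The pattern below is copied verbatim from the diff; the sample messages are invented to illustrate the distinction it draws:

```typescript
// Regex copied from triggerSync's catch block: only obvious connectivity
// failures count as transient. Sample messages are made up for illustration.
const TRANSIENT = /(^|\W)(timed? out|timeout|connection (refused|reset|timed out|aborted)|connect error|network (is )?unreachable|no route to host|host (not found|is unreachable)|dns|enotfound|econnrefused|etimedout|ehostunreach|enetunreach)(\W|$)/i;

// A raw socket failure: matches "connect error" → treated as an offline blip.
const offline = TRANSIENT.test("connect error: Connection refused (os error 111)");

// A server-side error that merely *mentions* "network"/"refused": no phrase
// in the alternation matches, so it surfaces as a real error.
const serverSide = TRANSIENT.test("409 Conflict: the networked merge was refused by policy");
```

The word-boundary groups `(^|\W)` and `(\W|$)` keep substrings like "networked" from matching, which is the point of the commit: phrases, not keywords.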

@ -0,0 +1 @@
import "@testing-library/jest-dom/vitest";

View file

@ -1,3 +1,4 @@
/// <reference types="vitest/config" />
import { defineConfig } from "vite";
import { svelte } from "@sveltejs/vite-plugin-svelte";
import tailwindcss from "@tailwindcss/vite";
@ -14,4 +15,17 @@ export default defineConfig({
hmr: host ? { protocol: "ws", host, port: 1421 } : undefined,
watch: { ignored: ["**/src-tauri/**"] },
},
test: {
environment: "jsdom",
globals: true,
setupFiles: ["./src/test/setup.ts"],
include: ["src/**/*.{test,spec}.{ts,svelte}"],
// Resolve Svelte's client (browser) entry under Vitest — without the
// browser condition mount() picks up Svelte's SSR export and throws
// lifecycle_function_unavailable.
server: { deps: { inline: ["@testing-library/svelte"] } },
},
resolve: {
conditions: process.env.VITEST ? ["browser"] : [],
},
});

View file

@ -6,6 +6,7 @@ pub mod group;
pub mod sync;
use onyx_core::{AppConfig, TaskRepository};
use onyx_core::config::WorkspaceConfig;
use anyhow::{Context, Result};
use std::path::PathBuf;
@ -23,21 +24,89 @@ pub fn save_config(config: &AppConfig) -> Result<()> {
config.save_to_file(&path).context("Failed to save config")
}
pub fn get_repository(workspace_name: Option<String>) -> Result<(TaskRepository, String)> {
let config = load_config()?;
let (name, workspace_config) = if let Some(name) = workspace_name {
let workspace_config = config.get_workspace(&name)
.ok_or_else(|| anyhow::anyhow!("Workspace '{}' not found", name))?;
(name, workspace_config.clone())
/// Resolve a user-supplied identifier to (id, WorkspaceConfig). Accepts either
/// the workspace's display name or its UUID. Falls back to the current
/// workspace when `identifier` is `None`.
pub fn resolve_workspace(config: &AppConfig, identifier: Option<&str>) -> Result<(String, WorkspaceConfig)> {
if let Some(s) = identifier {
// Try by UUID first (exact match on map key), then fall back to name lookup.
if let Some(ws) = config.get_workspace(s) {
return Ok((s.to_string(), ws.clone()));
}
let (id, ws) = config.find_by_name(s)
.ok_or_else(|| anyhow::anyhow!("Workspace '{}' not found", s))?;
Ok((id.clone(), ws.clone()))
} else {
let (name, workspace_config) = config.get_current_workspace()
.context("No workspace set. Use 'onyx init' to create one.")?;
(name.clone(), workspace_config.clone())
};
let (id, ws) = config.get_current_workspace()
.context("No workspace set. Run 'onyx workspace add <name> <path>' to create one, or 'onyx workspace switch <name>' to select one.")?;
Ok((id.clone(), ws.clone()))
}
}
pub fn get_repository(workspace_identifier: Option<String>) -> Result<(TaskRepository, String)> {
let config = load_config()?;
let (_id, workspace_config) = resolve_workspace(&config, workspace_identifier.as_deref())?;
let name = workspace_config.name.clone();
let repo = TaskRepository::new(workspace_config.path.clone())
.context(format!("Failed to open workspace '{}'", name))?;
Ok((repo, name))
}
#[cfg(test)]
mod tests {
use super::*;
fn make_config_with(ws: &[(&str, &str)]) -> (AppConfig, Vec<String>) {
let mut config = AppConfig::new();
let ids: Vec<String> = ws.iter()
.map(|(name, path)| config.add_workspace(WorkspaceConfig::new(name.to_string(), PathBuf::from(path))))
.collect();
(config, ids)
}
#[test]
fn resolve_by_name() {
let (config, _ids) = make_config_with(&[("dev", "/tmp/dev"), ("home", "/tmp/home")]);
let (id, ws) = resolve_workspace(&config, Some("dev")).unwrap();
assert_eq!(ws.name, "dev");
assert!(config.workspaces.contains_key(&id));
}
#[test]
fn resolve_by_uuid() {
let (config, ids) = make_config_with(&[("dev", "/tmp/dev")]);
let target = ids[0].clone();
let (id, ws) = resolve_workspace(&config, Some(&target)).unwrap();
assert_eq!(id, target);
assert_eq!(ws.name, "dev");
}
#[test]
fn resolve_unknown_identifier_errors() {
let (config, _ids) = make_config_with(&[("dev", "/tmp/dev")]);
let err = resolve_workspace(&config, Some("ghost")).unwrap_err();
assert!(err.to_string().contains("Workspace 'ghost' not found"));
}
#[test]
fn resolve_falls_back_to_current() {
let (mut config, ids) = make_config_with(&[("a", "/tmp/a"), ("b", "/tmp/b")]);
config.set_current_workspace(ids[1].clone()).unwrap();
let (id, ws) = resolve_workspace(&config, None).unwrap();
assert_eq!(id, ids[1]);
assert_eq!(ws.name, "b");
}
#[test]
fn resolve_no_current_gives_actionable_message() {
let config = AppConfig::new();
let err = resolve_workspace(&config, None).unwrap_err();
let msg = err.to_string();
// The message should point the user at the right sub-commands, not
// at the obsolete 'onyx init' suggestion.
assert!(msg.contains("workspace add") || msg.contains("workspace switch"),
"expected actionable message, got: {msg}");
}
}

View file

@@ -2,22 +2,8 @@ use anyhow::{Context, Result};
use colored::Colorize;
use onyx_core::sync::{SyncMode, sync_workspace, get_sync_status};
use onyx_core::webdav::{WebDavClient, store_credentials, load_credentials};
use onyx_core::config::AppConfig;
use crate::output;
use super::{load_config, save_config};
/// Resolve a workspace name to (id, config). Falls back to current workspace if name is None.
fn resolve_workspace(config: &AppConfig, name: Option<&str>) -> Result<(String, onyx_core::config::WorkspaceConfig)> {
if let Some(name) = name {
let (id, ws) = config.find_by_name(name)
.ok_or_else(|| anyhow::anyhow!("Workspace '{}' not found", name))?;
Ok((id.clone(), ws.clone()))
} else {
let (id, ws) = config.get_current_workspace()
.context("No workspace set. Use 'onyx init' to create one.")?;
Ok((id.clone(), ws.clone()))
}
}
use super::{load_config, save_config, resolve_workspace};
/// Run sync setup: prompt for URL, username, password, test connection, store credentials.
pub fn setup(workspace_name: Option<String>) -> Result<()> {

View file

@@ -119,13 +119,26 @@ pub fn edit(task_id_str: String, workspace: Option<String>) -> Result<()> {
let (list_id, task) = find_task(&lists, task_id)
.ok_or_else(|| anyhow::anyhow!("Task not found: {}", task_id_str))?;
// Create temporary file with task content
// Create temporary file with task content. On Unix, open with 0600 so
// other local users on a shared system can't read the task body off /tmp
// while the editor is running.
let temp_dir = std::env::temp_dir();
let temp_file = temp_dir.join(format!("onyx-{}.md", task.id));
// Write current task content to temp file
let content = format!("# {}\n\n{}", task.title, task.description);
std::fs::write(&temp_file, content)?;
{
use std::io::Write;
let mut opts = std::fs::OpenOptions::new();
opts.write(true).create(true).truncate(true);
#[cfg(unix)]
{
use std::os::unix::fs::OpenOptionsExt;
opts.mode(0o600);
}
let mut f = opts.open(&temp_file)
.with_context(|| format!("Failed to create {}", temp_file.display()))?;
f.write_all(content.as_bytes())?;
}
// Get editor from environment
let editor = std::env::var("EDITOR").unwrap_or_else(|_| {

View file

@@ -30,11 +30,21 @@ pub fn add(name: String, path: String) -> Result<()> {
// Add workspace
let id = config.add_workspace(WorkspaceConfig::new(name.clone(), path_buf.clone()));
// Select the new workspace as current when none was previously set, so the
// very next command doesn't fail with "No workspace set".
let made_current = config.current_workspace.is_none();
if made_current {
config.set_current_workspace(id.clone())?;
}
// Save config
save_config(&config)?;
output::success(&format!("Added workspace \"{}\" ({}) at {}", name, &id[..8], path_buf.display()));
output::success("Created default list \"My Tasks\"");
if made_current {
output::success(&format!("Set \"{}\" as the current workspace", name));
}
Ok(())
}
@@ -64,15 +74,20 @@ pub fn list() -> Result<()> {
Ok(())
}
/// Resolve a workspace name to its ID. Errors if not found or ambiguous.
fn resolve_name(config: &onyx_core::config::AppConfig, name: &str) -> Result<String> {
/// Resolve a user-supplied identifier to a workspace ID. Accepts either the
/// display name or the UUID. Errors if not found or ambiguous.
fn resolve_name(config: &onyx_core::config::AppConfig, identifier: &str) -> Result<String> {
// Direct UUID hit on the map key — unambiguous.
if config.workspaces.contains_key(identifier) {
return Ok(identifier.to_string());
}
let matches: Vec<_> = config.workspaces.iter()
.filter(|(_, ws)| ws.name == name)
.filter(|(_, ws)| ws.name == identifier)
.collect();
match matches.len() {
0 => anyhow::bail!("Workspace '{}' not found", name),
0 => anyhow::bail!("Workspace '{}' not found", identifier),
1 => Ok(matches[0].0.clone()),
n => anyhow::bail!("Ambiguous: {} workspaces named '{}'. Use the workspace ID instead.", n, name),
n => anyhow::bail!("Ambiguous: {} workspaces named '{}'. Use the workspace ID instead.", n, identifier),
}
}

View file

@@ -3,6 +3,7 @@ mod output;
use anyhow::Result;
use clap::{Parser, Subcommand};
use colored::Colorize;
use commands::*;
#[derive(Parser)]
@@ -197,7 +198,24 @@ enum GroupCommands {
},
}
fn main() -> Result<()> {
fn main() {
match run() {
Ok(()) => {}
Err(e) => {
// Print user-friendly error chain (no backtrace). Programming-bug
// panics still surface through their default handler.
eprintln!("{}: {}", "Error".red().bold(), e);
let mut cause = e.source();
while let Some(c) = cause {
eprintln!(" caused by: {}", c);
cause = c.source();
}
std::process::exit(1);
}
}
}
fn run() -> Result<()> {
let cli = Cli::parse();
match cli.command {

View file

@@ -4,20 +4,15 @@ use serde::{Deserialize, Serialize};
use uuid::Uuid;
use crate::error::{Error, Result};
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Default)]
#[serde(rename_all = "lowercase")]
pub enum WorkspaceMode {
#[default]
Local,
Webdav,
GoogleTasks,
}
impl Default for WorkspaceMode {
fn default() -> Self {
Self::Local
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct WorkspaceConfig {
pub name: String,
@@ -121,13 +116,7 @@ impl AppConfig {
std::fs::create_dir_all(parent)?;
}
let content = serde_json::to_string_pretty(&self)?;
// Atomic write: write to temp file then rename to prevent corruption on crash
let temp = path.with_extension("tmp");
std::fs::write(&temp, &content)?;
if let Err(e) = std::fs::rename(&temp, path) {
let _ = std::fs::remove_file(&temp);
return Err(e.into());
}
crate::storage::atomic_write(path, content.as_bytes())?;
Ok(())
}
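For context, the `atomic_write` helper that `save` now delegates to follows the same temp-then-rename pattern the removed inline code used. A minimal self-contained sketch of that pattern (illustrative only, not the crate's exact implementation):

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// Temp-then-rename atomic write: a crash before the rename leaves the old
// file intact, a crash after it leaves the complete new content, and readers
// never observe a half-written file. Sketch only; error handling is minimal.
fn atomic_write_sketch(path: &Path, data: &[u8]) -> std::io::Result<()> {
    let tmp = path.with_extension("tmp");
    {
        let mut f = fs::File::create(&tmp)?;
        f.write_all(data)?;
        f.sync_all()?; // flush to disk before the rename makes it visible
    }
    if let Err(e) = fs::rename(&tmp, path) {
        let _ = fs::remove_file(&tmp); // best-effort cleanup on failure
        return Err(e);
    }
    Ok(())
}
```

Note that `with_extension` replaces any existing extension, so `config.json` is staged as `config.tmp`; two files staged in the same directory could collide, which is one reason a real helper might prefer a unique temp name.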

View file

@@ -358,8 +358,15 @@ pub async fn sync_google_tasks(
list_meta.task_order = task_order;
list_meta.updated_at = Utc::now();
if let Ok(meta_content) = serde_json::to_string_pretty(&list_meta) {
let _ = atomic_write(&listdata_path, meta_content.as_bytes());
match serde_json::to_string_pretty(&list_meta) {
Ok(meta_content) => {
if let Err(e) = atomic_write(&listdata_path, meta_content.as_bytes()) {
errors.push(format!("Failed to write metadata for list '{}': {}", gt_list.title, e));
}
}
Err(e) => {
errors.push(format!("Failed to serialize metadata for list '{}': {}", gt_list.title, e));
}
}
}
@@ -374,8 +381,15 @@ pub async fn sync_google_tasks(
RootMetadata::default()
};
root_meta.list_order = new_list_order;
if let Ok(meta_content) = serde_json::to_string_pretty(&root_meta) {
let _ = atomic_write(&root_meta_path, meta_content.as_bytes());
match serde_json::to_string_pretty(&root_meta) {
Ok(meta_content) => {
if let Err(e) = atomic_write(&root_meta_path, meta_content.as_bytes()) {
errors.push(format!("Failed to write workspace metadata: {}", e));
}
}
Err(e) => {
errors.push(format!("Failed to serialize workspace metadata: {}", e));
}
}
Ok(GoogleSyncResult { downloaded, errors })

View file

@@ -26,7 +26,10 @@ impl TaskRepository {
// Task operations
pub fn create_task(&mut self, list_id: Uuid, mut task: Task) -> Result<Task> {
self.storage.write_task(list_id, &task)?;
task.version += 1;
// Mirror the saturating increment that FileSystemStorage applies to
// the on-disk frontmatter so the in-memory Task matches what was
// written and doesn't wrap at u64::MAX.
task.version = task.version.saturating_add(1);
Ok(task)
}
@@ -154,7 +157,7 @@ mod tests {
// Create a task
let task = Task::new("Test Task".to_string());
let created_task = repo.create_task(list.id, task).unwrap();
let _ = repo.create_task(list.id, task).unwrap();
// List tasks
let tasks = repo.list_tasks(list.id).unwrap();
@@ -162,6 +165,20 @@
assert_eq!(tasks[0].title, "Test Task");
}
#[test]
fn test_create_task_saturates_version_at_max() {
let temp_dir = TempDir::new().unwrap();
let mut repo = TaskRepository::init(temp_dir.path().to_path_buf()).unwrap();
let list = repo.create_list("L".to_string()).unwrap();
// Simulate a task that is already at u64::MAX. A plain `+=` would
// overflow — saturating_add must clamp.
let mut task = Task::new("max".to_string());
task.version = u64::MAX;
let created = repo.create_task(list.id, task).unwrap();
assert_eq!(created.version, u64::MAX);
}
#[test]
fn test_update_task() {
let temp_dir = TempDir::new().unwrap();

View file

@@ -236,12 +236,8 @@ impl FileSystemStorage {
Ok(path)
}
fn sanitize_filename(name: &str) -> String {
crate::sanitize_filename(name)
}
fn task_file_path(&self, list_dir: &Path, task: &Task) -> PathBuf {
let safe_title = Self::sanitize_filename(&task.title);
let safe_title = crate::sanitize_filename(&task.title);
let filename = if safe_title.is_empty() {
task.id.to_string()
} else {
@@ -381,7 +377,9 @@ impl Storage for FileSystemStorage {
}
let content = self.write_markdown_with_frontmatter(task)?;
fs::write(&task_path, content)?;
// Atomic write: a crash mid-write must not leave a truncated .md file
// that then fails YAML parsing on the next list_tasks/read_task.
atomic_write(&task_path, content.as_bytes())?;
// Update list metadata to include this task in task_order if not already present
let mut list_metadata = self.read_list_metadata(list_id)?;
@@ -455,27 +453,42 @@
}
let mut tasks = Vec::new();
for (_id, mut entries) in by_id {
if entries.len() > 1 {
entries.sort_by(|a, b| {
for (_id, entries) in by_id {
// `by_id` only inserts non-empty groups, so each `entries` has at
// least one element.
let task = if entries.len() > 1 {
// Read mtime once per file so sort_by doesn't hit the filesystem
// O(n log n) times and can't produce inconsistent orderings if a
// file is touched mid-sort.
let mut with_mtime: Vec<(PathBuf, Task, Option<std::time::SystemTime>)> = entries
.into_iter()
.map(|(p, t)| {
let mtime = fs::metadata(&p).and_then(|m| m.modified()).ok();
(p, t, mtime)
})
.collect();
with_mtime.sort_by(|a, b| {
// Primary: highest version first
let version_cmp = b.1.version.cmp(&a.1.version);
if version_cmp != std::cmp::Ordering::Equal {
return version_cmp;
}
// Tiebreaker: most recently modified file first
let mtime_a = fs::metadata(&a.0).and_then(|m| m.modified()).ok();
let mtime_b = fs::metadata(&b.0).and_then(|m| m.modified()).ok();
mtime_b.cmp(&mtime_a)
b.2.cmp(&a.2)
});
for (stale_path, _) in entries.drain(1..) {
for (stale_path, _, _) in with_mtime.drain(1..) {
if let Err(e) = fs::remove_file(&stale_path) {
eprintln!("Warning: failed to remove stale duplicate task file {:?}: {}", stale_path, e);
}
}
}
let (_, task) = entries.into_iter().next()
.ok_or_else(|| Error::InvalidData("Empty dedup entries for task".to_string()))?;
let (_, t, _) = with_mtime.into_iter().next()
.expect("dedup group is non-empty after drain(1..)");
t
} else {
let (_, t) = entries.into_iter().next()
.expect("dedup group is non-empty");
t
};
tasks.push(task);
}

View file

@@ -5,7 +5,7 @@ use serde::{Deserialize, Serialize};
use sha2::{Sha256, Digest};
use uuid::Uuid;
use crate::error::{Error, Result};
use crate::storage::{ListMetadata, TaskFrontmatter};
use crate::storage::{atomic_write, ListMetadata, TaskFrontmatter};
use crate::webdav::WebDavClient;
/// File-based lock to prevent concurrent sync operations on the same workspace.
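A file-based lock of this kind is typically built on `create_new`, which fails atomically if the lock file already exists. A self-contained sketch of the idea (hypothetical names; the crate's actual type and error handling differ):

```rust
use std::fs::OpenOptions;
use std::path::{Path, PathBuf};

// Advisory lock: creating `.sync.lock` with create_new is atomic, so only
// one process can hold the lock; the guard removes the file when dropped.
struct SyncLockSketch {
    path: PathBuf,
}

impl SyncLockSketch {
    fn acquire(workspace: &Path) -> std::io::Result<Self> {
        let path = workspace.join(".sync.lock");
        OpenOptions::new().write(true).create_new(true).open(&path)?;
        Ok(Self { path })
    }
}

impl Drop for SyncLockSketch {
    fn drop(&mut self) {
        // Best-effort release; a lock left behind by a crash is stale and
        // would need manual cleanup in this simplified scheme.
        let _ = std::fs::remove_file(&self.path);
    }
}
```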
@@ -204,8 +204,9 @@ pub fn compute_sync_actions(
}
// Remote present, local gone, base known: local was deleted
(None, Some(_), Some(b)) => {
let remote_changed = remote.is_some_and(|r| r.size != b.size || !timestamps_equal(r.last_modified.as_deref(), b.modified_at.as_deref()));
(None, Some(r), Some(b)) => {
let remote_changed = r.size != b.size
|| !timestamps_equal(r.last_modified.as_deref(), b.modified_at.as_deref());
if remote_changed {
// deleted locally + modified remotely -> download (remote wins)
actions.push(SyncAction::Download { path: path.to_string() });
@@ -229,6 +230,22 @@
actions
}
/// Remove base entries for files that are gone from both local and remote.
/// `compute_sync_actions` emits no action for the both-deleted case, so without
/// this pass those entries would persist in `.syncstate.json` indefinitely.
fn prune_orphan_bases(
sync_state: &mut SyncState,
local_files: &[LocalFileInfo],
remote_files: &[RemoteFileSnapshot],
) {
let live_paths: std::collections::HashSet<&str> = local_files
.iter()
.map(|f| f.path.as_str())
.chain(remote_files.iter().map(|f| f.path.as_str()))
.collect();
sync_state.files.retain(|p, _| live_paths.contains(p.as_str()));
}
/// Compare two timestamps for equality by parsing both, tolerating format differences.
fn timestamps_equal(a: Option<&str>, b: Option<&str>) -> bool {
match (a, b) {
@@ -604,6 +621,12 @@ async fn sync_workspace_inner(
}
};
// Purge orphan base entries: files we previously tracked that are now gone
// from both local and remote. Without this, `.syncstate.json` accumulates
// ghost entries forever because the both-deleted diff case emits no action
// and so nothing else would clean them.
prune_orphan_bases(&mut sync_state, &local_files, &remote_files);
// Compute actions from three-way diff
let fresh_actions = compute_sync_actions(&local_files, &remote_files, &sync_state);
@@ -701,19 +724,20 @@ async fn execute_action(
Err(e) => return Err(e.into()),
};
let checksum = compute_checksum(&data);
let len = data.len() as u64;
if let Some(parent) = path_parent(path) {
client.ensure_dir(parent).await?;
}
report(&format!(" ^ Uploading {}", path));
client.put_file(path, data.clone()).await?;
client.put_file(path, data).await?;
// Record in sync state using local file metadata
let modified = std::fs::metadata(&local_path).ok()
.and_then(|m| m.modified().ok())
.map(|t| { let dt: DateTime<Utc> = t.into(); dt.to_rfc3339() });
sync_state.record_file(path, &checksum, modified.as_deref(), data.len() as u64);
sync_state.record_file(path, &checksum, modified.as_deref(), len);
}
SyncAction::Conflict { path } => {
@@ -743,8 +767,9 @@ async fn execute_action(
} else {
report(&format!(" ! Conflict: remote wins for {}, recovering local as duplicate", path));
// Remote wins: overwrite local with remote content
std::fs::write(&local_path, &remote_data)?;
// Remote wins: overwrite local with remote content. Atomic
// so a crash mid-sync cannot leave a truncated file behind.
atomic_write(&local_path, &remote_data)?;
let modified = std::fs::metadata(&local_path).ok()
.and_then(|m| m.modified().ok())
.map(|t| { let dt: DateTime<Utc> = t.into(); dt.to_rfc3339() });
@@ -752,7 +777,7 @@
// For .md task files inside a list dir, create a duplicate of the local version
let parts: Vec<&str> = path.split('/').collect();
if parts.len() == 2 && parts[1].ends_with(".md") && parts[1] != ".listdata.json" {
if parts.len() == 2 && parts[1].ends_with(".md") {
let local_content = String::from_utf8_lossy(&local_data);
if let Ok((frontmatter, description)) = parse_frontmatter_for_conflict(&local_content) {
let original_id = frontmatter.id;
@@ -775,7 +800,7 @@
let list_dir = workspace_path.join(parts[0]);
let dup_filename = format!("{}.md", new_id);
let dup_path = list_dir.join(&dup_filename);
std::fs::write(&dup_path, &new_content)?;
atomic_write(&dup_path, new_content.as_bytes())?;
// Insert new task adjacent to original in .listdata.json.
// If metadata update fails, remove the duplicate file to
@@ -791,7 +816,7 @@
.unwrap_or(metadata.task_order.len());
metadata.task_order.insert(insert_pos, new_id);
let json = serde_json::to_string_pretty(&metadata)?;
std::fs::write(&listdata_path, json)?;
atomic_write(&listdata_path, json.as_bytes())?;
Ok(())
})();
if let Err(e) = metadata_updated {
@@ -816,7 +841,7 @@
if let Some(parent) = local_path.parent() {
std::fs::create_dir_all(parent)?;
}
std::fs::write(&local_path, &data)?;
atomic_write(&local_path, &data)?;
// Record remote's last_modified so next diff won't see a timestamp mismatch
let modified = remote_meta.get(path.as_str()).and_then(|r| r.last_modified.clone());
@@ -890,9 +915,15 @@ pub fn get_sync_status(workspace_path: &Path) -> Result<SyncStatusInfo> {
}
}
// Count files in base that are now missing locally (deleted)
// Count files in base that are now missing locally (deleted).
// Build a set of local paths once so the membership check is O(1) per
// tracked file instead of scanning local_files linearly each time.
let local_paths: std::collections::HashSet<&str> = local_files
.iter()
.map(|f| f.path.as_str())
.collect();
for path in sync_state.files.keys() {
if !local_files.iter().any(|f| f.path == *path) {
if !local_paths.contains(path.as_str()) {
pending_changes += 1;
}
}
@@ -1105,6 +1136,22 @@ mod tests {
assert!(actions.is_empty());
}
#[test]
fn test_prune_orphan_bases() {
let mut state = SyncState::default();
state.files.insert("kept_local.md".to_string(), make_base("a"));
state.files.insert("kept_remote.md".to_string(), make_base("b"));
state.files.insert("orphan.md".to_string(), make_base("c"));
let local = vec![make_local("kept_local.md", "a")];
let remote = vec![make_remote("kept_remote.md")];
prune_orphan_bases(&mut state, &local, &remote);
assert!(state.files.contains_key("kept_local.md"));
assert!(state.files.contains_key("kept_remote.md"));
assert!(!state.files.contains_key("orphan.md"));
}
#[test]
fn test_multiple_files_mixed() {
let local = vec![
@@ -1136,8 +1183,7 @@
#[test]
fn test_sync_state_save_load_roundtrip() {
let temp_dir = TempDir::new().unwrap();
let mut state = SyncState::default();
state.last_sync = Some(Utc::now());
let mut state = SyncState { last_sync: Some(Utc::now()), ..Default::default() };
state.record_file("test.md", "abc123", Some("2026-01-01T00:00:00Z"), 42);
state.save(temp_dir.path()).unwrap();

View file

@@ -448,12 +448,12 @@ pub fn store_credentials(domain: &str, username: &str, password: &str) -> Result
let user_entry = keyring::Entry::new(&service, "username")
.map_err(|e| Error::Credential(format!("Failed to create keyring entry: {}", e)))?;
user_entry.setpassword(username)
user_entry.set_password(username)
.map_err(|e| Error::Credential(format!("Failed to store username: {}", e)))?;
let pass_entry = keyring::Entry::new(&scoped_service, "password")
.map_err(|e| Error::Credential(format!("Failed to create keyring entry: {}", e)))?;
pass_entry.setpassword(password)
pass_entry.set_password(password)
.map_err(|e| Error::Credential(format!("Failed to store password: {}", e)))?;
// Clean up legacy unscoped password entry if present
@@ -478,18 +478,18 @@ pub fn load_credentials(domain: &str) -> Result<(Zeroizing<String>, Zeroizing<St
let user_entry = keyring::Entry::new(&service, "username")
.map_err(|e| Error::Credential(format!("Failed to create keyring entry: {}", e)))?;
if let Ok(user) = user_entry.getpassword() {
if let Ok(user) = user_entry.get_password() {
// Try scoped password key first (domain+username), fall back to legacy unscoped key
let scoped_service = format!("com.onyx.webdav.{}::{}", domain, user);
let found = keyring::Entry::new(&scoped_service, "password")
.ok()
.and_then(|e| e.getpassword().ok())
.and_then(|e| e.get_password().ok())
.map(|p| (p, false))
.or_else(|| {
// Migration fallback: try legacy unscoped password entry
keyring::Entry::new(&service, "password")
.ok()
.and_then(|e| e.getpassword().ok())
.and_then(|e| e.get_password().ok())
.map(|p| (p, true))
});
@@ -497,7 +497,7 @@ pub fn load_credentials(domain: &str) -> Result<(Zeroizing<String>, Zeroizing<St
// Auto-migrate legacy credentials to scoped format
if needs_migration {
if let Ok(entry) = keyring::Entry::new(&scoped_service, "password") {
let _ = entry.setpassword(&pass);
let _ = entry.set_password(&pass);
}
if let Ok(legacy) = keyring::Entry::new(&service, "password") {
let _ = legacy.delete_credential();
@@ -547,7 +547,7 @@ pub fn delete_credentials(domain: &str) -> Result<()> {
// Load username first so we can delete the scoped password entry
let username = keyring::Entry::new(&service, "username")
.ok()
.and_then(|e| e.getpassword().ok());
.and_then(|e| e.get_password().ok());
if let Some(user) = &username {
let scoped_service = format!("com.onyx.webdav.{}::{}", domain, user);

View file

@@ -216,6 +216,8 @@ repo.rename_list(list_id, "New Name".to_string())?;
#### Move Task Between Lists
```rust
// Atomically moves a task from one list to another.
// If the delete-from-source step fails, the copy in the destination is rolled back.
repo.move_task(from_list_id, to_list_id, task_id)?;
```
@@ -351,12 +353,14 @@ Credentials are stored in the platform keychain (Windows Credential Manager, mac
```rust
use onyx_core::webdav::{store_credentials, load_credentials, delete_credentials};
use zeroize::Zeroizing;
// Store credentials
store_credentials("nextcloud.example.com", "username", "password")?;
// Load credentials (returns Zeroizing<String> wrappers that wipe memory on drop)
let (username, password) = load_credentials("nextcloud.example.com")?;
// Load credentials — returns Zeroizing<String> wrappers that wipe memory on drop
let (username, password): (Zeroizing<String>, Zeroizing<String>) =
load_credentials("nextcloud.example.com")?;
// Delete credentials
delete_credentials("nextcloud.example.com")?;
@@ -452,7 +456,7 @@ All metadata and state files use an atomic write pattern (write to `.tmp` then r
- **List names**: Rejected if they contain `/`, `\`, or `..` components. Canonicalized and verified to stay within workspace root.
- **Sync paths**: Validated to reject `..` components and backslashes anywhere in the path before any file system operation.
- **Workspace paths** (Tauri): Rejected if they point to system directories (`/etc`, `/usr`, `/bin`, etc.).
- **Workspace paths** (Tauri): Rejected if they point to the filesystem root (`/`) or system directories (`/etc`, `/usr`, `/bin`, `/sbin`, `/var`, `/proc`, `/sys`, `/dev`).
- **Filenames**: Sanitized to replace `/ \ : * ? " < > |` and control characters with `_`.
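The filename rule in the last bullet maps each reserved character to `_`. A sketch of that transformation (illustrative, not the crate's exact `sanitize_filename`):

```rust
// Replace / \ : * ? " < > | and ASCII control characters with '_', per the
// rule above. Everything else, including spaces and Unicode, passes through.
fn sanitize_filename_sketch(name: &str) -> String {
    name.chars()
        .map(|c| match c {
            '/' | '\\' | ':' | '*' | '?' | '"' | '<' | '>' | '|' => '_',
            c if c.is_control() => '_',
            c => c,
        })
        .collect()
}
```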
## Example: Complete Workflow
@@ -519,9 +523,9 @@ Key test areas:
## Thread Safety
The `Storage` trait requires `Send + Sync`, and `TaskRepository` wraps `Box<dyn Storage + Send + Sync>`, so repository instances can be shared across threads behind a `Mutex`. The Tauri GUI uses `Mutex<AppState>` for this purpose.
`TaskRepository` holds its storage as `Box<dyn Storage + Send + Sync>`, so any concrete storage implementation passed in must be `Send + Sync`. Repository instances can be shared across threads behind a `Mutex` — the Tauri GUI uses `Mutex<AppState>` for this purpose.
For concurrent access:
1. Wrap `TaskRepository` in `Mutex` or `RwLock` (the Tauri app does this)
2. Or create separate repository instances per thread (file system handles locking)
2. Or create separate repository instances per thread. Note that `FileSystemStorage` does not coordinate writes between processes — concurrent multi-process writes to the same workspace are not supported outside the WebDAV sync flow, which uses a `.sync.lock` file.

View file

@@ -27,7 +27,7 @@ cargo run -p onyx-cli -- --help
# Run the Tauri GUI
cd apps/tauri && npm install
npm run tauri dev
npm run tauri dev # (Wayland: WEBKIT_DISABLE_DMABUF_RENDERER=1 npm run tauri dev)
```
## Project Structure
@@ -72,11 +72,15 @@ onyx/
│ │ ├── main.ts
│ │ ├── app.css # Tailwind CSS 4 + theme
│ │ ├── App.svelte
│ │ ├── test/
│ │ │ └── setup.ts
│ │ └── lib/
│ │ ├── screens/ # Full-page views
│ │ ├── components/ # Reusable UI components
│ │ ├── stores/ # Svelte state (app.svelte.ts)
│ │ ├── dateFormat.ts # Date formatting utilities
│ │ ├── grouping.ts # Task grouping logic
│ │ ├── paths.ts # Path utilities
│ │ └── types.ts # TypeScript type definitions
│ ├── tauri-plugin-credentials/ # Cross-platform credential storage plugin
│ │ ├── Cargo.toml

BIN screenshot.png (new file, 29 KiB; binary preview not shown)

(12 further binary image files were added, between 1.4 KiB and 26 KiB each; previews not shown.)