Compare commits

..

1 commit

Author: Claude · SHA1: 9036ac360a
docs: sync markdown docs with actual codebase state
- README.md: update Phase 4 status to reflect Android preliminaries done
  (file-watcher gating, tauri-plugin-credentials, safe area insets, Android
  targets configured) but init/build not yet run; add tauri-plugin-credentials
  to project structure; expand docs/ tree; add newer GUI features (workspace
  rename, safe area insets, accessibility); add setup screen screenshot;
  update What's Next to note Phase 4 is in progress
- PLAN.md: fix Phase 4 checkboxes — android init and build-succeeds were
  marked [x] but gen/android/ does not exist; correct cfg gate annotation
  from #[cfg(not(mobile))] to #[cfg(not(target_os = "android"))]; update
  dependency snippet to reflect actual keyring/zeroize/sha2/quick-xml usage;
  bump Last Updated to 2026-04-17
- docs/DEVELOPMENT.md: add WEBKIT_DISABLE_DMABUF_RENDERER=1 Wayland note
  to tauri dev command

https://claude.ai/code/session_01MypN7wPNqeSgw8b5DYpMc1
2026-04-17 15:00:06 +00:00
25 changed files with 251 additions and 603 deletions


@@ -1,38 +1,5 @@
 # Audit Log
-## 2026-04-27
-Found and fixed 3 issues:
-1. **Perf: needless clone of upload payload** (sync.rs:733) — the `SyncAction::Upload` arm read the file into `data`, computed `compute_checksum(&data)`, then called `client.put_file(path, data.clone())`. The clone existed only because the next statement needed `data.len()` for the sync-state record. Captured `data.len() as u64` into `len` first, moved `data` into `put_file`, and used `len` afterwards — one full byte copy avoided per uploaded file.
-2. **Bug: Google Tasks sync silently drops metadata-write failures** (google_tasks.rs:361, 377) — both `.listdata.json` and `.onyx-workspace.json` were written via `if let Ok(meta_content) = serde_json::to_string_pretty(...) { let _ = atomic_write(...); }`, so a serialization or atomic-write error returned `Ok(GoogleSyncResult { downloaded: N, errors: [] })` even though list/workspace ordering was never persisted. Both writes now push their errors into the `errors` vec already returned in `GoogleSyncResult`.
-3. **Code quality: unreachable dead-error path in storage dedup** (storage.rs:447) — the dedup loop computed `Option<Task>` from each `by_id` group and then `ok_or_else(|| Error::InvalidData("Empty dedup entries for task"))?`. `by_id` is only populated by `entry(uuid).or_default().push(entry)`, so every group has ≥1 element and the `None` branch is unreachable. Replaced the `Option`+`?` with direct `expect` calls (one per branch) that document the non-empty invariant; the loop now yields `Task` directly.
-## 2026-04-25
-Found and fixed 3 issues:
-1. **Perf: O(n²) deletion-detection in `get_sync_status`** (sync.rs:918) — for every path tracked in `sync_state.files`, the loop scanned `local_files` linearly via `.any(|f| f.path == *path)` to decide whether to count it as a deleted-locally pending change. The earlier "modified or new" loop already used the inverse direction with `sync_state.files.get(...)` (O(1)), so the second loop was the inconsistent one. Built a `HashSet<&str>` of local paths once and used `contains` for the membership check.
-2. **Perf: cascade delete walks all_tasks per frontier pop** (tauri/lib.rs:460) — `delete_task`'s descendant BFS scanned the full task list on every parent popped from the frontier, making the work O(n × depth). Built a `parent_id -> [child_id]` `HashMap` once, then the BFS visits each descendant in O(1) amortised, dropping total cost to O(n).
-3. **Code quality: duplicate atomic-write in `AppConfig::save_to_file`** (config.rs:114) — the function had its own copy of the temp-file + rename + cleanup-on-failure dance even though `storage::atomic_write` is `pub(crate)` and was already shared by `google_tasks.rs`. Replaced the inline implementation with a call to `crate::storage::atomic_write` so the crate has one canonical atomic write path.
-## 2026-04-24
-Found and fixed 3 issues:
-1. **Bug: orphan base entries never cleaned from sync state** (sync.rs) — when a file was deleted both locally and remotely, `compute_sync_actions` emitted no action (the `(None, None, Some(_))` arm), so the base entry in `.syncstate.json` persisted forever. On each subsequent sync the same no-op case fired and the state file grew. Added `prune_orphan_bases` pass in `sync_workspace_inner` that drops base entries not present in either scan.
-2. **Code quality: redundant is_some_and on already-matched Option** (sync.rs:208) — the `(None, Some(_), Some(b))` arm re-checked `remote` via `remote.is_some_and(|r| ...)` even though the pattern had just proven `remote` is `Some(_)`. Bound the inner value with `Some(r)` in the pattern and used `r` directly.
-3. **Code quality: single-caller sanitize_filename wrapper** (storage.rs) — `FileSystemStorage::sanitize_filename` was a one-line forwarder to `crate::sanitize_filename` with one call site. Inlined the crate call and removed the method.
-## 2026-04-20
-Found and fixed 4 issues:
-1. **Dead code in conflict recovery** (sync.rs:756) — `parts[1] != ".listdata.json"` was unreachable because the branch is already gated on `parts[1].ends_with(".md")`, which `.listdata.json` cannot satisfy. Removed the redundant check.
-2. **O(n²) cascade delete** (tauri/lib.rs) — descendant traversal in `delete_task` used `Vec::contains` inside the inner loop, making it quadratic in the number of tasks per list. Swapped the visited set to `HashSet`; `HashSet::insert` folds the contains+push into one call.
-3. **Silent cascade failure in toggle_task** (tauri/lib.rs) — subtask `update_task` errors were discarded with `let _ = ...`, leaving subtasks stuck at the old status with no UI feedback. Propagate the error so the frontend can surface it.
-4. **Duplicated UUID-parse boilerplate** (tauri/lib.rs) — 17 commands repeated `Uuid::parse_str(&x).map_err(|e| e.to_string())?`. Extracted a `parse_uuid` helper so callers read as `let id = parse_uuid(&list_id)?;`.
 ## 2026-04-15
 Found and fixed 4 issues:
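The 2026-04-27 clone fix described above can be sketched in isolation. This is a minimal illustration of the capture-length-before-move pattern, not the project's actual code; `put_file` here is a hypothetical stand-in for the WebDAV client method that takes ownership of the payload.

```rust
// Hypothetical stand-in for the client's put_file: takes ownership of the
// payload buffer, just as the audit entry describes.
fn put_file(_path: &str, data: Vec<u8>) -> usize {
    data.len()
}

// Capture the length before moving the buffer into put_file, so the
// sync-state record can still be written afterwards without a clone.
fn upload(path: &str, data: Vec<u8>) -> u64 {
    let len = data.len() as u64; // needed after `data` is moved away
    let _sent = put_file(path, data); // `data` moved here, not cloned
    len // one full byte copy avoided per uploaded file
}
```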


@@ -30,7 +30,7 @@ The Tauri dev server runs on port 1422 (`vite.config.ts` and `tauri.conf.json`).
 Two-crate workspace (`resolver = "2"`, edition 2021) plus a Tauri app:
 - **onyx-core** — Pure Rust library. Storage trait with `FileSystemStorage` implementation, `TaskRepository` (main API), data models, config, error types. No CLI/UI dependencies. `keyring` feature-gated behind `keyring-storage` (default on) for Android compatibility.
-- **onyx-cli** — CLI frontend using clap. Commands are in `src/commands/` (init, workspace, list, task, group, sync). Output formatting in `src/output.rs`.
+- **onyx-cli** — CLI frontend using clap. Commands are in `src/commands/` (init, workspace, list, task, group). Output formatting in `src/output.rs`.
 - **apps/tauri/** — Tauri v2 GUI. Svelte 5 frontend in `src/`, Rust backend in `src-tauri/` with Tauri commands that call into `onyx-core`. `notify` crate feature-gated for Android. `tauri-plugin-credentials/` provides cross-platform credential storage (Android Keystore via EncryptedSharedPreferences, desktop via keyring crate).
 ### Key patterns
@@ -64,7 +64,7 @@ The GUI uses Svelte 5 runes mode (`$state`, `$derived`, `$effect`, `$props()`).
 Pre-alpha. No users, no released builds, no data to migrate. Breaking changes to on-disk formats, config structure, or sync conventions are free — do not add migration logic.
-### Current state (2026-04-27)
+### Current state (2026-04-15)
 - **Phase 1** (Core + CLI): Complete
 - **Phase 2** (WebDAV sync): Complete — remote folder browsing, checksum-based conflict resolution, auto-sync lifecycle, per-workspace sync interval
@@ -106,7 +106,7 @@ Pre-alpha. No users, no released builds, no data to migrate. Breaking changes to
 - Task deduplication on load (handles sync conflict duplicates)
 - Subtask hierarchy: subtask count shown on parent tasks in list, subtask detail via three-panel slide navigation, inline add at top of subtask list (new subtasks prepend), collapsible completed subtasks section, cascade delete (parent deletion removes all subtasks with confirmation warning)
 - Custom confirmation dialogs (ConfirmDialog component replaces native confirm())
-- Workspace path validation (rejects filesystem root `/` and system directories: `/etc`, `/usr`, `/bin`, `/sbin`, `/var`, `/proc`, `/sys`, `/dev`)
+- Workspace path validation (rejects system directories)
 - Task detail auto-cleanup (taskStack clears when viewed task is deleted or list switches)
 - Swipe gestures on mobile: swipe left/right on a task to toggle completion (swipe direction depends on current status)
 - Accessibility: ARIA labels/roles on interactive components, keyboard handlers, `prefers-reduced-motion` CSS support


@@ -671,6 +671,7 @@ apps/tauri/
 │ │ ├── TaskItem.svelte
 │ │ ├── NewTaskInput.svelte
 │ │ ├── TaskDetailView.svelte
+│ │ ├── BottomSheet.svelte
 │ │ ├── ConfirmDialog.svelte
 │ │ └── DateTimePicker.svelte
 │ └── stores/
@@ -765,7 +766,7 @@ WorkspaceConfig {
 - [x] List rename (inline input via list kebab menu in drawer)
 - [x] Keyboard shortcuts (Escape closes settings → detail → drawer → menus in priority order)
 - [x] Sync status indicators (last-sync time + upload/download counts chip in TasksScreen)
-- [ ] Push/pull sync mode selection (session-only sync direction selector in SettingsScreen)
+- [x] Push/pull sync mode selection (session-only sync direction selector in SettingsScreen)
 - [x] Group-by-date toggle per list (checkmark toggle in list kebab menu)
 - [x] Subtask hierarchy (expand/collapse, inline add, cascade toggle/delete)
 - [ ] Search/filter tasks
@@ -1058,6 +1059,6 @@ This project is free and open-source software licensed under GPL v3.
 ---
-**Last Updated**: 2026-04-27
-**Document Version**: 4.5
+**Last Updated**: 2026-04-17
+**Document Version**: 4.3
 **Status**: Ready to Implement - Milestone-Driven Plan


@@ -60,11 +60,6 @@ fn lock_state(state: &Mutex<AppState>) -> Result<std::sync::MutexGuard<'_, AppSt
     state.lock().map_err(|e| format!("State lock poisoned: {}", e))
 }
-/// Parse a UUID from a string, converting errors to the String format Tauri commands use.
-fn parse_uuid(s: &str) -> Result<Uuid, String> {
-    Uuid::parse_str(s).map_err(|e| e.to_string())
-}
 impl AppState {
     /// Persist config to disk, converting errors to String for Tauri commands.
     fn save_config(&self) -> Result<(), String> {
@@ -72,25 +67,6 @@ impl AppState {
     }
 }
-/// Extract the hostname from a URL (scheme://host/...), used as the credential key.
-/// Returns an empty string if the URL has no scheme or host.
-fn credential_domain(url: &str) -> String {
-    url.split("://")
-        .nth(1)
-        .and_then(|rest| rest.split('/').next())
-        .unwrap_or("")
-        .to_string()
-}
-/// Join a remote base directory with a child path, handling empty base and trailing slashes.
-fn join_remote_path(base: &str, child: &str) -> String {
-    if base.is_empty() {
-        child.to_string()
-    } else {
-        format!("{}/{}", base.trim_end_matches('/'), child)
-    }
-}
 /// Validate that a workspace path is a reasonable directory and not a system path.
 fn validate_workspace_path(path: &str) -> Result<(), String> {
     let p = PathBuf::from(path);
@@ -103,10 +79,7 @@ fn validate_workspace_path(path: &str) -> Result<(), String> {
     #[cfg(unix)]
     {
         let forbidden = ["/", "/etc", "/usr", "/bin", "/sbin", "/var", "/proc", "/sys", "/dev"];
-        // Strip trailing slashes, but keep "/" itself — trim_end_matches would
-        // collapse it to "" and slip past the forbidden check.
         let canonical = normalized.trim_end_matches('/');
-        let canonical = if canonical.is_empty() { "/" } else { canonical };
         if forbidden.contains(&canonical) {
             return Err(format!("Cannot use system directory as workspace: {}", path));
         }
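The guard removed in this hunk exists because of a subtle `trim_end_matches` edge case. As a standalone sketch (the function name is ours, for illustration only): trimming trailing slashes collapses `"/"` to the empty string, which would slip past a forbidden-directory check unless restored.

```rust
// Strip trailing slashes, but keep "/" itself: trim_end_matches('/')
// turns "/" into "", which a forbidden-path list would never match.
fn canonical_path(path: &str) -> &str {
    let trimmed = path.trim_end_matches('/');
    if trimmed.is_empty() { "/" } else { trimmed }
}
```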
@@ -206,13 +179,6 @@ fn add_workspace(
     state: State<'_, Mutex<AppState>>,
 ) -> Result<(), String> {
     validate_workspace_path(&path)?;
-    // Ensure the path exists and is a valid workspace before persisting the
-    // config. Without this, calling add_workspace directly on a missing
-    // directory would save the workspace but every subsequent ensure_repo
-    // call would fail with "Path does not exist".
-    TaskRepository::init(PathBuf::from(&path))
-        .map(|_| ())
-        .map_err(|e| e.to_string())?;
     let mut s = lock_state(&state)?;
     let ws = WorkspaceConfig::new(name, PathBuf::from(&path));
     let id = s.config.add_workspace(ws);
@@ -290,7 +256,10 @@ async fn rename_workspace(
     let base_url = webdav_url.as_deref().ok_or("No WebDAV URL configured")?;
     let remote_path = webdav_path.as_deref().unwrap_or("");
-    let domain = credential_domain(base_url);
+    let domain = base_url
+        .split("://").nth(1)
+        .and_then(|rest| rest.split('/').next())
+        .unwrap_or("").to_string();
     let creds = app_handle.state::<Credentials<tauri::Wry>>();
     let (username, password) = creds.load(&domain)?;
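The hostname extraction inlined in this hunk is the body of the `credential_domain` helper that the diff removes earlier: take everything between `://` and the next `/`. A standalone version, to make its behavior concrete; note that a port, if present, stays attached to the host.

```rust
// Extract the hostname from a URL (scheme://host/...), used as the
// credential key. Returns an empty string if there is no scheme or host.
fn credential_domain(url: &str) -> String {
    url.split("://")
        .nth(1)                                  // drop the scheme
        .and_then(|rest| rest.split('/').next()) // keep up to the first '/'
        .unwrap_or("")
        .to_string()
}
```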
@@ -371,7 +340,7 @@ fn delete_list(
     let mut s = lock_state(&state)?;
     ensure_repo(&mut s)?;
     mute_watcher(&mut s);
-    let id = parse_uuid(&list_id)?;
+    let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
     repo_mut(&mut s)?
         .delete_list(id)
         .map_err(|e| e.to_string())
@@ -386,7 +355,7 @@ fn list_tasks(
 ) -> Result<Vec<Task>, String> {
     let mut s = lock_state(&state)?;
     ensure_repo(&mut s)?;
-    let id = parse_uuid(&list_id)?;
+    let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
     repo_ref(&s)?
         .list_tasks(id)
         .map_err(|e| e.to_string())
@@ -398,27 +367,20 @@ fn create_task(
     title: String,
     description: Option<String>,
     parent_id: Option<String>,
-    date: Option<chrono::DateTime<chrono::Utc>>,
-    has_time: Option<bool>,
     state: State<'_, Mutex<AppState>>,
 ) -> Result<Task, String> {
     let mut s = lock_state(&state)?;
     ensure_repo(&mut s)?;
     mute_watcher(&mut s);
-    let id = parse_uuid(&list_id)?;
+    let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
     let mut task = Task::new(title);
     if let Some(desc) = description.filter(|d| !d.is_empty()) {
         task.description = desc;
     }
     if let Some(pid) = parent_id {
-        let parent_uuid = parse_uuid(&pid)?;
+        let parent_uuid = Uuid::parse_str(&pid).map_err(|e| e.to_string())?;
         task.parent_id = Some(parent_uuid);
     }
-    // Accept the date fields at creation time so callers don't have to do a
-    // second update() round-trip just to attach a date — which previously
-    // dropped the date entirely if the follow-up update failed.
-    task.date = date;
-    task.has_time = has_time.unwrap_or(false);
     repo_mut(&mut s)?
         .create_task(id, task)
         .map_err(|e| e.to_string())
@@ -433,7 +395,7 @@ fn update_task(
     let mut s = lock_state(&state)?;
     ensure_repo(&mut s)?;
     mute_watcher(&mut s);
-    let id = parse_uuid(&list_id)?;
+    let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
     repo_mut(&mut s)?
         .update_task(id, task)
         .map_err(|e| e.to_string())
@@ -448,36 +410,17 @@ fn delete_task(
     let mut s = lock_state(&state)?;
     ensure_repo(&mut s)?;
     mute_watcher(&mut s);
-    let lid = parse_uuid(&list_id)?;
-    let tid = parse_uuid(&task_id)?;
+    let lid = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
+    let tid = Uuid::parse_str(&task_id).map_err(|e| e.to_string())?;
     let repo = repo_mut(&mut s)?;
-    // Cascade-delete the full descendant subtree (not just direct children)
-    // so deleting a parent can't leave grandchildren orphaned with a
-    // parent_id pointing at a deleted task.
+    // Cascade-delete subtasks first
     let all_tasks = repo.list_tasks(lid).map_err(|e| e.to_string())?;
-    // Build a parent -> children index in one pass so the BFS below is O(n)
-    // instead of O(n * depth) scanning all tasks for each frontier pop.
-    let mut children_by_parent: std::collections::HashMap<Uuid, Vec<Uuid>> =
-        std::collections::HashMap::new();
-    for t in &all_tasks {
-        if let Some(pid) = t.parent_id {
-            children_by_parent.entry(pid).or_default().push(t.id);
-        }
-    }
-    let mut to_delete: std::collections::HashSet<Uuid> = std::collections::HashSet::new();
-    let mut frontier: Vec<Uuid> = vec![tid];
-    while let Some(parent) = frontier.pop() {
-        if let Some(children) = children_by_parent.get(&parent) {
-            for &child_id in children {
-                if to_delete.insert(child_id) {
-                    frontier.push(child_id);
-                }
-            }
-        }
-    }
-    // Delete children before the parent so a mid-cascade failure doesn't
-    // leave the parent removed but descendants stranded.
-    for child_id in to_delete {
+    let child_ids: Vec<Uuid> = all_tasks
+        .iter()
+        .filter(|t| t.parent_id == Some(tid))
+        .map(|t| t.id)
+        .collect();
+    for child_id in child_ids {
         repo.delete_task(lid, child_id).map_err(|e| format!("Failed to delete subtask {}: {}", child_id, e))?;
     }
     repo.delete_task(lid, tid)
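The left-hand side of this hunk builds a parent-to-children index once, then walks the whole descendant subtree from a frontier. A self-contained sketch of that traversal, with `u32` standing in for `Uuid` and edges supplied as plain pairs (both simplifications are ours):

```rust
use std::collections::{HashMap, HashSet};

// Collect every descendant of `root` given (parent_id, child_id) edges.
// The index is built in one pass, so the walk is O(n) overall instead of
// rescanning all tasks for each popped parent.
fn descendants(edges: &[(u32, u32)], root: u32) -> HashSet<u32> {
    let mut children: HashMap<u32, Vec<u32>> = HashMap::new();
    for &(p, c) in edges {
        children.entry(p).or_default().push(c);
    }
    let mut seen = HashSet::new();
    let mut frontier = vec![root];
    while let Some(parent) = frontier.pop() {
        if let Some(kids) = children.get(&parent) {
            for &k in kids {
                if seen.insert(k) {
                    frontier.push(k); // each descendant visited once
                }
            }
        }
    }
    seen
}
```

`HashSet::insert` returns whether the value was newly inserted, folding the contains-then-push pair into a single call, which is the same trick the 2026-04-20 audit entry describes.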
@@ -493,8 +436,8 @@ fn toggle_task(
     let mut s = lock_state(&state)?;
     ensure_repo(&mut s)?;
     mute_watcher(&mut s);
-    let lid = parse_uuid(&list_id)?;
-    let tid = parse_uuid(&task_id)?;
+    let lid = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
+    let tid = Uuid::parse_str(&task_id).map_err(|e| e.to_string())?;
     let repo = repo_mut(&mut s)?;
     let mut task = repo.get_task(lid, tid).map_err(|e| e.to_string())?;
     match task.status {
@@ -511,9 +454,7 @@ fn toggle_task(
                 TaskStatus::Backlog => child.uncomplete(),
                 TaskStatus::Completed => child.complete(),
             }
-            let child_id = child.id;
-            repo.update_task(lid, child)
-                .map_err(|e| format!("Failed to cascade to subtask {}: {}", child_id, e))?;
+            let _ = repo.update_task(lid, child);
         }
     }
     Ok(task)
@@ -529,8 +470,8 @@ fn reorder_task(
     let mut s = lock_state(&state)?;
     ensure_repo(&mut s)?;
     mute_watcher(&mut s);
-    let lid = parse_uuid(&list_id)?;
-    let tid = parse_uuid(&task_id)?;
+    let lid = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
+    let tid = Uuid::parse_str(&task_id).map_err(|e| e.to_string())?;
     repo_mut(&mut s)?
         .reorder_task(lid, tid, new_position)
         .map_err(|e| e.to_string())
@@ -548,9 +489,9 @@ fn move_task(
     let mut s = lock_state(&state)?;
     ensure_repo(&mut s)?;
     mute_watcher(&mut s);
-    let from = parse_uuid(&from_list_id)?;
-    let to = parse_uuid(&to_list_id)?;
-    let tid = parse_uuid(&task_id)?;
+    let from = Uuid::parse_str(&from_list_id).map_err(|e| e.to_string())?;
+    let to = Uuid::parse_str(&to_list_id).map_err(|e| e.to_string())?;
+    let tid = Uuid::parse_str(&task_id).map_err(|e| e.to_string())?;
     repo_mut(&mut s)?
         .move_task(from, to, tid)
         .map_err(|e| e.to_string())
@@ -565,7 +506,7 @@ fn rename_list(
     let mut s = lock_state(&state)?;
     ensure_repo(&mut s)?;
     mute_watcher(&mut s);
-    let id = parse_uuid(&list_id)?;
+    let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
     repo_mut(&mut s)?
         .rename_list(id, new_name)
         .map_err(|e| e.to_string())
@@ -580,7 +521,7 @@ fn set_group_by_date(
     let mut s = lock_state(&state)?;
     ensure_repo(&mut s)?;
     mute_watcher(&mut s);
-    let id = parse_uuid(&list_id)?;
+    let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
     repo_mut(&mut s)?
         .set_group_by_date(id, enabled)
         .map_err(|e| e.to_string())
@@ -593,7 +534,7 @@ fn get_group_by_date(
 ) -> Result<bool, String> {
     let mut s = lock_state(&state)?;
     ensure_repo(&mut s)?;
-    let id = parse_uuid(&list_id)?;
+    let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
     repo_ref(&s)?
         .get_group_by_date(id)
         .map_err(|e| e.to_string())
@@ -681,9 +622,10 @@ async fn list_remote_folder(
     let dir_entries: Vec<_> = entries.into_iter().filter(|e| e.is_dir).collect();
     // Check all subfolders for .onyx-workspace.json in parallel
-    let sub_paths: Vec<_> = dir_entries.iter()
-        .map(|entry| join_remote_path(&path, &entry.path))
-        .collect();
+    let sub_paths: Vec<_> = dir_entries.iter().map(|entry| {
+        if path.is_empty() { entry.path.clone() }
+        else { format!("{}/{}", path.trim_end_matches('/'), entry.path) }
+    }).collect();
     let checks: Vec<_> = sub_paths.iter().map(|sp| {
         client.list_files(sp)
     }).collect();
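The path join inlined in this hunk (and in the two hunks that follow) is the body of the `join_remote_path` helper the diff removes earlier. As a standalone function, to show the two cases it handles: an empty base yields the child alone, and a trailing slash on the base is not doubled.

```rust
// Join a remote base directory with a child path: empty base means the
// child is already the full path; otherwise trim any trailing slash from
// the base so the separator is never doubled.
fn join_remote_path(base: &str, child: &str) -> String {
    if base.is_empty() {
        child.to_string()
    } else {
        format!("{}/{}", base.trim_end_matches('/'), child)
    }
}
```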
@@ -715,7 +657,11 @@ async fn inspect_remote_workspace(
     let mut lists = Vec::new();
     for entry in entries {
         if !entry.is_dir { continue; }
-        let list_path = join_remote_path(&path, &entry.path);
+        let list_path = if path.is_empty() {
+            entry.path.clone()
+        } else {
+            format!("{}/{}", path.trim_end_matches('/'), entry.path)
+        };
         let files = client.list_files(&list_path).await.unwrap_or_else(|e| {
             eprintln!("Warning: failed to list remote folder '{}': {}", list_path, e);
             Vec::new()
@@ -751,7 +697,11 @@ async fn create_remote_workspace(
         "list_order": [],
         "last_opened_list": null,
     });
-    let file_path = join_remote_path(&path, ".onyx-workspace.json");
+    let file_path = if path.is_empty() {
+        ".onyx-workspace.json".to_string()
+    } else {
+        format!("{}/{}", path.trim_end_matches('/'), ".onyx-workspace.json")
+    };
     client.put_file(&file_path, serde_json::to_string_pretty(&metadata).map_err(|e| e.to_string())?.into_bytes())
         .await
         .map_err(|e| e.to_string())?;
@@ -785,7 +735,12 @@ fn add_webdav_workspace(
     s.repo = None;
     // Store credentials keyed by hostname
-    let domain = credential_domain(&webdav_url);
+    let domain = webdav_url
+        .split("://")
+        .nth(1)
+        .and_then(|rest| rest.split('/').next())
+        .unwrap_or("")
+        .to_string();
     s.save_config()?;
     drop(s);
     let creds = app_handle.state::<Credentials<tauri::Wry>>();
@@ -848,7 +803,12 @@ async fn sync_workspace(
     };
     // Step 2: load credentials
-    let domain = credential_domain(&webdav_url);
+    let domain = webdav_url
+        .split("://")
+        .nth(1)
+        .and_then(|rest| rest.split('/').next())
+        .unwrap_or("")
+        .to_string();
     let creds = app_handle.state::<Credentials<tauri::Wry>>();
     let (username, password) = creds.load(&domain)?;


@@ -0,0 +1,42 @@
+<script lang="ts">
+  import type { Snippet } from "svelte";
+  let { onclose, children }: { onclose: () => void; children: Snippet } = $props();
+</script>
+<!-- Backdrop -->
+<div
+  class="fixed inset-0 z-40 bg-black/40"
+  role="button"
+  tabindex="-1"
+  aria-label="Close sheet"
+  onclick={onclose}
+  onkeydown={(e) => { if (e.key === "Escape") onclose(); }}
+></div>
+<!-- Sheet -->
+<div
+  role="dialog"
+  aria-modal="true"
+  class="fixed bottom-0 left-0 right-0 z-50 max-h-[70vh] overflow-y-auto rounded-t-2xl bg-surface-light shadow-xl dark:bg-card-dark animate-slide-up"
+>
+  <!-- Drag handle -->
+  <div class="flex justify-center py-2">
+    <div class="h-1 w-8 rounded-full bg-gray-300 dark:bg-gray-600"></div>
+  </div>
+  {@render children()}
+  <div class="h-[env(safe-area-inset-bottom)]"></div>
+</div>
+<style>
+  @keyframes slide-up {
+    from {
+      transform: translateY(100%);
+    }
+    to {
+      transform: translateY(0);
+    }
+  }
+  .animate-slide-up {
+    animation: slide-up 0.25s ease-out;
+  }
+</style>


@@ -13,8 +13,6 @@
   let viewYear = $state(existing ? existing.getFullYear() : now.getFullYear());
   let viewMonth = $state(existing ? existing.getMonth() : now.getMonth());
   let selectedDay = $state(existing ? existing.getDate() : now.getDate());
-  let selectedYear = $state(existing ? existing.getFullYear() : now.getFullYear());
-  let selectedMonth = $state(existing ? existing.getMonth() : now.getMonth());
   let includeTime = $state(has_time);
   let selectedHour = $state(existing ? existing.getHours() : now.getHours());
   let selectedMinute = $state(existing ? existing.getMinutes() : 0);
@@ -52,8 +50,6 @@
   function selectDay(day: number) {
     selectedDay = day;
-    selectedYear = viewYear;
-    selectedMonth = viewMonth;
   }
   function isToday(day: number): boolean {
@@ -61,16 +57,16 @@
   }
   function isSelected(day: number): boolean {
-    return selectedDay === day && selectedYear === viewYear && selectedMonth === viewMonth;
+    return selectedDay === day && (!value || (() => {
+      const v = new Date(value);
+      return v.getFullYear() === viewYear && v.getMonth() === viewMonth;
+    })());
   }
   function done() {
     const h = includeTime ? selectedHour : 0;
     const m = includeTime ? selectedMinute : 0;
-    // Commit based on the last-selected year/month, not the currently-viewed
-    // ones — users can navigate months after selecting a day without
-    // accidentally shifting the chosen date to the viewed month.
-    const iso = new Date(selectedYear, selectedMonth, selectedDay, h, m).toISOString();
+    const iso = new Date(viewYear, viewMonth, selectedDay, h, m).toISOString();
     onchange(iso, includeTime);
     dismiss();
   }
@@ -133,9 +129,9 @@
   <button
     onclick={() => selectDay(day)}
     class="mx-auto flex h-8 w-8 items-center justify-center rounded-full text-sm transition-colors
-      {isSelected(day) ? 'bg-primary text-white' : ''}
-      {isToday(day) && !isSelected(day) ? 'font-bold text-primary' : ''}
-      {!isSelected(day) && !isToday(day) ? 'hover:bg-black/5 dark:hover:bg-white/10' : ''}"
+      {selectedDay === day ? 'bg-primary text-white' : ''}
+      {isToday(day) && selectedDay !== day ? 'font-bold text-primary' : ''}
+      {selectedDay !== day && !isToday(day) ? 'hover:bg-black/5 dark:hover:bg-white/10' : ''}"
   >
     {day}
   </button>


@@ -1,74 +0,0 @@
-import { describe, it, expect, vi, beforeEach } from "vitest";
-import { render, screen, cleanup } from "@testing-library/svelte";
-import userEvent from "@testing-library/user-event";
-import DateTimePicker from "./DateTimePicker.svelte";
-
-beforeEach(() => {
-  cleanup();
-});
-
-describe("DateTimePicker — selected highlight", () => {
-  it("only marks the selected day in the month/year that was actually picked", async () => {
-    const user = userEvent.setup();
-    // Pick a date in the current month so the component opens on it.
-    const now = new Date();
-    const existing = new Date(now.getFullYear(), now.getMonth(), 15, 0, 0, 0).toISOString();
-    render(DateTimePicker, {
-      value: existing,
-      has_time: false,
-      onchange: vi.fn(),
-      onclose: vi.fn(),
-    });
-
-    // The "15" button for the current month should be rendered with the
-    // selected styling (bg-primary).
-    const day15 = screen.getByRole("button", { name: "15" });
-    expect(day15.className).toMatch(/bg-primary/);
-
-    // Navigate one month forward. The same "15" cell must NOT be marked as
-    // selected, because the user hasn't picked a day in that month yet.
-    const nextMonthBtn = screen.getAllByRole("button").find((b) =>
-      b.querySelector("svg path[d*='M7.21 14.77']"),
-    ) as HTMLElement;
-    await user.click(nextMonthBtn);
-
-    const nextMonth15 = screen.getByRole("button", { name: "15" });
-    expect(nextMonth15.className).not.toMatch(/bg-primary/);
-  });
-
-  it("commits based on the last-selected month, not the currently-viewed month", async () => {
-    const user = userEvent.setup();
-    const onchange = vi.fn();
-    const onclose = vi.fn();
-    // Start with April 10 selected (use a fixed month/year so the test is stable).
-    const existing = new Date(2026, 3, 10, 0, 0, 0).toISOString();
-    render(DateTimePicker, {
-      value: existing,
-      has_time: false,
-      onchange,
-      onclose,
-    });
-
-    // Pick the 20th while viewing April.
-    await user.click(screen.getByRole("button", { name: "20" }));
-
-    // Flip to May.
-    const nextMonthBtn = screen.getAllByRole("button").find((b) =>
-      b.querySelector("svg path[d*='M7.21 14.77']"),
-    ) as HTMLElement;
-    await user.click(nextMonthBtn);
-
-    // Hit Done.
-    await user.click(screen.getByRole("button", { name: "Done" }));
-
-    expect(onchange).toHaveBeenCalled();
-    const committed = new Date(onchange.mock.calls[0][0] as string);
-    // April == month 3 (0-indexed). We navigated to May without reselecting,
-    // so the committed date must still be April 20.
-    expect(committed.getMonth()).toBe(3);
-    expect(committed.getDate()).toBe(20);
-    expect(committed.getFullYear()).toBe(2026);
-  });
-});
View file
@@ -17,15 +17,10 @@
   async function handleSubmit() {
     if (!title.trim()) return;
-    // Pass date/has_time into createTask directly so the date can't be lost
-    // if a second round-trip to update() failed after the create succeeded.
-    await app.createTask(
-      title.trim(),
-      description.trim() || undefined,
-      undefined,
-      date,
-      dateHasTime,
-    );
+    const created = await app.createTask(title.trim(), description.trim() || undefined);
+    if (date && created) {
+      await app.updateTask({ ...created, date: date, has_time: dateHasTime });
+    }
     title = "";
     description = "";
     date = null;
View file
@@ -120,12 +120,7 @@
   async function executeDeleteCompletedSubtasks() {
     confirmDeleteCompleted = false;
     showSubtaskMenu = false;
-    // Snapshot — completedSubtasks is reactive and shrinks as we delete.
-    // Bail on first failure so we don't silently leave a partial delete.
-    const targets = [...completedSubtasks];
-    for (const s of targets) {
-      if (!(await app.deleteTask(s.id))) return;
-    }
+    for (const s of completedSubtasks) await app.deleteTask(s.id);
   }
 
   function handleSubtaskMenuClickOutside(e: MouseEvent) {
View file
@@ -15,29 +15,14 @@
   let webdavUser = $state("");
   let webdavPass = $state("");
   let testStatus = $state<"idle" | "testing" | "ok" | "fail">("idle");
-  let credsLoaded = $state(false);
   let renaming = $state(false);
   let renameValue = $state("");
-  let renameInput = $state<HTMLInputElement | null>(null);
   let showKebab = $state(false);
   let confirmRename = $state(false);
 
-  // Imperative focus — Svelte's native autofocus attribute is unreliable
-  // for inputs that appear only via conditional blocks.
   $effect(() => {
-    if (renaming && renameInput) {
-      renameInput.focus();
-      renameInput.select();
-    }
-  });
-
-  // Load stored credentials exactly once for this workspace. Previously this
-  // ran on every `ws.webdav_url` change, which silently clobbered in-progress
-  // user edits whenever any other setting updated the config.
-  $effect(() => {
-    if (credsLoaded || !ws?.webdav_url) return;
-    credsLoaded = true;
+    if (!ws?.webdav_url) return;
     webdavUrl = ws.webdav_url;
     try {
       const domain = new URL(ws.webdav_url).hostname;
@@ -50,12 +35,6 @@
     } catch {}
   });
 
-  // Any edit invalidates a prior test so users can't Save a config they
-  // haven't validated since changing it.
-  function markDirty() {
-    if (testStatus !== "idle") testStatus = "idle";
-  }
-
   async function testConnection() {
     testStatus = "testing";
     try {
@@ -72,12 +51,6 @@
   async function saveWebdav() {
     if (!webdavUrl.trim()) return;
-    // Require a successful test so a typo'd URL can't silently point the
-    // workspace at a dead server.
-    if (testStatus !== "ok") {
-      await testConnection();
-      if (testStatus !== "ok") return;
-    }
     await invoke("set_webdav_config", {
       workspaceId,
       webdavUrl: webdavUrl.trim(),
@@ -143,11 +116,11 @@
   {#if renaming}
     <input
       type="text"
-      bind:this={renameInput}
       bind:value={renameValue}
       class="w-full bg-transparent text-xl font-bold outline-none"
       onkeydown={(e) => { if (e.key === "Enter") handleRename(); if (e.key === "Escape") { renaming = false; } }}
       onblur={handleRename}
+      autofocus
     />
   {:else}
     <p class="text-xl font-bold">{ws?.name}</p>
@@ -199,7 +172,6 @@
   <input
     type="url"
     bind:value={webdavUrl}
-    oninput={markDirty}
     placeholder="https://dav.example.com/tasks/"
     class="mb-3 w-full rounded-lg border border-border-light bg-transparent px-3 py-2 text-sm outline-none focus:border-primary dark:border-border-dark"
   />
@@ -208,7 +180,6 @@
   <input
     type="text"
     bind:value={webdavUser}
-    oninput={markDirty}
     class="mb-3 w-full rounded-lg border border-border-light bg-transparent px-3 py-2 text-sm outline-none focus:border-primary dark:border-border-dark"
   />
@@ -216,7 +187,6 @@
   <input
     type="password"
     bind:value={webdavPass}
-    oninput={markDirty}
     class="mb-4 w-full rounded-lg border border-border-light bg-transparent px-3 py-2 text-sm outline-none focus:border-primary dark:border-border-dark"
   />
@@ -226,7 +196,7 @@
     disabled={!webdavUrl.trim()}
     class="rounded-lg border border-border-light px-4 py-2 text-sm font-medium hover:bg-black/5 disabled:opacity-40 dark:border-border-dark dark:hover:bg-white/10"
   >
-    {testStatus === "testing" ? "Testing" : testStatus === "ok" ? "Connected" : testStatus === "fail" ? "Failed Retry" : "Test Connection"}
+    {testStatus === "testing" ? "Testing..." : testStatus === "ok" ? "Connected" : testStatus === "fail" ? "Failed -- Retry" : "Test Connection"}
   </button>
   <button
     onclick={saveWebdav}
View file
@@ -77,6 +77,20 @@
   // ── WebDAV handlers ───────────────────────────────────────────────
 
+  async function testConnection() {
+    testStatus = "testing";
+    try {
+      await invoke("test_webdav_connection", {
+        url: webdavUrl,
+        username: webdavUser,
+        password: webdavPass,
+      });
+      testStatus = "ok";
+    } catch {
+      testStatus = "fail";
+    }
+  }
+
   async function connectAndBrowse() {
     testStatus = "testing";
     try {
View file
@@ -58,7 +58,6 @@
   let completedVisible = $state(false);
   let renamingListId = $state<string | null>(null);
   let renameValue = $state("");
-  let renameListInput = $state<HTMLInputElement | null>(null);
   let showListMenu = $state(false);
   let showSubtasks = $state(false);
   let confirmDeleteList = $state(false);
@@ -86,14 +85,6 @@
     if (showNewList && newListInput) newListInput.focus();
   });
 
-  // Same imperative-focus trick for the inline list-rename input.
-  $effect(() => {
-    if (renamingListId && renameListInput) {
-      renameListInput.focus();
-      renameListInput.select();
-    }
-  });
-
   async function handleNewList() {
     if (!newListName.trim()) return;
@@ -109,12 +100,7 @@
   async function executeDeleteCompleted() {
     confirmDeleteCompleted = false;
-    // Snapshot targets first — deletes mutate app.completedTasks reactively.
-    // Bail on first failure so we don't silently leave a partial delete.
-    const targets = [...app.completedTasks];
-    for (const t of targets) {
-      if (!(await app.deleteTask(t.id))) return;
-    }
+    for (var t of app.completedTasks) await app.deleteTask(t.id);
   }
 
   function promptDeleteList() {
@@ -640,11 +626,11 @@
   {#if renamingListId === app.activeListId}
     <input
       type="text"
-      bind:this={renameListInput}
       bind:value={renameValue}
       class="w-full bg-transparent text-xl font-bold outline-none"
       onkeydown={(e) => { if (e.key === "Enter") handleRenameList(); if (e.key === "Escape") renamingListId = null; }}
       onblur={handleRenameList}
+      autofocus
     />
   {:else}
     <p class="text-xl font-bold">{app.activeList?.title ?? "Tasks"}</p>
@@ -657,16 +643,7 @@
   {#if app.lists.length === 0}
     <div class="flex h-full flex-col items-center justify-center p-8 text-center">
       <p class="text-lg font-medium opacity-60">No lists yet</p>
-      {#if app.isGoogleTasks}
-        <p class="mt-1 text-sm opacity-40">Lists will appear after your next sync.</p>
-      {:else}
-        <button
-          onclick={() => { showDrawer = true; showNewList = true; }}
-          class="mt-4 rounded-lg bg-primary px-4 py-2 text-sm font-medium text-white hover:bg-primary-hover"
-        >
-          Create a list
-        </button>
-      {/if}
+      <p class="mt-1 text-sm opacity-40">Tap the list name above to create one</p>
     </div>
   {:else if !app.activeListId}
     <div class="flex h-full items-center justify-center opacity-40">
View file
@@ -10,13 +10,10 @@ import type {
 } from "../types";
 import { groupTasksByDate, type TaskGroup } from "../grouping";
 
-// Listen for file system changes from the backend watcher. Guard against
-// firing while the user is on the setup/missing screens — loadLists would
-// fail (no workspace) and a debouncedSync against a non-synced workspace
-// would be wasted work.
+// Listen for file system changes from the backend watcher.
 listen("fs-changed", () => {
-  if (!hasWorkspace || screen !== "tasks") return;
   loadLists();
+  // Debounced sync for WebDAV workspaces on local file changes
   if (isSyncedWorkspace) debouncedSync();
 });
@@ -187,17 +184,11 @@ async function removeWorkspace(id: string) {
   try {
     await invoke("remove_workspace", { id });
     config = await invoke<AppConfig>("get_config");
-    activeListId = null;
-    tasks = [];
-    lists = [];
-    // Switch to the next available workspace rather than dumping the user
-    // to the setup screen when they still have other workspaces.
-    const remaining = Object.keys(config?.workspaces ?? {});
-    if (remaining.length > 0) {
-      await switchWorkspace(remaining[0]);
-      screen = "tasks";
-    } else {
-      screen = "setup";
+    if (!hasWorkspace) {
+      screen = "setup";
+      lists = [];
+      tasks = [];
+      activeListId = null;
     }
   } catch (e) {
     error = String(e);
@@ -264,13 +255,7 @@ async function deleteList(id: string) {
   }
 }
 
-async function createTask(
-  title: string,
-  description?: string,
-  parentId?: string,
-  date?: string | null,
-  hasTime?: boolean,
-): Promise<Task | null> {
+async function createTask(title: string, description?: string, parentId?: string): Promise<Task | null> {
   if (!activeListId) return null;
   try {
    const task = await invoke<Task>("create_task", {
@@ -278,8 +263,6 @@ async function createTask(
       title,
       description: description ?? "",
       parentId: parentId ?? null,
-      date: date ?? null,
-      hasTime: hasTime ?? false,
     });
     tasks = parentId ? [task, ...tasks] : [...tasks, task];
     error = null;
@@ -398,11 +381,7 @@ async function triggerSync() {
     await loadLists();
   } catch (e) {
     const msg = String(e);
-    // Narrow phrases so that a legitimate server-side error containing a
-    // word like "network" or "refused" in its description isn't silently
-    // swallowed as an offline blip. Only treat obvious connectivity failures
-    // as transient.
-    const isTransient = /(^|\W)(timed? out|timeout|connection (refused|reset|timed out|aborted)|connect error|network (is )?unreachable|no route to host|host (not found|is unreachable)|dns|enotfound|econnrefused|etimedout|ehostunreach|enetunreach)(\W|$)/i.test(msg);
+    const isTransient = /timeout|connect|network|unreachable|refused/i.test(msg);
     syncStatus = isTransient ? "offline" : "error";
     // Only show the error banner for non-transient failures; connectivity issues just update the status dot
     if (!isTransient) error = msg;
@@ -418,7 +397,7 @@ function debouncedSync() {
 function restartSyncInterval() {
   if (_syncInterval) clearInterval(_syncInterval);
-  const secs = _appFocused ? syncIntervalSecs : syncIntervalUnfocusedSecs;
+  var secs = _appFocused ? syncIntervalSecs : syncIntervalUnfocusedSecs;
   _syncInterval = setInterval(triggerSync, secs * 1000);
 }
@@ -540,10 +519,22 @@ async function addGoogleTasksWorkspace(
 async function forgetMissingWorkspace() {
   if (!missingWorkspace) return;
-  // removeWorkspace handles switching to the next available workspace (or
-  // falling back to the setup screen when none remain); just delegate.
   await removeWorkspace(missingWorkspace);
   missingWorkspace = null;
+  config = await invoke<AppConfig>("get_config");
+  if (hasWorkspace) {
+    // Switch to the next available workspace
+    const nextName = Object.keys(config!.workspaces)[0];
+    if (nextName) {
+      await switchWorkspace(nextName);
+      screen = "tasks";
+      return;
+    }
+  }
+  screen = "setup";
+  lists = [];
+  tasks = [];
+  activeListId = null;
 }
 
 function setScreen(s: Screen) {
View file
@@ -6,7 +6,6 @@ pub mod group;
 pub mod sync;
 
 use onyx_core::{AppConfig, TaskRepository};
-use onyx_core::config::WorkspaceConfig;
 use anyhow::{Context, Result};
 use std::path::PathBuf;
@@ -24,89 +23,21 @@ pub fn save_config(config: &AppConfig) -> Result<()> {
     config.save_to_file(&path).context("Failed to save config")
 }
 
-/// Resolve a user-supplied identifier to (id, WorkspaceConfig). Accepts either
-/// the workspace's display name or its UUID. Falls back to the current
-/// workspace when `identifier` is `None`.
-pub fn resolve_workspace(config: &AppConfig, identifier: Option<&str>) -> Result<(String, WorkspaceConfig)> {
-    if let Some(s) = identifier {
-        // Try by UUID first (exact match on map key), then fall back to name lookup.
-        if let Some(ws) = config.get_workspace(s) {
-            return Ok((s.to_string(), ws.clone()));
-        }
-        let (id, ws) = config.find_by_name(s)
-            .ok_or_else(|| anyhow::anyhow!("Workspace '{}' not found", s))?;
-        Ok((id.clone(), ws.clone()))
-    } else {
-        let (id, ws) = config.get_current_workspace()
-            .context("No workspace set. Run 'onyx workspace add <name> <path>' to create one, or 'onyx workspace switch <name>' to select one.")?;
-        Ok((id.clone(), ws.clone()))
-    }
-}
-
-pub fn get_repository(workspace_identifier: Option<String>) -> Result<(TaskRepository, String)> {
+pub fn get_repository(workspace_name: Option<String>) -> Result<(TaskRepository, String)> {
     let config = load_config()?;
-    let (_id, workspace_config) = resolve_workspace(&config, workspace_identifier.as_deref())?;
-    let name = workspace_config.name.clone();
+    let (name, workspace_config) = if let Some(name) = workspace_name {
+        let workspace_config = config.get_workspace(&name)
+            .ok_or_else(|| anyhow::anyhow!("Workspace '{}' not found", name))?;
+        (name, workspace_config.clone())
+    } else {
+        let (name, workspace_config) = config.get_current_workspace()
+            .context("No workspace set. Use 'onyx init' to create one.")?;
+        (name.clone(), workspace_config.clone())
+    };
     let repo = TaskRepository::new(workspace_config.path.clone())
         .context(format!("Failed to open workspace '{}'", name))?;
     Ok((repo, name))
 }
-
-#[cfg(test)]
-mod tests {
-    use super::*;
-
-    fn make_config_with(ws: &[(&str, &str)]) -> (AppConfig, Vec<String>) {
-        let mut config = AppConfig::new();
-        let ids: Vec<String> = ws.iter()
-            .map(|(name, path)| config.add_workspace(WorkspaceConfig::new(name.to_string(), PathBuf::from(path))))
-            .collect();
-        (config, ids)
-    }
-
-    #[test]
-    fn resolve_by_name() {
-        let (config, _ids) = make_config_with(&[("dev", "/tmp/dev"), ("home", "/tmp/home")]);
-        let (id, ws) = resolve_workspace(&config, Some("dev")).unwrap();
-        assert_eq!(ws.name, "dev");
-        assert!(config.workspaces.contains_key(&id));
-    }
-
-    #[test]
-    fn resolve_by_uuid() {
-        let (config, ids) = make_config_with(&[("dev", "/tmp/dev")]);
-        let target = ids[0].clone();
-        let (id, ws) = resolve_workspace(&config, Some(&target)).unwrap();
-        assert_eq!(id, target);
-        assert_eq!(ws.name, "dev");
-    }
-
-    #[test]
-    fn resolve_unknown_identifier_errors() {
-        let (config, _ids) = make_config_with(&[("dev", "/tmp/dev")]);
-        let err = resolve_workspace(&config, Some("ghost")).unwrap_err();
-        assert!(err.to_string().contains("Workspace 'ghost' not found"));
-    }
-
-    #[test]
-    fn resolve_falls_back_to_current() {
-        let (mut config, ids) = make_config_with(&[("a", "/tmp/a"), ("b", "/tmp/b")]);
-        config.set_current_workspace(ids[1].clone()).unwrap();
-        let (id, ws) = resolve_workspace(&config, None).unwrap();
-        assert_eq!(id, ids[1]);
-        assert_eq!(ws.name, "b");
-    }
-
-    #[test]
-    fn resolve_no_current_gives_actionable_message() {
-        let config = AppConfig::new();
-        let err = resolve_workspace(&config, None).unwrap_err();
-        let msg = err.to_string();
-        // The message should point the user at the right sub-commands, not
-        // at the obsolete 'onyx init' suggestion.
-        assert!(msg.contains("workspace add") || msg.contains("workspace switch"),
-            "expected actionable message, got: {msg}");
-    }
-}
View file
@@ -2,8 +2,22 @@ use anyhow::{Context, Result};
 use colored::Colorize;
 use onyx_core::sync::{SyncMode, sync_workspace, get_sync_status};
 use onyx_core::webdav::{WebDavClient, store_credentials, load_credentials};
+use onyx_core::config::AppConfig;
 use crate::output;
-use super::{load_config, save_config, resolve_workspace};
+use super::{load_config, save_config};
+
+/// Resolve a workspace name to (id, config). Falls back to current workspace if name is None.
+fn resolve_workspace(config: &AppConfig, name: Option<&str>) -> Result<(String, onyx_core::config::WorkspaceConfig)> {
+    if let Some(name) = name {
+        let (id, ws) = config.find_by_name(name)
+            .ok_or_else(|| anyhow::anyhow!("Workspace '{}' not found", name))?;
+        Ok((id.clone(), ws.clone()))
+    } else {
+        let (id, ws) = config.get_current_workspace()
+            .context("No workspace set. Use 'onyx init' to create one.")?;
+        Ok((id.clone(), ws.clone()))
+    }
+}
 
 /// Run sync setup: prompt for URL, username, password, test connection, store credentials.
 pub fn setup(workspace_name: Option<String>) -> Result<()> {
View file
@@ -119,26 +119,13 @@ pub fn edit(task_id_str: String, workspace: Option<String>) -> Result<()> {
     let (list_id, task) = find_task(&lists, task_id)
         .ok_or_else(|| anyhow::anyhow!("Task not found: {}", task_id_str))?;
 
-    // Create temporary file with task content. On Unix, open with 0600 so
-    // other local users on a shared system can't read the task body off /tmp
-    // while the editor is running.
+    // Create temporary file with task content
     let temp_dir = std::env::temp_dir();
     let temp_file = temp_dir.join(format!("onyx-{}.md", task.id));
 
+    // Write current task content to temp file
     let content = format!("# {}\n\n{}", task.title, task.description);
-    {
-        use std::io::Write;
-        let mut opts = std::fs::OpenOptions::new();
-        opts.write(true).create(true).truncate(true);
-        #[cfg(unix)]
-        {
-            use std::os::unix::fs::OpenOptionsExt;
-            opts.mode(0o600);
-        }
-        let mut f = opts.open(&temp_file)
-            .with_context(|| format!("Failed to create {}", temp_file.display()))?;
-        f.write_all(content.as_bytes())?;
-    }
+    std::fs::write(&temp_file, content)?;
 
     // Get editor from environment
     let editor = std::env::var("EDITOR").unwrap_or_else(|_| {
View file
@@ -30,21 +30,11 @@ pub fn add(name: String, path: String) -> Result<()> {
     // Add workspace
     let id = config.add_workspace(WorkspaceConfig::new(name.clone(), path_buf.clone()));
 
-    // Select the new workspace as current when none was previously set, so the
-    // very next command doesn't fail with "No workspace set".
-    let made_current = config.current_workspace.is_none();
-    if made_current {
-        config.set_current_workspace(id.clone())?;
-    }
-
     // Save config
     save_config(&config)?;
 
     output::success(&format!("Added workspace \"{}\" ({}) at {}", name, &id[..8], path_buf.display()));
     output::success("Created default list \"My Tasks\"");
-    if made_current {
-        output::success(&format!("Set \"{}\" as the current workspace", name));
-    }
 
     Ok(())
 }
@@ -74,20 +64,15 @@
     Ok(())
 }
 
-/// Resolve a user-supplied identifier to a workspace ID. Accepts either the
-/// display name or the UUID. Errors if not found or ambiguous.
-fn resolve_name(config: &onyx_core::config::AppConfig, identifier: &str) -> Result<String> {
-    // Direct UUID hit on the map key — unambiguous.
-    if config.workspaces.contains_key(identifier) {
-        return Ok(identifier.to_string());
-    }
+/// Resolve a workspace name to its ID. Errors if not found or ambiguous.
+fn resolve_name(config: &onyx_core::config::AppConfig, name: &str) -> Result<String> {
     let matches: Vec<_> = config.workspaces.iter()
-        .filter(|(_, ws)| ws.name == identifier)
+        .filter(|(_, ws)| ws.name == name)
         .collect();
     match matches.len() {
-        0 => anyhow::bail!("Workspace '{}' not found", identifier),
+        0 => anyhow::bail!("Workspace '{}' not found", name),
         1 => Ok(matches[0].0.clone()),
-        n => anyhow::bail!("Ambiguous: {} workspaces named '{}'. Use the workspace ID instead.", n, identifier),
+        n => anyhow::bail!("Ambiguous: {} workspaces named '{}'. Use the workspace ID instead.", n, name),
     }
 }
View file
@@ -3,7 +3,6 @@ mod output;
 
 use anyhow::Result;
 use clap::{Parser, Subcommand};
-use colored::Colorize;
 use commands::*;
 
 #[derive(Parser)]
@@ -198,24 +197,7 @@
     },
 }
 
-fn main() {
-    match run() {
-        Ok(()) => {}
-        Err(e) => {
-            // Print user-friendly error chain (no backtrace). Programming-bug
-            // panics still surface through their default handler.
-            eprintln!("{}: {}", "Error".red().bold(), e);
-            let mut cause = e.source();
-            while let Some(c) = cause {
-                eprintln!("  caused by: {}", c);
-                cause = c.source();
-            }
-            std::process::exit(1);
-        }
-    }
-}
-
-fn run() -> Result<()> {
+fn main() -> Result<()> {
     let cli = Cli::parse();
     match cli.command {
View file
@@ -4,15 +4,20 @@ use serde::{Deserialize, Serialize};
 use uuid::Uuid;
 use crate::error::{Error, Result};
 
-#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Default)]
+#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
 #[serde(rename_all = "lowercase")]
 pub enum WorkspaceMode {
-    #[default]
     Local,
     Webdav,
     GoogleTasks,
 }
 
+impl Default for WorkspaceMode {
+    fn default() -> Self {
+        Self::Local
+    }
+}
+
 #[derive(Debug, Clone, Serialize, Deserialize)]
 pub struct WorkspaceConfig {
     pub name: String,
@@ -116,7 +121,13 @@ impl AppConfig {
             std::fs::create_dir_all(parent)?;
         }
         let content = serde_json::to_string_pretty(&self)?;
-        crate::storage::atomic_write(path, content.as_bytes())?;
+        // Atomic write: write to temp file then rename to prevent corruption on crash
+        let temp = path.with_extension("tmp");
+        std::fs::write(&temp, &content)?;
+        if let Err(e) = std::fs::rename(&temp, path) {
+            let _ = std::fs::remove_file(&temp);
+            return Err(e.into());
+        }
         Ok(())
     }
View file
@@ -358,15 +358,8 @@ pub async fn sync_google_tasks(
         list_meta.task_order = task_order;
         list_meta.updated_at = Utc::now();
 
-        match serde_json::to_string_pretty(&list_meta) {
-            Ok(meta_content) => {
-                if let Err(e) = atomic_write(&listdata_path, meta_content.as_bytes()) {
-                    errors.push(format!("Failed to write metadata for list '{}': {}", gt_list.title, e));
-                }
-            }
-            Err(e) => {
-                errors.push(format!("Failed to serialize metadata for list '{}': {}", gt_list.title, e));
-            }
+        if let Ok(meta_content) = serde_json::to_string_pretty(&list_meta) {
+            let _ = atomic_write(&listdata_path, meta_content.as_bytes());
         }
     }
@@ -381,15 +374,8 @@
         RootMetadata::default()
     };
     root_meta.list_order = new_list_order;
 
-    match serde_json::to_string_pretty(&root_meta) {
-        Ok(meta_content) => {
-            if let Err(e) = atomic_write(&root_meta_path, meta_content.as_bytes()) {
-                errors.push(format!("Failed to write workspace metadata: {}", e));
-            }
-        }
-        Err(e) => {
-            errors.push(format!("Failed to serialize workspace metadata: {}", e));
-        }
+    if let Ok(meta_content) = serde_json::to_string_pretty(&root_meta) {
+        let _ = atomic_write(&root_meta_path, meta_content.as_bytes());
     }
 
     Ok(GoogleSyncResult { downloaded, errors })
View file
@@ -26,10 +26,7 @@ impl TaskRepository {
     // Task operations
     pub fn create_task(&mut self, list_id: Uuid, mut task: Task) -> Result<Task> {
         self.storage.write_task(list_id, &task)?;
-        // Mirror the saturating increment that FileSystemStorage applies to
-        // the on-disk frontmatter so the in-memory Task matches what was
-        // written and doesn't wrap at u64::MAX.
-        task.version = task.version.saturating_add(1);
+        task.version += 1;
         Ok(task)
     }
@@ -157,7 +154,7 @@
         // Create a task
         let task = Task::new("Test Task".to_string());
-        let _ = repo.create_task(list.id, task).unwrap();
+        let created_task = repo.create_task(list.id, task).unwrap();
 
         // List tasks
         let tasks = repo.list_tasks(list.id).unwrap();
@@ -165,20 +162,6 @@
         assert_eq!(tasks[0].title, "Test Task");
     }
 
-    #[test]
-    fn test_create_task_saturates_version_at_max() {
-        let temp_dir = TempDir::new().unwrap();
-        let mut repo = TaskRepository::init(temp_dir.path().to_path_buf()).unwrap();
-        let list = repo.create_list("L".to_string()).unwrap();
-        // Simulate a task that is already at u64::MAX. A plain `+=` would
-        // overflow — saturating_add must clamp.
-        let mut task = Task::new("max".to_string());
-        task.version = u64::MAX;
-        let created = repo.create_task(list.id, task).unwrap();
-        assert_eq!(created.version, u64::MAX);
-    }
-
     #[test]
     fn test_update_task() {
         let temp_dir = TempDir::new().unwrap();
View file
@@ -236,8 +236,12 @@ impl FileSystemStorage {
         Ok(path)
     }

+    fn sanitize_filename(name: &str) -> String {
+        crate::sanitize_filename(name)
+    }
+
     fn task_file_path(&self, list_dir: &Path, task: &Task) -> PathBuf {
-        let safe_title = crate::sanitize_filename(&task.title);
+        let safe_title = Self::sanitize_filename(&task.title);
         let filename = if safe_title.is_empty() {
             task.id.to_string()
         } else {
@ -377,9 +381,7 @@ impl Storage for FileSystemStorage {
} }
let content = self.write_markdown_with_frontmatter(task)?; let content = self.write_markdown_with_frontmatter(task)?;
// Atomic write: a crash mid-write must not leave a truncated .md file fs::write(&task_path, content)?;
// that then fails YAML parsing on the next list_tasks/read_task.
atomic_write(&task_path, content.as_bytes())?;
// Update list metadata to include this task in task_order if not already present // Update list metadata to include this task in task_order if not already present
let mut list_metadata = self.read_list_metadata(list_id)?; let mut list_metadata = self.read_list_metadata(list_id)?;
@@ -453,42 +455,27 @@ impl Storage for FileSystemStorage {
         }

         let mut tasks = Vec::new();
-        for (_id, entries) in by_id {
-            // `by_id` only inserts non-empty groups, so each `entries` has at
-            // least one element.
-            let task = if entries.len() > 1 {
-                // Read mtime once per file so sort_by doesn't hit the filesystem
-                // O(n log n) times and can't produce inconsistent orderings if a
-                // file is touched mid-sort.
-                let mut with_mtime: Vec<(PathBuf, Task, Option<std::time::SystemTime>)> = entries
-                    .into_iter()
-                    .map(|(p, t)| {
-                        let mtime = fs::metadata(&p).and_then(|m| m.modified()).ok();
-                        (p, t, mtime)
-                    })
-                    .collect();
-                with_mtime.sort_by(|a, b| {
+        for (_id, mut entries) in by_id {
+            if entries.len() > 1 {
+                entries.sort_by(|a, b| {
                     // Primary: highest version first
                     let version_cmp = b.1.version.cmp(&a.1.version);
                     if version_cmp != std::cmp::Ordering::Equal {
                         return version_cmp;
                     }
                     // Tiebreaker: most recently modified file first
-                    b.2.cmp(&a.2)
+                    let mtime_a = fs::metadata(&a.0).and_then(|m| m.modified()).ok();
+                    let mtime_b = fs::metadata(&b.0).and_then(|m| m.modified()).ok();
+                    mtime_b.cmp(&mtime_a)
                 });
-                for (stale_path, _, _) in with_mtime.drain(1..) {
+                for (stale_path, _) in entries.drain(1..) {
                     if let Err(e) = fs::remove_file(&stale_path) {
                         eprintln!("Warning: failed to remove stale duplicate task file {:?}: {}", stale_path, e);
                     }
                 }
-                let (_, t, _) = with_mtime.into_iter().next()
-                    .expect("dedup group is non-empty after drain(1..)");
-                t
-            } else {
-                let (_, t) = entries.into_iter().next()
-                    .expect("dedup group is non-empty");
-                t
-            };
+            }
+            let (_, task) = entries.into_iter().next()
+                .ok_or_else(|| Error::InvalidData("Empty dedup entries for task".to_string()))?;
             tasks.push(task);
         }
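Several hunks in this diff swap between plain `fs::write` and an `atomic_write` helper. A minimal sketch of the temp-file-plus-rename pattern such a helper typically uses (an illustrative reimplementation, not the project's actual code):

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// Write to a .tmp sibling, then rename over the destination. The rename is
// atomic on POSIX filesystems, so a crash mid-write can never leave a
// truncated destination file behind.
fn atomic_write(path: &Path, bytes: &[u8]) -> std::io::Result<()> {
    let tmp = path.with_extension("tmp");
    let mut f = fs::File::create(&tmp)?;
    f.write_all(bytes)?;
    f.sync_all()?; // flush data to disk before the rename
    fs::rename(&tmp, path)
}

fn main() -> std::io::Result<()> {
    let target = std::env::temp_dir().join("atomic_write_demo.md");
    atomic_write(&target, b"# demo\n")?;
    assert_eq!(fs::read(&target)?, b"# demo\n");
    fs::remove_file(&target)
}
```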

View file

@@ -5,7 +5,7 @@ use serde::{Deserialize, Serialize};
 use sha2::{Sha256, Digest};
 use uuid::Uuid;
 use crate::error::{Error, Result};
-use crate::storage::{atomic_write, ListMetadata, TaskFrontmatter};
+use crate::storage::{ListMetadata, TaskFrontmatter};
 use crate::webdav::WebDavClient;

 /// File-based lock to prevent concurrent sync operations on the same workspace.
@@ -204,9 +204,8 @@ pub fn compute_sync_actions(
             }
             // Remote present, local gone, base known: local was deleted
-            (None, Some(r), Some(b)) => {
-                let remote_changed = r.size != b.size
-                    || !timestamps_equal(r.last_modified.as_deref(), b.modified_at.as_deref());
+            (None, Some(_), Some(b)) => {
+                let remote_changed = remote.is_some_and(|r| r.size != b.size || !timestamps_equal(r.last_modified.as_deref(), b.modified_at.as_deref()));
                 if remote_changed {
                     // deleted locally + modified remotely -> download (remote wins)
                     actions.push(SyncAction::Download { path: path.to_string() });
@@ -230,22 +229,6 @@ pub fn compute_sync_actions(
     actions
 }

-/// Remove base entries for files that are gone from both local and remote.
-/// `compute_sync_actions` emits no action for the both-deleted case, so without
-/// this pass those entries would persist in `.syncstate.json` indefinitely.
-fn prune_orphan_bases(
-    sync_state: &mut SyncState,
-    local_files: &[LocalFileInfo],
-    remote_files: &[RemoteFileSnapshot],
-) {
-    let live_paths: std::collections::HashSet<&str> = local_files
-        .iter()
-        .map(|f| f.path.as_str())
-        .chain(remote_files.iter().map(|f| f.path.as_str()))
-        .collect();
-    sync_state.files.retain(|p, _| live_paths.contains(p.as_str()));
-}
-
 /// Compare two timestamps for equality by parsing both, tolerating format differences.
 fn timestamps_equal(a: Option<&str>, b: Option<&str>) -> bool {
     match (a, b) {
@@ -621,12 +604,6 @@ async fn sync_workspace_inner(
         }
     };

-    // Purge orphan base entries: files we previously tracked that are now gone
-    // from both local and remote. Without this, `.syncstate.json` accumulates
-    // ghost entries forever because the both-deleted diff case emits no action
-    // and so nothing else would clean them.
-    prune_orphan_bases(&mut sync_state, &local_files, &remote_files);
-
     // Compute actions from three-way diff
     let fresh_actions = compute_sync_actions(&local_files, &remote_files, &sync_state);
@@ -724,20 +701,19 @@ async fn execute_action(
                 Err(e) => return Err(e.into()),
             };
             let checksum = compute_checksum(&data);
-            let len = data.len() as u64;

             if let Some(parent) = path_parent(path) {
                 client.ensure_dir(parent).await?;
             }
             report(&format!(" ^ Uploading {}", path));
-            client.put_file(path, data).await?;
+            client.put_file(path, data.clone()).await?;

             // Record in sync state using local file metadata
             let modified = std::fs::metadata(&local_path).ok()
                 .and_then(|m| m.modified().ok())
                 .map(|t| { let dt: DateTime<Utc> = t.into(); dt.to_rfc3339() });
-            sync_state.record_file(path, &checksum, modified.as_deref(), len);
+            sync_state.record_file(path, &checksum, modified.as_deref(), data.len() as u64);
         }

         SyncAction::Conflict { path } => {
@@ -767,9 +743,8 @@ async fn execute_action(
             } else {
                 report(&format!(" ! Conflict: remote wins for {}, recovering local as duplicate", path));
-                // Remote wins: overwrite local with remote content. Atomic
-                // so a crash mid-sync cannot leave a truncated file behind.
-                atomic_write(&local_path, &remote_data)?;
+                // Remote wins: overwrite local with remote content
+                std::fs::write(&local_path, &remote_data)?;
                 let modified = std::fs::metadata(&local_path).ok()
                     .and_then(|m| m.modified().ok())
                     .map(|t| { let dt: DateTime<Utc> = t.into(); dt.to_rfc3339() });
@@ -777,7 +752,7 @@ async fn execute_action(
                 // For .md task files inside a list dir, create a duplicate of the local version
                 let parts: Vec<&str> = path.split('/').collect();
-                if parts.len() == 2 && parts[1].ends_with(".md") {
+                if parts.len() == 2 && parts[1].ends_with(".md") && parts[1] != ".listdata.json" {
                     let local_content = String::from_utf8_lossy(&local_data);
                     if let Ok((frontmatter, description)) = parse_frontmatter_for_conflict(&local_content) {
                         let original_id = frontmatter.id;
@@ -800,7 +775,7 @@ async fn execute_action(
                         let list_dir = workspace_path.join(parts[0]);
                         let dup_filename = format!("{}.md", new_id);
                         let dup_path = list_dir.join(&dup_filename);
-                        atomic_write(&dup_path, new_content.as_bytes())?;
+                        std::fs::write(&dup_path, &new_content)?;

                         // Insert new task adjacent to original in .listdata.json.
                         // If metadata update fails, remove the duplicate file to
@@ -816,7 +791,7 @@ async fn execute_action(
                                 .unwrap_or(metadata.task_order.len());
                             metadata.task_order.insert(insert_pos, new_id);
                             let json = serde_json::to_string_pretty(&metadata)?;
-                            atomic_write(&listdata_path, json.as_bytes())?;
+                            std::fs::write(&listdata_path, json)?;
                             Ok(())
                         })();
                         if let Err(e) = metadata_updated {
@@ -841,7 +816,7 @@ async fn execute_action(
             if let Some(parent) = local_path.parent() {
                 std::fs::create_dir_all(parent)?;
             }
-            atomic_write(&local_path, &data)?;
+            std::fs::write(&local_path, &data)?;

             // Record remote's last_modified so next diff won't see a timestamp mismatch
             let modified = remote_meta.get(path.as_str()).and_then(|r| r.last_modified.clone());
@@ -915,15 +890,9 @@ pub fn get_sync_status(workspace_path: &Path) -> Result<SyncStatusInfo> {
         }
     }

-    // Count files in base that are now missing locally (deleted).
-    // Build a set of local paths once so the membership check is O(1) per
-    // tracked file instead of scanning local_files linearly each time.
-    let local_paths: std::collections::HashSet<&str> = local_files
-        .iter()
-        .map(|f| f.path.as_str())
-        .collect();
+    // Count files in base that are now missing locally (deleted)
     for path in sync_state.files.keys() {
-        if !local_paths.contains(path.as_str()) {
+        if !local_files.iter().any(|f| f.path == *path) {
             pending_changes += 1;
         }
     }
@@ -1136,22 +1105,6 @@ mod tests {
         assert!(actions.is_empty());
     }

-    #[test]
-    fn test_prune_orphan_bases() {
-        let mut state = SyncState::default();
-        state.files.insert("kept_local.md".to_string(), make_base("a"));
-        state.files.insert("kept_remote.md".to_string(), make_base("b"));
-        state.files.insert("orphan.md".to_string(), make_base("c"));
-        let local = vec![make_local("kept_local.md", "a")];
-        let remote = vec![make_remote("kept_remote.md")];
-
-        prune_orphan_bases(&mut state, &local, &remote);
-
-        assert!(state.files.contains_key("kept_local.md"));
-        assert!(state.files.contains_key("kept_remote.md"));
-        assert!(!state.files.contains_key("orphan.md"));
-    }
-
     #[test]
     fn test_multiple_files_mixed() {
         let local = vec![
@@ -1183,7 +1136,8 @@ mod tests {
     #[test]
     fn test_sync_state_save_load_roundtrip() {
         let temp_dir = TempDir::new().unwrap();
-        let mut state = SyncState { last_sync: Some(Utc::now()), ..Default::default() };
+        let mut state = SyncState::default();
+        state.last_sync = Some(Utc::now());
         state.record_file("test.md", "abc123", Some("2026-01-01T00:00:00Z"), 42);
         state.save(temp_dir.path()).unwrap();
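The upload hunk in this file trades a `data.clone()` against capturing `data.len()` before the buffer is moved into `put_file`. The ownership pattern in isolation (`upload` here is a hypothetical stand-in for the real async WebDAV call):

```rust
// Stand-in for a consuming API such as `client.put_file(path, data)`.
fn upload(_payload: Vec<u8>) {}

fn main() {
    let data: Vec<u8> = vec![0u8; 1024];
    // Capture everything needed *after* the upload while we still own the
    // buffer, then move it in without cloning.
    let len = data.len() as u64;
    upload(data); // moves the buffer; no byte copy
    assert_eq!(len, 1024);
}
```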

View file

@@ -353,14 +353,12 @@ Credentials are stored in the platform keychain (Windows Credential Manager, mac
 ```rust
 use onyx_core::webdav::{store_credentials, load_credentials, delete_credentials};
-use zeroize::Zeroizing;

 // Store credentials
 store_credentials("nextcloud.example.com", "username", "password")?;

-// Load credentials — returns Zeroizing<String> wrappers that wipe memory on drop
-let (username, password): (Zeroizing<String>, Zeroizing<String>) =
-    load_credentials("nextcloud.example.com")?;
+// Load credentials (returns Zeroizing<String> wrappers that wipe memory on drop)
+let (username, password) = load_credentials("nextcloud.example.com")?;

 // Delete credentials
 delete_credentials("nextcloud.example.com")?;
@@ -456,7 +454,7 @@ All metadata and state files use an atomic write pattern (write to `.tmp` then r
 - **List names**: Rejected if they contain `/`, `\`, or `..` components. Canonicalized and verified to stay within workspace root.
 - **Sync paths**: Validated to reject `..` components and backslashes anywhere in the path before any file system operation.
-- **Workspace paths** (Tauri): Rejected if they point to the filesystem root (`/`) or system directories (`/etc`, `/usr`, `/bin`, `/sbin`, `/var`, `/proc`, `/sys`, `/dev`).
+- **Workspace paths** (Tauri): Rejected if they point to system directories (`/etc`, `/usr`, `/bin`, etc.).
 - **Filenames**: Sanitized to replace `/ \ : * ? " < > |` and control characters with `_`.

 ## Example: Complete Workflow
@@ -523,9 +521,9 @@ Key test areas:

 ## Thread Safety

-`TaskRepository` holds its storage as `Box<dyn Storage + Send + Sync>`, so any concrete storage implementation passed in must be `Send + Sync`. Repository instances can be shared across threads behind a `Mutex` — the Tauri GUI uses `Mutex<AppState>` for this purpose.
+The `Storage` trait requires `Send + Sync`, and `TaskRepository` wraps `Box<dyn Storage + Send + Sync>`, so repository instances can be shared across threads behind a `Mutex`. The Tauri GUI uses `Mutex<AppState>` for this purpose.

 For concurrent access:
 1. Wrap `TaskRepository` in `Mutex` or `RwLock` (the Tauri app does this)
-2. Or create separate repository instances per thread. Note that `FileSystemStorage` does not coordinate writes between processes — concurrent multi-process writes to the same workspace are not supported outside the WebDAV sync flow, which uses a `.sync.lock` file.
+2. Or create separate repository instances per thread (file system handles locking)
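The Mutex-sharing pattern this doc section describes can be sketched in isolation; `Repo` below is a hypothetical stand-in for `TaskRepository`, showing only the `Arc<Mutex<...>>` sharing, not the real API:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Minimal stand-in for a repository type that is Send + Sync-safe to share.
struct Repo {
    tasks: Vec<String>,
}

fn main() {
    let repo = Arc::new(Mutex::new(Repo { tasks: Vec::new() }));
    let handles: Vec<_> = (0..4)
        .map(|i| {
            let repo = Arc::clone(&repo);
            thread::spawn(move || {
                // Each thread takes the lock before mutating shared state.
                repo.lock().unwrap().tasks.push(format!("task-{i}"));
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(repo.lock().unwrap().tasks.len(), 4);
}
```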

View file

@@ -72,15 +72,11 @@ onyx/
 │   │   ├── main.ts
 │   │   ├── app.css            # Tailwind CSS 4 + theme
 │   │   ├── App.svelte
-│   │   ├── test/
-│   │   │   └── setup.ts
 │   │   └── lib/
 │   │       ├── screens/       # Full-page views
 │   │       ├── components/    # Reusable UI components
 │   │       ├── stores/        # Svelte state (app.svelte.ts)
 │   │       ├── dateFormat.ts  # Date formatting utilities
-│   │       ├── grouping.ts    # Task grouping logic
-│   │       ├── paths.ts       # Path utilities
 │   │       └── types.ts       # TypeScript type definitions
 │   ├── tauri-plugin-credentials/  # Cross-platform credential storage plugin
 │   │   ├── Cargo.toml