Compare commits: main ... claude/jolly-mendel-Hwl4L
No commits in common. "main" and "claude/jolly-mendel-Hwl4L" have entirely different histories.

Audit.md (33 changed lines)
@ -1,38 +1,5 @@
# Audit Log
## 2026-04-27
Found and fixed 3 issues:
1. **Perf: needless clone of upload payload** (sync.rs:733) — the `SyncAction::Upload` arm read the file into `data`, computed `compute_checksum(&data)`, then called `client.put_file(path, data.clone())`. The clone existed only because the next statement needed `data.len()` for the sync-state record. Captured `data.len() as u64` into `len` first, moved `data` into `put_file`, and used `len` afterwards — one full byte copy avoided per uploaded file.
2. **Bug: Google Tasks sync silently drops metadata-write failures** (google_tasks.rs:361, 377) — both `.listdata.json` and `.onyx-workspace.json` were written via `if let Ok(meta_content) = serde_json::to_string_pretty(...) { let _ = atomic_write(...); }`, so a serialization or atomic-write error returned `Ok(GoogleSyncResult { downloaded: N, errors: [] })` even though list/workspace ordering was never persisted. Both writes now push their errors into the `errors` vec already returned in `GoogleSyncResult`.
3. **Code quality: unreachable dead-error path in storage dedup** (storage.rs:447) — the dedup loop computed `Option<Task>` from each `by_id` group and then `ok_or_else(|| Error::InvalidData("Empty dedup entries for task"))?`. `by_id` is only populated by `entry(uuid).or_default().push(entry)`, so every group has ≥1 element and the `None` branch is unreachable. Replaced the `Option`+`?` with direct `expect` calls (one per branch) that document the non-empty invariant; the loop now yields `Task` directly.
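The clone fix in item 1 boils down to capturing the length before moving the buffer. A minimal sketch with stand-in names (`put_file` here is a hypothetical owner-taking upload, not the real WebDAV client):

```rust
// Hypothetical stand-in for the upload call, which takes ownership of the payload.
fn put_file(_path: &str, _data: Vec<u8>) {}

fn upload(path: &str, data: Vec<u8>) -> u64 {
    let len = data.len() as u64; // capture before the move
    put_file(path, data);        // move the buffer, no clone
    len                          // still available for the sync-state record
}

fn main() {
    assert_eq!(upload("notes.md", vec![0u8; 16]), 16);
}
```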
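Item 2's fix (pushing metadata-write failures into the returned `errors` vec instead of discarding them) has roughly this shape; the types are illustrative, not the real `GoogleSyncResult`:

```rust
// Illustrative result type; the real GoogleSyncResult has more fields.
struct SyncResult {
    downloaded: usize,
    errors: Vec<String>,
}

// Stand-in for serde_json::to_string_pretty + atomic_write.
fn write_meta(content: &str, fail: bool) -> Result<(), String> {
    if fail { Err(format!("write failed after {} bytes", content.len())) } else { Ok(()) }
}

fn finish_sync(meta: &str, fail: bool) -> SyncResult {
    let mut result = SyncResult { downloaded: 1, errors: Vec::new() };
    // Before: `let _ = write_meta(...)` silently dropped the error.
    if let Err(e) = write_meta(meta, fail) {
        result.errors.push(e); // surfaced to the caller instead
    }
    result
}

fn main() {
    assert_eq!(finish_sync("{}", false).downloaded, 1);
    assert!(finish_sync("{}", false).errors.is_empty());
    assert_eq!(finish_sync("{}", true).errors.len(), 1);
}
```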
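Item 3's invariant shows up in any grouping built with `entry().or_default().push()`: every group holds at least one element, so an `expect` documents the assumption instead of routing through a dead error path. A toy version:

```rust
use std::collections::HashMap;

fn dedup(entries: Vec<(u32, &'static str)>) -> Vec<&'static str> {
    let mut by_id: HashMap<u32, Vec<&'static str>> = HashMap::new();
    for (id, name) in entries {
        by_id.entry(id).or_default().push(name); // every group gets >= 1 element
    }
    by_id
        .into_values()
        .map(|group| *group.last().expect("dedup groups are non-empty by construction"))
        .collect()
}

fn main() {
    assert_eq!(dedup(vec![(1, "a"), (1, "b")]), vec!["b"]);
}
```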
## 2026-04-25
Found and fixed 3 issues:
1. **Perf: O(n²) deletion-detection in `get_sync_status`** (sync.rs:918) — for every path tracked in `sync_state.files`, the loop scanned `local_files` linearly via `.any(|f| f.path == *path)` to decide whether to count it as a deleted-locally pending change. The earlier "modified or new" loop already used the inverse direction with `sync_state.files.get(...)` (O(1)), so the second loop was the inconsistent one. Built a `HashSet<&str>` of local paths once and used `contains` for the membership check.
2. **Perf: cascade delete walks all_tasks per frontier pop** (tauri/lib.rs:460) — `delete_task`'s descendant BFS scanned the full task list on every parent popped from the frontier, making the work O(n × depth). Built a `parent_id -> [child_id]` `HashMap` once, then the BFS visits each descendant in O(1) amortised, dropping total cost to O(n).
3. **Code quality: duplicate atomic-write in `AppConfig::save_to_file`** (config.rs:114) — the function had its own copy of the temp-file + rename + cleanup-on-failure dance even though `storage::atomic_write` is `pub(crate)` and was already shared by `google_tasks.rs`. Replaced the inline implementation with a call to `crate::storage::atomic_write` so the crate has one canonical atomic write path.
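Item 1's rewrite, sketched with toy paths: the local-path set is built once, then every membership check is O(1) instead of a linear scan per tracked file.

```rust
use std::collections::HashSet;

fn deleted_locally<'a>(tracked: &'a [&'a str], local: &[&str]) -> Vec<&'a str> {
    // Build the set once, up front.
    let local_set: HashSet<&str> = local.iter().copied().collect();
    tracked
        .iter()
        .filter(|path| !local_set.contains(**path)) // O(1) per check
        .copied()
        .collect()
}

fn main() {
    let deleted = deleted_locally(&["a.md", "b.md", "c.md"], &["a.md", "c.md"]);
    assert_eq!(deleted, vec!["b.md"]);
}
```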
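Item 2's child-map traversal, with toy ids: one pass builds `parent -> children`, after which each descendant is visited once.

```rust
use std::collections::HashMap;

fn descendants(tasks: &[(u32, Option<u32>)], root: u32) -> Vec<u32> {
    // One pass: parent id -> child ids.
    let mut children: HashMap<u32, Vec<u32>> = HashMap::new();
    for &(id, parent) in tasks {
        if let Some(p) = parent {
            children.entry(p).or_default().push(id);
        }
    }
    // Walk the map: each descendant is pushed exactly once.
    let mut out = Vec::new();
    let mut frontier = vec![root];
    while let Some(id) = frontier.pop() {
        if let Some(kids) = children.get(&id) {
            out.extend(kids);
            frontier.extend(kids);
        }
    }
    out
}

fn main() {
    // 2 and 3 descend from 1; 4 is unrelated.
    let mut d = descendants(&[(2, Some(1)), (3, Some(2)), (4, None)], 1);
    d.sort();
    assert_eq!(d, vec![2, 3]);
}
```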
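Item 3 consolidates on a single temp-file-plus-rename helper. A minimal sketch of the usual shape of such a helper (assumed; the crate's `storage::atomic_write` may differ in details):

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// Readers never observe a half-written file: the content becomes visible
// only via the rename, and the temp file is cleaned up if the rename fails.
fn atomic_write(path: &Path, contents: &[u8]) -> std::io::Result<()> {
    let tmp = path.with_extension("tmp");
    let mut f = fs::File::create(&tmp)?;
    f.write_all(contents)?;
    f.sync_all()?; // flush to disk before the rename makes it visible
    match fs::rename(&tmp, path) {
        Ok(()) => Ok(()),
        Err(e) => {
            let _ = fs::remove_file(&tmp); // best-effort cleanup on failure
            Err(e)
        }
    }
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir().join("atomic_write_demo");
    fs::create_dir_all(&dir)?;
    let target = dir.join("config.json");
    atomic_write(&target, b"{}")?;
    assert_eq!(fs::read(&target)?, b"{}");
    Ok(())
}
```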
## 2026-04-24
Found and fixed 3 issues:
1. **Bug: orphan base entries never cleaned from sync state** (sync.rs) — when a file was deleted both locally and remotely, `compute_sync_actions` emitted no action (the `(None, None, Some(_))` arm), so the base entry in `.syncstate.json` persisted forever. On each subsequent sync the same no-op case fired and the state file grew. Added `prune_orphan_bases` pass in `sync_workspace_inner` that drops base entries not present in either scan.
2. **Code quality: redundant is_some_and on already-matched Option** (sync.rs:208) — the `(None, Some(_), Some(b))` arm re-checked `remote` via `remote.is_some_and(|r| ...)` even though the pattern had just proven `remote` is `Some(_)`. Bound the inner value with `Some(r)` in the pattern and used `r` directly.
3. **Code quality: single-caller sanitize_filename wrapper** (storage.rs) — `FileSystemStorage::sanitize_filename` was a one-line forwarder to `crate::sanitize_filename` with one call site. Inlined the crate call and removed the method.
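Item 1's pruning pass, sketched under the assumption that base entries are keyed by relative path:

```rust
use std::collections::{HashMap, HashSet};

// Drop any base entry whose path appears in neither the local nor the remote scan.
fn prune_orphan_bases(
    bases: &mut HashMap<String, String>, // path -> checksum (assumed shape)
    local: &HashSet<String>,
    remote: &HashSet<String>,
) {
    bases.retain(|path, _| local.contains(path) || remote.contains(path));
}

fn main() {
    let mut bases = HashMap::from([
        ("a.md".to_string(), "x".to_string()),
        ("gone.md".to_string(), "y".to_string()),
    ]);
    let local = HashSet::from(["a.md".to_string()]);
    let remote = HashSet::new();
    prune_orphan_bases(&mut bases, &local, &remote);
    assert!(bases.contains_key("a.md"));
    assert!(!bases.contains_key("gone.md")); // deleted on both sides, so pruned
}
```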
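Item 2's pattern-binding fix, on a toy match: the arm binds the inner value instead of re-testing an `Option` the pattern already proved is `Some`.

```rust
fn pick(local: Option<u32>, remote: Option<u32>) -> &'static str {
    match (local, remote) {
        // Before: a (None, Some(_)) arm followed by `remote.is_some_and(|r| r > 0)`.
        // After: bind the inner value directly in the pattern.
        (None, Some(r)) if r > 0 => "download",
        (None, Some(_)) => "skip",
        _ => "other",
    }
}

fn main() {
    assert_eq!(pick(None, Some(5)), "download");
    assert_eq!(pick(None, Some(0)), "skip");
    assert_eq!(pick(Some(1), None), "other");
}
```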
## 2026-04-20
Found and fixed 4 issues:
1. **Dead code in conflict recovery** (sync.rs:756) — `parts[1] != ".listdata.json"` was unreachable because the branch is already gated on `parts[1].ends_with(".md")`, which `.listdata.json` cannot satisfy. Removed the redundant check.
2. **O(n²) cascade delete** (tauri/lib.rs) — descendant traversal in `delete_task` used `Vec::contains` inside the inner loop, making it quadratic in the number of tasks per list. Swapped the visited set to `HashSet`; `HashSet::insert` folds the contains+push into one call.
3. **Silent cascade failure in toggle_task** (tauri/lib.rs) — subtask `update_task` errors were discarded with `let _ = ...`, leaving subtasks stuck at the old status with no UI feedback. Propagated the error so the frontend can surface it.
4. **Duplicated UUID-parse boilerplate** (tauri/lib.rs) — 17 commands repeated `Uuid::parse_str(&x).map_err(|e| e.to_string())?`. Extracted a `parse_uuid` helper so callers read as `let id = parse_uuid(&list_id)?;`.
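Item 2's `HashSet` swap leans on `HashSet::insert` returning `false` for an element that was already present, which folds the contains-then-push pair into one call:

```rust
use std::collections::HashSet;

// Yields each id the first time it is seen; later occurrences are filtered out.
fn visit_once(ids: &[u32]) -> Vec<u32> {
    let mut seen = HashSet::new();
    ids.iter().copied().filter(|id| seen.insert(*id)).collect()
}

fn main() {
    assert_eq!(visit_once(&[1, 2, 1, 3, 2]), vec![1, 2, 3]);
}
```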
## 2026-04-15
Found and fixed 4 issues:
CLAUDE.md (11 changed lines)
@ -30,14 +30,14 @@ The Tauri dev server runs on port 1422 (`vite.config.ts` and `tauri.conf.json`).
Two-crate workspace (`resolver = "2"`, edition 2021) plus a Tauri app:
- **onyx-core** — Pure Rust library. Storage trait with `FileSystemStorage` implementation, `TaskRepository` (main API), data models, config, error types. No CLI/UI dependencies. `keyring` feature-gated behind `keyring-storage` (default on) for Android compatibility.
- **onyx-cli** — CLI frontend using clap. Commands are in `src/commands/` (init, workspace, list, task, group, sync). Output formatting in `src/output.rs`.
- **apps/tauri/** — Tauri v2 GUI. Svelte 5 frontend in `src/`, Rust backend in `src-tauri/` with Tauri commands that call into `onyx-core`. `notify` crate feature-gated for Android. `tauri-plugin-credentials/` provides cross-platform credential storage (Android Keystore via EncryptedSharedPreferences, desktop via keyring crate).
### Key patterns
- **Storage trait** (`storage.rs`): Strategy pattern for task persistence. `FileSystemStorage` reads/writes markdown files with YAML frontmatter and JSON metadata files. Atomic writes (temp file + rename, with temp cleanup on failure) for all metadata files. Input validation: task titles max 500 chars, descriptions max 1MB, list names max 255 chars, YAML frontmatter max 64KB. Delete operations update metadata before removing files to prevent orphaned metadata on crash.
- **Repository** (`repository.rs`): `TaskRepository` wraps a `Storage` impl and provides the public API for task/list CRUD, ordering, and grouping. Tests live here.
- **Config** (`config.rs`): `AppConfig` manages workspaces keyed by UUID string. `WorkspaceConfig` stores `name`, `path`, `mode` (Local/Webdav/GoogleTasks), `webdav_url`, `webdav_path` (user-selected remote folder), `google_account` (display name/email for GoogleTasks workspaces), `last_sync` (timestamp of last successful sync), `theme`, `sync_interval_secs` (focused polling interval), and `sync_interval_unfocused_secs` (lower-frequency polling when window loses focus, for mobile battery optimization). `add_workspace` returns a generated UUID. Stored in platform-specific config dirs via the `directories` crate. Atomic writes (temp file + rename) prevent corruption on crash.
- **Sync** (`sync.rs`): Three-way diff sync with offline queue. File-based `.sync.lock` prevents concurrent sync operations (auto-cleaned after 5 minutes if stale). Checksum-based conflict resolution: downloads remote, compares SHA-256 — identical content is a false conflict (skipped); when different, remote wins and local is recovered as a duplicate with a new UUID and `[RECOVERED FROM CONFLICT]` prefix (duplicate file cleaned up if metadata update fails). Auto-sync lifecycle: periodic polling (configurable interval, default 60s), debounced file-change (5s), window-focus (30s stale threshold). Wrapped in `tokio::time::timeout` (60s) to handle unreachable servers on Windows. Path traversal validation rejects `..` components and backslashes anywhere in sync paths. Atomic writes for sync state and queue files (temp cleanup on failure).
- **WebDAV** (`webdav.rs`): reqwest client with rustls-tls, 30s request timeout, 10s connect timeout. Rejects non-HTTPS URLs. `Zeroizing<String>` for credential fields. `move_resource` method for WebDAV MOVE (workspace rename). 10MB cap on both PROPFIND responses and file downloads. Desktop credentials via `keyring` crate (feature-gated); Tauri GUI uses `tauri-plugin-credentials` for cross-platform support (Android Keystore + desktop keychain).
- **Google Tasks** (`google_tasks.rs`): Read-only Google Tasks API client using reqwest with Bearer auth. `gt_id_to_uuid()` converts Google Task IDs to stable UUID v5 values for consistent cross-sync identity. Fetches all task lists and tasks from the Google Tasks REST API and writes them locally via `FileSystemStorage`. Remote always wins (read-only workspace mode). OAuth flow is partially implemented — client ID/secret are placeholders pending real credentials.
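The path-traversal rule described in the sync bullet above can be sketched as follows (assumed shape of the check, not the crate's exact code):

```rust
// Reject any sync path containing a `..` component or a backslash,
// so a remote listing cannot escape the workspace directory.
fn is_safe_sync_path(path: &str) -> bool {
    !path.contains('\\') && path.split('/').all(|component| component != "..")
}

fn main() {
    assert!(is_safe_sync_path("lists/work/task.md"));
    assert!(!is_safe_sync_path("../etc/passwd"));
    assert!(!is_safe_sync_path("a/../b"));
    assert!(!is_safe_sync_path("lists\\task.md"));
}
```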
@ -64,7 +64,7 @@ The GUI uses Svelte 5 runes mode (`$state`, `$derived`, `$effect`, `$props()`).
Pre-alpha. No users, no released builds, no data to migrate. Breaking changes to on-disk formats, config structure, or sync conventions are free — do not add migration logic.
### Current state (2026-04-27)
- **Phase 1** (Core + CLI): Complete
- **Phase 2** (WebDAV sync): Complete — remote folder browsing, checksum-based conflict resolution, auto-sync lifecycle, per-workspace sync interval
@ -80,7 +80,7 @@ Pre-alpha. No users, no released builds, no data to migrate. Breaking changes to
- Sliding lists drawer with checkmark selection
- Settings popup overlay
- Workspace switcher drop-up with add/remove
- Per-workspace theme system (System default, Light, Dark, Nord, Dracula, Solarized Dark, Black and Gold, Ink) via CSS `data-theme` attribute
- Completed tasks section with animated show/hide
- Date picker/editor (DateTimePicker in new task + task detail); `has_time: bool` field tracks whether time is set
- Move task between lists (inline list in kebab menu, no submenu)
@ -106,9 +106,8 @@ Pre-alpha. No users, no released builds, no data to migrate. Breaking changes to
- Task deduplication on load (handles sync conflict duplicates)
- Subtask hierarchy: subtask count shown on parent tasks in list, subtask detail via three-panel slide navigation, inline add at top of subtask list (new subtasks prepend), collapsible completed subtasks section, cascade delete (parent deletion removes all subtasks with confirmation warning)
- Custom confirmation dialogs (ConfirmDialog component replaces native confirm())
- Workspace path validation (rejects filesystem root `/` and system directories: `/etc`, `/usr`, `/bin`, `/sbin`, `/var`, `/proc`, `/sys`, `/dev`)
- Task detail auto-cleanup (taskStack clears when viewed task is deleted or list switches)
- Swipe gestures on mobile: swipe left/right on a task to toggle completion (swipe direction depends on current status)
- Accessibility: ARIA labels/roles on interactive components, keyboard handlers, `prefers-reduced-motion` CSS support
### GUI features NOT yet done
PLAN.md (34 changed lines)
@ -123,7 +123,6 @@ WorkspaceConfig {
webdav_url: Option<String>,
webdav_path: Option<String>, // User-selected remote folder
google_account: Option<String>, // Email/display name (GoogleTasks workspaces)
last_sync: Option<DateTime>, // Timestamp of last successful sync
theme: Option<String>,
sync_interval_secs: Option<u64>, // Auto-sync polling interval (focused)
sync_interval_unfocused_secs: Option<u64>, // Auto-sync interval when unfocused
@ -224,8 +223,6 @@ impl TaskRepository {
pub fn get_lists(&self) -> Result<Vec<TaskList>>;
pub fn get_list(&self, list_id: Uuid) -> Result<TaskList>;
pub fn delete_list(&mut self, id: Uuid) -> Result<()>;
pub fn rename_list(&mut self, list_id: Uuid, new_name: String) -> Result<()>;
pub fn move_task(&mut self, from_list_id: Uuid, to_list_id: Uuid, task_id: Uuid) -> Result<()>;
// Task ordering (modifies .listdata.json)
pub fn reorder_task(&mut self, list_id: Uuid, task_id: Uuid, new_position: usize) -> Result<()>;
@ -532,11 +529,8 @@ pub fn delete_credentials(domain: &str) -> Result<()>;
Add to `onyx-core/Cargo.toml`:
```toml
reqwest = { version = "0.12", features = ["json", "rustls-tls"] }
keyring = { version = "3", features = ["apple-native", "windows-native", "sync-secret-service"], optional = true }
zeroize = "1"
sha2 = "0.10"
quick-xml = "0.36"
# WebDAV implemented as custom client using reqwest + quick-xml for PROPFIND parsing
```
### Features
@ -671,6 +665,7 @@ apps/tauri/
│ │ ├── TaskItem.svelte
│ │ ├── NewTaskInput.svelte
│ │ ├── TaskDetailView.svelte
│ │ ├── BottomSheet.svelte
│ │ ├── ConfirmDialog.svelte
│ │ └── DateTimePicker.svelte
│ └── stores/
@ -755,17 +750,17 @@ WorkspaceConfig {
- [x] Mark tasks complete/incomplete with animated transitions
- [x] Drag-and-drop task reordering
- [x] Sliding lists drawer (80cqi wide, left side)
- [x] Settings popup overlay (WebDAV config, theme selector, window decorations)
- [x] Per-workspace theme system (System default, Light, Dark, Nord, Dracula, Solarized Dark, Black and Gold, Ink)
- [x] Animated completed section show/hide
- [x] Move task between lists (kebab menu → "Move to..." inline list in task detail view, not a submenu)
- [x] Optional time on due dates (`has_time: bool` field on Task with `#[serde(default)]` for backward compat; replaces the hours==0 heuristic)
- [x] Due date picker/editor (DateTimePicker component in both new task toast + task detail view)
- [x] WebDAV setup flow with credentials (settings auto-populates URL/username/password from config + keychain on open)
- [x] List rename (inline input via list kebab menu in drawer)
- [x] Keyboard shortcuts (Escape closes settings → detail → drawer → menus in priority order)
- [x] Sync status indicators (last-sync time + upload/download counts chip in TasksScreen)
- [ ] Push/pull sync mode selection (session-only sync direction selector in SettingsScreen)
- [x] Group-by-date toggle per list (checkmark toggle in list kebab menu)
- [x] Subtask hierarchy (expand/collapse, inline add, cascade toggle/delete)
- [ ] Search/filter tasks
@ -846,11 +841,11 @@ npm run tauri ios build
#### Features
- [x] Gate file-watcher initialization behind `#[cfg(not(target_os = "android"))]`
- [x] Install Android Studio + NDK, configure env vars
- [x] Add Android Rust targets
- [ ] `npm run tauri android init` (generates `gen/android/`)
- [ ] Confirm `npm run tauri android build` succeeds
- [ ] Basic smoke test: app launches, workspace setup, create a task
- [ ] Set up macOS CI for iOS builds
- [ ] `npm run tauri ios init` (generates `gen/ios/`)
@ -913,8 +908,7 @@ npm run tauri ios build
- [ ] Multiple windows (optional)
#### Mobile-Specific
- [x] Swipe gestures (swipe to toggle completion; direction depends on current task status)
- [ ] Swipe to delete
- [ ] Pull-to-refresh
- [ ] Touch-optimized UI elements
- [ ] Larger touch targets
@ -983,7 +977,7 @@ npm run tauri ios build
#### Google Tasks Importer
- [x] `google_tasks.rs` module in `onyx-core` — client, UUID mapping, read-only sync (remote always wins)
- [x] `GoogleTasks` workspace mode and `google_account` config field
- [x] Tauri commands: `start_google_oauth()`, `add_google_tasks_workspace()`, `sync_google_tasks_workspace()`
- [ ] Complete OAuth flow (client ID/secret placeholders need real credentials)
- [ ] Migrate tasks, lists, due dates, notes with full UI integration
- [ ] Preserve task hierarchy and order
@ -1058,6 +1052,6 @@ This project is free and open-source software licensed under GPL v3.
---
**Last Updated**: 2026-04-27
**Document Version**: 4.5
**Status**: Ready to Implement - Milestone-Driven Plan
README.md (20 changed lines)
@ -2,8 +2,6 @@
A **local-first, cross-platform tasks application** built with Rust. Inspired by Google Tasks, designed for speed and flexibility.

## Core Principles
- **Local-First**: Your data, your folder, your control
@ -23,10 +21,7 @@ onyx/
│ └── onyx-cli/ # CLI frontend
├── apps/
│ └── tauri/ # Tauri v2 GUI (Svelte 5 + Tailwind CSS 4)
│ └── tauri-plugin-credentials/ # Cross-platform credential storage plugin
└── docs/
├── API.md # Core library API reference
└── DEVELOPMENT.md # Development guide
```
## Project Status
@ -34,7 +29,7 @@ onyx/
- **Phase 1** (Core + CLI): Complete
- **Phase 2** (WebDAV Sync): Complete — backend, CLI, and GUI all wired
- **Phase 3** (GUI MVP): Complete
- **Phase 4** (Mobile): In progress — Android preliminaries done (file-watcher gating, `tauri-plugin-credentials`, safe area insets, Android targets configured); needs `tauri android init`, build verification, and iOS setup
### Core Library (`onyx-core`)
- Data models (Task, TaskList, AppConfig, WorkspaceConfig)
@ -60,19 +55,16 @@ onyx/
- Drag-and-drop reordering
- Sliding lists drawer, settings popup
- Workspace switcher with add/remove
- Per-workspace theme system (System default, Light, Dark, Nord, Dracula, Solarized Dark, Black and Gold, Ink)
- Due date picker/editor with optional time
- Subtask hierarchy with three-panel slide navigation
- Move tasks between lists
- List rename, workspace rename, group-by-date toggle, delete completed tasks
- Keyboard shortcuts (Escape priority chain)
- WebDAV setup flow with credential auto-population
- File watcher (auto-reloads on external changes)
- Auto-sync with configurable interval, status indicators
- Swipe gestures on mobile (swipe to toggle completion)
- Custom confirmation dialogs
- Safe area insets for mobile (viewport-fit=cover)
- Accessibility: ARIA labels/roles, keyboard handlers, `prefers-reduced-motion` support
- Desktop packaging (Linux: AppImage + .deb; Windows: MSI)
## Development Setup
@ -177,8 +169,6 @@ id: 550e8400-e29b-41d4-a716-446655440000
status: backlog
version: 3
date: 2026-11-15T14:00:00Z
has_time: true
parent: 550e8400-e29b-41d4-a716-446655440001
---
Task description and notes go here in **markdown** format.
@ -220,8 +210,8 @@ cargo test -- --nocapture
## What's Next?
- **Phase 4** (in progress): Complete Android build (`tauri android init` + verification), iOS setup on macOS CI
- **Phase 5**: GUI advanced features (rich markdown editor, search/filter, change storage folder)
- **Phase 6**: Mobile polish and platform-specific integrations
- **Phase 7**: Google Tasks importer and unique features
apps/tauri/package-lock.json (1197 changed lines, generated)
@ -7,23 +7,16 @@
"dev": "vite",
"build": "vite build",
"preview": "vite preview",
|
"preview": "vite preview",
|
||||||
"tauri": "tauri",
|
"tauri": "tauri"
|
||||||
"test": "vitest run",
|
|
||||||
"test:watch": "vitest"
|
|
||||||
},
|
},
|
||||||
"devDependencies": {
|
"devDependencies": {
|
||||||
"@sveltejs/vite-plugin-svelte": "^5.0.0",
|
"@sveltejs/vite-plugin-svelte": "^5.0.0",
|
||||||
"@tailwindcss/vite": "^4.0.0",
|
"@tailwindcss/vite": "^4.0.0",
|
||||||
"@tauri-apps/cli": "^2.0.0",
|
"@tauri-apps/cli": "^2.0.0",
|
||||||
"@testing-library/jest-dom": "^6.9.1",
|
|
||||||
"@testing-library/svelte": "^5.3.1",
|
|
||||||
"@testing-library/user-event": "^14.6.1",
|
|
||||||
"jsdom": "^29.0.2",
|
|
||||||
"svelte": "^5.0.0",
|
"svelte": "^5.0.0",
|
||||||
"tailwindcss": "^4.0.0",
|
"tailwindcss": "^4.0.0",
|
||||||
"typescript": "^5.6.0",
|
"typescript": "^5.6.0",
|
||||||
"vite": "^6.0.0",
|
"vite": "^6.0.0"
|
||||||
"vitest": "^4.1.4"
|
|
||||||
},
|
},
|
||||||
"dependencies": {
|
"dependencies": {
|
||||||
"@tauri-apps/api": "^2.0.0",
|
"@tauri-apps/api": "^2.0.0",
|
||||||
|
|
|
||||||
|
|
@@ -60,11 +60,6 @@ fn lock_state(state: &Mutex<AppState>) -> Result<std::sync::MutexGuard<'_, AppSt
     state.lock().map_err(|e| format!("State lock poisoned: {}", e))
 }
 
-/// Parse a UUID from a string, converting errors to the String format Tauri commands use.
-fn parse_uuid(s: &str) -> Result<Uuid, String> {
-    Uuid::parse_str(s).map_err(|e| e.to_string())
-}
-
 impl AppState {
     /// Persist config to disk, converting errors to String for Tauri commands.
     fn save_config(&self) -> Result<(), String> {
@@ -72,25 +67,6 @@ impl AppState {
     }
 }
 
-/// Extract the hostname from a URL (scheme://host/...), used as the credential key.
-/// Returns an empty string if the URL has no scheme or host.
-fn credential_domain(url: &str) -> String {
-    url.split("://")
-        .nth(1)
-        .and_then(|rest| rest.split('/').next())
-        .unwrap_or("")
-        .to_string()
-}
-
-/// Join a remote base directory with a child path, handling empty base and trailing slashes.
-fn join_remote_path(base: &str, child: &str) -> String {
-    if base.is_empty() {
-        child.to_string()
-    } else {
-        format!("{}/{}", base.trim_end_matches('/'), child)
-    }
-}
-
 /// Validate that a workspace path is a reasonable directory and not a system path.
 fn validate_workspace_path(path: &str) -> Result<(), String> {
     let p = PathBuf::from(path);
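The two helpers deleted in this hunk are self-contained and easy to verify in isolation. A runnable sketch, with the function bodies taken from the diff (the `main` wrapper and example URLs are illustrative, not from the repo):

```rust
/// Extract the hostname from a URL (scheme://host/...), used as the credential key.
/// Returns an empty string if the URL has no scheme or host.
fn credential_domain(url: &str) -> String {
    url.split("://")
        .nth(1)
        .and_then(|rest| rest.split('/').next())
        .unwrap_or("")
        .to_string()
}

/// Join a remote base directory with a child path, handling empty base and trailing slashes.
fn join_remote_path(base: &str, child: &str) -> String {
    if base.is_empty() {
        child.to_string()
    } else {
        format!("{}/{}", base.trim_end_matches('/'), child)
    }
}

fn main() {
    // Hostname survives, path and scheme are stripped.
    assert_eq!(
        credential_domain("https://dav.example.com/remote.php/webdav"),
        "dav.example.com"
    );
    // No scheme: falls back to the empty string rather than panicking.
    assert_eq!(credential_domain("no-scheme"), "");
    // Empty base and trailing-slash base both produce a clean join.
    assert_eq!(join_remote_path("", "notes"), "notes");
    assert_eq!(join_remote_path("tasks/", "notes"), "tasks/notes");
    println!("ok");
}
```

Inlining these at every call site (as the branch side does below) duplicates the same `split("://")` chain several times; factoring them out is what keeps the later hunks small.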
@@ -103,10 +79,7 @@ fn validate_workspace_path(path: &str) -> Result<(), String> {
     #[cfg(unix)]
     {
         let forbidden = ["/", "/etc", "/usr", "/bin", "/sbin", "/var", "/proc", "/sys", "/dev"];
-        // Strip trailing slashes, but keep "/" itself — trim_end_matches would
-        // collapse it to "" and slip past the forbidden check.
         let canonical = normalized.trim_end_matches('/');
-        let canonical = if canonical.is_empty() { "/" } else { canonical };
         if forbidden.contains(&canonical) {
             return Err(format!("Cannot use system directory as workspace: {}", path));
         }
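The guard removed here closes a real edge case: `trim_end_matches('/')` applied to the filesystem root consumes the whole string. A minimal sketch of the normalization (the `canonicalize_root` name is hypothetical, introduced only for this example):

```rust
/// Strip trailing slashes but keep a bare "/" intact, so the root still
/// matches entries in a forbidden-paths list.
fn canonicalize_root(normalized: &str) -> &str {
    let canonical = normalized.trim_end_matches('/');
    if canonical.is_empty() { "/" } else { canonical }
}

fn main() {
    // Without the is_empty() guard, "/" collapses to "" and would no longer
    // equal the "/" entry in the forbidden list.
    assert_eq!("/".trim_end_matches('/'), "");
    assert_eq!(canonicalize_root("/"), "/");
    // Ordinary paths only lose their trailing slashes.
    assert_eq!(canonicalize_root("/etc/"), "/etc");
    assert_eq!(canonicalize_root("/home/user/tasks/"), "/home/user/tasks");
    println!("ok");
}
```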
@@ -206,13 +179,6 @@ fn add_workspace(
     state: State<'_, Mutex<AppState>>,
 ) -> Result<(), String> {
     validate_workspace_path(&path)?;
-    // Ensure the path exists and is a valid workspace before persisting the
-    // config. Without this, calling add_workspace directly on a missing
-    // directory would save the workspace but every subsequent ensure_repo
-    // call would fail with "Path does not exist".
-    TaskRepository::init(PathBuf::from(&path))
-        .map(|_| ())
-        .map_err(|e| e.to_string())?;
     let mut s = lock_state(&state)?;
     let ws = WorkspaceConfig::new(name, PathBuf::from(&path));
     let id = s.config.add_workspace(ws);
@@ -290,7 +256,10 @@ async fn rename_workspace(
     let base_url = webdav_url.as_deref().ok_or("No WebDAV URL configured")?;
     let remote_path = webdav_path.as_deref().unwrap_or("");
 
-    let domain = credential_domain(base_url);
+    let domain = base_url
+        .split("://").nth(1)
+        .and_then(|rest| rest.split('/').next())
+        .unwrap_or("").to_string();
     let creds = app_handle.state::<Credentials<tauri::Wry>>();
     let (username, password) = creds.load(&domain)?;
@@ -371,7 +340,7 @@ fn delete_list(
     let mut s = lock_state(&state)?;
     ensure_repo(&mut s)?;
     mute_watcher(&mut s);
-    let id = parse_uuid(&list_id)?;
+    let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
     repo_mut(&mut s)?
         .delete_list(id)
         .map_err(|e| e.to_string())
@@ -386,7 +355,7 @@ fn list_tasks(
 ) -> Result<Vec<Task>, String> {
     let mut s = lock_state(&state)?;
     ensure_repo(&mut s)?;
-    let id = parse_uuid(&list_id)?;
+    let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
     repo_ref(&s)?
         .list_tasks(id)
         .map_err(|e| e.to_string())
@@ -398,27 +367,20 @@ fn create_task(
     title: String,
     description: Option<String>,
     parent_id: Option<String>,
-    date: Option<chrono::DateTime<chrono::Utc>>,
-    has_time: Option<bool>,
     state: State<'_, Mutex<AppState>>,
 ) -> Result<Task, String> {
     let mut s = lock_state(&state)?;
     ensure_repo(&mut s)?;
     mute_watcher(&mut s);
-    let id = parse_uuid(&list_id)?;
+    let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
     let mut task = Task::new(title);
     if let Some(desc) = description.filter(|d| !d.is_empty()) {
         task.description = desc;
     }
     if let Some(pid) = parent_id {
-        let parent_uuid = parse_uuid(&pid)?;
+        let parent_uuid = Uuid::parse_str(&pid).map_err(|e| e.to_string())?;
         task.parent_id = Some(parent_uuid);
     }
-    // Accept the date fields at creation time so callers don't have to do a
-    // second update() round-trip just to attach a date — which previously
-    // dropped the date entirely if the follow-up update failed.
-    task.date = date;
-    task.has_time = has_time.unwrap_or(false);
     repo_mut(&mut s)?
         .create_task(id, task)
         .map_err(|e| e.to_string())
@@ -433,7 +395,7 @@ fn update_task(
     let mut s = lock_state(&state)?;
     ensure_repo(&mut s)?;
     mute_watcher(&mut s);
-    let id = parse_uuid(&list_id)?;
+    let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
     repo_mut(&mut s)?
         .update_task(id, task)
         .map_err(|e| e.to_string())
@@ -448,36 +410,17 @@ fn delete_task(
     let mut s = lock_state(&state)?;
     ensure_repo(&mut s)?;
     mute_watcher(&mut s);
-    let lid = parse_uuid(&list_id)?;
-    let tid = parse_uuid(&task_id)?;
+    let lid = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
+    let tid = Uuid::parse_str(&task_id).map_err(|e| e.to_string())?;
     let repo = repo_mut(&mut s)?;
-    // Cascade-delete the full descendant subtree (not just direct children)
-    // so deleting a parent can't leave grandchildren orphaned with a
-    // parent_id pointing at a deleted task.
+    // Cascade-delete subtasks first
     let all_tasks = repo.list_tasks(lid).map_err(|e| e.to_string())?;
-    // Build a parent -> children index in one pass so the BFS below is O(n)
-    // instead of O(n * depth) scanning all tasks for each frontier pop.
-    let mut children_by_parent: std::collections::HashMap<Uuid, Vec<Uuid>> =
-        std::collections::HashMap::new();
-    for t in &all_tasks {
-        if let Some(pid) = t.parent_id {
-            children_by_parent.entry(pid).or_default().push(t.id);
-        }
-    }
-    let mut to_delete: std::collections::HashSet<Uuid> = std::collections::HashSet::new();
-    let mut frontier: Vec<Uuid> = vec![tid];
-    while let Some(parent) = frontier.pop() {
-        if let Some(children) = children_by_parent.get(&parent) {
-            for &child_id in children {
-                if to_delete.insert(child_id) {
-                    frontier.push(child_id);
-                }
-            }
-        }
-    }
-    // Delete children before the parent so a mid-cascade failure doesn't
-    // leave the parent removed but descendants stranded.
-    for child_id in to_delete {
+    let child_ids: Vec<Uuid> = all_tasks
+        .iter()
+        .filter(|t| t.parent_id == Some(tid))
+        .map(|t| t.id)
+        .collect();
+    for child_id in child_ids {
         repo.delete_task(lid, child_id).map_err(|e| format!("Failed to delete subtask {}: {}", child_id, e))?;
     }
     repo.delete_task(lid, tid)
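The main-side cascade in this hunk collects the whole descendant subtree before deleting anything, while the branch side only gathers direct children and would orphan grandchildren. The traversal can be sketched standalone over plain `(child, parent)` edges (the `descendants` function and `u32` IDs are illustrative stand-ins for the repo's `Uuid` tasks):

```rust
use std::collections::{HashMap, HashSet};

/// Collect every descendant of `root` in O(n): build a parent -> children
/// index in one pass, then walk a frontier instead of rescanning all tasks
/// for each node popped.
fn descendants(edges: &[(u32, u32)], root: u32) -> HashSet<u32> {
    let mut children_by_parent: HashMap<u32, Vec<u32>> = HashMap::new();
    for &(child, parent) in edges {
        children_by_parent.entry(parent).or_default().push(child);
    }
    let mut to_delete = HashSet::new();
    let mut frontier = vec![root];
    while let Some(parent) = frontier.pop() {
        if let Some(children) = children_by_parent.get(&parent) {
            for &child in children {
                // insert() returning true means "first visit": only then do we
                // expand the child, so cycles or duplicate edges cannot loop.
                if to_delete.insert(child) {
                    frontier.push(child);
                }
            }
        }
    }
    to_delete
}

fn main() {
    // Tree: 1 -> 2 -> 3, and 1 -> 4. Deleting 1 must also reach grandchild 3,
    // which a direct-children-only filter would miss.
    let edges = [(2, 1), (3, 2), (4, 1)];
    let d = descendants(&edges, 1);
    assert_eq!(d, HashSet::from([2, 3, 4]));
    println!("ok");
}
```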
@@ -493,8 +436,8 @@ fn toggle_task(
     let mut s = lock_state(&state)?;
     ensure_repo(&mut s)?;
     mute_watcher(&mut s);
-    let lid = parse_uuid(&list_id)?;
-    let tid = parse_uuid(&task_id)?;
+    let lid = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
+    let tid = Uuid::parse_str(&task_id).map_err(|e| e.to_string())?;
     let repo = repo_mut(&mut s)?;
     let mut task = repo.get_task(lid, tid).map_err(|e| e.to_string())?;
     match task.status {
@@ -511,9 +454,7 @@ fn toggle_task(
                 TaskStatus::Backlog => child.uncomplete(),
                 TaskStatus::Completed => child.complete(),
             }
-            let child_id = child.id;
-            repo.update_task(lid, child)
-                .map_err(|e| format!("Failed to cascade to subtask {}: {}", child_id, e))?;
+            let _ = repo.update_task(lid, child);
         }
     }
     Ok(task)
@@ -529,8 +470,8 @@ fn reorder_task(
     let mut s = lock_state(&state)?;
     ensure_repo(&mut s)?;
     mute_watcher(&mut s);
-    let lid = parse_uuid(&list_id)?;
-    let tid = parse_uuid(&task_id)?;
+    let lid = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
+    let tid = Uuid::parse_str(&task_id).map_err(|e| e.to_string())?;
     repo_mut(&mut s)?
         .reorder_task(lid, tid, new_position)
         .map_err(|e| e.to_string())
@@ -548,9 +489,9 @@ fn move_task(
     let mut s = lock_state(&state)?;
     ensure_repo(&mut s)?;
     mute_watcher(&mut s);
-    let from = parse_uuid(&from_list_id)?;
-    let to = parse_uuid(&to_list_id)?;
-    let tid = parse_uuid(&task_id)?;
+    let from = Uuid::parse_str(&from_list_id).map_err(|e| e.to_string())?;
+    let to = Uuid::parse_str(&to_list_id).map_err(|e| e.to_string())?;
+    let tid = Uuid::parse_str(&task_id).map_err(|e| e.to_string())?;
     repo_mut(&mut s)?
         .move_task(from, to, tid)
         .map_err(|e| e.to_string())
@@ -565,7 +506,7 @@ fn rename_list(
     let mut s = lock_state(&state)?;
     ensure_repo(&mut s)?;
     mute_watcher(&mut s);
-    let id = parse_uuid(&list_id)?;
+    let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
     repo_mut(&mut s)?
         .rename_list(id, new_name)
         .map_err(|e| e.to_string())
@@ -580,7 +521,7 @@ fn set_group_by_date(
     let mut s = lock_state(&state)?;
     ensure_repo(&mut s)?;
     mute_watcher(&mut s);
-    let id = parse_uuid(&list_id)?;
+    let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
    repo_mut(&mut s)?
         .set_group_by_date(id, enabled)
         .map_err(|e| e.to_string())
@@ -593,7 +534,7 @@ fn get_group_by_date(
 ) -> Result<bool, String> {
     let mut s = lock_state(&state)?;
     ensure_repo(&mut s)?;
-    let id = parse_uuid(&list_id)?;
+    let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
     repo_ref(&s)?
         .get_group_by_date(id)
         .map_err(|e| e.to_string())
@@ -681,9 +622,10 @@ async fn list_remote_folder(
     let dir_entries: Vec<_> = entries.into_iter().filter(|e| e.is_dir).collect();
 
     // Check all subfolders for .onyx-workspace.json in parallel
-    let sub_paths: Vec<_> = dir_entries.iter()
-        .map(|entry| join_remote_path(&path, &entry.path))
-        .collect();
+    let sub_paths: Vec<_> = dir_entries.iter().map(|entry| {
+        if path.is_empty() { entry.path.clone() }
+        else { format!("{}/{}", path.trim_end_matches('/'), entry.path) }
+    }).collect();
     let checks: Vec<_> = sub_paths.iter().map(|sp| {
         client.list_files(sp)
     }).collect();
@@ -715,7 +657,11 @@ async fn inspect_remote_workspace(
     let mut lists = Vec::new();
     for entry in entries {
         if !entry.is_dir { continue; }
-        let list_path = join_remote_path(&path, &entry.path);
+        let list_path = if path.is_empty() {
+            entry.path.clone()
+        } else {
+            format!("{}/{}", path.trim_end_matches('/'), entry.path)
+        };
         let files = client.list_files(&list_path).await.unwrap_or_else(|e| {
             eprintln!("Warning: failed to list remote folder '{}': {}", list_path, e);
             Vec::new()
@@ -751,7 +697,11 @@ async fn create_remote_workspace(
         "list_order": [],
         "last_opened_list": null,
     });
-    let file_path = join_remote_path(&path, ".onyx-workspace.json");
+    let file_path = if path.is_empty() {
+        ".onyx-workspace.json".to_string()
+    } else {
+        format!("{}/{}", path.trim_end_matches('/'), ".onyx-workspace.json")
+    };
     client.put_file(&file_path, serde_json::to_string_pretty(&metadata).map_err(|e| e.to_string())?.into_bytes())
         .await
         .map_err(|e| e.to_string())?;
@@ -785,7 +735,12 @@ fn add_webdav_workspace(
     s.repo = None;
 
     // Store credentials keyed by hostname
-    let domain = credential_domain(&webdav_url);
+    let domain = webdav_url
+        .split("://")
+        .nth(1)
+        .and_then(|rest| rest.split('/').next())
+        .unwrap_or("")
+        .to_string();
     s.save_config()?;
     drop(s);
     let creds = app_handle.state::<Credentials<tauri::Wry>>();
@@ -848,7 +803,12 @@ async fn sync_workspace(
     };
 
     // Step 2: load credentials
-    let domain = credential_domain(&webdav_url);
+    let domain = webdav_url
+        .split("://")
+        .nth(1)
+        .and_then(|rest| rest.split('/').next())
+        .unwrap_or("")
+        .to_string();
     let creds = app_handle.state::<Credentials<tauri::Wry>>();
     let (username, password) = creds.load(&domain)?;
42  apps/tauri/src/lib/components/BottomSheet.svelte  (new file)
@@ -0,0 +1,42 @@
+<script lang="ts">
+  import type { Snippet } from "svelte";
+  let { onclose, children }: { onclose: () => void; children: Snippet } = $props();
+</script>
+
+<!-- Backdrop -->
+<div
+  class="fixed inset-0 z-40 bg-black/40"
+  role="button"
+  tabindex="-1"
+  aria-label="Close sheet"
+  onclick={onclose}
+  onkeydown={(e) => { if (e.key === "Escape") onclose(); }}
+></div>
+
+<!-- Sheet -->
+<div
+  role="dialog"
+  aria-modal="true"
+  class="fixed bottom-0 left-0 right-0 z-50 max-h-[70vh] overflow-y-auto rounded-t-2xl bg-surface-light shadow-xl dark:bg-card-dark animate-slide-up"
+>
+  <!-- Drag handle -->
+  <div class="flex justify-center py-2">
+    <div class="h-1 w-8 rounded-full bg-gray-300 dark:bg-gray-600"></div>
+  </div>
+  {@render children()}
+  <div class="h-[env(safe-area-inset-bottom)]"></div>
+</div>
+
+<style>
+  @keyframes slide-up {
+    from {
+      transform: translateY(100%);
+    }
+    to {
+      transform: translateY(0);
+    }
+  }
+  .animate-slide-up {
+    animation: slide-up 0.25s ease-out;
+  }
+</style>
@@ -1,43 +1,6 @@
-<script lang="ts" module>
-  // Shared counter so sibling Escape handlers (e.g. TasksScreen's svelte:window
-  // listener) can tell when a ConfirmDialog is open and defer to it instead of
-  // popping the task-detail view behind the dialog.
-  let openCount = $state(0);
-  export function isConfirmDialogOpen(): boolean {
-    return openCount > 0;
-  }
-</script>
-
 <script lang="ts">
-  import { onMount, onDestroy, tick } from "svelte";
-
   let { message, detail, confirmText = "Confirm", danger = false, onconfirm, oncancel }:
     { message: string; detail?: string; confirmText?: string; danger?: boolean; onconfirm: () => void; oncancel: () => void } = $props();
-
-  let cancelBtn: HTMLButtonElement | undefined = $state();
-
-  function handleGlobalKeydown(e: KeyboardEvent) {
-    if (e.key !== "Escape") return;
-    e.stopPropagation();
-    e.stopImmediatePropagation();
-    e.preventDefault();
-    oncancel();
-  }
-
-  onMount(() => {
-    openCount += 1;
-    // Focus Cancel so Escape/Enter go through the dialog's own keydown handler
-    // (which cancels) instead of leaking to the global svelte:window listener
-    // in TasksScreen (which would pop the task detail view).
-    tick().then(() => cancelBtn?.focus());
-    // Belt-and-suspenders: capture-phase listener dismisses even if focus
-    // didn't land on Cancel (e.g. under test harnesses or headless compositors).
-    window.addEventListener("keydown", handleGlobalKeydown, true);
-  });
-  onDestroy(() => {
-    openCount -= 1;
-    window.removeEventListener("keydown", handleGlobalKeydown, true);
-  });
 </script>
 
 <div
@@ -60,7 +23,6 @@
       {/if}
       <div class="mt-4 flex justify-end gap-2">
         <button
-          bind:this={cancelBtn}
           onclick={oncancel}
           class="rounded-lg px-4 py-2 text-sm hover:bg-black/5 dark:hover:bg-white/10"
         >
@@ -1,105 +0,0 @@
-import { describe, it, expect, vi, beforeEach } from "vitest";
-import { render, screen, cleanup } from "@testing-library/svelte";
-import userEvent from "@testing-library/user-event";
-import ConfirmDialog, { isConfirmDialogOpen } from "./ConfirmDialog.svelte";
-
-beforeEach(() => {
-  cleanup();
-});
-
-describe("ConfirmDialog", () => {
-  it("renders the message, detail and custom confirm label", () => {
-    render(ConfirmDialog, {
-      message: "Delete task?",
-      detail: "This cannot be undone.",
-      confirmText: "Delete",
-      onconfirm: vi.fn(),
-      oncancel: vi.fn(),
-    });
-    expect(screen.getByText("Delete task?")).toBeInTheDocument();
-    expect(screen.getByText("This cannot be undone.")).toBeInTheDocument();
-    expect(screen.getByRole("button", { name: "Delete" })).toBeInTheDocument();
-    expect(screen.getByRole("button", { name: "Cancel" })).toBeInTheDocument();
-  });
-
-  it("fires oncancel when Cancel is clicked", async () => {
-    const user = userEvent.setup();
-    const oncancel = vi.fn();
-    render(ConfirmDialog, {
-      message: "Delete?",
-      onconfirm: vi.fn(),
-      oncancel,
-    });
-    await user.click(screen.getByRole("button", { name: "Cancel" }));
-    expect(oncancel).toHaveBeenCalledTimes(1);
-  });
-
-  it("fires onconfirm when Confirm is clicked and not oncancel", async () => {
-    const user = userEvent.setup();
-    const onconfirm = vi.fn();
-    const oncancel = vi.fn();
-    render(ConfirmDialog, {
-      message: "Delete?",
-      confirmText: "Delete",
-      onconfirm,
-      oncancel,
-    });
-    await user.click(screen.getByRole("button", { name: "Delete" }));
-    expect(onconfirm).toHaveBeenCalledTimes(1);
-    expect(oncancel).not.toHaveBeenCalled();
-  });
-
-  it("cancels and stops propagation on Escape (regression: used to bubble and pop task detail)", async () => {
-    const oncancel = vi.fn();
-    // An outer bubble-phase listener emulates TasksScreen's svelte:window
-    // Escape handler. If the dialog leaks Escape, this spy fires too.
-    const outer = vi.fn();
-    window.addEventListener("keydown", outer);
-    try {
-      render(ConfirmDialog, {
-        message: "Delete?",
-        onconfirm: vi.fn(),
-        oncancel,
-      });
-      window.dispatchEvent(new KeyboardEvent("keydown", { key: "Escape", bubbles: true, cancelable: true }));
-      expect(oncancel).toHaveBeenCalledTimes(1);
-      expect(outer).not.toHaveBeenCalled();
-    } finally {
-      window.removeEventListener("keydown", outer);
-    }
-  });
-
-  it("ignores non-Escape keydowns", async () => {
-    const oncancel = vi.fn();
-    render(ConfirmDialog, {
-      message: "Delete?",
-      onconfirm: vi.fn(),
-      oncancel,
-    });
-    window.dispatchEvent(new KeyboardEvent("keydown", { key: "a" }));
-    window.dispatchEvent(new KeyboardEvent("keydown", { key: "Enter" }));
-    expect(oncancel).not.toHaveBeenCalled();
-  });
-
-  it("increments the open-count singleton so parent Escape handlers can defer", () => {
-    expect(isConfirmDialogOpen()).toBe(false);
-    const { unmount } = render(ConfirmDialog, {
-      message: "Delete?",
-      onconfirm: vi.fn(),
-      oncancel: vi.fn(),
-    });
-    expect(isConfirmDialogOpen()).toBe(true);
-    unmount();
-    expect(isConfirmDialogOpen()).toBe(false);
-  });
-
-  it("tracks multiple concurrently-mounted dialogs and releases on unmount", () => {
-    const a = render(ConfirmDialog, { message: "A?", onconfirm: vi.fn(), oncancel: vi.fn() });
-    const b = render(ConfirmDialog, { message: "B?", onconfirm: vi.fn(), oncancel: vi.fn() });
-    expect(isConfirmDialogOpen()).toBe(true);
-    a.unmount();
-    expect(isConfirmDialogOpen()).toBe(true);
-    b.unmount();
-    expect(isConfirmDialogOpen()).toBe(false);
-  });
-});
@@ -13,8 +13,6 @@
   let viewYear = $state(existing ? existing.getFullYear() : now.getFullYear());
   let viewMonth = $state(existing ? existing.getMonth() : now.getMonth());
   let selectedDay = $state(existing ? existing.getDate() : now.getDate());
-  let selectedYear = $state(existing ? existing.getFullYear() : now.getFullYear());
-  let selectedMonth = $state(existing ? existing.getMonth() : now.getMonth());
   let includeTime = $state(has_time);
   let selectedHour = $state(existing ? existing.getHours() : now.getHours());
   let selectedMinute = $state(existing ? existing.getMinutes() : 0);
@@ -52,8 +50,6 @@
 
   function selectDay(day: number) {
     selectedDay = day;
-    selectedYear = viewYear;
-    selectedMonth = viewMonth;
   }
 
   function isToday(day: number): boolean {
@@ -61,16 +57,16 @@
   }
 
   function isSelected(day: number): boolean {
-    return selectedDay === day && selectedYear === viewYear && selectedMonth === viewMonth;
+    return selectedDay === day && (!value || (() => {
+      const v = new Date(value);
+      return v.getFullYear() === viewYear && v.getMonth() === viewMonth;
+    })());
   }
 
   function done() {
     const h = includeTime ? selectedHour : 0;
     const m = includeTime ? selectedMinute : 0;
-    // Commit based on the last-selected year/month, not the currently-viewed
-    // ones — users can navigate months after selecting a day without
-    // accidentally shifting the chosen date to the viewed month.
-    const iso = new Date(selectedYear, selectedMonth, selectedDay, h, m).toISOString();
+    const iso = new Date(viewYear, viewMonth, selectedDay, h, m).toISOString();
     onchange(iso, includeTime);
     dismiss();
   }
@@ -133,9 +129,9 @@
             <button
               onclick={() => selectDay(day)}
               class="mx-auto flex h-8 w-8 items-center justify-center rounded-full text-sm transition-colors
-                {isSelected(day) ? 'bg-primary text-white' : ''}
-                {isToday(day) && !isSelected(day) ? 'font-bold text-primary' : ''}
-                {!isSelected(day) && !isToday(day) ? 'hover:bg-black/5 dark:hover:bg-white/10' : ''}"
+                {selectedDay === day ? 'bg-primary text-white' : ''}
+                {isToday(day) && selectedDay !== day ? 'font-bold text-primary' : ''}
+                {selectedDay !== day && !isToday(day) ? 'hover:bg-black/5 dark:hover:bg-white/10' : ''}"
             >
               {day}
             </button>
@@ -1,74 +0,0 @@
-import { describe, it, expect, vi, beforeEach } from "vitest";
-import { render, screen, cleanup } from "@testing-library/svelte";
-import userEvent from "@testing-library/user-event";
-import DateTimePicker from "./DateTimePicker.svelte";
-
-beforeEach(() => {
-  cleanup();
-});
-
-describe("DateTimePicker — selected highlight", () => {
-  it("only marks the selected day in the month/year that was actually picked", async () => {
-    const user = userEvent.setup();
-    // Pick a date in the current month so the component opens on it.
-    const now = new Date();
-    const existing = new Date(now.getFullYear(), now.getMonth(), 15, 0, 0, 0).toISOString();
-
-    render(DateTimePicker, {
-      value: existing,
-      has_time: false,
-      onchange: vi.fn(),
-      onclose: vi.fn(),
-    });
-
-    // The "15" button for the current month should be rendered with the
-    // selected styling (bg-primary).
-    const day15 = screen.getByRole("button", { name: "15" });
-    expect(day15.className).toMatch(/bg-primary/);
-
-    // Navigate one month forward. The same "15" cell must NOT be marked as
-    // selected, because the user hasn't picked a day in that month yet.
-    const nextMonthBtn = screen.getAllByRole("button").find((b) =>
-      b.querySelector("svg path[d*='M7.21 14.77']"),
-    ) as HTMLElement;
-    await user.click(nextMonthBtn);
-
-    const nextMonth15 = screen.getByRole("button", { name: "15" });
-    expect(nextMonth15.className).not.toMatch(/bg-primary/);
-  });
-
-  it("commits based on the last-selected month, not the currently-viewed month", async () => {
-    const user = userEvent.setup();
-    const onchange = vi.fn();
-    const onclose = vi.fn();
-
-    // Start with April 10 selected (use a fixed month/year so the test is stable).
-    const existing = new Date(2026, 3, 10, 0, 0, 0).toISOString();
-    render(DateTimePicker, {
-      value: existing,
-      has_time: false,
-      onchange,
-      onclose,
-    });
-
-    // Pick the 20th while viewing April.
-    await user.click(screen.getByRole("button", { name: "20" }));
-
-    // Flip to May.
-    const nextMonthBtn = screen.getAllByRole("button").find((b) =>
-      b.querySelector("svg path[d*='M7.21 14.77']"),
-    ) as HTMLElement;
-    await user.click(nextMonthBtn);
-
-    // Hit Done.
-    await user.click(screen.getByRole("button", { name: "Done" }));
-
-    expect(onchange).toHaveBeenCalled();
-    const committed = new Date(onchange.mock.calls[0][0] as string);
-    // April == month 3 (0-indexed). We navigated to May without reselecting,
-    // so the committed date must still be April 20.
-    expect(committed.getMonth()).toBe(3);
-    expect(committed.getDate()).toBe(20);
-    expect(committed.getFullYear()).toBe(2026);
-  });
-});
@@ -17,15 +17,10 @@
 
   async function handleSubmit() {
     if (!title.trim()) return;
-    // Pass date/has_time into createTask directly so the date can't be lost
-    // if a second round-trip to update() failed after the create succeeded.
-    await app.createTask(
-      title.trim(),
-      description.trim() || undefined,
-      undefined,
-      date,
-      dateHasTime,
-    );
+    const created = await app.createTask(title.trim(), description.trim() || undefined);
+    if (date && created) {
+      await app.updateTask({ ...created, date: date, has_time: dateHasTime });
+    }
     title = "";
     description = "";
     date = null;
@@ -25,20 +25,6 @@
     return () => clearTimeout(saveTimer);
   });
 
-  // Re-sync local editor state when the task prop's content changes from elsewhere
-  // (sync pull, external file edit). Skip the reset while the user is actively
-  // editing an input so we don't clobber in-progress typing.
-  $effect(() => {
-    const incomingTitle = task.title;
-    const incomingDesc = task.description;
-    const active = document.activeElement;
-    const editing = active instanceof HTMLInputElement || active instanceof HTMLTextAreaElement;
-    if (!editing) {
-      if (incomingTitle !== title) title = incomingTitle;
-      if (incomingDesc !== description) description = incomingDesc;
-    }
-  });
-
   let otherLists = $derived(app.lists.filter((l) => l.id !== app.activeListId));
 
   function handleHeaderMouseDown(e: MouseEvent) {
@@ -78,12 +64,10 @@
 
   async function executeDelete() {
     confirmDelete = false;
-    // Cascade: delete subtasks first. Bail out on first failure so we don't
-    // remove the parent while orphaning subtasks; the error is already surfaced.
-    for (const s of subtasks) {
-      if (!(await app.deleteTask(s.id))) return;
-    }
-    if (await app.deleteTask(task.id)) onback();
+    // Cascade: delete subtasks first
+    for (const s of subtasks) await app.deleteTask(s.id);
+    await app.deleteTask(task.id);
+    onback();
   }
 
   function handleMenuClickOutside(e: MouseEvent) {
@@ -120,12 +104,7 @@
   async function executeDeleteCompletedSubtasks() {
     confirmDeleteCompleted = false;
     showSubtaskMenu = false;
-    // Snapshot — completedSubtasks is reactive and shrinks as we delete.
-    // Bail on first failure so we don't silently leave a partial delete.
-    const targets = [...completedSubtasks];
-    for (const s of targets) {
-      if (!(await app.deleteTask(s.id))) return;
-    }
+    for (const s of completedSubtasks) await app.deleteTask(s.id);
   }
 
   function handleSubtaskMenuClickOutside(e: MouseEvent) {
@@ -1,97 +0,0 @@
-import { describe, it, expect } from "vitest";
-import { groupTasksByDate } from "./grouping";
-import type { Task } from "./types";
-
-// 2026-04-17 12:00 local time — "today" in the fixtures below.
-const NOW = new Date(2026, 3, 17, 12, 0, 0);
-
-function task(partial: Partial<Task> & { id: string }): Task {
-  return {
-    id: partial.id,
-    title: partial.title ?? partial.id,
-    description: "",
-    status: "backlog",
-    date: partial.date ?? null,
-    has_time: partial.has_time ?? false,
-    version: 1,
-    parent_id: null,
-    ...partial,
-  };
-}
-
-describe("groupTasksByDate", () => {
-  it("returns an empty array when there are no pending tasks", () => {
-    expect(groupTasksByDate([], NOW)).toEqual([]);
-  });
-
-  it("puts 'No Date' last — regression: was first, burying urgent tasks", () => {
-    const tasks = [
-      task({ id: "overdue", date: "2026-04-15T00:00:00Z" }),
-      task({ id: "no-date" }),
-      task({ id: "today", date: "2026-04-17T09:00:00Z" }),
-    ];
-    const labels = groupTasksByDate(tasks, NOW).map((g) => g.label);
-    expect(labels).toEqual(["Overdue", "Today", "No Date"]);
-  });
-
-  it("orders dated buckets: Overdue, Today, Tomorrow, future…, then No Date", () => {
-    const tasks = [
-      task({ id: "nd1" }),
-      task({ id: "future", date: "2026-04-20T00:00:00Z" }),
-      task({ id: "tomorrow", date: "2026-04-18T00:00:00Z" }),
-      task({ id: "today", date: "2026-04-17T09:00:00Z" }),
-      task({ id: "overdue", date: "2026-04-10T00:00:00Z" }),
-    ];
-    const labels = groupTasksByDate(tasks, NOW).map((g) => g.label);
-    expect(labels[0]).toBe("Overdue");
-    expect(labels[1]).toBe("Today");
-    expect(labels[2]).toBe("Tomorrow");
-    // One future day label between tomorrow and No Date
-    expect(labels[labels.length - 1]).toBe("No Date");
-    expect(labels).toHaveLength(5);
-  });
-
-  it("drops empty buckets", () => {
-    const tasks = [task({ id: "t1", date: "2026-04-17T08:00:00Z" })];
-    expect(groupTasksByDate(tasks, NOW).map((g) => g.label)).toEqual(["Today"]);
-  });
-
-  it("sorts tasks within a bucket by due time ascending, stable on ties", () => {
-    const tasks = [
-      task({ id: "b", date: "2026-04-17T15:00:00Z", has_time: true }),
-      task({ id: "a", date: "2026-04-17T09:00:00Z", has_time: true }),
-      task({ id: "c", date: "2026-04-17T15:00:00Z", has_time: true }),
-    ];
-    const today = groupTasksByDate(tasks, NOW).find((g) => g.label === "Today")!;
-    expect(today.tasks.map((t) => t.id)).toEqual(["a", "b", "c"]);
-  });
-
-  it("places a task with today's date but time before 'now' in the Today bucket (not Overdue)", () => {
-    const tasks = [task({ id: "earlier-today", date: "2026-04-17T08:00:00Z" })];
-    const groups = groupTasksByDate(tasks, NOW);
-    expect(groups.map((g) => g.label)).toEqual(["Today"]);
-  });
-
-  it("preserves No Date order as given by the caller", () => {
-    const tasks = [
-      task({ id: "z" }),
-      task({ id: "a" }),
-      task({ id: "m" }),
-    ];
-    const nd = groupTasksByDate(tasks, NOW).find((g) => g.label === "No Date")!;
-    expect(nd.tasks.map((t) => t.id)).toEqual(["z", "a", "m"]);
-  });
-
-  it("groups multiple tasks on the same future day under one label", () => {
-    const tasks = [
-      task({ id: "f1", date: "2026-04-25T09:00:00Z", has_time: true }),
-      task({ id: "f2", date: "2026-04-25T14:00:00Z", has_time: true }),
-    ];
-    const groups = groupTasksByDate(tasks, NOW);
-    const future = groups.find((g) => g.date?.getDate() === 25);
-    expect(future).toBeDefined();
-    expect(future!.tasks.map((t) => t.id)).toEqual(["f1", "f2"]);
-    // And it comes before No Date (which is absent here).
-    expect(groups).toHaveLength(1);
-  });
-});
@@ -1,70 +0,0 @@
-import type { Task } from "./types";
-
-export type TaskGroup = { label: string; tasks: Task[]; date: Date | null };
-
-/**
- * Group pending tasks into date buckets for the "group by date" view.
- *
- * Order:
- *   Overdue → Today → Tomorrow → future days (chronological) → No Date
- *
- * Within each dated bucket tasks sort by due date+time ascending, with the
- * original `pendingTasks` index as a stable tiebreaker. "No Date" preserves
- * the caller-supplied order.
- */
-export function groupTasksByDate(pendingTasks: Task[], now: Date = new Date()): TaskGroup[] {
-  const todayStart = new Date(now.getFullYear(), now.getMonth(), now.getDate());
-  const tomorrowStart = new Date(todayStart);
-  tomorrowStart.setDate(todayStart.getDate() + 1);
-
-  const overdue: Task[] = [];
-  const today: Task[] = [];
-  const tomorrow: Task[] = [];
-  const futureByDay = new Map<string, { date: Date; tasks: Task[] }>();
-  const noDate: Task[] = [];
-
-  for (const task of pendingTasks) {
-    if (!task.date) {
-      noDate.push(task);
-    } else {
-      const d = new Date(task.date);
-      const dayStart = new Date(d.getFullYear(), d.getMonth(), d.getDate());
-      if (dayStart < todayStart) overdue.push(task);
-      else if (dayStart.getTime() === todayStart.getTime()) today.push(task);
-      else if (dayStart.getTime() === tomorrowStart.getTime()) tomorrow.push(task);
-      else {
-        const key = dayStart.toISOString();
-        if (!futureByDay.has(key)) futureByDay.set(key, { date: dayStart, tasks: [] });
-        futureByDay.get(key)!.tasks.push(task);
-      }
-    }
-  }
-
-  const taskOrderIndex = new Map(pendingTasks.map((t, i) => [t.id, i]));
-  const sortByDue = (a: Task, b: Task) => {
-    const dateDiff = new Date(a.date!).getTime() - new Date(b.date!).getTime();
-    if (dateDiff !== 0) return dateDiff;
-    return (taskOrderIndex.get(a.id) ?? 0) - (taskOrderIndex.get(b.id) ?? 0);
-  };
-  overdue.sort(sortByDue);
-  today.sort(sortByDue);
-  tomorrow.sort(sortByDue);
-
-  const groups: TaskGroup[] = [];
-  if (overdue.length) groups.push({ label: "Overdue", tasks: overdue, date: null });
-  if (today.length) groups.push({ label: "Today", tasks: today, date: todayStart });
-  if (tomorrow.length) groups.push({ label: "Tomorrow", tasks: tomorrow, date: tomorrowStart });
-
-  const currentYear = now.getFullYear();
-  for (const [, { date, tasks }] of [...futureByDay.entries()].sort(([a], [b]) => a.localeCompare(b))) {
-    tasks.sort(sortByDue);
-    const opts: Intl.DateTimeFormatOptions = date.getFullYear() !== currentYear
-      ? { weekday: "short", month: "short", day: "numeric", year: "numeric" }
-      : { weekday: "short", month: "short", day: "numeric" };
-    groups.push({ label: date.toLocaleDateString(undefined, opts), tasks, date });
-  }
-
-  if (noDate.length) groups.push({ label: "No Date", tasks: noDate, date: null });
-
-  return groups;
-}
@@ -1,38 +0,0 @@
-import { describe, it, expect } from "vitest";
-import { workspaceNameFromPath } from "./paths";
-
-describe("workspaceNameFromPath", () => {
-  it("returns the last path component of a POSIX path", () => {
-    expect(workspaceNameFromPath("/home/me/Tasks")).toBe("Tasks");
-  });
-
-  it("strips a trailing slash (regression: used to fall back to 'workspace')", () => {
-    expect(workspaceNameFromPath("/home/me/Tasks/")).toBe("Tasks");
-  });
-
-  it("strips multiple trailing slashes", () => {
-    expect(workspaceNameFromPath("/home/me/Tasks///")).toBe("Tasks");
-  });
-
-  it("handles Windows-style backslash paths", () => {
-    expect(workspaceNameFromPath("C:\\Users\\me\\Tasks")).toBe("Tasks");
-  });
-
-  it("strips a trailing backslash on Windows paths", () => {
-    expect(workspaceNameFromPath("C:\\Users\\me\\Tasks\\")).toBe("Tasks");
-  });
-
-  it("handles mixed separators", () => {
-    expect(workspaceNameFromPath("C:\\Users/me\\Tasks")).toBe("Tasks");
-  });
-
-  it("falls back to 'workspace' when the path has no usable tail", () => {
-    expect(workspaceNameFromPath("/")).toBe("workspace");
-    expect(workspaceNameFromPath("\\")).toBe("workspace");
-    expect(workspaceNameFromPath("")).toBe("workspace");
-  });
-
-  it("preserves names with spaces", () => {
-    expect(workspaceNameFromPath("/home/me/My Tasks/")).toBe("My Tasks");
-  });
-});
@@ -1,9 +0,0 @@
-/**
- * Derive a workspace display name from a folder path picked via the file
- * dialog. Handles both `/` and `\` separators and tolerates trailing
- * separators (e.g. `"/home/me/Tasks/"` → `"Tasks"`, not `"workspace"`).
- */
-export function workspaceNameFromPath(folder: string): string {
-  const parts = folder.replace(/[\\/]+$/, "").split(/[\\/]/);
-  return parts[parts.length - 1] || "workspace";
-}
@@ -15,29 +15,14 @@
   let webdavUser = $state("");
   let webdavPass = $state("");
   let testStatus = $state<"idle" | "testing" | "ok" | "fail">("idle");
-  let credsLoaded = $state(false);
 
   let renaming = $state(false);
   let renameValue = $state("");
-  let renameInput = $state<HTMLInputElement | null>(null);
   let showKebab = $state(false);
   let confirmRename = $state(false);
 
-  // Imperative focus — Svelte's native autofocus attribute is unreliable
-  // for inputs that appear only via conditional blocks.
   $effect(() => {
-    if (renaming && renameInput) {
-      renameInput.focus();
-      renameInput.select();
-    }
-  });
-
-  // Load stored credentials exactly once for this workspace. Previously this
-  // ran on every `ws.webdav_url` change, which silently clobbered in-progress
-  // user edits whenever any other setting updated the config.
-  $effect(() => {
-    if (credsLoaded || !ws?.webdav_url) return;
-    credsLoaded = true;
+    if (!ws?.webdav_url) return;
     webdavUrl = ws.webdav_url;
     try {
      const domain = new URL(ws.webdav_url).hostname;
@@ -50,12 +35,6 @@
     } catch {}
   });
 
-  // Any edit invalidates a prior test so users can't Save a config they
-  // haven't validated since changing it.
-  function markDirty() {
-    if (testStatus !== "idle") testStatus = "idle";
-  }
-
   async function testConnection() {
     testStatus = "testing";
     try {
@@ -72,12 +51,6 @@
 
   async function saveWebdav() {
     if (!webdavUrl.trim()) return;
-    // Require a successful test so a typo'd URL can't silently point the
-    // workspace at a dead server.
-    if (testStatus !== "ok") {
-      await testConnection();
-      if (testStatus !== "ok") return;
-    }
     await invoke("set_webdav_config", {
       workspaceId,
       webdavUrl: webdavUrl.trim(),
@@ -143,11 +116,11 @@
       {#if renaming}
         <input
           type="text"
-          bind:this={renameInput}
           bind:value={renameValue}
           class="w-full bg-transparent text-xl font-bold outline-none"
           onkeydown={(e) => { if (e.key === "Enter") handleRename(); if (e.key === "Escape") { renaming = false; } }}
           onblur={handleRename}
+          autofocus
         />
       {:else}
         <p class="text-xl font-bold">{ws?.name}</p>
@@ -199,7 +172,6 @@
       <input
         type="url"
         bind:value={webdavUrl}
-        oninput={markDirty}
         placeholder="https://dav.example.com/tasks/"
         class="mb-3 w-full rounded-lg border border-border-light bg-transparent px-3 py-2 text-sm outline-none focus:border-primary dark:border-border-dark"
       />
@@ -208,7 +180,6 @@
       <input
         type="text"
         bind:value={webdavUser}
-        oninput={markDirty}
         class="mb-3 w-full rounded-lg border border-border-light bg-transparent px-3 py-2 text-sm outline-none focus:border-primary dark:border-border-dark"
       />
 
@@ -216,7 +187,6 @@
       <input
         type="password"
         bind:value={webdavPass}
-        oninput={markDirty}
         class="mb-4 w-full rounded-lg border border-border-light bg-transparent px-3 py-2 text-sm outline-none focus:border-primary dark:border-border-dark"
       />
 
@@ -226,7 +196,7 @@
         disabled={!webdavUrl.trim()}
         class="rounded-lg border border-border-light px-4 py-2 text-sm font-medium hover:bg-black/5 disabled:opacity-40 dark:border-border-dark dark:hover:bg-white/10"
       >
-        {testStatus === "testing" ? "Testing…" : testStatus === "ok" ? "Connected" : testStatus === "fail" ? "Failed — Retry" : "Test Connection"}
+        {testStatus === "testing" ? "Testing..." : testStatus === "ok" ? "Connected" : testStatus === "fail" ? "Failed -- Retry" : "Test Connection"}
       </button>
       <button
         onclick={saveWebdav}
@@ -5,7 +5,6 @@
   import { app } from "../stores/app.svelte";
   import { getCurrentWindow } from "@tauri-apps/api/window";
   import { platform } from "@tauri-apps/plugin-os";
-  import { workspaceNameFromPath } from "../paths";
 
   let { cancellable = false }: { cancellable?: boolean } = $props();
 
@@ -72,11 +71,27 @@
     const selected = await open({ directory: true, multiple: false });
     if (!selected) return;
     const folder = selected as string;
-    await app.addWorkspace(workspaceNameFromPath(folder), folder);
+    const parts = folder.replace(/\\/g, "/").split("/");
+    const wsName = parts[parts.length - 1] || "workspace";
+    await app.addWorkspace(wsName, folder);
   }
 
   // ── WebDAV handlers ───────────────────────────────────────────────
 
+  async function testConnection() {
+    testStatus = "testing";
+    try {
+      await invoke("test_webdav_connection", {
+        url: webdavUrl,
+        username: webdavUser,
+        password: webdavPass,
+      });
+      testStatus = "ok";
+    } catch {
+      testStatus = "fail";
+    }
+  }
+
   async function connectAndBrowse() {
     testStatus = "testing";
     try {
@@ -3,7 +3,7 @@
   import TaskItem from "../components/TaskItem.svelte";
   import TaskDetailView from "../components/TaskDetailView.svelte";
   import NewTaskInput, { newTaskState } from "../components/NewTaskInput.svelte";
-  import ConfirmDialog, { isConfirmDialogOpen } from "../components/ConfirmDialog.svelte";
+  import ConfirmDialog from "../components/ConfirmDialog.svelte";
   import SettingsScreen from "./SettingsScreen.svelte";
   import { getCurrentWindow } from "@tauri-apps/api/window";
   import { platform } from "@tauri-apps/plugin-os";
@@ -18,15 +18,10 @@
   let parentTask = $derived(taskStack.length >= 1 ? app.tasks.find(t => t.id === taskStack[0]) ?? null : null);
   let subtaskDetail = $derived(taskStack.length >= 2 ? app.tasks.find(t => t.id === taskStack[1]) ?? null : null);
 
-  // Clear taskStack when the viewed task no longer exists (e.g. deleted or list switched).
-  // Handles both the parent-gone case (clear entirely) and the subtask-gone case
-  // (collapse back to parent detail) so an externally deleted subtask doesn't leave
-  // the slider parked over a blank third panel.
+  // Clear taskStack when the viewed task no longer exists (e.g. deleted or list switched)
   $effect(() => {
     if (taskStack.length > 0 && !parentTask) {
       taskStack = [];
-    } else if (taskStack.length >= 2 && !subtaskDetail) {
-      taskStack = taskStack.slice(0, 1);
     }
   });
 
@@ -53,12 +48,10 @@
   let showWorkspacePicker = $state(false);
 
   let newListName = $state("");
-  let newListInput = $state<HTMLInputElement | null>(null);
   let showCompleted = $state(false);
   let completedVisible = $state(false);
   let renamingListId = $state<string | null>(null);
   let renameValue = $state("");
-  let renameListInput = $state<HTMLInputElement | null>(null);
   let showListMenu = $state(false);
   let showSubtasks = $state(false);
   let confirmDeleteList = $state(false);
@@ -80,20 +73,6 @@
     return () => window.removeEventListener("resize", handleResize);
   });
 
-  // Focus the new-list input when it appears. Svelte's native `autofocus`
-  // attribute is unreliable for conditional blocks, so focus imperatively.
-  $effect(() => {
-    if (showNewList && newListInput) newListInput.focus();
-  });
-
-  // Same imperative-focus trick for the inline list-rename input.
-  $effect(() => {
-    if (renamingListId && renameListInput) {
-      renameListInput.focus();
-      renameListInput.select();
-    }
-  });
-
   async function handleNewList() {
     if (!newListName.trim()) return;
@@ -109,12 +88,7 @@
 
   async function executeDeleteCompleted() {
     confirmDeleteCompleted = false;
-    // Snapshot targets first — deletes mutate app.completedTasks reactively.
-    // Bail on first failure so we don't silently leave a partial delete.
-    const targets = [...app.completedTasks];
-    for (const t of targets) {
-      if (!(await app.deleteTask(t.id))) return;
-    }
+    for (var t of app.completedTasks) await app.deleteTask(t.id);
   }
 
   function promptDeleteList() {
@@ -154,9 +128,6 @@
 
   function handleKeydown(e: KeyboardEvent) {
     if (e.key !== "Escape") return;
-    // Defer to any open ConfirmDialog — it installs a capture-phase listener
-    // that dismisses itself; we must not also pop the task-detail view behind it.
-    if (isConfirmDialogOpen()) return;
     if (showSettings) { showSettings = false; return; }
     if (taskStack.length > 0) { closeDetail(); return; }
     if (showListMenu) { showListMenu = false; return; }
@@ -396,7 +367,7 @@
       <div class="flex-1 overflow-y-auto py-2">
         {#each app.lists as list (list.id)}
           <button
-            onclick={() => { app.selectList(list.id); taskStack = []; showCompleted = false; completedVisible = false; closeDrawer(); }}
+            onclick={() => { app.selectList(list.id); taskStack = []; closeDrawer(); }}
             class="group flex w-full items-center gap-2 px-5 py-2.5 text-left text-sm hover:bg-black/5 dark:hover:bg-white/10 {list.id === app.activeListId ? 'font-bold' : ''}"
           >
             {#if list.id === app.activeListId}
@@ -417,7 +388,6 @@
       {#if showNewList}
         <div class="flex gap-2 px-1">
           <input
-            bind:this={newListInput}
             type="text"
             bind:value={newListName}
             placeholder="List name"
@@ -640,11 +610,11 @@
       {#if renamingListId === app.activeListId}
         <input
           type="text"
-          bind:this={renameListInput}
           bind:value={renameValue}
           class="w-full bg-transparent text-xl font-bold outline-none"
           onkeydown={(e) => { if (e.key === "Enter") handleRenameList(); if (e.key === "Escape") renamingListId = null; }}
           onblur={handleRenameList}
+          autofocus
         />
       {:else}
         <p class="text-xl font-bold">{app.activeList?.title ?? "Tasks"}</p>
@ -657,16 +627,7 @@
|
||||||
{#if app.lists.length === 0}
|
{#if app.lists.length === 0}
|
||||||
<div class="flex h-full flex-col items-center justify-center p-8 text-center">
|
<div class="flex h-full flex-col items-center justify-center p-8 text-center">
|
||||||
<p class="text-lg font-medium opacity-60">No lists yet</p>
|
<p class="text-lg font-medium opacity-60">No lists yet</p>
|
||||||
{#if app.isGoogleTasks}
|
<p class="mt-1 text-sm opacity-40">Tap the list name above to create one</p>
|
||||||
<p class="mt-1 text-sm opacity-40">Lists will appear after your next sync.</p>
|
|
||||||
{:else}
|
|
||||||
<button
|
|
||||||
onclick={() => { showDrawer = true; showNewList = true; }}
|
|
||||||
class="mt-4 rounded-lg bg-primary px-4 py-2 text-sm font-medium text-white hover:bg-primary-hover"
|
|
||||||
>
|
|
||||||
Create a list
|
|
||||||
</button>
|
|
||||||
{/if}
|
|
||||||
</div>
|
</div>
|
||||||
{:else if !app.activeListId}
|
{:else if !app.activeListId}
|
||||||
<div class="flex h-full items-center justify-center opacity-40">
|
<div class="flex h-full items-center justify-center opacity-40">
|
||||||
|
|
|
||||||
|
|
@@ -8,15 +8,11 @@ import type {
   Screen,
   SyncResult,
 } from "../types";
-import { groupTasksByDate, type TaskGroup } from "../grouping";

-// Listen for file system changes from the backend watcher. Guard against
-// firing while the user is on the setup/missing screens — loadLists would
-// fail (no workspace) and a debouncedSync against a non-synced workspace
-// would be wasted work.
+// Listen for file system changes from the backend watcher.
 listen("fs-changed", () => {
-  if (!hasWorkspace || screen !== "tasks") return;
   loadLists();
+  // Debounced sync for WebDAV workspaces on local file changes
   if (isSyncedWorkspace) debouncedSync();
 });

@@ -56,9 +52,64 @@ let activeList = $derived(lists.find((l) => l.id === activeListId) ?? null);
 let pendingTasks = $derived(tasks.filter((t) => t.status === "backlog" && !t.parent_id));
 let completedTasks = $derived(tasks.filter((t) => t.status === "completed" && !t.parent_id));

+type TaskGroup = { label: string; tasks: Task[]; date: Date | null };
+
 let groupedPendingTasks = $derived.by((): TaskGroup[] | null => {
   if (!activeList?.group_by_date) return null;
-  return groupTasksByDate(pendingTasks);
+  const now = new Date();
+  const todayStart = new Date(now.getFullYear(), now.getMonth(), now.getDate());
+  const tomorrowStart = new Date(todayStart);
+  tomorrowStart.setDate(todayStart.getDate() + 1);
+
+  const overdue: Task[] = [];
+  const today: Task[] = [];
+  const tomorrow: Task[] = [];
+  const futureByDay = new Map<string, { date: Date; tasks: Task[] }>();
+  const noDate: Task[] = [];
+
+  for (const task of pendingTasks) {
+    if (!task.date) {
+      noDate.push(task);
+    } else {
+      const d = new Date(task.date);
+      const dayStart = new Date(d.getFullYear(), d.getMonth(), d.getDate());
+      if (dayStart < todayStart) overdue.push(task);
+      else if (dayStart.getTime() === todayStart.getTime()) today.push(task);
+      else if (dayStart.getTime() === tomorrowStart.getTime()) tomorrow.push(task);
+      else {
+        const key = dayStart.toISOString();
+        if (!futureByDay.has(key)) futureByDay.set(key, { date: dayStart, tasks: [] });
+        futureByDay.get(key)!.tasks.push(task);
+      }
+    }
+  }
+
+  const taskOrderIndex = new Map(pendingTasks.map((t, i) => [t.id, i]));
+  const sortByDue = (a: Task, b: Task) => {
+    const dateDiff = new Date(a.date!).getTime() - new Date(b.date!).getTime();
+    if (dateDiff !== 0) return dateDiff;
+    return (taskOrderIndex.get(a.id) ?? 0) - (taskOrderIndex.get(b.id) ?? 0);
+  };
+  overdue.sort(sortByDue);
+  today.sort(sortByDue);
+  tomorrow.sort(sortByDue);
+
+  const groups: TaskGroup[] = [];
+  if (noDate.length) groups.push({ label: "No Date", tasks: noDate, date: null });
+  if (overdue.length) groups.push({ label: "Overdue", tasks: overdue, date: null });
+  if (today.length) groups.push({ label: "Today", tasks: today, date: todayStart });
+  if (tomorrow.length) groups.push({ label: "Tomorrow", tasks: tomorrow, date: tomorrowStart });
+
+  const currentYear = now.getFullYear();
+  for (const [, { date, tasks }] of [...futureByDay.entries()].sort(([a], [b]) => a.localeCompare(b))) {
+    tasks.sort(sortByDue);
+    const opts: Intl.DateTimeFormatOptions = date.getFullYear() !== currentYear
+      ? { weekday: "short", month: "short", day: "numeric", year: "numeric" }
+      : { weekday: "short", month: "short", day: "numeric" };
+    groups.push({ label: date.toLocaleDateString(undefined, opts), tasks, date });
+  }
+
+  return groups;
 });

 // Build a map of parent_id -> children for subtask hierarchy
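The day-bucketing rule that the branch adds inside `groupedPendingTasks` can be exercised on its own. A minimal sketch, assuming a pared-down `Task` with only the `id` and `date` fields the rule touches (the real type has more):

```typescript
// Minimal sketch of the day-bucketing rule used by groupedPendingTasks.
// `Task` here is a stand-in with just the fields the rule needs.
type Task = { id: string; date: string | null };

function bucketFor(task: Task, now: Date): "No Date" | "Overdue" | "Today" | "Tomorrow" | "Future" {
  if (!task.date) return "No Date";
  const todayStart = new Date(now.getFullYear(), now.getMonth(), now.getDate());
  const tomorrowStart = new Date(todayStart);
  tomorrowStart.setDate(todayStart.getDate() + 1);
  const d = new Date(task.date);
  // Comparing day-start timestamps makes the rule immune to the
  // time-of-day component of the task's date.
  const dayStart = new Date(d.getFullYear(), d.getMonth(), d.getDate());
  if (dayStart < todayStart) return "Overdue";
  if (dayStart.getTime() === todayStart.getTime()) return "Today";
  if (dayStart.getTime() === tomorrowStart.getTime()) return "Tomorrow";
  return "Future";
}
```

Truncating both sides to local day-start before comparing keeps a task due late tonight in "Today" rather than leaking it into "Overdue" or "Tomorrow".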
@@ -187,17 +238,11 @@ async function removeWorkspace(id: string) {
   try {
     await invoke("remove_workspace", { id });
    config = await invoke<AppConfig>("get_config");
-    activeListId = null;
-    tasks = [];
-    lists = [];
-    // Switch to the next available workspace rather than dumping the user
-    // to the setup screen when they still have other workspaces.
-    const remaining = Object.keys(config?.workspaces ?? {});
-    if (remaining.length > 0) {
-      await switchWorkspace(remaining[0]);
-      screen = "tasks";
-    } else {
+    if (!hasWorkspace) {
       screen = "setup";
+      lists = [];
+      tasks = [];
+      activeListId = null;
     }
   } catch (e) {
     error = String(e);

@@ -264,13 +309,7 @@ async function deleteList(id: string) {
   }
 }

-async function createTask(
-  title: string,
-  description?: string,
-  parentId?: string,
-  date?: string | null,
-  hasTime?: boolean,
-): Promise<Task | null> {
+async function createTask(title: string, description?: string, parentId?: string): Promise<Task | null> {
   if (!activeListId) return null;
   try {
     const task = await invoke<Task>("create_task", {

@@ -278,8 +317,6 @@ async function createTask(
       title,
       description: description ?? "",
       parentId: parentId ?? null,
-      date: date ?? null,
-      hasTime: hasTime ?? false,
     });
     tasks = parentId ? [task, ...tasks] : [...tasks, task];
     error = null;

@@ -329,15 +366,13 @@ async function reorderTask(taskId: string, newPosition: number) {
   }
 }

-async function deleteTask(taskId: string): Promise<boolean> {
-  if (!activeListId) return false;
+async function deleteTask(taskId: string) {
+  if (!activeListId) return;
   try {
     await invoke("delete_task", { listId: activeListId, taskId });
     tasks = tasks.filter((t) => t.id !== taskId);
-    return true;
   } catch (e) {
     error = String(e);
-    return false;
   }
 }

@@ -398,11 +433,7 @@ async function triggerSync() {
     await loadLists();
   } catch (e) {
     const msg = String(e);
-    // Narrow phrases so that a legitimate server-side error containing a
-    // word like "network" or "refused" in its description isn't silently
-    // swallowed as an offline blip. Only treat obvious connectivity failures
-    // as transient.
-    const isTransient = /(^|\W)(timed? out|timeout|connection (refused|reset|timed out|aborted)|connect error|network (is )?unreachable|no route to host|host (not found|is unreachable)|dns|enotfound|econnrefused|etimedout|ehostunreach|enetunreach)(\W|$)/i.test(msg);
+    const isTransient = /timeout|connect|network|unreachable|refused/i.test(msg);
     syncStatus = isTransient ? "offline" : "error";
     // Only show the error banner for non-transient failures; connectivity issues just update the status dot
     if (!isTransient) error = msg;

@@ -418,7 +449,7 @@ function debouncedSync() {

 function restartSyncInterval() {
   if (_syncInterval) clearInterval(_syncInterval);
-  const secs = _appFocused ? syncIntervalSecs : syncIntervalUnfocusedSecs;
+  var secs = _appFocused ? syncIntervalSecs : syncIntervalUnfocusedSecs;
   _syncInterval = setInterval(triggerSync, secs * 1000);
 }
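The two `isTransient` tests in the `triggerSync` hunk classify messages differently when an error merely mentions a connectivity word. A sketch contrasting them; both regexes are copied from the diff, and the two sample messages are invented:

```typescript
// Broad test (branch side): any mention of a connectivity word counts.
const broad = /timeout|connect|network|unreachable|refused/i;

// Narrow test (main side): only whole connectivity phrases count, so a
// server-side error that merely contains "network" is still surfaced.
const narrow = /(^|\W)(timed? out|timeout|connection (refused|reset|timed out|aborted)|connect error|network (is )?unreachable|no route to host|host (not found|is unreachable)|dns|enotfound|econnrefused|etimedout|ehostunreach|enetunreach)(\W|$)/i;

// A genuine connectivity failure matches both classifiers.
const genuineOutage = "connect error: connection refused";
// A server-side rejection that happens to contain "network" only
// matches the broad one, which would hide it as an offline blip.
const serverError = "upload rejected: network quota exceeded for this account";
```

This is why the audit-side comment insists on narrow phrases: the broad pattern turns real server errors into silent "offline" states.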
@@ -540,10 +571,22 @@ async function addGoogleTasksWorkspace(

 async function forgetMissingWorkspace() {
   if (!missingWorkspace) return;
-  // removeWorkspace handles switching to the next available workspace (or
-  // falling back to the setup screen when none remain); just delegate.
   await removeWorkspace(missingWorkspace);
   missingWorkspace = null;
+  config = await invoke<AppConfig>("get_config");
+  if (hasWorkspace) {
+    // Switch to the next available workspace
+    const nextName = Object.keys(config!.workspaces)[0];
+    if (nextName) {
+      await switchWorkspace(nextName);
+      screen = "tasks";
+      return;
+    }
+  }
+  screen = "setup";
+  lists = [];
+  tasks = [];
+  activeListId = null;
 }

 function setScreen(s: Screen) {
@@ -1 +0,0 @@
-import "@testing-library/jest-dom/vitest";

@@ -1,4 +1,3 @@
-/// <reference types="vitest/config" />
 import { defineConfig } from "vite";
 import { svelte } from "@sveltejs/vite-plugin-svelte";
 import tailwindcss from "@tailwindcss/vite";

@@ -15,17 +14,4 @@ export default defineConfig({
     hmr: host ? { protocol: "ws", host, port: 1421 } : undefined,
     watch: { ignored: ["**/src-tauri/**"] },
   },
-  test: {
-    environment: "jsdom",
-    globals: true,
-    setupFiles: ["./src/test/setup.ts"],
-    include: ["src/**/*.{test,spec}.{ts,svelte}"],
-    // Resolve Svelte's client (browser) entry under Vitest — without the
-    // browser condition mount() picks up Svelte's SSR export and throws
-    // lifecycle_function_unavailable.
-    server: { deps: { inline: ["@testing-library/svelte"] } },
-  },
-  resolve: {
-    conditions: process.env.VITEST ? ["browser"] : [],
-  },
 });
@@ -6,7 +6,6 @@ pub mod group;
 pub mod sync;

 use onyx_core::{AppConfig, TaskRepository};
-use onyx_core::config::WorkspaceConfig;
 use anyhow::{Context, Result};
 use std::path::PathBuf;

@@ -24,89 +23,21 @@ pub fn save_config(config: &AppConfig) -> Result<()> {
     config.save_to_file(&path).context("Failed to save config")
 }

-/// Resolve a user-supplied identifier to (id, WorkspaceConfig). Accepts either
-/// the workspace's display name or its UUID. Falls back to the current
-/// workspace when `identifier` is `None`.
-pub fn resolve_workspace(config: &AppConfig, identifier: Option<&str>) -> Result<(String, WorkspaceConfig)> {
-    if let Some(s) = identifier {
-        // Try by UUID first (exact match on map key), then fall back to name lookup.
-        if let Some(ws) = config.get_workspace(s) {
-            return Ok((s.to_string(), ws.clone()));
-        }
-        let (id, ws) = config.find_by_name(s)
-            .ok_or_else(|| anyhow::anyhow!("Workspace '{}' not found", s))?;
-        Ok((id.clone(), ws.clone()))
-    } else {
-        let (id, ws) = config.get_current_workspace()
-            .context("No workspace set. Run 'onyx workspace add <name> <path>' to create one, or 'onyx workspace switch <name>' to select one.")?;
-        Ok((id.clone(), ws.clone()))
-    }
-}
-
-pub fn get_repository(workspace_identifier: Option<String>) -> Result<(TaskRepository, String)> {
+pub fn get_repository(workspace_name: Option<String>) -> Result<(TaskRepository, String)> {
     let config = load_config()?;
-    let (_id, workspace_config) = resolve_workspace(&config, workspace_identifier.as_deref())?;
-    let name = workspace_config.name.clone();
+    let (name, workspace_config) = if let Some(name) = workspace_name {
+        let workspace_config = config.get_workspace(&name)
+            .ok_or_else(|| anyhow::anyhow!("Workspace '{}' not found", name))?;
+        (name, workspace_config.clone())
+    } else {
+        let (name, workspace_config) = config.get_current_workspace()
+            .context("No workspace set. Use 'onyx init' to create one.")?;
+        (name.clone(), workspace_config.clone())
+    };

     let repo = TaskRepository::new(workspace_config.path.clone())
         .context(format!("Failed to open workspace '{}'", name))?;

     Ok((repo, name))
 }
-
-#[cfg(test)]
-mod tests {
-    use super::*;
-
-    fn make_config_with(ws: &[(&str, &str)]) -> (AppConfig, Vec<String>) {
-        let mut config = AppConfig::new();
-        let ids: Vec<String> = ws.iter()
-            .map(|(name, path)| config.add_workspace(WorkspaceConfig::new(name.to_string(), PathBuf::from(path))))
-            .collect();
-        (config, ids)
-    }
-
-    #[test]
-    fn resolve_by_name() {
-        let (config, _ids) = make_config_with(&[("dev", "/tmp/dev"), ("home", "/tmp/home")]);
-        let (id, ws) = resolve_workspace(&config, Some("dev")).unwrap();
-        assert_eq!(ws.name, "dev");
-        assert!(config.workspaces.contains_key(&id));
-    }
-
-    #[test]
-    fn resolve_by_uuid() {
-        let (config, ids) = make_config_with(&[("dev", "/tmp/dev")]);
-        let target = ids[0].clone();
-        let (id, ws) = resolve_workspace(&config, Some(&target)).unwrap();
-        assert_eq!(id, target);
-        assert_eq!(ws.name, "dev");
-    }
-
-    #[test]
-    fn resolve_unknown_identifier_errors() {
-        let (config, _ids) = make_config_with(&[("dev", "/tmp/dev")]);
-        let err = resolve_workspace(&config, Some("ghost")).unwrap_err();
-        assert!(err.to_string().contains("Workspace 'ghost' not found"));
-    }
-
-    #[test]
-    fn resolve_falls_back_to_current() {
-        let (mut config, ids) = make_config_with(&[("a", "/tmp/a"), ("b", "/tmp/b")]);
-        config.set_current_workspace(ids[1].clone()).unwrap();
-        let (id, ws) = resolve_workspace(&config, None).unwrap();
-        assert_eq!(id, ids[1]);
-        assert_eq!(ws.name, "b");
-    }
-
-    #[test]
-    fn resolve_no_current_gives_actionable_message() {
-        let config = AppConfig::new();
-        let err = resolve_workspace(&config, None).unwrap_err();
-        let msg = err.to_string();
-        // The message should point the user at the right sub-commands, not
-        // at the obsolete 'onyx init' suggestion.
-        assert!(msg.contains("workspace add") || msg.contains("workspace switch"),
-            "expected actionable message, got: {msg}");
-    }
-}
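The removed `resolve_workspace` tries an exact UUID key hit before a display-name lookup, so a UUID can never be shadowed by a workspace that happens to be named like one. The same ordering can be sketched against a plain map; the `Workspace` shape and function name here are illustrative, not the crate's API:

```typescript
// Resolve an identifier against workspaces keyed by UUID: an exact key hit
// wins (it is unambiguous by construction), otherwise fall back to a
// display-name lookup, which may fail or be ambiguous.
type Workspace = { name: string; path: string };

function resolveWorkspace(workspaces: Map<string, Workspace>, identifier: string): string {
  if (workspaces.has(identifier)) return identifier; // UUID hit on the map key
  const matches = [...workspaces.entries()].filter(([, ws]) => ws.name === identifier);
  if (matches.length === 0) throw new Error(`Workspace '${identifier}' not found`);
  if (matches.length > 1) throw new Error(`Ambiguous name '${identifier}'; use the workspace ID`);
  return matches[0][0];
}
```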
@@ -2,8 +2,22 @@ use anyhow::{Context, Result};
 use colored::Colorize;
 use onyx_core::sync::{SyncMode, sync_workspace, get_sync_status};
 use onyx_core::webdav::{WebDavClient, store_credentials, load_credentials};
+use onyx_core::config::AppConfig;
 use crate::output;
-use super::{load_config, save_config, resolve_workspace};
+use super::{load_config, save_config};

+/// Resolve a workspace name to (id, config). Falls back to current workspace if name is None.
+fn resolve_workspace(config: &AppConfig, name: Option<&str>) -> Result<(String, onyx_core::config::WorkspaceConfig)> {
+    if let Some(name) = name {
+        let (id, ws) = config.find_by_name(name)
+            .ok_or_else(|| anyhow::anyhow!("Workspace '{}' not found", name))?;
+        Ok((id.clone(), ws.clone()))
+    } else {
+        let (id, ws) = config.get_current_workspace()
+            .context("No workspace set. Use 'onyx init' to create one.")?;
+        Ok((id.clone(), ws.clone()))
+    }
+}
+
 /// Run sync setup: prompt for URL, username, password, test connection, store credentials.
 pub fn setup(workspace_name: Option<String>) -> Result<()> {
@@ -119,26 +119,13 @@ pub fn edit(task_id_str: String, workspace: Option<String>) -> Result<()> {
     let (list_id, task) = find_task(&lists, task_id)
         .ok_or_else(|| anyhow::anyhow!("Task not found: {}", task_id_str))?;

-    // Create temporary file with task content. On Unix, open with 0600 so
-    // other local users on a shared system can't read the task body off /tmp
-    // while the editor is running.
+    // Create temporary file with task content
     let temp_dir = std::env::temp_dir();
     let temp_file = temp_dir.join(format!("onyx-{}.md", task.id));

+    // Write current task content to temp file
     let content = format!("# {}\n\n{}", task.title, task.description);
-    {
-        use std::io::Write;
-        let mut opts = std::fs::OpenOptions::new();
-        opts.write(true).create(true).truncate(true);
-        #[cfg(unix)]
-        {
-            use std::os::unix::fs::OpenOptionsExt;
-            opts.mode(0o600);
-        }
-        let mut f = opts.open(&temp_file)
-            .with_context(|| format!("Failed to create {}", temp_file.display()))?;
-        f.write_all(content.as_bytes())?;
-    }
+    std::fs::write(&temp_file, content)?;

     // Get editor from environment
     let editor = std::env::var("EDITOR").unwrap_or_else(|_| {
@@ -30,21 +30,11 @@ pub fn add(name: String, path: String) -> Result<()> {
     // Add workspace
     let id = config.add_workspace(WorkspaceConfig::new(name.clone(), path_buf.clone()));

-    // Select the new workspace as current when none was previously set, so the
-    // very next command doesn't fail with "No workspace set".
-    let made_current = config.current_workspace.is_none();
-    if made_current {
-        config.set_current_workspace(id.clone())?;
-    }
-
     // Save config
     save_config(&config)?;

     output::success(&format!("Added workspace \"{}\" ({}) at {}", name, &id[..8], path_buf.display()));
     output::success("Created default list \"My Tasks\"");
-    if made_current {
-        output::success(&format!("Set \"{}\" as the current workspace", name));
-    }

     Ok(())
 }

@@ -74,20 +64,15 @@ pub fn list() -> Result<()> {
     Ok(())
 }

-/// Resolve a user-supplied identifier to a workspace ID. Accepts either the
-/// display name or the UUID. Errors if not found or ambiguous.
-fn resolve_name(config: &onyx_core::config::AppConfig, identifier: &str) -> Result<String> {
-    // Direct UUID hit on the map key — unambiguous.
-    if config.workspaces.contains_key(identifier) {
-        return Ok(identifier.to_string());
-    }
+/// Resolve a workspace name to its ID. Errors if not found or ambiguous.
+fn resolve_name(config: &onyx_core::config::AppConfig, name: &str) -> Result<String> {
     let matches: Vec<_> = config.workspaces.iter()
-        .filter(|(_, ws)| ws.name == identifier)
+        .filter(|(_, ws)| ws.name == name)
         .collect();
     match matches.len() {
-        0 => anyhow::bail!("Workspace '{}' not found", identifier),
+        0 => anyhow::bail!("Workspace '{}' not found", name),
         1 => Ok(matches[0].0.clone()),
-        n => anyhow::bail!("Ambiguous: {} workspaces named '{}'. Use the workspace ID instead.", n, identifier),
+        n => anyhow::bail!("Ambiguous: {} workspaces named '{}'. Use the workspace ID instead.", n, name),
     }
 }
@@ -3,7 +3,6 @@ mod output;

 use anyhow::Result;
 use clap::{Parser, Subcommand};
-use colored::Colorize;
 use commands::*;

 #[derive(Parser)]

@@ -198,24 +197,7 @@ enum GroupCommands {
     },
 }

-fn main() {
-    match run() {
-        Ok(()) => {}
-        Err(e) => {
-            // Print user-friendly error chain (no backtrace). Programming-bug
-            // panics still surface through their default handler.
-            eprintln!("{}: {}", "Error".red().bold(), e);
-            let mut cause = e.source();
-            while let Some(c) = cause {
-                eprintln!("  caused by: {}", c);
-                cause = c.source();
-            }
-            std::process::exit(1);
-        }
-    }
-}
-
-fn run() -> Result<()> {
+fn main() -> Result<()> {
     let cli = Cli::parse();

     match cli.command {
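The removed `main` wrapper walks `e.source()` to print an error plus its chain of underlying causes. JavaScript carries an analogous chain through `Error.cause`; a sketch of the same walk (not the CLI's code, just the pattern):

```typescript
// Walk an error's cause chain, mirroring the removed Rust loop over
// `e.source()`: one line for the top-level message, one per cause.
function formatErrorChain(err: Error): string[] {
  const lines = [`Error: ${err.message}`];
  let cause: unknown = (err as { cause?: unknown }).cause;
  while (cause instanceof Error) {
    lines.push(`  caused by: ${cause.message}`);
    cause = (cause as { cause?: unknown }).cause;
  }
  return lines;
}
```

Printing the chain and exiting with a non-zero status, as the removed wrapper did, keeps user-facing errors readable while leaving genuine bugs to the default panic/exception handler.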
@@ -4,15 +4,20 @@ use serde::{Deserialize, Serialize};
 use uuid::Uuid;
 use crate::error::{Error, Result};

-#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Default)]
+#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
 #[serde(rename_all = "lowercase")]
 pub enum WorkspaceMode {
-    #[default]
     Local,
     Webdav,
     GoogleTasks,
 }

+impl Default for WorkspaceMode {
+    fn default() -> Self {
+        Self::Local
+    }
+}
+
 #[derive(Debug, Clone, Serialize, Deserialize)]
 pub struct WorkspaceConfig {
     pub name: String,

@@ -116,7 +121,13 @@ impl AppConfig {
         std::fs::create_dir_all(parent)?;
     }
     let content = serde_json::to_string_pretty(&self)?;
-    crate::storage::atomic_write(path, content.as_bytes())?;
+    // Atomic write: write to temp file then rename to prevent corruption on crash
+    let temp = path.with_extension("tmp");
+    std::fs::write(&temp, &content)?;
+    if let Err(e) = std::fs::rename(&temp, path) {
+        let _ = std::fs::remove_file(&temp);
+        return Err(e.into());
+    }
     Ok(())
 }
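The write-temp-then-rename sequence in the `save_to_file` hunk is portable. A Node sketch, under the assumption that the temp file and the target live on the same filesystem, where `rename` replaces the target in a single step so readers never observe a half-written file:

```typescript
import { writeFileSync, renameSync, unlinkSync } from "node:fs";

// Write `data` to `path` atomically: land the bytes in a sibling temp
// file first, then rename over the destination in one operation. On
// failure, make a best-effort attempt to remove the temp file.
function atomicWrite(path: string, data: string): void {
  const temp = `${path}.tmp`;
  writeFileSync(temp, data);
  try {
    renameSync(temp, path);
  } catch (e) {
    try { unlinkSync(temp); } catch { /* best-effort cleanup */ }
    throw e;
  }
}
```

A crash between the two calls leaves a stale `.tmp` file behind but never a truncated target, which is the property the diff's comment is after.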
@@ -358,15 +358,8 @@ pub async fn sync_google_tasks(
             list_meta.task_order = task_order;
             list_meta.updated_at = Utc::now();

-            match serde_json::to_string_pretty(&list_meta) {
-                Ok(meta_content) => {
-                    if let Err(e) = atomic_write(&listdata_path, meta_content.as_bytes()) {
-                        errors.push(format!("Failed to write metadata for list '{}': {}", gt_list.title, e));
-                    }
-                }
-                Err(e) => {
-                    errors.push(format!("Failed to serialize metadata for list '{}': {}", gt_list.title, e));
-                }
+            if let Ok(meta_content) = serde_json::to_string_pretty(&list_meta) {
+                let _ = atomic_write(&listdata_path, meta_content.as_bytes());
             }
         }

@@ -381,15 +374,8 @@ pub async fn sync_google_tasks(
         RootMetadata::default()
     };
     root_meta.list_order = new_list_order;
-    match serde_json::to_string_pretty(&root_meta) {
-        Ok(meta_content) => {
-            if let Err(e) = atomic_write(&root_meta_path, meta_content.as_bytes()) {
-                errors.push(format!("Failed to write workspace metadata: {}", e));
-            }
-        }
-        Err(e) => {
-            errors.push(format!("Failed to serialize workspace metadata: {}", e));
-        }
+    if let Ok(meta_content) = serde_json::to_string_pretty(&root_meta) {
+        let _ = atomic_write(&root_meta_path, meta_content.as_bytes());
     }

     Ok(GoogleSyncResult { downloaded, errors })
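On the main side of these hunks, metadata failures are pushed into the `errors` vec already carried by `GoogleSyncResult` instead of being discarded with `let _ =`. The shape of that pattern, with an illustrative `SyncResult` and step list standing in for the real sync:

```typescript
// Collect non-fatal step failures into the result instead of swallowing
// them, so a sync can report partial success alongside what went wrong.
type SyncResult = { downloaded: number; errors: string[] };

function runSteps(steps: Array<() => void>): SyncResult {
  const result: SyncResult = { downloaded: 0, errors: [] };
  for (const step of steps) {
    try {
      step();
      result.downloaded += 1;
    } catch (e) {
      // A failed metadata write does not abort the sync; it is recorded.
      result.errors.push(String(e));
    }
  }
  return result;
}
```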
@@ -26,10 +26,7 @@ impl TaskRepository {
     // Task operations
     pub fn create_task(&mut self, list_id: Uuid, mut task: Task) -> Result<Task> {
         self.storage.write_task(list_id, &task)?;
-        // Mirror the saturating increment that FileSystemStorage applies to
-        // the on-disk frontmatter so the in-memory Task matches what was
-        // written and doesn't wrap at u64::MAX.
-        task.version = task.version.saturating_add(1);
+        task.version += 1;
         Ok(task)
     }

@@ -157,7 +154,7 @@ mod tests {

     // Create a task
     let task = Task::new("Test Task".to_string());
-    let _ = repo.create_task(list.id, task).unwrap();
+    let created_task = repo.create_task(list.id, task).unwrap();

     // List tasks
     let tasks = repo.list_tasks(list.id).unwrap();

@@ -165,20 +162,6 @@ mod tests {
     assert_eq!(tasks[0].title, "Test Task");
 }

-#[test]
-fn test_create_task_saturates_version_at_max() {
-    let temp_dir = TempDir::new().unwrap();
-    let mut repo = TaskRepository::init(temp_dir.path().to_path_buf()).unwrap();
-    let list = repo.create_list("L".to_string()).unwrap();
-
-    // Simulate a task that is already at u64::MAX. A plain `+=` would
-    // overflow — saturating_add must clamp.
-    let mut task = Task::new("max".to_string());
-    task.version = u64::MAX;
-    let created = repo.create_task(list.id, task).unwrap();
-    assert_eq!(created.version, u64::MAX);
-}
-
 #[test]
 fn test_update_task() {
     let temp_dir = TempDir::new().unwrap();
@@ -236,8 +236,12 @@ impl FileSystemStorage {
         Ok(path)
     }

+    fn sanitize_filename(name: &str) -> String {
+        crate::sanitize_filename(name)
+    }
+
     fn task_file_path(&self, list_dir: &Path, task: &Task) -> PathBuf {
-        let safe_title = crate::sanitize_filename(&task.title);
+        let safe_title = Self::sanitize_filename(&task.title);
         let filename = if safe_title.is_empty() {
             task.id.to_string()
         } else {

@@ -377,9 +381,7 @@ impl Storage for FileSystemStorage {
         }

         let content = self.write_markdown_with_frontmatter(task)?;
-        // Atomic write: a crash mid-write must not leave a truncated .md file
-        // that then fails YAML parsing on the next list_tasks/read_task.
-        atomic_write(&task_path, content.as_bytes())?;
+        fs::write(&task_path, content)?;

         // Update list metadata to include this task in task_order if not already present
         let mut list_metadata = self.read_list_metadata(list_id)?;

@@ -453,42 +455,27 @@ impl Storage for FileSystemStorage {
         }

         let mut tasks = Vec::new();
-        for (_id, entries) in by_id {
-            // `by_id` only inserts non-empty groups, so each `entries` has at
-            // least one element.
-            let task = if entries.len() > 1 {
-                // Read mtime once per file so sort_by doesn't hit the filesystem
-                // O(n log n) times and can't produce inconsistent orderings if a
-                // file is touched mid-sort.
-                let mut with_mtime: Vec<(PathBuf, Task, Option<std::time::SystemTime>)> = entries
-                    .into_iter()
-                    .map(|(p, t)| {
-                        let mtime = fs::metadata(&p).and_then(|m| m.modified()).ok();
-                        (p, t, mtime)
-                    })
-                    .collect();
-                with_mtime.sort_by(|a, b| {
+        for (_id, mut entries) in by_id {
+            if entries.len() > 1 {
+                entries.sort_by(|a, b| {
                     // Primary: highest version first
                     let version_cmp = b.1.version.cmp(&a.1.version);
                     if version_cmp != std::cmp::Ordering::Equal {
                         return version_cmp;
                     }
                     // Tiebreaker: most recently modified file first
-                    b.2.cmp(&a.2)
+                    let mtime_a = fs::metadata(&a.0).and_then(|m| m.modified()).ok();
+                    let mtime_b = fs::metadata(&b.0).and_then(|m| m.modified()).ok();
+                    mtime_b.cmp(&mtime_a)
                 });
-                for (stale_path, _, _) in with_mtime.drain(1..) {
+                for (stale_path, _) in entries.drain(1..) {
                     if let Err(e) = fs::remove_file(&stale_path) {
                         eprintln!("Warning: failed to remove stale duplicate task file {:?}: {}", stale_path, e);
                     }
                 }
-                let (_, t, _) = with_mtime.into_iter().next()
-                    .expect("dedup group is non-empty after drain(1..)");
-                t
-            } else {
-                let (_, t) = entries.into_iter().next()
-                    .expect("dedup group is non-empty");
-                t
-            };
+            }
+            let (_, task) = entries.into_iter().next()
+                .ok_or_else(|| Error::InvalidData("Empty dedup entries for task".to_string()))?;
             tasks.push(task);
         }
@@ -5,7 +5,7 @@ use serde::{Deserialize, Serialize};
 use sha2::{Sha256, Digest};
 use uuid::Uuid;
 use crate::error::{Error, Result};
-use crate::storage::{atomic_write, ListMetadata, TaskFrontmatter};
+use crate::storage::{ListMetadata, TaskFrontmatter};
 use crate::webdav::WebDavClient;

 /// File-based lock to prevent concurrent sync operations on the same workspace.

@@ -204,9 +204,8 @@ pub fn compute_sync_actions(
        }

        // Remote present, local gone, base known: local was deleted
-       (None, Some(r), Some(b)) => {
-           let remote_changed = r.size != b.size
-               || !timestamps_equal(r.last_modified.as_deref(), b.modified_at.as_deref());
+       (None, Some(_), Some(b)) => {
+           let remote_changed = remote.is_some_and(|r| r.size != b.size || !timestamps_equal(r.last_modified.as_deref(), b.modified_at.as_deref()));
            if remote_changed {
                // deleted locally + modified remotely -> download (remote wins)
                actions.push(SyncAction::Download { path: path.to_string() });
@@ -230,22 +229,6 @@ pub fn compute_sync_actions(
    actions
 }

-/// Remove base entries for files that are gone from both local and remote.
-/// `compute_sync_actions` emits no action for the both-deleted case, so without
-/// this pass those entries would persist in `.syncstate.json` indefinitely.
-fn prune_orphan_bases(
-    sync_state: &mut SyncState,
-    local_files: &[LocalFileInfo],
-    remote_files: &[RemoteFileSnapshot],
-) {
-    let live_paths: std::collections::HashSet<&str> = local_files
-        .iter()
-        .map(|f| f.path.as_str())
-        .chain(remote_files.iter().map(|f| f.path.as_str()))
-        .collect();
-    sync_state.files.retain(|p, _| live_paths.contains(p.as_str()));
-}
-
 /// Compare two timestamps for equality by parsing both, tolerating format differences.
 fn timestamps_equal(a: Option<&str>, b: Option<&str>) -> bool {
    match (a, b) {
@@ -621,12 +604,6 @@ async fn sync_workspace_inner(
        }
    };

-   // Purge orphan base entries: files we previously tracked that are now gone
-   // from both local and remote. Without this, `.syncstate.json` accumulates
-   // ghost entries forever because the both-deleted diff case emits no action
-   // and so nothing else would clean them.
-   prune_orphan_bases(&mut sync_state, &local_files, &remote_files);
-
    // Compute actions from three-way diff
    let fresh_actions = compute_sync_actions(&local_files, &remote_files, &sync_state);
@@ -724,20 +701,19 @@ async fn execute_action(
                Err(e) => return Err(e.into()),
            };
            let checksum = compute_checksum(&data);
-           let len = data.len() as u64;

            if let Some(parent) = path_parent(path) {
                client.ensure_dir(parent).await?;
            }

            report(&format!(" ^ Uploading {}", path));
-           client.put_file(path, data).await?;
+           client.put_file(path, data.clone()).await?;

            // Record in sync state using local file metadata
            let modified = std::fs::metadata(&local_path).ok()
                .and_then(|m| m.modified().ok())
                .map(|t| { let dt: DateTime<Utc> = t.into(); dt.to_rfc3339() });
-           sync_state.record_file(path, &checksum, modified.as_deref(), len);
+           sync_state.record_file(path, &checksum, modified.as_deref(), data.len() as u64);
        }

        SyncAction::Conflict { path } => {
@@ -767,9 +743,8 @@ async fn execute_action(
            } else {
                report(&format!(" ! Conflict: remote wins for {}, recovering local as duplicate", path));

-               // Remote wins: overwrite local with remote content. Atomic
-               // so a crash mid-sync cannot leave a truncated file behind.
-               atomic_write(&local_path, &remote_data)?;
+               // Remote wins: overwrite local with remote content
+               std::fs::write(&local_path, &remote_data)?;
                let modified = std::fs::metadata(&local_path).ok()
                    .and_then(|m| m.modified().ok())
                    .map(|t| { let dt: DateTime<Utc> = t.into(); dt.to_rfc3339() });

@@ -777,7 +752,7 @@ async fn execute_action(

                // For .md task files inside a list dir, create a duplicate of the local version
                let parts: Vec<&str> = path.split('/').collect();
-               if parts.len() == 2 && parts[1].ends_with(".md") {
+               if parts.len() == 2 && parts[1].ends_with(".md") && parts[1] != ".listdata.json" {
                    let local_content = String::from_utf8_lossy(&local_data);
                    if let Ok((frontmatter, description)) = parse_frontmatter_for_conflict(&local_content) {
                        let original_id = frontmatter.id;

@@ -800,7 +775,7 @@ async fn execute_action(
                        let list_dir = workspace_path.join(parts[0]);
                        let dup_filename = format!("{}.md", new_id);
                        let dup_path = list_dir.join(&dup_filename);
-                       atomic_write(&dup_path, new_content.as_bytes())?;
+                       std::fs::write(&dup_path, &new_content)?;

                        // Insert new task adjacent to original in .listdata.json.
                        // If metadata update fails, remove the duplicate file to

@@ -816,7 +791,7 @@ async fn execute_action(
                                .unwrap_or(metadata.task_order.len());
                            metadata.task_order.insert(insert_pos, new_id);
                            let json = serde_json::to_string_pretty(&metadata)?;
-                           atomic_write(&listdata_path, json.as_bytes())?;
+                           std::fs::write(&listdata_path, json)?;
                            Ok(())
                        })();
                        if let Err(e) = metadata_updated {

@@ -841,7 +816,7 @@ async fn execute_action(
            if let Some(parent) = local_path.parent() {
                std::fs::create_dir_all(parent)?;
            }
-           atomic_write(&local_path, &data)?;
+           std::fs::write(&local_path, &data)?;

            // Record remote's last_modified so next diff won't see a timestamp mismatch
            let modified = remote_meta.get(path.as_str()).and_then(|r| r.last_modified.clone());
@@ -915,15 +890,9 @@ pub fn get_sync_status(workspace_path: &Path) -> Result<SyncStatusInfo> {
        }
    }

-   // Count files in base that are now missing locally (deleted).
-   // Build a set of local paths once so the membership check is O(1) per
-   // tracked file instead of scanning local_files linearly each time.
-   let local_paths: std::collections::HashSet<&str> = local_files
-       .iter()
-       .map(|f| f.path.as_str())
-       .collect();
+   // Count files in base that are now missing locally (deleted)
    for path in sync_state.files.keys() {
-       if !local_paths.contains(path.as_str()) {
+       if !local_files.iter().any(|f| f.path == *path) {
            pending_changes += 1;
        }
    }
@@ -1136,22 +1105,6 @@ mod tests {
        assert!(actions.is_empty());
    }

-   #[test]
-   fn test_prune_orphan_bases() {
-       let mut state = SyncState::default();
-       state.files.insert("kept_local.md".to_string(), make_base("a"));
-       state.files.insert("kept_remote.md".to_string(), make_base("b"));
-       state.files.insert("orphan.md".to_string(), make_base("c"));
-
-       let local = vec![make_local("kept_local.md", "a")];
-       let remote = vec![make_remote("kept_remote.md")];
-       prune_orphan_bases(&mut state, &local, &remote);
-
-       assert!(state.files.contains_key("kept_local.md"));
-       assert!(state.files.contains_key("kept_remote.md"));
-       assert!(!state.files.contains_key("orphan.md"));
-   }
-
    #[test]
    fn test_multiple_files_mixed() {
        let local = vec![

@@ -1183,7 +1136,8 @@ mod tests {
    #[test]
    fn test_sync_state_save_load_roundtrip() {
        let temp_dir = TempDir::new().unwrap();
-       let mut state = SyncState { last_sync: Some(Utc::now()), ..Default::default() };
+       let mut state = SyncState::default();
+       state.last_sync = Some(Utc::now());
        state.record_file("test.md", "abc123", Some("2026-01-01T00:00:00Z"), 42);

        state.save(temp_dir.path()).unwrap();
@@ -448,12 +448,12 @@ pub fn store_credentials(domain: &str, username: &str, password: &str) -> Result

    let user_entry = keyring::Entry::new(&service, "username")
        .map_err(|e| Error::Credential(format!("Failed to create keyring entry: {}", e)))?;
-   user_entry.set_password(username)
+   user_entry.setpassword(username)
        .map_err(|e| Error::Credential(format!("Failed to store username: {}", e)))?;

    let pass_entry = keyring::Entry::new(&scoped_service, "password")
        .map_err(|e| Error::Credential(format!("Failed to create keyring entry: {}", e)))?;
-   pass_entry.set_password(password)
+   pass_entry.setpassword(password)
        .map_err(|e| Error::Credential(format!("Failed to store password: {}", e)))?;

    // Clean up legacy unscoped password entry if present

@@ -478,18 +478,18 @@ pub fn load_credentials(domain: &str) -> Result<(Zeroizing<String>, Zeroizing<St
    let user_entry = keyring::Entry::new(&service, "username")
        .map_err(|e| Error::Credential(format!("Failed to create keyring entry: {}", e)))?;

-   if let Ok(user) = user_entry.get_password() {
+   if let Ok(user) = user_entry.getpassword() {
        // Try scoped password key first (domain+username), fall back to legacy unscoped key
        let scoped_service = format!("com.onyx.webdav.{}::{}", domain, user);
        let found = keyring::Entry::new(&scoped_service, "password")
            .ok()
-           .and_then(|e| e.get_password().ok())
+           .and_then(|e| e.getpassword().ok())
            .map(|p| (p, false))
            .or_else(|| {
                // Migration fallback: try legacy unscoped password entry
                keyring::Entry::new(&service, "password")
                    .ok()
-                   .and_then(|e| e.get_password().ok())
+                   .and_then(|e| e.getpassword().ok())
                    .map(|p| (p, true))
            });

@@ -497,7 +497,7 @@ pub fn load_credentials(domain: &str) -> Result<(Zeroizing<String>, Zeroizing<St
        // Auto-migrate legacy credentials to scoped format
        if needs_migration {
            if let Ok(entry) = keyring::Entry::new(&scoped_service, "password") {
-               let _ = entry.set_password(&pass);
+               let _ = entry.setpassword(&pass);
            }
            if let Ok(legacy) = keyring::Entry::new(&service, "password") {
                let _ = legacy.delete_credential();

@@ -547,7 +547,7 @@ pub fn delete_credentials(domain: &str) -> Result<()> {
    // Load username first so we can delete the scoped password entry
    let username = keyring::Entry::new(&service, "username")
        .ok()
-       .and_then(|e| e.get_password().ok());
+       .and_then(|e| e.getpassword().ok());

    if let Some(user) = &username {
        let scoped_service = format!("com.onyx.webdav.{}::{}", domain, user);
docs/API.md (26 changed lines)

@@ -207,20 +207,6 @@ let list = repo.get_list(list_id)?;
 repo.delete_list(list_id)?;
 ```

-#### Rename List
-
-```rust
-repo.rename_list(list_id, "New Name".to_string())?;
-```
-
-#### Move Task Between Lists
-
-```rust
-// Atomically moves a task from one list to another.
-// If the delete-from-source step fails, the copy in the destination is rolled back.
-repo.move_task(from_list_id, to_list_id, task_id)?;
-```
-
 ### Task Ordering

 #### Reorder Task

@@ -353,14 +339,12 @@ Credentials are stored in the platform keychain (Windows Credential Manager, mac

 ```rust
 use onyx_core::webdav::{store_credentials, load_credentials, delete_credentials};
-use zeroize::Zeroizing;

 // Store credentials
 store_credentials("nextcloud.example.com", "username", "password")?;

-// Load credentials — returns Zeroizing<String> wrappers that wipe memory on drop
-let (username, password): (Zeroizing<String>, Zeroizing<String>) =
-    load_credentials("nextcloud.example.com")?;
+// Load credentials (returns Zeroizing<String> wrappers that wipe memory on drop)
+let (username, password) = load_credentials("nextcloud.example.com")?;

 // Delete credentials
 delete_credentials("nextcloud.example.com")?;

@@ -456,7 +440,7 @@ All metadata and state files use an atomic write pattern (write to `.tmp` then r

 - **List names**: Rejected if they contain `/`, `\`, or `..` components. Canonicalized and verified to stay within workspace root.
 - **Sync paths**: Validated to reject `..` components and backslashes anywhere in the path before any file system operation.
-- **Workspace paths** (Tauri): Rejected if they point to the filesystem root (`/`) or system directories (`/etc`, `/usr`, `/bin`, `/sbin`, `/var`, `/proc`, `/sys`, `/dev`).
+- **Workspace paths** (Tauri): Rejected if they point to system directories (`/etc`, `/usr`, `/bin`, etc.).
 - **Filenames**: Sanitized to replace `/ \ : * ? " < > |` and control characters with `_`.

 ## Example: Complete Workflow

@@ -523,9 +507,9 @@ Key test areas:

 ## Thread Safety

-`TaskRepository` holds its storage as `Box<dyn Storage + Send + Sync>`, so any concrete storage implementation passed in must be `Send + Sync`. Repository instances can be shared across threads behind a `Mutex` — the Tauri GUI uses `Mutex<AppState>` for this purpose.
+The `Storage` trait requires `Send + Sync`, and `TaskRepository` wraps `Box<dyn Storage + Send + Sync>`, so repository instances can be shared across threads behind a `Mutex`. The Tauri GUI uses `Mutex<AppState>` for this purpose.

 For concurrent access:

 1. Wrap `TaskRepository` in `Mutex` or `RwLock` (the Tauri app does this)
-2. Or create separate repository instances per thread. Note that `FileSystemStorage` does not coordinate writes between processes — concurrent multi-process writes to the same workspace are not supported outside the WebDAV sync flow, which uses a `.sync.lock` file.
+2. Or create separate repository instances per thread (file system handles locking)
@@ -27,7 +27,7 @@ cargo run -p onyx-cli -- --help

 # Run the Tauri GUI
 cd apps/tauri && npm install
-npm run tauri dev  # (Wayland: WEBKIT_DISABLE_DMABUF_RENDERER=1 npm run tauri dev)
+npm run tauri dev
 ```

 ## Project Structure

@@ -72,15 +72,10 @@ onyx/
 │   │   ├── main.ts
 │   │   ├── app.css              # Tailwind CSS 4 + theme
 │   │   ├── App.svelte
-│   │   ├── test/
-│   │   │   └── setup.ts
 │   │   └── lib/
 │   │       ├── screens/         # Full-page views
 │   │       ├── components/      # Reusable UI components
 │   │       ├── stores/          # Svelte state (app.svelte.ts)
-│   │       ├── dateFormat.ts    # Date formatting utilities
-│   │       ├── grouping.ts      # Task grouping logic
-│   │       ├── paths.ts         # Path utilities
 │   │       └── types.ts         # TypeScript type definitions
 │   ├── tauri-plugin-credentials/  # Cross-platform credential storage plugin
 │   │   ├── Cargo.toml
BIN  screenshot.png