Compare commits
62 commits: claude/res... → main
Audit.md (33 changed lines)
@@ -1,5 +1,38 @@
# Audit Log

## 2026-04-27

Found and fixed 3 issues:

1. **Perf: needless clone of upload payload** (sync.rs:733) — the `SyncAction::Upload` arm read the file into `data`, computed `compute_checksum(&data)`, then called `client.put_file(path, data.clone())`. The clone existed only because the next statement needed `data.len()` for the sync-state record. Captured `data.len() as u64` into `len` first, moved `data` into `put_file`, and used `len` afterwards — one full byte copy avoided per uploaded file.
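
   In isolation, the pattern looks like this (a standalone sketch; the helper name is illustrative, not the crate's API):

   ```rust
   // Capture what you need from a buffer before moving it, instead of cloning.
   fn upload(buf: Vec<u8>) -> usize {
       buf.len() // stands in for client.put_file(path, data)
   }

   fn main() {
       let data: Vec<u8> = vec![0u8; 1024];
       let len = data.len() as u64; // capture before the move
       let sent = upload(data); // `data` is moved here; no clone needed
       assert_eq!(sent as u64, len);
   }
   ```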

2. **Bug: Google Tasks sync silently drops metadata-write failures** (google_tasks.rs:361, 377) — both `.listdata.json` and `.onyx-workspace.json` were written via `if let Ok(meta_content) = serde_json::to_string_pretty(...) { let _ = atomic_write(...); }`, so a serialization or atomic-write error returned `Ok(GoogleSyncResult { downloaded: N, errors: [] })` even though list/workspace ordering was never persisted. Both writes now push their errors into the `errors` vec already returned in `GoogleSyncResult`.
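
   Sketched standalone (file names and error strings here are illustrative):

   ```rust
   use std::fs;

   fn main() {
       let mut errors: Vec<String> = Vec::new();
       let meta = r#"{"list_order": []}"#; // stands in for serde_json::to_string_pretty(..)

       // Before: `let _ = write(..)` discarded the failure silently.
       // After: push it into the errors vec the caller already returns.
       if let Err(e) = fs::write("/nonexistent-dir/.listdata.json", meta) {
           errors.push(format!("Failed to write .listdata.json: {}", e));
       }
       assert!(!errors.is_empty());
   }
   ```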

3. **Code quality: unreachable dead-error path in storage dedup** (storage.rs:447) — the dedup loop computed `Option<Task>` from each `by_id` group and then `ok_or_else(|| Error::InvalidData("Empty dedup entries for task"))?`. `by_id` is only populated by `entry(uuid).or_default().push(entry)`, so every group has ≥1 element and the `None` branch is unreachable. Replaced the `Option`+`?` with direct `expect` calls (one per branch) that document the non-empty invariant; the loop now yields `Task` directly.
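
   A minimal illustration of that invariant, with hypothetical data rather than the real `Task` type:

   ```rust
   use std::collections::HashMap;

   fn main() {
       let mut by_id: HashMap<u32, Vec<&str>> = HashMap::new();
       for (id, entry) in [(1, "a"), (1, "b"), (2, "c")] {
           // The only population path: every group receives at least one push.
           by_id.entry(id).or_default().push(entry);
       }
       for (id, entries) in &by_id {
           // Non-empty by construction, so `expect` documents the invariant
           // instead of routing through an unreachable error branch.
           let first = entries.first().expect("dedup group is never empty");
           println!("{id} -> {first}");
       }
   }
   ```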

## 2026-04-25

Found and fixed 3 issues:

1. **Perf: O(n²) deletion-detection in `get_sync_status`** (sync.rs:918) — for every path tracked in `sync_state.files`, the loop scanned `local_files` linearly via `.any(|f| f.path == *path)` to decide whether to count it as a deleted-locally pending change. The earlier "modified or new" loop already used the inverse direction with `sync_state.files.get(...)` (O(1)), so the second loop was the inconsistent one. Built a `HashSet<&str>` of local paths once and used `contains` for the membership check.
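
   The membership-check change, reduced to a runnable sketch:

   ```rust
   use std::collections::HashSet;

   fn main() {
       let local_files = ["a.md", "b.md"]; // from the local scan
       let tracked = ["a.md", "b.md", "c.md"]; // paths recorded in sync state

       // Before: a linear .any(..) scan of local_files per tracked path, O(n^2) overall.
       let local: HashSet<&str> = local_files.iter().copied().collect();
       let deleted_locally: Vec<_> = tracked
           .iter()
           .filter(|p| !local.contains(*p)) // O(1) per lookup
           .collect();

       assert_eq!(deleted_locally, vec![&"c.md"]);
   }
   ```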

2. **Perf: cascade delete walks all_tasks per frontier pop** (tauri/lib.rs:460) — `delete_task`'s descendant BFS scanned the full task list on every parent popped from the frontier, making the work O(n × depth). Built a `parent_id -> [child_id]` `HashMap` once, then the BFS visits each descendant in O(1) amortised, dropping total cost to O(n).

3. **Code quality: duplicate atomic-write in `AppConfig::save_to_file`** (config.rs:114) — the function had its own copy of the temp-file + rename + cleanup-on-failure dance even though `storage::atomic_write` is `pub(crate)` and was already shared by `google_tasks.rs`. Replaced the inline implementation with a call to `crate::storage::atomic_write` so the crate has one canonical atomic write path.

## 2026-04-24

Found and fixed 3 issues:

1. **Bug: orphan base entries never cleaned from sync state** (sync.rs) — when a file was deleted both locally and remotely, `compute_sync_actions` emitted no action (the `(None, None, Some(_))` arm), so the base entry in `.syncstate.json` persisted forever. On each subsequent sync the same no-op case fired and the state file grew. Added `prune_orphan_bases` pass in `sync_workspace_inner` that drops base entries not present in either scan.
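
   A sketch of what such a pruning pass does, under assumed stand-in types (the real `prune_orphan_bases` works on the crate's sync-state structs):

   ```rust
   use std::collections::{HashMap, HashSet};

   fn main() {
       // path -> recorded base checksum (stand-in for the real base entries)
       let mut bases: HashMap<String, String> = HashMap::new();
       bases.insert("kept.md".into(), "abc".into());
       bases.insert("orphan.md".into(), "def".into()); // deleted on both sides

       let local: HashSet<&str> = ["kept.md"].into_iter().collect();
       let remote: HashSet<&str> = ["kept.md"].into_iter().collect();

       // Drop base entries present in neither scan so the state file can't grow forever.
       bases.retain(|path, _| local.contains(path.as_str()) || remote.contains(path.as_str()));

       assert!(bases.contains_key("kept.md"));
       assert!(!bases.contains_key("orphan.md"));
   }
   ```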

2. **Code quality: redundant is_some_and on already-matched Option** (sync.rs:208) — the `(None, Some(_), Some(b))` arm re-checked `remote` via `remote.is_some_and(|r| ...)` even though the pattern had just proven `remote` is `Some(_)`. Bound the inner value with `Some(r)` in the pattern and used `r` directly.
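
   The before and after of that match arm, in miniature with illustrative values:

   ```rust
   fn main() {
       let (local, remote, base): (Option<u32>, Option<u32>, Option<u32>) = (None, Some(7), Some(7));

       // Before: (None, Some(_), Some(b)) plus a redundant remote.is_some_and(|r| r == b).
       // After: the pattern itself binds `r`, so no re-check is needed.
       let same_as_base = match (local, remote, base) {
           (None, Some(r), Some(b)) if r == b => true,
           _ => false,
       };
       assert!(same_as_base);
   }
   ```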

3. **Code quality: single-caller sanitize_filename wrapper** (storage.rs) — `FileSystemStorage::sanitize_filename` was a one-line forwarder to `crate::sanitize_filename` with one call site. Inlined the crate call and removed the method.

## 2026-04-20

Found and fixed 4 issues:

1. **Dead code in conflict recovery** (sync.rs:756) — `parts[1] != ".listdata.json"` was unreachable because the branch is already gated on `parts[1].ends_with(".md")`, which `.listdata.json` cannot satisfy. Removed the redundant check.

2. **O(n²) cascade delete** (tauri/lib.rs) — descendant traversal in `delete_task` used `Vec::contains` inside the inner loop, making it quadratic in the number of tasks per list. Swapped the visited set to `HashSet`; `HashSet::insert` folds the contains+push into one call.
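
   The `HashSet::insert` idiom in isolation:

   ```rust
   use std::collections::HashSet;

   fn main() {
       let mut visited: HashSet<u32> = HashSet::new();
       let mut frontier = vec![1u32, 2, 1, 3, 2];
       let mut order = Vec::new();

       while let Some(id) = frontier.pop() {
           // `insert` returns false for duplicates, folding the old
           // `Vec::contains` check and the push into one O(1) call.
           if visited.insert(id) {
               order.push(id);
           }
       }
       assert_eq!(visited.len(), 3);
       println!("visit order: {:?}", order);
   }
   ```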

3. **Silent cascade failure in toggle_task** (tauri/lib.rs) — subtask `update_task` errors were discarded with `let _ = ...`, leaving subtasks stuck at the old status with no UI feedback. Propagate the error so the frontend can surface it.

4. **Duplicated UUID-parse boilerplate** (tauri/lib.rs) — 17 commands repeated `Uuid::parse_str(&x).map_err(|e| e.to_string())?`. Extracted a `parse_uuid` helper so callers read as `let id = parse_uuid(&list_id)?;`.

## 2026-04-15

Found and fixed 4 issues:

@@ -30,7 +30,7 @@ The Tauri dev server runs on port 1422 (`vite.config.ts` and `tauri.conf.json`).
Two-crate workspace (`resolver = "2"`, edition 2021) plus a Tauri app:

- **onyx-core** — Pure Rust library. Storage trait with `FileSystemStorage` implementation, `TaskRepository` (main API), data models, config, error types. No CLI/UI dependencies. `keyring` feature-gated behind `keyring-storage` (default on) for Android compatibility.
- **onyx-cli** — CLI frontend using clap. Commands are in `src/commands/` (init, workspace, list, task, group). Output formatting in `src/output.rs`.
- **onyx-cli** — CLI frontend using clap. Commands are in `src/commands/` (init, workspace, list, task, group, sync). Output formatting in `src/output.rs`.
- **apps/tauri/** — Tauri v2 GUI. Svelte 5 frontend in `src/`, Rust backend in `src-tauri/` with Tauri commands that call into `onyx-core`. `notify` crate feature-gated for Android. `tauri-plugin-credentials/` provides cross-platform credential storage (Android Keystore via EncryptedSharedPreferences, desktop via keyring crate).

### Key patterns

@@ -64,7 +64,7 @@ The GUI uses Svelte 5 runes mode (`$state`, `$derived`, `$effect`, `$props()`).

Pre-alpha. No users, no released builds, no data to migrate. Breaking changes to on-disk formats, config structure, or sync conventions are free — do not add migration logic.

### Current state (2026-04-15)
### Current state (2026-04-27)

- **Phase 1** (Core + CLI): Complete
- **Phase 2** (WebDAV sync): Complete — remote folder browsing, checksum-based conflict resolution, auto-sync lifecycle, per-workspace sync interval

@@ -106,7 +106,7 @@ Pre-alpha. No users, no released builds, no data to migrate. Breaking changes to
- Task deduplication on load (handles sync conflict duplicates)
- Subtask hierarchy: subtask count shown on parent tasks in list, subtask detail via three-panel slide navigation, inline add at top of subtask list (new subtasks prepend), collapsible completed subtasks section, cascade delete (parent deletion removes all subtasks with confirmation warning)
- Custom confirmation dialogs (ConfirmDialog component replaces native confirm())
- Workspace path validation (rejects system directories)
- Workspace path validation (rejects filesystem root `/` and system directories: `/etc`, `/usr`, `/bin`, `/sbin`, `/var`, `/proc`, `/sys`, `/dev`)
- Task detail auto-cleanup (taskStack clears when viewed task is deleted or list switches)
- Swipe gestures on mobile: swipe left/right on a task to toggle completion (swipe direction depends on current status)
- Accessibility: ARIA labels/roles on interactive components, keyboard handlers, `prefers-reduced-motion` CSS support

PLAN.md (20 changed lines)

@@ -532,8 +532,11 @@ pub fn delete_credentials(domain: &str) -> Result<()>;
Add to `onyx-core/Cargo.toml`:
```toml
reqwest = { version = "0.12", features = ["json", "rustls-tls"] }
keyring = "3.0"
# TODO: Evaluate dav-client or implement custom WebDAV
keyring = { version = "3", features = ["apple-native", "windows-native", "sync-secret-service"], optional = true }
zeroize = "1"
sha2 = "0.10"
quick-xml = "0.36"
# WebDAV implemented as custom client using reqwest + quick-xml for PROPFIND parsing
```

### Features

@@ -668,7 +671,6 @@ apps/tauri/
│ │ ├── TaskItem.svelte
│ │ ├── NewTaskInput.svelte
│ │ ├── TaskDetailView.svelte
│ │ ├── BottomSheet.svelte
│ │ ├── ConfirmDialog.svelte
│ │ └── DateTimePicker.svelte
│ └── stores/

@@ -763,7 +765,7 @@ WorkspaceConfig {
- [x] List rename (inline input via list kebab menu in drawer)
- [x] Keyboard shortcuts (Escape closes settings → detail → drawer → menus in priority order)
- [x] Sync status indicators (last-sync time + upload/download counts chip in TasksScreen)
- [x] Push/pull sync mode selection (session-only sync direction selector in SettingsScreen)
- [ ] Push/pull sync mode selection (session-only sync direction selector in SettingsScreen)
- [x] Group-by-date toggle per list (checkmark toggle in list kebab menu)
- [x] Subtask hierarchy (expand/collapse, inline add, cascade toggle/delete)
- [ ] Search/filter tasks

@@ -844,11 +846,11 @@ npm run tauri ios build

#### Features

- [x] Gate file-watcher initialization behind `#[cfg(not(mobile))]`
- [x] Gate file-watcher initialization behind `#[cfg(not(target_os = "android"))]`
- [x] Install Android Studio + NDK, configure env vars
- [x] Add Android Rust targets
- [x] `npm run tauri android init` (generates `gen/android/`)
- [x] Confirm `npm run tauri android build` succeeds
- [ ] `npm run tauri android init` (generates `gen/android/`)
- [ ] Confirm `npm run tauri android build` succeeds
- [ ] Basic smoke test: app launches, workspace setup, create a task
- [ ] Set up macOS CI for iOS builds
- [ ] `npm run tauri ios init` (generates `gen/ios/`)

@@ -1056,6 +1058,6 @@ This project is free and open-source software licensed under GPL v3.

---

**Last Updated**: 2026-04-15
**Document Version**: 4.3
**Last Updated**: 2026-04-27
**Document Version**: 4.5
**Status**: Ready to Implement - Milestone-Driven Plan

README.md (15 changed lines)

@@ -2,6 +2,8 @@

A **local-first, cross-platform tasks application** built with Rust. Inspired by Google Tasks, designed for speed and flexibility.



## Core Principles

- **Local-First**: Your data, your folder, your control

@@ -21,7 +23,10 @@ onyx/
│ └── onyx-cli/ # CLI frontend
├── apps/
│ └── tauri/ # Tauri v2 GUI (Svelte 5 + Tailwind CSS 4)
│ └── tauri-plugin-credentials/ # Cross-platform credential storage plugin
└── docs/
 ├── API.md # Core library API reference
 └── DEVELOPMENT.md # Development guide
```

## Project Status

@@ -29,7 +34,7 @@ onyx/
- **Phase 1** (Core + CLI): Complete
- **Phase 2** (WebDAV Sync): Complete — backend, CLI, and GUI all wired
- **Phase 3** (GUI MVP): Complete
- **Phase 4** (Mobile): In progress — Android preliminaries done (file-watcher gating, tauri-plugin-credentials, safe area insets, Android targets configured); needs build verification and iOS setup
- **Phase 4** (Mobile): In progress — Android preliminaries done (file-watcher gating, `tauri-plugin-credentials`, safe area insets, Android targets configured); needs `tauri android init`, build verification, and iOS setup

### Core Library (`onyx-core`)
- Data models (Task, TaskList, AppConfig, WorkspaceConfig)

@@ -59,13 +64,15 @@ onyx/
- Due date picker/editor with optional time
- Subtask hierarchy with three-panel slide navigation
- Move tasks between lists
- List rename, group-by-date toggle, delete completed tasks
- List rename, workspace rename, group-by-date toggle, delete completed tasks
- Keyboard shortcuts (Escape priority chain)
- WebDAV setup flow with credential auto-population
- File watcher (auto-reloads on external changes)
- Auto-sync with configurable interval, status indicators
- Swipe gestures on mobile (swipe to toggle completion)
- Custom confirmation dialogs
- Safe area insets for mobile (viewport-fit=cover)
- Accessibility: ARIA labels/roles, keyboard handlers, `prefers-reduced-motion` support
- Desktop packaging (Linux: AppImage + .deb; Windows: MSI)

## Development Setup

@@ -213,8 +220,8 @@ cargo test -- --nocapture

## What's Next?

- **Phase 4**: Mobile support (iOS & Android via Tauri v2 mobile)
- **Phase 5**: GUI advanced features (rich markdown editor, search/filter)
- **Phase 4** (in progress): Complete Android build (`tauri android init` + verification), iOS setup on macOS CI
- **Phase 5**: GUI advanced features (rich markdown editor, search/filter, change storage folder)
- **Phase 6**: Mobile polish and platform-specific integrations
- **Phase 7**: Google Tasks importer and unique features

@@ -60,6 +60,11 @@ fn lock_state(state: &Mutex<AppState>) -> Result<std::sync::MutexGuard<'_, AppSt
    state.lock().map_err(|e| format!("State lock poisoned: {}", e))
}

/// Parse a UUID from a string, converting errors to the String format Tauri commands use.
fn parse_uuid(s: &str) -> Result<Uuid, String> {
    Uuid::parse_str(s).map_err(|e| e.to_string())
}

impl AppState {
    /// Persist config to disk, converting errors to String for Tauri commands.
    fn save_config(&self) -> Result<(), String> {

@@ -67,6 +72,25 @@ impl AppState {
    }
}

/// Extract the hostname from a URL (scheme://host/...), used as the credential key.
/// Returns an empty string if the URL has no scheme or host.
fn credential_domain(url: &str) -> String {
    url.split("://")
        .nth(1)
        .and_then(|rest| rest.split('/').next())
        .unwrap_or("")
        .to_string()
}

/// Join a remote base directory with a child path, handling empty base and trailing slashes.
fn join_remote_path(base: &str, child: &str) -> String {
    if base.is_empty() {
        child.to_string()
    } else {
        format!("{}/{}", base.trim_end_matches('/'), child)
    }
}

/// Validate that a workspace path is a reasonable directory and not a system path.
fn validate_workspace_path(path: &str) -> Result<(), String> {
    let p = PathBuf::from(path);

@@ -79,7 +103,10 @@ fn validate_workspace_path(path: &str) -> Result<(), String> {
    #[cfg(unix)]
    {
        let forbidden = ["/", "/etc", "/usr", "/bin", "/sbin", "/var", "/proc", "/sys", "/dev"];
        // Strip trailing slashes, but keep "/" itself — trim_end_matches would
        // collapse it to "" and slip past the forbidden check.
        let canonical = normalized.trim_end_matches('/');
        let canonical = if canonical.is_empty() { "/" } else { canonical };
        if forbidden.contains(&canonical) {
            return Err(format!("Cannot use system directory as workspace: {}", path));
        }

@@ -179,6 +206,13 @@ fn add_workspace(
    state: State<'_, Mutex<AppState>>,
) -> Result<(), String> {
    validate_workspace_path(&path)?;
    // Ensure the path exists and is a valid workspace before persisting the
    // config. Without this, calling add_workspace directly on a missing
    // directory would save the workspace but every subsequent ensure_repo
    // call would fail with "Path does not exist".
    TaskRepository::init(PathBuf::from(&path))
        .map(|_| ())
        .map_err(|e| e.to_string())?;
    let mut s = lock_state(&state)?;
    let ws = WorkspaceConfig::new(name, PathBuf::from(&path));
    let id = s.config.add_workspace(ws);

@@ -256,10 +290,7 @@ async fn rename_workspace(
    let base_url = webdav_url.as_deref().ok_or("No WebDAV URL configured")?;
    let remote_path = webdav_path.as_deref().unwrap_or("");

    let domain = base_url
        .split("://").nth(1)
        .and_then(|rest| rest.split('/').next())
        .unwrap_or("").to_string();
    let domain = credential_domain(base_url);
    let creds = app_handle.state::<Credentials<tauri::Wry>>();
    let (username, password) = creds.load(&domain)?;

@@ -340,7 +371,7 @@ fn delete_list(
    let mut s = lock_state(&state)?;
    ensure_repo(&mut s)?;
    mute_watcher(&mut s);
    let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
    let id = parse_uuid(&list_id)?;
    repo_mut(&mut s)?
        .delete_list(id)
        .map_err(|e| e.to_string())

@@ -355,7 +386,7 @@ fn list_tasks(
) -> Result<Vec<Task>, String> {
    let mut s = lock_state(&state)?;
    ensure_repo(&mut s)?;
    let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
    let id = parse_uuid(&list_id)?;
    repo_ref(&s)?
        .list_tasks(id)
        .map_err(|e| e.to_string())

@@ -367,20 +398,27 @@ fn create_task(
    title: String,
    description: Option<String>,
    parent_id: Option<String>,
    date: Option<chrono::DateTime<chrono::Utc>>,
    has_time: Option<bool>,
    state: State<'_, Mutex<AppState>>,
) -> Result<Task, String> {
    let mut s = lock_state(&state)?;
    ensure_repo(&mut s)?;
    mute_watcher(&mut s);
    let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
    let id = parse_uuid(&list_id)?;
    let mut task = Task::new(title);
    if let Some(desc) = description.filter(|d| !d.is_empty()) {
        task.description = desc;
    }
    if let Some(pid) = parent_id {
        let parent_uuid = Uuid::parse_str(&pid).map_err(|e| e.to_string())?;
        let parent_uuid = parse_uuid(&pid)?;
        task.parent_id = Some(parent_uuid);
    }
    // Accept the date fields at creation time so callers don't have to do a
    // second update() round-trip just to attach a date — which previously
    // dropped the date entirely if the follow-up update failed.
    task.date = date;
    task.has_time = has_time.unwrap_or(false);
    repo_mut(&mut s)?
        .create_task(id, task)
        .map_err(|e| e.to_string())

@@ -395,7 +433,7 @@ fn update_task(
    let mut s = lock_state(&state)?;
    ensure_repo(&mut s)?;
    mute_watcher(&mut s);
    let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
    let id = parse_uuid(&list_id)?;
    repo_mut(&mut s)?
        .update_task(id, task)
        .map_err(|e| e.to_string())

@@ -410,17 +448,36 @@ fn delete_task(
    let mut s = lock_state(&state)?;
    ensure_repo(&mut s)?;
    mute_watcher(&mut s);
    let lid = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
    let tid = Uuid::parse_str(&task_id).map_err(|e| e.to_string())?;
    let lid = parse_uuid(&list_id)?;
    let tid = parse_uuid(&task_id)?;
    let repo = repo_mut(&mut s)?;
    // Cascade-delete subtasks first
    // Cascade-delete the full descendant subtree (not just direct children)
    // so deleting a parent can't leave grandchildren orphaned with a
    // parent_id pointing at a deleted task.
    let all_tasks = repo.list_tasks(lid).map_err(|e| e.to_string())?;
    let child_ids: Vec<Uuid> = all_tasks
        .iter()
        .filter(|t| t.parent_id == Some(tid))
        .map(|t| t.id)
        .collect();
    for child_id in child_ids {
    // Build a parent -> children index in one pass so the BFS below is O(n)
    // instead of O(n * depth) scanning all tasks for each frontier pop.
    let mut children_by_parent: std::collections::HashMap<Uuid, Vec<Uuid>> =
        std::collections::HashMap::new();
    for t in &all_tasks {
        if let Some(pid) = t.parent_id {
            children_by_parent.entry(pid).or_default().push(t.id);
        }
    }
    let mut to_delete: std::collections::HashSet<Uuid> = std::collections::HashSet::new();
    let mut frontier: Vec<Uuid> = vec![tid];
    while let Some(parent) = frontier.pop() {
        if let Some(children) = children_by_parent.get(&parent) {
            for &child_id in children {
                if to_delete.insert(child_id) {
                    frontier.push(child_id);
                }
            }
        }
    }
    // Delete children before the parent so a mid-cascade failure doesn't
    // leave the parent removed but descendants stranded.
    for child_id in to_delete {
        repo.delete_task(lid, child_id).map_err(|e| format!("Failed to delete subtask {}: {}", child_id, e))?;
    }
    repo.delete_task(lid, tid)

@@ -436,8 +493,8 @@ fn toggle_task(
    let mut s = lock_state(&state)?;
    ensure_repo(&mut s)?;
    mute_watcher(&mut s);
    let lid = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
    let tid = Uuid::parse_str(&task_id).map_err(|e| e.to_string())?;
    let lid = parse_uuid(&list_id)?;
    let tid = parse_uuid(&task_id)?;
    let repo = repo_mut(&mut s)?;
    let mut task = repo.get_task(lid, tid).map_err(|e| e.to_string())?;
    match task.status {

@@ -454,7 +511,9 @@ fn toggle_task(
                TaskStatus::Backlog => child.uncomplete(),
                TaskStatus::Completed => child.complete(),
            }
            let _ = repo.update_task(lid, child);
            let child_id = child.id;
            repo.update_task(lid, child)
                .map_err(|e| format!("Failed to cascade to subtask {}: {}", child_id, e))?;
        }
    }
    Ok(task)

@@ -470,8 +529,8 @@ fn reorder_task(
    let mut s = lock_state(&state)?;
    ensure_repo(&mut s)?;
    mute_watcher(&mut s);
    let lid = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
    let tid = Uuid::parse_str(&task_id).map_err(|e| e.to_string())?;
    let lid = parse_uuid(&list_id)?;
    let tid = parse_uuid(&task_id)?;
    repo_mut(&mut s)?
        .reorder_task(lid, tid, new_position)
        .map_err(|e| e.to_string())

@@ -489,9 +548,9 @@ fn move_task(
    let mut s = lock_state(&state)?;
    ensure_repo(&mut s)?;
    mute_watcher(&mut s);
    let from = Uuid::parse_str(&from_list_id).map_err(|e| e.to_string())?;
    let to = Uuid::parse_str(&to_list_id).map_err(|e| e.to_string())?;
    let tid = Uuid::parse_str(&task_id).map_err(|e| e.to_string())?;
    let from = parse_uuid(&from_list_id)?;
    let to = parse_uuid(&to_list_id)?;
    let tid = parse_uuid(&task_id)?;
    repo_mut(&mut s)?
        .move_task(from, to, tid)
        .map_err(|e| e.to_string())

@@ -506,7 +565,7 @@ fn rename_list(
    let mut s = lock_state(&state)?;
    ensure_repo(&mut s)?;
    mute_watcher(&mut s);
    let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
    let id = parse_uuid(&list_id)?;
    repo_mut(&mut s)?
        .rename_list(id, new_name)
        .map_err(|e| e.to_string())

@@ -521,7 +580,7 @@ fn set_group_by_date(
    let mut s = lock_state(&state)?;
    ensure_repo(&mut s)?;
    mute_watcher(&mut s);
    let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
    let id = parse_uuid(&list_id)?;
    repo_mut(&mut s)?
        .set_group_by_date(id, enabled)
        .map_err(|e| e.to_string())

@@ -534,7 +593,7 @@ fn get_group_by_date(
) -> Result<bool, String> {
    let mut s = lock_state(&state)?;
    ensure_repo(&mut s)?;
    let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
    let id = parse_uuid(&list_id)?;
    repo_ref(&s)?
        .get_group_by_date(id)
        .map_err(|e| e.to_string())

@@ -622,10 +681,9 @@ async fn list_remote_folder(
    let dir_entries: Vec<_> = entries.into_iter().filter(|e| e.is_dir).collect();

    // Check all subfolders for .onyx-workspace.json in parallel
    let sub_paths: Vec<_> = dir_entries.iter().map(|entry| {
        if path.is_empty() { entry.path.clone() }
        else { format!("{}/{}", path.trim_end_matches('/'), entry.path) }
    }).collect();
    let sub_paths: Vec<_> = dir_entries.iter()
        .map(|entry| join_remote_path(&path, &entry.path))
        .collect();
    let checks: Vec<_> = sub_paths.iter().map(|sp| {
        client.list_files(sp)
    }).collect();

@@ -657,11 +715,7 @@ async fn inspect_remote_workspace(
    let mut lists = Vec::new();
    for entry in entries {
        if !entry.is_dir { continue; }
        let list_path = if path.is_empty() {
            entry.path.clone()
        } else {
            format!("{}/{}", path.trim_end_matches('/'), entry.path)
        };
        let list_path = join_remote_path(&path, &entry.path);
        let files = client.list_files(&list_path).await.unwrap_or_else(|e| {
            eprintln!("Warning: failed to list remote folder '{}': {}", list_path, e);
            Vec::new()

@@ -697,11 +751,7 @@ async fn create_remote_workspace(
        "list_order": [],
        "last_opened_list": null,
    });
    let file_path = if path.is_empty() {
        ".onyx-workspace.json".to_string()
    } else {
        format!("{}/{}", path.trim_end_matches('/'), ".onyx-workspace.json")
    };
    let file_path = join_remote_path(&path, ".onyx-workspace.json");
    client.put_file(&file_path, serde_json::to_string_pretty(&metadata).map_err(|e| e.to_string())?.into_bytes())
        .await
        .map_err(|e| e.to_string())?;

@@ -735,12 +785,7 @@ fn add_webdav_workspace(
    s.repo = None;

    // Store credentials keyed by hostname
    let domain = webdav_url
        .split("://")
        .nth(1)
        .and_then(|rest| rest.split('/').next())
        .unwrap_or("")
        .to_string();
    let domain = credential_domain(&webdav_url);
    s.save_config()?;
    drop(s);
    let creds = app_handle.state::<Credentials<tauri::Wry>>();

@@ -803,12 +848,7 @@ async fn sync_workspace(
    };

    // Step 2: load credentials
    let domain = webdav_url
        .split("://")
        .nth(1)
        .and_then(|rest| rest.split('/').next())
        .unwrap_or("")
        .to_string();
    let domain = credential_domain(&webdav_url);
    let creds = app_handle.state::<Credentials<tauri::Wry>>();
    let (username, password) = creds.load(&domain)?;

@@ -1,42 +0,0 @@
<script lang="ts">
  import type { Snippet } from "svelte";
  let { onclose, children }: { onclose: () => void; children: Snippet } = $props();
</script>

<!-- Backdrop -->
<div
  class="fixed inset-0 z-40 bg-black/40"
  role="button"
  tabindex="-1"
  aria-label="Close sheet"
  onclick={onclose}
  onkeydown={(e) => { if (e.key === "Escape") onclose(); }}
></div>

<!-- Sheet -->
<div
  role="dialog"
  aria-modal="true"
  class="fixed bottom-0 left-0 right-0 z-50 max-h-[70vh] overflow-y-auto rounded-t-2xl bg-surface-light shadow-xl dark:bg-card-dark animate-slide-up"
>
  <!-- Drag handle -->
  <div class="flex justify-center py-2">
    <div class="h-1 w-8 rounded-full bg-gray-300 dark:bg-gray-600"></div>
  </div>
  {@render children()}
  <div class="h-[env(safe-area-inset-bottom)]"></div>
</div>

<style>
  @keyframes slide-up {
    from {
      transform: translateY(100%);
    }
    to {
      transform: translateY(0);
    }
  }
  .animate-slide-up {
    animation: slide-up 0.25s ease-out;
  }
</style>

@@ -13,6 +13,8 @@
  let viewYear = $state(existing ? existing.getFullYear() : now.getFullYear());
  let viewMonth = $state(existing ? existing.getMonth() : now.getMonth());
  let selectedDay = $state(existing ? existing.getDate() : now.getDate());
  let selectedYear = $state(existing ? existing.getFullYear() : now.getFullYear());
  let selectedMonth = $state(existing ? existing.getMonth() : now.getMonth());
  let includeTime = $state(has_time);
  let selectedHour = $state(existing ? existing.getHours() : now.getHours());
  let selectedMinute = $state(existing ? existing.getMinutes() : 0);

@@ -50,6 +52,8 @@

  function selectDay(day: number) {
    selectedDay = day;
    selectedYear = viewYear;
    selectedMonth = viewMonth;
  }

  function isToday(day: number): boolean {

@@ -57,16 +61,16 @@
  }

  function isSelected(day: number): boolean {
    return selectedDay === day && (!value || (() => {
      const v = new Date(value);
      return v.getFullYear() === viewYear && v.getMonth() === viewMonth;
    })());
    return selectedDay === day && selectedYear === viewYear && selectedMonth === viewMonth;
  }

  function done() {
    const h = includeTime ? selectedHour : 0;
    const m = includeTime ? selectedMinute : 0;
    const iso = new Date(viewYear, viewMonth, selectedDay, h, m).toISOString();
    // Commit based on the last-selected year/month, not the currently-viewed
    // ones — users can navigate months after selecting a day without
    // accidentally shifting the chosen date to the viewed month.
    const iso = new Date(selectedYear, selectedMonth, selectedDay, h, m).toISOString();
    onchange(iso, includeTime);
    dismiss();
  }

@@ -129,9 +133,9 @@
  <button
    onclick={() => selectDay(day)}
    class="mx-auto flex h-8 w-8 items-center justify-center rounded-full text-sm transition-colors
      {selectedDay === day ? 'bg-primary text-white' : ''}
      {isToday(day) && selectedDay !== day ? 'font-bold text-primary' : ''}
      {selectedDay !== day && !isToday(day) ? 'hover:bg-black/5 dark:hover:bg-white/10' : ''}"
      {isSelected(day) ? 'bg-primary text-white' : ''}
      {isToday(day) && !isSelected(day) ? 'font-bold text-primary' : ''}
      {!isSelected(day) && !isToday(day) ? 'hover:bg-black/5 dark:hover:bg-white/10' : ''}"
  >
    {day}
  </button>

apps/tauri/src/lib/components/DateTimePicker.test.ts (new file, 74 lines)

@@ -0,0 +1,74 @@
import { describe, it, expect, vi, beforeEach } from "vitest";
import { render, screen, cleanup } from "@testing-library/svelte";
import userEvent from "@testing-library/user-event";
import DateTimePicker from "./DateTimePicker.svelte";

beforeEach(() => {
  cleanup();
});

describe("DateTimePicker — selected highlight", () => {
  it("only marks the selected day in the month/year that was actually picked", async () => {
    const user = userEvent.setup();
    // Pick a date in the current month so the component opens on it.
    const now = new Date();
    const existing = new Date(now.getFullYear(), now.getMonth(), 15, 0, 0, 0).toISOString();

    render(DateTimePicker, {
      value: existing,
      has_time: false,
      onchange: vi.fn(),
      onclose: vi.fn(),
    });

    // The "15" button for the current month should be rendered with the
    // selected styling (bg-primary).
    const day15 = screen.getByRole("button", { name: "15" });
    expect(day15.className).toMatch(/bg-primary/);

    // Navigate one month forward. The same "15" cell must NOT be marked as
    // selected, because the user hasn't picked a day in that month yet.
    const nextMonthBtn = screen.getAllByRole("button").find((b) =>
      b.querySelector("svg path[d*='M7.21 14.77']"),
    ) as HTMLElement;
    await user.click(nextMonthBtn);

    const nextMonth15 = screen.getByRole("button", { name: "15" });
    expect(nextMonth15.className).not.toMatch(/bg-primary/);
  });

  it("commits based on the last-selected month, not the currently-viewed month", async () => {
    const user = userEvent.setup();
    const onchange = vi.fn();
    const onclose = vi.fn();

    // Start with April 10 selected (use a fixed month/year so the test is stable).
    const existing = new Date(2026, 3, 10, 0, 0, 0).toISOString();
    render(DateTimePicker, {
      value: existing,
      has_time: false,
      onchange,
      onclose,
    });

    // Pick the 20th while viewing April.
    await user.click(screen.getByRole("button", { name: "20" }));

    // Flip to May.
    const nextMonthBtn = screen.getAllByRole("button").find((b) =>
      b.querySelector("svg path[d*='M7.21 14.77']"),
    ) as HTMLElement;
    await user.click(nextMonthBtn);

    // Hit Done.
    await user.click(screen.getByRole("button", { name: "Done" }));

    expect(onchange).toHaveBeenCalled();
    const committed = new Date(onchange.mock.calls[0][0] as string);
    // April == month 3 (0-indexed). We navigated to May without reselecting,
    // so the committed date must still be April 20.
    expect(committed.getMonth()).toBe(3);
    expect(committed.getDate()).toBe(20);
    expect(committed.getFullYear()).toBe(2026);
  });
});

@@ -17,10 +17,15 @@

  async function handleSubmit() {
    if (!title.trim()) return;
    const created = await app.createTask(title.trim(), description.trim() || undefined);
    if (date && created) {
      await app.updateTask({ ...created, date: date, has_time: dateHasTime });
    }
    // Pass date/has_time into createTask directly so the date can't be lost
    // if a second round-trip to update() failed after the create succeeded.
    await app.createTask(
      title.trim(),
      description.trim() || undefined,
      undefined,
      date,
      dateHasTime,
    );
    title = "";
    description = "";
    date = null;

@@ -120,7 +120,12 @@
  async function executeDeleteCompletedSubtasks() {
    confirmDeleteCompleted = false;
    showSubtaskMenu = false;
    for (const s of completedSubtasks) await app.deleteTask(s.id);
    // Snapshot — completedSubtasks is reactive and shrinks as we delete.
    // Bail on first failure so we don't silently leave a partial delete.
    const targets = [...completedSubtasks];
    for (const s of targets) {
      if (!(await app.deleteTask(s.id))) return;
    }
  }

  function handleSubtaskMenuClickOutside(e: MouseEvent) {

@@ -15,14 +15,29 @@
  let webdavUser = $state("");
  let webdavPass = $state("");
  let testStatus = $state<"idle" | "testing" | "ok" | "fail">("idle");
  let credsLoaded = $state(false);

  let renaming = $state(false);
  let renameValue = $state("");
  let renameInput = $state<HTMLInputElement | null>(null);
  let showKebab = $state(false);
  let confirmRename = $state(false);

  // Imperative focus — Svelte's native autofocus attribute is unreliable
  // for inputs that appear only via conditional blocks.
  $effect(() => {
    if (!ws?.webdav_url) return;
    if (renaming && renameInput) {
      renameInput.focus();
      renameInput.select();
    }
  });

  // Load stored credentials exactly once for this workspace. Previously this
  // ran on every `ws.webdav_url` change, which silently clobbered in-progress
  // user edits whenever any other setting updated the config.
  $effect(() => {
    if (credsLoaded || !ws?.webdav_url) return;
    credsLoaded = true;
    webdavUrl = ws.webdav_url;
    try {
      const domain = new URL(ws.webdav_url).hostname;

@@ -35,6 +50,12 @@
    } catch {}
  });

  // Any edit invalidates a prior test so users can't Save a config they
  // haven't validated since changing it.
  function markDirty() {
    if (testStatus !== "idle") testStatus = "idle";
  }

  async function testConnection() {
    testStatus = "testing";
    try {

@@ -51,6 +72,12 @@

  async function saveWebdav() {
    if (!webdavUrl.trim()) return;
    // Require a successful test so a typo'd URL can't silently point the
    // workspace at a dead server.
    if (testStatus !== "ok") {
      await testConnection();
      if (testStatus !== "ok") return;
    }
    await invoke("set_webdav_config", {
      workspaceId,
      webdavUrl: webdavUrl.trim(),

@@ -116,11 +143,11 @@
  {#if renaming}
    <input
      type="text"
      bind:this={renameInput}
      bind:value={renameValue}
      class="w-full bg-transparent text-xl font-bold outline-none"
      onkeydown={(e) => { if (e.key === "Enter") handleRename(); if (e.key === "Escape") { renaming = false; } }}
      onblur={handleRename}
      autofocus
    />
  {:else}
    <p class="text-xl font-bold">{ws?.name}</p>

@@ -172,6 +199,7 @@
    <input
      type="url"
      bind:value={webdavUrl}
      oninput={markDirty}
      placeholder="https://dav.example.com/tasks/"
      class="mb-3 w-full rounded-lg border border-border-light bg-transparent px-3 py-2 text-sm outline-none focus:border-primary dark:border-border-dark"
    />

@@ -180,6 +208,7 @@
    <input
      type="text"
      bind:value={webdavUser}
      oninput={markDirty}
      class="mb-3 w-full rounded-lg border border-border-light bg-transparent px-3 py-2 text-sm outline-none focus:border-primary dark:border-border-dark"
    />

@@ -187,6 +216,7 @@
    <input
      type="password"
      bind:value={webdavPass}
      oninput={markDirty}
      class="mb-4 w-full rounded-lg border border-border-light bg-transparent px-3 py-2 text-sm outline-none focus:border-primary dark:border-border-dark"
    />

@@ -196,7 +226,7 @@
    disabled={!webdavUrl.trim()}
    class="rounded-lg border border-border-light px-4 py-2 text-sm font-medium hover:bg-black/5 disabled:opacity-40 dark:border-border-dark dark:hover:bg-white/10"
  >
    {testStatus === "testing" ? "Testing..." : testStatus === "ok" ? "Connected" : testStatus === "fail" ? "Failed -- Retry" : "Test Connection"}
    {testStatus === "testing" ? "Testing…" : testStatus === "ok" ? "Connected" : testStatus === "fail" ? "Failed — Retry" : "Test Connection"}
  </button>
  <button
    onclick={saveWebdav}

@@ -77,20 +77,6 @@

  // ── WebDAV handlers ───────────────────────────────────────────────

  async function testConnection() {
    testStatus = "testing";
    try {
      await invoke("test_webdav_connection", {
        url: webdavUrl,
        username: webdavUser,
        password: webdavPass,
      });
      testStatus = "ok";
    } catch {
      testStatus = "fail";
    }
  }

  async function connectAndBrowse() {
    testStatus = "testing";
    try {

@@ -58,6 +58,7 @@
  let completedVisible = $state(false);
  let renamingListId = $state<string | null>(null);
  let renameValue = $state("");
  let renameListInput = $state<HTMLInputElement | null>(null);
  let showListMenu = $state(false);
  let showSubtasks = $state(false);
  let confirmDeleteList = $state(false);

@@ -85,6 +86,14 @@
    if (showNewList && newListInput) newListInput.focus();
  });

  // Same imperative-focus trick for the inline list-rename input.
  $effect(() => {
    if (renamingListId && renameListInput) {
      renameListInput.focus();
      renameListInput.select();
    }
  });

  async function handleNewList() {
    if (!newListName.trim()) return;

@@ -100,7 +109,12 @@

  async function executeDeleteCompleted() {
    confirmDeleteCompleted = false;
    for (var t of app.completedTasks) await app.deleteTask(t.id);
    // Snapshot targets first — deletes mutate app.completedTasks reactively.
    // Bail on first failure so we don't silently leave a partial delete.
    const targets = [...app.completedTasks];
    for (const t of targets) {
      if (!(await app.deleteTask(t.id))) return;
    }
  }

  function promptDeleteList() {

@@ -626,11 +640,11 @@
  {#if renamingListId === app.activeListId}
    <input
      type="text"
      bind:this={renameListInput}
      bind:value={renameValue}
      class="w-full bg-transparent text-xl font-bold outline-none"
      onkeydown={(e) => { if (e.key === "Enter") handleRenameList(); if (e.key === "Escape") renamingListId = null; }}
      onblur={handleRenameList}
      autofocus
    />
  {:else}
    <p class="text-xl font-bold">{app.activeList?.title ?? "Tasks"}</p>

@@ -643,7 +657,16 @@
  {#if app.lists.length === 0}
    <div class="flex h-full flex-col items-center justify-center p-8 text-center">
      <p class="text-lg font-medium opacity-60">No lists yet</p>
      <p class="mt-1 text-sm opacity-40">Tap the list name above to create one</p>
      {#if app.isGoogleTasks}
        <p class="mt-1 text-sm opacity-40">Lists will appear after your next sync.</p>
      {:else}
        <button
          onclick={() => { showDrawer = true; showNewList = true; }}
          class="mt-4 rounded-lg bg-primary px-4 py-2 text-sm font-medium text-white hover:bg-primary-hover"
        >
          Create a list
        </button>
      {/if}
    </div>
  {:else if !app.activeListId}
    <div class="flex h-full items-center justify-center opacity-40">

@@ -10,10 +10,13 @@ import type {
} from "../types";
import { groupTasksByDate, type TaskGroup } from "../grouping";

// Listen for file system changes from the backend watcher.
// Listen for file system changes from the backend watcher. Guard against
// firing while the user is on the setup/missing screens — loadLists would
// fail (no workspace) and a debouncedSync against a non-synced workspace
// would be wasted work.
listen("fs-changed", () => {
  if (!hasWorkspace || screen !== "tasks") return;
  loadLists();
  // Debounced sync for WebDAV workspaces on local file changes
  if (isSyncedWorkspace) debouncedSync();
});

@@ -184,11 +187,17 @@ async function removeWorkspace(id: string) {
  try {
    await invoke("remove_workspace", { id });
    config = await invoke<AppConfig>("get_config");
    if (!hasWorkspace) {
      screen = "setup";
      lists = [];
      tasks = [];
    activeListId = null;
    tasks = [];
    lists = [];
    // Switch to the next available workspace rather than dumping the user
    // to the setup screen when they still have other workspaces.
    const remaining = Object.keys(config?.workspaces ?? {});
    if (remaining.length > 0) {
      await switchWorkspace(remaining[0]);
      screen = "tasks";
    } else {
      screen = "setup";
    }
  } catch (e) {
    error = String(e);

@@ -255,7 +264,13 @@ async function deleteList(id: string) {
  }
}

async function createTask(title: string, description?: string, parentId?: string): Promise<Task | null> {
async function createTask(
  title: string,
  description?: string,
  parentId?: string,
  date?: string | null,
  hasTime?: boolean,
): Promise<Task | null> {
  if (!activeListId) return null;
  try {
    const task = await invoke<Task>("create_task", {

@@ -263,6 +278,8 @@ async function createTask(title: string, description?: string, parentId?: string
      title,
      description: description ?? "",
      parentId: parentId ?? null,
      date: date ?? null,
      hasTime: hasTime ?? false,
    });
    tasks = parentId ? [task, ...tasks] : [...tasks, task];
    error = null;

@@ -381,7 +398,11 @@ async function triggerSync() {
    await loadLists();
  } catch (e) {
    const msg = String(e);
    const isTransient = /timeout|connect|network|unreachable|refused/i.test(msg);
    // Narrow phrases so that a legitimate server-side error containing a
    // word like "network" or "refused" in its description isn't silently
    // swallowed as an offline blip. Only treat obvious connectivity failures
    // as transient.
    const isTransient = /(^|\W)(timed? out|timeout|connection (refused|reset|timed out|aborted)|connect error|network (is )?unreachable|no route to host|host (not found|is unreachable)|dns|enotfound|econnrefused|etimedout|ehostunreach|enetunreach)(\W|$)/i.test(msg);
    syncStatus = isTransient ? "offline" : "error";
    // Only show the error banner for non-transient failures; connectivity issues just update the status dot
    if (!isTransient) error = msg;

@@ -397,7 +418,7 @@ function debouncedSync() {

function restartSyncInterval() {
  if (_syncInterval) clearInterval(_syncInterval);
  var secs = _appFocused ? syncIntervalSecs : syncIntervalUnfocusedSecs;
  const secs = _appFocused ? syncIntervalSecs : syncIntervalUnfocusedSecs;
  _syncInterval = setInterval(triggerSync, secs * 1000);
}

@@ -519,22 +540,10 @@ async function addGoogleTasksWorkspace(

async function forgetMissingWorkspace() {
  if (!missingWorkspace) return;
  // removeWorkspace handles switching to the next available workspace (or
  // falling back to the setup screen when none remain); just delegate.
  await removeWorkspace(missingWorkspace);
  missingWorkspace = null;
  config = await invoke<AppConfig>("get_config");
  if (hasWorkspace) {
    // Switch to the next available workspace
    const nextName = Object.keys(config!.workspaces)[0];
    if (nextName) {
      await switchWorkspace(nextName);
      screen = "tasks";
      return;
    }
  }
  screen = "setup";
  lists = [];
  tasks = [];
  activeListId = null;
}

function setScreen(s: Screen) {

@@ -6,6 +6,7 @@ pub mod group;
pub mod sync;

use onyx_core::{AppConfig, TaskRepository};
use onyx_core::config::WorkspaceConfig;
use anyhow::{Context, Result};
use std::path::PathBuf;

@@ -23,21 +24,89 @@ pub fn save_config(config: &AppConfig) -> Result<()> {
    config.save_to_file(&path).context("Failed to save config")
}

pub fn get_repository(workspace_name: Option<String>) -> Result<(TaskRepository, String)> {
    let config = load_config()?;

    let (name, workspace_config) = if let Some(name) = workspace_name {
        let workspace_config = config.get_workspace(&name)
            .ok_or_else(|| anyhow::anyhow!("Workspace '{}' not found", name))?;
        (name, workspace_config.clone())
/// Resolve a user-supplied identifier to (id, WorkspaceConfig). Accepts either
/// the workspace's display name or its UUID. Falls back to the current
/// workspace when `identifier` is `None`.
pub fn resolve_workspace(config: &AppConfig, identifier: Option<&str>) -> Result<(String, WorkspaceConfig)> {
    if let Some(s) = identifier {
        // Try by UUID first (exact match on map key), then fall back to name lookup.
        if let Some(ws) = config.get_workspace(s) {
            return Ok((s.to_string(), ws.clone()));
        }
        let (id, ws) = config.find_by_name(s)
            .ok_or_else(|| anyhow::anyhow!("Workspace '{}' not found", s))?;
        Ok((id.clone(), ws.clone()))
    } else {
        let (name, workspace_config) = config.get_current_workspace()
            .context("No workspace set. Use 'onyx init' to create one.")?;
        (name.clone(), workspace_config.clone())
    };
        let (id, ws) = config.get_current_workspace()
            .context("No workspace set. Run 'onyx workspace add <name> <path>' to create one, or 'onyx workspace switch <name>' to select one.")?;
        Ok((id.clone(), ws.clone()))
    }
}

pub fn get_repository(workspace_identifier: Option<String>) -> Result<(TaskRepository, String)> {
    let config = load_config()?;
    let (_id, workspace_config) = resolve_workspace(&config, workspace_identifier.as_deref())?;
    let name = workspace_config.name.clone();

    let repo = TaskRepository::new(workspace_config.path.clone())
        .context(format!("Failed to open workspace '{}'", name))?;

    Ok((repo, name))
}

#[cfg(test)]
mod tests {
    use super::*;

    fn make_config_with(ws: &[(&str, &str)]) -> (AppConfig, Vec<String>) {
        let mut config = AppConfig::new();
        let ids: Vec<String> = ws.iter()
            .map(|(name, path)| config.add_workspace(WorkspaceConfig::new(name.to_string(), PathBuf::from(path))))
            .collect();
        (config, ids)
    }

    #[test]
    fn resolve_by_name() {
        let (config, _ids) = make_config_with(&[("dev", "/tmp/dev"), ("home", "/tmp/home")]);
        let (id, ws) = resolve_workspace(&config, Some("dev")).unwrap();
        assert_eq!(ws.name, "dev");
        assert!(config.workspaces.contains_key(&id));
    }

    #[test]
    fn resolve_by_uuid() {
        let (config, ids) = make_config_with(&[("dev", "/tmp/dev")]);
        let target = ids[0].clone();
        let (id, ws) = resolve_workspace(&config, Some(&target)).unwrap();
        assert_eq!(id, target);
        assert_eq!(ws.name, "dev");
    }

    #[test]
    fn resolve_unknown_identifier_errors() {
        let (config, _ids) = make_config_with(&[("dev", "/tmp/dev")]);
        let err = resolve_workspace(&config, Some("ghost")).unwrap_err();
        assert!(err.to_string().contains("Workspace 'ghost' not found"));
    }

    #[test]
    fn resolve_falls_back_to_current() {
        let (mut config, ids) = make_config_with(&[("a", "/tmp/a"), ("b", "/tmp/b")]);
        config.set_current_workspace(ids[1].clone()).unwrap();
        let (id, ws) = resolve_workspace(&config, None).unwrap();
        assert_eq!(id, ids[1]);
        assert_eq!(ws.name, "b");
    }

    #[test]
    fn resolve_no_current_gives_actionable_message() {
        let config = AppConfig::new();
        let err = resolve_workspace(&config, None).unwrap_err();
        let msg = err.to_string();
        // The message should point the user at the right sub-commands, not
        // at the obsolete 'onyx init' suggestion.
        assert!(msg.contains("workspace add") || msg.contains("workspace switch"),
            "expected actionable message, got: {msg}");
    }
}

@@ -2,22 +2,8 @@ use anyhow::{Context, Result};
use colored::Colorize;
use onyx_core::sync::{SyncMode, sync_workspace, get_sync_status};
use onyx_core::webdav::{WebDavClient, store_credentials, load_credentials};
use onyx_core::config::AppConfig;
use crate::output;
use super::{load_config, save_config};

/// Resolve a workspace name to (id, config). Falls back to current workspace if name is None.
fn resolve_workspace(config: &AppConfig, name: Option<&str>) -> Result<(String, onyx_core::config::WorkspaceConfig)> {
    if let Some(name) = name {
        let (id, ws) = config.find_by_name(name)
            .ok_or_else(|| anyhow::anyhow!("Workspace '{}' not found", name))?;
        Ok((id.clone(), ws.clone()))
    } else {
        let (id, ws) = config.get_current_workspace()
            .context("No workspace set. Use 'onyx init' to create one.")?;
        Ok((id.clone(), ws.clone()))
    }
}
use super::{load_config, save_config, resolve_workspace};

/// Run sync setup: prompt for URL, username, password, test connection, store credentials.
pub fn setup(workspace_name: Option<String>) -> Result<()> {

@@ -119,13 +119,26 @@ pub fn edit(task_id_str: String, workspace: Option<String>) -> Result<()> {
    let (list_id, task) = find_task(&lists, task_id)
        .ok_or_else(|| anyhow::anyhow!("Task not found: {}", task_id_str))?;

    // Create temporary file with task content
    // Create temporary file with task content. On Unix, open with 0600 so
    // other local users on a shared system can't read the task body off /tmp
    // while the editor is running.
    let temp_dir = std::env::temp_dir();
    let temp_file = temp_dir.join(format!("onyx-{}.md", task.id));

    // Write current task content to temp file
    let content = format!("# {}\n\n{}", task.title, task.description);
    std::fs::write(&temp_file, content)?;
    {
        use std::io::Write;
        let mut opts = std::fs::OpenOptions::new();
        opts.write(true).create(true).truncate(true);
        #[cfg(unix)]
        {
            use std::os::unix::fs::OpenOptionsExt;
            opts.mode(0o600);
        }
        let mut f = opts.open(&temp_file)
            .with_context(|| format!("Failed to create {}", temp_file.display()))?;
        f.write_all(content.as_bytes())?;
    }

    // Get editor from environment
    let editor = std::env::var("EDITOR").unwrap_or_else(|_| {

@@ -30,11 +30,21 @@ pub fn add(name: String, path: String) -> Result<()> {
    // Add workspace
    let id = config.add_workspace(WorkspaceConfig::new(name.clone(), path_buf.clone()));

    // Select the new workspace as current when none was previously set, so the
    // very next command doesn't fail with "No workspace set".
    let made_current = config.current_workspace.is_none();
    if made_current {
        config.set_current_workspace(id.clone())?;
    }

    // Save config
    save_config(&config)?;

    output::success(&format!("Added workspace \"{}\" ({}) at {}", name, &id[..8], path_buf.display()));
    output::success("Created default list \"My Tasks\"");
    if made_current {
        output::success(&format!("Set \"{}\" as the current workspace", name));
    }

    Ok(())
}

@@ -64,15 +74,20 @@ pub fn list() -> Result<()> {
    Ok(())
}

/// Resolve a workspace name to its ID. Errors if not found or ambiguous.
fn resolve_name(config: &onyx_core::config::AppConfig, name: &str) -> Result<String> {
/// Resolve a user-supplied identifier to a workspace ID. Accepts either the
/// display name or the UUID. Errors if not found or ambiguous.
fn resolve_name(config: &onyx_core::config::AppConfig, identifier: &str) -> Result<String> {
    // Direct UUID hit on the map key — unambiguous.
    if config.workspaces.contains_key(identifier) {
        return Ok(identifier.to_string());
    }
    let matches: Vec<_> = config.workspaces.iter()
        .filter(|(_, ws)| ws.name == name)
        .filter(|(_, ws)| ws.name == identifier)
        .collect();
    match matches.len() {
        0 => anyhow::bail!("Workspace '{}' not found", name),
        0 => anyhow::bail!("Workspace '{}' not found", identifier),
        1 => Ok(matches[0].0.clone()),
        n => anyhow::bail!("Ambiguous: {} workspaces named '{}'. Use the workspace ID instead.", n, name),
        n => anyhow::bail!("Ambiguous: {} workspaces named '{}'. Use the workspace ID instead.", n, identifier),
    }
}
|
||||
|
||||
|
|
|
|||
|
|
@@ -3,6 +3,7 @@ mod output;

use anyhow::Result;
use clap::{Parser, Subcommand};
+use colored::Colorize;
use commands::*;

#[derive(Parser)]

@@ -197,7 +198,24 @@ enum GroupCommands {
    },
}

-fn main() -> Result<()> {
+fn main() {
+    match run() {
+        Ok(()) => {}
+        Err(e) => {
+            // Print user-friendly error chain (no backtrace). Programming-bug
+            // panics still surface through their default handler.
+            eprintln!("{}: {}", "Error".red().bold(), e);
+            let mut cause = e.source();
+            while let Some(c) = cause {
+                eprintln!(" caused by: {}", c);
+                cause = c.source();
+            }
+            std::process::exit(1);
+        }
+    }
+}
+
+fn run() -> Result<()> {
    let cli = Cli::parse();

    match cli.command {
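Aside: `anyhow::Error` already exposes this walk as an iterator, so the manual `source()` loop above could also be written as below. Both print the same chain; this is just the equivalent spelled through anyhow's own API.

```rust
// Equivalent cause printing via anyhow's iterator: chain() yields the
// top-level error first, so skip(1) leaves only the underlying causes.
for c in e.chain().skip(1) {
    eprintln!(" caused by: {}", c);
}
```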
@@ -4,20 +4,15 @@ use serde::{Deserialize, Serialize};
use uuid::Uuid;
use crate::error::{Error, Result};

-#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
+#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Default)]
#[serde(rename_all = "lowercase")]
pub enum WorkspaceMode {
+    #[default]
    Local,
    Webdav,
    GoogleTasks,
}

-impl Default for WorkspaceMode {
-    fn default() -> Self {
-        Self::Local
-    }
-}
-
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct WorkspaceConfig {
    pub name: String,
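A quick check of what the derive change preserves (a sketch, assuming the enum exactly as shown above): `#[default]` keeps `Default` pointing at `Local`, and `rename_all = "lowercase"` lowercases the whole variant name.

```rust
#[test]
fn workspace_mode_default_and_serde() {
    // Same behavior the removed manual impl Default provided.
    assert_eq!(WorkspaceMode::default(), WorkspaceMode::Local);
    // "lowercase" lowercases the variant name wholesale:
    // GoogleTasks -> "googletasks" (not "google_tasks").
    assert_eq!(
        serde_json::to_string(&WorkspaceMode::GoogleTasks).unwrap(),
        "\"googletasks\""
    );
}
```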
@@ -121,13 +116,7 @@ impl AppConfig {
            std::fs::create_dir_all(parent)?;
        }
        let content = serde_json::to_string_pretty(&self)?;
-        // Atomic write: write to temp file then rename to prevent corruption on crash
-        let temp = path.with_extension("tmp");
-        std::fs::write(&temp, &content)?;
-        if let Err(e) = std::fs::rename(&temp, path) {
-            let _ = std::fs::remove_file(&temp);
-            return Err(e.into());
-        }
+        crate::storage::atomic_write(path, content.as_bytes())?;
        Ok(())
    }
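The removed inline block is exactly the pattern `crate::storage::atomic_write` centralizes. A minimal sketch of such a helper, assuming it mirrors the write-then-rename logic it replaces (the real helper may differ, e.g. around fsync or temp-file naming):

```rust
use std::io::Write;
use std::path::Path;

/// Sketch only: write to a sibling temp file, then rename over the target.
/// The temp file must live in the same directory so the rename stays on one
/// filesystem; cross-device renames are not atomic and can fail outright.
fn atomic_write(path: &Path, bytes: &[u8]) -> std::io::Result<()> {
    let tmp = path.with_extension("tmp");
    {
        let mut f = std::fs::File::create(&tmp)?;
        f.write_all(bytes)?;
        f.sync_all()?; // flush before the rename makes the new content visible
    }
    if let Err(e) = std::fs::rename(&tmp, path) {
        let _ = std::fs::remove_file(&tmp); // best-effort cleanup on failure
        return Err(e);
    }
    Ok(())
}
```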
@@ -358,8 +358,15 @@ pub async fn sync_google_tasks(
        list_meta.task_order = task_order;
        list_meta.updated_at = Utc::now();

-        if let Ok(meta_content) = serde_json::to_string_pretty(&list_meta) {
-            let _ = atomic_write(&listdata_path, meta_content.as_bytes());
+        match serde_json::to_string_pretty(&list_meta) {
+            Ok(meta_content) => {
+                if let Err(e) = atomic_write(&listdata_path, meta_content.as_bytes()) {
+                    errors.push(format!("Failed to write metadata for list '{}': {}", gt_list.title, e));
+                }
+            }
+            Err(e) => {
+                errors.push(format!("Failed to serialize metadata for list '{}': {}", gt_list.title, e));
+            }
        }
    }

@@ -374,8 +381,15 @@ pub async fn sync_google_tasks(
        RootMetadata::default()
    };
    root_meta.list_order = new_list_order;
-    if let Ok(meta_content) = serde_json::to_string_pretty(&root_meta) {
-        let _ = atomic_write(&root_meta_path, meta_content.as_bytes());
+    match serde_json::to_string_pretty(&root_meta) {
+        Ok(meta_content) => {
+            if let Err(e) = atomic_write(&root_meta_path, meta_content.as_bytes()) {
+                errors.push(format!("Failed to write workspace metadata: {}", e));
+            }
+        }
+        Err(e) => {
+            errors.push(format!("Failed to serialize workspace metadata: {}", e));
+        }
    }

    Ok(GoogleSyncResult { downloaded, errors })
@@ -26,7 +26,10 @@ impl TaskRepository {
    // Task operations
    pub fn create_task(&mut self, list_id: Uuid, mut task: Task) -> Result<Task> {
        self.storage.write_task(list_id, &task)?;
-        task.version += 1;
+        // Mirror the saturating increment that FileSystemStorage applies to
+        // the on-disk frontmatter so the in-memory Task matches what was
+        // written and doesn't wrap at u64::MAX.
+        task.version = task.version.saturating_add(1);
        Ok(task)
    }

@@ -154,7 +157,7 @@ mod tests {

        // Create a task
        let task = Task::new("Test Task".to_string());
-        let created_task = repo.create_task(list.id, task).unwrap();
+        let _ = repo.create_task(list.id, task).unwrap();

        // List tasks
        let tasks = repo.list_tasks(list.id).unwrap();

@@ -162,6 +165,20 @@ mod tests {
        assert_eq!(tasks[0].title, "Test Task");
    }

+    #[test]
+    fn test_create_task_saturates_version_at_max() {
+        let temp_dir = TempDir::new().unwrap();
+        let mut repo = TaskRepository::init(temp_dir.path().to_path_buf()).unwrap();
+        let list = repo.create_list("L".to_string()).unwrap();
+
+        // Simulate a task that is already at u64::MAX. A plain `+=` would
+        // overflow — saturating_add must clamp.
+        let mut task = Task::new("max".to_string());
+        task.version = u64::MAX;
+        let created = repo.create_task(list.id, task).unwrap();
+        assert_eq!(created.version, u64::MAX);
+    }
+
    #[test]
    fn test_update_task() {
        let temp_dir = TempDir::new().unwrap();
@@ -236,12 +236,8 @@ impl FileSystemStorage {
        Ok(path)
    }

-    fn sanitize_filename(name: &str) -> String {
-        crate::sanitize_filename(name)
-    }
-
    fn task_file_path(&self, list_dir: &Path, task: &Task) -> PathBuf {
-        let safe_title = Self::sanitize_filename(&task.title);
+        let safe_title = crate::sanitize_filename(&task.title);
        let filename = if safe_title.is_empty() {
            task.id.to_string()
        } else {
@@ -381,7 +377,9 @@ impl Storage for FileSystemStorage {
        }

        let content = self.write_markdown_with_frontmatter(task)?;
-        fs::write(&task_path, content)?;
+        // Atomic write: a crash mid-write must not leave a truncated .md file
+        // that then fails YAML parsing on the next list_tasks/read_task.
+        atomic_write(&task_path, content.as_bytes())?;

        // Update list metadata to include this task in task_order if not already present
        let mut list_metadata = self.read_list_metadata(list_id)?;
@@ -455,27 +453,42 @@ impl Storage for FileSystemStorage {
        }

        let mut tasks = Vec::new();
-        for (_id, mut entries) in by_id {
-            if entries.len() > 1 {
-                entries.sort_by(|a, b| {
+        for (_id, entries) in by_id {
+            // `by_id` only inserts non-empty groups, so each `entries` has at
+            // least one element.
+            let task = if entries.len() > 1 {
+                // Read mtime once per file so sort_by doesn't hit the filesystem
+                // O(n log n) times and can't produce inconsistent orderings if a
+                // file is touched mid-sort.
+                let mut with_mtime: Vec<(PathBuf, Task, Option<std::time::SystemTime>)> = entries
+                    .into_iter()
+                    .map(|(p, t)| {
+                        let mtime = fs::metadata(&p).and_then(|m| m.modified()).ok();
+                        (p, t, mtime)
+                    })
+                    .collect();
+                with_mtime.sort_by(|a, b| {
                    // Primary: highest version first
                    let version_cmp = b.1.version.cmp(&a.1.version);
                    if version_cmp != std::cmp::Ordering::Equal {
                        return version_cmp;
                    }
                    // Tiebreaker: most recently modified file first
-                    let mtime_a = fs::metadata(&a.0).and_then(|m| m.modified()).ok();
-                    let mtime_b = fs::metadata(&b.0).and_then(|m| m.modified()).ok();
-                    mtime_b.cmp(&mtime_a)
+                    b.2.cmp(&a.2)
                });
-                for (stale_path, _) in entries.drain(1..) {
+                for (stale_path, _, _) in with_mtime.drain(1..) {
                    if let Err(e) = fs::remove_file(&stale_path) {
                        eprintln!("Warning: failed to remove stale duplicate task file {:?}: {}", stale_path, e);
                    }
                }
-            }
-            let (_, task) = entries.into_iter().next()
-                .ok_or_else(|| Error::InvalidData("Empty dedup entries for task".to_string()))?;
+                let (_, t, _) = with_mtime.into_iter().next()
+                    .expect("dedup group is non-empty after drain(1..)");
+                t
+            } else {
+                let (_, t) = entries.into_iter().next()
+                    .expect("dedup group is non-empty");
+                t
+            };
            tasks.push(task);
        }
@@ -5,7 +5,7 @@ use serde::{Deserialize, Serialize};
use sha2::{Sha256, Digest};
use uuid::Uuid;
use crate::error::{Error, Result};
-use crate::storage::{ListMetadata, TaskFrontmatter};
+use crate::storage::{atomic_write, ListMetadata, TaskFrontmatter};
use crate::webdav::WebDavClient;

/// File-based lock to prevent concurrent sync operations on the same workspace.
@@ -204,8 +204,9 @@ pub fn compute_sync_actions(
        }

        // Remote present, local gone, base known: local was deleted
-        (None, Some(_), Some(b)) => {
-            let remote_changed = remote.is_some_and(|r| r.size != b.size || !timestamps_equal(r.last_modified.as_deref(), b.modified_at.as_deref()));
+        (None, Some(r), Some(b)) => {
+            let remote_changed = r.size != b.size
+                || !timestamps_equal(r.last_modified.as_deref(), b.modified_at.as_deref());
            if remote_changed {
                // deleted locally + modified remotely -> download (remote wins)
                actions.push(SyncAction::Download { path: path.to_string() });
@@ -229,6 +230,22 @@ pub fn compute_sync_actions(
    actions
}

+/// Remove base entries for files that are gone from both local and remote.
+/// `compute_sync_actions` emits no action for the both-deleted case, so without
+/// this pass those entries would persist in `.syncstate.json` indefinitely.
+fn prune_orphan_bases(
+    sync_state: &mut SyncState,
+    local_files: &[LocalFileInfo],
+    remote_files: &[RemoteFileSnapshot],
+) {
+    let live_paths: std::collections::HashSet<&str> = local_files
+        .iter()
+        .map(|f| f.path.as_str())
+        .chain(remote_files.iter().map(|f| f.path.as_str()))
+        .collect();
+    sync_state.files.retain(|p, _| live_paths.contains(p.as_str()));
+}
+
/// Compare two timestamps for equality by parsing both, tolerating format differences.
fn timestamps_equal(a: Option<&str>, b: Option<&str>) -> bool {
    match (a, b) {
@@ -604,6 +621,12 @@ async fn sync_workspace_inner(
        }
    };

+    // Purge orphan base entries: files we previously tracked that are now gone
+    // from both local and remote. Without this, `.syncstate.json` accumulates
+    // ghost entries forever because the both-deleted diff case emits no action
+    // and so nothing else would clean them.
+    prune_orphan_bases(&mut sync_state, &local_files, &remote_files);
+
    // Compute actions from three-way diff
    let fresh_actions = compute_sync_actions(&local_files, &remote_files, &sync_state);
@@ -701,19 +724,20 @@ async fn execute_action(
                Err(e) => return Err(e.into()),
            };
            let checksum = compute_checksum(&data);
+            let len = data.len() as u64;

            if let Some(parent) = path_parent(path) {
                client.ensure_dir(parent).await?;
            }

            report(&format!(" ^ Uploading {}", path));
-            client.put_file(path, data.clone()).await?;
+            client.put_file(path, data).await?;

            // Record in sync state using local file metadata
            let modified = std::fs::metadata(&local_path).ok()
                .and_then(|m| m.modified().ok())
                .map(|t| { let dt: DateTime<Utc> = t.into(); dt.to_rfc3339() });
-            sync_state.record_file(path, &checksum, modified.as_deref(), data.len() as u64);
+            sync_state.record_file(path, &checksum, modified.as_deref(), len);
        }

        SyncAction::Conflict { path } => {
@@ -743,8 +767,9 @@ async fn execute_action(
            } else {
                report(&format!(" ! Conflict: remote wins for {}, recovering local as duplicate", path));

-                // Remote wins: overwrite local with remote content
-                std::fs::write(&local_path, &remote_data)?;
+                // Remote wins: overwrite local with remote content. Atomic
+                // so a crash mid-sync cannot leave a truncated file behind.
+                atomic_write(&local_path, &remote_data)?;
                let modified = std::fs::metadata(&local_path).ok()
                    .and_then(|m| m.modified().ok())
                    .map(|t| { let dt: DateTime<Utc> = t.into(); dt.to_rfc3339() });
@@ -752,7 +777,7 @@ async fn execute_action(

                // For .md task files inside a list dir, create a duplicate of the local version
                let parts: Vec<&str> = path.split('/').collect();
-                if parts.len() == 2 && parts[1].ends_with(".md") && parts[1] != ".listdata.json" {
+                if parts.len() == 2 && parts[1].ends_with(".md") {
                    let local_content = String::from_utf8_lossy(&local_data);
                    if let Ok((frontmatter, description)) = parse_frontmatter_for_conflict(&local_content) {
                        let original_id = frontmatter.id;
@@ -775,7 +800,7 @@ async fn execute_action(
                    let list_dir = workspace_path.join(parts[0]);
                    let dup_filename = format!("{}.md", new_id);
                    let dup_path = list_dir.join(&dup_filename);
-                    std::fs::write(&dup_path, &new_content)?;
+                    atomic_write(&dup_path, new_content.as_bytes())?;

                    // Insert new task adjacent to original in .listdata.json.
                    // If metadata update fails, remove the duplicate file to
@@ -791,7 +816,7 @@ async fn execute_action(
                        .unwrap_or(metadata.task_order.len());
                    metadata.task_order.insert(insert_pos, new_id);
                    let json = serde_json::to_string_pretty(&metadata)?;
-                    std::fs::write(&listdata_path, json)?;
+                    atomic_write(&listdata_path, json.as_bytes())?;
                    Ok(())
                })();
                if let Err(e) = metadata_updated {
@@ -816,7 +841,7 @@ async fn execute_action(
            if let Some(parent) = local_path.parent() {
                std::fs::create_dir_all(parent)?;
            }
-            std::fs::write(&local_path, &data)?;
+            atomic_write(&local_path, &data)?;

            // Record remote's last_modified so next diff won't see a timestamp mismatch
            let modified = remote_meta.get(path.as_str()).and_then(|r| r.last_modified.clone());
@@ -890,9 +915,15 @@ pub fn get_sync_status(workspace_path: &Path) -> Result<SyncStatusInfo> {
        }
    }

-    // Count files in base that are now missing locally (deleted)
+    // Count files in base that are now missing locally (deleted).
+    // Build a set of local paths once so the membership check is O(1) per
+    // tracked file instead of scanning local_files linearly each time.
+    let local_paths: std::collections::HashSet<&str> = local_files
+        .iter()
+        .map(|f| f.path.as_str())
+        .collect();
    for path in sync_state.files.keys() {
-        if !local_files.iter().any(|f| f.path == *path) {
+        if !local_paths.contains(path.as_str()) {
            pending_changes += 1;
        }
    }
@@ -1105,6 +1136,22 @@ mod tests {
        assert!(actions.is_empty());
    }

+    #[test]
+    fn test_prune_orphan_bases() {
+        let mut state = SyncState::default();
+        state.files.insert("kept_local.md".to_string(), make_base("a"));
+        state.files.insert("kept_remote.md".to_string(), make_base("b"));
+        state.files.insert("orphan.md".to_string(), make_base("c"));
+
+        let local = vec![make_local("kept_local.md", "a")];
+        let remote = vec![make_remote("kept_remote.md")];
+        prune_orphan_bases(&mut state, &local, &remote);
+
+        assert!(state.files.contains_key("kept_local.md"));
+        assert!(state.files.contains_key("kept_remote.md"));
+        assert!(!state.files.contains_key("orphan.md"));
+    }
+
    #[test]
    fn test_multiple_files_mixed() {
        let local = vec![
@@ -1136,8 +1183,7 @@ mod tests {
    #[test]
    fn test_sync_state_save_load_roundtrip() {
        let temp_dir = TempDir::new().unwrap();
-        let mut state = SyncState::default();
-        state.last_sync = Some(Utc::now());
+        let mut state = SyncState { last_sync: Some(Utc::now()), ..Default::default() };
        state.record_file("test.md", "abc123", Some("2026-01-01T00:00:00Z"), 42);

        state.save(temp_dir.path()).unwrap();
12  docs/API.md
@@ -353,12 +353,14 @@ Credentials are stored in the platform keychain (Windows Credential Manager, mac

```rust
use onyx_core::webdav::{store_credentials, load_credentials, delete_credentials};
+use zeroize::Zeroizing;

// Store credentials
store_credentials("nextcloud.example.com", "username", "password")?;

-// Load credentials (returns Zeroizing<String> wrappers that wipe memory on drop)
-let (username, password) = load_credentials("nextcloud.example.com")?;
+// Load credentials — returns Zeroizing<String> wrappers that wipe memory on drop
+let (username, password): (Zeroizing<String>, Zeroizing<String>) =
+    load_credentials("nextcloud.example.com")?;

// Delete credentials
delete_credentials("nextcloud.example.com")?;
@@ -454,7 +456,7 @@ All metadata and state files use an atomic write pattern (write to `.tmp` then r

- **List names**: Rejected if they contain `/`, `\`, or `..` components. Canonicalized and verified to stay within workspace root.
- **Sync paths**: Validated to reject `..` components and backslashes anywhere in the path before any file system operation.
-- **Workspace paths** (Tauri): Rejected if they point to system directories (`/etc`, `/usr`, `/bin`, etc.).
+- **Workspace paths** (Tauri): Rejected if they point to the filesystem root (`/`) or system directories (`/etc`, `/usr`, `/bin`, `/sbin`, `/var`, `/proc`, `/sys`, `/dev`).
- **Filenames**: Sanitized to replace `/ \ : * ? " < > |` and control characters with `_`.

## Example: Complete Workflow
@@ -521,9 +523,9 @@ Key test areas:

## Thread Safety

-The `Storage` trait requires `Send + Sync`, and `TaskRepository` wraps `Box<dyn Storage + Send + Sync>`, so repository instances can be shared across threads behind a `Mutex`. The Tauri GUI uses `Mutex<AppState>` for this purpose.
+`TaskRepository` holds its storage as `Box<dyn Storage + Send + Sync>`, so any concrete storage implementation passed in must be `Send + Sync`. Repository instances can be shared across threads behind a `Mutex` — the Tauri GUI uses `Mutex<AppState>` for this purpose.

For concurrent access:

1. Wrap `TaskRepository` in `Mutex` or `RwLock` (the Tauri app does this)
-2. Or create separate repository instances per thread (file system handles locking)
+2. Or create separate repository instances per thread. Note that `FileSystemStorage` does not coordinate writes between processes — concurrent multi-process writes to the same workspace are not supported outside the WebDAV sync flow, which uses a `.sync.lock` file.
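A minimal sketch of option 1, using only the repository calls that appear in this diff's tests (`TaskRepository::init`, `create_list`, `create_task`, `Task::new`); the path and thread count are illustrative:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Share one repository across threads; the Mutex serializes writers at the
// repository level so they never race on the underlying files.
let repo = Arc::new(Mutex::new(
    TaskRepository::init("/tmp/onyx-demo".into()).unwrap(),
));
let list = repo.lock().unwrap().create_list("Inbox".to_string()).unwrap();

let handles: Vec<_> = (0..4)
    .map(|i| {
        let repo = Arc::clone(&repo);
        let list_id = list.id;
        thread::spawn(move || {
            // Lock per operation: hold the guard only as long as the write.
            let mut guard = repo.lock().unwrap();
            guard.create_task(list_id, Task::new(format!("task {}", i))).unwrap();
        })
    })
    .collect();

for h in handles {
    h.join().unwrap();
}
```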
@@ -27,7 +27,7 @@ cargo run -p onyx-cli -- --help

# Run the Tauri GUI
cd apps/tauri && npm install
-npm run tauri dev
+npm run tauri dev   # (Wayland: WEBKIT_DISABLE_DMABUF_RENDERER=1 npm run tauri dev)
```

## Project Structure
@@ -72,11 +72,15 @@ onyx/
│   │   ├── main.ts
│   │   ├── app.css            # Tailwind CSS 4 + theme
│   │   ├── App.svelte
│   │   ├── test/
│   │   │   └── setup.ts
│   │   └── lib/
│   │       ├── screens/       # Full-page views
│   │       ├── components/    # Reusable UI components
│   │       ├── stores/        # Svelte state (app.svelte.ts)
│   │       ├── dateFormat.ts  # Date formatting utilities
│   │       ├── grouping.ts    # Task grouping logic
│   │       ├── paths.ts       # Path utilities
│   │       └── types.ts       # TypeScript type definitions
│   ├── tauri-plugin-credentials/  # Cross-platform credential storage plugin
│   │   ├── Cargo.toml