Compare commits
No commits in common. "main" and "claude/smoke-test-and-fixes-TwfSh" have entirely different histories.
main ... claude/smoke-test-and-fixes-TwfSh

Audit.md (33 changed lines)
@@ -1,38 +1,5 @@
# Audit Log

## 2026-04-27

Found and fixed 3 issues:

1. **Perf: needless clone of upload payload** (sync.rs:733) — the `SyncAction::Upload` arm read the file into `data`, computed `compute_checksum(&data)`, then called `client.put_file(path, data.clone())`. The clone existed only because the next statement needed `data.len()` for the sync-state record. Captured `data.len() as u64` into `len` first, moved `data` into `put_file`, and used `len` afterwards — one full byte copy avoided per uploaded file.

2. **Bug: Google Tasks sync silently drops metadata-write failures** (google_tasks.rs:361, 377) — both `.listdata.json` and `.onyx-workspace.json` were written via `if let Ok(meta_content) = serde_json::to_string_pretty(...) { let _ = atomic_write(...); }`, so a serialization or atomic-write error returned `Ok(GoogleSyncResult { downloaded: N, errors: [] })` even though list/workspace ordering was never persisted. Both writes now push their errors into the `errors` vec already returned in `GoogleSyncResult`.

3. **Code quality: unreachable dead-error path in storage dedup** (storage.rs:447) — the dedup loop computed `Option<Task>` from each `by_id` group and then `ok_or_else(|| Error::InvalidData("Empty dedup entries for task"))?`. `by_id` is only populated by `entry(uuid).or_default().push(entry)`, so every group has ≥1 element and the `None` branch is unreachable. Replaced the `Option`+`?` with direct `expect` calls (one per branch) that document the non-empty invariant; the loop now yields `Task` directly.
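The move-instead-of-clone pattern from fix 1 can be sketched in isolation. This is a minimal illustration, not the real `put_file` API: `upload` is a hypothetical stand-in that takes ownership of the buffer, exactly as the audit entry describes.

```rust
// Hypothetical stand-in for `client.put_file`: it consumes the buffer.
fn upload(data: Vec<u8>) -> usize {
    data.len()
}

// Capture everything you still need from a value *before* moving it,
// instead of cloning the whole buffer just to keep `.len()` usable.
fn sync_upload(data: Vec<u8>) -> (u64, usize) {
    let len = data.len() as u64; // recorded first...
    let sent = upload(data);     // ...so `data` can be moved; no clone
    (len, sent)
}
```

The borrow checker rejects using `data` after the move, which is what forced the original clone; hoisting the length read removes that need entirely.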
## 2026-04-25

Found and fixed 3 issues:

1. **Perf: O(n²) deletion-detection in `get_sync_status`** (sync.rs:918) — for every path tracked in `sync_state.files`, the loop scanned `local_files` linearly via `.any(|f| f.path == *path)` to decide whether to count it as a deleted-locally pending change. The earlier "modified or new" loop already used the inverse direction with `sync_state.files.get(...)` (O(1)), so the second loop was the inconsistent one. Built a `HashSet<&str>` of local paths once and used `contains` for the membership check.

2. **Perf: cascade delete walks all_tasks per frontier pop** (tauri/lib.rs:460) — `delete_task`'s descendant BFS scanned the full task list on every parent popped from the frontier, making the work O(n × depth). Built a `parent_id -> [child_id]` `HashMap` once, then the BFS visits each descendant in O(1) amortised, dropping total cost to O(n).

3. **Code quality: duplicate atomic-write in `AppConfig::save_to_file`** (config.rs:114) — the function had its own copy of the temp-file + rename + cleanup-on-failure dance even though `storage::atomic_write` is `pub(crate)` and was already shared by `google_tasks.rs`. Replaced the inline implementation with a call to `crate::storage::atomic_write` so the crate has one canonical atomic write path.
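The build-the-set-once pattern from fix 1 above can be sketched generically. The function and field names here are hypothetical simplifications of the real sync-state types:

```rust
use std::collections::HashSet;

// Build the local-path set once, then answer each "was this tracked file
// deleted locally?" question with an O(1) `contains` instead of a linear
// scan per tracked path (the O(n²) shape the audit entry describes).
fn deleted_locally<'a>(tracked: &'a [String], local: &[String]) -> Vec<&'a str> {
    let local_set: HashSet<&str> = local.iter().map(|s| s.as_str()).collect();
    tracked
        .iter()
        .map(|p| p.as_str())
        .filter(|p| !local_set.contains(*p))
        .collect()
}
```

One O(n) pass to build the set plus O(m) lookups replaces the O(n × m) nested scan.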
## 2026-04-24

Found and fixed 3 issues:

1. **Bug: orphan base entries never cleaned from sync state** (sync.rs) — when a file was deleted both locally and remotely, `compute_sync_actions` emitted no action (the `(None, None, Some(_))` arm), so the base entry in `.syncstate.json` persisted forever. On each subsequent sync the same no-op case fired and the state file grew. Added `prune_orphan_bases` pass in `sync_workspace_inner` that drops base entries not present in either scan.

2. **Code quality: redundant is_some_and on already-matched Option** (sync.rs:208) — the `(None, Some(_), Some(b))` arm re-checked `remote` via `remote.is_some_and(|r| ...)` even though the pattern had just proven `remote` is `Some(_)`. Bound the inner value with `Some(r)` in the pattern and used `r` directly.

3. **Code quality: single-caller sanitize_filename wrapper** (storage.rs) — `FileSystemStorage::sanitize_filename` was a one-line forwarder to `crate::sanitize_filename` with one call site. Inlined the crate call and removed the method.
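The pattern-binding idiom from fix 2 above, reduced to a toy example. The function and its `Option<u64>` parameter are hypothetical, standing in for the real remote-snapshot type:

```rust
// When a match arm has already proven an Option is `Some`, bind the inner
// value in the pattern instead of re-testing it with `is_some_and`.
fn remote_changed(remote: Option<u64>, base_size: u64) -> bool {
    match remote {
        // `r` is bound directly by the pattern; no second Option check needed.
        Some(r) => r != base_size,
        None => false,
    }
}
```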
## 2026-04-20

Found and fixed 4 issues:

1. **Dead code in conflict recovery** (sync.rs:756) — `parts[1] != ".listdata.json"` was unreachable because the branch is already gated on `parts[1].ends_with(".md")`, which `.listdata.json` cannot satisfy. Removed the redundant check.

2. **O(n²) cascade delete** (tauri/lib.rs) — descendant traversal in `delete_task` used `Vec::contains` inside the inner loop, making it quadratic in the number of tasks per list. Swapped the visited set to `HashSet`; `HashSet::insert` folds the contains+push into one call.

3. **Silent cascade failure in toggle_task** (tauri/lib.rs) — subtask `update_task` errors were discarded with `let _ = ...`, leaving subtasks stuck at the old status with no UI feedback. Propagate the error so the frontend can surface it.

4. **Duplicated UUID-parse boilerplate** (tauri/lib.rs) — 17 commands repeated `Uuid::parse_str(&x).map_err(|e| e.to_string())?`. Extracted a `parse_uuid` helper so callers read as `let id = parse_uuid(&list_id)?;`.
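The `HashSet::insert` idiom from fix 2 above can be shown on a standalone traversal. This is a simplified sketch over plain `(parent, child)` edge pairs, not the real task types:

```rust
use std::collections::HashSet;

// `HashSet::insert` returns false when the value was already present, so
// it folds the `contains` check and the `push` into a single O(1) call,
// replacing the quadratic `Vec::contains` inside the BFS inner loop.
fn reachable(edges: &[(u32, u32)], root: u32) -> HashSet<u32> {
    let mut visited = HashSet::new();
    let mut frontier = vec![root];
    while let Some(parent) = frontier.pop() {
        for &(p, c) in edges {
            if p == parent && visited.insert(c) {
                frontier.push(c); // first time we have seen `c`
            }
        }
    }
    visited
}
```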
## 2026-04-15

Found and fixed 4 issues:
@@ -30,7 +30,7 @@ The Tauri dev server runs on port 1422 (`vite.config.ts` and `tauri.conf.json`).
Two-crate workspace (`resolver = "2"`, edition 2021) plus a Tauri app:

- **onyx-core** — Pure Rust library. Storage trait with `FileSystemStorage` implementation, `TaskRepository` (main API), data models, config, error types. No CLI/UI dependencies. `keyring` feature-gated behind `keyring-storage` (default on) for Android compatibility.
- **onyx-cli** — CLI frontend using clap. Commands are in `src/commands/` (init, workspace, list, task, group, sync). Output formatting in `src/output.rs`.
- **onyx-cli** — CLI frontend using clap. Commands are in `src/commands/` (init, workspace, list, task, group). Output formatting in `src/output.rs`.
- **apps/tauri/** — Tauri v2 GUI. Svelte 5 frontend in `src/`, Rust backend in `src-tauri/` with Tauri commands that call into `onyx-core`. `notify` crate feature-gated for Android. `tauri-plugin-credentials/` provides cross-platform credential storage (Android Keystore via EncryptedSharedPreferences, desktop via keyring crate).

### Key patterns
@@ -64,7 +64,7 @@ The GUI uses Svelte 5 runes mode (`$state`, `$derived`, `$effect`, `$props()`).
Pre-alpha. No users, no released builds, no data to migrate. Breaking changes to on-disk formats, config structure, or sync conventions are free — do not add migration logic.

### Current state (2026-04-27)
### Current state (2026-04-15)

- **Phase 1** (Core + CLI): Complete
- **Phase 2** (WebDAV sync): Complete — remote folder browsing, checksum-based conflict resolution, auto-sync lifecycle, per-workspace sync interval
@@ -106,7 +106,7 @@ Pre-alpha. No users, no released builds, no data to migrate. Breaking changes to
- Task deduplication on load (handles sync conflict duplicates)
- Subtask hierarchy: subtask count shown on parent tasks in list, subtask detail via three-panel slide navigation, inline add at top of subtask list (new subtasks prepend), collapsible completed subtasks section, cascade delete (parent deletion removes all subtasks with confirmation warning)
- Custom confirmation dialogs (ConfirmDialog component replaces native confirm())
- Workspace path validation (rejects filesystem root `/` and system directories: `/etc`, `/usr`, `/bin`, `/sbin`, `/var`, `/proc`, `/sys`, `/dev`)
- Workspace path validation (rejects system directories)
- Task detail auto-cleanup (taskStack clears when viewed task is deleted or list switches)
- Swipe gestures on mobile: swipe left/right on a task to toggle completion (swipe direction depends on current status)
- Accessibility: ARIA labels/roles on interactive components, keyboard handlers, `prefers-reduced-motion` CSS support
PLAN.md (20 changed lines)
@@ -532,11 +532,8 @@ pub fn delete_credentials(domain: &str) -> Result<()>;
Add to `onyx-core/Cargo.toml`:

```toml
reqwest = { version = "0.12", features = ["json", "rustls-tls"] }
keyring = { version = "3", features = ["apple-native", "windows-native", "sync-secret-service"], optional = true }
zeroize = "1"
sha2 = "0.10"
quick-xml = "0.36"
# WebDAV implemented as custom client using reqwest + quick-xml for PROPFIND parsing
keyring = "3.0"
# TODO: Evaluate dav-client or implement custom WebDAV
```

### Features
@@ -671,6 +668,7 @@ apps/tauri/
│   │   ├── TaskItem.svelte
│   │   ├── NewTaskInput.svelte
│   │   ├── TaskDetailView.svelte
│   │   ├── BottomSheet.svelte
│   │   ├── ConfirmDialog.svelte
│   │   └── DateTimePicker.svelte
│   └── stores/
@@ -765,7 +763,7 @@ WorkspaceConfig {
- [x] List rename (inline input via list kebab menu in drawer)
- [x] Keyboard shortcuts (Escape closes settings → detail → drawer → menus in priority order)
- [x] Sync status indicators (last-sync time + upload/download counts chip in TasksScreen)
- [ ] Push/pull sync mode selection (session-only sync direction selector in SettingsScreen)
- [x] Push/pull sync mode selection (session-only sync direction selector in SettingsScreen)
- [x] Group-by-date toggle per list (checkmark toggle in list kebab menu)
- [x] Subtask hierarchy (expand/collapse, inline add, cascade toggle/delete)
- [ ] Search/filter tasks
@@ -846,11 +844,11 @@ npm run tauri ios build
#### Features

- [x] Gate file-watcher initialization behind `#[cfg(not(target_os = "android"))]`
- [x] Gate file-watcher initialization behind `#[cfg(not(mobile))]`
- [x] Install Android Studio + NDK, configure env vars
- [x] Add Android Rust targets
- [ ] `npm run tauri android init` (generates `gen/android/`)
- [ ] Confirm `npm run tauri android build` succeeds
- [x] `npm run tauri android init` (generates `gen/android/`)
- [x] Confirm `npm run tauri android build` succeeds
- [ ] Basic smoke test: app launches, workspace setup, create a task
- [ ] Set up macOS CI for iOS builds
- [ ] `npm run tauri ios init` (generates `gen/ios/`)
@@ -1058,6 +1056,6 @@ This project is free and open-source software licensed under GPL v3.
---

**Last Updated**: 2026-04-27
**Document Version**: 4.5
**Last Updated**: 2026-04-15
**Document Version**: 4.3
**Status**: Ready to Implement - Milestone-Driven Plan
README.md (15 changed lines)
@@ -2,8 +2,6 @@
A **local-first, cross-platform tasks application** built with Rust. Inspired by Google Tasks, designed for speed and flexibility.



## Core Principles

- **Local-First**: Your data, your folder, your control
@@ -23,10 +21,7 @@ onyx/
│   └── onyx-cli/                  # CLI frontend
├── apps/
│   └── tauri/                     # Tauri v2 GUI (Svelte 5 + Tailwind CSS 4)
│   └── tauri-plugin-credentials/  # Cross-platform credential storage plugin
└── docs/
    ├── API.md                     # Core library API reference
    └── DEVELOPMENT.md             # Development guide
```

## Project Status
@@ -34,7 +29,7 @@ onyx/
- **Phase 1** (Core + CLI): Complete
- **Phase 2** (WebDAV Sync): Complete — backend, CLI, and GUI all wired
- **Phase 3** (GUI MVP): Complete
- **Phase 4** (Mobile): In progress — Android preliminaries done (file-watcher gating, `tauri-plugin-credentials`, safe area insets, Android targets configured); needs `tauri android init`, build verification, and iOS setup
- **Phase 4** (Mobile): In progress — Android preliminaries done (file-watcher gating, tauri-plugin-credentials, safe area insets, Android targets configured); needs build verification and iOS setup

### Core Library (`onyx-core`)

- Data models (Task, TaskList, AppConfig, WorkspaceConfig)
@@ -64,15 +59,13 @@ onyx/
- Due date picker/editor with optional time
- Subtask hierarchy with three-panel slide navigation
- Move tasks between lists
- List rename, workspace rename, group-by-date toggle, delete completed tasks
- List rename, group-by-date toggle, delete completed tasks
- Keyboard shortcuts (Escape priority chain)
- WebDAV setup flow with credential auto-population
- File watcher (auto-reloads on external changes)
- Auto-sync with configurable interval, status indicators
- Swipe gestures on mobile (swipe to toggle completion)
- Custom confirmation dialogs
- Safe area insets for mobile (viewport-fit=cover)
- Accessibility: ARIA labels/roles, keyboard handlers, `prefers-reduced-motion` support
- Desktop packaging (Linux: AppImage + .deb; Windows: MSI)

## Development Setup
@@ -220,8 +213,8 @@ cargo test -- --nocapture
## What's Next?

- **Phase 4** (in progress): Complete Android build (`tauri android init` + verification), iOS setup on macOS CI
- **Phase 5**: GUI advanced features (rich markdown editor, search/filter, change storage folder)
- **Phase 4**: Mobile support (iOS & Android via Tauri v2 mobile)
- **Phase 5**: GUI advanced features (rich markdown editor, search/filter)
- **Phase 6**: Mobile polish and platform-specific integrations
- **Phase 7**: Google Tasks importer and unique features
@@ -60,11 +60,6 @@ fn lock_state(state: &Mutex<AppState>) -> Result<std::sync::MutexGuard<'_, AppSt
    state.lock().map_err(|e| format!("State lock poisoned: {}", e))
}

/// Parse a UUID from a string, converting errors to the String format Tauri commands use.
fn parse_uuid(s: &str) -> Result<Uuid, String> {
    Uuid::parse_str(s).map_err(|e| e.to_string())
}

impl AppState {
    /// Persist config to disk, converting errors to String for Tauri commands.
    fn save_config(&self) -> Result<(), String> {
@@ -72,25 +67,6 @@ impl AppState {
    }
}

/// Extract the hostname from a URL (scheme://host/...), used as the credential key.
/// Returns an empty string if the URL has no scheme or host.
fn credential_domain(url: &str) -> String {
    url.split("://")
        .nth(1)
        .and_then(|rest| rest.split('/').next())
        .unwrap_or("")
        .to_string()
}

/// Join a remote base directory with a child path, handling empty base and trailing slashes.
fn join_remote_path(base: &str, child: &str) -> String {
    if base.is_empty() {
        child.to_string()
    } else {
        format!("{}/{}", base.trim_end_matches('/'), child)
    }
}

/// Validate that a workspace path is a reasonable directory and not a system path.
fn validate_workspace_path(path: &str) -> Result<(), String> {
    let p = PathBuf::from(path);
@@ -103,10 +79,7 @@ fn validate_workspace_path(path: &str) -> Result<(), String> {
    #[cfg(unix)]
    {
        let forbidden = ["/", "/etc", "/usr", "/bin", "/sbin", "/var", "/proc", "/sys", "/dev"];
        // Strip trailing slashes, but keep "/" itself — trim_end_matches would
        // collapse it to "" and slip past the forbidden check.
        let canonical = normalized.trim_end_matches('/');
        let canonical = if canonical.is_empty() { "/" } else { canonical };
        if forbidden.contains(&canonical) {
            return Err(format!("Cannot use system directory as workspace: {}", path));
        }
@@ -290,7 +263,10 @@ async fn rename_workspace(
    let base_url = webdav_url.as_deref().ok_or("No WebDAV URL configured")?;
    let remote_path = webdav_path.as_deref().unwrap_or("");

    let domain = credential_domain(base_url);
    let domain = base_url
        .split("://").nth(1)
        .and_then(|rest| rest.split('/').next())
        .unwrap_or("").to_string();
    let creds = app_handle.state::<Credentials<tauri::Wry>>();
    let (username, password) = creds.load(&domain)?;
@@ -371,7 +347,7 @@ fn delete_list(
    let mut s = lock_state(&state)?;
    ensure_repo(&mut s)?;
    mute_watcher(&mut s);
    let id = parse_uuid(&list_id)?;
    let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
    repo_mut(&mut s)?
        .delete_list(id)
        .map_err(|e| e.to_string())
@@ -386,7 +362,7 @@ fn list_tasks(
) -> Result<Vec<Task>, String> {
    let mut s = lock_state(&state)?;
    ensure_repo(&mut s)?;
    let id = parse_uuid(&list_id)?;
    let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
    repo_ref(&s)?
        .list_tasks(id)
        .map_err(|e| e.to_string())
@@ -405,13 +381,13 @@ fn create_task(
    let mut s = lock_state(&state)?;
    ensure_repo(&mut s)?;
    mute_watcher(&mut s);
    let id = parse_uuid(&list_id)?;
    let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
    let mut task = Task::new(title);
    if let Some(desc) = description.filter(|d| !d.is_empty()) {
        task.description = desc;
    }
    if let Some(pid) = parent_id {
        let parent_uuid = parse_uuid(&pid)?;
        let parent_uuid = Uuid::parse_str(&pid).map_err(|e| e.to_string())?;
        task.parent_id = Some(parent_uuid);
    }
    // Accept the date fields at creation time so callers don't have to do a
@@ -433,7 +409,7 @@ fn update_task(
    let mut s = lock_state(&state)?;
    ensure_repo(&mut s)?;
    mute_watcher(&mut s);
    let id = parse_uuid(&list_id)?;
    let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
    repo_mut(&mut s)?
        .update_task(id, task)
        .map_err(|e| e.to_string())
@@ -448,30 +424,20 @@ fn delete_task(
    let mut s = lock_state(&state)?;
    ensure_repo(&mut s)?;
    mute_watcher(&mut s);
    let lid = parse_uuid(&list_id)?;
    let tid = parse_uuid(&task_id)?;
    let lid = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
    let tid = Uuid::parse_str(&task_id).map_err(|e| e.to_string())?;
    let repo = repo_mut(&mut s)?;
    // Cascade-delete the full descendant subtree (not just direct children)
    // so deleting a parent can't leave grandchildren orphaned with a
    // parent_id pointing at a deleted task.
    let all_tasks = repo.list_tasks(lid).map_err(|e| e.to_string())?;
    // Build a parent -> children index in one pass so the BFS below is O(n)
    // instead of O(n * depth) scanning all tasks for each frontier pop.
    let mut children_by_parent: std::collections::HashMap<Uuid, Vec<Uuid>> =
        std::collections::HashMap::new();
    for t in &all_tasks {
        if let Some(pid) = t.parent_id {
            children_by_parent.entry(pid).or_default().push(t.id);
        }
    }
    let mut to_delete: std::collections::HashSet<Uuid> = std::collections::HashSet::new();
    let mut to_delete: Vec<Uuid> = Vec::new();
    let mut frontier: Vec<Uuid> = vec![tid];
    while let Some(parent) = frontier.pop() {
        if let Some(children) = children_by_parent.get(&parent) {
            for &child_id in children {
                if to_delete.insert(child_id) {
                    frontier.push(child_id);
                }
        for t in &all_tasks {
            if t.parent_id == Some(parent) && !to_delete.contains(&t.id) {
                to_delete.push(t.id);
                frontier.push(t.id);
            }
        }
    }
@@ -493,8 +459,8 @@ fn toggle_task(
    let mut s = lock_state(&state)?;
    ensure_repo(&mut s)?;
    mute_watcher(&mut s);
    let lid = parse_uuid(&list_id)?;
    let tid = parse_uuid(&task_id)?;
    let lid = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
    let tid = Uuid::parse_str(&task_id).map_err(|e| e.to_string())?;
    let repo = repo_mut(&mut s)?;
    let mut task = repo.get_task(lid, tid).map_err(|e| e.to_string())?;
    match task.status {
@@ -511,9 +477,7 @@ fn toggle_task(
                TaskStatus::Backlog => child.uncomplete(),
                TaskStatus::Completed => child.complete(),
            }
            let child_id = child.id;
            repo.update_task(lid, child)
                .map_err(|e| format!("Failed to cascade to subtask {}: {}", child_id, e))?;
            let _ = repo.update_task(lid, child);
        }
    }
    Ok(task)
@@ -529,8 +493,8 @@ fn reorder_task(
    let mut s = lock_state(&state)?;
    ensure_repo(&mut s)?;
    mute_watcher(&mut s);
    let lid = parse_uuid(&list_id)?;
    let tid = parse_uuid(&task_id)?;
    let lid = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
    let tid = Uuid::parse_str(&task_id).map_err(|e| e.to_string())?;
    repo_mut(&mut s)?
        .reorder_task(lid, tid, new_position)
        .map_err(|e| e.to_string())
@@ -548,9 +512,9 @@ fn move_task(
    let mut s = lock_state(&state)?;
    ensure_repo(&mut s)?;
    mute_watcher(&mut s);
    let from = parse_uuid(&from_list_id)?;
    let to = parse_uuid(&to_list_id)?;
    let tid = parse_uuid(&task_id)?;
    let from = Uuid::parse_str(&from_list_id).map_err(|e| e.to_string())?;
    let to = Uuid::parse_str(&to_list_id).map_err(|e| e.to_string())?;
    let tid = Uuid::parse_str(&task_id).map_err(|e| e.to_string())?;
    repo_mut(&mut s)?
        .move_task(from, to, tid)
        .map_err(|e| e.to_string())
@@ -565,7 +529,7 @@ fn rename_list(
    let mut s = lock_state(&state)?;
    ensure_repo(&mut s)?;
    mute_watcher(&mut s);
    let id = parse_uuid(&list_id)?;
    let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
    repo_mut(&mut s)?
        .rename_list(id, new_name)
        .map_err(|e| e.to_string())
@@ -580,7 +544,7 @@ fn set_group_by_date(
    let mut s = lock_state(&state)?;
    ensure_repo(&mut s)?;
    mute_watcher(&mut s);
    let id = parse_uuid(&list_id)?;
    let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
    repo_mut(&mut s)?
        .set_group_by_date(id, enabled)
        .map_err(|e| e.to_string())
@@ -593,7 +557,7 @@ fn get_group_by_date(
) -> Result<bool, String> {
    let mut s = lock_state(&state)?;
    ensure_repo(&mut s)?;
    let id = parse_uuid(&list_id)?;
    let id = Uuid::parse_str(&list_id).map_err(|e| e.to_string())?;
    repo_ref(&s)?
        .get_group_by_date(id)
        .map_err(|e| e.to_string())
@@ -681,9 +645,10 @@ async fn list_remote_folder(
    let dir_entries: Vec<_> = entries.into_iter().filter(|e| e.is_dir).collect();

    // Check all subfolders for .onyx-workspace.json in parallel
    let sub_paths: Vec<_> = dir_entries.iter()
        .map(|entry| join_remote_path(&path, &entry.path))
        .collect();
    let sub_paths: Vec<_> = dir_entries.iter().map(|entry| {
        if path.is_empty() { entry.path.clone() }
        else { format!("{}/{}", path.trim_end_matches('/'), entry.path) }
    }).collect();
    let checks: Vec<_> = sub_paths.iter().map(|sp| {
        client.list_files(sp)
    }).collect();
@@ -715,7 +680,11 @@ async fn inspect_remote_workspace(
    let mut lists = Vec::new();
    for entry in entries {
        if !entry.is_dir { continue; }
        let list_path = join_remote_path(&path, &entry.path);
        let list_path = if path.is_empty() {
            entry.path.clone()
        } else {
            format!("{}/{}", path.trim_end_matches('/'), entry.path)
        };
        let files = client.list_files(&list_path).await.unwrap_or_else(|e| {
            eprintln!("Warning: failed to list remote folder '{}': {}", list_path, e);
            Vec::new()
@@ -751,7 +720,11 @@ async fn create_remote_workspace(
        "list_order": [],
        "last_opened_list": null,
    });
    let file_path = join_remote_path(&path, ".onyx-workspace.json");
    let file_path = if path.is_empty() {
        ".onyx-workspace.json".to_string()
    } else {
        format!("{}/{}", path.trim_end_matches('/'), ".onyx-workspace.json")
    };
    client.put_file(&file_path, serde_json::to_string_pretty(&metadata).map_err(|e| e.to_string())?.into_bytes())
        .await
        .map_err(|e| e.to_string())?;
@@ -785,7 +758,12 @@ fn add_webdav_workspace(
    s.repo = None;

    // Store credentials keyed by hostname
    let domain = credential_domain(&webdav_url);
    let domain = webdav_url
        .split("://")
        .nth(1)
        .and_then(|rest| rest.split('/').next())
        .unwrap_or("")
        .to_string();
    s.save_config()?;
    drop(s);
    let creds = app_handle.state::<Credentials<tauri::Wry>>();
@@ -848,7 +826,12 @@ async fn sync_workspace(
    };

    // Step 2: load credentials
    let domain = credential_domain(&webdav_url);
    let domain = webdav_url
        .split("://")
        .nth(1)
        .and_then(|rest| rest.split('/').next())
        .unwrap_or("")
        .to_string();
    let creds = app_handle.state::<Credentials<tauri::Wry>>();
    let (username, password) = creds.load(&domain)?;
@@ -13,8 +13,6 @@
  let viewYear = $state(existing ? existing.getFullYear() : now.getFullYear());
  let viewMonth = $state(existing ? existing.getMonth() : now.getMonth());
  let selectedDay = $state(existing ? existing.getDate() : now.getDate());
  let selectedYear = $state(existing ? existing.getFullYear() : now.getFullYear());
  let selectedMonth = $state(existing ? existing.getMonth() : now.getMonth());
  let includeTime = $state(has_time);
  let selectedHour = $state(existing ? existing.getHours() : now.getHours());
  let selectedMinute = $state(existing ? existing.getMinutes() : 0);
@@ -60,6 +58,9 @@
    return `${viewYear}-${viewMonth + 1}-${day}` === todayStr;
  }

  let selectedYear = $state(existing ? existing.getFullYear() : now.getFullYear());
  let selectedMonth = $state(existing ? existing.getMonth() : now.getMonth());

  function isSelected(day: number): boolean {
    return selectedDay === day && selectedYear === viewYear && selectedMonth === viewMonth;
  }
@@ -418,7 +418,7 @@ function debouncedSync() {
function restartSyncInterval() {
  if (_syncInterval) clearInterval(_syncInterval);
  const secs = _appFocused ? syncIntervalSecs : syncIntervalUnfocusedSecs;
  var secs = _appFocused ? syncIntervalSecs : syncIntervalUnfocusedSecs;
  _syncInterval = setInterval(triggerSync, secs * 1000);
}
@@ -116,7 +116,13 @@ impl AppConfig {
            std::fs::create_dir_all(parent)?;
        }
        let content = serde_json::to_string_pretty(&self)?;
        crate::storage::atomic_write(path, content.as_bytes())?;
        // Atomic write: write to temp file then rename to prevent corruption on crash
        let temp = path.with_extension("tmp");
        std::fs::write(&temp, &content)?;
        if let Err(e) = std::fs::rename(&temp, path) {
            let _ = std::fs::remove_file(&temp);
            return Err(e.into());
        }
        Ok(())
    }
@@ -358,15 +358,8 @@ pub async fn sync_google_tasks(
        list_meta.task_order = task_order;
        list_meta.updated_at = Utc::now();

        match serde_json::to_string_pretty(&list_meta) {
            Ok(meta_content) => {
                if let Err(e) = atomic_write(&listdata_path, meta_content.as_bytes()) {
                    errors.push(format!("Failed to write metadata for list '{}': {}", gt_list.title, e));
                }
            }
            Err(e) => {
                errors.push(format!("Failed to serialize metadata for list '{}': {}", gt_list.title, e));
            }
        if let Ok(meta_content) = serde_json::to_string_pretty(&list_meta) {
            let _ = atomic_write(&listdata_path, meta_content.as_bytes());
        }
    }
@@ -381,15 +374,8 @@ pub async fn sync_google_tasks(
        RootMetadata::default()
    };
    root_meta.list_order = new_list_order;
    match serde_json::to_string_pretty(&root_meta) {
        Ok(meta_content) => {
            if let Err(e) = atomic_write(&root_meta_path, meta_content.as_bytes()) {
                errors.push(format!("Failed to write workspace metadata: {}", e));
            }
        }
        Err(e) => {
            errors.push(format!("Failed to serialize workspace metadata: {}", e));
        }
    if let Ok(meta_content) = serde_json::to_string_pretty(&root_meta) {
        let _ = atomic_write(&root_meta_path, meta_content.as_bytes());
    }

    Ok(GoogleSyncResult { downloaded, errors })
@@ -236,8 +236,12 @@ impl FileSystemStorage {
        Ok(path)
    }

    fn sanitize_filename(name: &str) -> String {
        crate::sanitize_filename(name)
    }

    fn task_file_path(&self, list_dir: &Path, task: &Task) -> PathBuf {
        let safe_title = crate::sanitize_filename(&task.title);
        let safe_title = Self::sanitize_filename(&task.title);
        let filename = if safe_title.is_empty() {
            task.id.to_string()
        } else {
@@ -453,42 +457,27 @@ impl Storage for FileSystemStorage {
        }

        let mut tasks = Vec::new();
        for (_id, entries) in by_id {
            // `by_id` only inserts non-empty groups, so each `entries` has at
            // least one element.
            let task = if entries.len() > 1 {
                // Read mtime once per file so sort_by doesn't hit the filesystem
                // O(n log n) times and can't produce inconsistent orderings if a
                // file is touched mid-sort.
                let mut with_mtime: Vec<(PathBuf, Task, Option<std::time::SystemTime>)> = entries
                    .into_iter()
                    .map(|(p, t)| {
                        let mtime = fs::metadata(&p).and_then(|m| m.modified()).ok();
                        (p, t, mtime)
                    })
                    .collect();
                with_mtime.sort_by(|a, b| {
        for (_id, mut entries) in by_id {
            if entries.len() > 1 {
                entries.sort_by(|a, b| {
                    // Primary: highest version first
                    let version_cmp = b.1.version.cmp(&a.1.version);
                    if version_cmp != std::cmp::Ordering::Equal {
                        return version_cmp;
                    }
                    // Tiebreaker: most recently modified file first
                    b.2.cmp(&a.2)
                    let mtime_a = fs::metadata(&a.0).and_then(|m| m.modified()).ok();
                    let mtime_b = fs::metadata(&b.0).and_then(|m| m.modified()).ok();
                    mtime_b.cmp(&mtime_a)
                });
                for (stale_path, _, _) in with_mtime.drain(1..) {
                for (stale_path, _) in entries.drain(1..) {
                    if let Err(e) = fs::remove_file(&stale_path) {
                        eprintln!("Warning: failed to remove stale duplicate task file {:?}: {}", stale_path, e);
                    }
                }
                let (_, t, _) = with_mtime.into_iter().next()
                    .expect("dedup group is non-empty after drain(1..)");
                t
            } else {
                let (_, t) = entries.into_iter().next()
                    .expect("dedup group is non-empty");
                t
            };
            }
            let (_, task) = entries.into_iter().next()
                .ok_or_else(|| Error::InvalidData("Empty dedup entries for task".to_string()))?;
            tasks.push(task);
        }
@@ -204,9 +204,8 @@ pub fn compute_sync_actions(
}
|
||||
|
||||
// Remote present, local gone, base known: local was deleted
|
||||
(None, Some(r), Some(b)) => {
|
||||
let remote_changed = r.size != b.size
|
||||
|| !timestamps_equal(r.last_modified.as_deref(), b.modified_at.as_deref());
|
||||
(None, Some(_), Some(b)) => {
|
||||
let remote_changed = remote.is_some_and(|r| r.size != b.size || !timestamps_equal(r.last_modified.as_deref(), b.modified_at.as_deref()));
|
||||
if remote_changed {
|
||||
// deleted locally + modified remotely -> download (remote wins)
|
||||
actions.push(SyncAction::Download { path: path.to_string() });
|
||||
|
|
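The match arm above is one case of a three-way diff over `(local, remote, base)`. A reduced sketch of the decision table, using simplified hypothetical types (plain content fingerprints rather than the crate's real `SyncAction` and snapshot structs):

```rust
#[derive(Debug, PartialEq)]
enum Action { Upload, Download, DeleteRemote, Conflict, None }

// Simplified three-way decision: each side is just a content fingerprint,
// and `base` is the state recorded at the last successful sync.
fn decide(local: Option<&str>, remote: Option<&str>, base: Option<&str>) -> Action {
    match (local, remote, base) {
        // New local file, unknown remotely: push it.
        (Some(_), None, None) => Action::Upload,
        // Local deleted; remote unchanged since base -> propagate the delete.
        (None, Some(r), Some(b)) if r == b => Action::DeleteRemote,
        // Local deleted but remote changed -> remote wins, re-download.
        (None, Some(_), Some(_)) => Action::Download,
        // Both sides diverged from base and from each other -> conflict.
        (Some(l), Some(r), Some(b)) if l != r && l != b && r != b => Action::Conflict,
        // Only remote changed -> download; only local changed -> upload.
        (Some(l), Some(r), Some(b)) if l == b && r != b => Action::Download,
        (Some(l), Some(r), Some(b)) if r == b && l != b => Action::Upload,
        _ => Action::None,
    }
}

fn main() {
    assert_eq!(decide(None, Some("v2"), Some("v1")), Action::Download);
    assert_eq!(decide(None, Some("v1"), Some("v1")), Action::DeleteRemote);
    assert_eq!(decide(Some("v2"), Some("v1"), Some("v1")), Action::Upload);
    println!("ok");
}
```

This is only an illustration of the arm ordering; the real function compares sizes and timestamps, not whole fingerprints.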
@@ -230,22 +229,6 @@ pub fn compute_sync_actions(
     actions
 }
 
-/// Remove base entries for files that are gone from both local and remote.
-/// `compute_sync_actions` emits no action for the both-deleted case, so without
-/// this pass those entries would persist in `.syncstate.json` indefinitely.
-fn prune_orphan_bases(
-    sync_state: &mut SyncState,
-    local_files: &[LocalFileInfo],
-    remote_files: &[RemoteFileSnapshot],
-) {
-    let live_paths: std::collections::HashSet<&str> = local_files
-        .iter()
-        .map(|f| f.path.as_str())
-        .chain(remote_files.iter().map(|f| f.path.as_str()))
-        .collect();
-    sync_state.files.retain(|p, _| live_paths.contains(p.as_str()));
-}
-
 /// Compare two timestamps for equality by parsing both, tolerating format differences.
 fn timestamps_equal(a: Option<&str>, b: Option<&str>) -> bool {
     match (a, b) {
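`timestamps_equal` is described as parsing both timestamps while tolerating format differences, but its body is cut off by the hunk boundary. A self-contained sketch of the tolerance idea, working purely at the string level (the real implementation presumably parses properly; normalizing only the `Z` vs `+00:00` suffix is an assumption made here for brevity):

```rust
// Normalize one common RFC 3339 variation: a literal 'Z' suffix is the
// same instant as a "+00:00" offset.
fn normalize(ts: &str) -> String {
    let ts = ts.trim();
    if let Some(stripped) = ts.strip_suffix('Z') {
        format!("{stripped}+00:00")
    } else {
        ts.to_string()
    }
}

// Option-aware equality: two missing timestamps are equal, a missing and
// a present one are not, and present ones compare after normalization.
fn timestamps_equal(a: Option<&str>, b: Option<&str>) -> bool {
    match (a, b) {
        (Some(a), Some(b)) => normalize(a) == normalize(b),
        (None, None) => true,
        _ => false,
    }
}

fn main() {
    assert!(timestamps_equal(
        Some("2026-04-27T10:00:00Z"),
        Some("2026-04-27T10:00:00+00:00"),
    ));
    assert!(!timestamps_equal(Some("2026-04-27T10:00:00Z"), None));
    println!("ok");
}
```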
@@ -621,12 +604,6 @@ async fn sync_workspace_inner(
         }
     };
 
-    // Purge orphan base entries: files we previously tracked that are now gone
-    // from both local and remote. Without this, `.syncstate.json` accumulates
-    // ghost entries forever because the both-deleted diff case emits no action
-    // and so nothing else would clean them.
-    prune_orphan_bases(&mut sync_state, &local_files, &remote_files);
-
     // Compute actions from three-way diff
     let fresh_actions = compute_sync_actions(&local_files, &remote_files, &sync_state);
 
@@ -724,20 +701,19 @@ async fn execute_action(
                 Err(e) => return Err(e.into()),
             };
             let checksum = compute_checksum(&data);
-            let len = data.len() as u64;
 
             if let Some(parent) = path_parent(path) {
                 client.ensure_dir(parent).await?;
             }
 
             report(&format!(" ^ Uploading {}", path));
-            client.put_file(path, data).await?;
+            client.put_file(path, data.clone()).await?;
 
             // Record in sync state using local file metadata
             let modified = std::fs::metadata(&local_path).ok()
                 .and_then(|m| m.modified().ok())
                 .map(|t| { let dt: DateTime<Utc> = t.into(); dt.to_rfc3339() });
-            sync_state.record_file(path, &checksum, modified.as_deref(), len);
+            sync_state.record_file(path, &checksum, modified.as_deref(), data.len() as u64);
         }
 
         SyncAction::Conflict { path } => {
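The `-` side of the upload arm captures `data.len()` into `len` before `data` is moved into `put_file`, which is what lets it drop the `.clone()` on the `+` side. The pattern in isolation, with a hypothetical `consume` standing in for `put_file`:

```rust
// Stand-in for an API that takes ownership of its payload, like put_file.
fn consume(data: Vec<u8>) -> usize {
    data.len()
}

fn main() {
    let data = vec![0u8; 1024];
    // Capture everything needed *after* the move, before moving.
    let len = data.len() as u64;
    let sent = consume(data); // data moved here: no clone required
    // `len` is still available for the sync-state record.
    assert_eq!(len, 1024);
    assert_eq!(sent, 1024);
    println!("ok");
}
```

The clone existed only to keep `data` alive for a later `data.len()` call, so hoisting the length read removes one full byte copy per uploaded file.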
@@ -777,7 +753,7 @@ async fn execute_action(
 
             // For .md task files inside a list dir, create a duplicate of the local version
             let parts: Vec<&str> = path.split('/').collect();
-            if parts.len() == 2 && parts[1].ends_with(".md") {
+            if parts.len() == 2 && parts[1].ends_with(".md") && parts[1] != ".listdata.json" {
                 let local_content = String::from_utf8_lossy(&local_data);
                 if let Ok((frontmatter, description)) = parse_frontmatter_for_conflict(&local_content) {
                     let original_id = frontmatter.id;
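The guard here accepts only paths shaped like `<list>/<file>.md`. Note that the extra `parts[1] != ".listdata.json"` comparison on the `+` side appears redundant, since that filename does not end in `.md` anyway. A small sketch of the check as a standalone helper (the function name is hypothetical):

```rust
// A path qualifies for conflict duplication when it is exactly two
// components deep and the file component is a Markdown task file.
fn is_task_md_path(path: &str) -> bool {
    let parts: Vec<&str> = path.split('/').collect();
    parts.len() == 2 && parts[1].ends_with(".md")
}

fn main() {
    assert!(is_task_md_path("inbox/buy-milk.md"));
    assert!(!is_task_md_path("inbox/.listdata.json")); // fails ends_with(".md")
    assert!(!is_task_md_path("inbox/sub/task.md"));    // too many components
    println!("ok");
}
```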
@@ -915,15 +891,9 @@ pub fn get_sync_status(workspace_path: &Path) -> Result<SyncStatusInfo> {
         }
     }
 
-    // Count files in base that are now missing locally (deleted).
-    // Build a set of local paths once so the membership check is O(1) per
-    // tracked file instead of scanning local_files linearly each time.
-    let local_paths: std::collections::HashSet<&str> = local_files
-        .iter()
-        .map(|f| f.path.as_str())
-        .collect();
+    // Count files in base that are now missing locally (deleted)
     for path in sync_state.files.keys() {
-        if !local_paths.contains(path.as_str()) {
+        if !local_files.iter().any(|f| f.path == *path) {
             pending_changes += 1;
         }
     }
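The difference between the two sides of this hunk is the classic O(n²)-to-O(n) membership fix: build a `HashSet` of local paths once, then each tracked path is an O(1) lookup instead of a linear scan of `local_files`. A reduced, self-contained sketch with plain string slices in place of the real file-info structs:

```rust
use std::collections::HashSet;

// Count tracked paths that no longer exist locally. The set is built once,
// so each membership check is O(1) instead of scanning `local` per path.
fn count_deleted(tracked: &[&str], local: &[&str]) -> usize {
    let local_paths: HashSet<&str> = local.iter().copied().collect();
    tracked.iter().filter(|p| !local_paths.contains(**p)).count()
}

fn main() {
    let tracked = ["a.md", "b.md", "c.md"];
    let local = ["a.md", "c.md"];
    assert_eq!(count_deleted(&tracked, &local), 1); // only b.md is gone
    println!("ok");
}
```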
@@ -1136,22 +1106,6 @@ mod tests {
         assert!(actions.is_empty());
     }
 
-    #[test]
-    fn test_prune_orphan_bases() {
-        let mut state = SyncState::default();
-        state.files.insert("kept_local.md".to_string(), make_base("a"));
-        state.files.insert("kept_remote.md".to_string(), make_base("b"));
-        state.files.insert("orphan.md".to_string(), make_base("c"));
-
-        let local = vec![make_local("kept_local.md", "a")];
-        let remote = vec![make_remote("kept_remote.md")];
-        prune_orphan_bases(&mut state, &local, &remote);
-
-        assert!(state.files.contains_key("kept_local.md"));
-        assert!(state.files.contains_key("kept_remote.md"));
-        assert!(!state.files.contains_key("orphan.md"));
-    }
-
     #[test]
     fn test_multiple_files_mixed() {
         let local = vec![
12 docs/API.md

@@ -353,14 +353,12 @@ Credentials are stored in the platform keychain (Windows Credential Manager, mac
 
 ```rust
 use onyx_core::webdav::{store_credentials, load_credentials, delete_credentials};
 use zeroize::Zeroizing;
 
 // Store credentials
 store_credentials("nextcloud.example.com", "username", "password")?;
 
-// Load credentials — returns Zeroizing<String> wrappers that wipe memory on drop
-let (username, password): (Zeroizing<String>, Zeroizing<String>) =
-    load_credentials("nextcloud.example.com")?;
+// Load credentials (returns Zeroizing<String> wrappers that wipe memory on drop)
+let (username, password) = load_credentials("nextcloud.example.com")?;
 
 // Delete credentials
 delete_credentials("nextcloud.example.com")?;
@@ -456,7 +454,7 @@ All metadata and state files use an atomic write pattern (write to `.tmp` then r
 
 - **List names**: Rejected if they contain `/`, `\`, or `..` components. Canonicalized and verified to stay within workspace root.
 - **Sync paths**: Validated to reject `..` components and backslashes anywhere in the path before any file system operation.
-- **Workspace paths** (Tauri): Rejected if they point to the filesystem root (`/`) or system directories (`/etc`, `/usr`, `/bin`, `/sbin`, `/var`, `/proc`, `/sys`, `/dev`).
+- **Workspace paths** (Tauri): Rejected if they point to system directories (`/etc`, `/usr`, `/bin`, etc.).
 - **Filenames**: Sanitized to replace `/ \ : * ? " < > |` and control characters with `_`.
 
 ## Example: Complete Workflow
@@ -523,9 +521,9 @@ Key test areas:
 
 ## Thread Safety
 
-`TaskRepository` holds its storage as `Box<dyn Storage + Send + Sync>`, so any concrete storage implementation passed in must be `Send + Sync`. Repository instances can be shared across threads behind a `Mutex` — the Tauri GUI uses `Mutex<AppState>` for this purpose.
+The `Storage` trait requires `Send + Sync`, and `TaskRepository` wraps `Box<dyn Storage + Send + Sync>`, so repository instances can be shared across threads behind a `Mutex`. The Tauri GUI uses `Mutex<AppState>` for this purpose.
 
 For concurrent access:
 
 1. Wrap `TaskRepository` in `Mutex` or `RwLock` (the Tauri app does this)
-2. Or create separate repository instances per thread. Note that `FileSystemStorage` does not coordinate writes between processes — concurrent multi-process writes to the same workspace are not supported outside the WebDAV sync flow, which uses a `.sync.lock` file.
+2. Or create separate repository instances per thread (file system handles locking)
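The thread-safety guidance in this hunk (wrap the repository in a `Mutex` and share it across threads) can be sketched with a minimal stand-in type; `Repo` and `parallel_insert` here are hypothetical, not the crate's real `TaskRepository` API:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Hypothetical stand-in for TaskRepository: the point is only that a
// Send + Sync value can be shared across threads behind Arc<Mutex<_>>.
struct Repo {
    tasks: Vec<String>,
}

// Spawn `n` writer threads against one shared repository and report how
// many tasks were recorded once all writers have joined.
fn parallel_insert(n: usize) -> usize {
    let repo = Arc::new(Mutex::new(Repo { tasks: Vec::new() }));
    let handles: Vec<_> = (0..n)
        .map(|i| {
            let repo = Arc::clone(&repo);
            thread::spawn(move || {
                // Each thread holds the lock only for the duration of its write.
                repo.lock().unwrap().tasks.push(format!("task-{i}"));
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let len = repo.lock().unwrap().tasks.len();
    len
}

fn main() {
    assert_eq!(parallel_insert(4), 4);
    println!("ok");
}
```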
@@ -27,7 +27,7 @@ cargo run -p onyx-cli -- --help
 
 # Run the Tauri GUI
 cd apps/tauri && npm install
-npm run tauri dev # (Wayland: WEBKIT_DISABLE_DMABUF_RENDERER=1 npm run tauri dev)
+npm run tauri dev
 ```
 
 ## Project Structure
@@ -72,15 +72,11 @@ onyx/
 │   │   ├── main.ts
 │   │   ├── app.css            # Tailwind CSS 4 + theme
 │   │   ├── App.svelte
-│   │   ├── test/
-│   │   │   └── setup.ts
 │   │   └── lib/
 │   │       ├── screens/       # Full-page views
 │   │       ├── components/    # Reusable UI components
 │   │       ├── stores/        # Svelte state (app.svelte.ts)
-│   │       ├── dateFormat.ts  # Date formatting utilities
 │   │       ├── grouping.ts    # Task grouping logic
-│   │       ├── paths.ts       # Path utilities
 │   │       └── types.ts       # TypeScript type definitions
 │   ├── tauri-plugin-credentials/  # Cross-platform credential storage plugin
 │   │   ├── Cargo.toml