Merge pull request #33 from SteelDynamite/claude/audit-project-report-qoEsr

Fix critical and high-severity issues from project audit
SteelDynamite 2026-04-06 04:44:32 -07:00 committed by GitHub
commit 36fa591799
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
13 changed files with 453 additions and 66 deletions


@@ -35,15 +35,15 @@ Two-crate workspace (`resolver = "2"`, edition 2021) plus a Tauri app:
### Key patterns
- **Storage trait** (`storage.rs`): Strategy pattern for task persistence. `FileSystemStorage` reads/writes markdown files with YAML frontmatter and JSON metadata files. Atomic writes (temp file + rename, with temp cleanup on failure) for all metadata files. Input validation: task titles max 500 chars, descriptions max 1MB, list names max 255 chars, YAML frontmatter max 64KB. Delete operations update metadata before removing files to prevent orphaned metadata on crash.
- **Repository** (`repository.rs`): `TaskRepository` wraps a `Storage` impl and provides the public API for task/list CRUD, ordering, and grouping. Tests live here.
- **Config** (`config.rs`): `AppConfig` manages workspaces keyed by UUID string. `WorkspaceConfig` stores `name`, `path`, `mode` (Local/Webdav), `webdav_url`, `webdav_path` (user-selected remote folder), `theme`, and `sync_interval_secs`. `add_workspace` returns a generated UUID. Stored in platform-specific config dirs via the `directories` crate. Atomic writes (temp file + rename) prevent corruption on crash.
- **Sync** (`sync.rs`): Three-way diff sync with offline queue. File-based `.sync.lock` prevents concurrent sync operations (auto-cleaned after 5 minutes if stale). Checksum-based conflict resolution: downloads remote, compares SHA-256 — identical content is a false conflict (skipped); when different, remote wins and local is recovered as a duplicate with a new UUID and `[RECOVERED FROM CONFLICT]` prefix (duplicate file cleaned up if metadata update fails). Auto-sync lifecycle: periodic polling (configurable interval, default 60s), debounced file-change (5s), window-focus (30s stale threshold). Wrapped in `tokio::time::timeout` (60s) to handle unreachable servers on Windows. Path traversal validation rejects `..` components and backslashes anywhere in sync paths. Atomic writes for sync state and queue files (temp cleanup on failure).
- **WebDAV** (`webdav.rs`): reqwest client with rustls-tls, 30s request timeout, 10s connect timeout. Rejects non-HTTPS URLs. `Zeroizing<String>` for credential fields. `move_resource` method for WebDAV MOVE (workspace rename). 10MB cap on both PROPFIND responses and file downloads. Desktop credentials via `keyring` crate (feature-gated); Tauri GUI uses `tauri-plugin-credentials` for cross-platform support (Android Keystore + desktop keychain).
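The path traversal rule described in the Sync bullet above can be sketched as a small predicate. This is illustrative only — `is_safe_sync_path` is a hypothetical helper, not the project's actual validator:

```rust
/// Sketch of the sync-path rule: reject any relative sync path containing
/// backslashes or `..` components. Hypothetical helper for illustration.
fn is_safe_sync_path(path: &str) -> bool {
    if path.contains('\\') {
        return false; // backslashes are rejected anywhere in sync paths
    }
    // Reject `..` components that could escape the workspace root
    !path.split('/').any(|component| component == "..")
}

fn main() {
    assert!(is_safe_sync_path("Groceries/Buy milk.md"));
    assert!(is_safe_sync_path(".onyx-workspace.json"));
    assert!(!is_safe_sync_path("../outside.md"));
    assert!(!is_safe_sync_path("lists/../../etc/passwd"));
    assert!(!is_safe_sync_path("lists\\task.md"));
}
```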
### On-disk format
Workspaces are plain folders. Each task list is a subfolder containing `.listdata.json` (metadata/ordering) and one `.md` file per task. The workspace root has `.onyx-workspace.json` for list ordering and workspace detection. Sync only processes files at expected depths: `.onyx-workspace.json` at root, `.listdata.json` and `*.md` inside list directories. Task frontmatter contains `id`, `status`, `version` (u64, increments via `saturating_add` on every write, defaults to 1 for legacy files), and optionally `due`, `has_time`, `parent` (omitted when false/null). `list_tasks` auto-deduplicates by UUID, keeping the highest-version file and deleting stale copies; when versions are equal, filesystem modification time is used as a tiebreaker.
### Tauri GUI structure
@@ -95,7 +95,7 @@ Pre-alpha. No users, no released builds, no data to migrate. Breaking changes to
- Per-workspace sync interval (configurable in settings)
- Upload/download counts in drawer footer (hidden when zero)
- Transient sync error suppression (connectivity issues update status dot only, no error banner)
- File watcher (notify crate, 500ms debounce, auto-reloads on external changes, emits `watcher-error` event to frontend on failures)
- WorkspaceMode enum (local/webdav) with per-workspace config, UUID-keyed workspaces
- Workspace rename (local folder rename + WebDAV MOVE for remote folders, with confirmation dialog)
- Desktop packaging (Linux: AppImage + .deb; Windows: MSI)
@@ -106,6 +106,7 @@ Pre-alpha. No users, no released builds, no data to migrate. Breaking changes to
- Custom confirmation dialogs (ConfirmDialog component replaces native confirm())
- Workspace path validation (rejects system directories)
- Task detail auto-cleanup (taskStack clears when viewed task is deleted or list switches)
- Accessibility: ARIA labels/roles on interactive components, keyboard handlers, `prefers-reduced-motion` CSS support
### GUI features NOT yet done


@@ -86,7 +86,7 @@ Task {
    status: TaskStatus,       // Backlog or Completed
    due_date: Option<DateTime>,
    has_time: bool,           // Whether due_date includes a specific time
    version: u64,             // Increments (saturating) on every write; used for sync dedup
    parent_id: Option<Uuid>,  // For subtasks
}


@@ -111,6 +111,8 @@ fn mute_watcher(_state: &mut AppState) {
fn mute_watcher(_state: &mut AppState) {}
/// Helper: get or open a TaskRepository for the current workspace.
/// Safe against double-init: it runs while holding the AppState Mutex, so the
/// check-and-set below cannot race.
fn ensure_repo(state: &mut AppState) -> Result<(), String> {
    if state.repo.is_some() {
        return Ok(());
@@ -119,7 +121,10 @@ fn ensure_repo(state: &mut AppState) -> Result<(), String> {
        .config
        .get_current_workspace()
        .map_err(|e| e.to_string())?;
    let path = ws.path.clone();
    // Use a separate variable to avoid borrow issues — the Mutex ensures
    // no concurrent access, so TOCTOU is not possible here.
    let repo = TaskRepository::new(path).map_err(|e| e.to_string())?;
    state.repo = Some(repo);
    Ok(())
}
@@ -587,7 +592,10 @@ async fn list_remote_folder(
        client.list_files(sp)
    }).collect();
    let results: Vec<_> = futures::future::join_all(checks).await
        .into_iter().map(|r| r.unwrap_or_else(|e| {
            eprintln!("Warning: failed to inspect remote subfolder: {}", e);
            Vec::new()
        })).collect();
    let folders = dir_entries.into_iter().zip(results).map(|(entry, sub_files)| {
        let is_workspace = sub_files.iter().any(|f| !f.is_dir && f.path == ".onyx-workspace.json");
@@ -616,7 +624,10 @@ async fn inspect_remote_workspace(
    } else {
        format!("{}/{}", path.trim_end_matches('/'), entry.path)
    };
    let files = client.list_files(&list_path).await.unwrap_or_else(|e| {
        eprintln!("Warning: failed to list remote folder '{}': {}", list_path, e);
        Vec::new()
    });
    let has_listdata = files.iter().any(|f| !f.is_dir && f.path == ".listdata.json");
    if has_listdata {
        let task_count = files.iter().filter(|f| !f.is_dir && f.path.ends_with(".md")).count();
@@ -803,7 +814,9 @@ fn start_watcher(handle: tauri::AppHandle, path: PathBuf) {
    std::time::Duration::from_millis(500),
    move |events: Result<Vec<notify_debouncer_mini::DebouncedEvent>, notify::Error>| {
        let Ok(events) = events else {
            let err = events.unwrap_err();
            eprintln!("File watcher error: {:?}", err);
            let _ = handle.emit("watcher-error", format!("{}", err));
            return;
        };
        // Only care about data file changes


@@ -154,3 +154,16 @@ body {
  --color-border-dark: #094959;
  --color-danger: #dc322f;
}
/* ── Accessibility: Reduced motion ───────────────────────────────── */
@media (prefers-reduced-motion: reduce) {
*,
*::before,
*::after {
animation-duration: 0.01ms !important;
animation-iteration-count: 1 !important;
transition-duration: 0.01ms !important;
scroll-behavior: auto !important;
}
}


@@ -1,17 +1,22 @@
<script lang="ts">
  import type { Snippet } from "svelte";
  let { onclose, children }: { onclose: () => void; children: Snippet } = $props();
</script>
<!-- Backdrop -->
<!-- svelte-ignore a11y_no_static_element_interactions -->
<div
  class="fixed inset-0 z-40 bg-black/40"
  role="button"
  tabindex="-1"
  aria-label="Close sheet"
  onclick={onclose}
  onkeydown={(e) => { if (e.key === "Escape") onclose(); }}
></div>
<!-- Sheet -->
<div
  role="dialog"
  aria-modal="true"
  class="fixed bottom-0 left-0 right-0 z-50 max-h-[70vh] overflow-y-auto rounded-t-2xl bg-surface-light shadow-xl dark:bg-card-dark animate-slide-up"
>
  <!-- Drag handle -->


@@ -3,9 +3,11 @@
  { message: string; detail?: string; confirmText?: string; danger?: boolean; onconfirm: () => void; oncancel: () => void } = $props();
</script>
<!-- svelte-ignore a11y_no_static_element_interactions -->
<div
  class="absolute inset-0 z-50 flex items-center justify-center"
  role="dialog"
  aria-modal="true"
  aria-label={message}
  onclick={oncancel}
  onkeydown={(e) => { if (e.key === "Escape") { e.stopPropagation(); oncancel(); } }}
>


@@ -111,11 +111,14 @@
{/if}
<!-- Task content -->
<!-- svelte-ignore a11y_no_static_element_interactions -->
<div
  class="group flex w-full cursor-pointer items-start gap-3 bg-surface-light py-3 pr-4 text-left hover:bg-black/5 dark:bg-surface-dark dark:hover:bg-white/5"
  style="padding-left: {1 + depth * 1.5}rem; transform: translateX({swipeX}px); transition: {swiping ? 'none' : 'transform 0.2s ease-out'}"
  role="button"
  tabindex="0"
  aria-label="Open task: {task.title}"
  onclick={() => onopen?.(task)}
  onkeydown={(e) => { if (e.key === "Enter" || e.key === " ") { e.preventDefault(); onopen?.(task); } }}
>
  <!-- Checkbox -->
  <button
@@ -153,8 +156,8 @@
  </span>
  {/if}
  {#if subtaskCount > 0}
    <span class="mt-1 inline-flex items-center gap-1 text-xs opacity-40" aria-label="{subtasks.filter(s => s.status === 'completed').length} of {subtaskCount} subtasks completed">
      <svg class="h-3 w-3" viewBox="0 0 20 20" fill="currentColor" aria-hidden="true">
        <path fill-rule="evenodd" d="M3 4a1 1 0 011-1h12a1 1 0 110 2H4a1 1 0 01-1-1zm2 4a1 1 0 011-1h10a1 1 0 110 2H6a1 1 0 01-1-1zm2 4a1 1 0 011-1h8a1 1 0 110 2H8a1 1 0 01-1-1z" />
      </svg>
      {subtasks.filter(s => s.status === "completed").length}/{subtaskCount}
@@ -163,7 +166,7 @@
  </div>
  <!-- Chevron -->
  <svg class="mt-1 h-4 w-4 shrink-0 opacity-0 transition-opacity group-hover:opacity-30" viewBox="0 0 20 20" fill="currentColor" aria-hidden="true">
    <path fill-rule="evenodd" d="M7.21 14.77a.75.75 0 01.02-1.06L11.168 10 7.23 6.29a.75.75 0 111.04-1.08l4.5 4.25a.75.75 0 010 1.08l-4.5 4.25a.75.75 0 01-1.06-.02z" />
  </svg>
</div>


@@ -367,9 +367,18 @@ function startAutoSync() {
  stopAutoSync();
  triggerSync();
  _syncInterval = setInterval(triggerSync, syncIntervalSecs * 1000);
  // onFocusChanged resolves to its unlisten function via a promise; store it
  // so stopAutoSync can remove the listener later.
  getCurrentWindow().onFocusChanged(({ payload: focused }) => {
    if (focused && Date.now() - lastSyncTime > SYNC_FOCUS_THRESHOLD_MS) triggerSync();
  }).then((unlisten) => {
    // If stopAutoSync was called while the promise was pending, immediately clean up
    if (!_syncInterval) {
      unlisten();
    } else {
      _focusUnlisten = unlisten;
    }
  }).catch((e) => {
    console.warn("Failed to set up focus listener:", e);
  });
}


@@ -79,7 +79,12 @@ impl TaskRepository {
        // If delete from source fails, roll back by removing the copy from destination
        if let Err(e) = self.storage.delete_task(from_list_id, task_id) {
            if let Err(rollback_err) = self.storage.delete_task(to_list_id, task_id) {
                // Rollback failed — task now exists in both lists.
                // Return an error describing the inconsistent state.
                return Err(Error::InvalidData(format!(
                    "move_task failed and rollback also failed: original error: {}, rollback error: {}. Task {} may exist in both lists.",
                    e, rollback_err, task_id
                )));
            }
            return Err(e);
        }


@@ -13,6 +13,8 @@ const MAX_TITLE_LENGTH: usize = 500;
const MAX_DESCRIPTION_LENGTH: usize = 1_000_000; // 1 MB
/// Maximum allowed length for list names.
const MAX_LIST_NAME_LENGTH: usize = 255;
/// Maximum allowed size for YAML frontmatter (64 KB) to prevent DoS via crafted files.
const MAX_FRONTMATTER_LENGTH: usize = 64 * 1024;
/// Workspace root metadata filename.
const WORKSPACE_METADATA_FILE: &str = ".onyx-workspace.json";
/// Per-list metadata filename.
@@ -23,11 +25,15 @@ const TASK_FILE_EXT: &str = "md";
const DEFAULT_TASK_VERSION: u64 = 1;
/// Write data to a temporary file then atomically rename to the target path.
/// Prevents corruption from partial writes on crash. Cleans up temp file on
/// rename failure to prevent accumulation.
fn atomic_write(path: &Path, content: &[u8]) -> std::io::Result<()> {
    let temp = path.with_extension("tmp");
    fs::write(&temp, content)?;
    if let Err(e) = fs::rename(&temp, path) {
        let _ = fs::remove_file(&temp);
        return Err(e);
    }
    Ok(())
}
@@ -279,6 +285,12 @@ impl FileSystemStorage {
    let frontmatter_lines = &lines[1..=end_idx];
    let frontmatter_str = frontmatter_lines.join("\n");
if frontmatter_str.len() > MAX_FRONTMATTER_LENGTH {
return Err(Error::InvalidData(format!(
"Frontmatter too large ({} bytes, max {})",
frontmatter_str.len(), MAX_FRONTMATTER_LENGTH
)));
}
    let frontmatter: TaskFrontmatter = serde_yaml::from_str(&frontmatter_str)?;
    let description = if end_idx + 2 < lines.len() {
@@ -292,7 +304,7 @@ impl FileSystemStorage {
fn write_markdown_with_frontmatter(&self, task: &Task) -> Result<String> {
    let mut frontmatter = TaskFrontmatter::from(task);
    frontmatter.version = task.version.saturating_add(1);
    let yaml = serde_yaml::to_string(&frontmatter)?;
    let mut content = String::new();
@@ -407,14 +419,15 @@ impl Storage for FileSystemStorage {
    let list_dir = self.list_dir_path(list_id)?;
    let task_path = self.task_file_path(&list_dir, &task);
    // Update metadata first so a crash between steps leaves an orphaned file
    // (recoverable) rather than an orphaned metadata entry (confusing).
    let mut list_metadata = self.read_list_metadata(list_id)?;
    list_metadata.task_order.retain(|&id| id != task_id);
    list_metadata.updated_at = Utc::now();
    self.write_list_metadata(&list_metadata)?;
    fs::remove_file(&task_path)?;
    Ok(())
}
@@ -453,7 +466,9 @@ impl Storage for FileSystemStorage {
        }
    }
    // Self-healing dedup: group by UUID, keep highest version, delete stale files.
    // When versions are equal, keep the file with the latest filesystem modification
    // time to avoid non-deterministic selection.
    let mut by_id: HashMap<Uuid, Vec<(PathBuf, Task)>> = HashMap::new();
    for entry in file_tasks {
        by_id.entry(entry.1.id).or_default().push(entry);
@@ -462,7 +477,17 @@ impl Storage for FileSystemStorage {
    let mut tasks = Vec::new();
    for (_id, mut entries) in by_id {
        if entries.len() > 1 {
            entries.sort_by(|a, b| {
// Primary: highest version first
let version_cmp = b.1.version.cmp(&a.1.version);
if version_cmp != std::cmp::Ordering::Equal {
return version_cmp;
}
// Tiebreaker: most recently modified file first
let mtime_a = fs::metadata(&a.0).and_then(|m| m.modified()).ok();
let mtime_b = fs::metadata(&b.0).and_then(|m| m.modified()).ok();
mtime_b.cmp(&mtime_a)
});
            for (stale_path, _) in entries.drain(1..) {
                if let Err(e) = fs::remove_file(&stale_path) {
                    eprintln!("Warning: failed to remove stale duplicate task file {:?}: {}", stale_path, e);
@@ -580,9 +605,8 @@ impl Storage for FileSystemStorage {
fn delete_list(&mut self, list_id: Uuid) -> Result<()> {
    let list_dir = self.list_dir_path(list_id)?;
    // Update root metadata first so a crash between steps leaves an orphaned
    // directory (recoverable) rather than an orphaned metadata entry.
    let mut root_metadata = self.read_root_metadata_internal()?;
    root_metadata.list_order.retain(|&id| id != list_id);
    if root_metadata.last_opened_list == Some(list_id) {
@@ -590,6 +614,8 @@ impl Storage for FileSystemStorage {
    }
    self.write_root_metadata_internal(&root_metadata)?;
    fs::remove_dir_all(&list_dir)?;
    Ok(())
}
@@ -1006,12 +1032,149 @@ mod tests {
    let tasks = storage.list_tasks(list.id).unwrap();
    assert_eq!(tasks.len(), 1);
    assert_eq!(tasks[0].id, task_id);
    // Both are version 1, so mtime tiebreaker picks the most recent file.
    // Verify only one .md file remains.
    let md_count = fs::read_dir(&list_dir).unwrap()
        .filter(|e| e.as_ref().unwrap().path().extension().and_then(|s| s.to_str()) == Some("md"))
        .count();
    assert_eq!(md_count, 1);
}
// --- Deduplication mtime tiebreaker ---
#[test]
fn test_dedup_equal_versions_uses_mtime_tiebreaker() {
let temp_dir = TempDir::new().unwrap();
let mut storage = init_storage(&temp_dir);
let list = storage.create_list("Dedup2".to_string()).unwrap();
let task_id = uuid::Uuid::new_v4();
let list_dir = storage.list_dir_path(list.id).unwrap();
// Create two files with the same UUID and same version (1)
let content_a = format!(
"---\nid: {}\nstatus: backlog\nversion: 1\n---\n\nVersion A (older)",
task_id
);
let content_b = format!(
"---\nid: {}\nstatus: backlog\nversion: 1\n---\n\nVersion B (newer)",
task_id
);
let path_a = list_dir.join("TaskA.md");
let path_b = list_dir.join("TaskB.md");
fs::write(&path_a, &content_a).unwrap();
// Sleep briefly so mtime differs
std::thread::sleep(std::time::Duration::from_millis(50));
fs::write(&path_b, &content_b).unwrap();
let tasks = storage.list_tasks(list.id).unwrap();
assert_eq!(tasks.len(), 1);
assert_eq!(tasks[0].id, task_id);
// The newer file (B) should win the mtime tiebreaker
assert_eq!(tasks[0].description, "Version B (newer)");
// Verify only one .md file remains
let md_count = fs::read_dir(&list_dir).unwrap()
.filter(|e| e.as_ref().unwrap().path().extension().and_then(|s| s.to_str()) == Some("md"))
.count();
assert_eq!(md_count, 1);
}
// --- Frontmatter size limit ---
#[test]
fn test_parse_frontmatter_rejects_oversized() {
let temp_dir = TempDir::new().unwrap();
let storage = init_storage(&temp_dir);
// Create content with frontmatter larger than 64KB
let huge = "x".repeat(70_000);
let content = format!(
"---\nid: 550e8400-e29b-41d4-a716-446655440000\nstatus: backlog\nversion: 1\nhuge: {}\n---\n\nBody",
huge
);
let result = storage.parse_markdown_with_frontmatter(&content);
assert!(result.is_err());
let err = result.unwrap_err().to_string();
assert!(err.contains("too large"), "Error should mention size: {}", err);
}
#[test]
fn test_parse_frontmatter_accepts_normal_size() {
let temp_dir = TempDir::new().unwrap();
let storage = init_storage(&temp_dir);
let content = "---\nid: 550e8400-e29b-41d4-a716-446655440000\nstatus: backlog\nversion: 1\n---\n\nDescription";
let result = storage.parse_markdown_with_frontmatter(content);
assert!(result.is_ok());
}
// --- Version saturating_add ---
#[test]
fn test_version_saturates_at_max() {
let temp_dir = TempDir::new().unwrap();
let mut storage = init_storage(&temp_dir);
let list = storage.create_list("MaxVer".to_string()).unwrap();
let mut task = Task::new("Saturate".to_string());
task.version = u64::MAX - 1;
storage.write_task(list.id, &task).unwrap();
let read_back = storage.read_task(list.id, task.id).unwrap();
assert_eq!(read_back.version, u64::MAX, "Version should saturate at u64::MAX");
// Writing again should not panic or wrap
storage.write_task(list.id, &read_back).unwrap();
let read_again = storage.read_task(list.id, task.id).unwrap();
assert_eq!(read_again.version, u64::MAX, "Version should stay at u64::MAX");
}
// --- Delete ordering: metadata before file ---
#[test]
fn test_delete_task_removes_from_metadata_first() {
let temp_dir = TempDir::new().unwrap();
let mut storage = init_storage(&temp_dir);
let list = storage.create_list("DelOrder".to_string()).unwrap();
let task = Task::new("ToDelete".to_string());
let task_id = task.id;
storage.write_task(list.id, &task).unwrap();
// Verify task is in metadata
let meta = storage.read_list_metadata(list.id).unwrap();
assert!(meta.task_order.contains(&task_id));
// Delete
storage.delete_task(list.id, task_id).unwrap();
// Verify metadata no longer contains the task
let meta_after = storage.read_list_metadata(list.id).unwrap();
assert!(!meta_after.task_order.contains(&task_id));
// Verify file is also gone
let list_dir = storage.list_dir_path(list.id).unwrap();
let md_count = fs::read_dir(&list_dir).unwrap()
.filter(|e| e.as_ref().unwrap().path().extension().and_then(|s| s.to_str()) == Some("md"))
.count();
assert_eq!(md_count, 0);
}
// --- Atomic write no leftover tmp ---
#[test]
fn test_atomic_write_no_leftover_tmp() {
let temp_dir = TempDir::new().unwrap();
let target = temp_dir.path().join("test.json");
atomic_write(&target, b"hello").unwrap();
let tmp_files: Vec<_> = fs::read_dir(temp_dir.path()).unwrap()
.filter_map(|e| e.ok())
.filter(|e| e.path().extension().and_then(|s| s.to_str()) == Some("tmp"))
.collect();
assert!(tmp_files.is_empty(), "No .tmp files should remain after atomic_write");
assert_eq!(fs::read_to_string(&target).unwrap(), "hello");
}
}


@@ -1,5 +1,5 @@
use std::collections::HashMap;
use std::path::{Path, PathBuf};
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use sha2::{Sha256, Digest};
@@ -8,6 +8,40 @@ use crate::error::{Error, Result};
use crate::storage::{ListMetadata, TaskFrontmatter};
use crate::webdav::WebDavClient;
/// File-based lock to prevent concurrent sync operations on the same workspace.
/// The lock file is automatically removed on drop.
#[derive(Debug)]
struct SyncLock {
path: PathBuf,
}
impl SyncLock {
fn acquire(workspace_path: &Path) -> Result<Self> {
let path = workspace_path.join(".sync.lock");
// Check for stale lock (older than 5 minutes)
if path.exists() {
if let Ok(meta) = std::fs::metadata(&path) {
if let Ok(modified) = meta.modified() {
if modified.elapsed().unwrap_or_default() > std::time::Duration::from_secs(300) {
let _ = std::fs::remove_file(&path);
}
}
}
}
// Try to create the lock file exclusively
match std::fs::OpenOptions::new().write(true).create_new(true).open(&path) {
Ok(_) => Ok(Self { path }),
Err(_) => Err(Error::Sync("Another sync operation is already in progress".into())),
}
}
}
impl Drop for SyncLock {
fn drop(&mut self) {
let _ = std::fs::remove_file(&self.path);
}
}
// --- Sync State ---
/// Persisted sync state for a workspace, stored as `.syncstate.json`.
@@ -293,7 +327,10 @@ impl OfflineQueue {
        // Atomic write: write to temp then rename
        let temp_path = workspace_path.join(".syncqueue.json.tmp");
        std::fs::write(&temp_path, &content)?;
-        std::fs::rename(&temp_path, &queue_path)?;
+        if let Err(e) = std::fs::rename(&temp_path, &queue_path) {
+            let _ = std::fs::remove_file(&temp_path);
+            return Err(e.into());
+        }
        Ok(())
    }
@@ -483,7 +520,11 @@ impl SyncState {
        // Atomic write: write to temp file then rename to prevent corruption on crash
        let temp_path = workspace_path.join(".syncstate.json.tmp");
        std::fs::write(&temp_path, &content)?;
-        std::fs::rename(&temp_path, &state_path)?;
+        if let Err(e) = std::fs::rename(&temp_path, &state_path) {
+            // Clean up temp file on rename failure to prevent accumulation
+            let _ = std::fs::remove_file(&temp_path);
+            return Err(e.into());
+        }
        Ok(())
    }
@@ -535,6 +576,8 @@ async fn sync_workspace_inner(
    mode: SyncMode,
    on_progress: Option<ProgressCallback>,
) -> Result<SyncResult> {
+    // Acquire exclusive lock to prevent concurrent syncs
+    let _lock = SyncLock::acquire(workspace_path)?;
    let client = WebDavClient::new(webdav_url, username, password)?;
    let mut sync_state = SyncState::load(workspace_path);
    let queue = OfflineQueue::load(workspace_path);
@@ -614,9 +657,16 @@ async fn sync_workspace_inner(
}
/// Validate that a sync path doesn't escape the workspace via path traversal.
+/// Rejects `..` components and backslashes (which could bypass validation on Windows
+/// after separator replacement).
fn validate_sync_path(path: &str) -> Result<()> {
+    // Reject backslashes anywhere in the path — they could be used to bypass
+    // forward-slash-based component splitting on Windows.
+    if path.contains('\\') {
+        return Err(Error::Sync(format!("Path traversal not allowed (backslash): {}", path)));
+    }
    for component in path.split('/') {
-        if component == ".." || component.contains('\\') {
+        if component == ".." {
            return Err(Error::Sync(format!("Path traversal not allowed: {}", path)));
        }
    }
@@ -707,22 +757,26 @@ async fn execute_action(
            let dup_path = list_dir.join(&dup_filename);
            std::fs::write(&dup_path, &new_content)?;
-            // Insert new task adjacent to original in .listdata.json
+            // Insert new task adjacent to original in .listdata.json.
+            // If metadata update fails, remove the duplicate file to
+            // avoid orphaned files that are invisible to the app.
            let listdata_path = list_dir.join(".listdata.json");
            if listdata_path.exists() {
-                if let Ok(content) = std::fs::read_to_string(&listdata_path) {
-                    if let Ok(mut metadata) = serde_json::from_str::<ListMetadata>(&content) {
-                        let insert_pos = metadata.task_order.iter()
-                            .position(|id| *id == original_id)
-                            .map(|p| p + 1)
-                            .unwrap_or(metadata.task_order.len());
-                        metadata.task_order.insert(insert_pos, new_id);
-                        if let Ok(json) = serde_json::to_string_pretty(&metadata) {
-                            if let Err(e) = std::fs::write(&listdata_path, json) {
-                                eprintln!("Warning: failed to update listdata after conflict recovery: {}", e);
-                            }
-                        }
-                    }
-                }
+                let metadata_updated = (|| -> std::result::Result<(), Box<dyn std::error::Error>> {
+                    let content = std::fs::read_to_string(&listdata_path)?;
+                    let mut metadata: ListMetadata = serde_json::from_str(&content)?;
+                    let insert_pos = metadata.task_order.iter()
+                        .position(|id| *id == original_id)
+                        .map(|p| p + 1)
+                        .unwrap_or(metadata.task_order.len());
+                    metadata.task_order.insert(insert_pos, new_id);
+                    let json = serde_json::to_string_pretty(&metadata)?;
+                    std::fs::write(&listdata_path, json)?;
+                    Ok(())
+                })();
+                if let Err(e) = metadata_updated {
+                    eprintln!("Warning: failed to update listdata after conflict recovery, removing duplicate: {}", e);
+                    let _ = std::fs::remove_file(&dup_path);
+                }
            }
        }
@@ -776,6 +830,13 @@ fn parse_frontmatter_for_conflict(content: &str) -> Result<(TaskFrontmatter, Str
    let end_idx = lines[1..].iter().position(|&line| line == "---")
        .ok_or_else(|| Error::InvalidData("Missing closing frontmatter delimiter".to_string()))?;
    let frontmatter_str = lines[1..=end_idx].join("\n");
+    // Reject oversized frontmatter to prevent DoS via crafted YAML
+    if frontmatter_str.len() > 64 * 1024 {
+        return Err(Error::InvalidData(format!(
+            "Frontmatter too large ({} bytes, max 65536)",
+            frontmatter_str.len()
+        )));
+    }
    let frontmatter: TaskFrontmatter = serde_yaml::from_str(&frontmatter_str)
        .map_err(|e| Error::Sync(format!("Failed to parse frontmatter: {}", e)))?;
    let description = if end_idx + 2 < lines.len() {
@@ -1257,4 +1318,112 @@ mod tests {
        assert_eq!(path_parent("file.md"), None);
        assert_eq!(path_parent("a/b/c.md"), Some("a/b"));
    }
// --- validate_sync_path tests ---
#[test]
fn test_validate_sync_path_rejects_dotdot() {
assert!(validate_sync_path("../etc/passwd").is_err());
assert!(validate_sync_path("list/../secret.md").is_err());
}
#[test]
fn test_validate_sync_path_rejects_backslash() {
assert!(validate_sync_path("list\\..\\secret.md").is_err());
assert!(validate_sync_path("list\\file.md").is_err());
assert!(validate_sync_path("a\\b").is_err());
}
#[test]
fn test_validate_sync_path_allows_valid_paths() {
assert!(validate_sync_path("list/task.md").is_ok());
assert!(validate_sync_path(".onyx-workspace.json").is_ok());
assert!(validate_sync_path("My Tasks/.listdata.json").is_ok());
}
// --- SyncLock tests ---
#[test]
fn test_sync_lock_prevents_concurrent_access() {
let temp_dir = TempDir::new().unwrap();
let _lock1 = SyncLock::acquire(temp_dir.path()).unwrap();
let result = SyncLock::acquire(temp_dir.path());
assert!(result.is_err());
assert!(result.unwrap_err().to_string().contains("already in progress"));
}
#[test]
fn test_sync_lock_released_on_drop() {
let temp_dir = TempDir::new().unwrap();
{
let _lock = SyncLock::acquire(temp_dir.path()).unwrap();
// lock held here
}
// lock dropped — should be able to acquire again
let _lock2 = SyncLock::acquire(temp_dir.path()).unwrap();
}
#[test]
fn test_sync_lock_file_cleaned_up_on_drop() {
let temp_dir = TempDir::new().unwrap();
let lock_path = temp_dir.path().join(".sync.lock");
{
let _lock = SyncLock::acquire(temp_dir.path()).unwrap();
assert!(lock_path.exists(), "Lock file should exist while held");
}
assert!(!lock_path.exists(), "Lock file should be removed after drop");
}
// --- Sync state temp cleanup ---
#[test]
fn test_sync_state_save_load_no_leftover_tmp() {
let temp_dir = TempDir::new().unwrap();
let state = SyncState {
last_sync: Some(Utc::now()),
files: HashMap::new(),
};
state.save(temp_dir.path()).unwrap();
// Verify no .tmp files remain
let tmp_files: Vec<_> = std::fs::read_dir(temp_dir.path()).unwrap()
.filter_map(|e| e.ok())
.filter(|e| e.path().extension().and_then(|s| s.to_str()) == Some("tmp"))
.collect();
assert!(tmp_files.is_empty(), "No .tmp files should remain after save");
}
// --- Queue save no leftover tmp ---
#[test]
fn test_queue_save_no_leftover_tmp() {
let temp_dir = TempDir::new().unwrap();
let queue = OfflineQueue {
operations: vec![QueuedOperation {
action_type: "upload".to_string(),
path: "test.md".to_string(),
queued_at: Utc::now(),
}],
};
queue.save(temp_dir.path()).unwrap();
let tmp_files: Vec<_> = std::fs::read_dir(temp_dir.path()).unwrap()
.filter_map(|e| e.ok())
.filter(|e| e.path().extension().and_then(|s| s.to_str()) == Some("tmp"))
.collect();
assert!(tmp_files.is_empty(), "No .tmp files should remain after queue save");
}
// --- Frontmatter size limit in conflict parsing ---
#[test]
fn test_parse_frontmatter_for_conflict_rejects_oversized() {
// Create content with frontmatter larger than 64KB
let huge_field = "x".repeat(70_000);
let content = format!("---\nid: 550e8400-e29b-41d4-a716-446655440000\nstatus: backlog\nversion: 1\nhuge: {}\n---\n\nBody", huge_field);
let result = parse_frontmatter_for_conflict(&content);
assert!(result.is_err());
let err = result.unwrap_err().to_string();
assert!(err.contains("too large"), "Error should mention size: {}", err);
}
}
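The RAII lock-file behaviour verified by the tests above can be demonstrated with a minimal standalone guard (hypothetical `LockGuard` name; the real `SyncLock` additionally removes stale locks older than five minutes):

```rust
use std::fs::{self, OpenOptions};
use std::io;
use std::path::{Path, PathBuf};

/// Exclusive file-based lock: `create_new(true)` fails if the lock file
/// already exists, so only one guard can hold the lock at a time.
/// The lock file is removed on drop, releasing the lock.
struct LockGuard {
    path: PathBuf,
}

impl LockGuard {
    fn acquire(dir: &Path) -> io::Result<Self> {
        let path = dir.join(".sync.lock");
        OpenOptions::new().write(true).create_new(true).open(&path)?;
        Ok(Self { path })
    }
}

impl Drop for LockGuard {
    fn drop(&mut self) {
        let _ = fs::remove_file(&self.path);
    }
}
```

The design choice here is that `create_new` maps to `O_CREAT | O_EXCL`, so the existence check and file creation happen in one atomic syscall, avoiding the check-then-create race a `path.exists()` test alone would have.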

View file

@@ -32,21 +32,22 @@ impl WebDavClient {
        if !base_url.starts_with("https://") {
            return Err(Error::WebDav("Refusing non-HTTPS URL: credentials would be sent in plaintext".into()));
        }
-        Ok(Self::new_unchecked(base_url, username, password))
+        Self::new_unchecked(base_url, username, password)
    }
-    fn new_unchecked(base_url: &str, username: &str, password: &str) -> Self {
+    fn new_unchecked(base_url: &str, username: &str, password: &str) -> Result<Self> {
        let base_url = base_url.trim_end_matches('/').to_string();
-        Self {
-            _client: Client::builder()
-                .timeout(REQUEST_TIMEOUT)
-                .connect_timeout(CONNECT_TIMEOUT)
-                .build()
-                .unwrap_or_else(|_| Client::new()),
+        let client = Client::builder()
+            .timeout(REQUEST_TIMEOUT)
+            .connect_timeout(CONNECT_TIMEOUT)
+            .build()
+            .map_err(|e| Error::WebDav(format!("Failed to build HTTP client: {}", e)))?;
+        Ok(Self {
+            _client: client,
            _base_url: base_url,
            _username: Zeroizing::new(username.to_string()),
            _password: Zeroizing::new(password.to_string()),
-        }
+        })
    }
    fn full_url(&self, path: &str) -> String {
@@ -750,7 +751,7 @@ mod tests {
    #[test]
    fn test_full_url_building() {
-        let client = WebDavClient::new_unchecked("http://example.com/dav/", "user", "pass");
+        let client = WebDavClient::new_unchecked("http://example.com/dav/", "user", "pass").unwrap();
        assert_eq!(client.full_url(""), "http://example.com/dav");
        assert_eq!(client.full_url("file.md"), "http://example.com/dav/file.md");
        assert_eq!(client.full_url("My Tasks/Buy groceries.md"), "http://example.com/dav/My%20Tasks/Buy%20groceries.md");
@@ -758,7 +759,7 @@ mod tests {
    #[test]
    fn test_full_url_strips_leading_slash() {
-        let client = WebDavClient::new_unchecked("http://example.com/dav", "user", "pass");
+        let client = WebDavClient::new_unchecked("http://example.com/dav", "user", "pass").unwrap();
        assert_eq!(client.full_url("/file.md"), "http://example.com/dav/file.md");
    }

View file

@@ -20,7 +20,7 @@ pub struct Task {
    pub status: TaskStatus,
    pub due_date: Option<DateTime<Utc>>,
    pub has_time: bool, // Whether due_date includes a specific time
-    pub version: u64, // Increments on every write; used for sync dedup
+    pub version: u64, // Increments (saturating) on every write; used for sync dedup
    pub parent_id: Option<Uuid>,
}
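The saturating increment mentioned in the `version` comment can be sketched as follows (hypothetical `bump_version` helper; in the codebase the bump happens inline in the write path):

```rust
/// Bump a task's sync version without wrapping back to zero on overflow,
/// so version comparisons used for sync dedup stay monotonic.
fn bump_version(version: u64) -> u64 {
    version.saturating_add(1)
}
```

A plain `version + 1` would panic in debug builds and silently wrap to 0 in release builds at `u64::MAX`, which would make a heavily-written task look older than every other copy during sync.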
@@ -377,8 +377,10 @@ client.delete_file("old-task.md").await?;
- **Sync state**: Stored in `.syncstate.json` within the workspace directory
- **Auto-sync**: Periodic polling (configurable `sync_interval_secs`), debounced file-change trigger (5s), window-focus trigger (30s stale threshold)
- **Response size cap**: PROPFIND responses and file downloads are limited to 10 MB (checked via `Content-Length` header and actual body size) to prevent memory exhaustion from malicious servers
-- **Path traversal protection**: Sync paths are validated to reject `..` and `\` components before any file system operation
+- **Path traversal protection**: Sync paths are validated to reject `..` components and backslashes anywhere in the path before any file system operation
-- **Atomic writes**: Sync state (`.syncstate.json`) and offline queue (`.syncqueue.json`) use atomic write pattern (temp file + rename) to prevent corruption on crash
+- **Concurrent sync lock**: File-based `.sync.lock` prevents overlapping sync operations on the same workspace. Stale locks older than 5 minutes are automatically cleaned up
+- **Atomic writes**: Sync state (`.syncstate.json`) and offline queue (`.syncqueue.json`) use atomic write pattern (temp file + rename, with cleanup on failure) to prevent corruption on crash
+- **Delete ordering**: Delete operations update metadata before removing files, so a crash between steps leaves an orphaned file (recoverable) rather than an orphaned metadata entry
- **Syncable files**: Only processes files at expected depths — `.onyx-workspace.json` at root (depth 1), `.listdata.json` and `*.md` inside list directories (depth 2)

## Error Handling
@@ -413,10 +415,11 @@ The storage layer enforces the following limits:
| List name | 255 characters | `InvalidData` |
| WebDAV file download | 10 MB | `WebDav` |
| PROPFIND response | 10 MB | `WebDav` |
+| YAML frontmatter | 65,536 bytes (64 KB) | `InvalidData` |

### Atomic Writes

-All metadata and state files use an atomic write pattern (write to `.tmp` then rename) to prevent data corruption if the process crashes mid-write:
+All metadata and state files use an atomic write pattern (write to `.tmp` then rename) to prevent data corruption if the process crashes mid-write. If the rename step fails, the `.tmp` file is cleaned up to prevent accumulation. Affected files:

- `.onyx-workspace.json` (root metadata)
- `.listdata.json` (list metadata)
@@ -427,7 +430,7 @@ All metadata and state files use an atomic write pattern (write to `.tmp` then r
### Path Safety

- **List names**: Rejected if they contain `/`, `\`, or `..` components. Canonicalized and verified to stay within workspace root.
-- **Sync paths**: Validated to reject `..` and `\` before any file system operation.
+- **Sync paths**: Validated to reject `..` components and backslashes anywhere in the path before any file system operation.
- **Workspace paths** (Tauri): Rejected if they point to system directories (`/etc`, `/usr`, `/bin`, etc.).
- **Filenames**: Sanitized to replace `/ \ : * ? " < > |` and control characters with `_`.
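The filename-sanitization rule in the last bullet can be sketched as follows (a hedged sketch; the actual sanitizer in the codebase may differ in detail):

```rust
/// Replace the reserved characters `/ \ : * ? " < > |` and ASCII control
/// characters with `_`, matching the Path Safety rules above.
fn sanitize_filename(name: &str) -> String {
    name.chars()
        .map(|c| match c {
            '/' | '\\' | ':' | '*' | '?' | '"' | '<' | '>' | '|' => '_',
            c if c.is_control() => '_',
            c => c,
        })
        .collect()
}
```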