Merge pull request #31 from SteelDynamite/claude/audit-docs-quality-5dXc8
Add comprehensive project audit document
This commit is contained in: commit 4de300cb76

AUDIT.md — new file (+487 lines)

@@ -0,0 +1,487 @@
# Onyx Project Audit

**Date:** 2026-04-06
**Scope:** Full codebase audit for quality, simplicity, security, and maintainability
**Codebase:** ~5,047 lines of Rust plus a Svelte 5 frontend across 5 crates/packages

---

## Executive Summary

Onyx is well-architected with clean separation of concerns, a solid storage abstraction, and thoughtful security defaults (HTTPS-only WebDAV, zeroized credentials, path traversal protection). However, the audit identified **67 findings** across security, data integrity, code quality, and testing. The most critical issues are: missing path validation in sync operations, non-atomic file writes risking data corruption on crash, a logic bug in task reordering, and zero CI/CD infrastructure.

### Findings by Severity

| Severity | Count | Key Examples |
|----------|-------|--------------|
| **CRITICAL** | 5 | Path traversal in sync, reorder logic bug, no CI/CD |
| **HIGH** | 12 | Non-atomic writes, incomplete `move_task` rollback, no file size limits |
| **MEDIUM** | 25 | Swallowed errors, missing input validation, accessibility gaps |
| **LOW** | 25 | Code duplication, magic numbers, minor inefficiencies |

---

## 1. Security

### 1.1 CRITICAL: Path Traversal in Sync Operations

**Files:** `sync.rs:603, 622, 702, 715`

The sync module joins remote paths directly to the local workspace path without validation:

```rust
let local_path = workspace_path.join(path.replace('/', std::path::MAIN_SEPARATOR_STR));
```

If `path` contains `../`, sync can write files outside the workspace directory. The storage layer has robust path traversal protection (`storage.rs:154-172`, with canonicalization checks), but the sync module bypasses it entirely.

**Recommendation:** Validate every remote path before any file operation in sync. Note that a plain `local_path.starts_with(workspace_path)` check after `join` is not sufficient, because `Path::starts_with` compares components lexically and `ws/../x` still starts with `ws`. Reject parent and absolute components outright, and return an error rather than asserting:

```rust
use std::path::{Component, Path};

let rel = Path::new(&path);
if rel.components().any(|c| {
    matches!(c, Component::ParentDir | Component::RootDir | Component::Prefix(_))
}) {
    return Err(Error::Sync(format!("unsafe remote path: {path}")));
}
let local_path = workspace_path.join(rel);
```

### 1.2 HIGH: No File Size Limits on Downloads

**File:** `webdav.rs:116-133`

`get_file()` downloads entire files into memory with no size limit. PROPFIND responses are correctly capped at 10MB (`MAX_PROPFIND_BYTES`), but actual file downloads are unbounded. A malicious or compromised WebDAV server could cause an out-of-memory crash.

**Recommendation:** Enforce a configurable maximum file size (e.g., 10MB) on `get_file()`.
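
One way to enforce the cap without trusting the server's Content-Length header is to read the body in chunks and abort once the limit is exceeded. A minimal sketch over any `std::io::Read` source — the `MAX_DOWNLOAD_BYTES` constant is the audit's suggestion, not existing code:

```rust
use std::io::{self, Read};

const MAX_DOWNLOAD_BYTES: u64 = 10 * 1024 * 1024; // suggested 10MB cap

/// Read at most `limit` bytes; error out if the source has more.
fn read_capped<R: Read>(mut src: R, limit: u64) -> io::Result<Vec<u8>> {
    let mut buf = Vec::new();
    // take(limit + 1): if we actually receive limit + 1 bytes, the source was too large.
    src.by_ref().take(limit + 1).read_to_end(&mut buf)?;
    if buf.len() as u64 > limit {
        return Err(io::Error::new(
            io::ErrorKind::InvalidData,
            "download exceeds size limit",
        ));
    }
    Ok(buf)
}
```

In `get_file()` this would wrap the response's byte stream; it is shown over a generic reader here so the guard can be tested in isolation.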

### 1.3 HIGH: No Input Size Limits

**Files:** `storage.rs`, `models.rs`, Tauri commands

No maximum length is enforced on task titles, descriptions, or list names at any layer. A 1GB task description would be fully buffered in memory during read, write, and sync operations.

**Recommendation:** Add `const MAX_TASK_SIZE: usize = 10_000_000` and `const MAX_TITLE_LENGTH: usize = 512` with validation at the storage boundary.
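
A sketch of that boundary check, using the two suggested constants (the function name and error type are illustrative, not existing code):

```rust
const MAX_TASK_SIZE: usize = 10_000_000; // suggested cap on serialized task bytes
const MAX_TITLE_LENGTH: usize = 512;     // suggested cap on title characters

/// Validate user-supplied fields before they reach storage.
fn validate_task(title: &str, serialized_len: usize) -> Result<(), String> {
    if title.trim().is_empty() {
        return Err("task title cannot be empty".into());
    }
    if title.chars().count() > MAX_TITLE_LENGTH {
        return Err(format!("task title exceeds {} characters", MAX_TITLE_LENGTH));
    }
    if serialized_len > MAX_TASK_SIZE {
        return Err(format!("task exceeds {} bytes", MAX_TASK_SIZE));
    }
    Ok(())
}
```

Placing the check at the storage boundary means every caller (CLI, Tauri commands, sync) inherits the limits for free.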

### 1.4 MEDIUM: WebDAV URL Not Validated at Save Time

**File:** Tauri `set_webdav_config` command

WebDAV URLs are saved to config without validation. HTTPS enforcement happens in `webdav.rs:28-31` at connection time, so a user could save an HTTP URL and not discover the error until sync. Additionally, an empty domain from a failed URL parse could create a catch-all keychain entry.

**Recommendation:** Validate the URL format and HTTPS scheme at save time.
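
A dependency-free sketch of the save-time check; a real implementation might instead parse with the `url` crate, and the function name here is hypothetical:

```rust
/// Reject non-HTTPS or host-less URLs before persisting them to config.
fn validate_webdav_url(url: &str) -> Result<(), String> {
    let rest = url
        .strip_prefix("https://")
        .ok_or_else(|| format!("WebDAV URL must use https://, got: {url}"))?;
    // The host is everything before the first '/'; it must be non-empty,
    // which also prevents the empty-domain catch-all keychain entry.
    let host = rest.split('/').next().unwrap_or("");
    if host.is_empty() {
        return Err("WebDAV URL is missing a host".into());
    }
    Ok(())
}
```

Running the same check at connection time as well keeps `webdav.rs` defensive even if a config file is edited by hand.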

### 1.5 MEDIUM: Workspace Paths Not Validated in Tauri

**File:** Tauri `init_workspace`, `watch_workspace` commands

These commands accept arbitrary paths from the frontend without backend validation. A malicious frontend call could target system directories.

**Recommendation:** Validate that paths are within the user home directory or explicitly allowed locations.

### 1.6 LOW: CSP Allows `unsafe-inline` Styles

**File:** `tauri.conf.json`

`style-src 'unsafe-inline'` is set, which allows inline CSS injection if a DOM-based XSS exists. Acceptable for a desktop app, but could be tightened.

### 1.7 Security Strengths

- HTTPS-only enforcement for WebDAV (credentials never sent in plaintext)
- `Zeroizing<String>` for all credential fields, with platform keyring storage
- Path traversal protection in the storage layer (blacklist + canonicalize + prefix check)
- PROPFIND responses capped at 10MB
- 30s request timeout, 10s connect timeout
- `quick-xml` streaming parser, immune to XXE and billion-laughs attacks
- No header injection or URL injection vulnerabilities found
- No `unsafe` code blocks anywhere in the codebase
- No XSS vectors in the Svelte frontend (no `{@html}` usage found)

---

## 2. Data Integrity

### 2.1 HIGH: No Atomic Writes

**Files:** `sync.rs:462, 289, 671, 684` | `storage.rs:250, 315, 423, 526, 557` | `config.rs:117`

All file writes use `fs::write()` directly. If the process crashes mid-write, files are left in a corrupted partial state. This affects:

- **Sync state** (`.syncstate.json`) — corrupt state silently resets, losing all sync metadata
- **Offline queue** — corrupt queue silently resets, losing queued operations
- **Task files** — partial writes produce unparseable markdown
- **Config** — app config lost on a crash during save

**Recommendation:** Use the atomic write pattern everywhere: write to a temporary file, flush it to disk, then rename over the target.

```rust
use std::{fs, io::{self, Write}, path::Path};

fn atomic_write(path: &Path, content: &[u8]) -> io::Result<()> {
    let temp = path.with_extension("tmp");
    let mut f = fs::File::create(&temp)?;
    f.write_all(content)?;
    f.sync_all()?; // flush to disk before the rename, or a crash can still lose data
    fs::rename(&temp, path)?; // atomic on most filesystems
    Ok(())
}
```

### 2.2 HIGH: Multi-Step Operations Not Transactional

**Files:** `sync.rs:676-688`, `storage.rs:314-322`, `storage.rs:518-526`

Several operations involve multiple file writes that can leave inconsistent state on partial failure:

- **Conflict recovery** (`sync.rs`): overwrites the local file, creates a duplicate, updates metadata — a crash between steps loses data
- **write_task** (`storage.rs`): writes the task file, then updates metadata — a crash in between leaves an orphaned file
- **rename_list** (`storage.rs`): renames the directory, then writes metadata — a crash in between makes the list inaccessible

### 2.3 MEDIUM: Sync State Corruption Silently Resets

**File:** `sync.rs:454`

```rust
serde_json::from_str(&content).unwrap_or_default()
```

If `.syncstate.json` is corrupted, it is silently replaced with an empty state. The next sync will re-upload everything, potentially overwriting newer remote data with stale local data. The user should be warned instead.

### 2.4 MEDIUM: Concurrent Access Not Protected

No file-locking mechanism exists. If two processes (or multiple devices via a shared filesystem) access the same workspace simultaneously, data corruption is possible. The Tauri layer uses a Mutex for in-process safety, but there is no cross-process protection.
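
Cross-process exclusion could be sketched with a lock file created via `create_new`, which fails if the file already exists. This is a minimal stdlib-only illustration — the `.onyx.lock` name is hypothetical, and production code would more likely use an advisory-locking crate (e.g., `fd-lock`) plus stale-lock recovery:

```rust
use std::{fs, io, path::{Path, PathBuf}};

/// Guard that holds an exclusive workspace lock file; released on drop.
struct WorkspaceLock {
    path: PathBuf,
}

impl WorkspaceLock {
    fn acquire(workspace: &Path) -> io::Result<WorkspaceLock> {
        let path = workspace.join(".onyx.lock"); // hypothetical lock-file name
        // create_new fails with AlreadyExists if another process holds the lock.
        fs::OpenOptions::new().write(true).create_new(true).open(&path)?;
        Ok(WorkspaceLock { path })
    }
}

impl Drop for WorkspaceLock {
    fn drop(&mut self) {
        let _ = fs::remove_file(&self.path);
    }
}
```

Note that lock files do not help for the multi-device shared-filesystem case, where the sync protocol itself has to tolerate concurrent writers.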

---

## 3. Logic Bugs

### 3.1 CRITICAL: Task Reorder Index Calculation Bug

**File:** `repository.rs:88-106`

```rust
metadata.task_order.remove(current_pos);
let new_pos = new_position.min(metadata.task_order.len());
metadata.task_order.insert(new_pos, task_id);
```

After removing at `current_pos`, every later index shifts down by one. If `new_position` refers to the original index space, insertion into the shifted list is off by one for forward moves:

- List `[A, B, C, D]`, move B (pos 1) to pos 3: after removal `[A, C, D]`, insert at min(3, 3) = 3 gives `[A, C, D, B]`. Under "insert before original index 3 (i.e., before D)" the expected result is `[A, C, B, D]`.
- Move C (pos 2) to pos 1: after removal `[A, B, D]`, insert at 1 gives `[A, C, B, D]`. Backward moves happen to agree under both readings.

The `min()` clamping masks the problem at the tail of the list, but the semantics are ambiguous: does `new_position` mean "insert before this index in the original list" or "this index in the resulting list"? The intended contract should be documented explicitly and tested for all boundary conditions.
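
If the intended contract is "`new_position` is the task's index in the resulting list", the adjustment can be made explicit. A sketch over plain vectors, not the actual `repository.rs` code:

```rust
/// Move the element at `from` so it ends up at index `to` in the resulting list.
fn reorder<T>(order: &mut Vec<T>, from: usize, to: usize) {
    let item = order.remove(from);
    let to = to.min(order.len()); // clamp after removal shrinks the list
    order.insert(to, item);
}
```

Under the other reading ("insert before index `to` of the original list"), a forward move would need `to - 1` after the removal; whichever contract is chosen should be stated in the doc comment and pinned by boundary tests.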

### 3.2 HIGH: Incomplete move_task Rollback

**File:** `repository.rs:76-85`

```rust
pub fn move_task(&mut self, ...) -> Result<()> {
    let task = self.storage.read_task(from_list_id, task_id)?;
    self.storage.write_task(to_list_id, &task)?;
    if let Err(e) = self.storage.delete_task(from_list_id, task_id) {
        let _ = self.storage.delete_task(to_list_id, task_id); // rollback
        return Err(e);
    }
    Ok(())
}
```

If `write_task` partially succeeds (file written but metadata not updated) and `delete_task` then fails, the rollback `delete_task` on the destination may also partially fail. The task could end up in both lists or in neither.

### 3.3 MEDIUM: Unwrap That Can Panic in Production

**File:** `storage.rs:393`

```rust
let (_, task) = entries.into_iter().next().unwrap();
```

After the deduplication loop drains all but one entry, this unwrap should always succeed. But if the entries vector is somehow empty (e.g., all files unreadable), it panics. Use `.ok_or_else(|| Error::InvalidData(...))` instead.

### 3.4 LOW: O(n^2) Deleted-File Detection in Sync

**File:** `sync.rs:775`

```rust
for path in sync_state.files.keys() {
    if !local_files.iter().any(|f| f.path == *path) { // linear scan per key
```

Use a `HashSet` of local file paths instead.
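
Building the set once drops the cost from O(n·m) to O(n + m). A standalone sketch with plain strings standing in for the real file entries:

```rust
use std::collections::HashSet;

/// Return the tracked paths that no longer exist locally.
fn deleted_paths<'a>(tracked: &'a [String], local: &[String]) -> Vec<&'a String> {
    let local_set: HashSet<&String> = local.iter().collect(); // built once, O(m)
    tracked.iter().filter(|p| !local_set.contains(*p)).collect() // O(1) lookups
}
```

For workspaces of a few hundred tasks the quadratic scan is invisible, which is why this is only a LOW finding, but the set-based version is no harder to write.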

---

## 4. Error Handling

### 4.1 HIGH: Errors Silently Swallowed

Multiple locations silently ignore errors with `let _ =`:

| File | Line(s) | What's Ignored |
|------|---------|----------------|
| `sync.rs` | 268 | Queue backup creation failure |
| `sync.rs` | 284 | Queue file removal failure |
| `sync.rs` | 684-685 | `.listdata.json` metadata write failure during conflict recovery |
| `storage.rs` | 390 | Stale file deletion during dedup |
| Tauri commands | various | `watch_workspace` errors logged to console only |
| Svelte | `SettingsScreen:29` | `loadCredentials` error completely swallowed with `.catch(() => {})` |

**Recommendation:** At minimum, log all swallowed errors. For data-affecting operations, propagate errors to the user.

### 4.2 MEDIUM: Error Type Loses Context

**File:** `error.rs`

The `Error` enum uses `String` for most variants (`Serialization(String)`, `WebDav(String)`, `Sync(String)`). This discards the original error chain — `serde_json::Error` line/column info, `reqwest::Error` kind, and so on.

There is no `source()` implementation, so error-chain traversal is impossible.

**Recommendation:** Consider the `thiserror` crate, or structured error variants with context fields.
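
With `thiserror` this is a `#[source]` attribute on the field; the same chain can also be built by hand with only the standard library, which the sketch below shows (the variant name is illustrative, not the actual `error.rs` enum):

```rust
use std::{error, fmt};

#[derive(Debug)]
enum Error {
    // Keep the original error as a field instead of flattening it to a String.
    Serialization(Box<dyn error::Error>),
}

impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Error::Serialization(_) => write!(f, "serialization failed"),
        }
    }
}

impl error::Error for Error {
    // source() exposes the underlying cause for chain traversal.
    fn source(&self) -> Option<&(dyn error::Error + 'static)> {
        match self {
            Error::Serialization(e) => Some(e.as_ref()),
        }
    }
}
```

Callers (and crates like `anyhow`) can then walk `source()` to report "serialization failed: bad json at line 3" instead of a flattened string.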

### 4.3 MEDIUM: Sync State Corruption Not Reported

**File:** `sync.rs:454`

A corrupt sync state file is silently replaced with an empty default. The user loses all sync metadata with no notification.

### 4.4 MEDIUM: Frontend Error Messages Are Raw Backend Strings

**File:** `app.svelte.ts` (14+ locations)

All error handling is `error = String(e)`, showing raw Rust error messages to users. There is no user-friendly error translation layer.

---

## 5. Code Quality & Simplicity

### 5.1 HIGH: Overly Large Files

| File | Lines | Recommendation |
|------|-------|----------------|
| `sync.rs` | 1,221 | Split into `sync_state.rs`, `sync_actions.rs`, `sync_engine.rs`, `conflict.rs` |
| `storage.rs` | 925 | Extract `frontmatter.rs`, `dedup.rs`, `metadata.rs` |
| `webdav.rs` | 775 | Extract `propfind.rs`, `credentials.rs` |
| `TasksScreen.svelte` | 667 | Extract drawer, header, and drag-drop into components |
| `TaskDetailView.svelte` | 419 | Extract subtask section, menus, date-picker integration |

### 5.2 MEDIUM: Hardcoded Magic Numbers

Constants are scattered throughout without named definitions:

| Value | Location | Purpose |
|-------|----------|---------|
| `10 * 1024 * 1024` | `webdav.rs:103` | PROPFIND response cap |
| `Duration::from_secs(30)` | `webdav.rs:7,39` | Request timeout (defined as a const but also hardcoded) |
| `Duration::from_secs(10)` | `webdav.rs:40` | Connect timeout |
| `usize::MAX` | `storage.rs:404` | Unordered-task sentinel |
| `1` | `storage.rs:52` | Default task version |
| `.md`, `.listdata.json`, `.onyx-workspace.json` | scattered | File extensions/names repeated as string literals |

**Recommendation:** Define all of these as named constants.

### 5.3 MEDIUM: Duplicated Frontend Logic

- **Date formatting** duplicated in `TaskItem.svelte`, `NewTaskInput.svelte`, `TaskDetailView.svelte`
- **Menu click-outside handlers** duplicated in `TasksScreen.svelte` and `TaskDetailView.svelte`
- **Error-handling pattern** (`error = String(e)`) repeated 14+ times

**Recommendation:** Extract these into shared utilities/composables.

### 5.4 LOW: Unused Dependency

`wiremock 0.6` is in `onyx-core` dev-dependencies but is not used in any tests. Either use it for WebDAV integration tests or remove it.

---

## 6. Testing

### 6.1 CRITICAL: No CI/CD Pipeline

There are no `.github/workflows/`, no `Makefile`, and no pre-commit hooks. Nothing prevents broken code from being committed.

**Recommendation:** Add GitHub Actions with:

- `cargo test` (all platforms)
- `cargo clippy`
- `cargo fmt --check`
- `cargo audit` (security)
- Frontend lint/build check
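
A minimal workflow covering those checks might look like the following sketch; the file path and job layout are suggestions, not existing configuration, and a separate job running the frontend package's lint/build scripts would complete the list:

```yaml
# .github/workflows/ci.yml (proposed)
name: CI
on: [push, pull_request]
jobs:
  rust:
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - run: cargo fmt --check
      - run: cargo clippy -- -D warnings
      - run: cargo test --workspace
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cargo install cargo-audit && cargo audit
```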

### 6.2 HIGH: Test Coverage Gaps

**107 tests total** — good for core logic, but with major gaps:

| Category | Status | Gap |
|----------|--------|-----|
| Core business logic | Good (93 tests) | Edge cases missing |
| WebDAV network ops | Not tested | `wiremock` imported but unused |
| Async/sync engine | Not tested | No `#[tokio::test]` found |
| CLI commands | 0 tests | Entire crate untested |
| Tauri commands | 0 tests | All commands untested |
| Frontend | 0 tests | No component or integration tests |
| Security | 0 tests | No path traversal, malformed input, or auth failure tests |
| Concurrent access | 0 tests | No race-condition tests |

### 6.3 MEDIUM: Tests Only Cover the Happy Path

Existing tests verify correct behavior but rarely cover:

- Network failures and timeouts
- Corrupted/malformed files
- Boundary conditions (empty lists, max-length strings, Unicode edge cases)
- Partial failure and recovery
- Concurrent modifications

### 6.4 Specific Missing Test Cases

**sync.rs:**

- Path traversal attempts in remote file paths
- Corrupted sync state recovery
- Large file handling
- Network failure during multi-file sync
- Concurrent sync attempts

**storage.rs:**

- The line 393 unwrap scenario (empty entries after dedup)
- Non-UTF-8 filenames
- Symlink handling
- Files exceeding memory limits

**repository.rs:**

- `move_task` rollback failure scenarios
- `reorder_task` boundary positions (0, len, > len)
- Circular parent relationships
- Empty task titles after sanitization

---

## 7. Frontend (Svelte)

### 7.1 HIGH: Sliding Panel State Corruption

**File:** `TasksScreen.svelte`

Multiple scenarios can leave `taskStack` in an inconsistent state:

1. **Deleted task in the detail panel** — `taskStack` still holds the deleted ID, `parentTask` becomes null, and the panel shows empty content
2. **List switch with an open detail panel** — tasks from the old list are gone, so the detail panel shows null
3. **Rapid back navigation** — state changes during a CSS transition can leave panels at the wrong positions
4. **Sync conflict dedup** — may remove the task ID currently in `taskStack`

**Recommendation:** Clear `taskStack` when tasks are deleted, lists are switched, or the workspace changes.

### 7.2 HIGH: Accessibility

- **16 instances** of `svelte-ignore a11y_no_static_element_interactions` across 7 files
- Only **1 `aria-label`** in the entire frontend
- No focus traps in modals (Settings, ConfirmDialog)
- No keyboard Tab navigation in menus
- Missing ARIA roles on drawer, menus, and overlay elements

### 7.3 MEDIUM: No List Virtualization

All tasks are rendered as DOM nodes. For workspaces with 1000+ tasks this will cause performance problems: the `{#each}` loops in `TasksScreen.svelte` render every task without windowing.

### 7.4 MEDIUM: Race Conditions in State Management

- `triggerSync()` can fire while the user is editing (no edit lock during sync)
- `toggleTask()` updates the UI optimistically but re-fetches from the backend — sync conflicts can show stale data
- Workspace switch doesn't fully reset all component state
- `onFocusChanged` setup has no `.catch()` — unhandled promise rejection

### 7.5 LOW: Double requestAnimationFrame

**File:** `TaskItem.svelte:29-31`

Nested `requestAnimationFrame(() => requestAnimationFrame(...))` chains can cause jank. Use a single RAF with proper timing.

---

## 8. Tauri Backend

### 8.1 State Management: GOOD

The Tauri backend has well-designed state management:

- A dedicated `lock_state()` helper converts poisoned Mutex locks into errors (never panics)
- No nested locks detected
- Short critical sections
- Async operations release locks before `.await`

### 8.2 MEDIUM: Method Unwraps in WebDAV

**File:** `webdav.rs:67, 89, 175, 195`

```rust
reqwest::Method::from_bytes(b"PROPFIND").unwrap()
```

These are safe in practice (hardcoded valid HTTP methods) but violate defensive coding. Use `.expect("PROPFIND is a valid HTTP method")` with a justification comment instead.

---

## 9. Dependencies

### 9.1 Dependency Summary

| Crate | Version | Notes |
|-------|---------|-------|
| tokio | 1.40 | Current; `full` feature (consider trimming for binary size) |
| reqwest | 0.12 | Using `rustls-tls` (good — no OpenSSL dependency) |
| serde/serde_json/serde_yaml | 1.0/1.0/0.9 | Standard, well-maintained |
| quick-xml | 0.36 | Current; streaming parser (safe from XXE) |
| keyring | 3.0 | Platform-native credential storage |
| zeroize | 1.0 | Credential memory safety |
| tauri | 2.x | Current major version |
| notify | 7.0 | File watching (feature-gated for mobile) |
| chrono | 0.4 | `1.0` specified in `onyx-cli` but `0.4` in the workspace — version mismatch |
| wiremock | 0.6 | Dev dependency — imported but not used |

### 9.2 Recommendations

- Run `cargo audit` regularly (there is currently no CI to automate this)
- Remove the unused `wiremock` or write WebDAV integration tests with it
- Fix the `chrono` version discrepancy between the workspace (`0.4`) and `onyx-cli` (`1.0`)
- No certificate pinning for WebDAV servers — acceptable for general use, but worth noting

---

## 10. Prioritized Recommendations

### Phase 1: Critical Fixes (Immediate)

1. **Add path validation in sync.rs** — canonicalize + prefix check before all file operations
2. **Set up CI/CD** — GitHub Actions with `cargo test`, `clippy`, `fmt`, `audit`
3. **Implement atomic writes** — temp file + rename for all state files (sync state, config, metadata)
4. **Add file size limits** — cap downloads and task file sizes at 10MB
5. **Fix or document reorder_task semantics** — clarify index behavior, add boundary tests

### Phase 2: High-Priority Improvements (Short-term)

6. **Stop swallowing errors** — log or propagate all `let _ =` patterns
7. **Fix move_task rollback** — ensure transactional behavior or document the limitations
8. **Replace the dangerous unwrap at storage.rs:393** — use proper error handling
9. **Clear taskStack on state changes** — prevent stale panel state in the frontend
10. **Add WebDAV integration tests** — use the already-imported `wiremock`

### Phase 3: Quality & Maintainability (Medium-term)

11. **Split large files** — sync.rs, storage.rs, webdav.rs into focused modules
12. **Extract named constants** — replace all magic numbers and repeated string literals
13. **Improve error types** — add context fields, implement `source()`, consider `thiserror`
14. **Add accessibility** — ARIA labels, focus traps, keyboard navigation
15. **Deduplicate frontend code** — shared date formatting, menu handlers, error display

### Phase 4: Hardening (Longer-term)

16. **Add security tests** — path traversal, malformed YAML/JSON, auth failures, oversized payloads
17. **Add concurrent-access tests** — race conditions, multi-device scenarios
18. **Validate models** — enforce invariants (no circular parents, non-empty titles, valid paths)
19. **Add frontend tests** — component tests for critical flows (panel navigation, sync status)
20. **Implement streaming for large files** — avoid buffering entire file contents in memory

---

## Appendix: Files Audited

| File | Lines | Tests | Findings |
|------|-------|-------|----------|
| `onyx-core/src/sync.rs` | 1,221 | 29 | 15 |
| `onyx-core/src/storage.rs` | 925 | 26 | 11 |
| `onyx-core/src/webdav.rs` | 775 | 14 | 8 |
| `onyx-core/src/repository.rs` | 459 | 24 | 5 |
| `onyx-core/src/config.rs` | 286 | 14 | 5 |
| `onyx-core/src/models.rs` | ~100 | 0 | 3 |
| `onyx-core/src/error.rs` | ~60 | 0 | 2 |
| `apps/tauri/src-tauri/src/*.rs` | ~700 | 0 | 6 |
| `apps/tauri/src/**/*.svelte` | ~2,500 | 0 | 12 |
| **Total** | **~7,000** | **107** | **67** |

CLAUDE.md — 12 lines changed

@@ -35,11 +35,11 @@ Two-crate workspace (`resolver = "2"`, edition 2021) plus a Tauri app:
 ### Key patterns

-- **Storage trait** (`storage.rs`): Strategy pattern for task persistence. `FileSystemStorage` reads/writes markdown files with YAML frontmatter and JSON metadata files.
+- **Storage trait** (`storage.rs`): Strategy pattern for task persistence. `FileSystemStorage` reads/writes markdown files with YAML frontmatter and JSON metadata files. Atomic writes (temp file + rename) for all metadata files. Input validation: task titles max 500 chars, descriptions max 1MB, list names max 255 chars.
 - **Repository** (`repository.rs`): `TaskRepository` wraps a `Storage` impl and provides the public API for task/list CRUD, ordering, and grouping. Tests live here.
-- **Config** (`config.rs`): `AppConfig` manages workspaces keyed by UUID string. `WorkspaceConfig` stores `name`, `path`, `mode` (Local/Webdav), `webdav_url`, `webdav_path` (user-selected remote folder), `theme`, and `sync_interval_secs`. `add_workspace` returns a generated UUID. Stored in platform-specific config dirs via the `directories` crate.
-- **Sync** (`sync.rs`): Three-way diff sync with offline queue. Checksum-based conflict resolution: downloads remote, compares SHA-256 — identical content is a false conflict (skipped); when different, remote wins and local is recovered as a duplicate with a new UUID and `[RECOVERED FROM CONFLICT]` prefix. Auto-sync lifecycle: periodic polling (configurable interval, default 60s), debounced file-change (5s), window-focus (30s stale threshold). Wrapped in `tokio::time::timeout` (60s) to handle unreachable servers on Windows.
-- **WebDAV** (`webdav.rs`): reqwest client with rustls-tls, 30s request timeout, 10s connect timeout. Rejects non-HTTPS URLs. `Zeroizing<String>` for credential fields. `move_resource` method for WebDAV MOVE (workspace rename). 10MB PROPFIND response cap. Desktop credentials via `keyring` crate (feature-gated); Tauri GUI uses `tauri-plugin-credentials` for cross-platform support (Android Keystore + desktop keychain).
+- **Config** (`config.rs`): `AppConfig` manages workspaces keyed by UUID string. `WorkspaceConfig` stores `name`, `path`, `mode` (Local/Webdav), `webdav_url`, `webdav_path` (user-selected remote folder), `theme`, and `sync_interval_secs`. `add_workspace` returns a generated UUID. Stored in platform-specific config dirs via the `directories` crate. Atomic writes (temp file + rename) prevent corruption on crash.
+- **Sync** (`sync.rs`): Three-way diff sync with offline queue. Checksum-based conflict resolution: downloads remote, compares SHA-256 — identical content is a false conflict (skipped); when different, remote wins and local is recovered as a duplicate with a new UUID and `[RECOVERED FROM CONFLICT]` prefix. Auto-sync lifecycle: periodic polling (configurable interval, default 60s), debounced file-change (5s), window-focus (30s stale threshold). Wrapped in `tokio::time::timeout` (60s) to handle unreachable servers on Windows. Path traversal validation on all sync paths. Atomic writes for sync state and queue files.
+- **WebDAV** (`webdav.rs`): reqwest client with rustls-tls, 30s request timeout, 10s connect timeout. Rejects non-HTTPS URLs. `Zeroizing<String>` for credential fields. `move_resource` method for WebDAV MOVE (workspace rename). 10MB cap on both PROPFIND responses and file downloads. Desktop credentials via `keyring` crate (feature-gated); Tauri GUI uses `tauri-plugin-credentials` for cross-platform support (Android Keystore + desktop keychain).

 ### On-disk format

@@ -63,7 +63,7 @@ The GUI uses Svelte 5 runes mode (`$state`, `$derived`, `$effect`, `$props()`).

 Pre-alpha. No users, no released builds, no data to migrate. Breaking changes to on-disk formats, config structure, or sync conventions are free — do not add migration logic.

-### Current state (2026-04-05)
+### Current state (2026-04-06)

 - **Phase 1** (Core + CLI): Complete
 - **Phase 2** (WebDAV sync): Complete — remote folder browsing, checksum-based conflict resolution, auto-sync lifecycle, per-workspace sync interval

@@ -104,6 +104,8 @@ Pre-alpha. No users, no released builds, no data to migrate. Breaking changes to
 - Task deduplication on load (handles sync conflict duplicates)
 - Subtask hierarchy: subtask count shown on parent tasks in list, subtask detail via three-panel slide navigation, inline add at top of subtask list (new subtasks prepend), collapsible completed subtasks section, cascade delete (parent deletion removes all subtasks with confirmation warning)
 - Custom confirmation dialogs (ConfirmDialog component replaces native confirm())
+- Workspace path validation (rejects system directories)
+- Task detail auto-cleanup (taskStack clears when viewed task is deleted or list switches)

 ### GUI features NOT yet done

@@ -41,6 +41,33 @@ fn lock_state(state: &Mutex<AppState>) -> Result<std::sync::MutexGuard<'_, AppSt
     state.lock().map_err(|e| format!("State lock poisoned: {}", e))
 }

+/// Validate that a workspace path is a reasonable directory and not a system path.
+fn validate_workspace_path(path: &str) -> Result<(), String> {
+    let p = PathBuf::from(path);
+    // Reject obviously dangerous paths
+    let normalized = p.to_string_lossy();
+    if normalized.is_empty() {
+        return Err("Workspace path cannot be empty".into());
+    }
+    // Reject paths that are system root directories
+    #[cfg(unix)]
+    {
+        let forbidden = ["/etc", "/usr", "/bin", "/sbin", "/var", "/proc", "/sys", "/dev"];
+        // Trimming "/" itself yields "", so an empty result means the root directory.
+        let canonical = normalized.trim_end_matches('/');
+        if canonical.is_empty() || forbidden.contains(&canonical) {
+            return Err(format!("Cannot use system directory as workspace: {}", path));
+        }
+    }
+    #[cfg(windows)]
+    {
+        let upper = normalized.to_uppercase();
+        // Parenthesized so the length guard applies to both suffix checks ("C:" and "C:\").
+        if upper.len() <= 3 && (upper.ends_with(":\\") || upper.ends_with(':')) {
+            return Err(format!("Cannot use drive root as workspace: {}", path));
+        }
+    }
+    Ok(())
+}
+
 /// Serializable sync result for the frontend.
 #[derive(Debug, Serialize, Deserialize, Clone)]
 struct SyncResult {
@@ -120,6 +147,7 @@ fn add_workspace(
     path: String,
     state: State<'_, Mutex<AppState>>,
 ) -> Result<(), String> {
+    validate_workspace_path(&path)?;
     let mut s = lock_state(&state)?;
     let ws = WorkspaceConfig::new(name, PathBuf::from(&path));
     let id = s.config.add_workspace(ws);
@@ -243,6 +271,7 @@ async fn rename_workspace(

 #[tauri::command]
 fn init_workspace(path: String) -> Result<(), String> {
+    validate_workspace_path(&path)?;
     TaskRepository::init(PathBuf::from(path))
         .map(|_| ())
         .map_err(|e| e.to_string())
@@ -625,7 +654,7 @@ async fn create_remote_workspace(
     } else {
         format!("{}/{}", path.trim_end_matches('/'), ".onyx-workspace.json")
     };
-    client.put_file(&file_path, serde_json::to_string_pretty(&metadata).unwrap().into_bytes())
+    client.put_file(&file_path, serde_json::to_string_pretty(&metadata).map_err(|e| e.to_string())?.into_bytes())
         .await
         .map_err(|e| e.to_string())?;
     Ok(())
@@ -26,7 +26,9 @@
         invoke<[string, string]>("load_credentials", { domain }).then(([u, p]) => {
             webdavUser = u;
             webdavPass = p;
-        }).catch(() => {});
+        }).catch((e) => {
+            console.warn("Failed to load credentials:", e);
+        });
     } catch {}
 });
@ -18,6 +18,13 @@
|
|||
let parentTask = $derived(taskStack.length >= 1 ? app.tasks.find(t => t.id === taskStack[0]) ?? null : null);
|
||||
let subtaskDetail = $derived(taskStack.length >= 2 ? app.tasks.find(t => t.id === taskStack[1]) ?? null : null);
|
||||
|
||||
// Clear taskStack when the viewed task no longer exists (e.g. deleted or list switched)
|
||||
$effect(() => {
|
||||
if (taskStack.length > 0 && !parentTask) {
|
||||
taskStack = [];
|
||||
}
|
||||
});
|
||||
|
||||
function openTask(task: Task) {
|
||||
taskStack = [task.id];
|
||||
}
@@ -282,7 +289,7 @@
   <div class="flex-1 overflow-y-auto py-2">
     {#each app.lists as list (list.id)}
       <button
-        onclick={() => { app.selectList(list.id); closeDrawer(); }}
+        onclick={() => { app.selectList(list.id); taskStack = []; closeDrawer(); }}
         class="group flex w-full items-center gap-2 px-5 py-2.5 text-left text-sm hover:bg-black/5 dark:hover:bg-white/10 {list.id === app.activeListId ? 'font-bold' : ''}"
       >
         {#if list.id === app.activeListId}
@@ -369,7 +369,9 @@ function startAutoSync() {
   _syncInterval = setInterval(triggerSync, syncIntervalSecs * 1000);
   getCurrentWindow().onFocusChanged(({ payload: focused }) => {
     if (focused && Date.now() - lastSyncTime > SYNC_FOCUS_THRESHOLD_MS) triggerSync();
-  }).then((unlisten) => { _focusUnlisten = unlisten; });
+  }).then((unlisten) => { _focusUnlisten = unlisten; }).catch((e) => {
+    console.warn("Failed to set up focus listener:", e);
+  });
 }

 function stopAutoSync() {
@@ -115,7 +115,10 @@ impl AppConfig {
             std::fs::create_dir_all(parent)?;
         }
         let content = serde_json::to_string_pretty(&self)?;
-        std::fs::write(path, content)?;
+        // Atomic write: write to temp file then rename to prevent corruption on crash
+        let temp = path.with_extension("tmp");
+        std::fs::write(&temp, &content)?;
+        std::fs::rename(&temp, path)?;
         Ok(())
     }
@@ -78,7 +78,9 @@ impl TaskRepository {
         self.storage.write_task(to_list_id, &task)?;
         // If delete from source fails, roll back by removing the copy from destination
         if let Err(e) = self.storage.delete_task(from_list_id, task_id) {
-            let _ = self.storage.delete_task(to_list_id, task_id);
+            if let Err(rollback_err) = self.storage.delete_task(to_list_id, task_id) {
+                eprintln!("Warning: move_task rollback failed: {}", rollback_err);
+            }
             return Err(e);
         }
         Ok(())
@@ -7,6 +7,30 @@ use uuid::Uuid;
 use crate::error::{Error, Result};
 use crate::models::{Task, TaskList, TaskStatus};

+/// Maximum allowed length for task titles.
+const MAX_TITLE_LENGTH: usize = 500;
+/// Maximum allowed length for task descriptions.
+const MAX_DESCRIPTION_LENGTH: usize = 1_000_000; // 1 MB
+/// Maximum allowed length for list names.
+const MAX_LIST_NAME_LENGTH: usize = 255;
+/// Workspace root metadata filename.
+const WORKSPACE_METADATA_FILE: &str = ".onyx-workspace.json";
+/// Per-list metadata filename.
+const LIST_METADATA_FILE: &str = ".listdata.json";
+/// Task file extension.
+const TASK_FILE_EXT: &str = "md";
+/// Default version for tasks without a version field (legacy files).
+const DEFAULT_TASK_VERSION: u64 = 1;
+
+/// Write data to a temporary file then atomically rename to the target path.
+/// Prevents corruption from partial writes on crash.
+fn atomic_write(path: &Path, content: &[u8]) -> std::io::Result<()> {
+    let temp = path.with_extension("tmp");
+    fs::write(&temp, content)?;
+    fs::rename(&temp, path)?;
+    Ok(())
+}
+
 /// Metadata stored in root .onyx-workspace.json
 #[derive(Debug, Clone, Serialize, Deserialize)]
 pub struct RootMetadata {
@@ -50,7 +74,7 @@ impl ListMetadata {
 }

 fn is_false(v: &bool) -> bool { !v }
-fn default_version() -> u64 { 1 }
+fn default_version() -> u64 { DEFAULT_TASK_VERSION }

 /// Frontmatter for task markdown files
 #[derive(Debug, Clone, Serialize, Deserialize)]
|
||||
|
|
@ -126,7 +150,7 @@ impl FileSystemStorage {
|
|||
}
|
||||
|
||||
fn metadata_path(&self) -> PathBuf {
|
||||
self.root_path.join(".onyx-workspace.json")
|
||||
self.root_path.join(WORKSPACE_METADATA_FILE)
|
||||
}
|
||||
|
||||
fn list_dir_path(&self, list_id: Uuid) -> Result<PathBuf> {
|
||||
|
|
@ -137,7 +161,7 @@ impl FileSystemStorage {
|
|||
let entry = entry?;
|
||||
let path = entry.path();
|
||||
if path.is_dir() {
|
||||
let listdata_path = path.join(".listdata.json");
|
||||
let listdata_path = path.join(LIST_METADATA_FILE);
|
||||
if listdata_path.exists() {
|
||||
let content = fs::read_to_string(&listdata_path)?;
|
||||
let list_metadata: ListMetadata = serde_json::from_str(&content)?;
|
||||
|
|
@ -191,7 +215,7 @@ impl FileSystemStorage {
|
|||
} else {
|
||||
safe_title
|
||||
};
|
||||
list_dir.join(format!("{}.md", filename))
|
||||
list_dir.join(format!("{}.{}", filename, TASK_FILE_EXT))
|
||||
}
|
||||
|
||||
fn parse_markdown_with_frontmatter(&self, content: &str) -> Result<(TaskFrontmatter, String)> {
@@ -247,7 +271,7 @@ impl FileSystemStorage {
     fn write_root_metadata_internal(&self, metadata: &RootMetadata) -> Result<()> {
         let path = self.metadata_path();
         let content = serde_json::to_string_pretty(&metadata)?;
-        fs::write(&path, content)?;
+        atomic_write(&path, content.as_bytes())?;
         Ok(())
     }
 }
@@ -263,7 +287,7 @@ impl Storage for FileSystemStorage {
         let entry = entry?;
         let path = entry.path();

-        if path.is_file() && path.extension().and_then(|s| s.to_str()) == Some("md") {
+        if path.is_file() && path.extension().and_then(|s| s.to_str()) == Some(TASK_FILE_EXT) {
            let content = fs::read_to_string(&path)?;
            let (frontmatter, description) = self.parse_markdown_with_frontmatter(&content)?;
@@ -291,6 +315,13 @@ impl Storage for FileSystemStorage {
     }

     fn write_task(&mut self, list_id: Uuid, task: &Task) -> Result<()> {
+        if task.title.len() > MAX_TITLE_LENGTH {
+            return Err(Error::InvalidData(format!("Task title too long ({} chars, max {})", task.title.len(), MAX_TITLE_LENGTH)));
+        }
+        if task.description.len() > MAX_DESCRIPTION_LENGTH {
+            return Err(Error::InvalidData(format!("Task description too long ({} bytes, max {})", task.description.len(), MAX_DESCRIPTION_LENGTH)));
+        }
+
         let list_dir = self.list_dir_path(list_id)?;
         let task_path = self.task_file_path(&list_dir, task);

@@ -299,7 +330,7 @@ impl Storage for FileSystemStorage {
         let entry = entry?;
         let path = entry.path();
         if path == task_path { continue; }
-        if path.is_file() && path.extension().and_then(|s| s.to_str()) == Some("md") {
+        if path.is_file() && path.extension().and_then(|s| s.to_str()) == Some(TASK_FILE_EXT) {
            if let Ok(content) = fs::read_to_string(&path) {
                if let Ok((fm, _)) = self.parse_markdown_with_frontmatter(&content) {
                    if fm.id == task.id {
@@ -352,7 +383,7 @@ impl Storage for FileSystemStorage {
         let entry = entry?;
         let path = entry.path();

-        if path.is_file() && path.extension().and_then(|s| s.to_str()) == Some("md") {
+        if path.is_file() && path.extension().and_then(|s| s.to_str()) == Some(TASK_FILE_EXT) {
            let content = fs::read_to_string(&path)?;
            let (frontmatter, description) = self.parse_markdown_with_frontmatter(&content)?;

@@ -387,10 +418,13 @@ impl Storage for FileSystemStorage {
         if entries.len() > 1 {
             entries.sort_by(|a, b| b.1.version.cmp(&a.1.version));
             for (stale_path, _) in entries.drain(1..) {
-                let _ = fs::remove_file(&stale_path);
+                if let Err(e) = fs::remove_file(&stale_path) {
+                    eprintln!("Warning: failed to remove stale duplicate task file {:?}: {}", stale_path, e);
+                }
             }
         }
-        let (_, task) = entries.into_iter().next().unwrap();
+        let (_, task) = entries.into_iter().next()
+            .ok_or_else(|| Error::InvalidData("Empty dedup entries for task".to_string()))?;
         tasks.push(task);
     }
@@ -407,6 +441,12 @@ impl Storage for FileSystemStorage {
     }

     fn create_list(&mut self, name: String) -> Result<TaskList> {
+        if name.trim().is_empty() {
+            return Err(Error::InvalidData("List name cannot be empty".to_string()));
+        }
+        if name.len() > MAX_LIST_NAME_LENGTH {
+            return Err(Error::InvalidData(format!("List name too long ({} chars, max {})", name.len(), MAX_LIST_NAME_LENGTH)));
+        }
         let list_dir = self.list_dir_path_by_name(&name)?;

         if list_dir.exists() {
@@ -420,7 +460,7 @@ impl Storage for FileSystemStorage {

         let metadata_path = list_dir.join(".listdata.json");
         let content = serde_json::to_string_pretty(&list_metadata)?;
-        fs::write(&metadata_path, content)?;
+        atomic_write(&metadata_path, content.as_bytes())?;

         // Add to root metadata
         let mut root_metadata = self.read_root_metadata_internal()?;
@@ -453,7 +493,7 @@ impl Storage for FileSystemStorage {
         let path = entry.path();

         if path.is_dir() {
-            let listdata_path = path.join(".listdata.json");
+            let listdata_path = path.join(LIST_METADATA_FILE);
             if listdata_path.exists() {
                 let content = fs::read_to_string(&listdata_path)?;
                 let list_metadata: ListMetadata = serde_json::from_str(&content)?;
@@ -508,6 +548,12 @@ impl Storage for FileSystemStorage {
     }

     fn rename_list(&mut self, list_id: Uuid, new_name: String) -> Result<()> {
+        if new_name.trim().is_empty() {
+            return Err(Error::InvalidData("List name cannot be empty".to_string()));
+        }
+        if new_name.len() > MAX_LIST_NAME_LENGTH {
+            return Err(Error::InvalidData(format!("List name too long ({} chars, max {})", new_name.len(), MAX_LIST_NAME_LENGTH)));
+        }
         let old_dir = self.list_dir_path(list_id)?;
         let new_dir = self.list_dir_path_by_name(&new_name)?;

@@ -523,7 +569,7 @@ impl Storage for FileSystemStorage {
         let mut metadata: ListMetadata = serde_json::from_str(&content)?;
         metadata.updated_at = Utc::now();
         let json = serde_json::to_string_pretty(&metadata)?;
-        fs::write(&metadata_path, json)?;
+        atomic_write(&metadata_path, json.as_bytes())?;

         Ok(())
     }
@@ -554,7 +600,7 @@ impl Storage for FileSystemStorage {
         let metadata_path = list_dir.join(".listdata.json");

         let content = serde_json::to_string_pretty(&metadata)?;
-        fs::write(&metadata_path, content)?;
+        atomic_write(&metadata_path, content.as_bytes())?;
         Ok(())
     }
 }
@@ -265,7 +265,9 @@ impl OfflineQueue {
         Err(e) => {
             eprintln!("Warning: corrupt sync queue, backing up and resetting: {}", e);
             let backup = workspace_path.join(".syncqueue.json.bak");
-            let _ = std::fs::copy(&queue_path, &backup);
+            if let Err(backup_err) = std::fs::copy(&queue_path, &backup) {
+                eprintln!("Warning: failed to backup corrupt sync queue: {}", backup_err);
+            }
             Self::default()
         }
     },
@@ -281,12 +283,17 @@ impl OfflineQueue {
     if self.operations.is_empty() {
         // Clean up empty queue file
         if queue_path.exists() {
-            let _ = std::fs::remove_file(&queue_path);
+            if let Err(e) = std::fs::remove_file(&queue_path) {
+                eprintln!("Warning: failed to remove empty sync queue file: {}", e);
+            }
         }
         return Ok(());
     }
     let content = serde_json::to_string_pretty(self)?;
-    std::fs::write(&queue_path, content)?;
+    // Atomic write: write to temp then rename
+    let temp_path = workspace_path.join(".syncqueue.json.tmp");
+    std::fs::write(&temp_path, &content)?;
+    std::fs::rename(&temp_path, &queue_path)?;
     Ok(())
 }
@@ -349,16 +356,21 @@ pub fn compute_checksum(data: &[u8]) -> String {
     format!("{:x}", hasher.finalize())
 }

+/// Workspace root metadata filename.
+const WORKSPACE_METADATA_FILE: &str = ".onyx-workspace.json";
+/// Per-list metadata filename.
+const LIST_METADATA_FILE: &str = ".listdata.json";
+
 /// Check if a file is syncable: *.md files and metadata files at expected depths.
 fn is_syncable(path: &str) -> bool {
     let parts: Vec<&str> = path.split('/').collect();
     let filename = parts.last().copied().unwrap_or(path);
     // .onyx-workspace.json only at workspace root (depth 1)
-    if filename == ".onyx-workspace.json" {
+    if filename == WORKSPACE_METADATA_FILE {
         return parts.len() == 1;
     }
     // .listdata.json only inside a list directory (depth 2)
-    if filename == ".listdata.json" {
+    if filename == LIST_METADATA_FILE {
         return parts.len() == 2;
     }
     // .md files inside a list directory (depth 2)
@@ -451,15 +463,27 @@ impl SyncState {
             return Self::default();
         }
         match std::fs::read_to_string(&state_path) {
-            Ok(content) => serde_json::from_str(&content).unwrap_or_default(),
-            Err(_) => Self::default(),
+            Ok(content) => match serde_json::from_str(&content) {
+                Ok(state) => state,
+                Err(e) => {
+                    eprintln!("Warning: corrupt sync state file, resetting: {}", e);
+                    Self::default()
+                }
+            },
+            Err(e) => {
+                eprintln!("Warning: failed to read sync state file: {}", e);
+                Self::default()
+            }
         }
     }

     pub fn save(&self, workspace_path: &Path) -> Result<()> {
         let state_path = workspace_path.join(".syncstate.json");
         let content = serde_json::to_string_pretty(self)?;
-        std::fs::write(&state_path, content)?;
+        // Atomic write: write to temp file then rename to prevent corruption on crash
+        let temp_path = workspace_path.join(".syncstate.json.tmp");
+        std::fs::write(&temp_path, &content)?;
+        std::fs::rename(&temp_path, &state_path)?;
         Ok(())
     }
@@ -589,6 +613,16 @@ async fn sync_workspace_inner(
     Ok(result)
 }

+/// Validate that a sync path doesn't escape the workspace via path traversal.
+fn validate_sync_path(path: &str) -> Result<()> {
+    for component in path.split('/') {
+        if component == ".." || component.contains('\\') {
+            return Err(Error::Sync(format!("Path traversal not allowed: {}", path)));
+        }
+    }
+    Ok(())
+}
+
 /// Execute a single sync action.
 async fn execute_action(
     client: &WebDavClient,
@@ -598,6 +632,9 @@ async fn execute_action(
     remote_meta: &HashMap<&str, &RemoteFileSnapshot>,
     report: &(dyn Fn(&str) + Send + Sync),
 ) -> Result<()> {
+    // Validate path before any file system operation
+    validate_sync_path(action.path())?;
+
     match action {
         SyncAction::Upload { path } => {
             let local_path = workspace_path.join(path.replace('/', std::path::MAIN_SEPARATOR_STR));
@@ -681,7 +718,9 @@ async fn execute_action(
             .unwrap_or(metadata.task_order.len());
         metadata.task_order.insert(insert_pos, new_id);
         if let Ok(json) = serde_json::to_string_pretty(&metadata) {
-            let _ = std::fs::write(&listdata_path, json);
+            if let Err(e) = std::fs::write(&listdata_path, json) {
+                eprintln!("Warning: failed to update listdata after conflict recovery: {}", e);
+            }
         }
     }
 }
@@ -5,6 +5,9 @@ use crate::error::{Error, Result};

 /// Hard timeout for any WebDAV network operation.
 pub const REQUEST_TIMEOUT: Duration = Duration::from_secs(30);
+const CONNECT_TIMEOUT: Duration = Duration::from_secs(10);
+/// Maximum size for file downloads (10 MB).
+const MAX_FILE_BYTES: u64 = 10 * 1024 * 1024;

 /// Information about a file on the remote WebDAV server.
 #[derive(Debug, Clone)]
@@ -36,8 +39,8 @@ impl WebDavClient {
         let base_url = base_url.trim_end_matches('/').to_string();
         Self {
             _client: Client::builder()
-                .timeout(Duration::from_secs(30))
-                .connect_timeout(Duration::from_secs(10))
+                .timeout(REQUEST_TIMEOUT)
+                .connect_timeout(CONNECT_TIMEOUT)
                 .build()
                 .unwrap_or_else(|_| Client::new()),
             _base_url: base_url,
@@ -64,7 +67,7 @@ impl WebDavClient {
     /// Test connection by issuing a PROPFIND depth 0 on the root.
     pub async fn test_connection(&self) -> Result<()> {
         let resp = self._client
-            .request(reqwest::Method::from_bytes(b"PROPFIND").unwrap(), &self._base_url)
+            .request(reqwest::Method::from_bytes(b"PROPFIND").expect("PROPFIND is a valid HTTP method"), &self._base_url)
            .basic_auth(self._username.as_str(), Some(self._password.as_str()))
            .header("Depth", "0")
            .header("Content-Type", "application/xml")
@@ -86,7 +89,7 @@ impl WebDavClient {
     pub async fn list_files(&self, path: &str) -> Result<Vec<RemoteFileInfo>> {
         let url = self.full_url(path);
         let resp = self._client
-            .request(reqwest::Method::from_bytes(b"PROPFIND").unwrap(), &url)
+            .request(reqwest::Method::from_bytes(b"PROPFIND").expect("PROPFIND is a valid HTTP method"), &url)
            .basic_auth(self._username.as_str(), Some(self._password.as_str()))
            .header("Depth", "1")
            .header("Content-Type", "application/xml")
@@ -129,7 +132,15 @@ impl WebDavClient {
             return Err(Error::WebDav(format!("GET failed with status {}", status)));
         }

-        Ok(resp.bytes().await?.to_vec())
+        if resp.content_length().unwrap_or(0) > MAX_FILE_BYTES {
+            return Err(Error::WebDav(format!("File too large (>{}MB)", MAX_FILE_BYTES / (1024 * 1024))));
+        }
+        let bytes = resp.bytes().await?;
+        if bytes.len() as u64 > MAX_FILE_BYTES {
+            return Err(Error::WebDav(format!("File too large (>{}MB)", MAX_FILE_BYTES / (1024 * 1024))));
+        }
+
+        Ok(bytes.to_vec())
     }

     /// Upload a file.
@@ -172,7 +183,7 @@ impl WebDavClient {
     pub async fn create_dir(&self, path: &str) -> Result<()> {
         let url = self.full_url(path);
         let resp = self._client
-            .request(reqwest::Method::from_bytes(b"MKCOL").unwrap(), &url)
+            .request(reqwest::Method::from_bytes(b"MKCOL").expect("MKCOL is a valid HTTP method"), &url)
            .basic_auth(self._username.as_str(), Some(self._password.as_str()))
            .send()
            .await?;
@@ -192,7 +203,7 @@ impl WebDavClient {
         let from_url = self.full_url(from);
         let to_url = self.full_url(to);
         let resp = self._client
-            .request(reqwest::Method::from_bytes(b"MOVE").unwrap(), &from_url)
+            .request(reqwest::Method::from_bytes(b"MOVE").expect("MOVE is a valid HTTP method"), &from_url)
            .basic_auth(self._username.as_str(), Some(self._password.as_str()))
            .header("Destination", &to_url)
            .header("Overwrite", "F")
40
docs/API.md
@@ -305,11 +305,12 @@ let result = sync_workspace(
     "username",
     "password",
     SyncMode::Full,
+    None, // optional progress callback
 ).await?;

 // Push-only or pull-only
-sync_workspace(path, url, user, pass, SyncMode::PushOnly).await?;
-sync_workspace(path, url, user, pass, SyncMode::PullOnly).await?;
+sync_workspace(path, url, user, pass, SyncMode::Push, None).await?;
+sync_workspace(path, url, user, pass, SyncMode::Pull, None).await?;
 ```

 #### Check Sync Status
@@ -375,7 +376,9 @@ client.delete_file("old-task.md").await?;
 - **Offline queue**: Pending operations are queued and replayed when connectivity returns
 - **Sync state**: Stored in `.syncstate.json` within the workspace directory
 - **Auto-sync**: Periodic polling (configurable `sync_interval_secs`), debounced file-change trigger (5s), window-focus trigger (30s stale threshold)
-- **Response size cap**: PROPFIND responses are limited to 10 MB (checked via `Content-Length` header and actual body size) to prevent memory exhaustion from malicious servers
+- **Response size cap**: PROPFIND responses and file downloads are limited to 10 MB (checked via `Content-Length` header and actual body size) to prevent memory exhaustion from malicious servers
+- **Path traversal protection**: Sync paths are validated to reject `..` and `\` components before any file system operation
+- **Atomic writes**: Sync state (`.syncstate.json`) and offline queue (`.syncqueue.json`) use atomic write pattern (temp file + rename) to prevent corruption on crash
 - **Syncable files**: Only processes files at expected depths — `.onyx-workspace.json` at root (depth 1), `.listdata.json` and `*.md` inside list directories (depth 2)

 ## Error Handling
@@ -397,6 +400,37 @@ pub enum Error {
 }
 ```

+## Input Validation & Safety
+
+### Size Limits
+
+The storage layer enforces the following limits:
+
+| Input | Max Length | Error |
+|-------|-----------|-------|
+| Task title | 500 characters | `InvalidData` |
+| Task description | 1,000,000 bytes (1 MB) | `InvalidData` |
+| List name | 255 characters | `InvalidData` |
+| WebDAV file download | 10 MB | `WebDav` |
+| PROPFIND response | 10 MB | `WebDav` |
+
+### Atomic Writes
+
+All metadata and state files use an atomic write pattern (write to `.tmp` then rename) to prevent data corruption if the process crashes mid-write:
+
+- `.onyx-workspace.json` (root metadata)
+- `.listdata.json` (list metadata)
+- `config.json` (app config)
+- `.syncstate.json` (sync state)
+- `.syncqueue.json` (offline queue)
+
+### Path Safety
+
+- **List names**: Rejected if they contain `/`, `\`, or `..` components. Canonicalized and verified to stay within workspace root.
+- **Sync paths**: Validated to reject `..` and `\` before any file system operation.
+- **Workspace paths** (Tauri): Rejected if they point to system directories (`/etc`, `/usr`, `/bin`, etc.).
+- **Filenames**: Sanitized to replace `/ \ : * ? " < > |` and control characters with `_`.
+
 ## Example: Complete Workflow

 ```rust