Removes persistentTimer from all borgbackup services, along with the
now-unnecessary network-online.target dependencies. Changes the fuchsia
offsite job to a fixed 14:00 schedule, when the system is reliably awake.
Persistent-timer catch-up runs immediately after system resume caused
failures because services started before the network and system had
fully stabilized:
- Onsite: DNS resolution failures (viridian.home.arpa)
- Offsite: BorgBase connection refusals during SSH/borg handshake
Fixed schedules provide reliable backups without catch-up complexity:
- fuchsia offsite: 14:00 daily (typical awake time for desktop)
- viridian offsite: midnight daily (always-on server)
- All onsite: hourly (no catch-up needed)
Offsite services retain wants/after dependencies on onsite completion
to prevent race conditions on shared /btrfs-subvolumes snapshot paths.
Network dependencies are removed because the fixed schedules fire when
the system is already stable, eliminating the timing issues around
network-online.target.
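
Roughly what the resulting definitions look like in Nix (the job names
onsite/offsite are illustrative; real values live in each host's config):

    services.borgbackup.jobs.offsite = {
      # Fixed daily schedule; persistentTimer is left at its default (false),
      # so there is no catch-up run immediately after resume.
      startAt = "14:00";   # fuchsia; viridian's offsite job uses "00:00"
    };

    # Offsite still pulls in and orders after onsite, so the two never race
    # on the shared /btrfs-subvolumes snapshot paths.
    systemd.services."borgbackup-job-offsite" = {
      wants = [ "borgbackup-job-onsite.service" ];
      after = [ "borgbackup-job-onsite.service" ];
    };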
Fixes DNS resolution failures when persistent timers trigger backups
after system wake.
The NixOS borgbackup module adds network-online.target dependencies
to the timer when persistentTimer=true, but systemd timers don't pass
their dependencies to the services they trigger. This caused onsite
backups to start before the network was ready, resulting in "Could not
resolve hostname" errors.
Adding after/wants network-online.target directly to the service
ensures the backup waits for network availability regardless of how
it's triggered (timer or offsite's Wants= dependency).
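
A minimal sketch of the override, assuming the job is named onsite (the
NixOS module generates the borgbackup-job-<name> unit):

    # Attach the network dependency to the service itself rather than the
    # timer, so it applies regardless of how the unit is started.
    systemd.services."borgbackup-job-onsite" = {
      after = [ "network-online.target" ];
      wants = [ "network-online.target" ];
    };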
Example failure (Oct 11, 07:43):
- Backup started at 07:43:43 (persistent timer caught up)
- DNS lookup failed: "Could not resolve hostname viridian.home.arpa"
- WiFi connected at 07:43:47 (4 seconds too late)
Applied to both fuchsia and viridian onsite backups.
Fixes multiple issues with borgbackup service coordination:
1. Race condition between onsite/offsite backups
- Set Type=oneshot so dependent units wait for completion (see sketch below)
- Added Wants= dependency to trigger onsite when offsite runs
- Prevents snapshot path collision at /btrfs-subvolumes
2. Network unavailability after sleep/wake
- Added persistentTimer=true to onsite backups
- NixOS module now auto-adds network-online.target dependencies
- Fixes DNS resolution failures for SSH repos
3. Data loss risk from missed backups
- Persistent timers ensure backups run on wake if missed
- Protects work done before sleep from being left unbacked up
4. Duplicate onsite runs at midnight
- Removed 15-minute stagger (00:15 -> 00:00)
- Systemd deduplicates services in same transaction
- Onsite now runs once, not twice
Applied to both fuchsia and viridian for consistency.
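
A hedged sketch of the combined overrides, with illustrative job names:

    # Catch up a missed onsite run after sleep/wake.
    services.borgbackup.jobs.onsite.persistentTimer = true;

    # oneshot: units ordered After= this service wait for the backup to
    # finish (lib.mkForce may be needed if the module already sets a Type).
    systemd.services."borgbackup-job-onsite".serviceConfig.Type = "oneshot";

    systemd.services."borgbackup-job-offsite" = {
      # Pull in and order after onsite so the two jobs never snapshot
      # /btrfs-subvolumes at the same time.
      wants = [ "borgbackup-job-onsite.service" ];
      after = [ "borgbackup-job-onsite.service" ];
    };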
Update backup paths to use actual persistent storage locations (/persist/*) rather than bind-mounted paths, making it clear where data truly resides and simplifying restore operations.
Changes staging directories from hidden to visible and aligns backup paths with actual BTRFS subvolume naming conventions for better clarity when browsing archives.
Add critical system state from persist.nix to borgbackup jobs:
- SSH host keys (required for borg authentication)
- machine-id and nixos state
- Network and bluetooth configurations
Paths mirror persist.nix configuration for maintainability.
Service-specific persist data (traefik, crowdsec) is excluded;
dedicated subvolumes will be created if/when needed.
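
Illustrative shape of the additions; the entries below are typical
impermanence locations, not verbatim from persist.nix:

    # Appended to the existing job path lists; entries mirror persist.nix.
    services.borgbackup.jobs.onsite.paths = [
      "/persist/etc/ssh"             # SSH host keys (borg authentication)
      "/persist/etc/machine-id"
      "/persist/var/lib/nixos"       # NixOS state
      "/persist/etc/NetworkManager"  # network configuration
      "/persist/var/lib/bluetooth"   # bluetooth pairings
    ];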
Add automated snapshot and backup system with three independent tiers:
Snapper (hourly local snapshots):
- Configure snapper for all srv-* subvolumes
- Tiered retention: 24 hourly, 7 daily, 4 weekly, 12 monthly
- Snapshots stored at /.snapshots on viridian drive
- Provides fast operational rollback for user errors
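
A sketch of one generated snapper config, assuming the current NixOS
snapper module's uppercase setting names and a placeholder subvolume:

    services.snapper.configs."srv-example" = {   # one config per srv-* subvolume
      SUBVOLUME = "/srv/example";
      TIMELINE_CREATE = true;     # hourly snapshots via snapper's timeline timer
      TIMELINE_CLEANUP = true;
      TIMELINE_LIMIT_HOURLY = "24";
      TIMELINE_LIMIT_DAILY = "7";
      TIMELINE_LIMIT_WEEKLY = "4";
      TIMELINE_LIMIT_MONTHLY = "12";
    };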
Borgbackup onsite (hourly local backups):
- Independent staging snapshots at /.staging-onsite
- Repository on data drive at /srv/borg-repo
- Unencrypted (physical security assumed)
- Matches snapper retention policy
- Fast local disaster recovery
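
The onsite job, sketched from the values listed above (job name and
option layout assumed):

    services.borgbackup.jobs.onsite = {
      repo = "/srv/borg-repo";          # local repository on the data drive
      paths = [ "/.staging-onsite" ];   # independent staging snapshots
      encryption.mode = "none";         # unencrypted; physical security assumed
      compression = "zstd,9";
      startAt = "hourly";
      prune.keep = { hourly = 24; daily = 7; weekly = 4; monthly = 12; };
    };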
Borgbackup offsite (daily remote backups):
- Independent staging snapshots at /.staging-offsite
- Encrypted backups to borgbase repository
- Retention: 7 daily, 4 weekly, 12 monthly
- Remote disaster recovery with prune policy
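
The offsite counterpart, with a placeholder repository URL and secret
handling, and an assumed encryption mode:

    services.borgbackup.jobs.offsite = {
      repo = "ssh://xxxx@xxxx.repo.borgbase.com/./repo";  # placeholder URL
      paths = [ "/.staging-offsite" ];
      encryption = {
        mode = "repokey-blake2";                  # assumed borg encryption mode
        passCommand = "cat /path/to/passphrase";  # placeholder secret handling
      };
      compression = "zstd,9";
      startAt = "daily";
      prune.keep = { daily = 7; weekly = 4; monthly = 12; };
    };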
Architecture decisions:
- Separate staging directories prevent job conflicts
- Staging snapshots decouple borg jobs from snapper (sketched below)
- Consistent zstd,9 compression across both borg jobs
- Special case handling for containers subvolume path
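
One plausible shape for the staging decoupling, with hook commands that
are illustrative rather than taken from the repo:

    services.borgbackup.jobs.onsite = {
      # Freeze a point-in-time copy for borg to read, independent of snapper's
      # /.snapshots, then clean it up once the archive is written.
      preHook = ''
        btrfs subvolume snapshot -r /srv/example /.staging-onsite/srv-example
      '';
      postHook = ''
        btrfs subvolume delete /.staging-onsite/srv-example
      '';
    };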