NAS
- clean install newest FreeBSD 14.2
- move OS /home into /data/home (zpool/home)
- mount ISO over IPMI
- Manage old zpool (console sketch after this list):
  - (old OS) zpool export zpool
  - (new OS) zpool import -N zpool
  - zpool status
  - zpool upgrade
  - zpool upgrade zpool
  - https://docs.freebsd.org/en/books/handbook/zfs/#zfs-zpool-upgrade
- Syncthing shares on separate ZFS subvolumes
- zfs auto-snapshot retention policies
- pyrotechnics & private data: zfs copies=2?
- Applications in VMs
  - Photoprism
  - Home Assistant? or in a jail?
- Applications in separate FreeBSD jails
  - Syncthing
  - Transmission
  - (existing -> upgrade) Gitea
  - Template FreeBSD 14.2
  - Samba
  - VTVBB sync + go tooling
  - Cache: pkg + freebsd-update (for jails)
- Data partitioning
  - zroot (SSD, OS only)
  - zpool (14 TB HDD mirror)
    - /data/home
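Console sketch of the old-pool hand-over (the pool is named zpool, as above); order matters, and the final upgrade step is one-way, so an older OS may no longer be able to import the pool afterwards:
(old OS) # zpool export zpool              <- release the pool cleanly
(new OS) # zpool import -N zpool           <- import without mounting any datasets yet
         # zpool status                    <- confirm the mirror is ONLINE and healthy
         # zpool upgrade                   <- list feature flags not yet enabled
         # zpool upgrade zpool             <- enable all supported features (irreversible)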
Host OS services:
- SSH + sshguard
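A minimal sketch for enabling both on the host, assuming the security/sshguard package and its stock rc script (knob name sshguard_enable); firewall backend tweaks, if any, go in its config under /usr/local/etc/:
# sysrc sshd_enable=YES sshguard_enable=YES
# service sshguard start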
Improvements & things not to forget:
- back up settings from /etc and /usr/local/etc before the SSD OS disk wipe
- private keychains daily snapshots (separate Syncthing share + copies=2?)
- Syncthing
  - per-share zfs subvolume
  - each share needs a .zfs ignore pattern, or else snapshots are propagated to peers
- crontab(s) backup
- samba config
- gitea backup
- sshguard
- jails settings backup
- vanilla jails management with templates
- Migrate from zfstools auto-snapshot and prune to Python zfs-autobackup?
- URLs for (web)services with nanodash for homelab + quick access
- Upgrade gitea and migrate sqlite to postgres
- Migrate all automations Hue -> Home Assistant
- Samba network share
- AVAHI/Bonjour autodiscovery
- Automount network shares on macOS
- ZFS zpool scrub monthly cron
- Home Assistant in a FreeBSD jail: rc.d service file for auto-start on boot
- FreeBSD pkg cache for jails
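For the pkg cache item, one option (a sketch, not the only approach) is to nullfs-mount the host's package cache into each jail so packages are downloaded once; /var/cache/pkg is pkg's default cache directory, and the jail name and path below are made-up examples:
# /etc/jail.conf fragment; jail name and path are examples
syncthing {
    path = "/usr/local/jails/syncthing";
    # share the host's pkg download cache with the jail
    # (the target directory must exist inside the jail)
    mount += "/var/cache/pkg $path/var/cache/pkg nullfs rw 0 0";
}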
Syncthing share enrolment on a ZFS subvolume
- Create zfs subvolume:
  zfs create ...
- Set zfs-auto-snapshot property (for zfstools):
  zfs set ...
- Create share in Syncthing web GUI
- Ignore .zfs folder (to not propagate to connected peers): filter
  .zfs
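Concretely, the enrolment could look like the sketch below; the dataset name and mountpoint are examples, zfstools keys off the com.sun:auto-snapshot property, and the web GUI's ignore patterns end up in .stignore at the share root (editing either is equivalent):
# zfs create zpool/syncthing/documents                          <- one dataset per share (example name)
# zfs set com.sun:auto-snapshot=true zpool/syncthing/documents  <- zfstools snapshots only flagged datasets
# echo '.zfs' >> /data/syncthing/documents/.stignore            <- same effect as the GUI ignore pattern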
ZFS dataset datablock copies
For extra redundancy, the number of data block copies can be increased per dataset and then tested:
# zfs create data/test-dataset/dataset-1
# zfs list
# zfs set copies=2 data/test-dataset/dataset-1
# zfs get copies data/test-dataset/dataset-1
root@mango:/data/test-dataset/dataset-1 # dd if=/dev/random of=testfile bs=64K count=1024
1024+0 records in
1024+0 records out
67108864 bytes transferred in 0.609759 secs (110058049 bytes/sec)
root@mango:/data/test-dataset/dataset-1 # ls -lah
total 131146
drwxr-xr-x 2 root wheel 3B Dec 19 19:56 .
drwxr-xr-x 3 root wheel 3B Dec 19 19:55 ..
-rw-r--r-- 1 root wheel 64M Dec 19 19:57 testfile
root@mango:/data/test-dataset/dataset-1 # zfs list | grep dataset-1
data/test-dataset/dataset-1 128M 410G 128M /data/test-dataset/dataset-1
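To confirm the doubling explicitly, compare logical and physical usage; for the 64 MiB test file, used should come out at roughly twice logicalused:
# zfs get used,logicalused,copies data/test-dataset/dataset-1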