USC2025+SE2 — Backups for the people!
We have started deploying a new backup server, leveraging the ZFS filesystem together with FreeBSD jails 🤓
The following example illustrates the core concept of zfs send/recv:

zfs create storage/test-source
zfs snap storage/test-source@one
zfs send -v storage/test-source@one | zfs recv -v storage/test-destination

We now have a working, independent copy of the dataset on the destination. Ain't this cool? Incremental replication builds on the same idea; note that -i takes the origin snapshot as well as the new one:

zfs snap storage/test-source@two
zfs send -v -i storage/test-source@one storage/test-source@two | zfs recv -v storage/test-destination

Use zfs list -t snap to check what's up.

Note: To track the progress of zfs send | zfs recv, one can use a well-known tool, pv, as suggested in the Solaris documentation.
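As a sketch of that tip, pv can sit between send and recv to draw a live progress bar. The dataset names reuse the storage/test-* examples above; the dry-run size estimate comes from zfs send -nP:

```shell
# Estimate the stream size with a dry run (-n) in parseable form (-P),
# then pre-size pv's progress bar with it.
SIZE=$(zfs send -nP storage/test-source@one | awk '/^size/ {print $2}')
zfs send storage/test-source@one | pv -s "$SIZE" | zfs recv -v storage/test-destination
```

Without -s, pv still shows throughput and elapsed time, just no percentage.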
Most of us came from traditional storage systems, and had to wrap our minds
around new concepts introduced by zfs. Let’s break it down:
A zpool combines multiple physical disks into a single storage pool, handling redundancy, caching, and data integrity at the block level. It's comparable to a volume manager (think LVM) and software RAID rolled into one.
Instead of manually partitioning disks or setting up traditional RAID, zfs
automatically distributes data across the zpool.
Using ZFS for NAS appliances that store large media files is a very common scenario. In this specific context, we may want to get:
…in that order!
We'll try to figure out proper storage designs for 8 large hard disks in this specific context.
You may well want to read choosing the right ZFS pool layout in addition to getting familiar with ZFS concepts.
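As a starting point for that discussion, here are two common layouts for 8 disks, sketched with placeholder device names (da0..da7) and a hypothetical pool name; pick one, not both:

```shell
# Option A: one 8-disk raidz2 vdev. Best usable capacity (6 of 8 disks),
# survives any two disk failures; sequential throughput is good, which
# suits large media files, but random IOPS are modest.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7

# Option B: four mirrored pairs (striped mirrors). Only half the raw
# capacity, but better random I/O and faster resilvering after a failure.
zpool create tank mirror da0 da1 mirror da2 da3 \
                  mirror da4 da5 mirror da6 da7
```

Which one wins depends on the priorities above: raidz2 favors capacity, striped mirrors favor performance and rebuild time.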