Interesting… Though I know nothing about your particular setup, or migrating existing data, I have a similar project in the works. The project is to automatically set up a ZFS RAID 10 on Ubuntu 24.04.
If you are interested in seeing how I am doing it, I used the openzfs root on Debian/Ubuntu guides.
For the code, take a look at this GitHub repo: https://github.com/Reddimes/ubuntu-zfsraid10/
One thing to note is this runs two zpools, one for / and one for /boot. It is also specifically UEFI; if you need legacy boot, you need to change the partitioning a little bit (see init.sh).
BE WARNED THAT THIS SCRUBS ALL FILESYSTEMS AND DELETES ALL PARTITIONS
To run it, load up an Ubuntu Server live CD and run the following:
git clone --depth 1 https://github.com/Reddimes/ubuntu-zfsraid10.git
cd ubuntu-zfsraid10
chmod +x *.sh
vim init.sh    # Change all disks to be relevant to your setup.
vim chroot.sh  # Same thing here.
sudo ./init.sh
On first login, there are a few things I have not scripted yet:
apt update && apt full-upgrade
dpkg-reconfigure grub-efi-amd64
There are two options for automating this: either I need to create a runonce.d service, or I need to add a script to the user's profile.d directory which deletes itself after running. I also need to include a proper netplan configuration. I'm simply not there yet.
I imagine in your case you could start a new pool and use zfs send to copy over the data from the old pool. Then remove the old pool entirely and add the old disks to the new pool. I certainly have never done this though and I suspect there may be an issue. The other option you have (if you have room for one more drive) is to configure it into a ZFS RAID 10. Then you don’t need to migrate the data, but just need to add an additional vdev mirror with the additional drive and resilver.
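The send/receive approach above could look roughly like this. This is only a sketch, not something I've run end to end; the pool names (`oldpool`, `newpool`) and disk paths are placeholders for your setup:

```shell
# 1. Create the new pool as a single mirror on the new disks.
zpool create newpool mirror /dev/disk/by-id/newdisk1 /dev/disk/by-id/newdisk2

# 2. Snapshot the old pool recursively and replicate everything across.
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs receive -F newpool

# 3. After verifying the data, destroy the old pool and add its disks
#    as a second mirror vdev, giving you the RAID 10 layout.
zpool destroy oldpool
zpool add newpool mirror /dev/disk/by-id/olddisk1 /dev/disk/by-id/olddisk2
```

The obvious caveat is step 3: between destroying the old pool and finishing the copy verification, the data only exists in one place.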
One thing I tried to do was to make the scripts easily customizable. It still is not yet ready for that, though. You could simply change the zpool commands in the init.sh.
Sounds interesting, but while I have room for one more drive, I don't want to spend money for one more drive xD (As mentioned, I have >= 12 TB drives, so another one I don't really need would hurt the wallet quite a bit.)
That is totally fair. Actually I just upgraded to 12 TB drives and that’s why I’m working on this. So huge props to your design choice. Also props for using zfs, I feel like it flies under the radar a lot.
You do not keep the pool itself; what zfs send copies are the datasets (a dataset = a filesystem).
man zfs send
So the broken (degraded) pool approach is kinda stupid and you shouldn't do it, since you'll be running without parity the whole time, but if you want to risk it, it does work.
Or… if you have the drives and space, you can combine a bunch of smaller drives with mdadm (assuming Linux; FreeBSD has GEOM, I think) and then use that as your third drive. Once everything is copied, do a zpool replace. That way you keep full parity the whole time.
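Something like the following, as a rough sketch. The device names and the pool name `tank` are placeholders, and step 2 depends entirely on your vdev layout:

```shell
# 1. Concatenate the spare small drives into one linear md device
#    big enough to stand in for a full-size disk.
mdadm --create /dev/md0 --level=linear --raid-devices=2 /dev/sdx /dev/sdy

# 2. Use /dev/md0 as the temporary third member of the pool while
#    copying the data over (layout-specific, not shown here).

# 3. When the real drive is available, swap it in; ZFS resilvers onto
#    it while the pool keeps full parity the whole time.
zpool replace tank /dev/md0 /dev/disk/by-id/newbigdisk

# 4. Tear down the temporary md device afterwards.
mdadm --stop /dev/md0
```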
Edit: latest version of zfs supports raidz expansion. So you could create a 2 drive raidz1 then copy everything over then expand it. You will still be running your source disk without parity but at least the destination would be safe.
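If I understand the expansion feature right (it landed in OpenZFS 2.3), the flow would be roughly this; pool names, the `raidz1-0` vdev name, and disk paths are placeholders:

```shell
# 1. Create a two-disk raidz1 on the destination drives and copy over.
zpool create tank raidz1 /dev/disk/by-id/disk1 /dev/disk/by-id/disk2
zfs snapshot -r oldpool@move
zfs send -R oldpool@move | zfs receive -F tank

# 2. Retire the source pool, then attach its disk to the raidz vdev
#    to expand it from 2-wide to 3-wide.
zpool destroy oldpool
zpool attach tank raidz1-0 /dev/disk/by-id/disk3

# 3. Watch the expansion progress.
zpool status tank
```

As noted, the source disk has no parity during the copy, but the destination raidz1 is protected throughout.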
Sadly I don’t have 11 TB worth of smaller empty drives around (and not even enough SATA ports, so I’d also have to buy an expansion card).
Yeah, I don’t like the broken pool idea either, that’s why I was hoping there was a better method.
The least bad imo is making a two drive raidz1 then expanding it.
I don’t think you can create a RAIDZ1 with just two drives.
Just did it to test on a couple of files, worked fine.
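For anyone who wants to verify it themselves, a file-backed test like the one I did can be reproduced with sparse files as stand-in disks (needs root and ZFS installed; paths and the pool name are arbitrary):

```shell
# Create two sparse 1 GB files to act as disks.
truncate -s 1G /tmp/zdisk1 /tmp/zdisk2

# A two-device raidz1 is accepted just fine.
zpool create testpool raidz1 /tmp/zdisk1 /tmp/zdisk2
zpool status testpool   # shows one raidz1-0 vdev with both files

# Clean up.
zpool destroy testpool
rm /tmp/zdisk1 /tmp/zdisk2
```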
Pick up a bunch of used low cost drives and create a new pool.
But the low-cost pool would probably not have the capacity to hold my data (or not be low-cost).
If you don’t have a spare drive to copy the contents, how do you do backup?