# Software RAID (mdadm) Qubes Installation Procedure (WIP)
I see comments that people are using software RAID (mdadm), but I don't see any good instructions for how to do it. I just finished a software RAID install, so I'm putting instructions here for the next person.
* The graphical install procedure seems to fully work for the stated case, but this will always be a work in progress, as I'd like people to add information that can help optimize the procedure, and there is always more that can be added (like adding a real /boot/efi2).

### What is mdadm?

There are 3 known ways to do software RAID on Qubes:

- mdadm (the traditional way)
- ZFS (a really cool way, with a strange licensing quirk)
- Btrfs (tries to do ZFS-like things, but without the licensing quirk)

This article documents the mdadm way. If you want to try the ZFS way, you can start by looking here: https://forum.qubes-os.org/t/zfs-in-qubes-os/18994
If you want to try the Btrfs way, you can try starting here: https://forum.qubes-os.org/t/btrfs-redundant-disk-setup-raid-alternative/25824
### Create the boot filesystem
1. Click on "Free space" of one of the drives
|Device type | Software RAID|
|name | pv-encrypted |
1. On the left hand side, under "RAID" (instead of under "Disks"), select the new pv-encrypted device
1. Right-click in the area to the right that normally says "free space" of the new pv-encrypted volume -> New
(Note: if we do not create the thin pool here, do we then not have to select "use pre-existing pool" at the end, with one being created automatically? I have no idea, but it would be interesting to investigate.)
|Device type | "LVM2 Thin Logical Volume"|
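As an aside, to make it clearer what these blivet-gui selections amount to, here is a rough CLI sketch of the stack the installer ends up building (RAID, then LUKS, then an LVM thin pool). This is orientation only, not something to run during the graphical install, and the device names, sizes, and exact layering are my assumptions based on the names used above:

```
# Orientation only: approximate CLI equivalent of the layered setup the GUI builds.
# Device names and sizes are made-up placeholders.
mdadm --create /dev/md/pv --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
cryptsetup luksFormat /dev/md/pv              # the "pv-encrypted" layer
cryptsetup open /dev/md/pv pv-encrypted
pvcreate /dev/mapper/pv-encrypted
vgcreate qubes_dom0 /dev/mapper/pv-encrypted
lvcreate --type thin-pool -l 90%FREE -n Pool00 qubes_dom0
```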
At this point click the "Done" button in the upper left hand corner. It should complain at the bottom of the screen. You can view the issues, and it should say something like: "/boot" is on a RAID volume, but EFI is not! If your boot drive goes down you won't be able to boot.

The EFI filesystem is not intended for RAID, so that's not a real option, and undoing RAID for /boot won't help the situation. So you can ignore the message.
After all problems (other than the one you are ignoring) are resolved:
2. Going back into "Installation Destination" will clear out all the work you just did.
3. You are not done! The installer will install stuff, then when it reboots you will still need to do one last thing! (shown in the very next section on this page)
4. If it breaks during install, jump to the section "If it breaks during the install"

### After the install works and it tries to reboot into the new Qubes system

You'll get the "Out Of Box" configuration options. Under "Advanced configuration", select "use existing LVM thin pool" with volume group "qubes_dom0" (the only option) and LVM thin pool "Pool00" (the only option). This might mean we did not need to manually create the thin pool LVM volume from the partition editor?
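If you want to double-check that the pool being offered here really is the one created during partitioning, you can open a dom0 terminal after the first boot and list it. A minimal sketch (the volume group and pool names are the ones shown in the dialog above):

```
# List the logical volumes in the qubes_dom0 volume group; Pool00 should show a
# thin-pool attribute ("t" at the start of lv_attr) and the thin volumes created
# during partitioning should reference it in the pool_lv column.
sudo lvs -o lv_name,lv_attr,pool_lv qubes_dom0
```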
5. Do: `mdadm --assemble --scan`
6. Do: `cat /proc/mdstat` to view what RAID drives are currently active
7. Hit Alt-F6 (to get back to the graphical screen)
Basically, switch back and forth between doing `mdadm --assemble --scan` and being in different partitioning screens until something works. I believe what worked for me was doing `mdadm --assemble --scan` *after* clicking "Installation Destination" and *before* clicking "advanced custom - blivet-gui", then "Done" (to get to the advanced screen). Whatever I did, when I got to blivet-gui, it recognized the RAID disks and I was able to reassign the mount points and then start the install (which worked this time).
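To make that loop a bit more concrete, here is roughly what the console round trip looks like. The mdstat output below is only an illustration of the "assembled and healthy" case; your device names and sizes will differ:

```
# From a text console (Ctrl-Alt-F2 usually has a shell during install), assemble
# any arrays the installer missed, then confirm they are active:
mdadm --assemble --scan
cat /proc/mdstat
# Healthy, assembled output looks something like:
#   Personalities : [raid1]
#   md127 : active raid1 sda2[0] sdb2[1]
#         999424 blocks super 1.2 [2/2] [UU]
# Then Alt-F6 to get back to the graphical installer and re-enter "Installation Destination".
```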
2. Select "Installation Destination"
4. Select "advanced custom Blivet-GUI"
5. Click "Done"
7. Set it to GPT
4. Then do the same for the 2nd drive, but instead of /boot/efi as the mountpoint, use /boot/efi2 as the mountpoint. (Note: this procedure still does not use /boot/efi2 for anything, but maybe someday we can have it set up /boot/efi2 so the 2nd drive can be *booted* from as well, i.e. without a rescue disk.)
6. Set raid level to "raid1"
7. Set the following:
|Device type | Software RAID|
| encryption | no. Do *not* encrypt the boot partition|

8. Click "OK" when done
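For anyone who ends up doing this from the CLI instead (see the CLI section below), the rough equivalent of the settings in this table is the following sketch. The partition names are placeholders for whatever your two boot partitions actually are, and ext4 is just an example filesystem choice:

```
# Hypothetical CLI equivalent of the table above: a RAID1 array for /boot built
# from one partition on each disk (replace /dev/sda2 and /dev/sdb2 with yours).
mdadm --create /dev/md/boot --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mkfs.ext4 /dev/md/boot    # left unencrypted, matching the table above
```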
(It would be neat to mount it to /boot/efi2, but during the attempt that actually worked I did not have a mountpoint set. Update: I was able to get it to install by putting /boot/efi2 as the mount point. It still does not *use* the second EFI partition, but it's progress.)
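With /boot/efi2 mounted, a simple (if manual) way to make use of it would be to copy the working EFI partition onto it after GRUB or kernel updates. A sketch, assuming both partitions are mounted at the paths used above:

```
# Hypothetical: mirror the working EFI partition onto the spare one.
# Trailing slashes matter to rsync: this copies the contents, not the directory.
sudo rsync -a --delete /boot/efi/ /boot/efi2/
```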
## How to tell if a drive fails

You can type `cat /proc/mdstat` to check if both drives are working. (Note: there is a good chance it's still syncing the drive at this moment.) You will probably want to write a script that checks it and notifies you if there is a problem, and then put the script in cron. Something like this:

```
NUMBER_OF_RAIDS=2
# NOTE: You can run a test of this, to make sure notifications are working, by setting NUMBER_OF_RAIDS to more than you have
if [ $NUMBER_OF_RAIDS != `grep '[[]UUU*[]]' /proc/mdstat | wc -l` ]
then
    notify-send --expire-time=360000 "RAID issue!" "A drive may have failed, or other raid issue. (or you have NUMBER_OF_RAIDS set wrong). do: cat /proc/mdstat to see details"
    exit
fi
exit
```
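To run that script automatically, one option is a dom0 cron entry. A sketch, assuming the script was saved as /usr/local/bin/raid-check.sh (the path and the 15-minute schedule are just examples):

```
# Example crontab line (added via `crontab -e` in dom0); adjust the path to
# wherever you actually saved the script:
*/15 * * * * /usr/local/bin/raid-check.sh
```

Note that notify-send run from cron may need access to your graphical session's environment (DISPLAY and the session D-Bus) to actually pop up a notification, so test it once before relying on it.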
|size | 1 GiB|
|size | 2 GiB|
|size | 10 GiB|
# Graphical Software RAID installation procedure:
# CLI Software RAID installation procedure:
Once it's installed, it would be cool to mount the 2nd EFI partition as /boot/efi2 and adjust the GRUB installer to install to both partitions. As it stands, if your boot drive goes out, then you will have to boot to a rescue disk, set up the EFI partition (which you have already allocated) and install GRUB on the other disk.
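I have not actually scripted this yet, but making the second disk bootable on its own would look roughly like the sketch below. Everything in it is an assumption (partition name, mount point, loader path), so treat it as a starting point rather than a tested procedure:

```
# Hypothetical: make the spare EFI partition on the second disk bootable.
# Replace /dev/sdb1 with your real second EFI partition.
mkfs.vfat -F 32 /dev/sdb1                  # only if it was never formatted
mount /dev/sdb1 /boot/efi2
cp -a /boot/efi/EFI /boot/efi2/            # copy the existing loader files
# Register a firmware boot entry pointing at the copy; the loader path is what
# my install used -- check /boot/efi/EFI on your system:
efibootmgr -c -d /dev/sdb -p 1 -L "Qubes (disk 2)" -l '\EFI\qubes\grubx64.efi'
```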