Software RAID (mdadm) Qubes Installation Procedure (WIP) revisions


Revision #10

Edited on
2025-03-05
Edited by user
ddevz
I see comments that people are using software raid (mdadm), but I don't see any good instructions for how to do it. I just finished a software raid install, so I'm putting instructions here for the next person.
* The graphical install procedure seems to fully work for the stated case, but this will always be a work in progress as I'd like people to add information that can help optimize the procedure, and there is always more that can be added (like adding a real /boot/efi2)

### What is mdadm?

There are 3 known ways to do software raid on Qubes:
- mdadm (the traditional way)
- zfs (a really cool way, with a strange licensing quirk)
- btrfs (tries to do zfs-like things, but without the licensing quirk)

This article documents the mdadm way. If you want to try the zfs way, you can start by looking here: https://forum.qubes-os.org/t/zfs-in-qubes-os/18994 If you want to try the btrfs way, you can try starting here: https://forum.qubes-os.org/t/btrfs-redundant-disk-setup-raid-alternative/25824
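If you have never touched mdadm before, here is a minimal sketch of what the tooling looks like from a shell. This is not part of the installer procedure below (the installer drives mdadm through blivet-gui for you); the device names /dev/md0, /dev/sda2 and /dev/sdb2 are placeholders.

```
# Illustrative only; device names are placeholders, not from the procedure below.
# Create a two-disk RAID1 array from two existing partitions:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# Check which arrays are active and whether they are still syncing:
cat /proc/mdstat

# Show details for one array:
mdadm --detail /dev/md0
```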

Revision #9

Edited on
2025-02-27
Edited by user
ddevz
### Create the boot filesystem
1. Click on "Free space" of one of the drives
|Device type | Software RAID|
|name | pv-encrypted |
1. On the left hand side, under "RAID" (instead of under "Disks"), select the new pv-encrypted device
1. Right click in the area to the right that normally says "free space" of the new pv-encrypted volume -> new
(Note: if we do not create the thin pool here, do we then not have to select "use pre-existing pool" at the end, with one created automatically? I have no idea, but it'd be interesting to investigate.)
|Device type | "LVM2 Logical Volume"| |Device type | "LVM2 Thin Logical Volume"|
At this point click the "Done" button in the upper left hand corner. It should complain at the bottom of the screen. You can view the issues, and it should say something like: "/boot" is on a RAID volume, but EFI is not! If your boot drive goes down you won't be able to boot. The EFI filesystem is not intended for RAID, so that's not a real option, and undoing RAID for /boot won't help the situation. So you can ignore the message.
After all problems (other than the one you are ignoring) are resolved:
2. Going back into the "Installation Device" will clear out all the work you just did.
3. You are not done! The installer will install stuff, then when it reboots you will still need to do one last thing! (shown in the very next section on this page)
4. If it breaks during install, jump to the section "If it breaks during the install"

### After install works and it tries to reboot into the new qubes system

You'll get the "Out Of Box" configuration options. Under "Advanced configuration" select "use existing LVM thin pool" with "qubes_dom0" (the only option) and LVM thin pool "Pool00" (the only option). This might mean we did not need to manually create the thin-pool lvm thing from the partition editor??
5. do: mdadm --assemble --scan
6. cat /proc/mdstat to view what raid drives are currently active
7. Hit Alt-F6 (to get back to the graphical screen)
Basically, switch back and forth between doing "mdadm --assemble --scan" and being in different partitioning screens until something works. I believe what worked for me was doing "mdadm --assemble --scan" *after* clicking "Installation Device" and *before* clicking "advanced custom - blivet-gui", then "Done" (to get to the advanced screen). Whatever I did, when I got to blivet-gui, it recognized the raid disks and I was able to reassign the mount points and then start the install (which worked this time).
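As a recap of the console steps above, this is all that gets typed at the installer's text console (how you reach that console is covered in the earlier numbered steps; Alt-F6 returns you to the graphical screen):

```
# Run from the installer's text console (see the numbered steps above).
# Detect and assemble any existing md RAID arrays:
mdadm --assemble --scan

# Check which RAID devices are currently active and whether they are syncing:
cat /proc/mdstat
```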

Revision #8

Edited on
2025-02-27
Edited by user
ddevz

Revision #7

Edited on
2025-02-27
Edited by user
ddevz
2. Select "Installation Device" 2. Select "Installation Destination"
4. Select "advanced custom" 4. Select "advanced custom Blivet-GUI" 5. Click "Done"
6. Set it to GPT 7. Set it to GPT
4. Then do the same for the 2nd drive, but instead of /boot/efi as the mountpoint, use /boot/efi2 as the mountpoint (Note: This procedure still does not use /boot/efi2 for anything, but maybe someday we can have it set up /boot/efi2 so the 2nd drive can be *booted* from as well, i.e. without a rescue disk)
6. Set raid level to "raid1"
7. Set the following:
|Device type | Software RAID|
| encryption | no. do *not* encrypt the boot partition|
7. Click "OK" when done

Revision #6

Edited on
2025-02-27
Edited by user
ddevz
* The graphical install procedure seems to fully work for the stated case, but this will always be a work in progress as I'd like people to add information that can help optimize the procedure, and there is always more that can be added (like adding a real /boot/efi2)

Revision #5

Edited on
2025-02-10
Edited by user
ddevz
(It would be neat to mount it to /boot/efi2, but during the attempt that actually worked I did not have a mountpoint set (update! I was able to get it to install by putting /boot/efi2 as the mount point. It still does not *use* the second efi partition, but it's progress))
## How to tell if a drive fails

You can type `cat /proc/mdstat` to check if both drives are working. (Note: there is a good chance it's still syncing the drive at this moment.) You will probably want to write a script that checks it and notifies you if there is a problem, and then put the script in cron. Something like this:

```
NUMBER_OF_RAIDS=2
# NOTE: You can run a test of this, to make sure notifications are working,
# by setting NUMBER_OF_RAIDS to more than you have.
if [ $NUMBER_OF_RAIDS != `grep '[[]UUU*[]]' /proc/mdstat | wc -l` ]
then
  notify-send --expire-time=360000 "RAID issue!" "A drive may have failed, or other raid issue. (or you have NUMBER_OF_RAIDS set wrong). do: cat /proc/mdstat to see details"
  exit
fi
exit
```
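A minimal sketch of wiring that script into cron, assuming you saved it as /usr/local/bin/check-raid.sh in dom0 (the path and hourly schedule are just example choices, not part of the original procedure):

```
# Hypothetical example: run the RAID check every hour from the user's crontab.
chmod +x /usr/local/bin/check-raid.sh
(crontab -l 2>/dev/null; echo "0 * * * * /usr/local/bin/check-raid.sh") | crontab -
# Note: notify-send run from cron may need DISPLAY and DBUS_SESSION_BUS_ADDRESS
# set in the script's environment before it can reach your desktop session.
```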

Revision #4

Edited on
2024-11-19
Edited by user
ddevz
|size | 1 GiB|
|size | 2 GiB|
|size | 10 GiB|

Revision #3

Edited on
2024-07-25
Edited by user
ddevz
I see comments that people are using software raid (mdadm), but I don't see any good instructions for how to do it. I just finished a software raid install, so I'm putting instructions here for the next person.
Basically, switch back and forth between doing "mdadm --assemble --scan" and being in different partitioning screens until something works. I believe what worked for me was doing "mdadm --assemble --scan" *after* clicking "Installation Device" and *before* clicking "advanced custom - blivet-gui", then "Done" (to get to the advanced screen). Whatever I did, when I got to blivet-gui, it recognized the raid disks and I was able to reassign the mount points and then start the install (which worked this time).

### After install works and it tries to reboot into the new qubes system

You'll get the "out of box" configuration options. Under "Advanced configuration" select "use existing LVM thin pool" with "qubes_dom0" (the only option) and LVM thin pool "Pool00" (the only option). This might mean we did not need to manually create the thin-pool lvm thing from the partition editor??

Revision #2

Edited on
2024-07-25
Edited by user
ddevz
# Graphical Software Raid installation procedure:
# CLI Software Raid installation procedure:
Once it's installed, it would be cool to mount the 2nd efi partition as /boot/efi2 and adjust the grub installer to install to both partitions. As it stands, if your boot drive goes out, then you will have to boot to a rescue disk, set up the EFI partition (which you have already allocated) and install grub on the other disk. A rough sketch of what that could look like is below.
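A minimal, untested sketch of mirroring the EFI setup onto the second drive, assuming the second drive is /dev/sdb and its spare EFI partition is /dev/sdb1 (device names, partition numbers, the boot entry label, and the loader path are assumptions, not part of the verified procedure above):

```
# Hypothetical sketch: copy the working EFI partition to the second drive
# and register it with the firmware. Device names are placeholders.
mkfs.vfat /dev/sdb1                      # format the spare EFI partition
mkdir -p /boot/efi2
mount /dev/sdb1 /boot/efi2
cp -a /boot/efi/EFI /boot/efi2/          # mirror the bootloader files
# Add a firmware boot entry pointing at the copy (loader path may differ on your install):
efibootmgr --create --disk /dev/sdb --part 1 \
  --label "Qubes (disk 2)" --loader '\EFI\qubes\grubx64.efi'
```

Note that the copy would go stale whenever grub or its config is updated, which is exactly why adjusting the grub installer to write to both partitions would be the nicer long-term fix.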