Software RAID (mdadm) Qubes Installation Procedure (WIP)

Original forum link
https://forum.qubes-os.org/t/27886
Original poster
ddevz
Created at
2024-07-25 16:55:42
Last wiki edit
2025-02-10 22:22:18
Revisions
9 revisions
Posts count
23
Likes count
7

I see comments that people are using software RAID (mdadm), but I don't see any good instructions for how to do it. I just finished a software RAID install, so I'm putting instructions here for the next person.

Graphical Software RAID installation procedure:

  1. Do the standard steps listed in https://www.qubes-os.org/doc/installation-guide/ , up to the part about configuring the "Installation Destination" (after the anchor: https://www.qubes-os.org/doc/installation-guide/#installation-summary)
  2. Select "Installation Destination"
  3. Select all media
  4. Select "Advanced Custom (Blivet-GUI)"

NOTE: There is a CLI way to set up the partitions and a graphical way. Below is the graphical way; see the "If it breaks during the install" section at the end of this document for the beginning of a discussion of the CLI way.

Create the (blank) partition tables

  1. Click on disk 1
  2. Right click on "Free space" -> Edit -> Set partition table
  3. Set it to GPT
  4. Click on disk 2
  5. Right click on "Free space" -> Edit -> Set partition table
  6. Set it to GPT
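
For reference, the CLI equivalent would be something like the following (a minimal sketch; /dev/sda and /dev/sdb are placeholders for your actual disks, so check lsblk first):

# Hedged sketch: create blank GPT partition tables on both disks.
# /dev/sda and /dev/sdb are placeholders -- verify your disk names with lsblk!
parted -s /dev/sda mklabel gpt
parted -s /dev/sdb mklabel gpt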

Create the partitions

Note: One problem we will run into is that once we create the RAID device, we won't be able to just say "use the new RAID device and do your automatic partitioning thing with it". We will have to make the partitions manually.

Create the MBR partition (optional)

The first 1 MiB of a drive should normally be reserved for MBR-related things. You may end up needing this space if UEFI does not go well for you. Some partition editors automatically reserve the first MiB for you in some way. The current version of the blivet-gui partition editor that the Qubes installer uses appears to default the starting block of new partitions to after the first MiB, so as long as you keep the defaults, you don't need to create a 1 MiB partition for MBR (and then do nothing with it, just so nothing else uses it).
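
If you would rather reserve that space explicitly, a hedged CLI sketch (the disk name is a placeholder):

# Hypothetical example: reserve the first MiB as a BIOS boot partition so GRUB
# has somewhere to put its core image if you ever fall back to legacy/MBR booting.
parted -s /dev/sda mkpart biosboot 1MiB 2MiB
parted -s /dev/sda set 1 bios_grub on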

Create the EFI filesystem

  1. Click on disk 1
  2. Right click "Free space" -> New
  3. Set the following:
Field        Setting
Device type  partition
Size         1 GiB
Filesystem   "EFI System Partition"
Label        EFI
Mountpoint   /boot/efi
Encryption   no; do not encrypt the EFI partitions
  4. Then do the same for the 2nd drive, but don't give it a mountpoint. (It would be neat to mount it at /boot/efi2; during the attempt that actually worked I did not have a mountpoint set. Update: I was able to get it to install by putting /boot/efi2 as the mountpoint. It still does not use the second EFI partition, but it's progress.)
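
For reference, a rough CLI equivalent (partition numbers and disk names are assumptions; adjust the start offset if you created a BIOS boot partition above):

# Hedged sketch: create and format a 1 GiB EFI System Partition on each disk.
parted -s /dev/sda mkpart EFI fat32 1MiB 1GiB
parted -s /dev/sda set 1 esp on
mkfs.vfat -F 32 -n EFI /dev/sda1    # repeat for /dev/sdb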

Create the boot filesystem

  1. Click on empty space of one of the drives
  2. Right click -> new
  3. Select type "software raid"
  4. A list of drives will spring up; add the 2nd drive
  5. Set raid level to "raid1"
  6. Set the following:
Field        Setting
Device type  partition
Size         2 GiB
Filesystem   ext4
Label        boot
Name         boot
Mountpoint   /boot
Encryption   no; do not encrypt the boot partition
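
A rough CLI equivalent (a sketch only; the partition numbers sda2/sdb2 are assumptions):

# Hedged sketch: create the partitions, mirror them with mdadm, then format.
parted -s /dev/sda mkpart boot 1GiB 3GiB
parted -s /dev/sdb mkpart boot 1GiB 3GiB
mdadm --create /dev/md/boot --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mkfs.ext4 -L boot /dev/md/boot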

Create the root filesystem

  1. Click on empty space of one of the drives
  2. Right click -> new
  3. Select type "software raid"
  4. A list of drives will spring up; add the 2nd drive
  5. Set raid level to "raid1"
  6. Set the following:
Field        Setting
Device type  partition
Size         take all the remaining space
Filesystem   "physical volume (LVM)"
Name         a random string
Encryption   yes! if you want encryption, enable it here
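
A rough CLI equivalent (sda3/sdb3 and the mapper name luks-root are assumptions):

# Hedged sketch: mirror the remaining space, encrypt it, and turn it into an LVM PV.
parted -s /dev/sda mkpart root 3GiB 100%
parted -s /dev/sdb mkpart root 3GiB 100%
mdadm --create /dev/md/root --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
cryptsetup luksFormat /dev/md/root       # skip these two lines if not encrypting
cryptsetup open /dev/md/root luks-root
pvcreate /dev/mapper/luks-root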

Create the volume group

  1. Right click on empty space of the new "luks raid physical volume" -> new
  2. Set the name to "qubes_dom0"
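
CLI equivalent (a sketch, reusing the luks-root mapper name assumed above):

vgcreate qubes_dom0 /dev/mapper/luks-root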

Create the swap volume

  1. Right click on empty space of the new volume group -> new
  2. Set the following:
Field        Setting
Device type  "LVM2 Logical Volume"
Size         10 GiB
Filesystem   swap
Label        swap
Name         swap
Encryption   no; do not encrypt (already encrypted)
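
CLI equivalent (a sketch):

lvcreate -L 10G -n swap qubes_dom0
mkswap -L swap /dev/qubes_dom0/swap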

Create the thinpool

  1. Right click on empty space of the new volume group -> new
  2. Select device type: "LVM2 ThinPool"
  3. Set size to: take it all
  4. Set name to "pool00"
  5. do not encrypt (already encrypted)
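
CLI equivalent (a sketch; older lvm2 versions may refuse 100%FREE for a thin pool because the metadata LV needs some room, in which case try something like 95%FREE):

lvcreate --type thin-pool -l 100%FREE -n pool00 qubes_dom0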

Create the root volume

  1. Right click on new lvmthinpool -> New
  2. Set the following:
Field        Setting
Device type  "LVM2 Logical Volume"
Size         take it all
Filesystem   ext4
Label        root
Name         root
Mountpoint   /
Encryption   no; do not encrypt (already encrypted)
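
CLI equivalent (a sketch; the 450G virtual size is an arbitrary example, since thin volumes can be over-provisioned):

lvcreate --thin -V 450G -n root qubes_dom0/pool00
mkfs.ext4 -L root /dev/qubes_dom0/root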

At this point click "Done". It should complain at the bottom of the screen. You can view the issues; it should say something like: "/boot" is on a RAID volume, but EFI is not! If your boot drive goes down you won't be able to boot.

The EFI filesystem is not intended for RAID, so that's not a real option, and undoing RAID for /boot won't help the situation. So you can ignore that warning.

However, if it says something else, like 'you forgot to put a root mountpoint for "/" ', then you should go fix that.

After all problems are resolved:

  1. Click "Done"
  2. If nothing happens, click "Done" again
  3. Start the installation

WARNING:

  1. The partitions you just set up have not been written to disk yet. If you reboot now, they will be gone.
  2. Going back into "Installation Destination" will clear out all the work you just did.

If it breaks during the install (or if you did it the CLI way)

If it breaks during the install, it probably already wrote the partitions, so after the reboot:

  1. Hit Ctrl-Alt-F2
  2. If you can't read the screen because the text is too small on a high-resolution display, type: setfont solar24x32{enter}

Note: Many other CLI instructions that can be typed here are available from https://web.archive.org/web/20230316180945/https://www.qubes-os.org/doc/custom-install/

  1. Type cat /proc/mdstat to see which RAID devices are currently active (it will say none are active)
  2. Run mdadm --assemble --scan (hopefully it will find the RAID arrays)
  3. Type cat /proc/mdstat again (this time it should show both arrays as active)
  4. Hit Alt-F6 (to get back to the graphical screen)
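
For reference, a healthy /proc/mdstat looks roughly like this (device names and block counts are illustrative, not from a real install):

Personalities : [raid1]
md126 : active raid1 sdb3[1] sda3[0]
      486279168 blocks super 1.2 [2/2] [UU]
md127 : active raid1 sdb2[1] sda2[0]
      2094080 blocks super 1.2 [2/2] [UU]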

Now the fun part. If you hit "rescan" to get the graphical partitioner to acknowledge the RAID drives, the "rescan" process will deactivate the RAID drives!

Basically, switch back and forth between running mdadm --assemble --scan and the different partitioning screens until something works. I believe what worked for me was running mdadm --assemble --scan after clicking "Installation Destination" and before clicking "Advanced Custom (Blivet-GUI)", then "Done" (to get to the advanced screen). Whatever I did, when I got to blivet-gui it recognized the RAID disks, and I was able to reassign the mount points and then start the install (which worked this time).

After install works and it tries to reboot into the new qubes system

You'll get the "out of box" configuration options. Under "Advanced configuration", select "use existing LVM thin pool" with volume group "qubes_dom0" (the only option) and LVM thin pool "pool00" (the only option).

This might mean we did not need to manually create the thin pool from the partition editor?

CLI Software RAID installation procedure:

This section is not complete. For now, see https://web.archive.org/web/20230316180945/https://www.qubes-os.org/doc/custom-install/ for ideas

Aftermath

How to tell if a drive fails

You can type cat /proc/mdstat to check that both drives are working. (Note: there is a good chance it's still syncing the drives at this point.) You will probably want to write a script that checks it and notifies you if there is a problem, and then run that script from cron, something like this:

#!/bin/bash
# Check that all md arrays are healthy and raise a desktop notification if not.

NUMBER_OF_RAIDS=2

# NOTE: You can test that notifications are working by setting NUMBER_OF_RAIDS
# to more than you actually have.

# /proc/mdstat shows [UU] (or [UUU], etc.) for each array whose members are all
# up; a degraded array shows something like [U_] and will not match.
HEALTHY=$(grep -c '\[UUU*\]' /proc/mdstat)

if [ "$NUMBER_OF_RAIDS" -ne "$HEALTHY" ]
then
    notify-send --expire-time=360000 "RAID issue!" "A drive may have failed, or other RAID issue (or you have NUMBER_OF_RAIDS set wrong). Run: cat /proc/mdstat to see details."
fi
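
To run it from cron, a crontab entry along these lines could work (the script path is hypothetical; note that notify-send needs to reach your graphical session, so you may have to set DISPLAY and possibly DBUS_SESSION_BUS_ADDRESS depending on your setup):

# Hypothetical user crontab entry (crontab -e): check every 10 minutes.
*/10 * * * * DISPLAY=:0 /home/user/bin/check-raid.sh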

Things that would be cool

Once it's installed, it would be cool to mount the 2nd EFI partition as /boot/efi2 and adjust the GRUB installer to install to both partitions. As it stands, if your boot drive goes out, you will have to boot to a rescue disk, set up the EFI partition (which you have already allocated), and install GRUB on the other disk.
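
A hedged sketch of what that might look like, assuming the second disk is /dev/sdb with its ESP as partition 1 mounted at /boot/efi2, and that GRUB lives at the usual Qubes path (verify all of this on your system before trusting it):

# Assumption-laden sketch: clone the ESP contents and register a second UEFI boot entry.
cp -a /boot/efi/EFI /boot/efi2/
efibootmgr --create --disk /dev/sdb --part 1 --label "Qubes (disk 2)" --loader '\EFI\qubes\grubx64.efi'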