Qubes OS live mode. dom0 in RAM. Non-persistent Boot. Protection against forensics. Tails mode. Hardening dom0
In `zram-mount.sh`, detect the real dom0 root device instead of hardcoding it:

`# qubes_dom0-root`
`Qubes_Root=$(findmnt -n -o SOURCE /)`
`mount -o ro $Qubes_Root /mnt`
**Step 1. Make New Directories for Dracut Automation Modules:**

**Step 2. Make Two Dracut Script Files (module-setup.sh):**

**Step 3. Make New Dracut Script File overlay-mount.sh:**

**Step 4. Create a script to automatically create zram-mount.sh and edit /etc/grub.d/40_custom.**

`cat > /etc/dracut.conf.d/ramboot.conf << 'EOF'`

> (Create `/usr/lib/dracut/modules.d/01ramboot/zram-mount.sh`, Update **Dracut**, Edit `40_custom` file, update **GRUB**)

**Step 5. Clone dangerous qubes (dvm-template, appVMs) into a pool in dom0**
> :loudspeaker: Don't worry about installing these modes - doing so won't affect the default Qubes boot. I created this scenario to be as safe as possible and isolated from the default Qubes boot. The new GRUB options are added to `/etc/grub.d/40_custom` (so your `/etc/default/grub` is not modified), and the new dracut modules only run when the additional GRUB parameters are supplied (these parameters aren't present in the default boot). The default Qubes boot won't be affected even if the new modes encounter problems in the future, such as after dom0 updates.
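As a minimal sketch of how that gating works (the `rootzram` parameter name and the dracut helper come from the steps below), a hook script can simply bail out unless the extra kernel parameter was passed:

```
#!/bin/sh
# Sketch only: this mirrors the guard used by the zram-mount.sh hook later in this guide.
# Without "rootzram" on the kernel command line the sourced hook returns at once,
# so a default Qubes boot never executes the live-mode logic.
. /lib/dracut-lib.sh
if ! getargbool 0 rootzram ; then
    return
fi
# ... live-mode setup would continue here only for the rootzram boot entry
```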
`chmod 755 /usr/lib/dracut/modules.d/90ramboot/module-setup.sh`
`chmod 755 /usr/lib/dracut/modules.d/90overlayfs-root/module-setup.sh`
`chmod 755 /usr/lib/dracut/modules.d/90overlayfs-root/overlay-mount.sh`
Press `Ctrl + X` to exit nano editor.

**Step 4. Make New Dracut Config File ramboot.conf:**

`sudo touch /etc/dracut.conf.d/ramboot.conf`
`sudo nano /etc/dracut.conf.d/ramboot.conf`

Add:
```
add_drivers+=" zram "
add_dracutmodules+=" ramboot "
```
Press `Ctrl + O` to save file.
**Step 4. Create a script to automatically create zram-mount.sh and edit /etc/grub.d/40_custom.**

```
cat > /etc/dracut.conf.d/ramboot.conf << EOF
add_drivers+=" zram "
add_dracutmodules+=" ramboot "
EOF
```

**Step 5. Clone dangerous qubes (dvm-template, appVMs) into a pool in dom0**
:green_circle: If you're worried about entering something incorrectly, use the fully automated live-mode setup via a [simple script](https://forum.qubes-os.org/t/qubes-os-live-mode-dom0-in-ram-non-persistent-boot-protection-against-forensics-tails-mode-hardening-dom0/38868/2)

`mount -n -t tmpfs -o mode=0755,size=70%,nr_inodes=500k,noexec,nodev,nosuid,noatime,nodiratime tmpfs /cow`

`sudo /usr/local/bin/grub-custom.sh`

> If you want to add your custom GRUB parameters for live modes, do so in `/etc/grub.d/40_custom`
**Step 1. Make New Directories for Dracut Automation Modules:**

`sudo mkdir /usr/lib/dracut/modules.d/90ramboot`

**Step 2. Make Two Dracut Script Files (module-setup.sh):**

`sudo touch /usr/lib/dracut/modules.d/90ramboot/module-setup.sh`
`sudo nano /usr/lib/dracut/modules.d/90ramboot/module-setup.sh`

**Step 3. Make New Dracut Script File overlay-mount.sh:**

**Step 4. Make New Dracut Config File ramboot.conf:**

**Step 5. Create a script to automatically create zram-mount.sh and edit /etc/grub.d/40_custom.**
`DOM0_MAX_MB=$(( (DOM0_MAX_KB * 70) / (1024 * 100) ))` (70% of the dom0 maximum, converted from KiB to MiB)
`cat > /usr/lib/dracut/modules.d/90ramboot/zram-mount.sh << EOF`
`chmod 755 /usr/lib/dracut/modules.d/90ramboot/zram-mount.sh`
```
grub2-mkconfig -o /boot/grub2/grub.cfg

# Disable Dom0 Swap:
sed -i '/[[:space:]]\+swap[[:space:]]\+/s/^/#/' "/etc/fstab"
swapoff -a
```
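A quick way to confirm the script did its job (these two checks are standard dom0 commands, not part of the original script): the new menu entries should appear in the generated GRUB config, and no swap should be active:

```
# List the live-mode menu entries that grub-custom.sh generated
grep "^menuentry 'Qubes" /boot/grub2/grub.cfg

# Prints nothing when dom0 swap is fully disabled
swapon --show
```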
**Step 6. Clone dangerous qubes (dvm-template, appVMs) into a pool in dom0**
**If you have limited memory, use this guide:** https://forum.qubes-os.org/t/really-disposable-ram-based-qubes/21532

:sunglasses: **Also use these guides to hide traces of VPN/Tor connections and for external system shutdown and memory clearing for maximum paranoid security:**

[Installation of AmneziaVPN: effective circumvention of internet blocks via DPI](https://forum.qubes-os.org/t/installation-of-amneziavpn-effective-circumvention-of-internet-blocks-via-dpi-for-china-russia-belarus-turkmenistan-and-iran-vpn-with-xray-reality/39005)
[USB Kill Switch for Qubes OS](https://forum.qubes-os.org/t/usb-kill-switch-for-qubes-os-physical-security-enhancement/35541)
> You can make backups in live mode (I've done it many times using Overlay-Live Mode).
`mount -o nodev,nosuid,noatime,nodiratime /dev/zram0 /sysroot`
:shield: Both modes significantly increase dom0 security - the default root runs in read-only mode, and operations take place within a hardened copy of the system. This also works great for experiments in Qubes or for beginners who want to learn without fear of breaking anything - all changes disappear after a reboot. It will extend the lifespan of your SSD.

> Overlay-Live Mode is very fast and maximally (paranoid) secure. Zram-Live Mode starts slowly and uses more CPU power, but saves RAM dramatically (~ 2× compared to Overlay-Live).

This guide solves the old problem of [implement live boot by porting grub-live to Qubes - amnesia / non-persistent boot / anti-forensics](https://github.com/QubesOS/qubes-issues/issues/4982).
**Make a backup before you start working :slightly_smiling_face:**
`mount -n -t tmpfs -o mode=0755,size=95%,nr_inodes=500k,noexec,nodev,nosuid,noatime,nodiratime tmpfs /cow`
`mount -t overlay -o noatime,nodiratime,volatile,lowerdir=/live/image,upperdir=/cow/rw,workdir=/cow/work,default_permissions,relatime overlay $NEWROOT`
`mount -o ro /dev/mapper/qubes_dom0-root /mnt`
`mount -o nodev,nosuid,noatime,nodiratime,nobarrier,commit=60,data=writeback /dev/zram0 /sysroot`
> :exclamation: You can update templates in live modes, **but update dom0 in persistent mode!**

**Step 7. Create a script to automatically create zram-mount.sh and edit /etc/grub.d/40_custom.**

**Run** `grub-custom.sh`

**Step 8. Clone dangerous qubes (dvm-template, appVMs) into a pool in dom0**
> :mega: **Overlay-Live Mode** (used by native kicksecure/Whonix) is more secure and faster:
Overlay is much closer to a real Tails OS (a read-only image). :shield: I added [some additional flags to Overlay-Live Mode to strengthen the security](https://forum.qubes-os.org/t/qubes-os-live-mode-dom0-in-ram-non-persistent-boot-protection-against-forensics-tails-mode-hardening-dom0/38868/3). This makes dom0 more hardened.

> :loudspeaker: **Zram-Live Mode** works by copying (`cp -a`) root to zram0 - it is a significantly less secure and slower scenario. It is also not recommended to update dracut and GRUB while in Zram-Live Mode - doing so could cause the system to break after a reboot! :gear: But it saves RAM dramatically (using about half the RAM of Overlay-Live Mode). This mode is suitable for running Standalone Live VMs that require a lot of disk space.
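If you want to check how much physical RAM Zram-Live Mode really consumes, `zramctl` from util-linux (not something this guide installs, but normally present in dom0) reports the compressed size of the zram disk:

```
# DISKSIZE is the ext2 image size, DATA is what has been written to it,
# TOTAL is the RAM actually used after compression
sudo zramctl /dev/zram0
```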
> :exclamation: If you have something like 32GB of RAM or greater in your system, and you want to run several fully stateless user qubes within Dom0 RAM space, then you may want to consider increasing this “10240M” Dom0 RAM maximum value, in order to allow for more Dom0 RAM space where you can store more user qubes as fully stateless. For example, with 32GB of system RAM, you may choose to allocate “20480M” (20GB) or even greater if you wish.
**Step 6. Make New Dracut Config File ramboot.conf:**
Press `Ctrl + X` to exit nano editor.

**Step 7. Create a script to automatically edit** `/etc/grub.d/40_custom`.
Add:
```
# Max memory dom0
DOM0_MAX_KB=$(xenstore-read /local/domain/0/memory/hotplug-max 2>/dev/null \
  || xenstore-read /local/domain/0/memory/static-max 2>/dev/null \
  || echo 0)

if [ "$DOM0_MAX_KB" -gt 0 ]; then
    DOM0_MAX_MB=$(( DOM0_MAX_KB / 1024 ))
    DOM0_MAX_GB=$(( DOM0_MAX_MB / 1024 ))
    DOM0_MAX_GBG="${DOM0_MAX_GB}G"
    DOM0_MAX_RAM="dom0_mem=max:${DOM0_MAX_MB}M"
else
    DOM0_MAX_RAM="dom0_mem=max:10240M"
    DOM0_MAX_GB="10"
    DOM0_MAX_GBG="10G"
fi

cat > /usr/lib/dracut/modules.d/01ramboot/zram-mount.sh << EOF
#!/bin/sh
. /lib/dracut-lib.sh
if ! getargbool 0 rootzram ; then
    return
fi
mkdir /mnt
umount /sysroot
mount /dev/mapper/qubes_dom0-root /mnt
modprobe zram
echo $DOM0_MAX_GBG > /sys/block/zram0/disksize
/mnt/usr/sbin/mkfs.ext2 /dev/zram0
mount /dev/zram0 /sysroot
cp -a /mnt/* /sysroot
exit 0
EOF
chmod 755 /usr/lib/dracut/modules.d/01ramboot/zram-mount.sh

# Update INITRAMFS
dracut --verbose --force
```
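If you are unsure what the script will detect, you can query the same xenstore keys by hand in dom0 (the values are reported in KiB); when both reads fail, the script falls back to 10240M:

```
sudo xenstore-read /local/domain/0/memory/hotplug-max
sudo xenstore-read /local/domain/0/memory/static-max
```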
`multiboot2 /$XEN_PATH placeholder console=none dom0_mem=min:1024M $DOM0_MAX_RAM ucode=scan smt=off gnttab_max_frames=2048 gnttab_max_maptrack_frames=4096 \${xen_rm_opts}`
`# Update GRUB`

**Run** `grub-custom.sh`

> (Create `/usr/lib/dracut/modules.d/01ramboot/zram-mount.sh`, Update **Dracut**, Edit `40_custom` file, update **GRUB**)

**Step 7. Clone dangerous qubes (dvm-template, appVMs) into a pool in dom0**
> :exclamation: You can update templates in live modes, **but update dom0 in persistent mode!** **Avoid experimenting with dom0 in Zram-Live Mode** - it can break the system! For dom0 experiments, use only **Overlay-Live Mode**.
>
> Now if dom0 updates kernels, or if you've changed `dom0_mem=max` in GRUB, just run `sudo /usr/local/bin/grub-custom.sh` and the live modes will work with the new kernels and max memory settings.
**Step 2. Edit GRUB:**

Edit the line `GRUB_TIMEOUT=5` to `GRUB_TIMEOUT=30` (change 5 seconds to 30 seconds). Within the `GRUB_CMDLINE_XEN_DEFAULT` line, make a partial modification: change `dom0_mem=max:4096M` to `dom0_mem=max:10240M` → just change the text “`4096M`” to “`10240M`” (without quotes).
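For illustration only - your `GRUB_CMDLINE_XEN_DEFAULT` line will contain its own set of parameters, which you should leave untouched - the two edited lines in `/etc/default/grub` end up looking roughly like this:

```
GRUB_TIMEOUT=30
GRUB_CMDLINE_XEN_DEFAULT="console=none dom0_mem=min:1024M dom0_mem=max:10240M ucode=scan smt=off"
```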
**!! Important:** If you set more than 10 GB of `dom0_mem=max`, also change the ram-disk size in the Step 6 line `echo 10G > /sys/block/zram0/disksize`
**Step 3. Make New Directories for Dracut Automation Modules:**

**Step 4. Make Two Dracut Script Files (module-setup.sh):**

**Step 5. Make New Dracut Script File overlay-mount.sh:**

**Step 6. Make New Dracut Script File zram-mount.sh:**

**Step 7. Make New Dracut Config File ramboot.conf:**

**Step 8. Regenerate with New Dracut Automation Module:**
**Step 9. Create a script to automatically edit** `/etc/grub.d/40_custom`.

`sudo touch /usr/local/bin/grub-custom.sh`
`sudo chmod 755 /usr/local/bin/grub-custom.sh`
`sudo nano /usr/local/bin/grub-custom.sh`

Add:
```
#!/bin/bash

#BOOT_UUID
BOOT_UUID=$(findmnt -n -o UUID /boot 2>/dev/null || echo "AUTO_BOOT_NOT_FOUND")
if [ "$BOOT_UUID" = "AUTO_BOOT_NOT_FOUND" ]; then
    BOOT_UUID=$(blkid -s UUID -o value -d $(findmnt -n -o SOURCE /boot 2>/dev/null))
fi

# LUKS_UUID
LUKS_DEVICE=$(blkid -t TYPE="crypto_LUKS" -o device 2>/dev/null | head -n1 || echo "")
if [ -n "$LUKS_DEVICE" ]; then
    LUKS_UUID=$(sudo cryptsetup luksUUID "$LUKS_DEVICE" 2>/dev/null)
else
    LUKS_UUID="AUTO_LUKS_NOT_FOUND"
fi

# Latest XEN_PATH
XEN_PATH=$(ls /boot/xen*.gz 2>/dev/null | sort -V | tail -1 | xargs basename 2>/dev/null || echo "/xen-4.19.4.gz")

# Latest kernel/initramfs
LATEST_KERNEL=$(ls /boot/vmlinuz-*qubes*.x86_64 2>/dev/null | grep -E 'qubes\.fc[0-9]+' | sort -V | tail -1 | xargs basename)
LATEST_INITRAMFS=$(echo "/initramfs-${LATEST_KERNEL#vmlinuz-}.img")

cat > /etc/grub.d/40_custom << EOF
#!/usr/bin/sh
exec tail -n +3 \$0
menuentry 'Qubes Overlay-Live Mode (latest kernel)' --class qubes --class gnu-linux --class gnu --class os --class xen \$menuentry_id_option 'xen-gnulinux-simple-/dev/mapper/qubes_dom0-root' {
    insmod part_gpt
    insmod ext2
    search --no-floppy --fs-uuid --set=root $BOOT_UUID
    echo 'Loading Xen ...'
    if [ "\$grub_platform" = "pc" -o "\$grub_platform" = "" ]; then
        xen_rm_opts=
    else
        xen_rm_opts="no-real-mode edd=off"
    fi
    insmod multiboot2
    multiboot2 /$XEN_PATH placeholder console=none dom0_mem=min:1024M dom0_mem=max:10240M ucode=scan smt=off gnttab_max_frames=2048 gnttab_max_maptrack_frames=4096 \${xen_rm_opts}
    echo 'Loading Linux $LATEST_KERNEL ...'
    module2 /$LATEST_KERNEL placeholder root=/dev/mapper/qubes_dom0-root ro rd.luks.uuid=$LUKS_UUID rd.lvm.lv=qubes_dom0/root rd.lvm.lv=qubes_dom0/swap plymouth.ignore-serial-consoles rhgb rootovl quiet usbcore.authorized_default=0
    echo 'Loading initial ramdisk ...'
    insmod multiboot2
    module2 --nounzip $LATEST_INITRAMFS
}
menuentry 'Qubes Zram-Live Mode (latest kernel)' --class qubes --class gnu-linux --class gnu --class os --class xen \$menuentry_id_option 'xen-gnulinux-simple-/dev/mapper/qubes_dom0-root' {
    insmod part_gpt
    insmod ext2
    search --no-floppy --fs-uuid --set=root $BOOT_UUID
    echo 'Loading Xen ...'
    if [ "\$grub_platform" = "pc" -o "\$grub_platform" = "" ]; then
        xen_rm_opts=
    else
        xen_rm_opts="no-real-mode edd=off"
    fi
    insmod multiboot2
    multiboot2 /$XEN_PATH placeholder console=none dom0_mem=min:1024M dom0_mem=max:10240M ucode=scan smt=off gnttab_max_frames=2048 gnttab_max_maptrack_frames=4096 \${xen_rm_opts}
    echo 'Loading Linux $LATEST_KERNEL ...'
    module2 /$LATEST_KERNEL placeholder root=/dev/mapper/qubes_dom0-root ro rd.luks.uuid=$LUKS_UUID rd.lvm.lv=qubes_dom0/root rd.lvm.lv=qubes_dom0/swap plymouth.ignore-serial-consoles rhgb rootzram quiet usbcore.authorized_default=0
    echo 'Loading initial ramdisk ...'
    insmod multiboot2
    module2 --nounzip $LATEST_INITRAMFS
}
EOF

# Update Grub
grub2-mkconfig -o /boot/grub2/grub.cfg
```
Press `Ctrl + O` to save file.
Press `Ctrl + X` to exit nano editor.

Edit `40_custom` file and update **GRUB**:
`sudo /usr/local/bin/grub-custom.sh`
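To double-check that both new entries made it into the generated GRUB configuration (a plain grep, not part of the script itself), you can run:

```
sudo grep -n "Qubes Overlay-Live Mode\|Qubes Zram-Live Mode" /boot/grub2/grub.cfg
```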
This guide combines the best practices from this topic https://forum.qubes-os.org/t/qubes-in-tmpfs/11127 and it adds new options and capabilities for launching Qubes in RAM. (The old topic's founder and one of its main contributors are no longer active on the forum.)

This guide adds two new options to the GRUB menu for safely launching live modes. You will get two ways to launch **dom0** in RAM for protection against forensics:

1. **Qubes Overlay-Live Mode**
2. **Qubes Zram-Live Mode**



This guide solves the old problem of [implement live boot by porting grub-live to Qubes - amnesia / non-persistent boot / anti-forensics](https://github.com/QubesOS/qubes-issues/issues/4982). This also works great for experiments in Qubes or for beginners who want to learn without fear of breaking anything - all changes disappear after a reboot. It will extend the lifespan of your SSD.

> :exclamation: **Overlay-Live Mode** (used by native kicksecure/Whonix) is more secure and faster:
> The system runs on an overlay: the original root is mounted as lowerdir=/live/image (read-only view), and all changes go into upperdir=/cow/rw on tmpfs. This makes it hard for malware to persist on the real disk, since any modifications live only in the volatile upper layer. Overlay in tmpfs does not create and format new block devices nor copy the entire root filesystem at runtime. There are fewer points where an attacker could hook into disk-level operations or tamper with filesystem creation and bulk copying.
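After booting into Overlay-Live Mode you can confirm this layout from a dom0 terminal (`findmnt` is already available in dom0); the output should show an overlay root whose lowerdir is /live/image and whose upperdir sits on the /cow tmpfs:

```
# Should report FSTYPE=overlay with lowerdir=/live/image,upperdir=/cow/rw in OPTIONS
findmnt -n -o SOURCE,FSTYPE,OPTIONS /
```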
> **Zram-Live Mode** works by copying (`cp -a`) root to zram0 - it is a significantly less secure and slower scenario. But it saves RAM dramatically (using about half the RAM of Overlay-Live Mode). You will need at least 16 GB RAM for **Zram-Live Mode** and at least 24 GB RAM for **Overlay-Live Mode** to comfortably launch several qubes in live mode.
**Step 3. Create a script to automatically edit** `/etc/grub.d/40_custom`.

`sudo touch /usr/local/bin/grub-custom.sh`
`sudo chmod 755 /usr/local/bin/grub-custom.sh`
`sudo nano /usr/local/bin/grub-custom.sh`

Add the same `grub-custom.sh` script shown above, press `Ctrl + O` to save the file and `Ctrl + X` to exit the nano editor.

Edit `40_custom` file and update **GRUB**:
`sudo /usr/local/bin/grub-custom.sh`

**Step 4. Make New Directories for Dracut Automation Modules:**
**Step 5. Make Two Dracut Script Files (module-setup.sh):**

`inst_simple "$moddir/zram-mount.sh"`
`inst_hook cleanup 00 "$moddir/zram-mount.sh"`

**Step 6. Make New Dracut Script File overlay-mount.sh:**
```
#!/bin/sh
modprobe overlay
mount -o remount,nolock,noatime $NEWROOT
umount $NEWROOT
mount -n -t tmpfs -o mode=0755,size=95%,nr_inodes=500k,noexec,nodev,nosuid,relatime tmpfs /cow
mkdir /cow/work /cow/rw
mount -t overlay -o noatime,volatile,lowerdir=/live/image,upperdir=/cow/rw,workdir=/cow/work,default_permissions,relatime overlay $NEWROOT
```
**Step 7. Make New Dracut Script File zram-mount.sh:**

`sudo touch /usr/lib/dracut/modules.d/01ramboot/zram-mount.sh`
`sudo chmod 755 /usr/lib/dracut/modules.d/01ramboot/zram-mount.sh`
`sudo nano /usr/lib/dracut/modules.d/01ramboot/zram-mount.sh`
```
. /lib/dracut-lib.sh
if ! getargbool 0 rootzram ; then
    return
fi
```
```
;;
[Ss]* ) exit 0 ;;
* ) exit 1 ;;
esac
```
> :exclamation:Don't add templates to **varlibqubes** - templates don't retain any session data in an appVM (that's the point of Qubes' isolation). So, in live mode it's sufficient that only the **private** and **volatile** storages are active. You can inspect all template metadata yourself and verify that none of them contain artifacts from sessions in an appVM.

**Restart Qubes OS and test the Qubes live modes :wink:**



> Zram-Live Mode takes longer to start (about 30–40 seconds more).

> :exclamation: You can update templates in live modes, **but update dom0 in persistent mode!** Now if dom0 updates kernels, just run `sudo /usr/local/bin/grub-custom.sh` and the live modes will work with the new kernels.
Overlay is much closer to a real Tails OS (a read-only image). I added some additional flags to overlay-tmpfs to strengthen the security of the live mode.
`mount -t overlay -o noatime,volatile,lowerdir=/live/image,upperdir=/cow/rw,workdir=/cow/work,default_permissions,relatime overlay $NEWROOT`
`# a little bit tuning`
`# Move root`
`# Create tmpfs`
`mount -n -t tmpfs -o mode=0755,size=95%,nr_inodes=500k,noexec,nodev,nosuid,relatime tmpfs /cow`
`mount -t overlay -o noatime,volatile,lowerdir=/live/image,upperdir=/cow/rw,workdir=/cow/work,default_permissions,relatime overlay $NEWROOT`
> :exclamation:Don't add templates to **varlibqubes** - templates don't retain any session data in an appVM (that's the point of Qubes' isolation). So, in live mode it's sufficient that only the **private** and **volatile** storages are active. You can inspect all template metadata yourself and verify that none of them contain artifacts from sessions in an appVM.

> :exclamation:You can update templates in live modes, but update dom0 in persistent mode (ssd)!
`PROMPT="Enter boot mode: t-tmpfs, z-zram, s-ssd"`
> :exclamation:You will need to press one of three letters during the Qubes startup. You can edit the line "Enter boot mode: t-tmpfs, z-zram, s-ssd" however you like. :warning: If you want to use different letters to launch the modes, then specify those same letters in the next script (Step 7)! But keep in mind that the letter used to launch a mode must not match the first letter of your password! Otherwise, one of the modes will start automatically after you enter the password!
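In sketch form, the letter check in these scripts boils down to a shell `case` on the character you typed. Only the `s` and fallback branches below appear verbatim in this guide; the rest, including the variable name `answer` and the `MODE=` assignments, is an illustration you would adapt to the real pass.sh/tmpfs.sh scripts:

```
case "$answer" in
    [Tt]* ) MODE=tmpfs ;;   # Overlay-Live (tmpfs) boot
    [Zz]* ) MODE=zram  ;;   # Zram-Live boot
    [Ss]* ) exit 0 ;;       # plain persistent boot from the SSD
    * )     exit 1 ;;       # anything else is treated as the start of your password
esac
```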
:flashlight: You can add a "System Monitor" widget to the XFCE panel and configure it to run the command `findmnt -n -o SOURCE /`. This widget will display which mode you're currently in:



You can also use this terminal theme so you can see which mode you're currently in:
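If you prefer a readable label instead of a raw device path, a small helper script around the same `findmnt` call can translate it (the device names match the mounts used in this guide; the script itself is just an example, not part of the original setup):

```
#!/bin/sh
# Prints a human-readable name for the current dom0 boot mode
case "$(findmnt -n -o SOURCE /)" in
    overlay)                     echo "Overlay-Live Mode" ;;
    /dev/zram0)                  echo "Zram-Live Mode" ;;
    /dev/mapper/qubes_dom0-root) echo "Persistent mode (SSD)" ;;
    *)                           echo "Unknown" ;;
esac
```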
> :exclamation:The second scenario (overlay in tmpfs), used by native kicksecure/Whonix, is more secure:
> Root filesystem is effectively read-only.
> :exclamation: If you have something like 32GB of RAM or greater in your system, and you want to run several fully stateless user qubes within Dom0 RAM space, then you may want to consider increasing this “10240M” Dom0 RAM maximum value, in order to allow for more Dom0 RAM space where you can store more user qubes as fully stateless. For example, with 32GB of system RAM, you may choose to allocate “20480M” (20GB) or even greater if you wish.
> :exclamation:You will need to press one of three letters during the Qubes startup. You can edit the line "Enter boot mode: t-tmpfs, z-zram, s-ssd" however you like (but keep in mind that the letter used to launch a mode must not match the first letter of your password! Otherwise, one of the modes will start automatically after you enter the password). If you want to use different letters to launch the modes, then specify those same letters in the next script (Step 7)!
> :exclamation:Since only dom0 runs in live mode, you have to start the VMs from the pool in dom0. You can use the default pool `varlibqubes` or [create a new pool](https://dev.qubes-os.org/projects/core-admin-client/en/latest/manpages/qvm-pool.html).
> The only advantage of working with zram0 is memory savings. You will need at least 16 GB RAM for **zram0 mode** and at least 24 GB RAM for **overlayfs on tmpfs mode** to comfortably launch several qubes in live mode.
|  |
> The second scenario (overlay in tmpfs), used by native kicksecure/Whonix, is more secure:
> Root filesystem is effectively read-only.
> The system runs on an overlay: the original root is mounted as lowerdir=/live/image (read-only view), and all changes go into upperdir=/cow/rw on tmpfs. This makes it hard for malware to persist on the real disk, since any modifications live only in the volatile upper layer.
> Overlay in tmpfs does not create and format new block devices nor copy the entire root filesystem at runtime. There are fewer points where an attacker could hook into disk-level operations or tamper with filesystem creation and bulk copying.
> A separate mkfs and bulk copying with `cp -a` is a much more vulnerable scenario than creating an overlay. Overlay is much closer to a real Tails OS (a read-only image).
> The only advantage of working with zram0 is memory savings. You will need at least 16 GB RAM for **zram0 mode** and at least 32 GB RAM for **overlayfs on tmpfs mode** to comfortably launch several qubes in live mode.
That will permanently disable your Dom0 Swap after restart, which is likely best for operating Qubes Dom0 in live mode.
This also works great for experiments in Qubes or for beginners who want to learn without fear of breaking anything - all changes disappear after a reboot. It will extend the lifespan of your SSD. This guide also solves the old problem of [porting grub-live to Qubes](https://github.com/QubesOS/qubes-issues/issues/4982). Launching requires a lot of memory: at least 16 GB for **zram0 mode** and at least 32 GB for **overlayfs on tmpfs mode**.
Using nano, type a **#** character in front of the line with the word “**swap**” in the middle (**usually the last line**). Like this:
`#UUID=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx swap defaults,x-systemd.device-timeout=0 0 0`
> Since only dom0 runs in live mode, you have to start the VMs from the pool in dom0. You can use the default pool `varlibqubes` or [create a new pool](https://dev.qubes-os.org/projects/core-admin-client/en/latest/manpages/qvm-pool.html). In the Qube Manager click **clone qube**, then in **Advanced** select a pool in dom0 (`varlibqubes` or your new pool, **not vm-pool**). If you have a lot of memory, you can run all appVMs (sys, dvm, appVM) in live mode.
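The same clone can also be done from a dom0 terminal with `qvm-clone`, whose `-P` option selects the destination pool (the qube names below are placeholders, use your own):

```
# Clone an existing qube into the dom0 pool so it can run fully in RAM during live mode
qvm-clone -P varlibqubes my-dvm-template my-dvm-template-dom0
```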
It offers two live-mode options in a single convenient menu. The old topic's founder and one of its main contributors are no longer active on the forum. I created this guide so users don't have to read through the entire topic of almost 100 comments looking for a final solution. I've been using this guide for four months.
**!! Important:** also change the ram-disk size in the Step 8 line `echo 10G > /sys/block/zram0/disksize`
**Step 3. Make New Directories for Dracut Automation Modules:**
`# --move does not always work. Google >mount move "wrong fs"< for`
`# details`
`PROMPT="Enter boot mode: t-tmpfs, z-zram0, s-ssd"`
Regenerate with New GRUB Configuration:

*If you are using Qubes 4.2 / 4.3, you should run this single command:*
**Step 3. Make New Directories for Dracut Automation Modules:**

**Step 4. Make Two Dracut Script Files (module-setup.sh):**

**Step 5. Make New Dracut Script File overlay-mount.sh:**

**Step 6. Make New Dracut Script File pass.sh:**

**Step 7. Make New Dracut Script File tmpfs.sh:**

**Step 8. Make New Dracut Config File ramboot.conf:**

**Step 9. Regenerate with New Dracut Automation Module:**

**Step 10. Clone dangerous qubes (dvm-template, appVMs) into a pool in dom0**
That will permanently disable your Dom0 Swap after restart, which is likely best for operating Qubes Dom0 in live mode. If you still want to frequently use Qubes in Persistent mode and have Dom0 Swap enabled, then you can alternatively skip this permanent disable step, and rather temporarily disable Dom0 Swap when you start up Qubes Live by remembering (each and every time) to manually run the following Dom0 command…
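That command is presumably the same temporary swap-off call used by the automated script earlier in this guide: `sudo swapoff -a` (its effect lasts only until the next reboot).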
> Note: If you have something like 32GB of RAM or greater in your system, and you want to run several fully stateless user qubes within Dom0 RAM space, then you may want to consider increasing this “10240M” Dom0 RAM maximum value, in order to allow for more Dom0 RAM space where you can store more user qubes as fully stateless. For example, with 32GB of system RAM, you may choose to allocate “20480M” (20GB) or even greater if you wish.
>
> **!! Important:** also change the ram-disk size in the Step 8 line `echo 10G > /sys/block/zram0/disksize`
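For example (the figures are only an illustration for a 32 GB machine), you would set `dom0_mem=max:20480M` in the GRUB entries and change the Step 8 line to `echo 20G > /sys/block/zram0/disksize` so the zram disk matches the larger dom0 memory.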
**If you have limited memory, use this guide:** https://forum.qubes-os.org/t/really-disposable-ram-based-qubes/21532