# Dom0

Add `dom0_max_vcpus=4 dom0_vcpus_pin smt=on` to the Xen command line. With `dom0_vcpus_pin`, dom0's vCPUs are pinned 1:1 to the first physical CPUs, so with `dom0_max_vcpus=4` dom0 ends up on cores 0-3. You can leave out `smt=on` if you don't want to enable SMT, and if you want to use SMT read the notes at the end.
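These are Xen boot options, so on a stock Qubes install they belong on the Xen line of dom0's GRUB configuration. A minimal sketch, assuming the default `/etc/default/grub` layout (keep whatever options are already on that line, the exact set varies per install):

```
GRUB_CMDLINE_XEN_DEFAULT="... existing options ... dom0_max_vcpus=4 dom0_vcpus_pin smt=on"
```

Then regenerate the GRUB config and reboot, for example with `grub2-mkconfig -o /boot/efi/EFI/qubes/grub.cfg` on an EFI install (on legacy BIOS the output path is typically /boot/grub2/grub.cfg).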
# xen-user.xml

Pinning can also be baked into the libvirt domain config by overriding the default template with /etc/qubes/templates/libvirt/xen-user.xml in dom0.
# Qubes admin events

It's possible to use Qubes admin events to pin or migrate qubes. This method has the advantage that it happens after the qube has started, and it can also be used on qubes that don't use xen-user.xml.
*create a file: /usr/local/bin/cpu_pinning.py containing:*
```
#!/usr/bin/env python3
import asyncio
import subprocess

import qubesadmin
import qubesadmin.events

# i5-13600k (smt=off)
P_CORES = '0-5'
E_CORES = '6-13'

tag = 'performance'


def _vcpu_pin(name, cores):
    # Pin all of the domain's vCPUs to the given range of physical cores
    cmd = ['xl', 'vcpu-pin', name, 'all', cores]
    subprocess.run(cmd).check_returncode()


def pin_by_tag(vm, event, **kwargs):
    vm = app.domains[str(vm)]
    if tag in list(vm.tags):
        _vcpu_pin(vm.name, P_CORES)
        print(f'Pinned {vm.name} to P-cores')
    else:
        _vcpu_pin(vm.name, E_CORES)
        print(f'Pinned {vm.name} to E-cores')


app = qubesadmin.Qubes()
dispatcher = qubesadmin.events.EventsDispatcher(app)
dispatcher.add_handler('domain-start', pin_by_tag)
asyncio.run(dispatcher.listen_for_events())
```

The script pins every qube tagged `performance` to the P-cores and everything else to the E-cores; in dom0 you can tag a qube with `qvm-tags <qube> add performance`. Adjust the core ranges to match your CPU.

[details="alternative where VMs selected by name - CLICK"]
In @ymy's variant all VMs whose name contains `disp` are assigned to all cores, while other machines are bound to the efficiency cores (given 4 P-cores and 8 E-cores):

```
#!/usr/bin/env python3
import asyncio
import subprocess

import qubesadmin
import qubesadmin.events

# 4 P-cores + 8 E-cores: '0-11' is all cores, '4-11' is only the E-cores
P_CORES = '0-11'
E_CORES = '4-11'


def _vcpu_pin(name, cores):
    cmd = ['xl', 'vcpu-pin', name, 'all', cores]
    subprocess.run(cmd).check_returncode()


def pin_by_tag(vm, event, **kwargs):
    vm = app.domains[str(vm)]
    if 'disp' in vm.name:
        _vcpu_pin(vm.name, P_CORES)
        print(f'Pinned {vm.name} to all cores')
    else:
        _vcpu_pin(vm.name, E_CORES)
        print(f'Pinned {vm.name} to efficiency cores')


app = qubesadmin.Qubes()
dispatcher = qubesadmin.events.EventsDispatcher(app)
dispatcher.add_handler('domain-start', pin_by_tag)
asyncio.run(dispatcher.listen_for_events())
```
[/details]

Make the script executable with `chmod +x /usr/local/bin/cpu_pinning.py`, then create a file: /lib/systemd/system/cpu-pinning.service containing:

```
[Unit]
Description=Qubes CPU pinning
After=qubesd.service

[Service]
ExecStart=/usr/local/bin/cpu_pinning.py

[Install]
WantedBy=multi-user.target
```

followed by:

systemctl enable cpu-pinning.service
systemctl start cpu-pinning.service

Thanks to @noskb for showing how to use admin events.

# CPU pools

You can use CPU pools as an alternative to pinning, which has the advantage of the pool configuration being defined in a single place. If you are using CPU pools with SMT enabled, you probably need to use the credit scheduler; SMT doesn't seem to work with credit2.

All cores start in the pool named Pool-0, and dom0 needs to remain in Pool-0. This is how you split the cores into an ecores and a pcores pool, and leave 2 cores in Pool-0 for dom0:

> /usr/sbin/xl cpupool-cpu-remove Pool-0 2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23
> /usr/sbin/xl cpupool-create name=\"pcores\" sched=\"credit2\"
> /usr/sbin/xl cpupool-cpu-add pcores 2,3,4,5,6,7,8,9,10,11,12,13,14,15
> /usr/sbin/xl cpupool-create name=\"ecores\" sched=\"credit2\"
> /usr/sbin/xl cpupool-cpu-add ecores 16,17,18,19,20,21,22,23

When the pools have been created, you can migrate domains to the pools with this command:

> /usr/sbin/xl cpupool-migrate sys-net ecores

# Using qrexec to migrate qubes

With CPU pools, it's easy to make a qrexec RPC command that allows qubes to request to be moved to pcores.

Policy file `/etc/qubes-rpc/policy/qubes.PCores`

```
$anyvm dom0 allow
```

Qubes-rpc file `/etc/qubes-rpc/qubes.PCores`

```
#!/bin/bash

pool=$(/usr/sbin/xl list -c $QREXEC_REMOTE_DOMAIN | awk '{if(NR!=1) {print $7}}')

if [[ $pool == "ecores" ]]; then
    /usr/sbin/xl cpupool-migrate $QREXEC_REMOTE_DOMAIN pcores
fi
```

From any qube you can use the command `qrexec-client-vm dom0 qubes.PCores` and the qube will be moved to pcores, if it is currently placed on ecores. This allows you to start qubes on E-cores and move them to P-cores when you start a program that needs to run on P-cores.

The qrexec call can be added to menu commands:

> Exec=bash -c 'qrexec-client-vm dom0 qubes.PCores&&blender'

This is an example of how you can automatically move a qube to pcores when you start blender.
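As a concrete sketch of such a menu entry, you can copy the application's .desktop file inside the qube and adjust its Exec line; blender and the entry name here are only placeholders, not something the guide prescribes:

```
[Desktop Entry]
Type=Application
Name=Blender (P-cores)
Exec=bash -c 'qrexec-client-vm dom0 qubes.PCores&&blender'
```

Anything launched through this entry first asks dom0 to migrate the qube to pcores and then starts the program.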
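Whichever method you use, you can verify the result from dom0. A quick sketch of the relevant xl commands (sys-net is just an example domain; output formats vary between Xen versions):

```
xl vcpu-list sys-net    # shows which physical CPUs each vCPU is allowed to run on
xl cpupool-list -c      # shows which CPUs belong to which pool
xl list -c sys-net      # the last column shows the pool the domain is in
```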
# Notes on SMT

Using `sched-gran=core` doesn't seem to work with Alder Lake; the Xen dmesg (`xl dmesg`) has the following warning.

```
(XEN) ***************************************************
(XEN) Asymmetric cpu configuration.
(XEN) Falling back to sched-gran=cpu.
(XEN) ***************************************************
```

Using SMT can be dangerous, and not being able to use sched-gran=core makes it more dangerous. **Unless you understand and are okay with the consequences of enabling SMT, you should just leave it off.**
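If you do experiment with SMT, one way to confirm what Xen actually ended up using (a sketch; the exact log lines vary by Xen version):

```
xl info | grep threads_per_core    # 2 means SMT is active, 1 means it is off
xl dmesg | grep -i sched-gran      # shows whether core scheduling fell back to cpu granularity
```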