Split-OpenSnitch with Per-VM Identity (qubes-opensnitch-pipes)

Original forum link: https://forum.qubes-os.org/t/37783
Original poster: hUt4Ke107Y7VyK
Editors: ephile, hUt4Ke107Y7VyK
Created at: 2025-12-08 11:36:44
Last wiki edit: 2025-12-12 03:44:31

Hi everyone,

I wanted to share a project I have been working on to solve the "identity crisis" when running OpenSnitch in a split configuration on Qubes OS.

The Problem: When you connect multiple AppVMs to a central OpenSnitch UI VM, all traffic usually appears as coming from localhost. This makes it impossible to distinguish which VM is triggering a rule or requesting access.

The Solution: I have created a set of helper scripts: qubes-opensnitch-pipes.

The project uses a client/server model ("pipe" and "piped") to tunnel traffic through unique ports and dummy network interfaces. This allows the OpenSnitch UI to see a unique IP address for every AppVM (e.g., sys-net appears as 127.0.0.3, vault as 127.0.0.2, etc.).
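
Conceptually, each data port is a small forwarder that re-originates connections from a per-VM loopback address. A rough sketch of the idea using socat (illustration only, not the actual implementation; the real daemon manages these slots from its config map):

# Re-originate anything arriving on data port 50052 from 127.0.0.2,
# so the UI at 127.0.0.1:50051 sees 127.0.0.2 as the peer
socat TCP-LISTEN:50052,fork,reuseaddr TCP:127.0.0.1:50051,bind=127.0.0.2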

Status & Compatibility

I have been using this setup on Qubes 4.2 and currently on 4.3.

* Reliability: It has served me well for daily use, though it is still somewhat experimental.
* Complexity: It takes a little work to set up initially.
* Disposables: Managing persistent rules for Disposable VMs can be a bit tricky, but it is doable.


Setup Guide

Here is a summary of how to get it running. For the most up-to-date instructions and source code, please check the GitHub Repository.


Prerequisites

Before installing the pipes, disable the stock opensnitch systemd service (systemctl disable opensnitch) and remove /etc/xdg/autostart/opensnitch_ui.desktop, so that neither the daemon nor the UI auto-starts on its own. Both will be started differently later in this guide.
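
For example (assuming a standard TemplateVM, so the change persists to the AppVMs based on it):

sudo systemctl disable opensnitch
sudo rm /etc/xdg/autostart/opensnitch_ui.desktop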


1. Qubes Policy Configuration (dom0)

You must allow TCP connections between your nodes and the UI VM.

Edit /etc/qubes/policy.d/30-opensnitch.policy in dom0:

# OpenSnitch node connections
# 50050 is the handshake/control port
qubes.ConnectTCP +50050 @tag:snitch @default allow target=snitch-ui

# 50052+ are the data ports (one per node slot); 50051 is used by the UI socket itself
qubes.ConnectTCP +50052 @tag:snitch @default allow target=snitch-ui
qubes.ConnectTCP +50053 @tag:snitch @default allow target=snitch-ui
qubes.ConnectTCP +50054 @tag:snitch @default allow target=snitch-ui
qubes.ConnectTCP +50055 @tag:snitch @default allow target=snitch-ui
qubes.ConnectTCP +50056 @tag:snitch @default allow target=snitch-ui
qubes.ConnectTCP +50057 @tag:snitch @default allow target=snitch-ui
qubes.ConnectTCP +50058 @tag:snitch @default allow target=snitch-ui
qubes.ConnectTCP +50059 @tag:snitch @default allow target=snitch-ui

Replace snitch-ui with the actual name of your UI AppVM.
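
To sanity-check the policy before deploying the scripts, you can open a test tunnel manually from a tagged VM (qvm-connect-tcp is part of the standard Qubes agent tooling):

# Bind localhost:50050 and connect to @default:50050 as resolved by the policy
qvm-connect-tcp ::50050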

Tagging VMs

Apply the snitch tag to any AppVM that should send data to the UI:

qvm-tags [VM_NAME] add snitch
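
For example, in dom0, using the VM names from the config map shown later:

qvm-tags sys-net add snitch
qvm-tags vault add snitch
# Running qvm-tags with no action lists a VM's tags:
qvm-tags sys-net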

2. Server Setup (UI VM)

Configure the AppVM where opensnitch-ui will run.

Configuration Map

Create a config file to map your VMs to specific loopback IP addresses. This ensures that sys-net always appears as 127.0.0.3 (for example).

mkdir -p ~/.config/qubes-opensnitch-piped
nano ~/.config/qubes-opensnitch-piped/config.json

Example config.json:

{
    "vault": "127.0.0.2",
    "sys-net": "127.0.0.3",
    "sys-usb": "127.0.0.4",
    "gpu-personal": "127.0.0.5",
    "sys-protonvpn": "127.0.0.6",
    "test": "127.0.0.7"
}
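
Since the map must be valid JSON, it is worth checking the file before starting the service:

python3 -m json.tool ~/.config/qubes-opensnitch-piped/config.json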

Enable the Helper Service

Create the user-level systemd service file:

File: ~/.config/systemd/user/qubes-opensnitch-piped.service

[Unit]
Description=Qubes OpenSnitch Piped Connector
After=graphical-session.target

[Service]
ExecStart=/usr/local/bin/qubes-opensnitch-piped -ad \
    -c /home/user/.config/qubes-opensnitch-piped/config.json
Restart=on-failure

[Install]
WantedBy=default.target

Then enable and start the piped connector:

systemctl --user enable --now qubes-opensnitch-piped
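
To confirm the connector came up cleanly:

systemctl --user status qubes-opensnitch-piped
journalctl --user -u qubes-opensnitch-piped -e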

Start OpenSnitch UI

Ensure the standard OpenSnitch UI is listening on all interfaces (or specifically the IPv6 wildcard) so it can accept the forwarded traffic. Add this to your /rw/config/rc.local or autostart:

# Start local daemon
systemctl enable --now opensnitch

# Start UI listening on port 50051
opensnitch-ui --socket "[::]:50051" &
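
You can verify that the UI socket is listening on the wildcard address:

# Should show a listener on [::]:50051
ss -tlnp | grep 50051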

3. Client Setup (Node VMs)

There are two ways to configure the nodes: via the TemplateVM (System-wide) or per AppVM (User mode).

Prerequisite: Persistence

Since AppVMs reset /etc on reboot, you must symlink the rules directory to a persistent location. Run this on your TemplateVM:

sudo mkdir -p /rw/config/opensnitchd/rules
sudo rm -rf /etc/opensnitchd/rules
sudo ln -s /rw/config/opensnitchd/rules /etc/opensnitchd/rules
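
A quick way to confirm the symlink is in place:

readlink /etc/opensnitchd/rules
# Expected output: /rw/config/opensnitchd/rules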

Option A: System Service (TemplateVM)

This method allows you to deploy the script in a TemplateVM but only activate it on specific hosts (e.g., sys-net).

File: /etc/systemd/system/qubes-opensnitch-pipe.service

[Unit]
Description=Qubes OpenSnitch Pipe
After=graphical-session.target
# Only start on these specific VMs:
ConditionHost=|sys-net
ConditionHost=|sys-usb

[Service]
ExecStart=/usr/local/bin/qubes-opensnitch-pipe -sp /rw/config/opensnitchd/rules
Restart=on-failure
KillMode=process

[Install]
WantedBy=default.target

Note: The -sp flag appends a per-VM suffix to the rules directory (e.g., /rw/config/opensnitchd/rules.sys-net), allowing different rule sets for different VMs.
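
Assuming the unit file above is installed in the TemplateVM, enable it there; the ConditionHost lines keep it inactive in any VM not listed:

sudo systemctl enable qubes-opensnitch-pipe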

Option B: User Service (AppVM Specific)

If you prefer configuring per AppVM (requires sudo privileges):

File: ~/.config/systemd/user/qubes-opensnitch-pipe.service

[Unit]
Description=Qubes OpenSnitch Pipe
After=graphical-session.target

[Service]
ExecStart=/usr/local/bin/qubes-opensnitch-pipe -sp /rw/config/opensnitchd/rules
Restart=on-failure
KillMode=process

[Install]
WantedBy=default.target

Enable it:

systemctl --user enable --now qubes-opensnitch-pipe
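
Once both ends are running, connections from each node should appear in the UI under its mapped loopback address (e.g., sys-net as 127.0.0.3). To check the client side:

systemctl --user status qubes-opensnitch-pipe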