People often think you need VMware, Hyper-V, or at minimum Proxmox to run a “real” hypervisor. Something with a web UI, enterprise features, the whole package.
But here’s the thing: KVM with libvirt can do virtually everything those commercial hypervisors do. Live migration, memory ballooning, CPU pinning, GPU passthrough, SR-IOV, nested virtualization — it’s all there. The Linux kernel has been a production-grade hypervisor for over a decade.
I run NixOS as my hypervisor. No Proxmox, no web UI, just declarative Nix configs and virsh. Let me show you what’s possible.
The Foundation: KVM and QEMU
Before diving in, let’s clarify the stack:
- KVM (Kernel-based Virtual Machine): A Linux kernel module that turns your kernel into a type-1 hypervisor. Hardware-accelerated, near-native performance.
- QEMU: The userspace emulator that provides virtual hardware (disks, network cards, etc.) to VMs.
- libvirt: A management layer that provides a consistent API across different virtualization technologies. Includes the virsh CLI.
Together, this stack powers everything from your laptop’s VMs to massive cloud infrastructure. AWS’s Nitro hypervisor? Built on KVM. Google Cloud? KVM. Most of the cloud runs on this technology.
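Before building anything on this stack, it's worth confirming KVM is actually usable on the host. The kernel exposes /dev/kvm only when the kvm module is loaded and the CPU's virtualization extensions are enabled in firmware. A minimal check (the optional path argument is only there to make the function easy to exercise):

```shell
# Check whether hardware-accelerated virtualization is usable.
# /dev/kvm exists only when the kvm module is loaded and VT-x/AMD-V
# is enabled in firmware; the path argument is for testing only.
kvm_ready() {
  [ -e "${1:-/dev/kvm}" ]
}

if kvm_ready; then
  echo "KVM available"
else
  echo "KVM not available (check BIOS/UEFI virtualization settings)"
fi
```

If this reports KVM missing on hardware that should support it, the usual culprit is virtualization being disabled in firmware.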
NixOS Configuration
Here’s how I set up NixOS as a hypervisor:
{ config, pkgs, ... }:

{
  # Enable virtualization
  virtualisation.libvirtd = {
    enable = true;
    qemu = {
      package = pkgs.qemu_kvm;
      runAsRoot = true;
      swtpm.enable = true; # TPM emulation
      ovmf = {
        enable = true; # UEFI support
        packages = [ pkgs.OVMFFull.fd ];
      };
    };
  };

  # Add yourself to the libvirtd group
  users.users.tom = {
    extraGroups = [ "libvirtd" ];
  };

  # Useful tools
  environment.systemPackages = with pkgs; [
    virt-manager # GUI if you want it
    virt-viewer  # For console access
    libguestfs   # VM image manipulation
    win-virtio   # Windows VirtIO drivers
  ];

  # Bridge networking (optional, for bridged VMs)
  networking.bridges.br0.interfaces = [ "enp1s0" ];
  networking.interfaces.br0.useDHCP = true;
}
Apply this config, and you have a fully functional hypervisor. That’s it. No installer, no setup wizard, just Nix.
Creating VMs with virsh
Let’s create a VM. First, allocate a disk:
# Create a 50GB qcow2 disk
qemu-img create -f qcow2 /var/lib/libvirt/images/ubuntu-server.qcow2 50G
Then define the VM with XML (or use virt-install):
virt-install \
--name ubuntu-server \
--memory 4096 \
--vcpus 2 \
--disk /var/lib/libvirt/images/ubuntu-server.qcow2 \
--cdrom /var/lib/libvirt/images/ubuntu-24.04-server.iso \
--os-variant ubuntu24.04 \
--network bridge=br0 \
--graphics vnc
Now you have a running VM. Manage it with:
virsh list --all # List all VMs
virsh start ubuntu-server # Start VM
virsh shutdown ubuntu-server # Graceful shutdown
virsh destroy ubuntu-server # Force stop
virsh console ubuntu-server # Serial console
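One gotcha: virsh shutdown only sends an ACPI request, so a hung guest (or one without ACPI handling) will happily ignore it. A wrapper I find useful waits for a graceful stop and only then forces the issue. This is a sketch: safe_shutdown and its timeout are my own conventions, not a virsh feature, though "shut off" is the real state string virsh domstate prints for a halted guest:

```shell
# Graceful shutdown with a timeout, falling back to a forced stop.
# safe_shutdown is this sketch's own helper, not part of virsh;
# "shut off" is the state virsh domstate reports for a halted guest.
safe_shutdown() {
  vm=$1 timeout=${2:-60}
  virsh shutdown "$vm" >/dev/null 2>&1 || true
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    if [ "$(virsh domstate "$vm" 2>/dev/null)" = "shut off" ]; then
      echo "stopped gracefully"
      return 0
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  virsh destroy "$vm" >/dev/null 2>&1 || true
  echo "forced off"
}

# Usage: safe_shutdown ubuntu-server 120
```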
Live Migration
Here’s where people think you need enterprise software. You don’t.
Live migration moves a running VM from one host to another with minimal downtime. The VM keeps running while its memory is copied, then the final switchover happens in milliseconds.
Prerequisites:
- Shared storage (NFS, Ceph, GlusterFS) or same disk path on both hosts
- Same CPU architecture (or use CPU model compatibility)
- libvirt running on both hosts
# From the source host
virsh migrate --live ubuntu-server qemu+ssh://target-host/system
# With options for better performance
virsh migrate --live --persistent --undefinesource \
--copy-storage-all \
ubuntu-server qemu+ssh://target-host/system
That’s it. Your VM is now running on the other host. The --copy-storage-all flag even copies the disk if you don’t have shared storage.
For scripted environments, you can do this programmatically:
#!/bin/bash
# Simple load balancer - migrate a VM to the least-loaded host
HOSTS=("host1" "host2" "host3")
VM="ubuntu-server"

# Find the host with the lowest 1-minute load average
TARGET=$(for h in "${HOSTS[@]}"; do
  echo "$(ssh "$h" "cut -d' ' -f1 /proc/loadavg") $h"
done | sort -n | head -1 | cut -d" " -f2)

virsh migrate --live "$VM" "qemu+ssh://$TARGET/system"
Memory Ballooning
Memory ballooning lets you dynamically adjust VM memory without rebooting. The VM’s balloon driver releases memory back to the host or requests more.
Enable it in your VM definition:
<memballoon model='virtio'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</memballoon>
Then adjust at runtime:
# Set current memory to 2GB (VM has 4GB max)
virsh setmem ubuntu-server 2G --live
# Give it back 4GB
virsh setmem ubuntu-server 4G --live
This is incredibly useful for overcommitting memory. You can allocate 64GB across VMs on a 32GB host, letting the balloon driver balance actual usage.
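When overcommitting, I like to keep an eye on the ratio of allocated to physical memory. A small sketch of that arithmetic; the "vm_name current_MiB" input format and the overcommit_ratio helper are illustrative conventions of mine, with the real numbers coming from something like virsh dommemstat in practice:

```shell
# Compute an overcommit ratio from "vm_name current_MiB" lines on stdin,
# against the host's physical memory in MiB. Input format and helper name
# are this sketch's own conventions, not virsh output.
overcommit_ratio() {
  awk -v host="$1" '{ sum += $2 } END { printf "%.2f\n", sum / host }'
}

# Three example VMs totalling 32 GiB on a 16 GiB host:
printf 'web 8192\ndb 16384\nci 8192\n' | overcommit_ratio 16384
# prints 2.00
```

Anything much above 1.5 or so means the balloon driver is doing real work, and a memory spike across several guests at once can still push the host into swap.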
GPU Passthrough
This is the holy grail for homelab enthusiasts: passing a physical GPU directly to a VM for near-native graphics performance. Gaming VMs, machine learning workloads, transcoding — all possible.
First, configure IOMMU in your NixOS config:
{
  boot.kernelParams = [
    "intel_iommu=on" # or amd_iommu=on for AMD
    "iommu=pt"
  ];

  # Isolate the GPU from the host
  boot.extraModprobeConfig = ''
    options vfio-pci ids=10de:2204,10de:1aef
  '';

  boot.initrd.kernelModules = [
    "vfio_pci"
    "vfio"
    "vfio_iommu_type1"
  ];
}
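After a reboot with IOMMU enabled, the kernel exposes the grouping under /sys/kernel/iommu_groups. Everything in a group has to be passed through together, so it's worth listing the groups before committing to a layout. A sketch (the base directory is a parameter only so the function is easy to test):

```shell
# List devices per IOMMU group from the sysfs layout
# /sys/kernel/iommu_groups/<group>/devices/<pci-address>.
list_iommu_groups() {
  base=${1:-/sys/kernel/iommu_groups}
  for dev in "$base"/*/devices/*; do
    [ -e "$dev" ] || continue
    group=${dev%/devices/*}
    echo "group ${group##*/}: ${dev##*/}"
  done
}

list_iommu_groups
```

If your GPU shares a group with devices you can't pass through, you may need to move it to a different PCIe slot or look into ACS override patches.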
Find your GPU’s PCI IDs:
lspci -nn | grep -i nvidia
# 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation ... [10de:2204]
# 01:00.1 Audio device [0403]: NVIDIA Corporation ... [10de:1aef]
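The bracketed vendor:device pairs are exactly what the vfio-pci ids= option wants, comma-separated. A small text-munging sketch to pull them out of lspci -nn output (extract_ids is my own helper name, and the sample lines below are shortened examples):

```shell
# Pull [vvvv:dddd] vendor:device pairs out of `lspci -nn` lines and join
# them comma-separated for the vfio-pci "ids=" option. Class codes like
# [0300] are four digits with no colon, so the pattern skips them.
extract_ids() {
  grep -o '\[[0-9a-f]\{4\}:[0-9a-f]\{4\}\]' | tr -d '[]' | paste -sd, -
}

printf '%s\n' \
  '01:00.0 VGA compatible controller [0300]: NVIDIA [10de:2204]' \
  '01:00.1 Audio device [0403]: NVIDIA [10de:1aef]' | extract_ids
# prints 10de:2204,10de:1aef
```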
Then add the GPU to your VM:
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
  </source>
</hostdev>
The VM now has direct access to the GPU. Install the native drivers in the guest, and you get full hardware acceleration.
Network Card Passthrough and SR-IOV
Same principle works for network cards. For 10GbE or 25GbE cards, you might want direct passthrough:
# Find the NIC
lspci -nn | grep -i ethernet
# Pass it through like the GPU
But the real power is SR-IOV (Single Root I/O Virtualization). It splits one physical NIC into multiple virtual functions, each assignable to a different VM:
# Enable 4 virtual functions on the NIC
echo 4 > /sys/class/net/enp5s0f0/device/sriov_numvfs
# List them
lspci | grep "Virtual Function"
Each VF gets its own PCI address and can be passed to a VM. Hardware-accelerated networking without the overhead of virtio.
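The kernel records that mapping in sysfs: each VF shows up as a virtfn* symlink under the parent device's directory, pointing at the VF's own PCI address. A sketch that resolves them (the directory parameter exists so the function can be pointed anywhere, which also makes it testable):

```shell
# List the PCI addresses of a NIC's SR-IOV virtual functions by resolving
# the virtfn* symlinks the kernel creates under the parent device.
list_vfs() {
  dev_dir=$1
  for vf in "$dev_dir"/virtfn*; do
    [ -e "$vf" ] || continue
    basename "$(readlink -f "$vf")"
  done
}

list_vfs /sys/class/net/enp5s0f0/device
```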
CPU Pinning
For latency-sensitive workloads, you want VMs to use specific CPU cores without being migrated by the scheduler:
<vcpu placement='static'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='4'/>
  <vcpupin vcpu='1' cpuset='5'/>
  <vcpupin vcpu='2' cpuset='6'/>
  <vcpupin vcpu='3' cpuset='7'/>
  <emulatorpin cpuset='0-1'/>
</cputune>
This pins the VM’s 4 vCPUs to host cores 4-7, with emulator threads on cores 0-1. Combined with isolcpus in your kernel parameters, you get near-bare-metal latency.
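Writing those pin lines by hand gets old for bigger VMs. Since the mapping is just vCPU i to host core first+i, it's easy to generate; gen_vcpupins is my own helper, not a libvirt tool:

```shell
# Emit <vcpupin> lines mapping vCPU i to host core (first_core + i),
# matching the contiguous layout shown in the XML above.
gen_vcpupins() {
  vcpus=$1 first_core=$2
  i=0
  while [ "$i" -lt "$vcpus" ]; do
    echo "<vcpupin vcpu='$i' cpuset='$((first_core + i))'/>"
    i=$((i + 1))
  done
}

gen_vcpupins 4 4
```

Running it with 4 and 4 reproduces the four vcpupin lines above, modulo indentation.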
Declarative VM Management on NixOS
Here’s where NixOS shines. You can define VMs declaratively:
{ pkgs, ... }:

{
  virtualisation.libvirtd.enable = true;

  # Define VMs as Nix expressions
  systemd.services."libvirt-guest-ubuntu" = {
    after = [ "libvirtd.service" ];
    requires = [ "libvirtd.service" ];
    wantedBy = [ "multi-user.target" ];
    serviceConfig = {
      Type = "oneshot";
      RemainAfterExit = "yes";
    };
    script = ''
      ${pkgs.libvirt}/bin/virsh define /etc/libvirt/qemu/ubuntu-server.xml || true
      ${pkgs.libvirt}/bin/virsh start ubuntu-server || true
    '';
    preStop = ''
      ${pkgs.libvirt}/bin/virsh shutdown ubuntu-server || true
      sleep 10
      ${pkgs.libvirt}/bin/virsh destroy ubuntu-server || true
    '';
  };
}
Or use NixVirt for a more integrated approach:
{
  virtualisation.libvirt.connections."qemu:///system" = {
    domains = [{
      definition = ./vms/ubuntu-server.xml;
      active = true;
    }];
  };
}
Your VMs are now part of your NixOS configuration. Version controlled, reproducible, declarative.
The Limits of CLI-Only
I’ve been running this setup for years, and it works well. But I won’t pretend it’s all roses.
What gets tedious:
- Creating VMs: virt-install has a million flags. I have wrapper scripts, but it's not pretty.
- Monitoring: You end up writing scripts to check VM status, resource usage, health. Or Grafana dashboards with libvirt exporters.
- Storage management: Creating pools, volumes, snapshots — all doable with virsh, but the commands are verbose.
- Network visualization: Which VM is on which bridge? Which has which IP? You need scripts or good documentation.
What scripts can’t easily do:
- Quick visual overview: How are my VMs doing? What's the resource usage? A dashboard beats virsh list plus virsh domstats plus parsing.
- Console access: virt-viewer works, but a web-based console is more convenient for remote management.
- Delegation: Want to let someone else manage specific VMs without full host access? That's complex with raw libvirt.
Why Proxmox Exists
This brings me to Proxmox. It’s the same underlying technology — KVM, QEMU, libvirt — but wrapped in a web UI that makes everything more accessible.
I’m not saying you need Proxmox. For a homelab where you’re the only admin, NixOS + virsh + scripts is perfectly viable. I’ve run it that way for years.
But there’s real value in a good UI:
- Onboarding: New team member can understand the infrastructure immediately
- Emergency response: At 3 AM, clicking buttons is easier than remembering virsh syntax
- Visibility: See all VMs, their status, resource usage, network topology at a glance
- Integrated features: Backup scheduling, HA clustering, firewall rules — configured in one place
Proxmox also adds clustering, Ceph integration, and backup management that would take significant effort to replicate with scripts.
My Recommendation
If you want to learn virtualization deeply, start with NixOS + KVM + virsh. You’ll understand exactly what’s happening. No magic, no abstraction. Every feature the “enterprise” hypervisors advertise, you can implement yourself.
If you need to run virtualization in production or with a team, consider Proxmox. The UI isn’t just convenience — it’s operational sanity.
And if you’re like me — running a homelab where learning is part of the fun — NixOS as a hypervisor is a great choice. Declarative configs, full control, and the satisfaction of knowing there’s no vendor lock-in. Just Linux, doing what Linux does best.
The best hypervisor is the one you understand. KVM gives you that understanding. Whether you wrap it in Proxmox, NixOS, or raw scripts is a matter of preference and use case.
