
Proxmox Hypervisor - GPU and Disk Passthrough

Proxmox - This article is part of a series.
Part 1: This Article

Proxmox tutorial part 1

Proxmox Web GUI

The web GUI is reachable at: https://<server-IP>:8006


Prerequisites

Repositories

Link: https://pve.proxmox.com/wiki/Package_Repositories

After the installation, the Proxmox and Ceph enterprise repositories are enabled by default, which leads to an error when trying to update the package lists with apt update.

Shell error:

Reading package lists... Done
E: Failed to fetch https://enterprise.proxmox.com/debian/ceph-quincy/dists/bookworm/InRelease  401  Unauthorized [IP: 212.224.123.70 443]
E: The repository 'https://enterprise.proxmox.com/debian/ceph-quincy bookworm InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: Failed to fetch https://enterprise.proxmox.com/debian/pve/dists/bookworm/InRelease  401  Unauthorized [IP: 212.224.123.70 443]
E: The repository 'https://enterprise.proxmox.com/debian/pve bookworm InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.

Check the repositories in the GUI:


Add the non-enterprise repositories for Proxmox and Ceph:

vi /etc/apt/sources.list

deb http://ftp.at.debian.org/debian bookworm main contrib

deb http://ftp.at.debian.org/debian bookworm-updates main contrib

## Add the following Repository
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription

## Add the following Repository
deb http://download.proxmox.com/debian/ceph-quincy bookworm no-subscription

# security updates
deb http://security.debian.org bookworm-security main contrib

Comment out the Proxmox enterprise repository:

vi /etc/apt/sources.list.d/pve-enterprise.list

#deb https://enterprise.proxmox.com/debian/pve bookworm pve-enterprise

Comment out the Ceph enterprise repository:

vi /etc/apt/sources.list.d/ceph.list

#deb https://enterprise.proxmox.com/debian/ceph-quincy bookworm enterprise
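
With the enterprise entries disabled and the no-subscription repositories added, refreshing the package lists should now work without the 401 error:

# Refresh the package lists and apply pending updates
apt update
apt full-upgrade -y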

After the correction, it should look as follows in the GUI:


VIM on Debian

If you are used to Ubuntu, proceed as follows to set “vim.basic” as the default Vim variant on Debian:

# Install
apt install vim -y

# Set vim.basic as the default
update-alternatives --set vi /usr/bin/vim.basic

To paste with the right mouse button, use: :set mouse=v
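
To check which variant is now the default, you can query the alternatives system:

# Show the currently selected vi alternative
update-alternatives --display vi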


Storage

In my setup I use a SATA SSD for the Proxmox / Debian OS, an M.2 SSD for the VMs and two SATA disks for my file server VM.


LVM

Create VG

Let’s create an LVM-Thin volume group on the M.2 SSD; this can easily be done within the GUI:
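
The same can also be done from the shell; a rough sketch, assuming the M.2 SSD shows up as /dev/nvme0n1 (adjust the device name to your system):

# Create a physical volume and a volume group on the M.2 SSD
pvcreate /dev/nvme0n1
vgcreate ssd_lvm-thin_1 /dev/nvme0n1

# Create a thin pool inside the VG (leave some room for the pool metadata)
lvcreate -l 95%FREE --thinpool ssd_lvm-thin_1 ssd_lvm-thin_1

# Register the thin pool as a storage in Proxmox
pvesm add lvmthin ssd_lvm-thin_1 --vgname ssd_lvm-thin_1 --thinpool ssd_lvm-thin_1 --content rootdir,images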

Delete VG

To delete the VG proceed as follows:

Remove the VG from the listing:
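
From the shell, the equivalent would roughly look like this, assuming the storage, VG and thin pool are all named ssd_lvm-thin_1, the underlying disk is /dev/nvme0n1, and no VM disks are left on the pool:

# Remove the storage definition from Proxmox
pvesm remove ssd_lvm-thin_1

# Destroy the thin pool, the volume group and the physical volume
lvremove ssd_lvm-thin_1/ssd_lvm-thin_1
vgremove ssd_lvm-thin_1
pvremove /dev/nvme0n1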


Manage Storage from Shell

Storage Pool Settings

List storage pool settings: cat /etc/pve/storage.cfg

dir: local
        path /var/lib/vz
        content iso,backup,vztmpl

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

lvmthin: ssd_lvm-thin_1
        thinpool ssd_lvm-thin_1
        vgname ssd_lvm-thin_1
        content rootdir,images
        nodes pm10

The content option defines what sort of data can be stored in the pool, e.g. images, iso, …


Storage Pool Data

# List VM disks from storage pool
pvesm list ssd_lvm-thin_1

# Shell Output
Volid                        Format  Type             Size VMID
ssd_lvm-thin_1:vm-100-disk-0 raw     images    34359738368 100
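
A quick overview of all configured storage pools and their usage can be printed with:

# Show status and usage of all storage pools
pvesm status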

Disks

Move VM Disk to different Storage
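
Besides the GUI, a disk can also be moved from the shell; a sketch, assuming VM 100 and the target storage ssd_lvm-thin_1 (on older Proxmox versions the command is called qm move_disk):

# Move scsi0 of VM 100 to the ssd_lvm-thin_1 storage
# (without --delete 1 the old copy is kept as an "unused" disk)
qm disk move 100 scsi0 ssd_lvm-thin_1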


Passthrough Physical Disk to VM

Create VM

Select the node (in my case it’s “pm10”) and define an ID for the VM. The minimum value for the ID is 100.

Select the ISO file for the VM:


Find Disk IDs

Link: https://pve.proxmox.com/wiki/Passthrough_Physical_Disk_to_Virtual_Machine_(VM)

Open a shell to the Proxmox server.

# Install LSHW
apt install lshw

# List Devices
lshw -class disk -class storage

Shell Output:

*-disk:1
     description: ATA Disk
     product: ST4000NE001-2MA1
     physical id: 1
     bus info: scsi@1:0.0.0
     logical name: /dev/sdb
     version: EN01
     serial: WS21NSGH # Write down SN
     size: 3726GiB (4TB)
     configuration: ansiversion=5 logicalsectorsize=512 sectorsize=4096
*-disk:2
     description: ATA Disk
     product: ST4000NE001-2MA1
     physical id: 0.0.0
     bus info: scsi@3:0.0.0
     logical name: /dev/sdc
     version: EN01
     serial: WS21MGLA # Write down SN
     size: 3726GiB (4TB)
     configuration: ansiversion=5 logicalsectorsize=512 sectorsize=4096

Write down the serial numbers for the disks that you want to pass through to the VM, in my case they are:

disk:1 - serial: WS21NSGH
disk:2 - serial: WS21MGLA

List the disk identifier directory with: ls -la /dev/disk/by-id

This provides a static and unique identifier for each disk that does not change even if the physical location, e.g. the SATA port, changes.

...
lrwxrwxrwx 1 root root   9 Jul 15 14:22 ata-ST4000NE001-2MA101_WS21MGLA -> ../../sdc
lrwxrwxrwx 1 root root   9 Jul 15 14:22 ata-ST4000NE001-2MA101_WS21NSGH -> ../../sdb
...

Write down the device IDs, in my case they are:

disk:1 - ID: ata-ST4000NE001-2MA101_WS21NSGH
disk:2 - ID: ata-ST4000NE001-2MA101_WS21MGLA
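
As a cross-check, the model and serial of each block device can also be listed with lsblk:

# List model, serial and size per block device
lsblk -o NAME,MODEL,SERIAL,SIZE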


Add Disks to VM

VM ID: qm set 100

  • Use the correct VM ID that was chosen when you created the VM

Virtual Port Number: scsi1 & scsi2

  • These are the port numbers for the disks within the VM; scsi0 is used by the OS disk

# Add disks to VM
qm set 100 -scsi1 /dev/disk/by-id/ata-ST4000NE001-2MA101_WS21NSGH
qm set 100 -scsi2 /dev/disk/by-id/ata-ST4000NE001-2MA101_WS21MGLA

# Shell Output
update VM 100: -scsi1 /dev/disk/by-id/ata-ST4000NE001-2MA101_WS21NSGH
update VM 100: -scsi2 /dev/disk/by-id/ata-ST4000NE001-2MA101_WS21MGLA

Check the GUI for the new attached disks:
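
The new disks can also be verified from the shell:

# Show the VM configuration and filter for the SCSI entries
qm config 100 | grep scsi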


Add Serial Number to Disks

Create a backup of the original config file of the VM:

cp /etc/pve/qemu-server/100.conf /root/backups
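
Note that /root/backups has to exist as a directory beforehand, otherwise cp simply writes a regular file named backups:

# Create the backup directory if it does not exist yet
mkdir -p /root/backups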

Open the config file:

vi /etc/pve/qemu-server/100.conf

boot: order=scsi0;ide2;net0
cores: 2
cpu: x86-64-v2-AES
ide2: local:iso/ubuntu-22.04.2-live-server-amd64.iso,media=cdrom,size=1929660K
memory: 4096
meta: creation-qemu=8.0.2,ctime=1689432764
name: JKW-Fileserver
net0: virtio=DE:AF:96:AF:C1:55,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local-lvm:vm-100-disk-0,iothread=1,size=32G
scsi1: /dev/disk/by-id/ata-ST4000NE001-2MA101_WS21NSGH,size=3907018584K
scsi2: /dev/disk/by-id/ata-ST4000NE001-2MA101_WS21MGLA,size=3907018584K
scsihw: virtio-scsi-single
smbios1: uuid=13f9d496-2130-4225-a132-50ec9ab4fe9e
sockets: 1
vmgenid: 43fcef2d-3667-4529-a81a-55226235cb77

Add the serial number to the disks; afterwards it should look like this:

boot: order=scsi0;ide2;net0
cores: 2
cpu: x86-64-v2-AES
ide2: local:iso/ubuntu-22.04.2-live-server-amd64.iso,media=cdrom,size=1929660K
memory: 4096
meta: creation-qemu=8.0.2,ctime=1689432764
name: JKW-Fileserver
net0: virtio=DE:AF:96:AF:C1:55,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local-lvm:vm-100-disk-0,iothread=1,size=32G
scsi1: /dev/disk/by-id/ata-ST4000NE001-2MA101_WS21NSGH,size=3907018584K,serial=WS21NSGH # Add SN
scsi2: /dev/disk/by-id/ata-ST4000NE001-2MA101_WS21MGLA,size=3907018584K,serial=WS21MGLA # Add SN
scsihw: virtio-scsi-single
smbios1: uuid=13f9d496-2130-4225-a132-50ec9ab4fe9e
sockets: 1
vmgenid: 43fcef2d-3667-4529-a81a-55226235cb77
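
Instead of editing the config file by hand, setting the serial via qm set should also work; the whole drive definition has to be passed again:

# Re-attach the disks with a serial attribute
qm set 100 -scsi1 /dev/disk/by-id/ata-ST4000NE001-2MA101_WS21NSGH,serial=WS21NSGH
qm set 100 -scsi2 /dev/disk/by-id/ata-ST4000NE001-2MA101_WS21MGLA,serial=WS21MGLA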

Check the GUI again:


Start the VM, SSH into it and run the lshw -class disk -class storage command; you should be able to see the disk serial numbers:

*-scsi:1
     description: SCSI storage controller
     product: Virtio SCSI
     vendor: Red Hat, Inc.
     physical id: 2
     bus info: pci@0000:01:02.0
     version: 00
     width: 64 bits
     clock: 33MHz
     capabilities: scsi msix bus_master cap_list
     configuration: driver=virtio-pci latency=0
     resources: irq:10 ioport:e040(size=64) memory:fe801000-fe801fff memory:fd404000-fd407fff
*-disk
     description: SCSI Disk
     product: QEMU HARDDISK
     vendor: QEMU
     physical id: 0.0.1
     bus info: scsi@3:0.0.1
     logical name: /dev/sdb
     version: 2.5+
     serial: WS21NSGH # SN
     size: 3726GiB (4TB)
     capabilities: 5400rpm
     configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512

*-scsi:2
     description: SCSI storage controller
     product: Virtio SCSI
     vendor: Red Hat, Inc.
     physical id: 3
     bus info: pci@0000:01:03.0
     version: 00
     width: 64 bits
     clock: 33MHz
     capabilities: scsi msix bus_master cap_list
     configuration: driver=virtio-pci latency=0
     resources: irq:11 ioport:e080(size=64) memory:fe802000-fe802fff memory:fd408000-fd40bfff
  *-disk
     description: SCSI Disk
     product: QEMU HARDDISK
     vendor: QEMU
     physical id: 0.0.2
     bus info: scsi@4:0.0.2
     logical name: /dev/sdc
     version: 2.5+
     serial: WS21MGLA # SN
     size: 3726GiB (4TB)
     capabilities: 5400rpm
     configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512

GPU Passthrough

Hardware

I used the following hardware on my setup:

GPU: MSI Nvidia Geforce GT 730

  • 2GB DDR3, PCIe 2.0

Motherboard: MSI MAG B460m Mortar

  • B460 Intel Chipset

CPU: Intel Core i3-10105

  • 10th Gen, LGA1200

RAM: Corsair Vengeance LPX

  • DDR4 1x 8GB
  • 2400MHz

BIOS Settings

Set the onboard GPU as the primary graphics adapter

Go to:

Settings / Advanced / Integrated Graphics Configuration: Initiate Graphic Adapter [IGD]

  • IGD Is the integrated / onboard GPU
  • PEG Is the external / PCIe GPU

Enable Intel VT-D

Go to:

OC (Overclocking) / Other Setting / CPU Features / Intel VT-D Tech: Enable


Network Interface

Since I installed the GPU only after Proxmox was already set up, the GPU took over the first PCI bus slot and the network interface was pushed to the second PCI bus. This caused both the network interface and the virtual bridge to be in a down state after I booted the server.

It was necessary to rename enp1s0 to enp2s0 in the network interface configuration:
vi /etc/network/interfaces

auto lo
iface lo inet loopback

auto enp2s0 # Add auto enable
iface enp2s0 inet manual # Changed to enp2s0

auto vmbr0
iface vmbr0 inet static
        address 192.168.70.13/24
        gateway 192.168.70.1
        bridge-ports enp2s0 # Changed to enp2s0
        bridge-stp off
        bridge-fd 0

Restart the networking service: systemctl restart networking
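
Afterwards the state of the interface and the bridge can be checked with:

# Brief overview of all interfaces and their state
ip -br addr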


Grub

vi /etc/default/grub

# Comment out the following line
#GRUB_CMDLINE_LINUX_DEFAULT="quiet"

# Add the following line
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

It should look like this:

GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
#GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
GRUB_CMDLINE_LINUX=""

Update the GRUB configuration: update-grub
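
After the next reboot you can verify that the IOMMU is active (the exact wording of the messages varies with the kernel version):

# Look for DMAR/IOMMU initialization messages
dmesg | grep -e DMAR -e IOMMU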


Modules Configuration File

Add the following kernel modules to load them at boot time:

vi /etc/modules

vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
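
Once the server has been rebooted (this happens further below, after the VFIO configuration), you can confirm that the modules were loaded:

# Check that the VFIO modules are loaded
lsmod | grep vfio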

modprobe.d

Add the following configuration files under /etc/modprobe.d

# IOMMU interrupt remapping
echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/iommu_unsafe_interrupts.conf
echo "options kvm ignore_msrs=1" > /etc/modprobe.d/kvm.conf

This blocks the GPU drivers from being loaded during the system boot process:

# Blacklisting Drivers
echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nvidia" >> /etc/modprobe.d/blacklist.conf

lspci

Use the following command to list the devices connected to PCI buses:
lspci -v

Find the devices related to the GPU; usually there are at least two devices, the GPU itself and its audio device:

01:00.0 VGA compatible controller: NVIDIA Corporation GK208B [GeForce GT 730] (rev a1) (prog-if 00 [VGA controller])
        Subsystem: Micro-Star International Co., Ltd. [MSI] GK208B [GeForce GT 730]
        Flags: bus master, fast devsel, latency 0, IRQ 255
        Memory at bb000000 (32-bit, non-prefetchable) [size=16M]
        Memory at b0000000 (64-bit, prefetchable) [size=128M]
        Memory at b8000000 (64-bit, prefetchable) [size=32M]
        I/O ports at 4000 [size=128]
        Expansion ROM at bc000000 [disabled] [size=512K]
        Capabilities: [60] Power Management version 3
        Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
        Capabilities: [78] Express Legacy Endpoint, MSI 00
        Capabilities: [100] Virtual Channel
        Capabilities: [128] Power Budgeting <?>
        Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
        Kernel modules: nvidiafb, nouveau

01:00.1 Audio device: NVIDIA Corporation GK208 HDMI/DP Audio Controller (rev a1)
        Subsystem: Micro-Star International Co., Ltd. [MSI] GK208 HDMI/DP Audio Controller
        Flags: bus master, fast devsel, latency 0, IRQ 17
        Memory at bc080000 (32-bit, non-prefetchable) [size=16K]
        Capabilities: [60] Power Management version 3
        Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
        Capabilities: [78] Express Endpoint, MSI 00
        Kernel driver in use: snd_hda_intel
        Kernel modules: snd_hda_intel

Write down the PCI bus address for the GPU and Audio device. In my case they are as follows:
GPU: 01:00.0
Audio device: 01:00.1

  • 01 is the bus number
  • 00 is the device number on that bus
  • .0 is the function number

Use the bus and device number to output the vendor and device codes with the following command: lspci -n -s 01:00

  • -n Show PCI vendor and device codes
  • -s Show only devices in the specified slot
# Output your GPU Vendor ID
lspci -n -s 01:00

# Shell Output
01:00.0 0300: 10de:1287 (rev a1)
01:00.1 0403: 10de:0e0f (rev a1)

  • 01:00.0 Is the device's bus address
  • 0300 Is the class of the device (VGA controller)
  • 10de Is the vendor (Nvidia)
  • 1287 Is the particular model of the GPU
  • rev a1 Is the revision number

Write down the GPU vendor and model IDs; in my case they are:
GPU: 10de:1287
Audio device: 10de:0e0f
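
Optionally, after the reboot at the end of this section, you can check which IOMMU group the GPU ended up in; all devices in the same group have to be passed through together:

# List all PCI devices grouped by IOMMU group
find /sys/kernel/iommu_groups/ -type l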


VFIO

Virtual Function I/O (VFIO) is a kernel framework that provides direct device access from userspace, often used for device passthrough in virtualization scenarios, where a physical device like a GPU is passed through directly to a virtual machine.

# Add GPU vendor and model to the VFIO
echo "options vfio-pci ids=10de:1287,10de:0e0f disable_vga=1"> /etc/modprobe.d/vfio.conf

# Update initramfs
update-initramfs -u

# Reboot server
reboot
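
After the reboot, check that the vfio-pci driver has claimed both GPU functions; they should show “Kernel driver in use: vfio-pci”:

# Verify the driver binding for the GPU and its audio function
lspci -nnk -s 01:00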

Create VM

Use the following system settings:

Graphics card: Default
Machine: q35
BIOS: OVMF (UEFI)

In the network settings it was necessary to select an emulated network card, in my case “Realtek RTL8139”, instead of “VirtIO (paravirtualized)”, which I use for my Linux VMs.

Add the GPU to the VM

Note: Don’t use the passed-through GPU as the primary GPU like I did in the screenshot; otherwise you’ll get the following error when you try to access the console of the VM:
TASK ERROR: Failed to run vncproxy

The error actually makes sense - which took me a while to understand - since it’s a passed-through GPU that Proxmox can’t access. Connect a display to the GPU or use RDP to access the VM when the “Primary GPU” setting is enabled.
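
For reference, the GPU can also be attached from the shell; a sketch, assuming the new VM got the ID 101 (adjust the ID, and leave the “Primary GPU” / x-vga option off for the reason described above):

# Pass through both GPU functions (01:00.0 and 01:00.1) as a PCIe device
qm set 101 -hostpci0 01:00,pcie=1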

