Nvidia GPU Passthrough Unprivileged LXC Containers

Getting a functional, stable Proxmox installation with working GPU passthru for LXC containers is a critical step in getting the most out of your home server. These steps walk you through exactly that.

Enter the Proxmox Shell or SSH

On the Proxmox host node, open a shell (or connect over SSH) and run the following commands.

apt update && apt upgrade -y && apt install pve-headers-$(uname -r) build-essential software-properties-common make nvtop htop -y
update-initramfs -u

Download the Debian LXC template to your machine

Create the new unprivileged LXC container, enter the container shell, and update it. A CLI sketch of these steps is shown below.
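If you prefer the CLI over the web UI, a minimal sketch of these steps from the host shell looks like this. The template version, storage names, container ID (105, to match the config file edited later) and resource sizes are assumptions; adjust them for your setup.

pveam update
pveam available --section system | grep debian
pveam download local debian-12-standard_12.7-1_amd64.tar.zst

pct create 105 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst --hostname gpu-lxc --unprivileged 1 --cores 4 --memory 4096 --rootfs local-lvm:16 --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 105
pct enter 105
apt update && apt upgrade -y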

Find the latest driver for your cards

Download the appropriate .run installer from the Nvidia driver site, then fetch it on the host:

wget HTTP://URLHERE

Make the driver file executable

chmod +x xxxxx.run

Run the installer to install your drivers

./xxxxx.run --dkms

Click yes through the prompts to install everything.
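Once the installer finishes, it is worth confirming the driver actually loaded on the host before moving on; nvidia-smi should list your GPU(s).

nvidia-smi
lsmod | grep nvidia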

Make note of the device groups

We will grab the cgroups here by running

ls -al /dev/nvidia*

It will output something similar to this for a single GPU (nvidia0), along with the shared entries for nvidiactl, nvidia-uvm, nvidia-uvm-tools, nvidia-cap1 and nvidia-cap2. Note the 195, 509 and 234 listed here. Those IDs WILL be different for you; note what yours are and use them in the next step.

root@goldielox:~# ls -al /dev/nvidia*
crw-rw-rw- 1 root root 195, 0 Nov 13 01:23 /dev/nvidia0
crw-rw-rw- 1 root root 195, 255 Nov 13 01:23 /dev/nvidiactl
crw-rw-rw- 1 root root 509, 0 Nov 13 01:23 /dev/nvidia-uvm
crw-rw-rw- 1 root root 509, 1 Nov 13 01:23 /dev/nvidia-uvm-tools

/dev/nvidia-caps:
total 0
drwxr-xr-x 2 root root 80 Nov 13 01:23 .
drwxr-xr-x 22 root root 6360 Nov 13 01:23 ..
cr-------- 1 root root 234, 1 Nov 13 01:23 nvidia-cap1
cr--r--r-- 1 root root 234, 2 Nov 13 01:23 nvidia-cap2

Add the groups to the container config file

Now we will add these groups to the LXC container for passthru of the hardware.

nano /etc/pve/lxc/105.conf

Append these values, with the IDs you noted earlier, to your file like so. Note the placement of the 195, 234 and 509. The block below covers a SINGLE GPU; if you have multiple GPUs, add one extra mount entry per card with an incrementing device name (nvidia1, nvidia2, etc.), for example:

lxc.mount.entry: /dev/nvidia1 dev/nvidia1 none bind,optional,create=file

lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 234:* rwm
lxc.cgroup2.devices.allow: c 509:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-modeset dev/nvidia-modeset none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-caps/nvidia-cap1 dev/nvidia-caps/nvidia-cap1 none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-caps/nvidia-cap2 dev/nvidia-caps/nvidia-cap2 none bind,optional,create=file

Start the container

pct start 105
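Before installing anything inside the container, you can check from the host that the device nodes actually show up inside it:

pct exec 105 -- sh -c 'ls -al /dev/nvidia*'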


Push the .run installer into the container

pct push 105 NVIDIA-Linux-x86_64-550.107.02.run /root/NVIDIA-Linux-x86_64-550.107.02.run

Install the drivers in the LXC Container

Then enter the console/terminal for the LXC container and run the pushed .run installer with the --no-kernel-modules flag:

./NVIDIA-Linux-x86_64-550.107.02.run --no-kernel-modules
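If the passthrough and driver install both worked, nvidia-smi run inside the container should now report the same GPU(s) the host sees:

nvidia-smi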

Install the NVIDIA Container Toolkit in the Container

apt install gpg curl
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
apt update
apt install nvidia-container-toolkit
Enable no-cgroups

Edit config.toml, uncomment the no-cgroups line, and change it from false to true.

nano /etc/nvidia-container-runtime/config.toml
#no-cgroups = false
to
no-cgroups = true
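If you prefer a one-liner over editing the file by hand, the same change can be made with sed, assuming the line is still in its default commented-out form:

sed -i 's/^#no-cgroups = false/no-cgroups = true/' /etc/nvidia-container-runtime/config.toml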

Save and reboot the container now. This should get your GPU passed into the LXC. Next we will get Docker set up with Dockge and GPU passthru.

Install Docker for Debian Bookworm

apt update
apt install ca-certificates
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /etc/apt/trusted.gpg.d/docker.gpg
chmod a+r /etc/apt/trusted.gpg.d/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/trusted.gpg.d/docker.gpg] https://download.docker.com/linux/debian bookworm stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
apt update
apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
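The docker-ce package starts the daemon on install, so a quick sanity check is the standard hello-world image:

docker run --rm hello-world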

Enable the NVIDIA Container Toolkit

Now we need to configure Docker to use the NVIDIA Container Toolkit runtime.

nvidia-ctk runtime configure --runtime=docker
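nvidia-ctk writes the runtime entry into /etc/docker/daemon.json, so restart Docker for it to take effect, then test GPU access from a container. The CUDA image tag below is only an example; any image that ships nvidia-smi will do.

systemctl restart docker
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi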

Install Dockge

Install the Dockge Docker container manager. Its data directory in this setup is /opt/dockge, with stacks stored under /opt/stacks.

mkdir -p /opt/stacks /opt/dockge
cd /opt/dockge
curl https://raw.githubusercontent.com/louislam/dockge/master/compose.yaml --output compose.yaml
docker compose up -d
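Dockge should now be reachable on port 5001 and will manage compose stacks under /opt/stacks. To hand the GPU to a service in one of those stacks, the compose file needs a device reservation. A minimal sketch, where the stack path, service name and image are placeholders:

# /opt/stacks/example/compose.yaml
services:
  app:
    image: nvidia/cuda:12.4.1-base-ubuntu22.04
    command: nvidia-smi
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]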