Accelerating Containers: Utilizing NVIDIA GPUs in Containers
If you're running a Docker Host as a virtual machine inside Hyper-V, enabling GPU passthrough is essential for leveraging hardware acceleration. This guide walks you through setting up NVIDIA drivers, configuring GPU passthrough, and installing the NVIDIA Container Toolkit for use with Docker.
🔄 GPU Passthrough in Hyper-V
⚠️ Note for Older Windows Versions
If you're using Windows Server 2019 or an older version of Windows 10 Pro, you'll need to set up Discrete Device Assignment (DDA) to pass through a GPU to your VM. If you're on a newer version of Windows or Windows Server, you can use GPU partitioning to allocate a portion of the GPU, or the entire GPU, to a VM.
For detailed instructions on enabling GPU passthrough via DDA, check out: GPU Passthrough via Discrete Device Assignment.
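Whichever passthrough method you use, it's worth confirming the GPU is actually visible inside the Linux VM before installing any drivers. A minimal check from the guest (assuming the pciutils package, which provides lspci, is installed):
# Confirm the passed-through GPU shows up on the guest's PCI bus
lspci | grep -i nvidia
# If lspci is missing, install it first (Ubuntu/Debian)
sudo apt-get install -y pciutils
If no NVIDIA device is listed, revisit the DDA or GPU partitioning configuration before continuing.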
🎯 Installing NVIDIA Drivers
Depending on your Linux distribution and setup preference, choose one of the following methods:
✅ 1. Install via ubuntu-drivers
(Recommended for Ubuntu)
Ubuntu provides a built-in tool for easy NVIDIA driver installation:
sudo ubuntu-drivers install
For full instructions, refer to Ubuntu's official guide.
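If you'd like to see what the tool recommends before committing, or pin a specific driver branch, ubuntu-drivers can do that as well. The 550 branch below is just an example; use whichever branch the tool suggests for your card:
# List the drivers Ubuntu recommends for the detected hardware
ubuntu-drivers devices
# Install a specific driver branch instead of the default recommendation
sudo ubuntu-drivers install nvidia:550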
⚙️ 2. Install Manually via apt-get
(Ubuntu)
- Download the .deb package from NVIDIA's driver page. Make sure to select "Show all operating systems" to find Ubuntu drivers.
- Install the driver using apt-get:
sudo apt-get install nvidia-driver-550
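After the package finishes installing, reboot and confirm the driver loaded. This is a minimal sanity check; the version reported will match whichever driver you installed:
# Reboot so the new kernel module is loaded
sudo reboot
# After the reboot, verify the driver is active
nvidia-smi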
🌎 3. Install Manually (Works on Any Linux Distro)
If you're using non-Ubuntu distros or prefer a generic installation, follow these steps:
- Download the Linux version for your CPU type from NVIDIA's website.
- Grant execution permissions:
sudo chmod +x <filename>
- Run the installer:
sudo ./NVIDIA-Linux-x86_64-<version>.run
- If prompted to disable Nouveau, follow the instructions for better compatibility.
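For reference, here is a sketch of the usual way to blacklist Nouveau before running the installer. The filename is a placeholder for whatever you downloaded (x86_64 shown; adjust for your architecture), and update-initramfs is the Ubuntu/Debian tool, so use your distro's equivalent (e.g. dracut) elsewhere:
# Blacklist the open-source Nouveau driver so it doesn't conflict with NVIDIA's installer
cat <<'EOF' | sudo tee /etc/modprobe.d/blacklist-nouveau.conf
blacklist nouveau
options nouveau modeset=0
EOF
# Rebuild the initramfs and reboot so Nouveau is no longer loaded
sudo update-initramfs -u
sudo reboot
# After the reboot, run the installer you downloaded
sudo chmod +x NVIDIA-Linux-x86_64-<version>.run
sudo ./NVIDIA-Linux-x86_64-<version>.run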
📦 Installing NVIDIA Container Toolkit for Docker
To enable GPU acceleration inside Docker containers, install the NVIDIA Container Toolkit. Follow the official installation guide; a condensed sketch of the typical steps on an apt-based system is shown below.
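The commands below are condensed from NVIDIA's guide for apt-based systems at the time of writing and may change, so treat them as a sketch and cross-check against the official documentation:
# Add NVIDIA's package repository and signing key
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
  sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
# Install the toolkit
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
# Configure Docker to use the NVIDIA runtime, then restart the daemon
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker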
Once the toolkit is installed and Docker has been restarted, test the setup with:
sudo docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
🖥️ Expected Output:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 535.86.10    Driver Version: 535.86.10    CUDA Version: 12.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            On   | 00000000:00:1E.0 Off |                    0 |
| N/A   34C    P8     9W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
If you see a similar output, congratulations! 🎉 Your GPU is now successfully set up and ready for CUDA, AI training, or gaming workloads.
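As a quick check beyond nvidia-smi, you can run an actual CUDA-enabled image. The tag below is just an example and may not be the newest available; any CUDA image will do:
# Run a CUDA base image and confirm the GPU is visible from inside the container
sudo docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi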
Final Thoughts
Now you have GPU acceleration inside your Docker containers!
In a future post, I'll go over how we leveraged this technology to build our Spectator system, which lets spectating guests peek into the virtual world our pilots are experiencing!
Happy computing! 🎮🔬