- Install and run Docker Desktop on Mac: double-click Docker.dmg to open the installer, then drag the Docker icon to the Applications folder. Double-click Docker.app in the Applications folder to start Docker.
- The docs present instructions for installing Docker on macOS. GPU support isn't mentioned explicitly, but nvidia-docker does not appear to be supported on macOS.
- On a Linux server you can install regular Docker plus nvidia-docker, and your Docker containers then get GPU access with no noticeable performance hit. If you're on a Mac you can install Docker for Mac, which is pretty solid in my experience.
This week at DockerCon, Docker made several announcements, but one in particular caused massive confusion as users thought that “Docker” was becoming “Moby.” Well… OK, but which Docker? The Register probably put it best, when they said, “Docker (the company) decided to differentiate Docker (the commercial software products Docker CE and Docker EE) from Docker (the open source project).” Tack on a second project about building core operating systems, and there’s a lot to unpack.
nvidia-docker is an open source project hosted on GitHub. It provides driver-agnostic CUDA images and a docker command-line wrapper that mounts the user-mode components of the driver and the GPUs (character devices) into the container at launch. With this in place, the NVIDIA Docker plugin enables deployment of GPU-accelerated applications. nvidia-docker 2.0 is implemented differently from the 1.0 design just described: it hooks directly into Docker's container runtime (containerd), the nvidia-docker wrapper command is gone, and containers are started with `docker run --runtime=nvidia` instead.
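As a quick sketch of the difference (assuming an NVIDIA GPU and driver are present on the host; the `nvidia/cuda` image is NVIDIA's official CUDA base image):

```shell
# nvidia-docker 1.0: a wrapper command around docker
nvidia-docker run --rm nvidia/cuda nvidia-smi

# nvidia-docker 2.0: plain docker, selecting the nvidia runtime
docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi
```

Both invocations print the same `nvidia-smi` table; only the entry point changes.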
Let’s start with Moby.
What is Moby?
Docker, being the foundation of many people's understanding of containers, unsurprisingly isn't a single monolithic application. Instead, it's made up of components such as runc, containerd, InfraKit, and so on. The community works on those components (along with Docker, of course) and when it's time for a release, Docker packages them all up and out they go. With all of those pieces, as you might imagine, it's not a simple task.
And what happens if you want your own custom version of Docker? After all, Docker is built on the philosophy of “batteries included but swappable”. How easy is it to swap something out?
In his blog post introducing the Moby Project, Solomon Hykes explained that the idea is to simplify the process of combining components into something usable. “We needed our teams to collaborate not only on components, but also on assemblies of components, borrowing an idea from the car industry where assemblies of components are reused to build completely different cars.”
Hykes explained that from now on, Docker releases would be built using Moby and its components. At the moment there are 80+ components that can be combined into assemblies. He further explained that:
“Moby is comprised of:
- A library of containerized backend components (e.g., a low-level builder, logging facility, volume management, networking, image management, containerd, SwarmKit, …)
- A framework for assembling the components into a standalone container platform, and tooling to build, test and deploy artifacts for these assemblies.
- A reference assembly, called Moby Origin, which is the open base for the Docker container platform, as well as examples of container systems using various components from the Moby library or from other projects.”
Who needs to know about Moby?
The first group that needs to know about Moby is Docker developers, as in the people building the actual Docker software, and not people building applications using Docker containers, or even people building Docker containers. (Here’s hoping that eventually this nomenclature gets cleared up.) Docker developers should just continue on as usual, and Docker pull requests will be routed to the Moby project.
So everyone else is off the hook, right?
Well, um, no.
If all you do is pull together containers from pre-existing components and software you write yourself, then you’re good; you don’t need to worry about Moby. Unless, that is, you aren’t happy with your available Linux distributions.
Enter LinuxKit.
What is LinuxKit?
While many think that Docker invented the container, in actuality Linux containers had been around for some time, and Docker containers are based on them. Which is really convenient, if you're using Linux. If, on the other hand, you are using a system that doesn't include Linux, such as a Mac, a Windows PC, or that Raspberry Pi you want to turn into an automatic goat feeder, you've got a problem.
Docker requires Linux containers. Which is a problem if you have no Linux.
Enter LinuxKit.
The idea behind LinuxKit is that you start with a minimal Linux kernel — the base distro is only 35MB — and add literally only what you need. Once you have that, you can build your application on it, and run it wherever you need to. Stephen Foskett tweeted a picture of an example from the announcement:
More about #LinuxKit #DockerCon pic.twitter.com/TfRJ47yBdB
— Stephen Foskett (@SFoskett) April 18, 2017
The end result is that you can build containers that run on desktops, mainframes, bare metal, IoT, and VMs.
The project will be managed by the Linux Foundation, which is only fitting.
So what about Alpine, the minimal Linux that’s at the heart of Docker? Docker’s security director, Nathan McCauley, said that “LinuxKit’s roots are in Alpine.” The company will continue to use it for Docker.
Today we launch LinuxKit — a Linux subsystem focussed on security. pic.twitter.com/Q0YJsX67ZT
— Nathan McCauley (@nathanmccauley) April 18, 2017
So what does this have to do with Moby?
What LinuxKit has to do with Moby
If you’re salivating at the idea of building your own Linux distribution, take a deep breath. LinuxKit is an assembly within Moby.
So if you want to use LinuxKit, you need to download and install Moby, then use it to build your LinuxKit pieces.
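To give a feel for what an assembly looks like, here is a minimal sketch of a LinuxKit-style YAML definition that the Moby tooling turns into a bootable image. The component image tags below are placeholders, not pinned to any real release:

```yaml
# Illustrative LinuxKit assembly (image tags are placeholders)
kernel:
  image: linuxkit/kernel:4.9.x
  cmdline: "console=tty0 console=ttyS0"
init:
  - linuxkit/init:latest
  - linuxkit/runc:latest
  - linuxkit/containerd:latest
onboot:
  - name: dhcpcd
    image: linuxkit/dhcpcd:latest
    command: ["/sbin/dhcpcd", "--nobackground"]
services:
  - name: sshd
    image: linuxkit/sshd:latest
```

Each entry is itself a container image, which is the "assemblies of components" idea in practice: swap a service line and you have a different OS.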
So there you have it. You now have the ability to build your own Linux system, and your own containerization system. But it’s definitely not for the faint of heart.
Resources
What is NVIDIA-Docker?
NVIDIA designed NVIDIA-Docker in 2016 to enable portability in Docker images that leverage NVIDIA GPUs. It allowed driver-agnostic CUDA images and provided a Docker command-line wrapper that mounted the user-mode components of the driver and the GPU device files into the container at launch.
NVIDIA-Docker enables deployment of GPU-accelerated applications across any Linux server. Using Docker, we can develop and prototype GPU applications on a workstation, then ship and run those applications anywhere that supports GPU containers.
The requirements
For reference, the latest version of NVIDIA-Docker at the time of writing is 2.0, so this tutorial covers only that version's requirements. To run NVIDIA-Docker 2.0, your operating system must have:
- GNU/Linux x86_64 with kernel version > 3.10
- Docker >= 1.12
- NVIDIA GPU with Architecture > Fermi (2.1)
- NVIDIA driver >= 361.93
According to NVIDIA's documentation, various Linux distributions are supported:
- Ubuntu 14.04/16.04/18.04
- Debian Jessie/Stretch
- CentOS 7
- RHEL 7.4/7.5
- Amazon Linux 1/2
For demonstration purposes, this tutorial uses Ubuntu 16.04 LTS. The same steps apply to other Debian-family distributions.
Install NVIDIA-Docker on Ubuntu
First of all, make sure your graphics card is working properly on the host operating system. The NVIDIA CUDA driver is required to run an NVIDIA graphics card. If you haven't installed it yet, follow these two steps.
Download the CUDA repository package and install it
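For example, on Ubuntu 16.04 this can look like the following. The version in the URL is only an example; check NVIDIA's CUDA downloads page for the current package:

```shell
# Fetch the CUDA repository package (example version for Ubuntu 16.04 x86_64)
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_10.0.130-1_amd64.deb

# Install the repository package and NVIDIA's signing key
sudo dpkg -i cuda-repo-ubuntu1604_10.0.130-1_amd64.deb
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/7fa2af80.pub
```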
Then update the system apt repository and install the CUDA driver
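Something like this, assuming the `cuda-drivers` meta-package from NVIDIA's repository (installing `cuda` instead would also pull in the full toolkit):

```shell
sudo apt-get update
sudo apt-get install -y cuda-drivers
```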
Once you have installed the CUDA driver, it is time to set up NVIDIA-Docker. Let's add a new repository for the nvidia-docker2 package.
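These are the commands NVIDIA documents for adding the nvidia-docker repository on Debian-family systems:

```shell
# Add NVIDIA's package signing key
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -

# Add the repository matching this distribution (e.g. ubuntu16.04)
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-docker.list
```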
Then update the apt repository and install nvidia-docker2. Note that your /etc/docker/daemon.json file might be modified to load the new plugin.
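Concretely:

```shell
sudo apt-get update
sudo apt-get install -y nvidia-docker2
```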
Restart the Docker daemon to load the nvidia-docker2 plugin.
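On a systemd-based system this is simply:

```shell
sudo systemctl restart docker
```

(NVIDIA's docs also suggest `sudo pkill -SIGHUP dockerd` as a lighter-weight alternative that reloads the daemon configuration.)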
That's all for the installation. Simple, right? Now we can verify whether our GPU works inside a Docker container.
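The standard smoke test looks like this:

```shell
docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi
```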
The command above runs a Docker container from the nvidia/cuda image and executes the nvidia-smi command to show basic GPU statistics.
GPU Benchmark in Docker container
Below is a simple script to test TensorFlow with CPU or GPU acceleration. Let's create a new file called benchmark.py with the following content.
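The original post's script isn't reproduced here, so this is a minimal sketch written against the TensorFlow 1.x API: it times a large matrix multiplication, and TensorFlow places the op on the GPU automatically when one is visible, otherwise on the CPU.

```python
import time

import tensorflow as tf  # assumes the TensorFlow 1.x API

# Multiply two large random matrices and reduce to a scalar.
shape = (4000, 4000)
a = tf.random_normal(shape)
b = tf.random_normal(shape)
total = tf.reduce_sum(tf.matmul(a, b))

# log_device_placement prints which device (CPU or GPU) ran each op.
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    start = time.time()
    sess.run(total)
    print("matmul of %s took %.3f seconds" % (str(shape), time.time() - start))
```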
Run TensorFlow test with CPU
Run TensorFlow test with GPU
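Assuming benchmark.py is in the current directory, the CPU run can use the stock TensorFlow image (the tag is an example), which sees no GPU:

```shell
docker run --rm -v "$PWD":/work -w /work \
  tensorflow/tensorflow:1.12.0 python benchmark.py
```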
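The GPU run needs the nvidia runtime plus a GPU-enabled image (again, the tag is an example):

```shell
docker run --runtime=nvidia --rm -v "$PWD":/work -w /work \
  tensorflow/tensorflow:1.12.0-gpu python benchmark.py
```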
You should see a clear performance difference between the TensorFlow test using the CPU and the one using the GPU.
What’s next?
If you want to build your own application that uses GPU acceleration, you can build a Docker image for it and launch the application from a container. NVIDIA also offers Docker images that yours can be based on. You can get them from Docker Hub, and their Dockerfiles are available on GitHub.
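For instance, a minimal Dockerfile basing an application on one of NVIDIA's CUDA images might look like this. The image tag, paths, and entry point are all illustrative:

```dockerfile
# Tag is an example; pick a current one from hub.docker.com/r/nvidia/cuda
FROM nvidia/cuda:9.0-cudnn7-runtime-ubuntu16.04

COPY ./myapp /opt/myapp
WORKDIR /opt/myapp

# The container gets GPU access when launched with --runtime=nvidia
CMD ["./run.sh"]
```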
Since NVIDIA-Docker is an open source project, it is open for discussion: you can report issues on its GitHub repo. The Frequently Asked Questions page also contains helpful information you might want to look at before playing further with NVIDIA-Docker.