Reprinted from gregbouwens.com with the permission of the author
Ok, I know it’s not a new thing to be able to run complicated ML / AI software such as Tensorflow or Deepstack on Windows and make use of your Nvidia GPU—but what if you want to run a Docker container inside of WSL and have GPU loveliness available to you there? YES you can do it, and here are the steps to get it working.
TL;DR
1. DO NOT install Docker Desktop! We’re going to install Docker ourselves inside of Ubuntu.
2. DO install Windows 10 Insider Build or Windows 11 Beta.
3. DO install WSL 2 w/Ubuntu 20.04 or equivalent.
4. DO install Nvidia CUDA package (NOT Cuda Toolkit).
5. DO install Docker manually inside of WSL2/Ubuntu.
6. DO install Nvidia Container Toolkit inside of WSL2/Ubuntu.
7. DO run N-body simulation CUDA samples, Jupyter with Tensorflow.
8. DO let your imagination run wild!
Let’s start at the end—when you’re finished with what I’m about to teach you, you’ll be able to run this Nvidia CUDA test, on a container using your GPU, from inside of your WSL Linux distro (mine is Ubuntu 20.04). First up is the CPU-only variation:
greg@gregbo:~$ docker run --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -cpu -benchmark
> 1 Devices used for simulation
> Simulation with CPU
4096 bodies, total time for 10 iterations: 2779.572 ms
= 0.060 billion interactions per second
= 1.207 single-precision GFLOP/s at 20 flops per interaction
And here is the same test, but using the GPU this time:
greg@gregbo:~$ docker run --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
GPU Device 0: "Ampere" with compute capability 8.6
> Compute 8.6 CUDA device: [NVIDIA GeForce RTX 3080]
69632 bodies, total time for 10 iterations: 57.380 ms
= 845.003 billion interactions per second
= 16900.066 single-precision GFLOP/s at 20 flops per interaction
HUGE difference, huh? How do you do that? Here’s how!
1. Install or upgrade to Windows 10 Preview, or Win 11 Beta
You cannot make use of the GPU in WSL 2 unless you’re running an Insider build of Windows 10, or a beta build of Windows 11. I know, it’s a hassle, but believe me, it’s worth it, and Win 11 is scheduled for public release in a few weeks so it’s a small price to pay. Here’s my winver: 22000.194
2. Install Nvidia drivers for CUDA
Download the software directly from Nvidia using this link – all you have to do is sign up for the Nvidia Developer Program and you’re set.
I have an Nvidia GeForce RTX 3080 and my download package was 510.06_gameready_win11_win10-dch_64bit_international.exe. IMPORTANT: This replaces your existing graphics adapter software.
Reboot now, and then carry on…
3. Install WSL 2 and your fave Linux distro
You know what to do here – I went ahead and set up Windows Terminal, and mostly use Ubuntu 20.04 but there are other options. The main thing is when you run this command, you get a “2” in the VERSION column!
C:\Users\greg>wsl.exe --list -v
NAME STATE VERSION
* Ubuntu-20.04 Running 2
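If you see a "1" in the VERSION column instead, convert the distro first; the name must match what wsl.exe --list shows:
C:\Users\greg>wsl.exe --set-version Ubuntu-20.04 2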
4. Set up CUDA
Nvidia recommends that you use the Linux package manager to install CUDA (not CUDA Toolkit) under WSL 2. This is because CUDA Toolkit comes packaged with Nvidia’s Linux GPU driver which must not be installed under WSL 2. So follow these directions carefully.
Run the following commands (one at a time, of course):
wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-wsl-ubuntu.pin
sudo mv cuda-wsl-ubuntu.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/11.4.0/local_installers/cuda-repo-wsl-ubuntu-11-4-local_11.4.0-1_amd64.deb
sudo dpkg -i cuda-repo-wsl-ubuntu-11-4-local_11.4.0-1_amd64.deb
sudo apt-key add /var/cuda-repo-wsl-ubuntu-11-4-local/7fa2af80.pub
sudo apt-get update
sudo apt-get -y install cuda
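Once that finishes, a quick sanity check is to ask the CUDA compiler for its version (nvcc lives under /usr/local/cuda, so use the full path if the bare command isn't on your PATH):
/usr/local/cuda/bin/nvcc --version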
5. Running CUDA applications
At this point you can run CUDA apps exactly as you would under any other installation of Linux!
For example, you can build and run the BlackScholes sample application:
cd /usr/local/cuda-11.4/samples/4_Finance/BlackScholes
then:
sudo make BlackScholes
then:
./BlackScholes
[./BlackScholes] - Starting...
GPU Device 0: "Ampere" with compute capability 8.6
Initializing data...
...allocating CPU memory for options.
...allocating GPU memory for options.
...generating input data in CPU mem.
...copying input data to GPU mem.
Data init done.
Executing Black-Scholes GPU kernel (512 iterations)...
Options count : 8000000
BlackScholesGPU() time : 0.125945 msec
Effective memory bandwidth: 635.196316 GB/s
Gigaoptions per second : 63.519632
BlackScholes, Throughput = 63.5196 GOptions/s, Time = 0.00013 s, Size = 8000000 options, NumDevsUsed = 1, Workgroup = 128
Reading back GPU results...
Checking the results...
...running CPU calculations.
Comparing the results...
L1 norm: 1.741792E-07
Max absolute error: 1.192093E-05
Shutting down...
...releasing GPU memory.
...releasing CPU memory.
Shutdown done.
[BlackScholes] - Test Summary
NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
Test passed
6. Install Docker
In the bash shell, use this Docker installation script to install Docker:
curl https://get.docker.com | sh
Then, make sure Docker is alive:
docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
7. Nvidia Container Toolkit
Instructions here are provided for Ubuntu (run each command separately):
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
curl -s -L https://nvidia.github.io/libnvidia-container/experimental/$distribution/libnvidia-container-experimental.list | sudo tee /etc/apt/sources.list.d/libnvidia-container-experimental.list
sudo apt-get update
sudo apt-get install -y nvidia-docker2
Then, in another WSL 2 window, stop and restart the docker daemon like this:
sudo service docker stop && sudo service docker start
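To confirm Docker picked up the NVIDIA runtime, you can grep the daemon info; exact wording varies by Docker version, but nvidia should appear in the Runtimes line:
docker info | grep -i runtime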
8. Use your GPU with everything!
Now you should be able to run the N-body simulation CUDA sample that I showed you at the beginning of this post:
docker run --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
Or, you can run a Jupyter notebook example from Tensorflow:
greg@gregbo:~$ docker run -it --gpus all -p 8888:8888 tensorflow/tensorflow:latest-gpu-py3-jupyter
________ _______________
___ __/__________________________________ ____/__ /________ __
__ / _ _ \_ __ \_ ___/ __ \_ ___/_ /_ __ /_ __ \_ | /| / /
_ / / __/ / / /(__ )/ /_/ / / _ __/ _ / / /_/ /_ |/ |/ /
/_/ \___//_/ /_//____/ \____//_/ /_/ /_/ \____/____/|__/
WARNING: You are running this container as root, which can cause new files in
mounted volumes to be created as the root user on your host machine.
To avoid this, run the container by specifying your user's userid:
$ docker run -u $(id -u):$(id -g) args...
[I 01:08:54.379 NotebookApp] Writing notebook server cookie secret to /root/.local/share/jupyter/runtime/notebook_cookie_secret
jupyter_http_over_ws extension initialized. Listening on /http_over_websocket
[I 01:08:54.495 NotebookApp] Serving notebooks from local directory: /tf
[I 01:08:54.495 NotebookApp] The Jupyter Notebook is running at:
[I 01:08:54.495 NotebookApp] http://ad45281857e1:8888/?token=0f5921a7fc66ea3d244bbc962dfe9256da396929d013f940
[I 01:08:54.495 NotebookApp] or http://127.0.0.1:8888/?token=0f5921a7fc66ea3d244bbc962dfe9256da396929d013f940
[I 01:08:54.495 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 01:08:54.498 NotebookApp]
To access the notebook, open this file in a browser:
file:///root/.local/share/jupyter/runtime/nbserver-1-open.html
Or copy and paste one of these URLs:
http://ad45281857e1:8888/?token=0f5921a7fc66ea3d244bbc962dfe9256da396929d013f940
or http://127.0.0.1:8888/?token=0f5921a7fc66ea3d244bbc962dfe9256da396929d013f940
And when you open the link in your browser, voila! Jupyter and Tensorflow!
The end
I really enjoyed figuring this stuff out and hopefully you find it useful.
Introduction
This article describes how I built the environment named in the title.
I cannot say whether it is stable, because the Windows 10 build, Docker Desktop, and Docker Compose versions used here are all previews.
- Apologies if anything here is off; this is my first Qiita post.
- I have also only been studying Docker for about a month, so I would appreciate any corrections.
I managed to build this environment by gathering information both on and off Qiita, so the procedure is written up below.
*The contents are as of January 10, 2021.*
Please note that Docker Desktop, WSL2, and the NVIDIA GPU stack are all updated quickly.
*Added on January 20, 2021:* Docker Compose 1.28.0 has been released. The docker-compose.yml in this article works as-is with it. (Github Release Page)
- If a similar article already exists and I simply failed to find it, please let me know and I will take this one down immediately.
This environment could not have been built without Qiita; I hope this post is helpful to someone in turn.
Environment
OS: Windows 10 Pro Insider Preview Build 21286.1000
CPU: Ryzen 5900X
GPU: GeForce RTX 3070
WSL2: Ubuntu 20.04 LTS
Docker Desktop: Technical Preview
Docker: version 20.10.0
Docker-compose: 1.28.0
Setup procedure
- Install a Windows 10 Insider Preview build
- Install WSL2
- Install the preview version of Docker Desktop
- Install beta version of Nvidia driver
- Pull the NGC container image
- Start the container, install the required packages, and then create the image
- Check if GPU is valid in the created image
- Create a docker-compose yml file and launch a container with docker-compose
Step 1: Install a Windows 10 Insider Preview build
In Windows Settings, select Update & Security > Windows Insider Program.
Choose "Dev Channel" from the Dev Channel / Beta / Release Preview options.
- A Microsoft account is required.
- Installation takes a while.
Step 2: Install WSL2
Install WSL2 by following the article below.
- The steps also work on Windows 10 Pro.
- Select Ubuntu 20.04 LTS as the Linux distribution.
Using WSL 2 + Docker on Windows 10 Home
If running the following command in Windows PowerShell
PowerShell
wsl --set-default-version 2
displays "Kernel component updates are required to run WSL 2. See https://aka.ms/wsl2kernel for more information", refer to the "Update kernel components" section (and onward) of the following article.
WSL2 introduction | Screenshots from Windows Update through setting WSL2 as the default
Step 3: Install a preview version of Docker Desktop
The official Docker blog announcing GPU support states:
"To get started with Docker Desktop with Nvidia GPU support on WSL 2, you will need to download our technical preview build from here."
Install the preview version of Docker Desktop from that link.
- Don't forget to enable the WSL2 backend after installation.
Step 4: Install the beta version of the NVIDIA driver
Install the beta version of the NVIDIA driver from the link below:
https://developer.nvidia.com/cuda/wsl/download
You may see the message: "NVIDIA Developer Program Membership Required. The file or page you have requested requires membership in the NVIDIA Developer Program. Please either log in or join the program to access this material."
If you are already registered for the NVIDIA Developer Program, log in to download and install the driver.
- If you have not registered, register first, then download.
Step 5: Pull the NGC container image
Any image would do, but NVIDIA provides ready-made Docker images (NGC containers), so we pull one of those.
You can choose TensorFlow or PyTorch as the framework.
From the link below, select TensorFlow and look for the latest Docker image tag:
https://www.nvidia.com/ja-jp/gpu-cloud/containers/
Pull the latest image with the following command.
- It will take some time.
- The latest tag changes over time, so check it each time.
bash
#For TensorFlow1
$docker pull nvcr.io/nvidia/tensorflow:20.12-tf1-py3
# The (20.12) part will change with each release
#For TensorFlow2
$docker pull nvcr.io/nvidia/tensorflow:20.12-tf2-py3
Step 6: Launch the container, install the required packages, then create a Docker image
TensorFlow and Keras are already installed, but packages such as pandas, matplotlib, and seaborn are not, so we launch the container, install the packages we need, and then create a new image from it.
- Writing a Dockerfile is the more common approach, but I have not gotten to that yet (see the sketch at the end of this step).
Launch a container from the image pulled earlier in the WSL2 terminal, then install the required packages.
bash
#Execute the following command in the WSL2 terminal
$docker run -it nvcr.io/nvidia/tensorflow:20.12-tf2-py3 bash
bash
#-------In the container--------------
================
== TensorFlow ==
================
NVIDIA Release 20.12-tf2 (build 18110405)
TensorFlow Version 2.3.1
Container image Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
Copyright 2017-2020 The TensorFlow Authors. All rights reserved.
#~~~Omission~~~~
root@9f1a8350d911:/workspace#pip install <required packages, e.g. matplotlib>
#Stop the container once the required packages have been installed
root@9f1a8350d911:/workspace#exit
bash
#-------WSL2 terminal from here--------------
#Get the container ID with the ps command
$docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5dc0ae8981bf nvcr.io/nvidia/tensorflow:20.12-tf2-py3 "/usr/local/bin/nvid…" 3 minutes ago Exited (0) 3 seconds ago confident_noether
#After confirming the container ID, create an image with the commit command
#If you will push the image to Docker Hub etc., the name must match the repository; for purely local use, any name is fine.
$docker commit 5dc0ae8981bf hogehoge:latest
#If you can create the image without any problem, the following screen will be output.
sha256:8461579f0c2adf2a052b2b30625df0f48d81d3ab523635eb97b360f03096b4
#Check images with docker images command
#If there is no problem, the following output
$docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hogehoge latest 9d3ea0900f00 29 hours ago 13.4GB
nvcr.io/nvidia/tensorflow 20.12-tf2-py3 21d1065bfe8f 5 weeks ago 12.2GB
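As an aside, the Dockerfile approach mentioned above would replace the manual commit; here is a minimal sketch, with an example package list standing in for whatever you actually install:
bash
#Write a minimal Dockerfile in the WSL2 terminal and build it
$cat > Dockerfile <<'EOF'
FROM nvcr.io/nvidia/tensorflow:20.12-tf2-py3
RUN pip install --no-cache-dir pandas matplotlib seaborn
EOF
$docker build -t hogehoge:latest .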
Step 7: Create a container from the new image and check that the GPU is available
*To enable the GPU, the --gpus all option is required on the docker run command.*
bash
#Execute the following command in the WSL2 terminal
#--rm is an option to automatically delete when the container is stopped.
docker run -it --rm --gpus all -p 8888:8888 hogehoge:latest jupyter lab
#---From here in the container---
================
== TensorFlow ==
================
NVIDIA Release 20.12-tf2 (build 18110405)
TensorFlow Version 2.3.1
#~~~Omission~~~
To access the notebook, open this file in a browser:
file:///root/.local/share/jupyter/runtime/nbserver-1-open.html
Or copy and paste one of these URLs:
http://hostname:8888/?token=[token]
or http://127.0.0.1:8888/?token=[token]
#Press Ctrl+C in the WSL terminal to exit
Since the URL is output as above, access http://127.0.0.1:8888/?token=[token] with a web browser (I use Chrome).
Now that you can connect to JupyterLab, run the following code in a new notebook.
The GPU is enabled if a device with device_type: "GPU" is listed.
from tensorflow.python.client import device_lib
device_lib.list_local_devices()
#If GPU is enabled, output as below
[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 16078152362305136132,
name: "/device:XLA_CPU:0"
device_type: "XLA_CPU"
memory_limit: 17179869184
locality {
}
incarnation: 6904616874393552950
physical_device_desc: "device: XLA_CPU device",
name: "/device:XLA_GPU:0"
device_type: "XLA_GPU"
memory_limit: 17179869184
locality {
}
incarnation: 13161252575635162092
physical_device_desc: "device: XLA_GPU device",
name: "/device:GPU:0"
device_type: "GPU"
memory_limit: 5742592000
locality {
bus_id: 1
links {
}
}
incarnation: 2330595400288827072
physical_device_desc: "device: 0, name: GeForce RTX 3070, pci bus id: 0000:2b:00.0, compute capability: 8.6"]
#--If the --gpus option is omitted, or something goes wrong and the GPU is not recognized, the output looks like this:
[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 68886281224950509,
name: "/device:XLA_CPU:0"
device_type: "XLA_CPU"
memory_limit: 17179869184
locality {
}
incarnation: 13575954317913527773
physical_device_desc: "device: XLA_CPU device"]
Step 8: Use docker-compose
Creating containers with the docker run command works fine through step 7,
but typing long commands every time is a pain, so I wanted to use docker-compose.
The stable version of docker-compose did not support --gpus all;
the usual workaround was to set runtime: "nvidia".
- That approach did not work for me …
Reference article:
How to use gpu with docker-compose
Run GPU in docker-compose (as of February 2, 2020)
issue on github
issue on github
~~When I was stuck, I heard that the preview version of docker-compose supported GPUs, so I went that way.~~
~~Github Release Page~~
~~The version above is 1.28.0-rc1, but the latest preview is 1.28.0-rc2, so install that.~~
*Added 2021/1/20:*
The official version (1.28.0) has been released.
Release Page
How to upgrade docker-compose
See the official docker-compose documentation.
bash
#Do the following at the WSL terminal
$sudo curl -L "https://github.com/docker/compose/releases/download/1.28.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
#If you can download it, do the following
$sudo chmod +x /usr/local/bin/docker-compose
#Check the docker-compose version
$docker-compose --version
#If there is no problem, the following is output
docker-compose version 1.28.0-rc2, build f1e3c356
Once docker-compose is upgraded, create docker-compose.yml.
See also: Github, Enable GPU access with Device Requests
- I don't fully understand the meaning of the comments under deploy:.
docker-compose.yml
version: '3.7'
services:
jupyterlab:
image: hogehoge:latest #List the image name you created
deploy:
resources:
reservations:
devices:
- 'driver': 'nvidia'
'capabilities': ['gpu']
container_name: jupyterlab
ports:
- '8888:8888'
#↓ Mount host and container volumes, written as host_path:container_path
#./ means the current directory, so cd to the directory containing docker-compose.yml in the WSL2 terminal before using this
volumes:
- './ds_python:/workspace' #Change the folder name appropriately
command: jupyter lab
tty: true
stdin_open: true
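Before launching anything, you can have docker-compose validate the file and print the resolved configuration (a quick syntax check):
bash
#Run in the directory containing docker-compose.yml
$docker-compose config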
Once docker-compose.yml is created, arrange the following directory structure:
└── some-folder/
    ├── docker-compose.yml   (directly under the folder)
    └── ds_python/           (put the files you want to bring into the container here)
        ├── ~~~~.py
        └── ~~~~.ipynb
Once the directory is configured, do the following in the WSL terminal
- To refer to a directory on the Windows side from WSL2, write it as /mnt/c/Users/... (the example is for the C drive).
bash
#Do the following at the WSL terminal
#Move to the directory containing docker-compose.yml
$cd /mnt/c/~
#Once there, start the container with docker-compose; -d runs it in the background
$docker-compose up -d
#If there is no problem, the following is output
Creating network "docker_default" with the default driver
Creating jupyterlab ... done
#The JupyterLab address is not shown here, so check the logs
$ docker logs jupyterlab
#The same output as with the docker run command is shown,
#so access http://127.0.0.1:8888/?token=[token] in a web browser (I use Chrome)
#To remove the container, finish with docker-compose down
$docker-compose down
After opening JupyterLab, check that the GPU device is listed using the same procedure as with the docker run command; if it is, you are done.
In conclusion
I researched quite a few things, but in a field that moves this fast, this setup procedure will probably become obsolete soon.
Reference article
** Reference articles in steps 1, 3 and 4 **
Comfortable development life with WSL2 + Docker Desktop + GPU
https://www.docker.com/blog/wsl-2-gpu-support-is-here/
** Reference article in step 2 **
Use WSL 2 + Docker on Windows 10 Home (https://qiita.com/KoKeCross/items/a6365af2594a102a817b)
WSL2 introduction | Screenshot from Win update to WSL2 default
** Reference article in step 5 **
I tried NGC (nVIDIA GPU Cloud)! (Outside Qiita)
** Reference article in step 6 **
Frequently used Docker commands
Creating, starting, and stopping Docker containers
Create an image from a Docker working container to make porting easier
** Reference article in step 7 **
[Docker] Create a jupyterLab (python) environment in 3 minutes!
Build a Jupyter environment in 5 minutes using Docker
Check if GPU can be recognized from TensorFlow with 2 lines code (outside Qiita)
** Reference article in step 8 **
How to use gpu with docker-compose
Run GPU in docker-compose (as of February 2, 2020)
Official docker-compose documentation
Launch jupyter with docker-compose 1
An explanation of how to write docker-compose.yml
Configuration management tool «Docker Compose» that automatically launches multiple Docker containers (Let’s use the latest Docker functions: Part 7)
Explore the Nvidia Container Toolkit for Windows, enhancing InvokeAI’s performance and compatibility with GPU resources.
- Configuring Docker for NVIDIA GPU Access on Windows
- Installing cuDNN for Enhanced Performance
- Verifying NVIDIA Driver and CUDA Installation
- Comprehensive Guide to Installing InvokeAI with Docker on Windows and Linux
Configuring Docker for NVIDIA GPU Access on Windows
To enable Docker to utilize NVIDIA GPUs on Windows, you need to ensure that your system is properly configured. Start by verifying that your NVIDIA drivers and CUDA are installed correctly by running the command nvidia-smi in your command line. If this command fails or does not display the expected driver and CUDA versions, you will need to install the appropriate drivers from the CUDA Toolkit Downloads.
Installing Docker Desktop
- Download Docker Desktop: Visit the Docker Desktop for Windows page and download the installer.
- Install Docker Desktop: Follow the installation instructions provided on the website. Ensure that you enable the WSL 2 feature during installation, as it is required for GPU support.
- Enable GPU Support: After installation, open Docker Desktop and navigate to Settings > Resources > WSL Integration. Ensure that your desired WSL 2 distributions are enabled.
Configuring NVIDIA Container Toolkit
To allow Docker to access the GPU, you must install the NVIDIA Container Toolkit. This toolkit enables Docker to utilize the GPU resources effectively.
- Install WSL 2: Open a command prompt with administrative privileges and run the following command:
wsl --install
This command installs the necessary components for WSL 2.
- Install the NVIDIA Container Toolkit: Follow the instructions on the NVIDIA Container Toolkit installation guide to complete the installation.
Running Docker with GPU Access
Once you have Docker Desktop and the NVIDIA Container Toolkit installed, you can run containers with GPU access. Use the following command to start a container with GPU support:
docker run --gpus all --publish 9090:9090 ghcr.io/invoke-ai/invokeai
This command will run the InvokeAI container, mapping port 9090 from the container to your local machine. After the container starts, you can access the application by navigating to http://localhost:9090 in your web browser. From there, you can install models and begin generating content.
Troubleshooting
If you encounter issues, ensure that:
- Your NVIDIA drivers are up to date.
- Docker Desktop is configured to use WSL 2.
- The NVIDIA Container Toolkit is installed correctly.
By following these steps, you can successfully configure Docker to access NVIDIA GPUs on your Windows machine, enabling efficient utilization of GPU resources for your applications.
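A quick end-to-end smoke test is to run nvidia-smi inside a disposable CUDA container; the image tag below is only an example, so adjust it to one that matches your driver:
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
If this prints the same GPU table you see on the host, the whole chain (driver, WSL 2, Docker, toolkit) is working.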
Installing cuDNN for Enhanced Performance
To ensure optimal performance with your Nvidia GPU, particularly the 30-series and 40-series cards, it is crucial to have the latest cuDNN library installed. An outdated cuDNN library can significantly hinder performance, so follow these steps to install or update cuDNN effectively.
Backup Existing DLLs
Before making any changes, it’s essential to back up your existing cuDNN DLL files. Here’s how to do it:
- Locate your InvokeAI installation folder, typically found at C:\Users\Username\InvokeAI\.
- Open the .venv folder, usually at C:\Users\Username\InvokeAI\.venv. You may need to enable the display of hidden files to see this folder.
- Navigate to the torch package directory, found at C:\Users\Username\InvokeAI\.venv\Lib\site-packages\torch.
- Copy the lib folder within torch and store it in a safe location as a backup.
Download and Install Updated cuDNN DLLs
Next, download the latest cuDNN DLLs and replace the outdated ones:
- Visit the Nvidia cuDNN page.
- Create an account if you don’t have one, and log in.
- Select the most recent version of cuDNN compatible with your GPU architecture. Refer to the cuDNN support matrix to find the appropriate version.
- Download the latest version and extract the files.
- Locate the bin folder within the extracted files, typically named cudnn-windows-x86_64-SOME_VERSION\bin.
- Copy the .dll files from the bin folder and paste them into the lib folder you backed up earlier. Make sure to replace any existing files when prompted.
Performance Check
After updating the cuDNN DLLs, restart your application. If you notice no improvement in performance, you can restore your backup or re-run the installer to revert torch to its original state.
Shared GPU Memory Considerations
For users with Nvidia GPUs using driver version 536.40 or later, Windows can allocate CPU RAM to the GPU when VRAM is insufficient. This process, while helpful, can slow down generation significantly. If you prefer to receive an error message instead of allowing shared memory usage, follow the guide provided by Nvidia here.
To find the Python path required in the linked guide:
- Execute invoke.bat.
- Choose option 2 for the developer console.
- Copy the Python path that includes your InvokeAI installation directory, usually the first one listed.
Troubleshooting Common Issues
If you encounter issues with the installer not finding Python, ensure that you checked the Add python.exe to PATH option during Python installation. If Python is already installed, you can modify the installation to include this option.
Additional Notes
If you see a Triton error on startup, it can be safely ignored as InvokeAI does not utilize Triton. However, if you are on Linux and wish to dismiss the error, you can install Triton if necessary.
Verifying NVIDIA Driver and CUDA Installation
To verify that the NVIDIA drivers and CUDA are correctly installed on your system, you can use the command line tool nvidia-smi. This command provides detailed information about the GPU, including the driver version and CUDA version. If the command fails or does not display the expected versions, it indicates that the drivers may not be installed correctly. In such cases, you should visit the CUDA Toolkit Downloads page and follow the installation instructions specific to your operating system. After installation, run nvidia-smi again to confirm that it now displays the correct driver and CUDA versions.
Using NVIDIA Container Runtime
For users who prefer not to install CUDA directly on their system, the NVIDIA Container Runtime offers an alternative. This allows you to run applications in a containerized environment, which can simplify dependency management and ensure compatibility. To get started, refer to the NVIDIA Container Runtime documentation for detailed setup instructions.
Driver Installation for NVIDIA and AMD GPUs
If you are using an NVIDIA or AMD GPU, it is crucial to ensure that the appropriate drivers are installed. Depending on your hardware and operating system, you may need to manually install these drivers. Check the official NVIDIA website for the latest drivers compatible with your GPU model.
Shared GPU Memory on Windows
Important Note for NVIDIA GPUs with Driver 536.40
For NVIDIA GPUs with driver version 536.40 or later, released in June 2023, Windows can allocate some of its CPU RAM to the GPU when there is insufficient VRAM for a task. This feature, known as shared GPU memory, allows the system to continue generating outputs even when VRAM is exhausted. However, this can significantly slow down performance. If you prefer to receive an error message instead of allowing shared memory usage, you can follow the guide provided by NVIDIA here.
To find the Python path required for this guide:
- Execute invoke.bat.
- Choose option 2 for the developer console.
- The console will display at least one Python path; copy the path that includes your InvokeAI installation directory, which is typically the first one listed.
Python Installation Issues on Windows
When installing Python, ensure that you check the box labeled Add python.exe to PATH. This option is available at the bottom of the Python Installer window. If Python is already installed, you can modify the installation by re-running the installer, selecting the Modify option, and checking the box to add Python to your PATH.
Triton Error on Startup
If you encounter a Triton error during startup, it can be safely ignored as InvokeAI does not utilize Triton. However, if you are on a Linux system and wish to eliminate the error message, you can choose to install Triton as a workaround.
Comprehensive Guide to Installing InvokeAI with Docker on Windows and Linux
Installing InvokeAI with Docker
To effectively utilize InvokeAI, it is crucial to ensure that your Docker setup is configured to leverage your machine’s GPU capabilities. This guide provides detailed instructions for both Windows and Linux users, focusing on the necessary configurations and commands to get started.
Docker Configuration for GPU Access
Windows Users
Docker Desktop on Windows supports GPU access, which significantly enhances generation speeds. To enable this feature:
- Ensure you have the latest version of Docker Desktop installed.
- Follow the instructions provided in the Docker blog to configure GPU support.
Linux Users
For Linux users, the process involves following the appropriate documentation based on your GPU vendor:
- NVIDIA Users: Refer to the NVIDIA Container Toolkit installation guide for detailed steps.
- AMD Users: Follow the AMD ROCm installation guide to set up Docker with GPU support.
Quick Start with Docker Commands
Once your Docker setup is ready, you can run the following command to start the InvokeAI container:
docker run --runtime=nvidia --gpus=all --publish 9090:9090 ghcr.io/invoke-ai/invokeai
After executing this command, open your web browser and navigate to http://localhost:9090 to access the InvokeAI interface. From there, you can install models and begin generating content.
Building the Docker Image
To build the Docker image, navigate to the docker directory and use the following command:
docker compose build
If you are using an AMD GPU, you have two options:
- Set the GPU_DRIVER=rocm environment variable in your docker-compose.yml file and run the build command as usual.
- Alternatively, set GPU_DRIVER=rocm in the .env file and utilize the provided build.sh script for convenience.
Running the Docker Container
To run the container, execute:
docker compose up
This command will start the container and configure the InvokeAI root directory if this is your first installation. You can then access the application at http://localhost:9090.
Building from Source
For those interested in customizing their Docker setup, all necessary materials are located in the docker directory of the GitHub repository. To get started:
cd docker
cp .env.sample .env
docker compose up
Additionally, a run.sh convenience script is available. For detailed customization instructions, refer to the docker/README.md file.
Prerequisites
Before proceeding, ensure that Docker is installed on your system. For Windows users, access the Docker Desktop app and navigate to Preferences > Resources > Advanced. Here, increase the CPUs and Memory allocation to prevent performance issues. You may also need to adjust the Swap and Disk image size settings to optimize performance.
- Background
- 1. Windows Insider Preview Build 20150
- 2. Install WSL
- 3. Linux Kernel
- 4. Install Ubuntu 18.04
- 5. Install CUDA drivers on Win 10 (host)
- 6. Install Docker
- Docker Desktop for Windows confusion
- 7. Install NVIDIA Container Toolkit
- 8. Use VS Code for development
- Issues faced
- References
- Summary
Background
This is a followup to my earlier post in which I wrote how to set up Docker and Python. I had recently installed an NVIDIA GPU (RTX 2060 Super) in my machine and I wanted to use it to develop deep learning models in Tensorflow. I have been extensively using Docker and VS Code, so I was looking for a setup that would fit into my existing workflow.
There were a few options:
- Install Python and tensorflow directly on Windows [The setup seems quite complicated and I prefer the containerised approach for development]
- Dual boot into Ubuntu and setup tensorflow [Best and recommended. But this would mean I would be switching back forth between Ubuntu and Windows]
- Use the Docker Desktop for Windows and create tensorflow containers [Currently GPU is not supported, hence the next option]
- Use Windows Subsystem for Linux (WSL)
I decided to go with the last option. And it was good timing, as Microsoft and NVIDIA had recently announced support for GPU acceleration in WSL 2. So, the plan is as follows:
- Enable WSL on Windows
- Install Ubuntu inside WSL
- Install Docker and NVIDIA toolkit in Ubuntu and create tensorflow containers (with GPU support)
- Use the VS Code IDE for development
Please note that as of 26th Jun 20, most of these features are still in development. They worked for me, but you may try them at your own risk, as they did end up messing up some parts of my system. You will need to sign up for:
- Windows Insider Program
- NVIDIA Developer Program
Also, under each step I have mentioned some CHECKS. Hope they help in making sure you are on the right track.
1. Windows Insider Preview Build 20150
First and foremost, the GPU support on WSL is not available in the Retail Build of Windows 10. I had to get the latest Windows 10 Insider Preview Build 20150. The details of all the builds are available in the Flight Hub. The installation process took an hour.
CHECK : Windows Version. In PowerShell (PS) or CMD, type winver
2. Install WSL
Steps to be done in PS/CMD:
- Enable WSL1 :
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
- Enable WSL2 :
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
- Restart Computer
- Default to version 2 :
wsl.exe --set-default-version 2
3. Linux Kernel
After running the above command, I got this message : WSL 2 requires an update to its kernel component. For information please visit https://aka.ms/wsl2kernel.
The page has instructions for installing the WSL2 Linux Kernel MSI. After installing it in Win10, this command worked : wsl.exe --set-default-version 2
CHECK : Linux Kernel version should be 4.19.121 or higher. On PS/CMD type wsl cat /proc/version
Output : Linux version 4.19.121-microsoft-standard (oe-user@oe-host) (gcc version 8.2.0 (GCC)) #1 SMP Fri Jun 19 21:06:10 UTC 2020
If you don't have the latest kernel, then go to Windows Settings \ Update & Security \ Windows Update \ Check for updates. Ensure that "Receive Updates" is turned on.
As per my understanding, in future releases both the installation and update of the WSL Linux kernel will be handled by these two commands: wsl --install and wsl --update
As per Microsoft :
We’ve removed the Linux kernel from the Windows OS image and instead will be delivering it to your machine via Windows Update, the same way that 3rd party drivers (like graphics, or touchpad drivers) are installed and updated on your machine today. When automatic install and update of the Linux kernel is added you’ll start getting automatic updates to your kernel right away.
4. Install Ubuntu 18.04
- You can get the distro from the Microsoft store, to be installed on WSL
- I prefer the Debian distribution as it is lighter than Ubuntu, so I spent quite some time in setting it up. But it did not work.
- In the end, I went ahead with Ubuntu 18.04 (I didn’t choose 20.04 as I have read somewhere that tensorflow has issues with that version. I might be wrong)
CHECK : Version of Ubuntu. On PS/CMD type wsl -l -v
Output :
NAME STATE VERSION
* Ubuntu-18.04 Stopped 2
5. Install CUDA drivers on Win 10 (host)
The CUDA drivers for WSL — Public Preview can be installed from here
CHECK : The CUDA driver version should be 455.41 or higher. On PS/CMD type nvidia-smi
6. Install Docker
- Enter WSL by typing wsl in PS/CMD. This will activate Ubuntu.
- Install docker using curl https://get.docker.com | sh
  - The script warns: "WSL DETECTED: We recommend using Docker Desktop for Windows. You may press Ctrl+C now to abort this script." As far as I understand, the NVIDIA container toolkit does not work with Docker Desktop for Windows, so I ignored this.
CHECK : Docker version. At the Ubuntu prompt, type docker -v
Docker Desktop for Windows confusion
- The Docker Desktop for Windows is different from the docker that I installed in Step 6
- Prior to the release of WSL2, it would use Hyper-V virtualisation for creating the Linux VMs in Windows
- After I installed WSL2, I enabled the WSL2 integration in Docker Desktop and let it use the WSL backend
- But this started interfering with the docker that I had installed in WSL/Ubuntu
  - In particular, it would create its own Linux distros: docker-desktop and docker-desktop-data
  - And the images/containers that I created using the WSL/Ubuntu dockerd went missing
- I found these solutions to work:
  - Disable the Docker Desktop integration with WSL2 (so that it continues to use the Hyper-V backend), then restart the computer
  - Quit Docker Desktop and deregister docker-desktop and docker-desktop-data using wslconfig /u docker-desktop and wslconfig /u docker-desktop-data
  - Check with wsl -l -v
  - Setting the default distro to Ubuntu with wsl --set-default Ubuntu-18.04 did not help
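7. Install NVIDIA Container Toolkit
This step follows NVIDIA's standard Ubuntu setup for the container toolkit, the same commands shown in the first article above; a sketch, assuming the nvidia-docker2 package:
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-docker2
sudo service docker stop && sudo service docker start
CHECK : At the Ubuntu prompt, docker run --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark should report your GPU.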
8. Use VS Code for development
One of the challenges I faced was to connect VS Code to the container that was running on Ubuntu, which was in turn running inside WSL. The Remote-Containers extension was unable to reach the container, and the Remote-WSL extension only reached Ubuntu.
With some help from the VS Code community and the docker documentation here and here, I managed to get it working in the two steps below:
- In WSL/Ubuntu, I started the docker daemon using the -H flag:
sudo dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
  - This opens port 2375 for docker to listen on (this is a non-secured way)
- In VS Code settings.json, I added this line:
"docker.host":"tcp://localhost:2375"
And then in VS Code, I could use Remote Containers: Attach to running containers to connect to the container running in WSL/Ubuntu
Issues faced
- The boot loader menu for my Ubuntu — Windows 10 dual boot disappeared (Most likely after the installation of the Insider Build 20150)
- The CUDA toolkit doesn’t seem to work on Debian distro (I might have to give it another try )
- The Docker Desktop for Windows (with WSL2 backend) should not be running as it interferes with the docker daemon in Ubuntu
- Some useful wsl options:
  - wsl --set-default Ubuntu-18.04 to change the default distro
  - wsl --shutdown to shut down the Linux kernel
- I also had another, non-CUDA display adapter on my machine, a GeForce GT 710. I realised tensorflow was crashing at tf.test.gpu_device_name(), so I disabled that adapter and everything seemed to work well.
References
- GPU Compute, WSL Install and WSL Update arrive in the latest Insider build for the Windows Subsystem for Linux | Microsoft
- Enable NVIDIA CUDA in WSL 2 | Microsoft
- Windows Subsystem for Linux Installation Guide for Windows 10 | Microsoft
- Announcing CUDA on Windows Subsystem for Linux 2 | NVIDIA
- CUDA on WSL User Guide | NVIDIA
- NVIDIA Forum 1
- NVIDIA Forum 2
- VS Code configuration
- Tensorflow
- Getting started with CUDA on Ubuntu on WSL 2 | Ubuntu
Summary
This was just a start and I could run into issues as I continue using tensorflow. But in summary, it was a good learning experience even though I ended up spending a couple of days setting it up. I am not an expert in Docker nor in CUDA, so if you see any issues with the blog please leave a comment and I will rectify them. You can also reach out to the NVIDIA team; they are highly engaged and promptly reply to queries on their forums. Cheers !
“Docker is the easiest way to run TensorFlow on a GPU…”
One of the fastest ways to get started with TensorFlow and Keras on your local developer machine is using Docker. This is especially useful if you are working on a Windows machine with an NVIDIA GPU to speed up model training and inference. I have put together the following detailed setup guide to help you get up and running more quickly.
"Cargo ship close to Moreton island, QLD, Australia" - Photo by Borderpolar Photographer on Unsplash
Docker on WSL2 (Windows Subsystem for Linux) allows you to avoid incompatibilities you may see running directly on Windows by instead running in a Docker container for Linux. With an NVIDIA CUDA-enabled video card, GPU acceleration trains your models faster, improving your ability to iterate and find an optimal deep learning architecture and associated parameters for the solution you seek.
“Docker is the easiest way to run TensorFlow on a GPU since the host machine only requires the NVIDIA® driver …”
https://www.tensorflow.org/install/docker#gpu_support
Use the following steps to get all this working…
System Requirements
- Windows 10 19044 (aka 21H2) or higher
- CUDA-enabled GPU card – You can check your setup using the Verify your NVIDIA CUDA GPU setup step below
- Graphics card drivers installed – this should already be done. How else are you going to play Doom? 😉
Install WSL2 on Windows
Your first step is to install WSL2 on Windows. Open PowerShell as an administrator (“Run as administrator”) and run the following command:
wsl --install
You can also choose to install a specific Linux distribution. In this example I am installing the latest Ubuntu LTS version.
# get a list of available distros
wsl --list --online
Results:
The following is a list of valid distributions that can be installed.
Install using 'wsl.exe --install <Distro>'.
NAME FRIENDLY NAME
Ubuntu Ubuntu
Debian Debian GNU/Linux
kali-linux Kali Linux Rolling
Ubuntu-18.04 Ubuntu 18.04 LTS
Ubuntu-20.04 Ubuntu 20.04 LTS
Ubuntu-22.04 Ubuntu 22.04 LTS
...
# install latest Ubuntu LTS
wsl --install Ubuntu-22.04
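Once the install completes, you can confirm the distro is registered and running under WSL version 2:
wsl --list --verbose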
For additional details: Install WSL command – How to install Linux on Windows with WSL
Install Docker
To download and install Docker Desktop for Windows and WSL2, see the Install Docker Desktop on Windows guide on Docker Docs.
Verify your NVIDIA CUDA GPU setup
Verify your setup using nvidia-smi
Run the following commands in PowerShell/Windows Terminal:
nvidia-smi
This should display driver and CUDA version information for your GPU.
You are now ready to run TensorFlow using Docker!
Download and run the TensorFlow image
Run the following command in PowerShell/Windows Terminal, making sure to include the bash command at the end.
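A typical invocation follows the official TensorFlow Docker instructions; the latest-gpu tag here is one example, so substitute whichever GPU-enabled tag you prefer:
docker run --gpus all -it --rm tensorflow/tensorflow:latest-gpu bash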
A bash shell will now be displayed in your terminal. Run these commands to confirm your TensorFlow image is working:
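For example, this one-liner uses the standard tf.config API and should list at least one GPU device, e.g. [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]:
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"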
Use Docker Compose to run a TensorFlow project
We will now use a Docker Compose file to run TensorFlow and Jupyter Lab with a Computer Vision example that uses Keras, a Convolutional Neural Network and the PTMLib library.
- Clone the tensorflow-gpu-docker demo project for this tutorial to your local machine:
git clone https://github.com/dreoporto/tensorflow-gpu-docker
cd tensorflow-gpu-docker
- Copy and rename the sample.env file to .env
- Run using Docker Compose:
docker compose up -d
- View the Logs tab in Docker Desktop, under Containers > pendragon-ai-tensorflow > tensorflow-jupyter-gpu
- Click the http://127.0.0.1:8888/lab?token=... link in the Logs tab. This will launch Jupyter Lab.
- Open notebooks/Computer-Vision-with-Model-Caching.ipynb to run the sample notebook we have included
- Click the Restart Kernel and Run All Cells... button
When this notebook completes running, all files are saved locally in tensorflow-gpu-docker/notebooks and will remain even after you shut down your Docker container, allowing you to continue your work later. This is made possible using the volumes property in the docker-compose.yml file, which maps the /app/notebooks directory in the Docker container to the local tensorflow-gpu-docker/notebooks folder on your machine.
When you've finished working in Jupyter, shut down the Docker container using the following command.
docker compose down
You can now verify that the updated Jupyter Notebook file still exists on your machine under tensorflow-gpu-docker/notebooks for later usage.
Next Steps
Now that you’ve tried a sample solution, you can create your own notebooks and scripts using TensorFlow and Docker.
- Create your own notebooks in Jupyter Lab
- Connect to Jupyter Lab using VS Code:
  - Copy the http://127.0.0.1:8888/lab?token=... URL used above. You will need it at the 'Enter the URL of the running Jupyter server.' step
  - Follow the steps at https://code.visualstudio.com/docs/datascience/jupyter-notebooks#_connect-to-a-remote-jupyter-server
The combined power of Docker and WSL2 makes it much easier to get TensorFlow with GPU acceleration up and running quickly. You can avoid the challenges of managing library dependencies which may be incompatible with native Windows. It also makes it much easier to reproduce results consistently on other systems. I highly recommend this setup if you have a Windows machine with a CUDA compatible GPU.
I hope you get a chance to try out these resources personally, and that this setup guide helps you spend more time learning and experimenting with AI to build great products and real-world solutions.
For More Info
- Official TensorFlow Docker Images
- Install TensorFlow – Docker – GPU Support
- Data Science in Visual Studio Code
- Docker Docs