PyTorch CUDA Installation on Windows

Installing on macOS

PyTorch can be installed and used on macOS. Depending on your system and GPU capabilities, your experience with PyTorch on macOS may vary in terms of processing time.

Prerequisites

macOS Version

PyTorch is supported on macOS 10.15 (Catalina) or above.

Python

It is recommended that you use Python 3.9-3.12.
You can install Python either through Homebrew or
the Python website.

Package Manager

To install the PyTorch binaries, you will need to use the supported package manager: pip.

pip

Python 3

If you installed Python via Homebrew or the Python website, pip was installed with it. If you installed Python 3.x, then you will be using the command pip3.

Tip: If you want to use just the command pip, instead of pip3, you can symlink pip to the pip3 binary.
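One way to do this, as a sketch (assuming a user-writable directory such as ~/.local/bin that is on your PATH; adjust the target directory for your setup):

```shell
# Create a user-local "pip" that points at the pip3 binary.
mkdir -p "$HOME/.local/bin"
ln -sf "$(command -v pip3)" "$HOME/.local/bin/pip"
# In a new shell, "pip --version" should now match "pip3 --version".
```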

Installation

pip

To install PyTorch via pip, use the following command, depending on your Python version:

# Python 3.x
pip3 install torch torchvision

Verification

To ensure that PyTorch was installed correctly, we can verify the installation by running sample PyTorch code. Here we will construct a randomly initialized tensor.

import torch
x = torch.rand(5, 3)
print(x)

The output should be something similar to:

tensor([[0.3380, 0.3845, 0.3217],
        [0.8337, 0.9050, 0.2650],
        [0.2979, 0.7141, 0.9069],
        [0.1449, 0.1132, 0.1375],
        [0.4675, 0.3947, 0.1426]])

Building from source

For the majority of PyTorch users, installing from a pre-built binary via a package manager will provide the best experience. However, there are times when you may want to install the bleeding edge PyTorch code, whether for testing or actual development on the PyTorch core. To install the latest PyTorch code, you will need to build PyTorch from source.

Prerequisites

  1. [Optional] Install pip
  2. Follow the steps described here: https://github.com/pytorch/pytorch#from-source

You can verify the installation as described above.

Installing on Linux

PyTorch can be installed and used on various Linux distributions. Depending on your system and compute requirements, your experience with PyTorch on Linux may vary in terms of processing time. It is recommended, but not required, that your Linux system has an NVIDIA or AMD GPU in order to harness the full power of PyTorch’s CUDA support or ROCm support.

Prerequisites

Supported Linux Distributions

PyTorch is supported on Linux distributions that use glibc >= v2.17, which include the following:

  • Arch Linux, minimum version 2012-07-15
  • CentOS, minimum version 7.3-1611
  • Debian, minimum version 8.0
  • Fedora, minimum version 24
  • Mint, minimum version 14
  • OpenSUSE, minimum version 42.1
  • PCLinuxOS, minimum version 2014.7
  • Slackware, minimum version 14.2
  • Ubuntu, minimum version 13.04

The install instructions here will generally apply to all supported Linux distributions. An example difference is that your distribution may support yum instead of apt. The specific examples shown were run on an Ubuntu 18.04 machine.

Python

Python 3.9-3.12 is generally installed by default on any of our supported Linux distributions, which meets our recommendation.

Tip: By default, you will have to use the command python3 to run Python. If you want to use just the command python, instead of python3, you can symlink python to the python3 binary.
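On Debian/Ubuntu, an alternative to a manual symlink is the python-is-python3 package, which sets this up for you (package name assumed from Ubuntu 20.04+/Debian 11+; on other distributions a plain symlink works):

```shell
# Makes the "python" command point to python3 system-wide.
sudo apt install python-is-python3
```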

However, if you want to install another version, there are multiple ways:

  • APT
  • Python website

If you decide to use APT, you can run the following command to install it:
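For example, on an APT-based distribution the install command typically looks like this (exact package names such as python3.10 depend on your release):

```shell
# Install Python from the distribution repositories.
sudo apt update
sudo apt install python3
```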

Package Manager

To install the PyTorch binaries, you will need to use the supported package manager: pip.

pip

Python 3

While Python 3.x is installed by default on Linux, pip is not installed by default.

sudo apt install python3-pip

Tip: If you want to use just the command pip, instead of pip3, you can symlink pip to the pip3 binary.

Installation

pip

No CUDA

If you are installing PyTorch via pip and do not have a CUDA-capable or ROCm-capable system, or do not require CUDA/ROCm (i.e., GPU support), then in the above selector choose OS: Linux, Package: Pip, Language: Python, and Compute Platform: CPU.
Then, run the command that is presented to you.
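At the time of writing, the selector typically produces a command of this form for the CPU build (the wheel-index URL may change between releases):

```shell
# CPU-only PyTorch wheels from the official index.
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
```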

With CUDA

If you are installing PyTorch via pip and have a CUDA-capable system, in the above selector choose OS: Linux, Package: Pip, Language: Python, and the CUDA version suited to your machine. Often, the latest CUDA version is better.
Then, run the command that is presented to you.
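As a sketch, at the time of writing the selector emits a command of this form for CUDA 12.1 (substitute the cuXXX tag matching your chosen CUDA version):

```shell
# PyTorch wheels built against CUDA 12.1.
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
```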

With ROCm

If you are installing PyTorch via pip and have a ROCm-capable system, in the above selector choose OS: Linux, Package: Pip, Language: Python, and the supported ROCm version.
Then, run the command that is presented to you.
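For example, at the time of writing the ROCm command takes this form (the rocm5.7 tag is illustrative; use the ROCm version the selector shows for your release):

```shell
# PyTorch wheels built for ROCm (version tag varies by release).
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.7
```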

Verification

To ensure that PyTorch was installed correctly, we can verify the installation by running sample PyTorch code. Here we will construct a randomly initialized tensor.

import torch
x = torch.rand(5, 3)
print(x)

The output should be something similar to:

tensor([[0.3380, 0.3845, 0.3217],
        [0.8337, 0.9050, 0.2650],
        [0.2979, 0.7141, 0.9069],
        [0.1449, 0.1132, 0.1375],
        [0.4675, 0.3947, 0.1426]])

Additionally, to check whether your GPU driver and CUDA/ROCm are enabled and accessible by PyTorch, run the following commands, which return whether the GPU driver is enabled. (The ROCm build of PyTorch uses the same semantics at the Python API level, so the commands below also work for ROCm.)

import torch
torch.cuda.is_available()

Building from source

For the majority of PyTorch users, installing from a pre-built binary via a package manager will provide the best experience. However, there are times when you may want to install the bleeding edge PyTorch code, whether for testing or actual development on the PyTorch core. To install the latest PyTorch code, you will need to build PyTorch from source.

Prerequisites

  1. Install Pip
  2. If you need to build PyTorch with GPU support
    a. for NVIDIA GPUs, install CUDA, if your machine has a CUDA-enabled GPU.
    b. for AMD GPUs, install ROCm, if your machine has a ROCm-enabled GPU
  3. Follow the steps described here: https://github.com/pytorch/pytorch#from-source

You can verify the installation as described above.

Installing on Windows

PyTorch can be installed and used on various Windows distributions. Depending on your system and compute requirements, your experience with PyTorch on Windows may vary in terms of processing time. It is recommended, but not required, that your Windows system has an NVIDIA GPU in order to harness the full power of PyTorch’s CUDA support.

Prerequisites

Supported Windows Distributions

PyTorch is supported on the following Windows distributions:

  • Windows 7 and greater; Windows 10 or greater recommended.
  • Windows Server 2008 r2 and greater

The install instructions here will generally apply to all supported Windows distributions. The specific examples shown were run on a Windows 10 Enterprise machine.

Python

Currently, PyTorch on Windows only supports Python 3.9-3.12; Python 2.x is not supported.

As it is not installed by default on Windows, there are multiple ways to install Python:

  • Chocolatey
  • Python website

If you decide to use Chocolatey, and haven’t installed Chocolatey yet, ensure that you are running your command prompt as an administrator.

For a Chocolatey-based install, run the following command in an administrative command prompt:
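A Chocolatey-based Python install typically looks like this (run in an administrative prompt; this installs the latest Python 3 release Chocolatey packages):

```shell
choco install python
```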

Package Manager

To install the PyTorch binaries, you will need to use the supported package manager: pip.

pip

If you installed Python by any of the recommended ways above, pip will have already been installed for you.

Installation

pip

No CUDA

If you are installing PyTorch via pip and do not have a CUDA-capable system, or do not require CUDA, in the above selector choose OS: Windows, Package: Pip and CUDA: None.
Then, run the command that is presented to you.
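At the time of writing, the CPU-only command for Windows is simply the default wheel install (no CUDA index URL needed, since the default Windows wheels are CPU builds):

```shell
# CPU-only PyTorch on Windows.
pip3 install torch torchvision torchaudio
```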

With CUDA

If you are installing PyTorch via pip and have a CUDA-capable system, in the above selector choose OS: Windows, Package: Pip and the CUDA version suited to your machine. Often, the latest CUDA version is better.
Then, run the command that is presented to you.

Verification

To ensure that PyTorch was installed correctly, we can verify the installation by running sample PyTorch code. Here we will construct a randomly initialized tensor.

From the command line, type python, then enter the following code:

import torch
x = torch.rand(5, 3)
print(x)

The output should be something similar to:

tensor([[0.3380, 0.3845, 0.3217],
        [0.8337, 0.9050, 0.2650],
        [0.2979, 0.7141, 0.9069],
        [0.1449, 0.1132, 0.1375],
        [0.4675, 0.3947, 0.1426]])

Additionally, to check whether your GPU driver and CUDA are enabled and accessible by PyTorch, run the following commands, which return whether the CUDA driver is enabled:

import torch
torch.cuda.is_available()

Building from source

For the majority of PyTorch users, installing from a pre-built binary via a package manager will provide the best experience. However, there are times when you may want to install the bleeding edge PyTorch code, whether for testing or actual development on the PyTorch core. To install the latest PyTorch code, you will need to build PyTorch from source.

Prerequisites

  1. Install pip
  2. Install CUDA, if your machine has a CUDA-enabled GPU.
  3. If you want to build on Windows, Visual Studio with the MSVC toolset and NVTX are also needed. The exact requirements for those dependencies can be found here.
  4. Follow the steps described here: https://github.com/pytorch/pytorch#from-source

You can verify the installation as described above.

Prerequisites

Make sure you have an NVIDIA GPU supported by CUDA and that you meet the following requirements.

1. CUDA for GPU support
    • For CUDA 11.8, make sure you have NVIDIA driver version 452.39 or higher
    • For CUDA 12.1, make sure you have NVIDIA driver version 527.41 or higher

2. Windows 10 or higher (recommended), Windows Server 2008 r2 and greater

4 Steps to Install PyTorch with CUDA

Step 1. Check your NVIDIA driver

Open the NVIDIA Control Panel. Click System Information and check the driver version. It should be 537.58 or greater, which is the current driver version at the time of writing.

If you have an older version, go to https://www.nvidia.com/en-us/geforce/drivers/ and update your driver. Both automatic and manual driver updates are possible if you know your video card type.

Step 2. Open a Command Prompt

Open a Windows terminal or the command prompt (cmd) and type python. If Python is not installed, the Microsoft Store will open automatically, where you can install it.

Step 3. Install PyTorch with CUDA

Install PyTorch 2.1.1 with CUDA 12.1:

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121

Install PyTorch 2.1.1 with CUDA 11.8:

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

Note: You can also install previous versions of PyTorch. The following commands are provided for reference.

Install PyTorch 2.1.0 with CUDA 12.1:

pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu121

Install PyTorch 2.1.0 with CUDA 11.8:

pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu118

Install PyTorch 2.0.1 with CUDA 11.8:

pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118

Install PyTorch 2.0.1 with CUDA 11.7:

pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2

Install PyTorch 2.0.0 with CUDA 11.8:

pip install torch==2.0.0 torchvision==0.15.1 torchaudio==2.0.1 --index-url https://download.pytorch.org/whl/cu118

Install PyTorch 2.0.0 with CUDA 11.7:

pip install torch==2.0.0 torchvision==0.15.1 torchaudio==2.0.1

Install PyTorch 1.13.1 with CUDA 11.7:

pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu117

Install PyTorch 1.13.1 with CUDA 11.6:

pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu116

Install PyTorch 1.13.0 with CUDA 11.7:

pip install torch==1.13.0+cu117 torchvision==0.14.0+cu117 torchaudio==0.13.0 --extra-index-url https://download.pytorch.org/whl/cu117

Install PyTorch 1.13.0 with CUDA 11.6:

pip install torch==1.13.0+cu116 torchvision==0.14.0+cu116 torchaudio==0.13.0 --extra-index-url https://download.pytorch.org/whl/cu116

Install PyTorch 1.12.1 with CUDA 11.6:

pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu116

Install PyTorch 1.12.1 with CUDA 11.3:

pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113

Install PyTorch 1.12.1 with CUDA 10.2:

pip install torch==1.12.1+cu102 torchvision==0.13.1+cu102 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu102

Install PyTorch 1.12.0 with CUDA 11.6:

pip install torch==1.12.0+cu116 torchvision==0.13.0+cu116 torchaudio==0.12.0 --extra-index-url https://download.pytorch.org/whl/cu116

Install PyTorch 1.12.0 with CUDA 11.3:

pip install torch==1.12.0+cu113 torchvision==0.13.0+cu113 torchaudio==0.12.0 --extra-index-url https://download.pytorch.org/whl/cu113

Install PyTorch 1.12.0 with CUDA 10.2:

pip install torch==1.12.0+cu102 torchvision==0.13.0+cu102 torchaudio==0.12.0 --extra-index-url https://download.pytorch.org/whl/cu102

Install PyTorch 1.11.0 with CUDA 11.3:

pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113

Install PyTorch 1.11.0 with CUDA 10.2:

pip install torch==1.11.0+cu102 torchvision==0.12.0+cu102 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu102
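To confirm which of these builds actually ended up in your environment, you can query the package metadata without importing torch. A minimal sketch using only the standard library:

```python
from importlib.metadata import version, PackageNotFoundError


def installed_version(package: str) -> str:
    """Return the installed version string for a package, or 'not installed'."""
    try:
        return version(package)
    except PackageNotFoundError:
        return "not installed"


# Report the PyTorch stack versions picked up by the current interpreter.
for package in ("torch", "torchvision", "torchaudio"):
    print(f"{package}: {installed_version(package)}")
```

A CUDA build will show a version suffix such as 2.1.1+cu121; a plain version string on Windows usually indicates a CPU build.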

Step 4. Verify Installation

To ensure that PyTorch was installed correctly, we can verify the installation by running sample PyTorch code. Here we will construct a randomly initialized tensor.

From the command line, type python, then enter the following code:

import torch
x = torch.rand(2, 3)
print(x)

The output should be something similar to:

>>> print(torch.rand(2,3))
tensor([[0.7688, 0.5814, 0.9436],
        [0.0245, 0.6007, 0.2279]])
>>>

Additionally, to check whether your GPU driver and CUDA are enabled and accessible by PyTorch, run the following commands, which return whether the CUDA driver is enabled:

C:\Users\Administrator>python
Python 3.11.6 (tags/v3.11.6:8b6ee5b, Oct  2 2023, 14:57:12) [MSC v.1935 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
True
>>>
>>> print(torch.cuda.device_count())
1

Once again, after reinstalling Windows, I realized I needed to install the drivers, CUDA, cuDNN, and Tensorflow/Keras for training neural networks.

Each time this turns out to be a simple but time-consuming operation for me: finding a suitable combination of Tensorflow/Keras, CUDA, cuDNN, and Python is not hard, but I only remember these dependencies at the moment when, on importing Tensorflow, I see that the GPU is not detected and start searching for the right page in the Tensorflow documentation.

This time the situation was a bit more complicated. Besides installing Tensorflow, I also needed to install PyTorch, with its own dependencies and supported versions of Python, CUDA, and cuDNN.

After several hours of experimenting, I decided to record all the useful links in one post for my future self.

A Short Algorithm for Installing Tensorflow and PyTorch

Note: Tensorflow and PyTorch can be installed in a single virtual environment, but that algorithm is not covered in this article.

Preparing for Installation

  1. Determine which Python version is supported by Tensorflow and PyTorch (at the time of writing, I was unable to install PyTorch in a virtual environment with Python 3.9.5)
  2. For the chosen Python version, find suitable versions of Tensorflow and PyTorch
  3. Determine which CUDA versions support the previously chosen versions of Tensorflow and PyTorch
  4. Determine the supported cuDNN version for Tensorflow: not all cuDNN versions supported by CUDA are supported by Tensorflow. I did not notice this limitation for PyTorch

Installing CUDA and cuDNN

  1. Download a suitable CUDA version and install it. You can install it with all the default settings
  2. Download the cuDNN version suitable for the chosen Tensorflow version (step 1.2). Downloading cuDNN requires registration on the NVidia website. "Installing" cuDNN consists of unpacking the archive and replacing the existing CUDA files with the files from the archive

Installing Tensorflow

  1. Create a virtual environment for Tensorflow with the chosen Python version. Let's call it, for example, py38tf
  2. Switch to the py38tf environment and install the supported Tensorflow version: pip install tensorflow==x.x.x
  3. Check GPU support with the command
    python -c "import tensorflow as tf; print('CUDA available' if tf.config.list_physical_devices('GPU') else 'CUDA not available')"


Installing PyTorch

  1. Create a virtual environment for PyTorch with the chosen Python version. Let's call it, for example, py38torch
  2. Switch to the py38torch environment and install the supported PyTorch version
  3. Check GPU support with the command
python -c "import torch; print('CUDA available' if torch.cuda.is_available() else 'CUDA not available')"

In my case, the following combination worked:

  • Python 3.8.8
  • NVidia driver 441.22
  • CUDA 10.1
  • cuDNN 7.6
  • Tensorflow 2.3.0
  • PyTorch 1.7.1+cu101

Tensorflow and PyTorch are installed in separate virtual environments.

Summary

The usefulness of this article will not become apparent for a while: I do not reinstall my system often.

If you use this algorithm and find any mistakes, write in the comments

Step-by-Step Guide to Set Up PyTorch for Your GPU on Windows 10 and 11

In this competitive world of technology, Machine Learning and Artificial Intelligence technologies have emerged as a breakthrough for developing advanced AI applications like image recognition, natural language processing, speech translation, and more. However, developing such AI-powered applications requires massive amounts of computational power, far beyond the capabilities of CPUs (Central Processing Units).

That's because CPUs come with only a handful of cores and threads. So CPUs can only process a few threads at a time, which becomes a bottleneck for the highly parallelizable computations required by deep learning algorithms. This gave rise to the use of GPUs (Graphics Processing Units), which ship with thousands of cores, can handle thousands of threads simultaneously, and are designed for mathematically intensive tasks like real-time 3D graphics rendering, crypto mining, and deep learning, where a large number of mathematical computations are required.

NVIDIA, the GPU manufacturing giant, revolutionized the applicability of neural networks by developing CUDA, a parallel computing platform and API model for GPU acceleration. This allowed developers to leverage the processing prowess of NVIDIA GPUs for general-purpose computing through languages like C/C++ and Python. To further ease the development of GPU-accelerated deep learning applications, companies like Meta and Google developed frameworks like PyTorch and TensorFlow. Built on top of CUDA, these frameworks provide high-level APIs for building and training neural networks in Python without directly working with low-level CUDA code.

PyTorch, developed by Facebook's AI Research lab, has emerged as one of the leading choices for development, training, and inference in deep learning research and production. With its imperative programming model and Pythonic syntax, PyTorch is widely adopted in natural language processing and reinforcement learning applications.

However, setting up PyTorch on Windows with GPU support can be challenging, with multiple dependencies like NVIDIA drivers, the CUDA toolkit, the cuDNN library, PyTorch and TensorFlow versions, etc. In this comprehensive guide, I aim to provide a step-by-step process to set up PyTorch for GPU devices on Windows 10/11. Let's begin this post by going through the prerequisites like hardware requirements, driver setup, and installing CUDA, cuDNN, Anaconda, and PyTorch. We will share details on correctly configuring environment variables and verifying GPU access.

By the end, you will have learned the complete process of setting up PyTorch on Windows to leverage the power of NVIDIA GPUs for accelerating deep neural network training and inference.

Common Terminologies

Before diving into the installation process, let's familiarize ourselves with some common terminologies:

  • CPU (Central Processing Unit): The main processor in a computer that handles computations. CPUs are optimized for sequential serial processing.

  • GPU (Graphics Processing Unit): Specialized electronic circuits designed to rapidly process parallel workloads. GPUs are optimized for parallel processing and ideal for machine learning workloads. Popular GPUs are made by Nvidia.

  • NVIDIA: Leading manufacturer of GPUs commonly used for AI/ML workloads. Popular Nvidia GPUs include the Tesla and GeForce RTX series.

  • CUDA: Parallel computing platform created by Nvidia that allows software developers to leverage the parallel computing capabilities of Nvidia GPUs.

  • cuDNN: Nvidia's library of GPU-accelerated deep neural network primitives. Helps optimize common deep learning operations.

  • Anaconda: Open-source Python distribution designed for large-scale data processing, predictive analytics, and scientific computing. Comes bundled with many popular data science libraries.

  • PyTorch: Open-source machine learning framework based on Python and optimized for GPU acceleration. Provides flexibility like Python with high performance like C++.

  • TensorFlow: End-to-end open-source machine learning platform developed by Google. Offers many high-level APIs for building and training ML models.

  • IDE (Integrated Development Environment): A software application that provides tools and interfaces for programmers to develop software and applications. Examples: Visual Studio Code, PyCharm, and Jupyter Notebook.

  • CUDA Cores: Processing units within Nvidia GPUs designed specifically to perform the calculations required for parallel computing. More CUDA cores lead to improved parallel processing performance.

  • CUDA Toolkit: Software development kit created by Nvidia that provides GPU-accelerated libraries, compilers, development tools, and APIs for developing software that leverages Nvidia GPUs.

  • Conda Env: Self-contained directory that contains a specific collection of conda packages, a Python interpreter, and other dependencies needed to run an application. Helpful for managing dependencies.

  • FP32/FP64: Floating-point precision formats that represent 32-bit (single precision) or 64-bit (double precision) floating-point values. FP32 requires less memory and so is commonly used, but FP64 offers higher precision.

  • NVCC: Nvidia's C/C++ compiler that can compile code for both CPU and GPU. Part of the CUDA Toolkit.

  • Half Precision: 16-bit floating-point format that requires less memory and bandwidth compared to 32-bit FP32. Useful for some ML applications.

  • Auto Mixed Precision: Training deep learning neural networks using both lower precision (FP16) and higher precision (FP32) automatically based on need. Helps accelerate training while retaining accuracy.

  • Tensor Cores: Specialized cores within Nvidia GPUs designed specifically to accelerate the mixed-precision matrix multiplication operations commonly used in deep learning.

  • Machine Learning: The field of computer science that gives computers the ability to learn without being explicitly programmed. Focuses on computer programs that can teach themselves to grow, change, and improve on their own by using algorithms and statistical models to analyze data.

  • Deep Learning: A subfield of machine learning that uses neural networks modeled after the human brain and containing multiple layers. Excels at finding patterns in large amounts of unstructured data like images, video, audio, and text.

  • Natural Language Processing (NLP): The field of AI focused on enabling computers to understand, interpret, and manipulate human language. The key component of conversational AI.

Prerequisites to Set Up PyTorch for Your GPU on Windows 10/11

First and foremost, you can't set up CUDA or machine learning frameworks like PyTorch or TensorFlow on just any machine that has a GPU; certain hardware and software requirements must be met. Let's see the key prerequisites in this section.

Hardware Requirements

  • NVIDIA GPU: A CUDA-capable GPU from NVIDIA is essential. CUDA and related libraries like cuDNN only work with NVIDIA GPUs that ship with CUDA cores. Check out the CUDA-compatible GPUs here.

  • Compatible Motherboard: The motherboard should have a PCIe slot to accommodate the NVIDIA GPU and should be compatible with installing and interfacing with it.

  • Minimum 8GB RAM: Having enough RAM is important for running larger deep learning models. The CUDA guide recommends at least 8GB for optimal performance. If you are into heavy-duty tasks, we suggest going for 32GB or more on desktop computers.

  • Enough Disk Space: You will need a few GB of free disk space to install the NVIDIA drivers, CUDA toolkit, cuDNN, PyTorch, and other libraries. We installed CUDA 12.2 on our Windows PC; it took up about 20 GB of disk space.

Software Requirements

  • Windows 10/11 or Windows Server 2022/2019: The OS should be Windows 10 or the latest Windows 11 for full compatibility. On the server side, Microsoft Windows Server 2022 or Microsoft Windows Server 2019.

  • NVIDIA GPU Drivers: The latest Game Ready driver from NVIDIA that supports your GPU model. It allows Windows to recognize the GPU.

  • CUDA Toolkit: Provides libraries, APIs, and compilers like nvcc to enable GPU acceleration.

  • cuDNN: The GPU-accelerated library of deep learning primitives from NVIDIA.

  • Visual Studio: The Visual C++ redistributables are needed to run Python on Windows.

  • Anaconda/Miniconda: To manage Python packages and environments.

  • PyTorch/TensorFlow: The deep learning framework we aim to install and use with CUDA/GPUs.

These are the essential prerequisites in terms of hardware and software for setting up PyTorch on Windows with CUDA GPU acceleration, based on NVIDIA's official documentation. The actual installation steps will be covered later in this guide.

The Demo PC System Specification

We tried this on one of our Windows PCs, which has the below hardware and software.

Hardware:

  • CPU: Intel Core i7-7700, 8 cores / 16 threads at 3.2 GHz

  • RAM: 32 GB DDR4 2400 MHz

  • GPU: GTX 1050 2 GB

  • SSD: Intel NVME M.2 1TB

Software

  • OS: Windows 10 22H2

  • NVIDIA CUDA Toolkit: 12.2 (on system) and 12.1 (in Conda)

  • cuDNN: 8.9.4

  • Anaconda: 22.9.0

  • Python: 3.8, 3.9, 3.10

  • PyTorch: 2.1.0.dev20230902 py3.9_cuda12.1_cudnn8_0 (pytorch-nightly)

  • Visual Studio Community 2022 with Universal Windows Platform development and Desktop Development with C++ (optional)

  • PyCharm: 2023.2.1 (Community Edition)

How to Set Up PyTorch for Your GPU on Windows 10/11?

This is the phase in which we spend most of our time, sometimes more than expected. The real challenge here is neither the procedure nor the applications/packages/drivers; it's the compatibility between them (GPU card, storage, RAM, Python, CUDA, cuDNN, PyTorch, or TensorFlow). That's why it takes time to figure out the combination of versions that matches your computer hardware.

We have tried to cover detailed step-by-step instructions to set up PyTorch for your GPU on a Windows PC. We used Windows 10 for this demo, but the procedure is the same for Windows 11.

Let's start this process with the installation of the NVIDIA drivers.

Step 1: Install NVIDIA GPU Drivers

The first thing you need is the proper driver for your NVIDIA GPU. Head over to NVIDIA's driver download page and get the latest Game Ready driver or Studio driver. Make sure to select Windows 10/11 64-bit as the operating system.


Alternatively, you can go to the NVIDIA GPU card page and download the driver. You will get the same driver installer either way. 


Run the downloaded executable and follow the on-screen instructions. A system restart will be required after installing the drivers; restart the system to complete the driver installation process.

Note: If you are not sure which GPU you have, open the Device Manager window and expand Display Adapters. Your GPU card should be listed there if it is configured correctly.

Step 2: Install the CUDA Toolkit

CUDA, or Compute Unified Device Architecture, is NVIDIA's parallel computing platform and API model that allows software developers to leverage GPU acceleration. It is required for frameworks like PyTorch to utilize NVIDIA graphics cards.

Go to the CUDA toolkit archive and download the latest stable version that matches your operating system, GPU model, and the Python version you plan to use (Python 3.x recommended).


Again, run the installer, accepting all the default options. Remember the installation path, as we will need it to configure environment variables later on. The current release can be downloaded from https://developer.nvidia.com/cuda-downloads?target_os=Windows&target_arch=x86_64

Step 3: Install cuDNN Library

To further boost performance for deep neural networks, we need the cuDNN library from NVIDIA. You will need to create an NVIDIA developer account to access the download link.


Download the cuDNN version that matches your CUDA toolkit version. For example, if you installed CUDA 10.1, you need cuDNN 7.6.X.

Unzip the downloaded package and copy the contents of bin, include, and lib to the respective directory paths where CUDA is installed on your machine. For example:


C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\include
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\lib


We downloaded cuDNN 8.9.4, unzipped it, and copied the DLLs from bin, include, and lib to the respective folders under NVIDIA GPU Computing Toolkit\CUDA\v12.2\

Step 4: Configure Environment Variables

We need to update a few environment variables for CUDA, cuDNN, and the NVIDIA driver to work properly.

Press Win+R to open the Run prompt, type sysdm.cpl, and press Enter. System Properties opens. Go to the Advanced tab and click Environment Variables. Under System Variables, select Path and add the following:


Under Path:
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2\bin
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2\libnvvp
C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common
C:\Program Files\NVIDIA Corporation\Nsight Compute 2023.2.2\
C:\Program Files\NVIDIA Corporation\NVIDIA NvDLISR


CUDA_PATH:
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2


CUDA_PATH_V12_2:
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2
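After restarting the terminal, you can sanity-check the variables from Python. A minimal sketch (the variable names are the v12.2 defaults from this guide; adjust for your CUDA version):

```python
import os

# Print the CUDA-related environment variables this guide sets;
# "<not set>" indicates a variable is missing from the environment.
for name in ("CUDA_PATH", "CUDA_PATH_V12_2"):
    print(f"{name} = {os.environ.get(name, '<not set>')}")
```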

Step 5: Install Anaconda

We will use Anaconda to set up and manage the Python environment for PyTorch.
1. Download the latest Anaconda installer for Windows from https://www.anaconda.com/products/distribution
2. Choose Python 3.10 or higher during installation.
3. Complete the installation process and restart your terminal.
4. Open the Anaconda Prompt which will have the Conda environment activated by default.

To verify the installation is successful, fire up the Anaconda Prompt and enter this command:

conda --version

Refer to these online documents for installation, setting up the environment variables, and troubleshooting:
https://docs.anaconda.com/free/anaconda/install/windows/


Once installed, open the Anaconda Prompt terminal and create a new Conda environment:

conda create -n <Conda env name> python=3.9


Activate the environment:

conda activate <Conda env name>


To deactivate the Conda Env:

conda deactivate


To activate the Base Env:

conda activate base


To see the list of Environments:

conda env list
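A quick way to confirm which environment your Python interpreter actually belongs to (useful before installing anything into it) is to inspect sys.prefix. This is a small sketch based on the usual Anaconda directory layout; the conda_env_name helper is ours, not a conda API:

```python
import sys

def conda_env_name(prefix=None):
    """Guess the active conda environment name from the interpreter prefix.

    Named environments live under <anaconda>\\envs\\<name>; anything else
    is assumed to be the base environment.
    """
    prefix = prefix or sys.prefix
    parts = prefix.replace("\\", "/").rstrip("/").split("/")
    if "envs" in parts:
        i = parts.index("envs")
        if i + 1 < len(parts):
            return parts[i + 1]
    return "base"

if __name__ == "__main__":
    print("Active environment:", conda_env_name())
```

Run this inside the activated environment; if it prints "base" when you expected your new environment, re-run conda activate.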

Step 6: Install IDE (Optional)

This step is totally optional. You can use any code editor of your choice. We recommend either PyCharm or Visual Studio Code, as both have excellent Python support through extensions.


For this demo, we will use PyCharm as our chosen IDE for Python development. You can check out how to
install PyCharm on Windows here. If you are not a fan of any IDE, you can download the Python interpreter directly and use it from the CLI.

Download and install PyCharm Community Edition from jetbrains.com/pycharm. Make sure to customize the installer to add Anaconda Python environment support.


Once setup is complete, open PyCharm. We need to configure it to use the Conda environment we created earlier. Go to File > Settings > Project Interpreter. Click the gear icon and select Add. Locate the Python executable in the <conda env> folder in the Anaconda installation.

Great! Now PyCharm is configured to use the right Python environment.

Let us know if you need any help setting up the IDE to use the PyTorch GPU environment we have configured.

Step 7: Take Note of the CUDA Toolkit, CUDA Runtime API, CUDA Driver API, and GPU Card Versions

Before going ahead with the installation, you should make a note of the CUDA Toolkit version installed on your system. CUDA has two APIs: the Runtime API and the Driver API. Run these commands to check the CUDA API versions:

Run this command to check the Runtime API version:

nvcc --version


Output
---
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Aug_15_22:09:35_Pacific_Daylight_Time_2023
Cuda compilation tools, release 12.2, V12.2.140
Build cuda_12.2.r12.2/compiler.33191640_0
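If you want to read the release number out of that output in a script (a small sketch of ours, not an official NVIDIA tool), a regular expression is enough:

```python
import re

def nvcc_release(nvcc_output):
    """Extract the CUDA release (e.g. '12.2') from `nvcc --version` output."""
    match = re.search(r"release (\d+\.\d+)", nvcc_output)
    return match.group(1) if match else None

sample = "Cuda compilation tools, release 12.2, V12.2.140"
print(nvcc_release(sample))
```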

Run this command to check that the GPU is recognized:

nvidia-smi --list-gpus

Output
---
GPU 0: NVIDIA GeForce GTX 1050 (UUID: GPU-c45d4514-6d35-9cb7-cfb9-c2fae7306659)


Run this command to check the Driver API version:

nvidia-smi

Output
---
Sun Sep 3 23:18:11 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 537.13 Driver Version: 537.13 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce GTX 1050 WDDM | 00000000:01:00.0 On | N/A |
| 45% 39C P8 N/A / 75W | 1306MiB / 2048MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+


+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 4552 C+G ....Search_cw5n1h2txyewy\SearchApp.exe N/A |
| 0 N/A N/A 4896 C+G ...ocal\Programs\Evernote\Evernote.exe N/A |
| 0 N/A N/A 7772 C+G ...CBS_cw5n1h2txyewy\TextInputHost.exe N/A |
| 0 N/A N/A 8088 C+G ...oogle\Chrome\Application\chrome.exe N/A |
| 0 N/A N/A 8932 C+G ...ekyb3d8bbwe\PhoneExperienceHost.exe N/A |
| 0 N/A N/A 9884 C+G ...m Files\Mozilla Firefox\firefox.exe N/A |
| 0 N/A N/A 10504 C+G ...2txyewy\StartMenuExperienceHost.exe N/A |
| 0 N/A N/A 11060 C+G C:\Windows\explorer.exe N/A |
| 0 N/A N/A 13208 C+G ...les\Microsoft OneDrive\OneDrive.exe N/A |
| 0 N/A N/A 14972 C+G ...pulse\Screenpresso\Screenpresso.exe N/A |
| 0 N/A N/A 16352 C+G ...Brave-Browser\Application\brave.exe N/A |
| 0 N/A N/A 16864 C+G ...72.0_x64__8wekyb3d8bbwe\GameBar.exe N/A |
| 0 N/A N/A 17252 C+G ....Search_cw5n1h2txyewy\SearchApp.exe N/A |
| 0 N/A N/A 17404 C+G ...m Files\Mozilla Firefox\firefox.exe N/A |
| 0 N/A N/A 17780 C+G ...GeForce Experience\NVIDIA Share.exe N/A |
| 0 N/A N/A 18216 C+G ...es (x86)\MSI\Fast Boot\FastBoot.exe N/A |
| 0 N/A N/A 18288 C+G ...l\Microsoft\Teams\current\Teams.exe N/A |
| 0 N/A N/A 19000 C+G ....0_x64__8wekyb3d8bbwe\HxOutlook.exe N/A |
| 0 N/A N/A 20136 C+G ...e Stream\80.0.1.0\GoogleDriveFS.exe N/A |
| 0 N/A N/A 21352 C+G ...2.0_x64__cv1g1gvanyjgm\WhatsApp.exe N/A |
| 0 N/A N/A 23116 C+G ...ork Manager\MSI_Network_Manager.exe N/A |
| 0 N/A N/A 29064 C+G ...5n1h2txyewy\ShellExperienceHost.exe N/A |
| 0 N/A N/A 29400 C+G ...l\Microsoft\Teams\current\Teams.exe N/A |
+---------------------------------------------------------------------------------------+

CUDA Version: 12.2
GPU: NVIDIA GeForce GTX 1050
Driver Version: 537.13

The above information is required to download the PyTorch and TensorFlow frameworks.
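One practical use of these numbers: the CUDA version reported by nvidia-smi is the highest Runtime API version the installed driver supports, so the toolkit version from nvcc should not exceed it. A small sketch of that check (the helper name is ours):

```python
def toolkit_compatible(driver_cuda, toolkit_cuda):
    """Return True if the driver's CUDA version covers the installed toolkit.

    The driver's reported CUDA version (from nvidia-smi) must be greater
    than or equal to the toolkit version (from nvcc).
    """
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(driver_cuda) >= as_tuple(toolkit_cuda)

# Our setup: driver 537.13 reports CUDA 12.2, and nvcc reports release 12.2.
print(toolkit_compatible("12.2", "12.2"))
```

If this returns False for your machine, update the NVIDIA driver or install an older CUDA Toolkit before proceeding.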

Step 8: Install PyTorch and CUDA for your GPU

Now we are all set to install PyTorch and CUDA for your GPU on a Windows machine. Visit the PyTorch website: https://pytorch.org/get-started/locally/ to construct the command to install PyTorch and CUDA.

Select the PyTorch Build, OS, Package, Programming Language, and CUDA version.

You can choose Pip instead of the Conda package. It's absolutely fine and gives the same outcome. Since we used Anaconda as our package manager, we will go with the Conda option.

In our case, we selected:

1. PyTorch Build: Nightly (since we are using the latest NVIDIA CUDA Toolkit)
2. OS: Windows
3. Package: Conda (you can use Pip; both give the same outcome)
4. Language: Python
5. Compute Platform: CUDA 12.1 (since we can't select 12.2, we selected the closest version)

Copy the command and run it in your Conda terminal. Note: Ensure you have activated the correct Conda environment before you execute this command.

conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch-nightly -c nvidia

This will install the latest PyTorch nightly build (in our case, 2.1.0.dev20230902, py3.9_cuda12.1_cudnn8_0, from the pytorch-nightly channel) with CUDA 12.1.

Step 9: Verify GPU Usage

We can validate that everything is working as expected by running a small PyTorch program:

import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
else:
    device = torch.device("cpu")
print("Using", device, "device")
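To go one step further than just printing the device, you can run a small computation on it; if CUDA was selected, the matrix multiply below executes on the card. This is a minimal sketch using only standard PyTorch calls:

```python
import torch

# Pick the GPU when available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.rand(3, 3, device=device)  # tensor allocated directly on the chosen device
y = x @ x.T                          # matrix multiply runs on that device
print("Computed on:", y.device)
```

On a correctly configured machine this prints "Computed on: cuda:0"; on a CPU-only install it falls back to "cpu" instead of raising an error.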


Install TensorFlow

We tried setting up GPU support for TensorFlow, but unfortunately, we couldn't configure it. After browsing the TensorFlow site, we learned that there is no native GPU support for Windows and Mac; both have CPU support only. If you want to run TensorFlow natively on your GPU, it is only possible on Linux.

Ref:

https://www.tensorflow.org/install/pip#windows-native_1
https://www.tensorflow.org/install/source_windows

GPU support on native Windows is only available for version 2.10 or earlier; starting with TF 2.11, CUDA builds are not supported for Windows. To use the TensorFlow GPU on Windows, you will need to build/install TensorFlow in WSL2, or use tensorflow-cpu with the TensorFlow-DirectML-Plugin.


Bottom Line

In this comprehensive guide, we went through the entire process of setting up PyTorch on Windows 10/11 with CUDA GPU acceleration.

We looked at the hardware and software prerequisites, like having an NVIDIA GPU and a compatible motherboard, and installing the necessary NVIDIA drivers, CUDA toolkit, cuDNN library, etc.

The step-by-step installation process was explained in detail, including tips like doing a clean driver install and verifying the CUDA installation using nvcc.

Configuring the environment variables properly is key to ensuring PyTorch can locate the CUDA install directory. Using Anaconda for Python package management simplifies setting up Conda environments for PyTorch.

The latest PyTorch stable release was installed with matching CUDA driver support for leveraging the GPU acceleration. A simple PyTorch program was run to confirm that the GPU is accessible.

Following these steps will help you correctly configure PyTorch on Windows for substantial speedups on deep learning workloads by harnessing the power of your NVIDIA GPU. However, the actual performance gain depends on the GPU card used relative to the CPU. We plan to cover performance testing in another post.

You now have the required setup to start developing and training large neural network models efficiently by exploiting the massively parallel processing capabilities of NVIDIA GPUs. The flexible PyTorch framework offers you full control over model building, debugging, and optimization for your AI projects.

We hope this guide served as a helpful reference for setting up PyTorch for your GPU on Windows. Let us know if you have any other questions in the comments! We thank you for reading this blog post. Visit our website, thesecmaster.com, and our social media pages on Facebook, LinkedIn, Twitter, Telegram, Tumblr, and Medium, and subscribe to receive updates like this.


Arun KL

Arun KL is a cybersecurity professional with 15+ years of experience in IT infrastructure, cloud security, vulnerability management, Penetration Testing, security operations, and incident response. He is adept at designing and implementing robust security solutions to safeguard systems and data. Arun holds multiple industry certifications including CCNA, CCNA Security, RHCE, CEH, and AWS Security.

Check if CUDA is available with torch:

import torch

def check_cuda():
    # Prints the CUDA version torch was built with (None on CPU-only builds).
    print(torch.version.cuda)
    cuda_is_ok = torch.cuda.is_available()
    print(f"CUDA Enabled: {cuda_is_ok}")

check_cuda()

Get CUDA version:

Sun Aug 13 01:27:00 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 531.79                 Driver Version: 531.79       CUDA Version: 12.1     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                      TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf            Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 2060 S...  WDDM | 00000000:01:00.0  On |                  N/A |
| 40%   37C    P8               35W / 105W|   1762MiB /  8192MiB |     23%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

So the CUDA version for our driver is 12.1.
But currently (2023.08.13), the latest stable PyTorch release only supports up to CUDA 11.8,
so we need to download and install an older CUDA version.

I recommend downloading and installing CUDA 11.7:

  • CUDA Toolkit Archive | NVIDIA Developer
    • https://developer.nvidia.com/cuda-toolkit-archive
    • https://developer.nvidia.com/cuda-11-7-0-download-archive?target_os=Windows&target_arch=x86_64&target_version=10&target_type=exe_local

Now we can use nvcc --version to check the CUDA version:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Tue_May__3_19:00:59_Pacific_Daylight_Time_2022
Cuda compilation tools, release 11.7, V11.7.64
Build cuda_11.7.r11.7/compiler.31294372_0

Add the following paths to the Path environment variable (the installer adds them by default):

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7\bin
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7\libnvvp

Run the following commands to install PyTorch with CUDA enabled:

python -m pip uninstall torch
python -m pip cache purge

# Use 11.7, it should be compatible
python -m pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117

# If want to use preview version of torch with CUDA 12.1
# python -m pip install torch torchvision --pre -f https://download.pytorch.org/whl/nightly/cu121/torch_nightly.html

Issues

If torch.version.cuda always returns None, it means the installed PyTorch library was not built with CUDA support,
so we need to install a CUDA-enabled build of torch instead.

python -m pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
# python -m pip install torch==2.0.0+cu117 torchvision==0.15.1+cu117 torchaudio==2.0.1 --index-url https://download.pytorch.org/whl/cu117

Alternatively, your CUDA version may be too new for torch to support yet, in which case you need to download and install another CUDA version.
I recommend using 11.7, since 12.1 is still too new (as of this writing):

  • CUDA Toolkit 11.7 Downloads | NVIDIA Developer
    • https://developer.nvidia.com/cuda-11-7-0-download-archive
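The PyTorch wheel indexes follow a simple naming convention: the CUDA version with the dot removed, prefixed with "cu". A tiny sketch (the helper name is ours) that builds the --index-url for a given CUDA version:

```python
def torch_index_url(cuda_version):
    """Build the pip index URL for a CUDA-enabled torch wheel,
    e.g. '11.7' -> 'https://download.pytorch.org/whl/cu117'."""
    tag = "cu" + cuda_version.replace(".", "")
    return "https://download.pytorch.org/whl/" + tag

print(torch_index_url("11.7"))
```

This matches the cu117 and cu121 URLs used in the install commands above.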

References:

  • Install pytorch with Cuda 12.1 — PyTorch Forums

    • https://discuss.pytorch.org/t/install-pytorch-with-cuda-12-1/174294/17
  • Pytorch installation with CUDA 12.1 — Reddit

    • https://www.reddit.com/r/pytorch/comments/11z9vkf/comment/jm5g09k/?utm_source=share&utm_medium=web2x&context=3
  • Start Locally | PyTorch

    • https://pytorch.org/get-started/locally/
  • Previous PyTorch Versions | PyTorch

    • https://pytorch.org/get-started/previous-versions/
