Hello everyone, I'm Xiaozhi. Because of the pandemic, wafers are in short supply, and with the mining boom on top of that, chip prices are rising worldwide.

I can't afford a graphics card myself, so this post comes from a customer's question. The customer mainly works on three-dimensional object recognition, such as box recognition, which is quite interesting; I'll study it for a while and then share it with you.

I've previously shared an article about helping someone with a ROS problem on Xianyu for $80, where the ROS problem turned out to be a CUDA environment problem.

If you're interested, take a look: Turning skills into cash!! What did the $80 of Xianyu tech support actually involve? # Juejin article # juejin.cn/post/698132…

Later I searched on Xianyu and found that there really are a lot of services like this, and where there's a market there must be demand.

So a lot of people must be running into this problem. Today Xiaozhi will explain the relationship between Torch, cuDNN, CUDA, and our graphics card, so that you never have to pay someone to fix this kind of problem again.

First of all, Xiaozhi will tell you a secret about computers: every operation ultimately becomes arithmetic and logic computed on a processor.

How it works

When we use Torch for convolutions and other operations, the work ultimately ends up as arithmetic and logic on a processor. The question is: how does that happen?

It actually goes through a chain like this.

There are five players in the chain:

Physical graphics card: the real computing unit, and the one you have to pay for

Graphics card driver: the driver that ships with the graphics card (all hardware is driven by software)

CUDA: CUDA (Compute Unified Device Architecture) is a computing platform launched by NVIDIA; it is responsible for calling the graphics card driver to carry out the computation

cuDNN: an optional plugin; everything works without it, but it runs much faster with it

Torch: no introduction needed for this framework, since you are certainly familiar with it already

With so many players depending on each other, and with their versions tightly coupled, it is easy to get the setup wrong. The most important thing is to get the versions matched up; once you do, everything falls into place.
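Once all five players are installed, a quick way to see whether the versions line up is to ask Torch itself. This is only a minimal sketch, assuming a working PyTorch install with CUDA support; the exact numbers will differ on your machine.

```python
import torch

# Each print below corresponds to one player in the chain.
print("Torch version:    ", torch.__version__)           # the framework itself
print("CUDA available:   ", torch.cuda.is_available())   # can Torch reach the driver?

if torch.cuda.is_available():
    print("GPU name:         ", torch.cuda.get_device_name(0))   # the physical graphics card
    print("CUDA (built with):", torch.version.cuda)              # CUDA version Torch was built against
    print("cuDNN version:    ", torch.backends.cudnn.version())  # the optional accelerator plugin
```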

The first link: the physical graphics card and its driver

First use lshw to check your graphics card model (look at the display entry), then go to the official site and pick the matching driver: Official GeForce Drivers | NVIDIA.
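If you would rather query the card from a script instead of reading the lshw output by hand, a minimal sketch looks like this (it assumes a Linux machine with lshw installed; the command is exactly what you would type in the terminal):

```python
import subprocess

# "-C display" limits the output to graphics hardware; the "product:" field
# is the card model you need when choosing a driver on NVIDIA's site.
result = subprocess.run(["lshw", "-C", "display"], capture_output=True, text=True)
print(result.stdout)
```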

The second link: the graphics card driver and CUDA

After installing the driver, you can run nvidia-smi in a Linux terminal to see the driver's version number.

Then, using the driver/CUDA version comparison table, we can see which CUDA versions a driver newer than 367.51 supports.

Checking the table, we see that we can install CUDA 8.0 or above.
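To read the driver version programmatically and then compare it against that table, you can query nvidia-smi directly. A minimal sketch, assuming the NVIDIA driver (and therefore nvidia-smi) is already installed:

```python
import subprocess

# Ask nvidia-smi for just the driver version, without the full table output.
out = subprocess.run(
    ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
    capture_output=True, text=True,
)
driver_version = out.stdout.strip()
print("Driver version:", driver_version)  # compare this against NVIDIA's driver/CUDA table
```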

The third link: Torch and the CUDA version

This diagram is taken from the web; it also shows the version relationships with Python and torchvision. Once you settle on a Torch version, everything else follows from it.
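As a concrete illustration, installing and then verifying a matched pair looks roughly like this. The versions below (Torch 1.7.1 built for CUDA 10.1) are only an example taken from the "Previous PyTorch Versions" page; substitute the combination that matches your own driver and CUDA.

```python
# Example install command (run in the shell, not in Python):
#   pip install torch==1.7.1+cu101 torchvision==0.8.2+cu101 \
#       -f https://download.pytorch.org/whl/torch_stable.html

import torch

print(torch.__version__)          # e.g. 1.7.1+cu101
print(torch.version.cuda)         # e.g. 10.1 -> must be a version your driver supports
print(torch.cuda.is_available())  # True means every link below Torch is working
```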

The fourth link: cuDNN and CUDA

I left this link for last because your code can run perfectly well without cuDNN.

So how do you match cuDNN to your CUDA version?

Refer to: cuDNN Archive | NVIDIA Developer

Each cuDNN release lists the CUDA version it supports, so find the download that matches your CUDA version. After downloading, unzip it and copy the files into your CUDA directory, and the installation is done. Pretty convenient, right?
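After copying the files into the CUDA directory, you can let Torch confirm that cuDNN is actually visible. A minimal sketch, again assuming PyTorch with CUDA support is installed:

```python
import torch

print(torch.backends.cudnn.is_available())  # True if the cuDNN libraries were found
print(torch.backends.cudnn.version())       # e.g. 7605 means cuDNN 7.6.5

# Torch enables cuDNN by default whenever it is present; this flag lets you toggle it.
torch.backends.cudnn.enabled = True
```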

I’ll end with a few links:

cuDNN download: cuDNN Archive | NVIDIA Developer

PyTorch website: Previous PyTorch Versions | PyTorch

Graphics driver download: Official GeForce Drivers | NVIDIA

CUDA and cuDNN installation and uninstallation: CUDA and cuDNN – jianshu.com

Next time we'll share how to run Gazebo simulation on the graphics card and experience that silky-smooth feel ~

Welcome to follow my official account: Witty People!