Setting up CUDA on a too-old laptop with a too-new Linux
It turns out that installing CUDA correctly on a modern Linux running on a 2012 laptop can be a pain. In this post I try to comprehensively describe my journey from installing the correct drivers, through compiling an old GCC, to finally getting CUDA to work. If someone finds themselves in a similar situation, I hope this will spare them a few hours of head-scratching.
Background
Since this is a technical tutorial on a very specific topic rather than a regular blog post, I will assume that you already know what general-purpose computing on GPUs (GPGPU) and CUDA are. But even if you don't, stay tuned! – I am planning a post on GPU computing, with a simple model of a physical system as an example.
I have a 2012 Asus laptop with both Intel integrated graphics and an nVidia GeForce GT 635M (so-called Optimus technology), with Fedora 31 on board. I wanted to do some general-purpose GPU computing. As it turns out, properly installing the nVidia drivers and the CUDA Toolkit in such a configuration (modern Linux + Optimus + "obsolete" Fermi-line GPU) can be a pain. Brace yourselves!
Driver installation
Unfortunately, nVidia does not release open-source Linux drivers for its graphics cards. Fedora and other popular free Linux distributions come with an open-source, community-made alternative called nouveau. It is an amazing product, and its creators certainly deserve a lot of praise. I have been using it without any problems for many years, but it has one shortcoming: it does not support CUDA. So, when I decided to dabble a bit in GPU-accelerated numerics, I had to turn to nVidia's proprietary drivers.
On paper, the installation process is very simple. There are two main ways to do it. The obvious approach is to go to the official nVidia website, find the appropriate driver for your device, and then follow the official instructions. However, my first attempt at that left my system unable to display any graphics at all. I found it much easier to use the site only to find your device's appropriate driver version (340, 390, or 400+), and then install the driver from RPMFusion through the package manager. If you don't have RPMFusion enabled yet, follow the instructions here. In my case (GeForce GT 635M → driver version 390.xxx) the installation boils down to three commands:
sudo dnf update
sudo dnf install xorg-x11-drv-nvidia-390xx akmod-nvidia-390xx
sudo dnf install xorg-x11-drv-nvidia-390xx-cuda # Enables CUDA capabilities
Before you reboot
One last, crucial thing is to check whether your Linux kernel version is supported by the driver. At the time of writing (March 2020), the changelog on the driver's official website reads: "Fixed kernel module build problems with Linux kernel 5.4.0 release candidates." If the driver is compatible with your kernel version, you can skip the next two paragraphs.
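If you want to do this check from the command line, version comparison can be scripted with sort -V; here is a minimal sketch using hypothetical version numbers (in practice you would set running to the output of uname -r):

```shell
# Hypothetical versions - on your machine, use: running="$(uname -r)"
running="5.5.7"
supported_max="5.4"   # newest kernel the driver supports, per the changelog
# sort -V sorts by version number; the last line is the newer version
newest="$(printf '%s\n' "$running" "$supported_max" | sort -V | tail -n1)"
if [ "$newest" != "$supported_max" ]; then
    echo "kernel $running is newer than the driver supports"
fi
```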
My system at the time already used kernel 5.5, which came out as stable in January 2020. If your Linux is not a fresh installation, you probably have an older fall-back kernel installed side by side with the newest version, and it is probably compatible with the nVidia driver (e.g. I had Linux 5.3). Then the easiest way to start using the new capabilities of your system right away is to reboot and choose the lower-version kernel in the GRUB menu. If you don't see the GRUB menu before boot, try holding Shift during startup, or run (tested only on Fedora 31):
sudo grub2-editenv - unset menu_auto_hide
If you don't have a compatible older kernel installed, you might find one with a bit of luck using
sudo dnf list 'kernel*'
and if not – I can only refer you to this Fedora Magazine article.
You can also try manually applying a patch to the driver making it compatible with newer kernels. This approach is recommended for advanced users, and the exact steps to take are dependent on the specific driver and kernel versions. From what I have seen the best resources on these matters are Fedora Project and/or nVidia Developer forums (e.g. for 390.192 driver and 5.5 kernel see 1 and 2).
Sometimes it might be necessary to rebuild your initramfs image (the initial ramdisk, used by Linux during boot as a temporary root filesystem) using sudo dracut -f. If you don't, it is possible that the nouveau driver will load at boot instead of the nVidia one (although I am not sure of the specific circumstances under which this happens).
Installing CUDA Toolkit
Now that you have the driver installed (you can check that with the nvidia-smi command), it is time to install the appropriate CUDA Toolkit. To make things harder, new versions of CUDA will not work with older CUDA-enabled GPUs, and that information is unfortunately not immediately clear if you just go and try to grab the Toolkit from the official website. For example, if you have a Fermi-architecture device like me, you can find the relevant information in the CUDA Toolkit v8.0 Release Notes (!!)
- Deprecated Features The following features are deprecated in the current release of the CUDA software. The features still work in the current release, but their documentation may have been removed, and they will become officially unsupported in a future release. We recommend that developers employ alternative solutions to these features in their software. (...) Fermi Architecture Support. Fermi architecture support is being deprecated in the CUDA 8.0 Toolkit, which will be the last toolkit release to support it. Future versions of the CUDA Toolkit will not support the architecture and are not guaranteed to work on that platform. Note that support for Fermi is being deprecated in the CUDA Toolkit but not in the driver. Applications compiled with CUDA 8.0 or older will continue to work on Fermi with newer NVIDIA drivers.
Before you continue, make sure to choose the appropriate CUDA Toolkit version – in this case, v8.0 GA2. You can download older versions of the software here. When selecting the target platform, choose the newest available version of your OS. Don't worry if it's a lot of versions behind the one you are running (like Fedora 23 vs 31) – what matters is that you have the correct GPU, GPU driver, and GCC versions. For "Installer type", choose "runfile (local)".
While the CUDA Toolkit is downloading (it's around 1.5 GB), you can perform what the CUDA docs refer to as "Pre-installation Actions". As I said before, you can ignore point 2.2 ("Verify you have a supported version of Linux"). The most important point is 2.4 – installation of the correct kernel headers and kernel-devel package. Run the commands suggested by the docs:
sudo dnf install kernel-devel-$(uname -r) kernel-headers-$(uname -r)
On Fedora, if dnf doesn't find the specific kernel-headers package, you can list all available versions of a package with sudo dnf --showduplicates list kernel-headers. If there is a version differing from the uname -r output only in the patch number (e.g. 5.3.6 vs 5.3.15), install it. Otherwise, I can't help you, but perhaps you can try your luck on Koji, similarly to how it was with installing a specific kernel version.
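The "only the patch version differs" check can also be scripted; a sketch with hypothetical version numbers (in practice, running would come from uname -r and candidate from the dnf listing):

```shell
running="5.3.6"     # hypothetical: uname -r output, stripped of the -release suffix
candidate="5.3.15"  # hypothetical: a version shown by dnf --showduplicates
# ${var%.*} strips the last dot-separated component, leaving major.minor
if [ "${running%.*}" = "${candidate%.*}" ]; then
    echo "only the patch version differs - should be safe to install"
fi
```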
Now you are ready to install the CUDA Toolkit itself.
First, make sure that the environment variable PERL5LIB contains the directory with the CUDA installation runfile; if you run the installer from that directory, export PERL5LIB=. is enough.
Then go to the directory with the downloaded runfile and run
sudo sh cuda_8.0.61_375.26_linux.run --override --toolkitpath /usr/local/cuda-8.0/
The additional options silence the installer's complaints about an unsupported configuration. When the installer starts, you can press q to skip reading the EULA, and then the rest of the process should look as follows:
Do you accept the previously read EULA?
accept/decline/quit: accept
You are attempting to install on an unsupported configuration. Do you wish to continue?
(y)es/(n)o [ default is no ]: y
Install NVIDIA Accelerated Graphics Driver for Linux-x86_64 375.26?
(y)es/(n)o/(q)uit: n
Install the CUDA 8.0 Toolkit?
(y)es/(n)o/(q)uit: y
Enter Toolkit Location
[ default is /usr/local/cuda-8.0/ ]:
Do you want to install a symbolic link at /usr/local/cuda?
(y)es/(n)o/(q)uit: y
Install the CUDA 8.0 Samples?
(y)es/(n)o/(q)uit: y
Enter CUDA Samples Location
[ default is /home/YOUR_USERNAME ]:
Installing the CUDA Toolkit in /usr/local/cuda-8.0 ...
Missing recommended library: libXmu.so
Installing the CUDA Samples in /home/tom/CUDASamples ...
Copying samples to /home/tom/CUDASamples/NVIDIA_CUDA-8.0_Samples now...
Finished copying samples.
Some explanation: we don't install the driver from the CUDA installer because we already have our own, and it will work with CUDA. Using the driver from the Toolkit would not help in any way, and it might break something. The CUDA Samples are useful for testing that everything works. Don't worry about missing recommended libraries – CUDA will still work without them. However, if you want the full capabilities of the technology, you can always install the missing packages. Just google which package provides the library in question and install it with the package manager, e.g. sudo dnf install libXmu-devel mesa-\*.
Environment variables
Now, the last step is to set up the environment variables PATH and LD_LIBRARY_PATH. You can just add the following lines to your profile file (probably .bashrc; if not, then .profile or .zshrc):
# CUDA configuration
export PATH=$PATH:/usr/local/cuda-8.0/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-8.0/lib64
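After re-sourcing your profile (or opening a new terminal), a quick sanity check that the variables took effect might look like this (a sketch; nvcc itself will only resolve once the Toolkit files are in place):

```shell
export PATH=$PATH:/usr/local/cuda-8.0/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-8.0/lib64
# Both variables should now mention the CUDA directories
echo "$PATH" | grep -q '/usr/local/cuda-8.0/bin' && echo "PATH ok"
echo "$LD_LIBRARY_PATH" | grep -q '/usr/local/cuda-8.0/lib64' && echo "LD_LIBRARY_PATH ok"
```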
Compiling an old GCC
OK, so you've got CUDA installed! Congrats! However, if we try to compile anything with nvcc, we are met with harsh reality:
error: #error -- unsupported GNU version! gcc versions later than 5 are not supported!
And in fact, Fedora 31 at the time shipped with GCC 9.2, which apparently is too new for the nVidia software. But this should be easy to fix, right? Just install GCC 5.3 (the officially supported version of the compiler)? There are several ways to do it, and there are archival binaries out there. However, installing from rpm archives can throw you into dependency hell, and I was ready for a small challenge, so I settled on compiling GCC from source. There is an excellent tutorial on this here – you can just follow the steps from there (use --enable-languages=c,c++ if you only need the old GCC for CUDA)! Below I will just add solutions to a few problems I encountered, probably due to the fact that GCC 9 can sometimes be too new even for GCC 5.
GCC 5.3 compilation with GCC 9.2: trouble-shooting
Problem 1. cfns.gperf:101:1: error: redeclared inline with 'gnu_inline' attribute on 'libc_name_p'
There are two definitions of what inline means in C: one from the standard, and one from an old GNU extension. New GCC versions don't allow requesting both behaviours for the same function. To resolve the problem I slightly modified the solution from this Stack Exchange post (here's my answer). Tl;dr: in cfns.h, replace the declaration and the beginning of the definition of libc_name_p (2 places) with
#ifdef __GNUC__
#ifdef __GNUC_STDC_INLINE__
__attribute__ ((__gnu_inline__))
#else
__inline
#endif
#endif
const char * libc_name_p (const char *, unsigned int);
(the #else ensures that at any given time either __inline or __gnu_inline__ is present, but not both).
Problem 2. ./md-unwind-support.h:141:18: error: field 'uc' has incomplete type
A struct inside a struct seems to be too much for a modern GCC. But what are typedefs for? Just change all occurrences of struct ucontext to ucontext_t. Details: solution link. Note: md-unwind-support.h is in build/, not in source/!
Problem 3. error: sys/ustat.h: no such file or directory
The GCC 5.3.0 sources had an unnecessary #include <sys/ustat.h>, even though this header is deprecated. Removing this include and a few unused definitions solves the issue. Details: solution link.
Problem 4. You can encounter a problem similar to Problem 2 in libsanitizer/sanitizer_common/sanitizer_linux.cc. The solution is similar and available here.
Problem 5. undefined SIGSEGV
Somehow, one file in the sources of the GCC 5.3.0 sanitizer that I downloaded was missing #include <signal.h>. You can literally just add this one line manually, or use this patch.
Wow! That was a bit of work! Now (assuming you followed the instructions from the linked tutorial) you can verify that your GCC works by compiling some "hello world" in C: gcc-5.3.0 -o hello.out hello.c. Hello World!
Final steps
We've got the GPU driver, CUDA, and the C/C++ compiler. But if you go to the CUDA samples directory and try to build something, you will see that we are still not there yet! There is a bug that makes CUDA 8 refuse to work with GNU C Library (glibc) >= 2.26. Fortunately, there is a dirty but working hack, first described here. Just add a #define _BITS_FLOATN_H directive at the top of /usr/local/cuda-8.0/include/host_defines.h!
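If you prefer not to open an editor as root, the same edit can be done with sed. A sketch, demonstrated here on a scratch copy (on the real /usr/local/cuda-8.0/include/host_defines.h you would need sudo, and it's wise to back the file up first):

```shell
# Demonstrate the edit on a scratch copy of the header
f="/tmp/host_defines.h"
printf '/* original header contents */\n' > "$f"
# '1i' inserts a line before line 1, i.e. at the top of the file (GNU sed)
sed -i '1i #define _BITS_FLOATN_H' "$f"
head -n 1 "$f"
```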
We might still be unable to compile and run our CUDA programs, because even if our GPU is driven by the nVidia driver, the nvidia-uvm module needs to be loaded into the kernel. To do this you can use sudo modprobe nvidia-uvm.
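To avoid repeating the modprobe after every boot, systemd-based distributions such as Fedora can load the module automatically from a one-line file in /etc/modules-load.d/ (a sketch; the file name before .conf is arbitrary):

```
# /etc/modules-load.d/nvidia-uvm.conf
nvidia-uvm
```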
Now we can finally compile and run CUDA programs! To verify that everything works, go to your CUDA Samples directory and run make. You should get a bunch of executables in bin/x86_64/linux/release/. Run a few of them – for example:
$ ./bin/x86_64/linux/release/simplePitchLinearTexture
simplePitchLinearTexture starting...
GPU Device 0: "GeForce GT 635M" with compute capability 2.1
Bandwidth (GB/s) for pitch linear: 9.08e+00; for array: 1.01e+01
Texture fetch rate (Mpix/s) for pitch linear: 1.13e+03; for array: 1.26e+03
simplePitchLinearTexture completed, returned OK
Conclusions
To be completely honest, I am a bit disappointed by how painful setting up CUDA in my configuration was. I wish it were easier to stay on old hardware longer.
Looking on the bright side, I learned a lot about the internals of my system, got a glimpse of what a big C project like the GNU Compiler Collection looks like under the hood, and gained some confidence in my ability to troubleshoot my system. I hope you have found my post useful, or at least interesting.