
How to enable GPU support for TensorFlow or PyTorch on macOS

Michael Hannecke

Train your ML models faster with GPU support on macOS

Is your machine learning model taking too long to train? Do you wish you could speed things up? Well, you’re in luck! In this blog post, we’ll show you how to enable GPU support in PyTorch and TensorFlow on macOS.


GPUs, or graphics processing units, are specialized processors that can be used to accelerate machine learning workloads. By using a GPU, you can train your models much faster than you could on a CPU alone.

If you’re using a MacBook Pro with an M1 or M2 chip, you’re in for a special treat. These chips have powerful built-in GPUs that are well suited to machine learning workloads. This means you can get an even bigger speedup by enabling GPU support in PyTorch and TensorFlow.



So, let’s get started!


Most machine learning frameworks use NVIDIA CUDA (short for “Compute Unified Device Architecture”), NVIDIA’s parallel computing platform and API that lets developers harness the massive parallel processing capabilities of NVIDIA GPUs.

Apple uses a custom-designed GPU architecture in its M1 and M2 chips. This architecture is based on the same principles as traditional GPUs but is optimized for Apple’s specific needs. ‘Older’ Apple computers with a dedicated GPU use AMD chips, which are not directly compatible with NVIDIA’s CUDA framework.

But help is near: with its Metal library, Apple provides low-level APIs that enable frameworks like TensorFlow, PyTorch and JAX to use the GPU just as they would an NVIDIA GPU.

Let’s walk through the steps required to enable GPU support on macOS for TensorFlow and PyTorch.



Requirements


  • Mac computers with Apple silicon or AMD GPUs
  • macOS 12.0 or later
  • Python 3.8 or later
  • Xcode command-line tools: xcode-select --install



TensorFlow


First we have to set up a virtual environment. We’re going with venv this time, but Anaconda would do as well.
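A minimal setup could look like this (the directory name ~/venv-metal is just an example):

python3 -m venv ~/venv-metal
source ~/venv-metal/bin/activate
python -m pip install --upgrade pip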


Next we have to install the TensorFlow base package. For TensorFlow version 2.13 or later:
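From 2.13 on, the standard tensorflow package can be installed directly:

python -m pip install tensorflow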

For TensorFlow version 2.12 or earlier:
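Earlier releases use the macOS-specific package instead:

python -m pip install tensorflow-macos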

Now we must install the Apple Metal add-on (tensorflow-metal) for TensorFlow:
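The plugin is a regular PyPI package:

python -m pip install tensorflow-metal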

You can verify that TensorFlow will utilize the GPU using a simple script:
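A minimal check is to list the devices TensorFlow can see; with tensorflow-metal installed, at least one GPU device should be reported:

import tensorflow as tf

print(tf.__version__)
print(tf.config.list_physical_devices('GPU'))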

You can test the performance gain with the following script. Run it once with GPU (Metal) support enabled and once in a virtual environment without tensorflow-metal installed.
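One possible benchmark, sketched below, trains a freshly initialized ResNet50 on CIFAR-100 for a single epoch; the model choice, batch size and epoch count are just examples and can be adjusted:

import time
import tensorflow as tf

# Load CIFAR-100 and train an untrained ResNet50 for one epoch.
(x_train, y_train), _ = tf.keras.datasets.cifar100.load_data()

model = tf.keras.applications.ResNet50(
    include_top=True, weights=None, input_shape=(32, 32, 3), classes=100)
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(),
              metrics=["accuracy"])

start = time.time()
model.fit(x_train, y_train, epochs=1, batch_size=64)
print(f"Training took {time.time() - start:.1f} seconds")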


The difference is remarkable!



PyTorch



Again, start with a virtual environment; we’re going with venv once more:
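As before (the directory name ~/venv-pytorch is just an example):

python3 -m venv ~/venv-pytorch
source ~/venv-pytorch/bin/activate
python -m pip install --upgrade pip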


Next, install the PyTorch framework as follows:
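On Apple silicon the standard PyPI wheels already include the Metal (MPS) backend, so a plain install should be sufficient:

pip install torch torchvision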

You can verify that PyTorch will utilize the GPU (if present) as follows:
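A quick check could look like this:

import torch

print(torch.backends.mps.is_available())  # True if the Metal (MPS) backend can be used
print(torch.backends.mps.is_built())      # True if this build of PyTorch includes MPS support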

Run the next script once on the GPU (MPS device) and once on the CPU to compare the performance:
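A rough sketch: a loop of large matrix multiplications on whichever device is available; the matrix size and iteration count are arbitrary, and you can force device = torch.device("cpu") to get the baseline:

import time
import torch

# Pick the Metal (MPS) device if available, otherwise fall back to the CPU.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
print(f"Running on {device}")

x = torch.randn(4096, 4096, device=device)
y = torch.randn(4096, 4096, device=device)

start = time.time()
for _ in range(100):
    z = x @ y
# MPS kernels run asynchronously, so synchronize before stopping the clock.
if device.type == "mps":
    torch.mps.synchronize()
print(f"100 matmuls took {time.time() - start:.2f} seconds")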




Conclusion


There you have it! You now know how to enable GPU support in PyTorch and TensorFlow on macOS. Go forth and train your models faster than ever before!


And remember, if you run into any problems, don’t be afraid to ask for help. There’s a large and supportive community of machine learning practitioners who are always happy to lend a hand.

Now, go forth and train some amazing machine learning models!




P.S.


If you find that your GPU is still not working after following these steps, don’t worry. You’re not alone. Sometimes, things just don’t work out as planned. In that case, you can always try using a cloud-based GPU service. There are many different services available, so you’re sure to find one that fits your needs.

