
Question: Running accelerated DL on M1/M2 chipset Macs

8 Posts
4 Users
4 Likes
439 Views
(@peterleong)
Eminent Member
Joined: 12 months ago
Posts: 9
Topic starter  

Does anyone have advice on how to set up PyTorch or TF2 on M1/M2 chipset Macs with acceleration?

Transforming Data to Innovations


   
Laurence Liew
(@laurenceliew)
Estimable Member AI Ready Clinic Group
Joined: 1 year ago
Posts: 105
 

Save the headache: get an Intel x86 PC + NVIDIA GPU - a standard AI/ML/DL setup - and set up the PC at home. With today's broadband speeds/5G, just remote back into your PC for your AI/ML workloads.

This is my current setup: MacBook Air -> WireGuard VPN back home -> Intel PC + NVIDIA GPU over RDP. Very usable.

I get the best of both worlds: long battery life on the laptop, and a standard Windows/Linux AI desktop powered by a beefy GPU.
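Concretely, the laptop-to-GPU-box hop can be sketched like this. This is only an illustrative sketch, not Laurence's actual config: it assumes `wg-quick` for the tunnel and FreeRDP's `xfreerdp` as the RDP client, and the profile name, address, and username are placeholders.

```shell
# bring up the WireGuard tunnel to the home network
# (assumes a "home" profile exists at /etc/wireguard/home.conf)
wg-quick up home

# then RDP into the desktop over the tunnel, e.g. with FreeRDP
# (address and username below are placeholders)
xfreerdp /v:10.0.0.2 /u:me /dynamic-resolution
```

On Windows or macOS the built-in Microsoft Remote Desktop client works the same way once the tunnel is up.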

Outcompute to outcompete | Growing our own timber


   
CaffeinePowered and Jovi reacted
(@peterleong)
Eminent Member
Joined: 12 months ago
Posts: 9
Topic starter  

@laurenceliew Thanks. Actually I am asking on behalf of many Mac enthusiasts who were sold on how cool (in Celsius) the new Mx chips are.

Transforming Data to Innovations


   
(@peterleong)
Eminent Member
Joined: 12 months ago
Posts: 9
Topic starter  

And they are exclusively Mac users.

Transforming Data to Innovations


   
Laurence Liew
(@laurenceliew)
Estimable Member AI Ready Clinic Group
Joined: 1 year ago
Posts: 105
 

@peterleong haha... same situation for the AMD enthusiasts... until they find out their peers can run the same AI/ML code 10-100X faster on similarly spec'ed Intel systems... all because of Intel MKL.

You CAN run on AMD and M1/M2 systems - but the software ecosystem for AI/ML is a lot weaker, and you have to build (re-compile) a lot of the core libraries yourself. If you enjoy doing such stuff - please go ahead.

If you want to just fire and forget, and focus on your AI/ML code, the easiest path is to use a well-supported ecosystem of hardware and software tooling for AI/ML workloads today.

Use the right tool for the job. 

 

 

Outcompute to outcompete | Growing our own timber


   
Syak
(@syakyr)
New Member Moderator
Joined: 3 years ago
Posts: 1
 

I would recommend the setup provided by @laurenceliew for best performance, but if one really needs to run TF2 and PyTorch on M1/M2 chips, I recommend reading up on the following sites:

TF2 on Metal:
removed link

PyTorch (Nightly) on Metal:
removed link

I highly recommend using Miniconda to manage dependencies instead of pip, as building packages such as numpy and pandas with pip can be a hassle. This is taken from the TF2 on Metal site:

Download and install the Conda env (removed link):

# make the Miniforge installer executable, run it, then activate the base env
chmod +x ~/Downloads/Miniforge3-MacOSX-arm64.sh
sh ~/Downloads/Miniforge3-MacOSX-arm64.sh
source ~/miniforge3/bin/activate
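For reference, the Metal install steps on those two pages boil down to roughly the following. This is a sketch based on Apple's tensorflow-metal guide and the PyTorch nightly instructions at the time of writing; package names and index URLs change, so verify against the linked pages before running anything.

```shell
# TensorFlow with Metal acceleration (per Apple's tensorflow-metal guide)
conda install -c apple tensorflow-deps
python -m pip install tensorflow-macos tensorflow-metal

# PyTorch nightly with the MPS (Metal) backend
pip install --pre torch --extra-index-url https://download.pytorch.org/whl/nightly/cpu

# quick sanity checks that the accelerators are visible
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
python -c "import torch; print(torch.backends.mps.is_available())"
```

If the first check prints an empty list or the second prints `False`, the Metal plugins did not install correctly for your Python/macOS combination.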

Hope this helps.


   
(@mncr)
New Member Member
Joined: 12 months ago
Posts: 1
 

@laurenceliew Hi Laurence, long time, hope all is well. Very innovative setup!

Side-tracking here: do you then run the projects in a VM, or directly on Windows (or is your PC running Linux)?


   
Laurence Liew
(@laurenceliew)
Estimable Member AI Ready Clinic Group
Joined: 1 year ago
Posts: 105
 

I have both Windows and Linux VMs, hosted on a single PC running Proxmox. It is a very nice virtualization host with a comprehensive web-based management interface. It's KVM-based, so it supports GPU passthrough and I can assign my GPU to my VMs as required.
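As a rough illustration of the passthrough step on a Proxmox host (the VM ID and PCI address below are placeholders; the real IDs come from `qm list` and `lspci` on your own host, and IOMMU must already be enabled in the BIOS and kernel):

```shell
# find the GPU's PCI address on the Proxmox host
lspci | grep -i nvidia

# hand the GPU at PCI address 01:00 to VM 101 as a PCIe device
qm set 101 -hostpci0 01:00,pcie=1
```

Reassigning the GPU to a different VM is then just a matter of removing the `hostpci0` entry from one VM and adding it to another while both are powered off.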

Outcompute to outcompete | Growing our own timber


   