This first post is on hardware and virtualization principles.

Hardware

Not every computer is created equal. Neither is every OS. I began my programming crusade on a 2014 MacBook Air with 8 GB of RAM. While this is plenty compared to the Apollo Guidance Computer, it is not enough for modern-day software development and data wrangling. Faced with the conundrum of limited resources, the IT student has two options: (a) beef up their hardware or (b) outsource resource-intensive tasks to stronger machines. I suggest doing both.

In my experience, the local limiting factors are RAM and disk I/O speed. Most people I have met program on laptops, so I will focus on that scenario first. A few exceptions aside (see Framework products), laptops are not particularly modular, i.e. not easily extensible. If you make a mistake on your purchase, you will have a hard time correcting course. This makes it all the more important to identify your purchasing criteria precisely. Acquiring a desktop ("une tour", as we say in French) should not be dismissed too quickly. In the long run, it may be the better choice.

Keep your options open

You want to acquire hardware components with easily accessible open-source drivers. You may at some point want to install Linux, and it is preferable to anticipate that. There is nothing more frustrating than having to compile GPU or Wi-Fi drivers from source. This issue is less acute than it used to be, but do check that the components are "supported" under Linux.

Memory over compute

In a world of high-end video games, crypto-mining and LLMs, a lot of ink has been spilled on computing power. As I have said, faced with a limited budget, prefer more RAM and/or a faster drive over more CPUs/cores. For general programming, you will have to run multiple tools in parallel: IDE, web browser, terminal, virtual machines, Docker, and so on. This quickly adds up.
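
To get a feel for how quickly it adds up, here is a minimal PowerShell sketch (the figures and process names are of course machine-dependent) that lists the top memory consumers:

```powershell
# Show the ten processes using the most RAM, rounded to MB.
Get-Process |
    Sort-Object WorkingSet64 -Descending |
    Select-Object -First 10 Name, @{ Name = "RAM (MB)"; Expression = { [math]::Round($_.WorkingSet64 / 1MB) } } |
    Format-Table -AutoSize
```

Run it with an IDE, a browser and a VM open, and the case for more RAM usually makes itself.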

Virtualization (deprecated)

Speaking of virtual machines: back in 2019, the words "virtual machine" (VM) brought images of nebulosity to my mind, much like the word "cloud". Three years later, I have fallen in love with this technology, perhaps at a time when many developers are moving on from it. Indeed, the hot new thing on the block, at least on Windows, is WSL2.

My view is that it boils down to workflow preference. Some say booting a VM is slow; I personally like the fact that mine takes about 60 seconds to be ready. At its core, virtualization is about apportioning the physical resources of your machine and allocating some of them to a segregated construct in which to run processes. It is therefore about isolation. On my set-up, a VM takes a minute to stabilise, and while other tools (e.g. WSL2) offer the same functionality faster, my brain needs that minute to transition from one context to another. The slow boot gives me time to acknowledge that code will now be executed on a separate machine.

Machines are ever faster, but my brain runs at a human clock speed (12–35 Hz).

So much for the philosophical aside.

On Windows, I recommend using Hyper-V for virtualization. As far as I know, it offers the best performance on that platform. On Linux, I suggest exploring KVM and Proxmox. For Windows users, I have written a PowerShell script to automate the VM creation process. Do note that you may want to create a NAT virtual switch beforehand: the Hyper-V Default Switch hands out IP addresses via DHCP, and it is not straightforward to tweak its settings to assign a fixed IP to newly created VMs. This is why I prefer the NAT switch, despite a small network performance hit.
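
For reference, the core of such a set-up looks roughly like the sketch below. The switch name, address range, VM name and sizing are illustrative assumptions rather than the values from my actual script; only the cmdlets themselves are standard Hyper-V PowerShell.

```powershell
# Create an internal switch and NAT it behind the host (names and addresses are examples).
New-VMSwitch -SwitchName "NATSwitch" -SwitchType Internal
New-NetIPAddress -IPAddress 192.168.100.1 -PrefixLength 24 -InterfaceAlias "vEthernet (NATSwitch)"
New-NetNat -Name "NATNetwork" -InternalIPInterfaceAddressPrefix 192.168.100.0/24

# Create a VM attached to that switch. Inside the guest, a static IP such as
# 192.168.100.10/24 with gateway 192.168.100.1 can then be configured once and for all.
New-VM -Name "dev-vm" -Generation 2 -MemoryStartupBytes 4GB `
    -NewVHDPath "C:\VMs\dev-vm.vhdx" -NewVHDSizeBytes 40GB -SwitchName "NATSwitch"
Set-VMProcessor -VMName "dev-vm" -Count 2
```

Because the NAT switch's subnet is fixed, guests keep the same address across reboots, which is exactly what the Default Switch makes awkward.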