// Cross-posted from Easy Diffusion’s blog.

A possible intuition for understanding the GPU memory hierarchy (and the performance penalty for moving data between its layers) is to think of it as a manufacturing logistics problem:

  1. Going from the CPU (host) to the GPU (device) is like travelling overnight between two cities. The CPU city is the “headquarters”, and contains a mega-sized warehouse of parts (think football-field size), also known as ‘Host Memory’.
  2. Each GPU is like a different city, containing its own warehouse outside the city, also known as ‘Global Memory’. This warehouse stockpiles whatever it needs from the headquarters city (CPU).
  3. Each SM/Core/Tile is like a factory located in a different area of the city. Each factory contains a small warehouse (shed) for stockpiling whatever inventory it needs, also known as ‘Shared Memory’.
  4. Each warp is like a bulk stamping machine inside the factory, producing 32 items in one shot. There’s a tray next to each machine, also known as ‘Registers’, used for holding items temporarily during each stamping run.

This analogy can help convey the scale of the performance penalty for data transfers between the various layers.

For example, reading repeatedly from Global Memory is like driving between the factory and the warehouse outside the city each time (through city traffic). This is much slower than walking to the shed inside the factory (i.e. Shared Memory), and much, much slower than just sticking your hand into the tray next to your stamping machine (i.e. Registers). And reading from Host Memory (on the CPU) is like taking an overnight trip to another city.
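To make the mapping concrete, here’s a minimal CUDA sketch (the kernel, names and sizes are made up for illustration) with each storage location annotated with its counterpart in the analogy:

```
#include <cuda_runtime.h>
#include <stdio.h>

// Each thread scales one element. The comments map each storage location
// to the logistics analogy above.
__global__ void scale(const float* in, float* out, float factor, int n) {
    __shared__ float tile[256];          // the shed inside the factory (Shared Memory, one per block/SM)

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        tile[threadIdx.x] = in[i];       // a trip to the city warehouse (Global Memory)
        float item = tile[threadIdx.x];  // the tray next to the machine (a register)
        out[i] = item * factor;          // ship the finished item back to Global Memory
    }
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float* host = (float*)malloc(bytes);             // HQ warehouse (Host Memory)
    for (int i = 0; i < n; i++) host[i] = 1.0f;

    float *d_in, *d_out;
    cudaMalloc((void**)&d_in, bytes);                // city warehouse (Global Memory)
    cudaMalloc((void**)&d_out, bytes);

    cudaMemcpy(d_in, host, bytes, cudaMemcpyHostToDevice);   // the overnight trip between cities

    scale<<<n / 256, 256>>>(d_in, d_out, 2.0f, n);   // each 256-thread block = 8 warps of 32 "items"

    cudaMemcpy(host, d_out, bytes, cudaMemcpyDeviceToHost);  // overnight trip back
    printf("%f\n", host[0]);

    cudaFree(d_in); cudaFree(d_out); free(host);
    return 0;
}
```

The shared-memory staging above exists only to show where the ‘shed’ lives; it actually pays off only when multiple threads in the block reuse the same data.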

Therefore, running a computation graph (like an ONNX model) efficiently on one or more GPUs is like planning the logistics of a manufacturing company. You’ve got raw materials in the main warehouse that need to be moved between cities, and artifacts that need to be stored, processed, and transferred across different factories and machines.

Most importantly, you need to keep your overall goal in focus: the time it takes to produce a finished product (latency), maximum utilisation of all your machines (throughput), or perhaps power efficiency.
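For the throughput goal in particular, a common trick is to keep the trucks and the factories busy at the same time: overlap host-to-device copies with kernel execution using CUDA streams. A rough sketch, with illustrative chunk sizes and a made-up kernel:

```
#include <cuda_runtime.h>

__global__ void process(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * 2.0f + 1.0f;
}

int main() {
    const int chunks = 4;
    const int chunkN = 1 << 20;
    const size_t chunkBytes = chunkN * sizeof(float);

    // Pinned host memory, so the copies can run truly asynchronously.
    float* host;
    cudaMallocHost((void**)&host, chunks * chunkBytes);
    for (int i = 0; i < chunks * chunkN; i++) host[i] = 1.0f;

    float* device;
    cudaMalloc((void**)&device, chunks * chunkBytes);

    cudaStream_t streams[chunks];
    for (int c = 0; c < chunks; c++) cudaStreamCreate(&streams[c]);

    // Pipeline: while one chunk is being processed, the next chunk's "truck"
    // is already on the road (the copy engines and the SMs work in parallel).
    for (int c = 0; c < chunks; c++) {
        float* h = host + c * chunkN;
        float* d = device + c * chunkN;
        cudaMemcpyAsync(d, h, chunkBytes, cudaMemcpyHostToDevice, streams[c]);
        process<<<chunkN / 256, 256, 0, streams[c]>>>(d, chunkN);
        cudaMemcpyAsync(h, d, chunkBytes, cudaMemcpyDeviceToHost, streams[c]);
    }
    cudaDeviceSynchronize();

    for (int c = 0; c < chunks; c++) cudaStreamDestroy(streams[c]);
    cudaFree(device); cudaFreeHost(host);
    return 0;
}
```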

If you’re supporting multiple models, then you’re dealing with multiple computation graphs. And if you’re supporting multiple GPU vendors (NVIDIA, AMD, etc.) and multiple architectures from each vendor (e.g. 3060, 4080, 5080), then you’re dealing with multiple factory configurations.

So you can analyze the computation graph ahead-of-time (AOT) and perform some obvious optimizations, such as fusing operations. And you can take the factory configuration (GPU specs) into account when planning the task division and schedule.
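Fusion is the clearest example of such an AOT optimization: instead of writing an intermediate result out to Global Memory and reading it back in a second kernel, a fused kernel keeps it on the tray. Here’s a hand-written sketch of what a graph compiler would do automatically (the kernels are made up for illustration):

```
#include <cuda_runtime.h>

// Unfused: the intermediate result `tmp` makes two trips to the city
// warehouse (Global Memory) -- one write and one read.
__global__ void mul(const float* a, const float* b, float* tmp, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) tmp[i] = a[i] * b[i];
}

__global__ void add(const float* tmp, const float* c, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = tmp[i] + c[i];
}

// Fused: the intermediate value never leaves the tray (a register),
// so the warehouse round trip for `tmp` disappears entirely.
__global__ void mul_add_fused(const float* a, const float* b, const float* c,
                              float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float t = a[i] * b[i];   // stays in a register
        out[i] = t + c[i];
    }
}
```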

But it might also make sense to have a “realtime” supervisor. This supervisor would receive live information about how things are actually going, and adjust the task division and layout on the fly. It might even modify the compiled graph at runtime.
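A small slice of that idea is already standard practice: instead of hard-coding a launch configuration, ask the runtime what suits the GPU you actually landed on, e.g. via CUDA’s occupancy API. A minimal sketch (the kernel and helper are hypothetical):

```
#include <cuda_runtime.h>

__global__ void work(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

// Pick a launch configuration for whatever "factory" we happen to be
// running in, rather than baking one in at compile time.
void launch_for_this_gpu(float* d_data, int n) {
    int minGridSize = 0, blockSize = 0;

    // Ask the runtime for a block size that maximizes occupancy
    // for this kernel on the current device.
    cudaOccupancyMaxPotentialBlockSize(&minGridSize, &blockSize, work, 0, 0);

    int gridSize = (n + blockSize - 1) / blockSize;
    work<<<gridSize, blockSize>>>(d_data, n);
}
```

A full supervisor as described above would go much further (re-partitioning work across GPUs, swapping in different kernels), but the general shape is the same: measure or query at runtime, then adjust the plan.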

Notes:

  1. Apple Silicon and Mobile devices use a concept of “unified memory”, so they don’t have an overnight trip between cities. You can think of Apple Silicon as neighboring cities that almost overlap, like twin cities in some countries.
  2. Mobile devices usually don’t have a concept of shared memory, so their factories don’t have warehouse sheds.