This post concludes (for now) my ongoing deep-dive into ML compilers, done while researching for sdkit v3. I've linked (at the end) to some of the papers I read related to graph execution on GPUs.

Some final takeaways:

- ML compilers might break CUDA's moat (and fix AMD's ROCm support).
- A single compiler is unlikely to fit every scenario.
- The scheduler needs to be grounded in truth.
- Simulators might be worth exploring more.

## ML compilers might break CUDA's moat (and fix AMD's ROCm support)

It's pretty clear that ML compilers are going to be a big deal. NVIDIA's TensorRT is also an ML compiler, but it only targets their GPUs. Once the machine code generated by cross-vendor ML compilers is comparable in performance to hand-tuned kernels, these compilers are going to break the (in)famous moat of CUDA.