// Part 2 in the “Simple introduction to ggml” series. At the end of Part 1, we learnt how to keep the model weights separate from the temporary, computation-only tensors. This lets the weights stay in memory across multiple predictions, which is the usual behavior of machine learning programs during inference. Now let’s build on that setup to create a simple neural network model using ggml. If you’re new to ggml, I recommend reading Part 1 first.
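As a quick refresher on the pattern from Part 1, here is a minimal sketch of the idea: one long-lived ggml context holds the weights, while a second, short-lived context is created for each prediction's tensors and graph, then freed afterwards. The names (`weights_ctx`, `compute_ctx`), the 16 MB buffer sizes, and the toy `y = a*x + b` "model" are illustrative assumptions for this sketch, not the exact code from Part 1.

```c
#include <stdio.h>
#include "ggml.h"

int main(void) {
    // Long-lived context: holds only the model weights.
    struct ggml_init_params weights_params = {
        .mem_size   = 16 * 1024 * 1024,  // assumed size; plenty for two scalars
        .mem_buffer = NULL,
        .no_alloc   = false,
    };
    struct ggml_context *weights_ctx = ggml_init(weights_params);

    // A toy "model": y = a*x + b, where a and b are the weights.
    struct ggml_tensor *a = ggml_new_tensor_1d(weights_ctx, GGML_TYPE_F32, 1);
    struct ggml_tensor *b = ggml_new_tensor_1d(weights_ctx, GGML_TYPE_F32, 1);
    ggml_set_f32(a, 2.0f);
    ggml_set_f32(b, 1.0f);

    // Run several predictions; each one gets its own short-lived context.
    for (int i = 0; i < 3; i++) {
        struct ggml_init_params compute_params = {
            .mem_size   = 16 * 1024 * 1024,
            .mem_buffer = NULL,
            .no_alloc   = false,
        };
        struct ggml_context *compute_ctx = ggml_init(compute_params);

        struct ggml_tensor *x = ggml_new_tensor_1d(compute_ctx, GGML_TYPE_F32, 1);
        ggml_set_f32(x, (float) i);

        // y = a*x + b: built in the compute context, but referencing the weights.
        struct ggml_tensor *y = ggml_add(compute_ctx, ggml_mul(compute_ctx, a, x), b);

        struct ggml_cgraph *gf = ggml_new_graph(compute_ctx);
        ggml_build_forward_expand(gf, y);
        ggml_graph_compute_with_ctx(compute_ctx, gf, /*n_threads=*/1);

        printf("x = %d -> y = %f\n", i, ggml_get_f32_1d(y, 0));

        // Free only the per-prediction tensors; the weights survive for the next loop.
        ggml_free(compute_ctx);
    }

    ggml_free(weights_ctx);
    return 0;
}
```

The point of the split is visible in the loop: `ggml_free(compute_ctx)` discards everything created for that one prediction, while `weights_ctx` (and the data of `a` and `b`) stays untouched until the program shuts down.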