
  • #easydiffusion
  • #torchruntime
  • #torch
  • #ml

// Cross-posted from Easy Diffusion’s blog. Spent the last few days writing torchruntime, which automatically installs the correct torch distribution based on the user’s OS and graphics card. The package was created by extracting this logic out of Easy Diffusion and refactoring it into a cleaner implementation (with tests). It can be installed (on Windows/Linux/Mac) with pip install torchruntime. The main intention is to make it easier for developers to contribute updates (e.g. for newer or older GPUs); previously this code was hard to find or modify, since it was buried deep inside Easy Diffusion’s internals.
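As a rough illustration of the idea (this is a simplified sketch, not torchruntime’s actual code, and the GPU detection here is a crude stand-in), the heart of such a package is a table mapping OS and GPU vendor to the right PyTorch wheel index:

    # Simplified sketch of the kind of logic torchruntime encapsulates.
    # The index URLs are the official PyTorch wheel indexes; detect_gpu_vendor()
    # is a stand-in (it just checks for nvidia-smi), not real detection.
    import platform
    import shutil
    import subprocess
    import sys

    # (OS, GPU vendor) -> pip index URL for the matching torch build
    TORCH_INDEXES = {
        ("Linux", "nvidia"):   "https://download.pytorch.org/whl/cu124",
        ("Windows", "nvidia"): "https://download.pytorch.org/whl/cu124",
        ("Linux", "amd"):      "https://download.pytorch.org/whl/rocm6.2",
        ("Linux", "cpu"):      "https://download.pytorch.org/whl/cpu",
        ("Windows", "cpu"):    "https://download.pytorch.org/whl/cpu",
    }

    def detect_gpu_vendor():
        # Real detection would inspect PCI vendor/device IDs; this is a proxy.
        return "nvidia" if shutil.which("nvidia-smi") else "cpu"

    def install_torch():
        index = TORCH_INDEXES.get((platform.system(), detect_gpu_vendor()))
        cmd = [sys.executable, "-m", "pip", "install", "torch"]
        if index:  # macOS ("Darwin") falls through to the default PyPI wheels
            cmd += ["--index-url", index]
        subprocess.check_call(cmd)

    if __name__ == "__main__":
        install_torch()

The real logic is considerably more involved (older GPU generations, multiple CUDA/ROCm versions), which is exactly why having it in a small, tested package makes contributions easier.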

  • #ai
  • #ml
  • #llm

Built two experiments using locally-hosted LLMs. One is a script that lets two bots chat with each other endlessly. The other is a browser bookmarklet that summarizes the selected text in 300 words or fewer. Both use an OpenAI-compatible API, so they can be pointed at regular OpenAI-compatible remote servers, or at your own locally-hosted servers (like LM Studio).

  • Bot Chat
  • Summarize Bookmarklet

The bot chat script is the more interesting of the two. You define the names and descriptions of the two bots, the scene description, and the first message by the first bot; after that, the two bots talk to each other endlessly. The conversation is very interesting initially, but it starts stagnating and repeating after 20-30 messages.
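A minimal sketch of the bot chat loop, assuming the openai Python client pointed at a local OpenAI-compatible server (LM Studio’s default endpoint; the bot names, personas, scene and model name below are placeholders):

    # Minimal sketch of the two-bot chat loop against a local
    # OpenAI-compatible server (LM Studio serves one at localhost:1234 by
    # default). Bot names, personas, scene and model name are placeholders.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")
    MODEL = "local-model"  # whichever model the local server has loaded

    bots = {
        "Ada": "You are Ada, a curious mathematician.",
        "Grace": "You are Grace, a pragmatic engineer.",
    }
    scene = "You are both debugging a misbehaving compiler late at night."
    transcript = [("Ada", "I think the parser is eating our tokens.")]

    speaker = "Grace"
    for _ in range(30):  # conversations tend to stagnate after ~20-30 messages
        # Each bot sees its own persona + scene as the system prompt, its own
        # lines as "assistant" turns and the other bot's lines as "user" turns.
        messages = [{"role": "system", "content": bots[speaker] + " " + scene}]
        for name, text in transcript:
            role = "assistant" if name == speaker else "user"
            messages.append({"role": role, "content": text})

        reply = client.chat.completions.create(model=MODEL, messages=messages)
        text = reply.choices[0].message.content
        print(f"{speaker}: {text}\n")

        transcript.append((speaker, text))
        speaker = "Ada" if speaker == "Grace" else "Grace"

The summarize bookmarklet makes the same kind of chat-completion call, just with a single “summarize this in 300 words or fewer” style prompt wrapped around the selected text.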

  • #ml
  • #transformers
  • #diffusion

Spent a few days learning more about diffusion models, UNets and Transformers. Wrote a few toy implementations: a denoising diffusion model (following diffusers’ tutorial) and a simple multi-headed self-attention model for next-character prediction (following Karpathy’s video). The non-latent version of the denoising model was trained on the Smithsonian Butterflies dataset, and it successfully generates new butterfly images. But it’s unconditional (i.e. no text prompts) and non-latent (i.e. it works directly on the image pixels, instead of a compressed latent space).
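For reference, the core training step from the diffusers tutorial looks roughly like this (the model size and hyperparameters here are placeholder choices):

    # Core training step of a (non-latent, unconditional) denoising diffusion
    # model, following the shape of the diffusers tutorial: add noise to clean
    # images at a random timestep, then train a UNet to predict that noise.
    import torch
    import torch.nn.functional as F
    from diffusers import DDPMScheduler, UNet2DModel

    model = UNet2DModel(sample_size=64, in_channels=3, out_channels=3)
    scheduler = DDPMScheduler(num_train_timesteps=1000)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    def training_step(clean_images):  # (batch, 3, 64, 64), values in [-1, 1]
        noise = torch.randn_like(clean_images)
        timesteps = torch.randint(
            0, scheduler.config.num_train_timesteps, (clean_images.shape[0],)
        )
        # Forward diffusion: blend image and noise according to the timestep
        noisy_images = scheduler.add_noise(clean_images, noise, timesteps)

        # The UNet predicts the added noise; MSE loss against the true noise
        noise_pred = model(noisy_images, timesteps).sample
        loss = F.mse_loss(noise_pred, noise)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

Sampling then runs the scheduler in reverse, starting from pure noise and denoising step by step; since nothing here conditions the UNet on text, the resulting model is unconditional.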