Random Things I Liked, Mar 2023

Automerge 2.0

https://automerge.org/blog/automerge-2/

The complete rewrite of Automerge dropped right after I discussed CRDT optimizations last time, and it addresses those performance concerns.

AsyncAPI

https://www.asyncapi.com/docs

It's like Swagger/OpenAPI, but for event-based architectures. I've been working with events and messages for quite a long time now and have tried a few approaches to describing their specs, like Protocol Buffers, but none of them felt right: too much hassle and not enough benefit. I'm eager to try this one, though.

LLMs + external tools = 🔥

As of today, large language models are trained on "static" data sets: they don't have a memory, and they don't know about the world around them. However, if the one thing they do, and do extremely well, is predicting the next token, it's only natural to wonder: why can't we teach the model to predict an API call whenever it needs external data?

There have been a couple of papers exploring exactly that idea.
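
Just to make the mechanism concrete, here's a toy sketch of that inference-time loop. Everything in it is hypothetical and made up for illustration: the fake_generate stand-in, the [[tool: arg -> result]] splice format, and the calculator "tool" don't correspond to any real model's or paper's API.

```ocaml
(* Toy sketch of tool-augmented generation: the "model" occasionally requests
   a tool call; the driver intercepts it, runs the tool, splices the result
   back into the context, and resumes generation. All names are made up. *)

(* Stand-in for the model: returns either plain text or a tool request. *)
type model_output =
  | Text of string
  | Tool_call of string * string   (* tool name, argument *)

let fake_generate context =
  (* A real system would run the LLM here; we just branch on the prompt
     length to keep the sketch self-contained. *)
  if String.length context < 40 then Tool_call ("calc", "6 * 7")
  else Text "The answer is 42."

(* External tools the model is allowed to "call". *)
let run_tool name arg =
  match name with
  | "calc" -> (
      match String.split_on_char '*' arg with
      | [a; b] ->
          string_of_int (int_of_string (String.trim a) * int_of_string (String.trim b))
      | _ -> "error")
  | _ -> "unknown tool"

(* The driver loop: keep generating, executing tool calls as they appear. *)
let rec drive context =
  match fake_generate context with
  | Text t -> context ^ t
  | Tool_call (name, arg) ->
      let result = run_tool name arg in
      drive (context ^ Printf.sprintf "[[%s: %s -> %s]] " name arg result)

let () = print_endline (drive "Q: what is 6 times 7? ")
```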

Reading your mind with diffusion models

I've noticed multiple times already that ordinary news headlines today sound exactly like futuristic fiction from the original Deus Ex game, but this paper, "High-resolution image reconstruction with latent diffusion models from human brain activity", takes it to the next level.

The research behind this has been going on for more than a decade, and the recent advances in computational capabilities and diffusion models accelerated the progress.

Concurrent Programming with Effect Handlers

https://github.com/ocaml-multicore/ocaml-effects-tutorial

Every developer is familiar with exceptions. Algebraic effects take them to the next level of abstraction and expose control-flow manipulation as a first-class citizen. This is a nice write-up with a collection of examples of effect handlers in OCaml.
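
To make that concrete, here's a tiny OCaml 5 sketch of my own (a toy, not taken from the tutorial): an effect is declared and performed much like an exception, but the handler receives a continuation and can resume the suspended computation with an answer.

```ocaml
(* OCaml 5: an effect is declared like an exception and performed like one,
   but the handler gets a continuation and may resume the computation. *)
open Effect
open Effect.Deep

type _ Effect.t += Ask : string -> int Effect.t

(* The computation "throws" Ask twice, yet keeps running after each one. *)
let comp () =
  let x = perform (Ask "first number") in
  let y = perform (Ask "second number") in
  x + y

let () =
  let result =
    try_with comp ()
      { effc = (fun (type a) (eff : a Effect.t) ->
          match eff with
          | Ask prompt ->
              Some (fun (k : (a, _) continuation) ->
                Printf.printf "handler answering: %s\n" prompt;
                (* Unlike an exception handler, we can resume: *)
                continue k 21)
          | _ -> None) }
  in
  Printf.printf "result = %d\n" result
```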

The Waluigi Effect

https://www.lesswrong.com/posts/D7PumeYTDPfBTp3i7/the-waluigi-effect-mega-post

An attempt to look at what's going on inside LLMs from a different perspective. It may initially look like an esoteric attempt to find sentience in a number-crunching machine, since it mentions cryptic things like "summoning a simulacrum", but if you think about it (step by step, ha-ha) a bit deeper, it kind of makes sense.

So, the process of text prediction can be viewed as a superposition of all possible simulations consistent with the prompt. And here's the prompt-engineering trick: by specifying the desired traits of the simulacrum, e.g. "John is a renowned scientist with an IQ of 180, known for being precise and accurate", we make simulations of a precise, intelligent scientist more likely to appear.

The author also observed that conditioning the model to have a specific trait actually puts it in a superposition of that trait and its opposite, e.g. loving anime and hating it. And as the conversation unfolds, the superposition is more likely to collapse into the negative trait, e.g. the anime hater.

AI dark forest

https://maggieappleton.com/ai-dark-forest

Oh, this clicked with me; I've been thinking about it for quite some time now. It feels like big online social media has been degrading for many years, and with the recent influx of AI-generated content, it's just not fun to be around anymore. Why would I want to scroll through tons of bot posts and replies? Humans crave interaction with other like-minded folks; there is no joy in "here's the summary of this thread".

We're likely to see the rise of offline walled-garden communities once again. Generated content, much like TikTok, is going to be cheap dopamine for the poor.

How GitHub search works

https://github.blog/2023-02-06-the-technology-behind-githubs-new-code-search/

So how do you index all the code on GitHub? Beyond the sheer scale, searching code is quite different from searching natural-language text, so most off-the-shelf solutions don't work well. Clever use of sharding and n-gram indexing helps here.
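
For a flavor of the n-gram part, here's a toy trigram index (an illustration of the general idea, not GitHub's actual implementation): map every 3-character substring to the documents containing it, answer a query by intersecting the posting lists of its trigrams, then verify the few surviving candidates with a plain substring scan.

```ocaml
(* Toy trigram index: trigram -> set of document ids containing it. *)
module StrMap = Map.Make (String)
module IntSet = Set.Make (Int)

let trigrams s =
  List.init (max 0 (String.length s - 2)) (fun i -> String.sub s i 3)

(* Build the index from (id, text) pairs. *)
let index docs =
  List.fold_left
    (fun acc (id, text) ->
      List.fold_left
        (fun acc g ->
          let cur = Option.value ~default:IntSet.empty (StrMap.find_opt g acc) in
          StrMap.add g (IntSet.add id cur) acc)
        acc (trigrams text))
    StrMap.empty docs

let contains_substring hay needle =
  let n = String.length needle and h = String.length hay in
  let rec go i = i + n <= h && (String.sub hay i n = needle || go (i + 1)) in
  go 0

let search idx docs query =
  match trigrams query with
  | [] -> []  (* queries shorter than 3 chars would need a different path *)
  | g :: gs ->
      let posting g = Option.value ~default:IntSet.empty (StrMap.find_opt g idx) in
      (* Cheap step: intersect posting lists of the query's trigrams. *)
      let candidates =
        List.fold_left (fun s g -> IntSet.inter s (posting g)) (posting g) gs
      in
      (* Verify: the trigram intersection can over-approximate. *)
      List.filter
        (fun (id, text) -> IntSet.mem id candidates && contains_substring text query)
        docs

let () =
  let docs = [ (1, "let x = foo ()"); (2, "module Foo = struct end"); (3, "foo_bar ()") ] in
  let idx = index docs in
  List.iter (fun (id, _) -> Printf.printf "match in doc %d\n" id) (search idx docs "foo")
```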

Computerphile series on AI

The folks on this channel are so nice, and I've enjoyed a lot of their videos recently, like the one on "glitch" tokens in GPT models, which arise from tokens that barely occur in the training data, or their earlier videos on how Stable Diffusion works.