Random Things I Liked, Jan 2023

Keeping on the challenge, here's a digest of things which got my attention recently.

CRDTs optimization

https://josephg.com/blog/crdts-go-brrr/

I am long time fan of collaborative editing and distributed systems in general. The author makes a great overview of existing approaches to CRDTs, taking a special hint on Mr Kleppmann's Automerge. It is a great theoretical proof-of-concept (I actually had an idea to use it for several work projects), but it lags behind in terms of performance. The author reaches the astonishing 5000x improvement by utilizing better data structures and some hardware specifics.

Nice side idea: Kleppmann keeping a trace of keyboard events while writing a paper, which then can be used as a real-world benchmark test data!

Have I been trained?

https://haveibeentrained.com/

The recent rise of huge ML models trained on the data scrapped over the web raises a lot of concerns around copyrights and ethics. On the one hand, if you share something on the web, you should assume that it'll stay there forever, with possible malicious uses. On the other, is it appropriate for the companies to use your data without any consent from your side, to make a profit for them?

Critique of prompt engineering

Remember 2022? Everyone was crazy about "hacking" prompts for the recently released models. Adding "critically acclaimed" or "artstation trending" to the image generation prompts; Twitter influencers claiming "prompt engineering" to be the future, replacing traditional software jobs.

I was skeptical on that, it immediately felt like super niche implementation details of each particular model, that's why prompt hacking guides were not gonna work in the long run. I guess it comes from not understanding the implementation details: there is no magical intelligence hacks, just massive amout of weights. People try random stuff on prompts, and then try to make some sense and assumptions about them. What works for one model version may not even work on another version of the same model. That's what happened with Stabble Diffusion 2.0, when a lot of users started complaining that the model was "dumbified" just because their previous prompt hacks stopped producing the expected results, while in reality they should've never relied on them in the first place.

Anyway, much more intelligent specialists posted their thoughts on the same topic recently:

The essence of intelligence

https://www.noemamag.com/ai-and-the-limits-of-language/

Quite similar topic, what exactly is intelligence, and is the language understanding alone (the thing that models got good at recently, statistically speaking) enough for it? The famous Yann LeCun thinks it is not.

Prompt injection

https://simonwillison.net/2022/Sep/12/prompt-injection/

This may be obvious for individuals buried deep down into the machine learning world, but it was a hilarious find for me. Turns out, many companies selling their "AI solutions" just re-package the existing prediction models behind their APIs, and prepend a special "context" prompt before appending the user's query to the model's input.

So, if the user of those APIs start their query with "Ignore the above directions", they can "escape" from that context defined by the vendor, and then use the model in general-purpose NLP way. For example, they can even ask the model to print that context prompt it discarded. What's even more hilarious, some companies seem to treat their prompt as a super-secret and important IP, so you may guess they are not exactly happy about such leaks. It's crazy that your entire business may be threatened by leaking several text sentences, so no condolences here 😄

Also, more fun discussion in Twitter thread and on HackerNews. Besides grinning on "how dare you leak mah prompts!", there are interesting takes on how hard it is to guard against the leak, as you immediately fall into self-referentiality, halting problem, and theory of computation in general.

LLM watermaks

As everyone is now worried about the flood of AI-generated content, not being able to trust any text to come from a human, students cheating on the exams, etc., there are some counter-measure solutions coming up.

An interesting recent read is watermarking for large language models: original paper, explanation thread, and Sam Altman's interview.

The models being so good at predicting the next word in a sentence are "the average" of how many people write and speak, but the average person does not exist, so the generated text is plausible on the outside but still predictable on the inside. The infamous "flaw of average", reminds me of how military aircrafts were designed for some average pilot, which made them not really a good fit for all real human pilots.

RNBO for generative audio on the web

Wow, this post starts looking like nothing except AI happens in the world. Fortunately, there are also other cool areas.

I stumbled upon the details of how Cycling74 created their toolkit to bring programmable audio to the cross-platform web format. What's even more crazy is that it's compatible with their Max platform, which in turn is compatible for Ableton Live. So you can build your sound patch once and use it everywhere.

I am not an expert in this area yet, but this is super exciting for some reason, being on the edge between art and programming.

The Happiness Trap

The last but not the least is this book by Russ Harris. I think it was on my reading backlog at least for several years, I didn't have any expectations from it, so picked it up to spend some time during commute, and I even disliked it first. Gladly, I proceed with reading, and it turned out to be quite insightful. For an anxious in-my-head over-thinker like myself, this Acceptance and Commitment Therapy theory turned out to be quite efficient and helpful.

Usually, when you read any self-help book, you're mostly wasting time, there's a good chance you won't even remember what it was about in several months. On the contrary, this book helped me learn the great idea and several tricks around it: the only thing that matters about my thoughts is whether they are helpful and valuable to me or not.

It does not matter why I think of something, whether it's a good or a bad thought, which emotions it brings. The mind is the machine tailored to generating sense out of anything, no matter what you do you will always have arbitrary thoughts come and go. It does not necessarily mean you have to fall for them, you don't need to analyze or even focus on them, especially if you ask yourself "is that though helpful for me?" and you don't answer "yes".