This is a DRAFT. HALF-BAKED. Please do not share without permission.

What I thought about in November

3 min read · Send your thoughts via twitter or mail.

I have published and retracted about two dozen pieces this month, circling a few themes, especially brain-computer interfaces (BCIs). I feel both skeptical and optimistic about them. I keep coming back to a few ideas: conscious thought carries around 10 bits per second, creative work leans on external vocabularies and instruments, read-only interfaces can only pick up what already has an external expression, motor channels provide precise and well-gated control, and neural signals drift over time.

I’m not sure whether this is a restatement of the baseline or actually a new perspective. I feel that many of the arguments I made in those pieces just unpack basic points any careful machine learning or BCI researcher already accepts once you mention supervised learning, ground truth, and bandwidth: idle thoughts are noisy, you still need labels, and subconscious parallel processing does not magically turn into a reliable signal for the task at hand. I still feel calibration is undervalued in this framing… who knows… most creative work might itself be a kind of calibration. Along these lines I said:

We often don’t appreciate the dependency tree of creation: we use an instrument, a vocabulary, a compiler, a grammar, a brush setting, and so on. If we actually were to “think up” a sound from nothing, it would take many tens of thousands to millions of bits of specification for a few seconds of audio (a few bars). Writing a simple function still depends on an interpreter, compiler, or runtime to be meaningful. Playing a sound depends on an instrument.

and that constrains what BCIs can meaningfully influence. I also wrote about the causal staleness of decoder-only BCIs, but this seems tautological: yes, preferences change with every choice we make, so any decoder is always hunting a shadow. If our brain could have parallelized a decision or made it automatic, it would have, and we wouldn’t have to make it top-down. At least, that’s my belief at this moment.
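
To put rough numbers on the audio claim in that quote, and on the ~10 bits per second figure above, here is a back-of-envelope sketch; the CD-quality, mono, four-second assumptions are mine, not anything from the original pieces:

```python
# Back-of-envelope: bits needed to literally specify a few seconds of raw
# audio versus the ~10 bits/s often quoted for conscious throughput.
# Assumptions (mine): 44.1 kHz sample rate, 16-bit samples, mono, 4 seconds.
SAMPLE_RATE_HZ = 44_100
BITS_PER_SAMPLE = 16
SECONDS = 4
CONSCIOUS_BITS_PER_SECOND = 10

audio_bits = SAMPLE_RATE_HZ * BITS_PER_SAMPLE * SECONDS   # ~2.8 million bits
thought_bits = CONSCIOUS_BITS_PER_SECOND * SECONDS        # ~40 bits

print(f"raw audio: {audio_bits:,} bits")
print(f"conscious throughput over the same window: {thought_bits} bits")
print(f"gap: ~{audio_bits // thought_bits:,}x")
```

Even with heavy compression the gap stays at several orders of magnitude, which is why the instrument, the vocabulary, and the compiler carry most of the specification.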

Another theme was Coadaptive User Interfaces, which boiled down to RLHF plus preference learning plus versioned configuration… but for interfaces. You accept or reject diffs, and those decisions turn into labeled preferences. That gets you portable, inspectable preference objects.
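
A minimal sketch of what I mean, assuming a single accept/reject on a UI diff; the field names, shapes, and example values are placeholders I made up, not anything from the actual prototypes:

```python
# Sketch: an accepted/rejected UI diff becoming a labeled preference record.
# All field names and values here are illustrative assumptions.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class PreferenceEvent:
    chosen: dict      # the accepted configuration (A)
    rejected: dict    # the rejected alternative (B)
    context: dict     # where/when/for what task the choice was made (C)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        # Plain JSON so the preference is portable and inspectable,
        # i.e. it travels with the user rather than living inside one app.
        return json.dumps(asdict(self), indent=2)


# Accepting a proposed diff (denser layout) over the current layout:
event = PreferenceEvent(
    chosen={"font_size": 13, "sidebar": "hidden"},
    rejected={"font_size": 15, "sidebar": "visible"},
    context={"task": "writing", "device": "laptop"},
)
print(event.to_json())
```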

I’ve been thinking about preference learning in different terms since at least 2019. Once a year I waste a few weeks on new software prototypes, mostly at the UI level. What you can do there depends heavily on the intelligence of the base foundation model, so it’s probably best to wait and see. Model companies have trillions of dollars in incentives to figure this out; so far no UI has impressed me. I had prototypes for everything from artifacts, projects, and skills to canvas. I don’t use these things much now, so I lost interest in developing more.

But I felt I found a good, simple description of taste as “A versus B in context C”. The C does a lot of work here. This needs more explanation, but not today.

Taste as “A is better than B in context C” is the right atomic unit, and it travels well. Turning those “A > B given C” events into first-class, versioned, portable preference objects with diff and merge semantics feels right.
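
To make “versioned with diff and merge semantics” a bit more concrete, here is a sketch in the same spirit: two stores of “A > B given C” records merged with a newest-wins policy. The types, the keying by context, and the merge policy are all assumptions of mine, not a spec:

```python
# Sketch: merging two stores of "A > B given C" preference records.
# Keying by context and letting the higher version win are assumptions.
from typing import NamedTuple


class Preference(NamedTuple):
    context: str   # C
    winner: str    # A
    loser: str     # B
    version: int   # bumped whenever the preference for this context changes


def merge(ours: dict[str, Preference],
          theirs: dict[str, Preference]) -> dict[str, Preference]:
    """Merge two preference stores keyed by context; higher version wins."""
    merged = dict(ours)
    for ctx, pref in theirs.items():
        if ctx not in merged or pref.version > merged[ctx].version:
            merged[ctx] = pref
    return merged


laptop = {
    "writing": Preference("writing", "dense layout", "spacious layout", 3),
}
phone = {
    "writing": Preference("writing", "spacious layout", "dense layout", 5),
    "reading": Preference("reading", "serif", "sans-serif", 1),
}

# The newer "writing" preference from the phone wins; "reading" carries over.
print(merge(laptop, phone))
```

A diff between two such stores falls out of the same structure: the records that differ by context or version.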

I changed my mind on much of the above. It’s probably best to figure out the rest in the lab instead of reasoning about it here.