Read-Only Brain Computer Interfaces (BCI) will have little use for real-time creative work


The sci-fi pitch of brain implants that magically decode thought bijectively into a finished creative artifact (i.e. image, action, sound, sentence) will not come true, even with next-gen, highest-bandwidth scaled neural hardware and optimal decoding algorithms.

Figure 1: *Brain Machine Interface Microetched*, by Greg Dunn

Up until recently I never questioned the promise that is often sketched in transhumanist manifestos and lingers implicitly as background radiation in cyberpunk literature. Intuitively, I believe my internal mental imagery is rich and dense. I'm an artist by birth; what else could it be?

I thought that at some point in the medium-term future (2035+) I could directly hook my brain to a computer and stream out mental objects that our tools and languages (harmonics, words, mathematics, programs, diagrams) cannot express: novel timbres, visual textures, fragrances, 3D artworks, film sequences. Beyond their high-dimensional representation, these mental objects actually drift between imagination attempts.

At the very least I expected BCIs to speed up thought serialization by bypassing the inner-representation → external-operator impedance tax.

But after looking closer, I now believe that read-only BCIs will be input devices for legible thoughts at best, not translators for ineffable ones. The more idiosyncratic and multi-channel the content, the less they'll help.

(I have more hope for bidirectional BCIs that can stimulate experiences, but that's outside the scope of this piece.)

Supervised learning requires paired data: brain activations paired with ground-truth labels. There are legible training pairs like the word 'cat', proprioceptive feedback, or the pitch of middle C. You can hum a pitch or play it; there's a measurable label; it's canonical.

But the moment you step one rung up the ladder (timbre, spectrum, chord voicing, orchestral texture), the label space explodes and you can't reproduce the imagined sound accurately enough to supply a ground-truth label. If the error were high-bias, you could learn a consistent correction per context and user; but it is most likely high-variance.
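To make the bias/variance distinction concrete, here is a minimal sketch with made-up numbers and a one-dimensional stand-in for the "label" (no claim about any real decoder): a constant per-user decoding offset is calibratable, while per-attempt drift survives calibration.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = rng.uniform(0, 1, size=1000)  # imagined "timbre" coordinates (hypothetical)

# (a) High-bias error: the decoder is off by one consistent per-user shift.
biased = truth + 0.3
# (b) High-variance error: the shift changes on every imagination attempt.
noisy = truth + rng.normal(0, 0.3, size=1000)

def residual_after_calibration(decoded, truth, n_cal=100):
    """Estimate a constant correction on a calibration split, then
    measure the remaining error on the held-out attempts."""
    offset = np.mean(decoded[:n_cal] - truth[:n_cal])
    return np.mean(np.abs(decoded[n_cal:] - offset - truth[n_cal:]))

print(residual_after_calibration(biased, truth))  # essentially zero: bias is learnable
print(residual_after_calibration(noisy, truth))   # stays near the noise scale
```

The asymmetry is the whole point: calibration can only subtract what is stable across attempts.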

So the paradox is: BCIs can only decode thoughts that already have an external expression channel. But if a thought has an external channel, why not just use that channel directly?

On the information-transfer level, "conscious" thought clocks in at about 10 bits per second (bps) [1].

BCI readouts that are meant to steer actions (control) cannot magically circumvent this bottleneck, even if they come from earlier, parallel, pre-conscious layers of the brain. The decisive user signals still operate at the 10 bps limit.
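Back-of-envelope, assuming each deliberate pick from a command vocabulary costs its information content against that 10 bps budget (an illustrative upper bound, not a model of any specific decoder):

```python
import math

BITS_PER_SECOND = 10  # Meister's (2024) estimate of conscious throughput

def max_selections_per_second(vocab_size: int) -> float:
    """Upper bound on deliberate choices per second, if each pick from
    `vocab_size` options costs log2(vocab_size) bits of the budget."""
    return BITS_PER_SECOND / math.log2(vocab_size)

for n in (2, 256, 65536):
    print(f"{n:>6} options -> {max_selections_per_second(n):.2f} picks/sec")
```

A richer vocabulary buys expressiveness per pick but proportionally fewer picks; the product is pinned at 10 bps either way.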

Sure, you could decode the outer layers' activations or "subconscious vibes", which might be more parallelized and could be used for trivial low-level calibration of a system, but they quickly become useless for alignment (preferences) or prediction (actions) within a high-level domain vocabulary. My guess is that quite early on in using such a tool, high-order (serial) thoughts will carry the only valuable signal.

Even truly de novo synthesis—creating without samples, presets, instruments, vocabularies, or symbols—is condemned to the dominant co-adaptive workflow: you and the tool iteratively shape outputs through an exteroceptive UI, effectively composing with pre-existing artifacts (symbols, words, sounds, shapes, colors).

We often don't appreciate the dependency tree of creation: we use an instrument, a given vocabulary, a compiler, a grammar, a brush setting, and so on. If we actually were to "think up" a sound from nothing, it would take tens of thousands to millions of bits of specification for a few seconds (bars) of audio. Writing a simple function still depends on an interpreter, compiler, and machine to be meaningful. Playing a sound depends on an instrument.

We inherit an almost infinite context that maps to a low dimensional, simple parameter set (conditional description length). When we invent a new instrument/grammar, we pay the tax upfront. Creative work toggles between tuning parameters on a shared model and authoring the model itself.
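One way to make the conditional-description-length point concrete: back-of-envelope bits for a few seconds of audio with and without an inherited instrument. All numbers here are illustrative, and the 32-knob synth is a hypothetical stand-in for any shared model.

```python
SAMPLE_RATE = 44_100  # CD-quality samples per second
BIT_DEPTH = 16
SECONDS = 4           # roughly two bars at 120 bpm

# Unconditional description: specify every sample from nothing.
raw_bits = SAMPLE_RATE * BIT_DEPTH * SECONDS

# Conditional description: a hypothetical shared synth with 32 knobs,
# each quantized to 8 bits (a preset-style parameterization).
preset_bits = 32 * 8

print(raw_bits)     # millions of bits for the raw waveform
print(preset_bits)  # a few hundred bits, given the inherited instrument
```

At 10 bps of conscious throughput, the unconditional version is days of specification; the conditional one is under a minute. The inherited context does almost all the compressive work.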

Even with optimal, low-latency neural decoding, using brain-computer interfaces as input devices won't beat existing input devices (keyboards, trackpads, pencils) for creative work.

Technically BCI embeddings can turn recalled memories into corpus searches, but that’s a digitization gap, not a BCI superpower. Once personal memory is indexed, traditional tools do nearly as well.

Even as an expert user with clean internal-to-domain mappings, read-only BCIs just provide an expensive hotkey system. I'd still trigger discrete commands from a menu of neural codecs rather than think things into being. In that case, muscle memory is more reliable and, unlike brain intents, doesn't require me to shift my entire mental state into a clean, decodable signal for every act (selection, search, etc.). Motor control for somewhat memorized actions feels, at least to me, functionally decoupled and doesn't break flow.

Given all that, I've become more pessimistic on non-stimulative (read-only) BCIs for creative domains. The remaining use cases are medical (paralysis patients), passive recording (dream playback, attention monitoring, etc.), and macro-level intent or fuzzy directives ("make this sound more [fuzzy pointer not yet in domain or user vocabulary]").

Alas, if you’re not paralyzed, the marginal value of read-only BCI is small.

Fuzzy, high-dimensional mappings are entertaining, but my experience creating songs, fine art, films, and writing has been that most effort isn't in vibe exploration but in point-target surgery: precise refactors or controlled mutations of a subcomponent (word, bar, shape, segment). The last mile makes up most of the work; regrettably, this essay was no exception.

That last mile separates slop from craft, and it's precisely where fuzzy affordances give out.

References

  1. Meister, M. (2024). The Unbearable Slowness of Being. arXiv preprint arXiv:2408.10234.
BibTeX Citation
@misc{strasser2025,
  author = {Strasser, Markus},
  title = {Read-Only Brain Computer Interfaces (BCI) will have little use for real-time creative work},
  year = {2025},
  url = {https://markusstrasser.org/},
  note = {Accessed: 2025-11-13}
}