Neural Computation
LLMs need a better marketing slogan.
Since the introduction of ChatGPT, the industry has pinned its hopes on AI agents transforming work. But treating agents as the new "gold standard" of what LLMs are for is the wrong direction.
With LLMs, aided by structured generation, we now have a programmable neural computation unit. It's remarkable that no one is discussing the fundamental paradigm shift this represents for big data.
If you had all of Instagram’s data, you could build a one-shot RAG application that asks “given person X, who would be the best match?” or process each instance separately by asking, “Is this person over 40?” and returning a binary (yes or no) output.
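As a rough illustration of that per-instance pattern, here is a minimal Python sketch. Everything in it is an assumption: `complete` is a stand-in for whatever LLM client you use, and the profile data is invented. Structured generation proper would constrain decoding to a schema; the sketch just parses a yes/no answer.

```python
def complete(prompt: str) -> str:
    """Stand-in for a real LLM call; swap in your API client or local model."""
    return "no"  # placeholder answer so the sketch runs end to end

def is_over_40(profile: dict) -> bool:
    """A fuzzy predicate forced into a binary output by prompt-and-parse."""
    prompt = (
        "Answer with exactly one word, yes or no.\n"
        f"Bio: {profile['bio']}\n"
        "Is this person over 40?"
    )
    return complete(prompt).strip().lower().startswith("yes")

# Map the fuzzy predicate over a dataset like any other computation.
profiles = [{"bio": "Retired in 2010 after 30 years at the firm."}]  # invented
over_40 = [p for p in profiles if is_over_40(p)]
```

The point is that the LLM call slots into an ordinary data pipeline as a function from record to value.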
These computations are fuzzy and unstructured, but that fuzziness is essential for many language tasks. Mercor's hiring platform needs it to ask novel questions of resumes; otherwise, what's the difference between their algorithm and LinkedIn's keyword search?
To maximize this wave's benefits, neural computations must be orchestrated rather than merely prompted. These models do not operate like the human mind: they do not make a priori judgments, but a posteriori judgments distilled from all the text they have seen.
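To make "orchestrated" concrete, here is a hedged sketch of neural computations composed like a query plan, filter then rank, instead of one monolithic prompt. The `complete` stub, the questions, and the scoring scale are all invented for illustration.

```python
def complete(prompt: str) -> str:
    """Placeholder LLM call, as in the sketch above."""
    return "no"

def neural_filter(rows, question):
    """Keep rows where the model answers yes to `question`."""
    keep = []
    for row in rows:
        answer = complete(f"{question}\nRecord: {row}\nAnswer yes or no.")
        if answer.strip().lower().startswith("yes"):
            keep.append(row)
    return keep

def neural_rank(rows, criterion):
    """Score each row 0-10 against `criterion`; unparsable answers score 0."""
    def score(row):
        answer = complete(f"Rate 0-10: {criterion}\nRecord: {row}\nAnswer with a number.")
        try:
            return float(answer.strip())
        except ValueError:
            return 0.0
    return sorted(rows, key=score, reverse=True)

# Hypothetical usage over a resume dataset:
# shortlist = neural_rank(neural_filter(resumes, "Has shipped ML systems?"),
#                         "Depth of systems experience")
```

Each stage is a small, checkable neural operation; the orchestration layer, not the prompt, carries the program.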
Using such an a posteriori model as a voice agent seems obtuse, given that it has absorbed a broader view of the world than any of us. Some have noticed this and built AI researchers (e.g., Sakana AI), but even that does not exploit large data by dynamically prompting these models into massively parallelized computation.
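A minimal sketch of that parallelized pattern, assuming an async client call `acomplete` (a placeholder, not a real library function) and using asyncio to fan one dynamically built prompt per record out concurrently:

```python
import asyncio

async def acomplete(prompt: str) -> str:
    """Stand-in for an async LLM client call; swap in a real one."""
    await asyncio.sleep(0)  # placeholder for network latency
    return "no"

async def map_llm(records, template, max_concurrency=64):
    """Build one prompt per record and run the calls concurrently."""
    sem = asyncio.Semaphore(max_concurrency)  # bound in-flight requests

    async def one(record):
        async with sem:
            return await acomplete(template.format(record=record))

    return await asyncio.gather(*(one(r) for r in records))

# Hypothetical usage: the same over-40 question asked across a whole dataset.
# answers = asyncio.run(map_llm(profiles,
#     "Is this person over 40?\n{record}\nAnswer yes or no."))
```

The semaphore is the design choice that matters: it turns "prompt the model" into "schedule thousands of neural computations against a rate limit."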
Humans have only ever talked to humans. The unseen large-data language tasks that surround us will yield some of the most interesting work of this LLM revolution, or at least its earliest pickings.
Summer 2025 update: Auctor & Throxy have swooped in and captured large parts of these markets to great success.