
  • In a more diplomatic reading of your post, I’ll say this: yes, I think humans are basically incredibly powerful autocomplete engines. The distinction is that an LLM has to autocomplete a single prompt at a time, with plenty of time between prompt and response to consider the best result, while living animals are autocompleting a continuous and endless barrage of multimodal, high-resolution prompts, and doing it quickly enough that we can manipulate the environment (the prompt generator) to some degree.

    Yeah, biocomputers are fucking wild and put silicon to shame. The issue I have is with treating biocomputation as something that fundamentally cannot be done by any computational engine; as far as neural computation is understood, it’s a really sophisticated statistical prediction machine.
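
    To make the “autocomplete engine” framing concrete, here’s a toy sketch: at its core, an LLM is a loop that repeatedly samples the next token from a learned conditional distribution. A character-level bigram model stands in for the vastly more sophisticated transformer here, and the corpus is made up, so treat this as an illustration, not a claim about any real model.

    ```python
    import random
    from collections import defaultdict

    corpus = "the cat sat on the mat and the cat saw the rat"

    # "Training": count how often each character follows each other character.
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(corpus, corpus[1:]):
        counts[a][b] += 1

    def next_char(c):
        # Sample the next character from the conditional distribution P(next | current).
        chars, weights = zip(*counts[c].items())
        return random.choices(chars, weights=weights)[0]

    def autocomplete(prompt, n=25):
        # The whole "autocomplete" loop: condition on context, sample, append, repeat.
        out = list(prompt)
        for _ in range(n):
            out.append(next_char(out[-1]))
        return "".join(out)

    print(autocomplete("th"))
    ```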


  • So for context, I am an applied mathematician, and I primarily work in neural computation. I have an essentially cursory knowledge of LLMs, their architecture, and the mathematics of how they work.

    I hear this argument, that LLMs are glorified autocomplete and merely statistical inference machines and are therefore completely divorced from anything resembling human thought.

    I feel the need to point out that not only is there no compelling evidence that the neural computation humans do is anything other than statistical inference, there’s actually quite a bit of evidence that statistical inference is exactly what real, biological neural networks do.
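
    To be concrete about what “statistical inference machine” means here, consider that a single idealized neuron with a sigmoid activation is mathematically identical to logistic regression, i.e. it computes an estimate of P(output | inputs). The weights and inputs below are made up; this is a sketch, not a biophysical model.

    ```python
    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    def neuron(inputs, weights, bias):
        # Weighted sum of synaptic inputs pushed through a sigmoid:
        # exactly the logistic-regression estimate of P(fire | inputs).
        z = sum(w * x for w, x in zip(weights, inputs)) + bias
        return sigmoid(z)

    # Made-up synaptic weights and input rates, purely illustrative.
    print(neuron(inputs=[0.5, 1.2, -0.3], weights=[0.8, -0.4, 1.1], bias=0.1))
    ```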

    Now, admittedly, real neurons and real neural networks are way more sophisticated than any deep learning module: real neural networks are extremely recurrent and extremely nonlinear, with some neural circuits devoted simply to changing how other neural circuits process signals, without ever processing those signals themselves. And in the case of humans, the network is several orders of magnitude larger than even the largest LLM.
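
    A rough sketch of that last point, purely modulatory circuits: “population B” below never touches the signal itself, it only sets a multiplicative gain on how “population A” responds to it. All numbers are made up; this is an illustration of gain modulation, not a model of any real circuit.

    ```python
    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    def processing_unit(signal, gain):
        # "Population A": actually processes the signal; its sensitivity
        # is set from outside by the gain.
        return sigmoid(gain * signal)

    def modulatory_unit(context):
        # "Population B": outputs only a gain in (0, 2). It never sees or
        # transforms the signal being processed.
        return 2.0 * sigmoid(context)

    signal = 1.5
    for context in (-2.0, 0.0, 2.0):
        gain = modulatory_unit(context)
        print(f"context={context:+.1f}  gain={gain:.2f}  response={processing_unit(signal, gain):.2f}")
    ```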

    All that said, it boils down to an insanely powerful statistical machine.

    There are questions of motivation and input: we all want to stay alive (ish), avoid pain, and take in constant feedback from sensory organs, while an LLM just produces what it was prompted to. But at a suitable level of abstraction, the ideas of wants, needs, and rewards aren’t substantively different from prompts.

    Anyway. I agree that modern AI is a poor substitute for real human intelligence, but the fundamental reason is a matter of complexity, not method.

    Some reading:

    Large-scale neural recordings call for new insights to link brain and behavior

    A unifying perspective on neural manifolds and circuits for cognition

    A comparison of neuronal population dynamics measured with calcium imaging and electrophysiology


  • Maybe. But if I were thinking about buying into their IPO, I might be pretty skeptical of a social media site that’s actively antagonizing its users while it can barely turn a profit, even when 90% of its labor force are unpaid volunteers.

    Engagement can be fleeting, and I’m not sure their archived content is as valuable to LLMs as they think it is.