• jarfil@beehaw.org · 11 months ago

    Where did I exaggerate anything?

    We don’t even know what consciousness or sentience is, or how the brain really works.

    We know more than you might realize. For instance, consciousness correlates with the differential (∆) activity between separate brain areas; when they all fall into sync, consciousness is lost. We see similar behavior in NNs.

    It’s nice that you mentioned quantum effects, since the NN models all require a certain degree of randomness (“temperature”) to return the best results.
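    To make the "temperature" point concrete (this sketch is mine, not from the thread): temperature divides the logits before the softmax, so low values make sampling near-deterministic and high values flatten the distribution toward uniform randomness.

    ```python
    import math
    import random

    def softmax_with_temperature(logits, temperature=1.0):
        """Scale logits by 1/temperature, then apply softmax.

        temperature -> 0 approaches argmax (deterministic);
        large temperature approaches a uniform distribution.
        """
        scaled = [x / temperature for x in logits]
        m = max(scaled)  # subtract the max for numerical stability
        exps = [math.exp(x - m) for x in scaled]
        total = sum(exps)
        return [e / total for e in exps]

    def sample(logits, temperature=1.0):
        """Draw one token index from the temperature-scaled distribution."""
        probs = softmax_with_temperature(logits, temperature)
        return random.choices(range(len(logits)), weights=probs, k=1)[0]

    logits = [2.0, 1.0, 0.1]
    cold = softmax_with_temperature(logits, temperature=0.1)   # sharply peaked
    hot = softmax_with_temperature(logits, temperature=10.0)   # nearly uniform
    ```

    Whether that pseudo-randomness has anything to do with quantum effects is a separate question, as the reply below points out.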

    trying to accurately simulate a rat’s brain have not brought us much closer

    There lies the problem. Current NNs have sidestepped the limitations of 1:1 accurate simulations by solving only for the relevant parts, then increasing the parameter counts to the point where they perform better than the original thing.

    It’s kind of a brute force approach, but the results speak for themselves.

    the airplane analogy (https://rodneybrooks.com/an-analogy-for-the-state-of-ai/).

    I’m afraid the “state of the art” in 2020 was not the same as the “state of the art” in 2024. We have a new tool: LLMs. They are the glue needed to bring all the siloed AIs together, a radical change just like that from air flight to spaceflight.

    • noxfriend@beehaw.org · 11 months ago

      We know more than you might realize

      The human brain is the most complex object in the known universe. We are only scratching the surface of it right now. Discussions of consciousness and sentience are more a domain of philosophy than anything else. The true innovations in AI will come from neurologists and biologists, not from computer scientists or mathematicians.

      It’s nice that you mentioned quantum effects, since the NN models all require a certain degree of randomness (“temperature”) to return the best results.

      Quantum effects are not randomness. Emulating quantum effects is possible, since they can be understood empirically, but it is very slow. If intelligence relies on quantum effects, then we will need to build whole new types of quantum computers to build AI.

      the results speak for themselves.

      Well, there we agree. In that the results are very limited I suppose that they do speak for themselves 😛

      We have a new tool: LLMs. They are the glue needed to bring all the siloed AIs together, a radical change just like that from air flight to spaceflight.

      This is what I mean by exaggeration. I’m an AI proponent; I want to see the field succeed. But this is nothing like the leap forward some people seem to think it is. It’s a neat trick with some interesting if limited applications, but it is not an AI. This is no different from when Minsky believed that by the end of the 70s we would have “a machine with the general intelligence of an average human being”. That is exactly the sort of over-promising that gave the AI field a terrible reputation and dried up all the funding.