Will AI soon surpass the human brain? If you ask employees at OpenAI, Google DeepMind and other large tech companies, it is inevitable. However, researchers at Radboud University and other institutes present a new proof that those claims are overblown and unlikely to ever come to fruition. Their findings are published in Computational Brain & Behavior today.

  • petrol_sniff_king@lemmy.blahaj.zone

    Hey! Just asking you because I’m not sure where else to direct this energy at the moment.

I spent a while trying to understand the argument this paper is making, and for the most part I think I’ve got it. But there’s a kind of obvious, knee-jerk rebuttal to throw at it, one that’s already been made elsewhere under this post, even:

    If producing an AGI is intractable, why does the human meat-brain exist?

Evolution “may be thought of” as a process that samples a distribution of situation-behaviors, though that distribution is entirely abstract. And the decision process for whether the “AI” it produces matches this distribution of successful behaviors is, yada yada, Darwinism. The question we care about, because this is the inspiration I imagine AI engineers took from evolution in the first place, is whether evolution can (not inevitably, just can) produce an AGI (us) in reasonable time (it did).
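    (For what it’s worth, that framing is easy to sketch as code. Below is a toy selection-and-mutation loop over bitstring “behaviors”; the target, population size and mutation rate are all made-up stand-ins for illustration, not anything from the paper.)

```python
import random

random.seed(0)

TARGET = [1] * 20          # stand-in for "the distribution of successful behaviors"
POP, GENS, MUT = 50, 200, 0.05

def fitness(genome):
    # Darwinism as the decision process: how well does the behavior match?
    return sum(g == t for g, t in zip(genome, TARGET))

# Random initial population of candidate "behaviors".
pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
for gen in range(GENS):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == len(TARGET):
        break
    survivors = pop[:POP // 2]           # selection: keep the fitter half
    children = [
        [1 - g if random.random() < MUT else g for g in random.choice(survivors)]
        for _ in range(POP - len(survivors))
    ]                                    # mutation: flip each bit with prob MUT
    pop = survivors + children

print("best fitness:", max(fitness(g) for g in pop), "after", gen, "generations")
```

    The point of the sketch is only that the loop samples behaviors and keeps the ones matching the distribution; it says nothing about how long that takes when the behavior space isn’t a toy 20-bit string.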

    The question is, where does this line of thinking fail?

Going by the proof, it should be one of the following:

    • That evolution is an intractable method. 60 million years is a long time, but it still feels quite short for this answer.
    • Something about it doesn’t fit within this computational paradigm. That is, I’m stretching the definition.
    • The language “no better than chance” for option 2 is actually more significant than I’m thinking. Evolution is all chance. But is our existence really just extreme luck? I know that it is, but this answer is really unsatisfying.
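    To put a number on the “feels quite short” intuition: if “intractable” means something like exponential scaling, even a wildly generous sample budget buys surprisingly little. The throughput figures below are invented purely for illustration:

```python
# Made-up budget: a trillion "tries" per day, every day, for 60 million years.
samples_per_year = 10**12 * 365
years = 60 * 10**6
budget = samples_per_year * years      # ~2.2e22 samples in total

# Largest n for which exhaustively searching all 2**n candidate
# "behavior policies" still fits inside that budget.
n = 0
while 2 ** (n + 1) <= budget:
    n += 1
print(n)   # -> 74: brute force only ever covers ~74-bit policies
```

    Under exponential scaling, doubling the budget adds just one bit, which is the sense in which even 60 million years can still be “short.”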

    I’m not sure how to formalize any of this, though.

The thought that we could “encode all of biological evolution into a program of at most size K” did make me laugh.

    • If producing an AGI is intractable, why does the human meat-brain exist?

Ah, but here we have to get a little pedantic: producing an AGI through currently known methods is intractable.

      The human brain is extremely complex and we still don’t fully know how it works. We don’t know if the way we learn is really analogous to how these AIs learn. We don’t really know if the way we think is analogous to how computers “think”.

There’s also another argument to be made: that an AGI matching the currently agreed-upon definition is impossible. And I mean that in the broadest sense, e.g. humans don’t fit the definition either. If that’s true, then an AI could perhaps be trained in a tractable amount of time, but this would upend our understanding of human consciousness (perhaps justifiably so). Maybe we’re overestimating how special we are.

And then there’s the argument that you already mentioned: it is intractable, but 60 million years, spread over trillions of creatures, is long enough. That also suggests that AGI is really hard and that creating one really isn’t “around the corner,” as some enthusiasts claim. For any practical AGI we’d have to finish training in maybe a couple of years, not millions of years.

      And maybe we develop some quantum computing breakthrough that gets us where we need to be. Who knows?