
Thursday, October 05, 2023

AGI

Over the years, I've written a number of posts on advancements in artificial intelligence [1] and their likely societal impact, e.g., here, here, here, here, and here. For most of those years, estimates of the advent of AGI (artificial general intelligence, that is, a machine that performs creative tasks in a manner indistinguishable from human performance [2]) evolved relatively slowly and consistently pointed to the 2040s.

No more.

Samuel Hammond argues that things are developing quickly and that AGI might occur as early as 2030. His bullet points:

  • However one defines AGI, we are on the path to brute force human level AI by simply emulating the generator of human generated data, i.e. the human brain.

  • Information theory, combined with forecasts of algorithmic and hardware efficiency, suggest systems capable of emulating humans on most tasks are plausible this decade and probable within 15 years.

  • The brain is an existence proof that general intelligence can emerge through a blind, hill-climbing optimization process (Darwinian evolution), while the evidence that the brain works on the same principles as deep learning is overwhelming.

  • Scale is the primary barrier to AGI, not radically new architectures. Human brains are just bigger chimpanzee brains that achieved generality by scaling the neocortex.

  • Artificial neural networks can accurately simulate cortical neurons and other brain networks, while the evidence that the brain’s biological substrate matters in other ways is either weak or nothing that bigger models can’t compensate for.

  • The human brain must work on relatively simple principles, as it grows out of a developmental process with sensitive initial conditions rather than being fully specified in our DNA.

  • “Universality,” or the observation that artificial neural networks and our brain independently learn similar circuits, is strong evidence that current methods in deep learning are sufficient for modeling human cognition even if they're suboptimal.

  • We tend to overestimate human intelligence because, as humans, we are computationally bounded in our ability to model and predict other humans.

  • The number of “hard steps” earth passed through to reach intelligent life suggests the steps aren’t as hard as they look, but are instead guided by statistical processes that mirror the scaling laws and phase-transitions seen in neural networks and other physical systems. AGI isn’t as hard as it looks either.

It's possible that Hammond is incorrect, but if recent developments are any indicator, progress toward an AGI has accelerated rather dramatically. [3]

The BIG questions: What happens when an AGI is achieved? How quickly can/will it improve itself to exceed human-level intelligence, and what happens once it does? [4]

FOOTNOTES:

[1] As an aside: My PhD dissertation used a rudimentary form of A.I. tech applied to a manufacturing problem—long, long before A.I. had become the hyped, household concept it is today.

[2] Hammond considers the definition and notes: "AGI is considered tricky to define, as concepts like “generality” and “human-level intelligence” open up bottomless philosophical debates. As such, many prefer the sister concept of Transformative AI – an operational definition that ignores the intrinsic properties of an AI system in favor of its external impact on the economy. An AI system that can automate most human cognitive labor would thus count as TAI even if it was based on something as “dumb” as a giant look-up table. This is somewhat dissatisfying, however, as what makes recent AI progress so exciting is precisely the “spark of general intelligence” seen in models like GPT-4."

[3] Above all else, progress toward AGI requires heavy-duty computing power. Again from Hammond: "The current ramp-up in computing resources is truly incredible. The global cloud computing market is expected to double over the next four years. The GPU market is growing at a compound annual rate of 32.7 percent off of demand for AI accelerators. As supply-chain bottlenecks alleviate, NVIDIA alone plans to ship 4x more H100s next year than it will ship in 2023. Meanwhile, the compute used to train milestone deep learning models is growing 4.2x per year. At this rate, deep learning would soon consume all the compute in the world. Fortunately, algorithmic progress is doubling effective compute budgets roughly every nine months."
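
For a rough sense of what those growth rates imply when compounded, here is a minimal back-of-the-envelope sketch in Python. The 4.2x-per-year training-compute growth and the nine-month algorithmic doubling come straight from the quote above; the seven-year horizon (roughly 2023 to 2030) is my own assumption, chosen only to line up with Hammond's earliest AGI date.

    # Back-of-the-envelope: compound the two growth rates quoted above.
    # 4.2x/year is the growth in compute used to train milestone models;
    # a doubling every ~9 months is the quoted rate of algorithmic progress.
    # The 7-year horizon (~2023 to 2030) is an assumption for illustration.

    YEARS = 7
    HARDWARE_GROWTH_PER_YEAR = 4.2   # training compute, per Hammond
    ALGO_DOUBLING_YEARS = 9 / 12     # effective compute doubles every ~9 months

    hardware_multiplier = HARDWARE_GROWTH_PER_YEAR ** YEARS
    algorithm_multiplier = 2 ** (YEARS / ALGO_DOUBLING_YEARS)
    effective_multiplier = hardware_multiplier * algorithm_multiplier

    print(f"Hardware compute:  ~{hardware_multiplier:,.0f}x")
    print(f"Algorithmic gains: ~{algorithm_multiplier:,.0f}x")
    print(f"Effective compute: ~{effective_multiplier:,.0f}x over {YEARS} years")

Naively compounding both rates over that horizon yields an effective-compute multiplier in the millions. That is an illustration of how fast the curves bend, not a forecast; neither rate can hold indefinitely.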

[4] At this point, the only honest answer is: no one knows. Genius-level human intelligence is rare, and even the rarest genius-level intelligence has limits. It cannot solve what we currently label as "intractable problems." It cannot provide us with a way to exceed the speed of light. It cannot (yet) understand itself and why some brains function so much better than others. The human brain iterates at biological speed to improve solutions to problems and increase understanding, and slowly, we do see advancements. But an AGI can iterate at computer speed, millions of times faster than the human brain. Will that result in unique and extraordinary problem solutions, insights, strategies, and accomplishments? Yup. Could it also result in a level of malevolence we cannot even understand, much less combat? Yup. Gonna be interesting.