Acceleration
If you were to ask the average non-technical person to define the word "acceleration," you'd get a lot of answers that really aren't accurate. Some would say, "It's how fast something goes." Wrong. Others would say it has to do with speed. True, but not good enough.
Stated in non-mathematical form, acceleration is the change in velocity over time. When we say something is accelerating, what we mean is that its velocity is increasing (or decreasing) from second to second. The change in velocity can be so small it's almost imperceptible, or it can be joltingly large.
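In symbols, average acceleration is simply the change in velocity divided by the elapsed time:

a = Δv / Δt

where Δv is the change in velocity and Δt is the time over which that change occurs.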
Until very recently, the acceleration of advancements in Artificial Intelligence (AI) was relatively small, but over the past year it has increased to the point that it is now joltingly large. The public is largely unaware of this, except at the very edges, but the tech community has watched it happen and, because of its ramifications, has become alarmed.
Over the past few months—here and, more importantly, here—I posted on the increasingly rapid acceleration of advancements in AI and their inexorable march toward AGI—artificial general intelligence—a technology that will have a historically profound impact on human beings—greater than the printing press, the Internet, or just about anything else.
The BIG question is whether that impact will be mostly good or mostly bad.
John Stokes reports on a recent conversation between Lex Fridman and Sam Altman, the CEO and co-founder of OpenAI (the creator of ChatGPT and its variants), along with an analysis of the GPT-4 Technical Report, and draws some conclusions that are worth noting. He writes:
The recent rollout of OpenAI’s GPT-4 model had a number of peculiar qualities to it, and having stared at this odd fact pattern for a while now, I’ve come to the conclusion that Altman is carefully, deliberately trying to engineer what X-risk nerds call a “slow take-off” scenario — AI’s capabilities increase gradually and more or less linearly, so that humanity can progressively absorb the novelty and reconfigure itself to fit the unfolding, science-fiction reality.
Here’s my thesis: The performance numbers published in the GPT-4 technical report aren’t really like normal benchmarks of a new, leading-edge technical product, where a company builds the highest-performing version it can and then releases benchmarks as an indicator of success and market dominance. Rather, these numbers were selected in advance by the OpenAI team as numbers the public could handle, and that wouldn’t be too disruptive for society. They said, in essence, “for GPT-4, we will release a model with these specific scores and no higher. That way, everyone can get used to this level of performance before we dial it up another notch with the next version.”
To put this comment in simple terms: Stokes thinks that the underlying "deep learning stack" (the large language model, or LLM, at the core of ChatGPT) allows it to do far more than it currently does, but that it is being purposely constrained so that its acceleration toward AGI can be managed in a way that is neither disruptive nor filled with unacceptable risk. The goal is to have the LLM scale predictably.
At the risk of being overly simplistic, it's like feeding only a small amount of plant food to a plant that can grow to gargantuan heights, so that it grows more slowly and predictably. In this AI metaphor, the "plant food" is the number of parameters that allow the LLM to grow and "learn." By limiting the parameters, the acceleration is kept controllable.
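To make the "plant food" idea slightly more concrete: empirical scaling-law studies (e.g., Kaplan et al., 2020) found that an LLM's test loss falls roughly as a power law in its parameter count, which is what makes scaling predictable in the first place. Here is a minimal Python sketch of that relationship; the constants are borrowed from the published fit and are used purely for illustration, not as a claim about any particular model:

# Toy power-law scaling curve: loss falls smoothly and predictably
# as parameter count grows. Constants roughly follow the Kaplan
# et al. (2020) fit (Nc ~ 8.8e13, alpha ~ 0.076); illustrative only.
def toy_loss(n_params, n_c=8.8e13, alpha=0.076):
    return (n_c / n_params) ** alpha

for n in (1e8, 1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} parameters -> loss ~ {toy_loss(n):.3f}")

The curve is smooth: each tenfold increase in parameters buys a predictable improvement, which is exactly why capping the parameter count caps the capability.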
But what if the metaphorical plant is allowed as much plant food as is available—hundreds of billions or trillions of parameters? Does it grow out of control, with vines and roots spreading to places we don't want them, overwhelming our world in ways we can't even imagine, or does it grow straight up in the air while we watch in amazement? No one knows.
There's another word that many people have trouble understanding—exponential. Humans are good at thinking in terms of linear growth—it's part of our everyday world. We understand it and accept it without question. But exponential growth is a different matter. Mathematically, exponential growth is represented like this:
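x(t) = x₀ · e^(k·t)

where x₀ is the starting quantity, k is the growth rate, and t is time. The quantity multiplies by a constant factor over each equal interval of time, rather than growing by a constant amount.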
Things begin happening faster and faster, and the change that results is unsettling at best and downright frightening to many. It appears that AI has entered the exponential growth phase, where the velocity of change keeps accelerating until ... what?
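A quick way to feel the difference is to tabulate linear and exponential growth side by side. This toy Python snippet uses arbitrary rates (add 1 per step versus double per step); the numbers illustrate the shape of the two curves, not any measurement of AI progress:

# Toy comparison of linear vs. exponential growth.
# The rates are arbitrary; only the divergence matters.
linear = 1.0
exponential = 1.0
print(f"{'step':>4}{'linear':>10}{'exponential':>14}")
for step in range(1, 11):
    linear += 1.0        # linear: add a fixed amount each step
    exponential *= 2.0   # exponential: multiply by a fixed factor
    print(f"{step:>4}{linear:>10.1f}{exponential:>14.1f}")

After ten steps the linear column has reached 11 while the exponential column has reached 1,024; a few steps later the gap is beyond intuition.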