Screwdriver
In mid-February, I posted on the strange feeling one gets when reading some of the more sophisticated responses that have been elicited from variants of LLMs like ChatGPT. The examples I provided have a creepy feel and could be interpreted to imply that these models are somehow sentient. My conclusion at that time was:
Sure, many will argue it's simply a trick of language manipulation. But then again, isn't it possible that when humans "think," all we do is manipulate language to express ideas and create mental images that others can grasp? That's pretty much what LaMDA did in its conversation with Lemoine.
Over the next few years, we can and will move the goal posts, redefining what is required for sentience. But something BIG is happening, and it's happening right now. And yeah, I'm not sure if we are ready for it.
I didn't provide a basis for the arguments against sentience, and that's not fair. Here are a few insightful comments. The first is by an unnamed commenter and is directed toward Alex Berenson, who wrote about the possibility of sentience:
Hey Alex, love your work! But on this topic, you're a bit off. ChatGPT is a Large Language Model, which is just a predictive engine that tries to generate desired responses based upon the prompt. It can seem sentient, but it's just trying to provide requested information.

Berenson responds:
Point being, it should not be used as if it were sentient, because it isn't. Thus, throwing hypothetical questions at it will only result in bizarre attempts to generate an appropriate response.
The algorithms that go into this thing are incredibly dense. For example, one thing they did at OpenAI was to create an algorithm that converts every word, letter and I believe sentence into a numeric value, primarily so that traditional statistical analysis could be done on the corpus of text used to train the model. It's deep, but not really "thoughtful" per se. It's just a recommendation engine with tons of information stored in its database.
Also, it has a set of guardrails installed to prevent it from doing weird things, but those are somewhat clunky and not very effective. Moreover, Microsoft didn't properly port over those guardrails (I'm told) when they attached it to Bing. So, that's part of the challenge here.
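Fair enough. As an aside, for anyone curious what "converting every word into a numeric value" looks like in practice: it is essentially tokenization followed by embedding. Below is a minimal, made-up sketch of the idea in Python. It is not OpenAI's actual pipeline; the tiny vocabulary and random vectors are stand-ins for a learned tokenizer and learned embeddings.

```python
# Toy sketch: text becomes numbers before any "prediction" happens.
# The vocabulary, ids, and vectors here are invented for illustration only.
import random

corpus = "the cat sat on the mat".split()

# 1. Tokenization: assign each distinct word an integer id.
vocab = {word: idx for idx, word in enumerate(dict.fromkeys(corpus))}
token_ids = [vocab[word] for word in corpus]
print(vocab)       # {'the': 0, 'cat': 1, 'sat': 2, 'on': 3, 'mat': 4}
print(token_ids)   # [0, 1, 2, 3, 0, 4]

# 2. Embedding: replace each id with a vector of numbers; the model's
#    statistics are computed over these vectors, not over "meanings."
#    Real models learn these vectors; here they are random placeholders.
random.seed(0)
embeddings = {idx: [round(random.uniform(-1, 1), 2) for _ in range(4)]
              for idx in vocab.values()}
print(embeddings[vocab["cat"]])   # a 4-number vector standing in for "cat"
```

Which supports the commenter's point: underneath, the machinery is arithmetic over numbers, however impressive the output.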
And in response to those objections, we get this comment from a second commenter, "Caleb Beers," on the Unreported Truths substack:
My answer.
My question to you - as to everyone else who makes these perfectly reasonable objections - is simply this: how do you know that the way ChatGPT is generating its responses and claimed awareness of those responses is meaningfully different - or produces a meaningfully different result - than the way human brains generate consciousness (and self-consciousness)?
Grab a small object that is near you, right now, throw it up in the air, and catch it.
To control a robotic arm that does the same thing, a computer has to do a lot of fancy math. But you didn't do any math there. You just caught the object. The fact that you and the computer do the same thing doesn't mean that you did it in the same way -- in computer speak, the same abstraction does not mean the same implementation.
Another example: you can screw and unscrew a simple Phillips-head screw with your fingers. You can also use a screwdriver. But that doesn't mean that the screwdriver is similar to your hand in any meaningful way, or that your hand is some kind of "biological screwdriver".
What LLMs do is just fundamentally dissimilar to what the human brain does on many levels. Did you do any statistical analysis in order to figure out what to write in this article? No? Then you're not doing things the way that the LLM does. The LLM is not a human brain. It's not even "like" a human brain, any more than a screwdriver is "like" a hand.
Until I see a convincing case to the contrary, I'm gonna assume that it's a dumb machine. It doesn't "want" anything.
This is also why I'm skeptical of the possibility of being "uploaded to" a computer. The digital switches flipping aren't "you" any more than a reflection of you in a mirror is "you".
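Beers's catching example is worth dwelling on for a moment. The "fancy math" a robot would need is real; the rough sketch below, using idealized projectile kinematics that I am supplying purely for illustration, shows the kind of explicit calculation a controller performs that a person never consciously does.

```python
# Toy illustration of the explicit calculation behind "catch the object":
# predict where a tossed object returns to hand height, ignoring air drag.
# The numbers and function names are invented for this example.
G = 9.81  # gravitational acceleration, m/s^2

def time_to_return(v_up: float) -> float:
    """Seconds until an object tossed upward at v_up m/s falls back to its release height."""
    return 2.0 * v_up / G

def intercept_point(x0: float, vx: float, v_up: float) -> tuple[float, float]:
    """Horizontal catch position and catch time, given start x0 and horizontal velocity vx."""
    t = time_to_return(v_up)
    return x0 + vx * t, t

x_catch, t_catch = intercept_point(x0=0.0, vx=0.3, v_up=2.0)
print(f"move the hand to x = {x_catch:.2f} m and arrive within {t_catch:.2f} s")
# A real arm controller would go on to solve inverse kinematics and plan joint
# torques to get the gripper there in time. You, catching your keys, do none
# of this arithmetic -- same abstraction, very different implementation.
```

None of which settles the consciousness question, but it does make the abstraction-versus-implementation distinction concrete.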
And finally, yet another anonymous commenter on Berenson's substack provides a worthwhile take:
... When we say “artificial intelligence” we aren't saying “artificial consciousness”.
With one small step in the human brain, 200,000 years ago, we developed the ability to represent the world symbolically. Language is based on symbolism. Take, for instance, someone at a restaurant. They read on the menu that a dish has lima beans in it. They ask the server, "Does this have lima beans?" "Yes, it does." "I don't like lima beans; I'll order something else."
In this exchange, the words "lima beans" are read, said, and understood, but at no point are lima beans ever actually encountered. The customer is choosing a dish based on symbolic representation. No other animal does this. Habituation and classical conditioning might mean that a dog, being fed lima beans day after day, will eventually stop going to the bowl if it doesn't like lima beans, but this is called "direct contingency learning" and is different from the symbolic learning described by relational frame theory (RFT).
This ability is incredible. Students can learn about Mesopotamia, Russia, and WWII without ever encountering any of them directly, through the use of language. This type of learning is also weaponized by manipulative people ... .
This is different than wisdom or intuition. A creature armed with language can derive networks of symbols and do high order thinking based on these networks ...
Hmmm. All three commenters make salient arguments. We biological humans don't do conscious statistical analysis before we think or speak; yes, both your fingers and a screwdriver can apply torque to a screw, but they are fundamentally different mechanisms, as are your brain and an LLM; and yes ... intelligence is not consciousness.
But things in AI are moving very fast. And there is always the black box metaphor. We may not fully understand what happens inside the "intelligence" black box, but if the outputs of the human black box can't be distinguished from the outputs of an AI black box, we get ever closer to an AGI. Stay tuned.