This Time Magazine op-ed on AI has been making the rounds with its clear and forthright message of “we’re all gonna die.”
Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.” It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers…
If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.
And a much more down-to-earth one.
This week, US Senator from Connecticut Chris Murphy said on Twitter that ChatGPT taught itself chemistry of its own accord without prompting from its creators. Unless Sen. Murphy knows something that hasn’t been made public, ChatGPT as we know it can do no such thing.
To be clear, if ChatGPT or any other generative AI model sought out and learned something on its own initiative, then we would truly have crossed over into the post-human world of Artificial General Intelligence, since demonstrating independent initiative would unquestionably be a marker of true human-level intelligence.
But if a developer tells a bot to go out and learn something new and the bot does as it’s asked, that doesn’t make the bot intelligent.
Doing as you’re told and doing something of your own will may look alike to an outside observer, but they are two very different things. Humans intuitively understand this, but ChatGPT can only explain this difference to us if it has been fed philosophy texts that discuss the issue of free will. If it’s never been fed Plato, Aristotle, or Nietzsche, it’s not going to come up with the idea of free will on its own. It won’t even know that such a thing exists if it isn’t told that it does…
ChatGPT, at its core, is a system of connected digital nodes with numerical weights assigned to each connection, and it produces a statistically likely output for a given input. That isn’t thought; it’s literally just math that humans have programmed into a machine. It might look powerful, but the human minds that made it are the real intelligence here, and the two shouldn’t be confused.
Of course, there’s potentially a whole lot more to AI than ChatGPT or a handful of others that are making the news, but the fundamental point remains correct. Software does what we tell it to do. Weaponized AI can be potentially quite deadly, but we’re the only ones who can weaponize it. The ‘sorcerer’s apprentice’ expectation that technology will come to life and run amok, from Frankenstein onward, is a parable about chaos and hubris that tells us a good deal about ourselves, but little about our artificial inventions.
Guns don’t jump off the shelves and shoot themselves. AIs won’t kill us because of their own motives, but because we tell them to do it. In the short term, they’ll be used to manipulate us and lie to us. Our tools are mirrors of ourselves. AI is just another mirror and a particularly apt one because it can copy us more closely than any living creature has before. We adapted canines to serve us and be our friends. And then we began trying to build machines that could do so. But dogs have a measure of personality and autonomy, the tech we make never will, and as our society becomes more solipsistic and narcissistic, that may suit us just fine.
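The “connected nodes with weights” description above can be made concrete with a toy sketch. This is emphatically not ChatGPT’s actual architecture (real models learn billions of weights rather than having them typed in, and the vocabulary, features, and numbers below are invented purely for illustration); it only shows what “literally just math” means in practice: multiply the input by stored numbers, add them up, squash, and pick the highest score.

```python
import math

# Hypothetical, hand-picked weights: rows = input features, columns = outputs.
# In a real model these numbers come from training, not from a human typing them in.
VOCAB = ["cat", "dog", "pizza"]
WEIGHTS = [
    [2.0, 0.1, -1.0],   # feature 0: "is furry"
    [0.5, 2.5, -0.5],   # feature 1: "barks"
    [-1.0, -1.0, 3.0],  # feature 2: "is edible"
]

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict(features):
    """Weighted sums followed by a squashing function -- nothing but arithmetic."""
    scores = [
        sum(f * WEIGHTS[i][j] for i, f in enumerate(features))
        for j in range(len(VOCAB))
    ]
    probs = softmax(scores)
    best = max(range(len(VOCAB)), key=lambda j: probs[j])
    return VOCAB[best], probs

if __name__ == "__main__":
    word, probs = predict([1.0, 1.0, 0.0])  # input: furry and barks
    print(word, [round(p, 3) for p in probs])  # "dog" gets the highest probability
```

Scale that loop up by a few billion weights and wrap it around sequences of words instead of three toy features, and you have the flavor of what a large language model does: the output is fully determined by the input and the stored numbers.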
“Software does what we tell it to do.” Exactly! I get sick of telling people that. Liars have been saying we’re on the verge of sentient AI since the 80s. And now they’ve taken those lies a step further and claim they did create one. It’s a load of bullshit. Like you said, it will never be developed. We don’t even understand the human brain and we’re going to build an artificial one? Humbug.
The most common “AIs” are programs that can do what it used to take a human being to do, like the voice programs on phones and the facial recognition programs that law enforcement agencies use. They aren’t sentient. It’s just another idiotic computer term, like “menu.” Good luck ordering a pizza on your computer menu. Obviously computer geeks didn’t take enough English courses in college. Either that or they flunked them.
These cretins who go on about Skynet-type world takeovers need to be slapped. They’ve been watching too many “Terminator” movies. Even if it were possible to create an AI, it wouldn’t be able to conquer the world any more than a toaster could.
What a load of shit.
True. We don’t understand how consciousness comes out of the human brain, so we’re certainly not duplicating it.
If there is one source I wouldn’t trust to explain AI it would be an MSM rag.
I remember an AI seminar at UC Berkeley in the 1980s. We were all sitting around drinking Chardonnay, and the atmosphere was like a funeral; none of us had any hopes for AI at that time. Since then AI has exploded.
I followed AI development in chess, for example. I think the effect has mostly been positive, increasing our understanding of chess, even though AI is now better than humans at playing it.
Lots of great sci-fi on AI, and much of it crosses into horror, like John Varley’s “IF YOU WISH TO KNOW MORE PRESS ENTER” or Harlan Ellison’s “I Have No Mouth, and I Must Scream.” My favorite in the AI genre is probably the Bolo stories, created by Keith Laumer and expanded on by several other well-known writers.
Most people confuse AI with robotics that use simulated personalities to make the machine merely seem intelligent. The Berserker series created by Fred Saberhagen, in which ancient alien doomsday machines programmed to destroy all life use a random generator based on decaying isotopes to give them unpredictability and creativity, is kind of a hybrid.
That series is one of my favorites, and it does a great job of exploring how the peculiarities and faults of humans, and the natural abilities of other life on Earth, can outsmart and/or defeat intelligent machines. Things like the use of mnemonic devices (O Be A Fine Girl Kiss Me Sweetie) to tell machine from human, the incredible abilities and territoriality of the mantis shrimp, the pressure that can be generated by fruit growing in a confined space, and the peaceful, artistic natures or martial ineptitude of some of WWI’s deadliest fighter pilots, just to name a few.
Many stories have superhuman machine avatars with human personalities downloaded into them, like the “Safehold” series by David Weber.
Or Blade Runner’s empathy test. Varley, though, really seemed to have anticipated the reach of the NSA.
We do design systems that appear to understand us and cater to us, but like the occasional humanoid robot on display, that’s meant to fool us and doesn’t make for any kind of intelligent awareness.
That was a great adaptation of the novel “Do Androids Dream of Electric Sheep” (I own it too, lol; if my house ever catches fire, it’ll be spectacular). Thanks for reminding me of the empathy test. The novel was mentioned in my original post (though it didn’t include the empathy test), which was over 500 words too long, so that part had to be edited out. I opted for example stories fewer might know of. I’m a reading pusher, always trying to get people hooked on something.
I got an interesting education from a roommate on the incredible reach of the NSA back in the 80s. We were taking different courses at the same post, and the DoD put us up in a cheap off-base apartment. He couldn’t divulge classified info, so I had to be careful of the wording of my questions. He could only make vague statements, and I had to noodle out the implications. The implications were scary, and when I later found the Varley story it reminded me of it.
I wish I could share your optimism.
The Luddites may have their absolution one day.
There are also biological tools. Those don’t stay in the toolbox even today.
An old question:
Will man ever create a machine more intelligent than himself?
If so, could that machine make a machine smarter than ITSELF? And if it could, would it?
The real miracle would be making something ex nihilo instead of using God’s dust.
Ironic that mankind, who took several thousand years to discover bacteria and viruses and develop engines and airplanes, determines he can make himself in his own image, but declares there is no God.
Man will never invent a new fruit or vegetable or a new animal. All man’s ideas are takeoffs on God’s originality.
In fact, humans can’t even construct one living cell, plant or animal. The list of what humans can do pales in comparison to what humans cannot ever do.
Easy to destroy; hard to build.
Hah! The same old adage I learned in the ’70s: garbage in, garbage out….