This Time Magazine op-ed on AI has been making the rounds with its clear and forthright message of “we’re all gonna die.”
Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.” It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers…
If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.
And a much more down-to-earth one.
This week, US Senator Chris Murphy of Connecticut said on Twitter that ChatGPT taught itself chemistry of its own accord, without prompting from its creators. Unless Sen. Murphy knows something that hasn’t been made public, ChatGPT as we know it can do no such thing.
To be clear, if ChatGPT or any other large language model sought out and learned something on its own initiative, then we would truly have crossed over into the post-human world of Artificial General Intelligence, since demonstrating independent initiative would unquestionably be a marker of true human-level intelligence.
But if a developer tells a bot to go out and learn something new and the bot does as it’s asked, that doesn’t make the bot intelligent.
Doing as you’re told and doing something of your own will may look alike to an outside observer, but they are two very different things. Humans intuitively understand this, but ChatGPT can only explain this difference to us if it has been fed philosophy texts that discuss the issue of free will. If it’s never been fed Plato, Aristotle, or Nietzsche, it’s not going to come up with the idea of free will on its own. It won’t even know that such a thing exists if it isn’t told that it does…
ChatGPT, at its core, is a system of connected digital nodes with numerical weights assigned to each connection, and it produces an output from an input by arithmetic alone. That isn’t thought; it’s literally just math that humans have programmed into a machine. It might look powerful, but the human minds that made it are the real intelligence here, and the two shouldn’t be confused.
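To make the “just math” point concrete, here is a minimal sketch in Python with NumPy. It is not ChatGPT’s actual architecture (a transformer with billions of learned weights), and the names vocab, embed, hidden_w, and output_w, along with the random numbers standing in for learned weights, are toy assumptions. But it shows the kind of operation involved: look up an array of fractional numbers, multiply and add, and pick the highest score.

```python
# A toy sketch of "connected nodes with weights": producing a next-token
# score is a chain of look-ups and multiply-adds over fractional numbers.
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and random stand-ins for the weights a real model
# would learn from text.
vocab = ["the", "cat", "sat", "on", "mat"]
embed = rng.normal(size=(len(vocab), 8))      # token -> vector of fractions
hidden_w = rng.normal(size=(8, 8))            # one "layer" of connections
output_w = rng.normal(size=(8, len(vocab)))   # scores for each next token

def next_token_scores(token: str) -> np.ndarray:
    """Pure arithmetic: look up a vector, multiply by weight matrices."""
    x = embed[vocab.index(token)]
    h = np.tanh(x @ hidden_w)                 # weighted sums plus a squashing function
    return h @ output_w                       # one score per vocabulary word

scores = next_token_scores("cat")
print(vocab[int(np.argmax(scores))])          # the "prediction" is just the top score
```

However many layers and weights the real systems have, nothing in this loop resembles initiative; the machine only ever evaluates the arithmetic it was built to evaluate.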
Of course, there’s potentially a whole lot more to AI than ChatGPT or the handful of other systems making the news, but the fundamental point remains correct. Software does what we tell it to do. Weaponized AI could be quite deadly, but we’re the only ones who can weaponize it. The ‘sorcerer’s apprentice’ expectation that technology will come to life and run amok, from Frankenstein onward, is a parable about chaos and hubris that tells us a good deal about ourselves, but little about our artificial inventions.
Guns don’t jump off the shelves and shoot themselves. AIs won’t kill us out of their own motives, but because we tell them to do it. In the short term, they’ll be used to manipulate us and lie to us. Our tools are mirrors of ourselves. AI is just another mirror, and a particularly apt one because it can copy us more closely than any living creature has before. We adapted canines to serve us and be our friends, and then we began trying to build machines that could do the same. But dogs have a measure of personality and autonomy; the tech we make never will, and as our society becomes more solipsistic and narcissistic, that may suit us just fine.