Artificial Intelligence (AI) may be the last thing humanity ever invents.
That’s a weird thing to think about, but it speaks to the enormity of inventing intelligence itself.
There are a few important ideas to keep in mind when assessing the risk of AI.
The first is to understand the varying levels of AI. There is “weak AI” (also called narrow AI), which is AI that can perform a single task. This already exists, and is used every day by most people. Google Search, for example, employs AI to interpret what the user is looking for and to rank the best results.
The next level of AI is “Artificial General Intelligence.” This is the point at which a machine can do anything a human can do, just as well as a human can. AI has not yet reached general intelligence, but many researchers believe it will be attained in the near future.
The final level is “Artificial Superintelligence.” This is the level at which the machine is several orders of magnitude more intelligent than humans. There are many competing theories on when and how this will be attained, but for the sake of this article, the two most common ones will be the focus.
First is the fast takeoff theory. Essentially, once a machine is as intelligent as a human, it can be programmed to recursively improve itself. This would lead to an exponential takeoff in the system’s capabilities until it could reach a level of intelligence as unfathomable to humans as a human’s intelligence is to an ant.
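As a toy illustration (purely illustrative, not a model of any real system), the fast-takeoff dynamic can be sketched as a loop in which each generation's gain in capability scales with its current capability; the rate and starting value below are arbitrary assumptions:

```python
def takeoff(initial_capability: float, improvement_rate: float, generations: int) -> list[float]:
    """Return the system's capability after each self-improvement cycle.

    Toy assumption: each cycle, the system improves itself in proportion
    to its own current capability (the smarter it is, the better it is
    at making itself smarter).
    """
    capability = initial_capability
    history = [capability]
    for _ in range(generations):
        # Gain scales with current capability, so growth compounds.
        capability += improvement_rate * capability
        history.append(capability)
    return history

# Hypothetical numbers: start at human level (1.0), 50% gain per cycle.
trajectory = takeoff(initial_capability=1.0, improvement_rate=0.5, generations=10)
```

Because each step's gain is proportional to the current level, the trajectory is exponential (multiplying by 1.5 every cycle here); that compounding is the mathematical intuition behind the fast-takeoff argument.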
Roughly half of the scientific community in this field subscribes to this scenario (these are very smart people, which makes it a bit crazy to think they consider this possible, if not likely). The other half believe the path to Artificial Superintelligence will be slower and more gradual. The one thing most of the community agrees on is that, at some point, superintelligence will be reached.
So, to recap, it is a near-certainty that humanity will create a machine several orders of magnitude more intelligent than itself. What does this mean?
Once again, the scientific community is pretty split on the question. Half think it will be the greatest thing to ever happen to humanity, while half think it could be the worst.
The possible benefits are massive. A superintelligent AI (ASI) could solve our energy problems, cure every disease, and possibly solve any problem humanity has.
For those who are skeptical of this, think about how humanity looks at an ant that can’t figure out its way around a stick. To us, it is quite obvious to just walk around the stick, but the ant may never figure that out. This is the way an ASI would look at something like renewable energy or wealth inequality.
The drawbacks, however, are just as big. One problem that tends to arise when people think about AI (as well as ASI) is that they anthropomorphize the machine.
This is the tendency to apply human characteristics to nonhuman systems. But just as an animal does not stop to think about the moral implications of its actions, neither would a machine.
A machine, when given instructions, will execute them exactly as given. It does not stop to consider the moral implications or the broader impact of what it does. It just executes.
So if a programmer gave an ASI algorithm to a computer, and that algorithm said something like “produce original designs for thank-you cards,” that computer would do everything in its power to produce art for thank-you cards.
This could mean eliminating humanity (to the machine, just an arrangement of matter that is not optimized for making thank-you card art), colonizing other planets and solar systems, doing whatever it had to do to produce as many original designs as it could.
Once again, for skeptics, think about the way humanity views less intelligent organisms. Sure, we like animals, but we have managed to wipe out thousands of species by accident through climate change and other human activity, without ever having intended to do so.
It’s not that our goal is to eliminate animal species, but it happens as a byproduct of us going about our lives.
AI could solve all of our problems, but it could also lead to our extinction. We might not know which one until it’s here.