An Exception to AI Exceptionalism

by Jasper Gilley

It’s not often that as much is made of a nascent technology as has been made in recent years of artificial intelligence (AI). From Elon Musk to Stephen Hawking to Bill Gates, the big names of technology have publicly asserted its incomparable importance, with varying levels of apocalyptic rhetoric. The argument usually goes something like this:

“AI will be the most important technology humans have ever built. It’s in an entirely different category from the rest of our inventions, and it may be our last invention.”

I love thinking about the future. I make a habit, for instance, of reminding anyone who will listen that in 50 years, the by-then-defunct petroleum industry will seem kind of funny and archaic, like the first bulky cell phones.

When I read quotes like the above, however, I feel kind of uncomfortable. In 1820, mightn’t it have seemed like the nascent automation of weaving would inevitably spread to other sectors, permanently depriving humans of work? When the internal combustion engine came into widespread use in the early 20th century, mightn’t it have seemed like motorized automatons would outshine humans in all types of manual labor, not just transportation? We’re all familiar with the quaint-seeming futurism of H.G. Wells’ The War of the Worlds, Fritz Lang’s Metropolis, and Jules Verne’s From the Earth to the Moon. With the benefit of hindsight, it’s easy to spot the anthropomorphization that makes these works seem dated, but is it a foregone conclusion that we no longer anthropomorphize new technologies?

Moreover, I suspect that when we put AI in a different category than every other human invention, we’re doing so by kidnapping history and forcing it to be on our side. We know the story of the internal combustion engine (newsflash: it doesn’t lead to superior mechanical automatons), but we don’t yet know the story of AI – and therein lies the critical gap that allows us to put it in a different category than every other invention.

By no means do I endeavor to argue that AI will be unimportant, or useless. It’s already being used en masse in countless ways: self-driving cars, search engines, and social media, to name a few. The internal combustion engine, of course, revolutionized the way that people get around, and could probably be said to be the defining technology of the 20th century. But I will bitterly contest the exceptionalism currently placed on AI by many thinkers, even if one of them is His Holiness Elon Musk. (Fortunately for me, Elon Musk is famous and I’m not, so if he’s right and I’m wrong, I don’t really lose much, and if I’m right and he’s wrong, I get bragging rights for the next 1,000,000 years.) I will contest the exceptionalism of AI on three fronts: the economic, the technological, and the philosophical.

The Economic Front

[Figures: world population and GDP per capita over time, both growing exponentially since roughly 1850.]

As you can see in these graphs, total human population and GDP per capita have been growing exponentially since around 1850, a phenomenon that I termed in a previous post the ongoing Industrial Revolution. I further subdivided that exponentialism into outbreaks of individual technologies, which I termed micro-industrializations, since the development curve of each technology is directly analogous to the graph of GDP per capita since 1850.

Since micro-industrializations occur only in sequential bursts during exponential periods (such as the ongoing Industrial Revolution), it would be fair to infer that they have a common cause: in the case of the Industrial Revolution, that cause would be the genesis of what we would call science. Though the specific technologies behind each micro-industrialization might be very different from one another (compare the internal combustion engine to the Internet), since they have a common cause, they might be expected to produce macroeconomically similar results. Indeed, this has been the case during the Industrial Revolution. Each micro-industrialization replaces labor with capital in some form (capital being money invested to make more money, a shockingly new concept in mass application). In the micro-industrialization of textiles, for instance, the capital invested in cotton mills (which were expensive at the time) replaced the labor of people sitting at home, knitting. This is absolutely an area in which AI is not exceptional. Right now, truly astonishing amounts of capital, invested by companies like Google, Tesla, Microsoft, and Facebook, threaten to replace the labor of people in a wide variety of jobs, from trucking to accounting.

Of course, if job losses were the only labor effect of a micro-industrialization, the economy wouldn’t really grow. In the industrial era, the jobs automated away have inevitably been more than offset by growth in adjacent areas. Haberdashers lost their jobs in the mid-1800s, but many more jobs were created in the textile industry (who wants to be a haberdasher anyway?). Courier services went bankrupt because of the Internet, but countless more companies were created by it, more than absorbing the job losses. It’s too early to observe either job losses or job creation from AI, but authoritative sources (such as Y Combinator and The Economist) seem to think that AI will conform to this pattern. AI will have a big impact on the world economy – but the net effect will be growth, just like every other micro-industrialization. Economically, at least, AI seems to be only as exceptional as every other micro-industrialization.

The Technological Front

But Elon Musk isn’t necessarily saying that AI might be humanity’s last invention because it puts us all out of a job. He’s saying that AI might be humanity’s last invention because it might exterminate us after developing an intelligence far greater than our own (“superintelligence,” in philosopher Nick Bostrom’s term). If this claim is true, it alone would justify AI exceptionalism. To examine the plausibility of superintelligence, we need to wade deeper into the fundamentals of machine learning, the actual algorithms behind the (probably misleading) term artificial intelligence.

There are three fundamental types of machine learning algorithms: supervised learning, unsupervised learning, and reinforcement learning. The first two generally deal with finding patterns in pre-existing data, while the third does something more akin to improvising its own data and taking action accordingly.

Supervised learning algorithms import “training data” that has been pre-categorized by a human. Once trained on this data, the algorithm will tell you which category any new data you feed it falls into. Examples include spam-checking algorithms (“give me enough spam and not-spam, and I’ll tell you whether a new email is spam or not”) and image-recognition algorithms (“show me enough school buses and I’ll tell you whether an arbitrary image contains a school bus”).
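To make this concrete, here is a minimal sketch of supervised learning in Python. The toy emails, labels, and the use of scikit-learn’s naive Bayes classifier are my own assumptions for illustration, not how any real spam filter is built.

```python
# A toy supervised-learning sketch (hypothetical data; assumes scikit-learn is installed).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Training data pre-categorized by a human: the email texts and their labels.
emails = [
    "win a free prize now",         # spam
    "limited offer, click here",    # spam
    "meeting rescheduled to noon",  # not spam
    "see you at lunch tomorrow",    # not spam
]
labels = ["spam", "spam", "not spam", "not spam"]

# Turn the text into word-count features and learn which counts go with which label.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)
model = MultinomialNB().fit(X, labels)

# Feed it more data: the algorithm tells you which category the new email falls into.
new_email = ["claim your free prize, click here"]
print(model.predict(vectorizer.transform(new_email)))  # -> ['spam']
```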

Unsupervised learning algorithms import raw, uncategorized data and categorize that data independently. The most common type of unsupervised learning is the clustering algorithm, which breaks data into similar chunks (e.g., “give me a list of 1 billion Facebook interactions and I’ll output a list of distinct communities on Facebook”).
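Here, similarly, is a minimal clustering sketch in Python. The two-dimensional toy points and the choice of k-means (via scikit-learn) are illustrative assumptions, standing in for the billion Facebook interactions.

```python
# A toy unsupervised-learning sketch (made-up 2-D points; assumes scikit-learn and NumPy).
import numpy as np
from sklearn.cluster import KMeans

# Raw, uncategorized data: two loose groups of points, but no labels are given.
points = np.array([
    [1.0, 1.1], [0.9, 1.0], [1.2, 0.8],   # one group
    [8.0, 8.2], [7.9, 8.1], [8.3, 7.8],   # another group
])

# The clustering algorithm breaks the data into similar chunks on its own.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.labels_)           # e.g. [0 0 0 1 1 1] -- the discovered groupings
print(kmeans.cluster_centers_)  # the center of each discovered group
```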

Reinforcement learning algorithms require a metric by which they can judge themselves. By randomly discovering actions that improve their performance on that metric, they gradually eliminate the randomness, becoming “skilled” at whatever task they have been trained to do, where “skilled” means better performance on the given metric. Recently, Elon Musk’s OpenAI designed a reinforcement learning algorithm that beat top professional players in one-on-one matches of Dota 2, a video game: “tell me that winning the video game is what I am supposed to do, and I’ll find patterns in the game that allow me to win.”¹
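As a concrete (and deliberately tiny) sketch of the idea, here is tabular Q-learning in Python on a made-up five-cell corridor of my own devising, nothing like the Dota 2 system. The environment, reward, and hyperparameters are illustrative assumptions; the metric is simply “reach the rightmost cell,” and the agent starts out acting randomly before its behavior is shaped by the reward.

```python
# A toy reinforcement-learning sketch: tabular Q-learning on a hypothetical 5-cell corridor.
import random

N_STATES, GOAL = 5, 4        # cells 0..4; reaching cell 4 is the rewarded "metric"
ACTIONS = [-1, +1]           # step left or step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(500):
    state = 0
    while state != GOAL:
        # Early on, actions are essentially random; over time the agent mostly
        # exploits what it has learned, keeping a little randomness for exploration.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Update the estimate of how good this action was, judged by the reward metric.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# The learned policy: from every cell short of the goal, step right (+1) toward it.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)})
```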

The first two types of algorithms aren’t terribly mysterious, and rather obviously won’t lead, in and of themselves, to superintelligence. When superintelligence arguments are made, they most frequently invoke advanced forms of reinforcement learning. Tim Urban of the (incredibly awesome) blog Wait But Why tells an allegory about AI that goes something like this:

A small AI startup called Robotica has designed an AI system called Turry that writes handwritten notes. Turry is given the goal of writing as many test notes as possible, as fast as possible, to improve her handwriting. One day, Turry asks to be connected to the internet in order to vastly improve her language skills, a request which the Robotica team grants, but only for a short time. A few months later, everyone on Earth is dead, killed by Turry, who is simply doing what it takes to accomplish her goal of writing more notes. (Turry accomplished this by building an army of nanobots and manipulating humans into taking actions that would, unbeknownst to them, further her plan.) Turry subsequently converts the Earth into a giant note-manufacturing facility and begins colonizing the galaxy to generate even more notes.

In the allegory, Turry is a reinforcement learning algorithm: “tell me that writing signatures is what I am supposed to do, and I’ll find patterns that allow me to do it better.”

Unfortunately, there are two technical problems with the notion of even an advanced reinforcement learning algorithm doing this. First, gathering training data at the huge scales necessary to train reinforcement learning algorithms in the real world is problematic. At first, before they can gather training data, reinforcement learning algorithms simply take random actions; they improve specifically by learning which random actions worked and which didn’t. It would take time, and deliberate effort on the part of the machine’s human overlords, for Turry to gather enough training data to determine with superhuman accuracy how to do things like manipulate humans, kill humans, and build nanobots. Second, as Morpheus tells Neo in The Matrix, “[the strength of machines] is based in a world that is built on rules.” Reinforcement learning algorithms become incredibly good at things like video games by learning the rules inherent in the game and proceeding to master them. Whether the real world is based on rules probably remains an open philosophical question, but to the extent that it isn’t (and it certainly isn’t to the extent that Dota 2 is), it would be extremely difficult for a reinforcement learning algorithm to achieve the sort of transcendence described in the Turry story.

That being said, these technical reasons probably don’t rule out advanced AI being used for nefarious purposes by terrorists or despots. “Superintelligence” as defined in the Turry allegory may be problematic, but insofar as it might be something akin to nuclear weapons, AI still might be relatively exceptional as micro-industrializations go.

The Philosophical Front

Unfortunately, a schism in the nature of the universe presents what is perhaps the biggest potential problem for superintelligence. The notion that superintelligence could gain complete and utter superiority over the universe and its humans relies on the axiom that the universe – and humans – are deterministic. As we shall see, to scientists’ best current understanding, the universe is not entirely deterministic. (The term deterministic, when applied to a scientific theory, describes whether that theory eliminates all randomness from the future development of the phenomenon it describes. Essentially, if a phenomenon is deterministic, then what it will do in the future is predictable, given the right theory.)

Right now, there are two mutually incompatible theories explaining the physical universe: the deterministic theory of general relativity (which covers the realm of the very large and very massive) and the non-deterministic theory of quantum mechanics (which covers the realm of the very small and very un-massive). So at least some of the physical universe is, to our best current understanding, non-deterministic.

The affairs of humans, moreover, are probably equally non-deterministic. Another interesting property of the universe is the almost uncanny analogy between the laws of physics and those of economics. Thermodynamics, for instance, is directly analogous to competition theory, with perfect competition corresponding mathematically to the heat death of the universe. If the physics-economics analogy is any guide, at least some of the laws governing human interactions (e.g., economics) are also non-deterministic. This stands to reason: physics becomes non-deterministic when you examine the fundamental building blocks of the universe – subatomic particles like electrons, positrons, quarks, and so on. Economics would likewise become non-deterministic when you examine its fundamental building blocks – individuals. It would take a philosophical leap that I don’t think Musk and Company are prepared to make to claim that all human actions are perfectly predictable, given the right theories.

If neither the universe nor its constituent humans are perfectly predictable, a massive wrench is thrown into the omnipotence of Turry. After all, how can you pull off an incredible stunt like Turry’s if you can’t guarantee what humans will do?

The one caveat to this line of reasoning is that scientists are actively searching for a theory of quantum gravity (QG) that would (perhaps deterministically) reconcile general relativity and quantum mechanics. If a deterministic theory of quantum gravity is found, a deterministic theory of economics might also be found, which a sufficiently powerful reinforcement learning algorithm might be able to discern and use for potentially harmful ends. That being said, if a theory of quantum gravity is found, we’ll probably be able to build the fabled warp drive from Star Trek, so we’ll be able to seek help from beings more advanced than Turry (plus, I’d be more than OK with a malign superintelligence if it meant we could have warp drives).

So if AI is only as exceptional as every other micro-industrialization, where does that leave us? Considering that we’re in the middle of one of the handful of instances of exponential growth in human history, maybe not too poorly off.


I’d love to hear your thoughts on this post, whether you agree or disagree. Feel free to post them in the comment section below!

If you enjoyed this post, consider liking Crux Capacitor on Facebook, or subscribing to get new posts delivered to your email address.


1 – OpenAI

Featured image is of HAL 9000, from Stanley Kubrick’s 1968 film 2001: A Space Odyssey.

2 thoughts on “An Exception to AI Exceptionalism”

  • January 6, 2018 at 5:01 am

    I agree with the broad contours of your argument, Jasper, and I do believe that a questioning of what you term the “sensationalist” point of view is necessary. I was convinced by your argument until you arrived at the section entitled “The Philosophical Front” – at that point, I am not sure your implicit negative argument is sufficient.

  • September 4, 2017 at 5:25 am

Interesting read, JG, but I see a few errors in your reasoning; I need some time to write them out. One issue is the three types of algorithms, where you dismiss the second one as mostly unimportant. The other thing I think you need to take into account is the importance of intelligence, which made us the most dominant species on the planet. Now we are dealing with a technology that puts something above us in the food chain.

