The State of AI to Come

A unique opportunity in the present to design a future that really works for us
Artificial Intelligence
Break Through Science
The State of AI to Come

AI is among us. Once the province of science fiction, AI is increasingly part of our reality, mundane and otherwise. It appears, often invisibly, in recommendation engines, search algorithms, driver-assist technology, smart devices, drone warfare, medicine, and a thousand other places. But what’s intelligent about it? Is it really all that artificial? And crucially, now that it’s here — where do we want it to go?

Wherever it’s going, it isn’t there yet, but philosopher and futurist Nick Bostrom argues that that makes now the perfect time to plot a trajectory for AI’s development. Bostrom is a philosophy professor at Oxford University and the founding Director of the Future of Humanity Institute. He is perhaps best known for his book Superintelligence: Paths, Dangers, Strategies, published in 2014. Rather than relying on the “build it because we can” ethos that has powered much of technological history, and then scrambling to mitigate (or adapt to) unwanted or unanticipated effects, Bostrom advocates unpacking the elided ethics in “what we can build” and considering “what we should build” for the future we want.

A Categorically Different Technology

AI differs from other technological developments in that it is a type of meta-technology: it is used in other technologies and systems both to run them and coordinate between them. AI is technology about thinking, something that, in time, may not only effectively rival human intelligence, but could even outthink us, in both speed and/or complexity. 

This situation offers potentially strong benefits: “AI is a much more general-purpose technology,” says Bostrom, “one that could shape all areas of the economy and could advise us.” It may also, however, have potentially troubling consequences: Bostrom considers advanced AI a possible “existential threat” to humanity. And that’s aside from any question of sentience: one could imagine a “20 GOTO 10” type of loop where the processes running the operation optimize the humans right out of the equation. (Bostrom has famously offered the thought experiment of an efficiency-optimizing paper clip factory.) And that’s if everything is working well; unintended and potentially catastrophic disruptions could result from the inevitable glitch or breakdown.

It’s not 10,000 years from now. I don't think it's next year, but I do think it's something that might happen, with some degree of probability, in the lifetime of some of us.

In terms of when an advanced AI might be making an appearance, Bostrom acknowledges substantial uncertainty: “It’s not 10,000 years from now. I don't think it's next year, but I do think it's something that might happen, with some degree of probability, in the lifetime of some of us.” He is clear, however, that the time to plan for this eventuality is now.

Considerations of AI are not all warnings of and safeguards against a future dystopia. Instead, advanced AI represents a unique opportunity in the present to design a future that really works for us, especially at a point in our development where systems -- ranging from the global supply chain, to the stock market, to the climate -- have become too complex to manage in its absence.

Interestingly, we may also need to consider the question of “what is good for AI”? In the same way that we involve ethics in our treatment of animals, from beloved pets to lab mice, we may need to consider an ethics of an advanced AI: how we should treat it, and how AI’s should act toward each other, as well as a meta-ethics for a blended (AI and biologicals) society.

The Issues

  1. Minimize potentional harm.
    As we develop and innovate in AI, it is imperative to guard against three main categories of potential harm, which resemble, to a degree, Isaac Asimov’s laws of robotics:
    • Making sure the AIs don't harm us
    • Making sure we don't harm each other using AI tools or weapons
    • Making sure we don't harm the AIs if they attain moral status
  2. Near- and long-term planning.
    We need to solve problems occurring now, as well as prevent them in the future. Much of the near-term needs have been instrumental. Facial recognition software results revealed discrimination in the algorithms, while driver-assist technology continues to confront the trolley problem; the search for directed near-term solutions will be a growth area. Long-term planning, on the other hand, may need to be both more general and more unusual. Features of AI might produce unexpectedly undesirable outcomes, as for example when high-frequency trading (HFT) algorithms, which enable the accumulation of wealth by utilizing fractions-of-a-second differentials in market value, produced the trillion-dollar “Flash Crash” of the stock market in 2010.
  3. A radically different intelligence.
    Advanced AI has the potential to outthink us, as well as think in different ways. As such, it represents a possible existential risk for humanity. We need to think of ways in which an advanced AI may and may not share certain characteristics of humans, including its structure, motivation, implementation, speed, values, and any sort of recursive learning abilities, as well as how to ensure its alignment with the three categories of “do no harm” above.
  4. Moral Status.
    We need to establish an ethical framework for future digital minds. “You have digital minds of increasing sophistication,” says Bostrom, “and sort of work could happen before you have super intelligence or even human levels of intelligence.” Today’s assembly-line robot might not think, per se, while an advanced AI may be functionally indistinguishable from human-level intelligence. But even before that point, guidelines and standards for human-AI interaction, not unlike those for human-animal interaction, should be developed. “There is a fairly wide consensus,” says Bostrom, “on the basic idea that there are moral considerations relevant to how we treat animals. If you have a dog, you can't just kick it because you feel like it. That would be morally problematic. Figuring out how that would apply to hypothetical digital minds is a big theoretical project at the moment because a digital mind might be quite different from a biological mind and might have different needs.”
  5. Macro-strategic thinking.
    "Macro-strategy” is a term that Bostrom employs as an approach toward, as he puts it, understanding “the long-term consequences of actions we can take today.” He sees an opportunity to create a world in which AI assists us to manage complexity. Bostrom proposes the need to develop robust and diverse international governance at all levels, including multi-organizational, international, and grassroots-style groups to brainstorm and consider ramifications.

Our View of the Future of AI

Advanced AI can be a help or a hindrance, in some ways that we have imagined and in many ways that we have not yet. In order to use AI to help design, build, and manage a workable future, we need to start thinking now about how it might do that, and establish clear guardrails and directions. We need to also start getting clearer about exactly what we want, about what we might outsource to an AI, and what we won’t, and about what assumptions underlie some of our common asks. An AI won’t be making the same assumptions, and unless those are specified in the code, in the systems, we cannot assume the outcome will be “what humans would do, just faster.” We need to make sure that an AI that we build behaves and does what the creators intend for it to do. We also need to examine what it is that we intend in the first place. Overall, we need to surface and prioritize the “what should we do” from its current embedded position in the “what can we do.”

Macro-Strategic Projection

The questions of “what is good for humanity” and “what future(s) do we want” are not met with uniform answers, or nor with answers that will stand unwavering for perpetuity. To give due consideration to the various potential answers requires ongoing, interlocking systems of research that reach across cultures, states, industries, and other communities and organizations. 

A lot of the leading AI developers are actually quite idealistic. They want to do something that's good for the world, not just for their country, but for humanity...

Bostrom is clear-eyed about the current state of international cooperation, and even of conversation. “We live in something that's relatively close to anarchy at the international level,” he says, “with weak, not zero, but weak international governance institutions.” However, he offers a helpful distributed model for what this could look like in the various governmental, research, private, and NGO entities involved. “A lot of the leading AI developers are actually quite idealistic,” he says. “They want to do something that's good for the world, not just for their country, but for humanity. We could build on that sense of higher purpose and idealism and cosmopolitanism.”

Socrates famously pronounced that the unexamined not worth living. To make a future worth living, we must examine all aspects of life, including the artificial. By assuming a viewpoint akin to Bostrom’s “macro-strategic” perspective, we can better create and choose technologies that will be good for us – all of us.

Footnote

Fellows Associated with This Initiative