The questions kids ask about science aren’t always easy to answer. Sometimes, their little brains can lead to big places adults forget to explore. With that in mind, we’ve started a series called Science Question From a Toddler, which will use kids’ curiosity as a jumping-off point to investigate the scientific wonders that adults don’t even think to ask about. The answers are for adults, but they wouldn’t be possible without the wonder only a child can bring. I want the toddlers in your life to be a part of it! Send me their science questions, and they may serve as the inspiration for a column. And now, our toddler …
Q: Why balloon pop? — Jin F., age 3
Partly, it’s because technology is imperfect. The rubbery skin of a balloon is made up of molecules connected to other molecules in long chains, like a chemistry conga line, said James Kakalios, a professor of physics at the University of Minnesota and author of “The Physics of Superheroes.” Each chain can bend and flex. The molecules in this metaphorical line dance can bunch up in an awkward pile or stretch apart so that they’re barely touching. Their flexibility has limits, though, and those limits aren’t uniform. “There’s always some place where the bonds are a little stronger than average,” Kakalios said. “And there’s always a place where it’s a little weaker than average.”
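To put a toy number on that weakest-link point (this sketch is mine, not Kakalios's, and the bond strengths are made-up values): if every bond's strength wobbles a little around the same average, the chain holds only until its single weakest bond gives way, and the more bonds you string together, the weaker that weakest one tends to be, on average.

```python
import random

def chain_strength(n_bonds, mean=1.0, spread=0.1):
    """A chain fails at its weakest bond, so its overall strength is the
    minimum of many bond strengths that each vary a bit around the average."""
    return min(random.gauss(mean, spread) for _ in range(n_bonds))

# Longer chains have a weaker weakest link, on average.
for n in (10, 1_000, 100_000):
    trials = [chain_strength(n) for _ in range(100)]
    print(n, "bonds -> typical strength", round(sum(trials) / len(trials), 3))
```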
But the balloon going pop is also kind of your own dang fault. You are, after all, the one who blew a bunch of air into it, which stretched the molecules out enough that the weakest link finally broke. More importantly, when you inflate a balloon, you're increasing the air pressure inside relative to the outside. So when the technology fails, the densely packed air molecules that were trapped inside the balloon fly free, expanding outward in a pressure wave that creates a sound — i.e., it fails with a bang. (And, if the person blowing up the balloon is 3, probably also with a whimper.)
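For a rough sense of how much extra air you've packed in (the balloon volume and the 10 percent overpressure below are assumptions for illustration, not measurements), the ideal gas law, n = PV/RT, ties the number of molecules inside to the pressure you've built up:

```python
# Ideal gas law, n = PV / (RT): how many moles of air fit in the balloon?
# All of the specific numbers here are assumptions for illustration.
R = 8.314                    # gas constant, J / (mol * K)
T = 293.0                    # room temperature, K
V = 0.014                    # volume of a ~30 cm party balloon, m^3 (assumed)
P_OUT = 101_325.0            # atmospheric pressure, Pa
P_IN = 1.10 * P_OUT          # ~10 percent overpressure inside (assumed)

moles_at_rest = P_OUT * V / (R * T)   # what that volume would hold at 1 atm
moles_inside = P_IN * V / (R * T)     # what's actually packed in there

surplus = moles_inside - moles_at_rest
print(f"{surplus:.3f} extra moles of air")
print(f"{surplus * 6.022e23:.1e} extra molecules waiting to rush out")
```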
“You can keep it from popping by never inflating it,” Kakalios said. But, come on. That doesn’t really solve the problem. If you don’t inflate it, a balloon is useless and, frankly, kind of disgusting — a piece of floppity rubber, wobbling around, smelling of off-gassing petrocarbons. If you do inflate it, it metamorphoses into something wonderful — floaty like a snowflake, bouncy like a ball, capable of eliciting giggles from the stoniest of faces. You can make a joy machine … but it comes with a risk of exploding in your face. And this is what connects a simple question about a balloon to an emerging branch of research that’s enlisting philosophy and science in a quest to save the world.
A balloon popping is sort of the toddler-stakes equivalent of some of the bigger, scarier problems faced by grown-up humanity. There are many places where humans are grappling with technologies that bring us serious benefits … but also present serious risks. In some cases, the risks threaten the very lives the technology also improves. For instance, it’s wonderful in winter to hop into a heated car and drive to a grocery store, where, under electric lights, we can buy a fuzzy kiwi fruit grown thousands of miles away. But all of that relies on burning fossil fuels that change our atmosphere in ways that could lead to the deaths of millions of people. Or consider artificial intelligence. It seems like a great idea when you’re thinking about napping through your commute while a robot drives. It’s maybe less appealing when the robot is making decisions about who to kill in a war — and, either way, a miscalculation in programming could lead to entities with a lot of power that value human life in a very different way than we want them to.
Scientists call these kinds of problems — risks that pose serious, permanent threats to the future of humanity — existential or catastrophic risks. They’re particularly interested in anthropogenic existential risks — that is, the sort of terrifying, potentially humanity-destroying risk scenarios that we cause ourselves, through technology. That’s not to say technology is evil, said Max Tegmark, a physics professor at MIT and one of the founders of the Future of Life Institute, which studies existential risks. Technology makes our lives better, he said, just like a balloon improves a toddler’s day. “But it’s not enough to just keep building more powerful technology,” Tegmark said. “We have to educate and develop wisdom.”
People have been contemplating these problems for decades. For instance, the Doomsday argument is a classic thought experiment that essentially uses probability and logic to prove that the end is nigh-ish. In the timeline of human history, any given person is far more likely to be born in the middle 90 percent of the species’ run than in the first 5 percent or the last 5 percent. If we’re somewhere in that unremarkable middle, and the population keeps growing as fast as it has, then the supply of people who could still come after us runs out relatively quickly, which means human life on Earth is probably closer to its end than its beginning. Or so goes the argument, which people have been talking about for more than 30 years.
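Here's a back-of-the-envelope sketch of one common, rank-based version of that reasoning; the figures of roughly 100 billion humans born so far and roughly 140 million births per year are rough estimates used only for illustration:

```python
# A rough, rank-based version of the Doomsday argument, with assumed round numbers.
BORN_SO_FAR = 100e9       # ~100 billion humans born to date (rough estimate)
BIRTHS_PER_YEAR = 140e6   # ~140 million births per year today (rough estimate)

# If your birth rank is equally likely to sit anywhere among all humans who
# will ever live, then with 95 percent confidence you aren't in the first
# 5 percent -- which caps the total number of humans at 20x those born so far.
total_cap = BORN_SO_FAR / 0.05
future_cap = total_cap - BORN_SO_FAR
years_cap = future_cap / BIRTHS_PER_YEAR

print(f"95% confidence cap on humans ever born: {total_cap:.0e}")
print(f"years left at today's birth rate: about {years_cap:,.0f}")
```

The exact number isn't the point. The point is that fast population growth converts even a generous cap on future births into a modest number of remaining years.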
But the field has really come into its own in the past 15 years or so. Nick Bostrom, a philosopher at the University of Oxford, coined the term “existential risk” in 2002, and since then, no fewer than four academic institutions have arisen to study those risks — the Centre for the Study of Existential Risk at the University of Cambridge, the Future of Humanity Institute at the University of Oxford, the Future of Life Institute in Cambridge, Massachusetts, and the Global Catastrophic Risk Institute (which doesn’t have a physical headquarters).
Their work, by its nature, is heavily speculative. For instance, in January, Bostrom and other researchers from the Future of Humanity Institute published a paper that presented a mathematical model of what happens when individual actors can make decisions whose consequences reach well beyond themselves — one country or corporation planning a geoengineering project aimed at cooling the climate, say. Their conclusion: We need principles that discourage unilateral action when the risks cross species, borders and generations. A different paper, published in May by the founders of the Global Catastrophic Risk Institute, introduced a sort of flow chart to help researchers think through scenarios in which superintelligent robots could kill all humans.
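The flavor of that result can be captured in a toy simulation (a sketch in the spirit of that setup, not the paper's actual model): when several actors can each act alone on their own noisy estimate of a project's value, the odds that somebody goes ahead, even when the true value is negative, climb quickly with the number of actors.

```python
import random

def chance_someone_acts(true_value, n_actors, noise, trials=50_000):
    """How often does at least one actor go ahead unilaterally?

    Each actor sees the project's true value plus its own random error and
    acts if its estimate looks positive; one actor acting is enough."""
    hits = 0
    for _ in range(trials):
        if any(true_value + random.gauss(0, noise) > 0 for _ in range(n_actors)):
            hits += 1
    return hits / trials

# A project whose true value is modestly negative: the more parties that can
# each act alone, the likelier it is that someone's overestimate tips positive.
for n in (1, 5, 20):
    print(n, "actors:", round(chance_someone_acts(-1.0, n, 1.0), 3))
```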
This is not to say, however, that the study of existential risk doesn’t have immediate real-world impacts. For instance, existential risk is playing a role in shifting the way we think about risk analysis, said Kaitlin Butler, program director for the Science and Environmental Health Network and an associate with the Global Catastrophic Risk Institute. Instead of a model based on direct costs and direct benefits, she told me, a growing number of scientists and governments are moving toward a more complex system that tries to take into account issues like public health, social justice and indirect risks that play out over long periods of time.
Traditional cost-benefit analysis is geared toward assessing monetary benefits and local, direct risks, Butler said. It doesn’t do a good job of accounting for the risks posed to people hundreds of miles away or a few generations removed. Existential risk researchers are working on new ways to adapt risk-benefit equations to these complex problems.
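As a bare-bones illustration of the gap Butler describes (every number below is invented for the example), a tally that counts only direct, local costs and benefits can look favorable, while the same project looks very different once a small probability of a far-reaching, long-term harm is priced in:

```python
def naive_net_benefit(direct_benefit, direct_cost):
    """The traditional tally: local, monetary, near-term."""
    return direct_benefit - direct_cost

def long_view_net_benefit(direct_benefit, direct_cost,
                          p_catastrophe, catastrophe_harm):
    """The same tally plus the expected harm of a rare failure whose costs
    land on people far away or generations removed."""
    return direct_benefit - direct_cost - p_catastrophe * catastrophe_harm

# Invented numbers, in arbitrary units:
print(naive_net_benefit(100, 60))                        # +40: looks worth doing
print(long_view_net_benefit(100, 60, 0.001, 100_000))    # -60: the rare disaster dominates
```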
You can see this in how scientists are thinking about gain-of-function research — a branch of bioengineering in which pathogens are genetically changed to give them abilities they haven’t evolved in nature. Altering a bird flu strain in the lab so that it could infect humans can help scientists better predict how and when real-world viruses make that leap. But the potential drawbacks are also pretty obvious — obvious enough that the Obama administration put a moratorium on funding such studies in 2014. Scientists at the Future of Humanity Institute have proposed accounting for indirect risks by creating policies that would force scientists to think about (and agree to take responsibility for) the far-reaching costs of a gain-of-function experiment going wrong in the grant applications they write to fund it. The idea of building financial responsibility into the grant-writing process has been adopted by scientists as one possible way out of a research stalemate that’s now 2 years old.
Artificial intelligence — and the many, many ways in which it could go wrong — is another major focus of existential risk research. Besides the flow chart, there have been studies aimed at thinking philosophically about how machines think, how humans think and how we might miscommunicate our values to a machine. It’s not always clear to existential risk experts what the right answers are. For instance, while we were talking, Tegmark at first told me that it would be great if artificial intelligence could learn how to treat humans by watching our behavior. But as the conversation went on, he acknowledged that this strategy could backfire spectacularly — after all, humans don’t necessarily treat one another the way we would want to be treated by a supercomputer. And the potential risks of AI go beyond exotic, sci-fi scenarios about Skynet. You have to worry about basic hardware stability, too. It’s one thing if your cellphone gets bricked. It’s another thing entirely if an autonomous computer that operates a nuclear power plant has a catastrophic failure.
Existential risk research is having an impact on the companies and scientists developing artificial intelligence, Tegmark said, because it’s prompting people to spend more time thinking about what could go wrong and how to prevent it. “For the longest time, people were focused on how to make this stuff work,” he said. “All the money was invested … in making it more powerful.” Now, he told me, money is starting to be funneled toward making the technologies safe, trustworthy and robust. Recently, that influence led to the formation of the Partnership on AI, an organization that brings together companies including Facebook, Google, IBM and Amazon to research and share safety data.
In general, the idea behind existential risk is that, by thinking about what the worst case could be, we can figure out how to keep it from happening. If you can predict how a system could fail, then you can design future systems whose failure doesn’t necessarily mean catastrophe. “The balloon doesn’t pop if you just open the mouth and let the air out,” Butler said.