Doomsday scenarios? They’re always top of mind.


Nicolas Miailhe can’t stop thinking about the robot that’s going to take your job. And it’s not just robots that concern him, it’s also the contractor working for the latest Uber-like disruptor that plans to take over your industry. He’s also contemplating what will happen to our genetic sequences when we hand them over to doctors who promise personalized medicine, and how that data could fuel a new age of eugenics if it lands in the wrong hands. But he, of course, realizes that all of this worry will be for naught if climate change makes Earth unlivable.

Miailhe isn’t a crank on the fringe of society. He is a student at Harvard University’s John F. Kennedy School of Government, and he belongs to just one of several serious groups around Boston devoting their brainpower to preparing for the technological crises of the future.

Elon Musk, founder of the electric car maker Tesla, is helping to fund such studies. Stephen Hawking, the theoretical physicist, was one of thousands of people to sign a letter published this week that warns of the dangers of autonomous weapons.

In case you’ve missed it, threats to civilization as we know it are a hot topic right now, as anyone who’s been to a bookstore or a movie theater knows. But real-world scientists are thinking apocalyptically, too. Many believe that humans — sometime between inventing agriculture and reshaping the global climate — have created a new geological epoch. This age, informally called the Anthropocene, will be the subject of a new section at the National Museum of Natural History in Washington, D.C. The display will be set among the dinosaurs — perhaps as a reminder of just how precarious life for humans has become.

So it’s only natural that Boston and Cambridge, hubs for both technology and serious thinking, are sprouting groups that address doomsday anxieties.

Miailhe cofounded his organization, called the Future Society, in the fall of 2014. He and other students at the Kennedy School were worried about how technologies that are exploding today — nanotech, biotech, and others — will upend the way we live. Their first goal is to raise awareness of these topics at Harvard itself. Ultimately, they hope the university will dedicate more of its curriculum and resources to the same issues.

So far, Miailhe says, the Future Society has focused on questions it feels are “immediately threatening to our social contracts” — employability, for example. What’s the future of work? Who will retrain people who are pushed out of blue- and white-collar jobs by robots?

The society is also interested in the ways our genomic information could be used — for good and evil.

The goal of President Obama’s Precision Medicine Initiative is to harness a patient’s health data to create tailored medical treatments. But leaders will have to make sure, Miailhe says, that the flow of personal data doesn’t strip away our privacy — or worse.

And there’s the question of whether we’ll use medical technology to engineer future generations of humans — a movement called transhumanism that hopes to someday build more-perfect people.

But who, he wonders, decides which of us will have access to this technology?

“We are a bit scared,” Miailhe says. “We are a bit anxious, because we see this wave of disruption and transformation coming our way. And it seems that we are not prepared.”

Another Cambridge group preparing for the worst is the Future of Life Institute, which is channeling its energy toward the risks posed by artificial intelligence.

Richard Mallah, an FLI core member and an artificial intelligence, or AI, researcher, says we should expect to see computers with the intelligence of a “well-rounded” human within this century.

At an artificial intelligence conference organized by the institute earlier this year, he and nearly every other expert agreed: Machines that are just as smart as we are will walk, roll, or flicker onto the scene by 2100.

If intelligent machines don’t understand human priorities and ethics, they could make decisions that are disastrous for society. Misguided computers could harm us even without driving our cars or operating our weapons — though they’ll be doing those things, too.

To tackle such questions, the FLI awarded about $7 million this month to groups researching AI safety. Most of that grant money came out of a large donation to the institute from Musk, who also started SpaceX, the private space-travel company.

The winners’ research topics will include building AI systems that can learn about human values, explain their decisions to us, and police themselves.

This week, the FLI published an open letter signed by Hawking and nearly 14,000 others that highlighted the dangers of autonomous weapons — killer robots, essentially.

“A military AI arms race would not be beneficial for humanity,” the letter said, putting it gently.

The Future of Life Institute, founded in the spring of 2014, is run by volunteers. The group also studies other “existential risks” to humanity, such as those posed by biotechnology and nuclear war.

“We should be thinking about all future descendants of us, whether human or sort-of human,” Mallah said June 27 on a panel about the future of humanity, hosted by Boston University’s Center for Interdisciplinary Teaching and Learning.

Miailhe spoke on the same panel with professor Anthony Janetos, who directs BU’s Frederick S. Pardee Center for the Study of the Longer-Range Future. Established in 2000, the Pardee Center addresses yet another set of dire questions, focusing on issues such as global environmental change and how it affects humans who depend on the land for their livelihoods.

“We’re all about trying to understand the world better so that we can help people make better decisions,” Janetos says.

The Pardee Center is also asking questions about cities: As more and more humans live in urban areas, what will the health and environmental consequences be? What architectural strategies will work best?

In November, the center will cosponsor a meeting in Boston of the Association of Climate Change Officers about the ways coastal cities should prepare for, and adapt to, rising sea levels.

The thought of Boston falling into the ocean isn’t comforting — whether you’re in a self-driving car or sporting cyborg implants when it happens. But Janetos doesn’t let doomsday threats bother him.

Governments may be slow to make decisions about climate change and other crises, Janetos says, but that’s simply because those decisions are hard.

“It doesn’t mean we’re going to fail. It doesn’t mean that there’s no point in trying,” he says. “I have more important things to do than sit around worrying.”