Open-Minded Conversation May Be Our Best Bet for Survival in the 21st Century – A Conversation with Lord Martin Rees

Daniel Faggella

Daniel Faggella is Head of Research at Emerj. Called upon by the United Nations, World Bank, INTERPOL, and leading enterprises, Daniel is a globally sought-after expert on the competitive strategy implications of AI for business and government leaders.

Episode Summary: Few astrophysicists are as decorated as Martin Rees, Baron Rees of Ludlow, who made key contributions to big-bang cosmology and was appointed to the honorary position of the UK’s Astronomer Royal in 1995. His work has explored the intersections of science and philosophy, as well as human beings’ contextual place in the universe. In his book “Our Final Century”, published in 2003, Rees warned about the dangers of uncontrolled scientific advance, arguing that as a direct result, human beings have only a 50 percent chance of surviving past the year 2100. In this episode, I asked him why he considers AI to be among the foremost existential risks that society should consider, as well as his thoughts on how we might best regulate AI and other emerging technologies in the nearer term.

Guest: Martin Rees

Expertise: Astronomy and Cosmology

Recognition in Brief: Dr. Martin Rees is a leading astrophysicist as well as a senior figure in UK science. He has conducted influential theoretical work on subjects as diverse as black hole formation and extragalactic radio sources, and provided key evidence to contradict the Steady State theory of the evolution of the Universe.

As Astronomer Royal and a past president of the Royal Society, Martin is a prominent scientific spokesperson and the author of seven books of popular science. After receiving a knighthood in 1992 for his services to science, he was elevated to the peerage as Baron Rees of Ludlow in 2005.

Current Affiliations: Board member of the Institute for Advanced Study in Princeton, the UK’s Institute for Public Policy Research (IPPR), the Oxford Martin School, and the Gates Cambridge Trust; co-founder of the Centre for the Study of Existential Risk; member of the Scientific Advisory Board of the Future of Life Institute

The Human Interest Factor that Permeates all Science

Martin Rees’ move from cosmology and astronomy to existential risk was a gradual one. A professor at Cambridge for much of his career, Rees found his interest in the political and social implications of fast-advancing technology piqued in the 1980s, when he became deeply involved in campaigns against nuclear weapons and the Star Wars missile defense initiative. As he described his experiences,

“I used to attend Pugwash conferences, and wrote articles about how we could move towards zero nuclear weapons, so that was the first general topic I became involved in, but thereafter I gradually became concerned about other ways in which science was perhaps running away faster than we could control it.”

In 2005, he became more fully engaged when he was appointed President of the Royal Society, the UK’s counterpart to the US National Academy of Sciences, with responsibility for overseeing the whole of the country’s scientific progress and engaging with the public and politicians on pressing issues. “That of course meant it became my job to think about these issues, and it was great to have something that was my job and something I wanted to move into anyway, and that’s how I became interested in these extreme risks, environmental, bio, and AI,” says Rees.

As prolific a contributor as Martin has been in cosmology and astronomy, he is quick to point out that he is not an expert in AI and other emerging technologies.

“I’m not an expert in this subject, although I have talked to quite a few of the people who are engaged with it, both academics and people in the commercial world, and of course as you know there’s agreement about the direction of the travel, but not so much agreement about the pace of travel.”

Figures like Kurzweil espouse the view that AI will outpace human intelligence within the next 25 years, while others (Rees mentions Harvard’s Dr. David Brooks as an example) believe it’s too early to worry about the far-reaching ramifications of this potential reality. For the most part, however, there’s general consensus that a proactive and collaborative effort is necessary to establish regulatory guidelines that steer progress in AI and other emerging technological areas. As Rees states,

“I am impressed by the fact that many of those who are genuine experts do feel that despite the uncertainties, the field does probably need some guidelines for responsible innovation, you can see that some of the directions of travel could lead to dangerous trends, and there’s no harm in early on trying to have some guidelines to make sure to try and avoid the downsides, ensure that we develop more rapidly the more benign sides, and take precaution against the emergence of the ‘machines that get out of control and take us over’.”

As for the specific AI risks that could arise, Martin separates the explicit threat of conscious robots with malevolent intentions from that of machines that gradually gain control of major infrastructure, such as the energy grid or the financial system, on which much of humanity depends.

“It could be that we will have a machine which gives its owner huge power over the whole of the external world, as you can imagine one machine getting ahead of all the others, that’s going to give huge power to the person who controls it; that of course is not the same as saying the machine is taking over…the other concern is a machine will not remain in its box, as it were…and of course the science-fiction scenario is that the machine has superhuman intelligence, it takes over and has a plan, but I think what is more likely is a machine will blunder and cause mayhem entirely through malfunction.”

Discerning where the insidious nature of potential AI technologies lies – with the machines themselves or with the humans who create and control them – seems an important consideration when discussing how to counter the existential risks that threaten humanity’s well-being and survival.

Conversation Above All Else

Martin doesn’t believe that the solutions that have worked for nuclear control will suffice for newer technologies. In his view, this is a significant and ongoing issue in biotechnology, which came onto the map with the seminal Asilomar meeting of 1975 in Pacific Grove, CA, where pioneers in molecular biology met and drew up guidelines for DNA and genetic experimentation. In just the last year, another conference was held in the same spirit to discuss CRISPR and other gene-editing techniques – technologies that raise the very questions debated at the Asilomar talks and that many scientists still hope to contain.

But the ethical ramifications are enormous, and what concerns Rees is that even if we have regulations, enforcing them is another matter entirely. He says,

“Now biotech is a sort of hacking game for students and it’s done all over the world, and with strong commercial pressures in the biotech area, what really scares me is that enforcing the regulations will be as hopeless as enforcing the drug laws or the tax laws globally, anything that can be done will be done by someone somewhere, whether it’s ethical or not.”

In the longer term, there are similar concerns about AI, and it seems we can’t help but start considering these scenarios, whether or not the worry is yet imminent.

While breaches of drug and tax laws occur often enough to keep law enforcement alert and active, most people still adhere to those laws, and society absorbs the damage. A slip-up in the regulation of AI may not be absorbed so easily. In other words, one mistake may be one too many, given the potentially catastrophic downsides of such a powerful technology.

The reasons why AI will be less governable seem rather obvious. Nuclear technology, for example, is not widely understood by the general public and requires substantial resources and capital, whereas many AI technologies – at least how to develop and use them – are understood by millions of people the world over. For this reason, controlling when and how AI is used seems an almost impossible task.

In a flash of optimism, Martin urges that this doesn’t mean we shouldn’t try to regulate these technologies, or that we should swing from denial to despair. “What we can do is ensure that there is a good dialogue between academics, ethicists, and those who are actually pushing this technology forward in the commercial world,” he suggests.

Just as in the medical field, where most conscientious researchers, professionals, and engaged citizens advocate for creating vaccines before modifying viruses, so too should we consider ways to use AI while reducing the risks facing all human beings, from environmental hazards to direct threats from harmful AI. We may not be able to eliminate risk, says Martin, but having honest and transparent conversations that foster awareness of these risks is a good start.

Image credit: Nesta.org