Ensuring a Positive Posthuman Transition – Perspectives from Jaan Tallinn

Daniel Faggella

Daniel Faggella is Head of Research at Emerj. Called upon by the United Nations, World Bank, INTERPOL, and leading enterprises, Daniel is a globally sought-after expert on the competitive strategy implications of AI for business and government leaders.


The AI in Industry podcast is often conducted over Skype, and this week’s guest happens to be one of its early developers. Jaan Tallinn is recognized as one of the technical leads behind Skype as a platform.

I met Jaan while we were both doing round table sessions at the World Government Summit, and in this episode, I talk to Tallinn about a topic that we often don’t get to cover on the podcast: the consequences of artificial general intelligence. Where’s this going to take humanity in the next hundred years?


Guest: Jaan Tallinn, Centre for the Study of Existential Risk

Expertise: Existential risk

Brief Recognition: Jaan Tallinn is recognized as one of the technical leads behind Skype as a platform. He was with the company very early on and stayed through its acquisition by Microsoft for over $8 billion. He co-founded the Centre for the Study of Existential Risk and has supported the Future of Life Institute and the Machine Intelligence Research Institute. He was an investor in NNAISENSE and in DeepMind in London.

Interview Highlights

(04:00) How do you explain this dynamic of positive-sum thinking about AI to people?

Emerj CEO Dan Faggella (left) with Jaan Tallinn (right) at the World Government Summit

JT: One way of looking at AI is [to study its] leverage on human capabilities. When people think of AI as just another technology, they might be making the implicit assumption that the amount of resources is roughly constant.

I mean, we know that the amount of wealth in the world has been increasing, although not necessarily equally, but this is just a really tiny bit when we zoom out from planet Earth and look at [how] almost all the resources are just elsewhere. So the game isn’t about using more and more powerful technology to fight over a constant amount of resources on this planet; the game is really going to be maximizing our chances to go out and build a flourishing universe.

(05:30) What does this look like?

JT: I mean, I don’t have an answer there…We need to have a long period of deliberation about what it means to maximize the amount of…happiness. I like to think in terms of fun; I want [there] to be more fun rather than less fun. I think we should be very careful not to lock in our current values, our current thinking, because we know our values have progressed over time…We shouldn’t, [and] cannot, [plan out] everything, but I think there’s value in thinking about what we are going to do with the universe when it becomes an option.

(08:30) When we do leave the planet and go off to do grander things with far more resources, do you suspect that it will be us, as humanity, at that time?

JT: Yeah. I don’t have strong views on that one way or the other. I really care about humans, and I think it’s okay to care about humans, as opposed to some AI researchers who [don’t].

(09:30) Are there people that have that opinion?

JT: Yes. It’s not very common, but there is definitely a strand of people who just think that they’re part of some big evolution here and they just want to bring about the next step. I think that’s just wrong thinking.

Usually, the argument I counter this with is, “Well, what if we just engineered a killer virus that kills everyone? Wouldn’t that count as a step in evolution as well, because in some ways there was a stronger thing that killed us?” That doesn’t sound right.

(10:30) So you don’t have a strong hunch as to whether, when AGI reaches out into the galaxy, it will be at the behest of some global human organization, or whether there may only be some kind of semi-biological, global [stuff] happening here and we’re really past humans at that point?

JT: There’s a better way to house sentience [than biology]. A more economical way, at least. So in that sense, I think because it’s more economical, it’s going to be more likely. That’s why we like the term “aligned AI,” which kind of jumps over [the question of] whether it’s going to be us or going to be [a] hybrid. Aligned AI is an AI whose idea of a good future is aligned with our idea of a good future…We shouldn’t be confident that we now know everything that a good future should be.

[I was at] this conference in Puerto Rico last month, and one session was really about this topic: what would a good future with AI look like? There were many wise ideas there, but a couple of interesting common denominators. For example, one of them was that we don’t want disruptive changes; if there’s going to be a potentially really extreme change, it should happen over time and in a controlled fashion, as opposed to in a matter of minutes in a catastrophic way.

Another was that there shouldn’t be any compulsory actions. If somebody wants to retain their biological form, they should be able to. Again, because almost all the utility is not on this planet, keeping humans around is very cheap, let’s put it that way.

That’s actually one of my hopes about AI. Even if you get a really unaligned AI, the hope for humans is that we are really cheap to keep around.

(14:00) Do you think that if people thought more about what would occur if AI got past a certain level, if we reach artificial general intelligence, there would be less hubbub about who is in control of it?

JT: Yeah. But the way I would frame it is that, again, AI will likely enable, whether independently, together with us, or as our tool, access to the universe. And when you talk about a good future, almost all of it [is defined by] what happens in the rest of the universe. Because that’s just where…the stuff is happening.

In some ways, this planet is just a sort of initial condition for getting to the rest of the potentially good experiences that can be had in the future. So if we fight over the really tiny amount of resources on this planet in a zero-sum manner, we are going to sacrifice our ability to get to the actual meat. So that is…the state-of-mind change that I think would be really valuable: the Earth is not the name of the game. The name of the game is the universe.

(16:00) It’s possible that there might still be tension around who builds [it], because the trajectory of how we expand into the universe, and potentially the fate of other nations, could be molded to a great extent by whatever deity is built. So some folks might say, “Well, we can kind of mold the values of it a little bit more.”

JT: Yeah, that’s it. That’s a very human way of thinking, but it makes a simple mathematical mistake, which is: if you are building things in a cooperative manner, you are not going to waste resources on the competition. If you think, “Okay, we need to get to the rest of the universe,” then even if you win, the proceeds from that win might be much smaller than if you had cooperated, because you started from a far worse position given the competition. You can imagine that your goal is to win the race, but to do that, you [and your] competitors just sabotage [each other]. When you are the winner of the race, [your]…score is going to be really, really bad compared to [the case] where you actually cooperated.
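To make the arithmetic behind Tallinn’s point concrete, here is a toy payoff sketch; the numbers are ours, purely for illustration, not from the interview. Suppose two parties could develop AI cooperatively and each capture half of some total attainable value $V$, while an adversarial race destroys a fraction $c$ of that value through sabotage and duplicated effort, with the winner taking all that remains:

\[
\underbrace{\tfrac{1}{2}\,V}_{\text{cooperate}} \quad \text{vs.} \quad \underbrace{(1 - c)\,V}_{\text{race and win}}
\]

Whenever $c > \tfrac{1}{2}$, even the winner of the race ends up with less than the cooperative share; at $c = 0.8$, winning yields $0.2V$ against $0.5V$ for cooperating, before even discounting by the probability of winning at all.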

(18:30) When you think about what that cooperation could be, what we’re ultimately talking about is global human society having some shared vision around where we would want to take this and how we can go about it. How does that ever get started?

JT: Eric Drexler, who [pioneered] nanotechnology, has been advancing this concept. It’s a utopia with the added constraint that all the steps between now and utopia are Pareto improvements. It means that everyone’s life is going to get better. So you have this added constraint that no one’s going to be left behind.

So you could basically build a roadmap, so to speak, between now and utopia, whatever it might be, where we can guarantee that no one’s situation is going to get worse. That’s because of the abundance of resources out there. This is, in theory, actually doable. If we fail to do it, it’s going to be because of some silly human [stuff].
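For readers unfamiliar with the term: a Pareto improvement is a standard notion from economics. A move from state $x$ to state $y$ is a Pareto improvement if it leaves no one worse off and makes at least one person better off; writing $u_i$ for the welfare of individual $i$:

\[
u_i(y) \ge u_i(x) \;\;\text{for all } i, \qquad u_j(y) > u_j(x) \;\;\text{for some } j.
\]

Drexler’s roadmap idea, as Tallinn relays it, admits only steps that satisfy this condition, which is plausible precisely because the off-planet resource pie can keep growing.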
