Can Artificial Intelligence Make the World a Better Place?

Daniel Faggella

Daniel Faggella is Head of Research at Emerj. Called upon by the United Nations, World Bank, INTERPOL, and leading enterprises, Daniel is a globally sought-after expert on the competitive strategy implications of AI for business and government leaders.

My most recent TEDx talk is titled “Can AI Make the World a Better Place?” – but this title is somewhat misleading.

While the presentation touches lightly on how artificial intelligence can be used for altruistic purposes in the present, it is ultimately about the same topic that all of my most important talks are about:

How the transition beyond humanity will take place.

Those who’ve followed Emerj since the early days are aware of the broader moral vision behind the company: “To proliferate the conversation about determining and moving towards the most beneficial transition beyond humanity.”

I have never identified as a transhumanist, but I see the transition beyond humanity as literally inevitable, and I believe we should guide this transition rather than be taken for a ride inadvertently. *1

Because the TEDx format is so short, I’m never given the time I’d need to fully flesh out my ideas, or to reference the sources and people I’ve drawn from in putting them together. In this article I’ll break down the ideas presented in this talk – and their sources – and strike at the ultimate point behind the presentation itself.

Strong AI and Utilitarianism

The talk begins with a basic idea: that “doing good” implies proliferating the happiness or pleasure of conscious creatures and eliminating their pain. This is straight-up utilitarianism – by no means a perfect moral theory, but about as good as we’ve got.

I mention how hard it is to project the long-term consequences of a “good” action. For example: how much suffering and pleasure was created by helping to build this library, or by volunteering to run a kids’ soccer camp in the summer? The butterfly effects are impossible to track, and it’s easy to deceive ourselves into justifying any of our actions with a “utilitarian” belief that is in fact false and wrong.

However, it’s probably somewhat better than having no moral compass at all… or one bent on something other than utilitarian good. Think about a perfectly “virtuous” (good luck with whatever that means) society that was also miserable all the time. Think about a society that unanimously believes in the “right” God (good luck with whatever that means), but that was also miserable all the time. *2

Note: Normally this is the kind of article I’d compose on my personal blog at DanFaggella.com, where I write exclusively about the ethical considerations of post-human intelligence. Feel free to follow my personal blog for more long-term AI and AGI material.

Levels at Which Artificial Intelligence Might “Do Good”

The structure of the article roughly covers what we might consider to be the “gradients” of artificial intelligence’s influence on the moral good, from most near-term (and smallest) to most long-term (and greatest):

a) AI as a Tool for Doing Good:

We’ve done a good deal of coverage of the “altruistic” applications of AI (see our article on “AI for Good”). It should be noted that by no means do I think that nonprofit AI is the only “good” AI. There might be companies that generate massive profits from optimizing farming with AI or diagnosing cancer with AI – and by golly they may well “do” plenty of “good” in the process. I move quickly past this topic, as it’s not what the talk is ultimately about.

b) AI as a Gauge Towards Moral Goodness Itself:

If maximizing pleasure and eliminating pain is the ultimate goal of what we’re after, that’s good to know – but hard (basically impossible) to measure. I can guess that by being a mailman instead of a heroin dealer, I’ll have a more positive net impact on the world. If I donate to feeding children in Africa as opposed to buying the latest iPhone, then maybe – again – I can guess that I’m “doing good.” But it’s all guesses, and it’s all bad guesses.

If an AI system could in some way measure sentient pain and sentient pleasure, and correlate those factors to actions, behaviors, public policy, etc… all with the ability to project those impacts into the future with better predictive ability than any team of human scientists – then indeed that might be the most morally meaningful invention of all time.

This would involve the following (a toy sketch follows the list):

  • Understanding consciousness
  • Measuring well-being and its opposite in (basically all) living things
  • Somehow modeling that sentient measurement along with a near-infinite number of other variables in the real world
  • Extrapolating sentient well-being somewhat accurately into the future
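
To make the shape of such a hypothetical “moral barometer” concrete, here is a minimal, purely illustrative Python sketch. Every name, weight, and number below is a placeholder I’ve invented for illustration – as noted above, no technology today can actually measure sentient well-being:

```python
from dataclasses import dataclass

@dataclass
class Creature:
    name: str
    sentient_range: float  # hypothetical depth-of-consciousness weight

def projected_well_being(creature: Creature, effect_per_year: float,
                         years: int, confidence_decay: float = 0.8) -> float:
    """Toy projection of an action's yearly effect on one creature's
    well-being, discounted because long-range forecasts degrade."""
    total, confidence = 0.0, 1.0
    for _ in range(years):
        total += effect_per_year * confidence
        confidence *= confidence_decay  # trust distant predictions less
    return total * creature.sentient_range

def moral_barometer(creatures: list[Creature], effect_per_year: float,
                    years: int) -> float:
    """Utilitarian 'score' of a candidate action: the sum of projected,
    range-weighted well-being changes across all affected creatures."""
    return sum(projected_well_being(c, effect_per_year, years)
               for c in creatures)

# Compare two hypothetical actions over a tiny 'population'
population = [Creature("rodent", 10), Creature("human", 200)]
print(moral_barometer(population, effect_per_year=0.01, years=20))   # e.g. building a library
print(moral_barometer(population, effect_per_year=-0.02, years=20))  # e.g. dealing heroin
```

Even in this toy form, the hard parts are obvious: the sentient-range weights, the per-year effect sizes, and the confidence decay are all quantities we have no way to measure.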

Frankly, I consider it more likely that AI of post-human intelligence will arrive before this kind of “moral barometer” machine is invented, as the ability to detect the neural activity of all living things seems much closer to “impossible” than the creation of super intelligence (which itself is a gargantuan challenge). *3

c) Artificial Intelligence (and Nonbiological Sentience) as Goodness Itself

It seems safe to say that there is no “good” or “bad” without living, experiencing creatures. With no “experiencer”, there is no “good” or “bad” “experience”.

I’ve argued that the “moral value” of an entity seems to tie directly to the depth and richness of its consciousness (i.e., the range of pains and pleasures that entity can experience). *4

Imagine the greatest pleasures and pains of a rodent, and compare those with the range of pains and pleasures that a human being can experience. A human clearly has many orders of magnitude more sentient potential and range (losing a child, growing a business, reading a poem, oil painting, nostalgia, humor, etc…), vastly beyond the experience of any rodent.

Let’s imagine that an average rodent’s total “sentient range” score is a 10 (this is an arbitrary number, but stick with me here), and an average human’s total “sentient range” score is a 200. We might ask: what would a creature be able to experience with a “sentient range” score of 50,000? If a creature of that kind could be created (or somehow “enhanced” from existing biological life), it might be hard to deny its moral preeminence above humanity – a troubling idea to contemplate. *5
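
For a back-of-the-napkin sense of proportion (using the same arbitrary scores as above), the imagined jump beyond humanity dwarfs the jump from rodent to human:

```python
# Arbitrary 'sentient range' scores from the thought experiment above
rodent, human, hypothetical = 10, 200, 50_000

print(human / rodent)        # 20.0  -> a human's range is ~20x a rodent's
print(hypothetical / human)  # 250.0 -> the imagined entity's is ~250x a human's
```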

What Do We Do? (or “The Point”)

Indeed this is the big question. This is the purpose of my life, and the purpose of Emerj as an extension thereof.

The point of this presentation had little to do with talking about how AI is helping with farming or diagnosing disease (though these applications are important and should be pursued). Rather, the point was to talk about what “moral end-game” we’re striving for as a species.

  1. Are we to simply improve technology and encourage peace, and consider those aims to be good enough for the coming millennia of homo sapiens?
  2. Can we possibly build an AI system to detect or predict the utilitarian impact of actions… maybe 10x or 100x better than human beings can now?
  3. Can we build an intelligence to determine a better, deeper, more robust moral understanding than that of utilitarianism?
    1. The impacts of such a system might be terrible *6, and there may be no “good” to determine outside of relative benefit. I think it’s quite likely that there is in fact no moral bedrock to eventually stand on, and I believe that “goodness” will likely remain subjective, contextual, and elusive even for superintelligent machines.
  4. Can we use technology and super intelligence to somehow calibrate society, law, and biology to proliferate (or “engineer”) well-being itself? (While I’m rather sympathetic to this idea, I suspect it means fighting nature itself – see the quote below.)
    1. Quote from the talk: “If we’re talking about well-being here… if that’s the point… to ‘make the world a better place’… then maybe we are not only fighting our own flawed nature, but maybe we’re also fighting nature itself. It seems safe to say that Nature herself cares not for the happy species, but cares only for the surviving one.” *7, **7
  5. Is the ideal moral aim to create an entity that is not only infinitely more intelligent, but also infinitely more blissful, than ourselves?
    1. In this case, humanity, and almost all biological life (which is nearly entirely predicated on violence and suffering, from the lowest to the highest levels), should “bow out” nicely – making way for more morally worthy entities who can not only understand the universe in vastly greater depth, but who might be able to do so indefinitely at a level of conscious bliss that is positively unimaginable by humans.
    2. Quote from talk: “A beacon of super-intelligent super-bliss that could populate the galaxy.” *8

It would seem a shame if monkeys had overtly decided that they – a random species – were the chosen ones of the universe, and that indeed no species beyond them should ever be developed. What a shame it would have been to never have language, poetry, law, art, space travel, or the internet… all because of an arbitrary barrier to development erected by one selfish species.

The question arises: what new dimensions of experience, of art, of knowledge, of moral and scientific understanding are we holding back if we envision “man as he is” (flawed as he is) as the great and ultimate aim of the universe?

In absolutely no way am I eager to run beyond humanity toward something “better”. Rather, I see this transition as inevitable, and that navigating this transition without terrible (maybe extinction-level) consequences will involve an incredible amount of global collaboration and ethical forethought – a process that should begin now.

The great importance of AI and neurotechnology – and the whole class of technologies that might create or enhance intelligence and sentience itself – is that they not only pose an existential threat to humanity (i.e. they could be misused as destructive forces to snuff out life on earth)… but that they could also imply a great proliferation of moral value and “life”… vastly beyond what exists on this planet – or maybe anywhere in the universe.

For this reason, I’m of the belief (and have been since 2012) that determining the trajectory of intelligence and sentience itself is the preeminently important moral concern of our species.

This will need to be an open-minded, interdisciplinary process, and one that – for better or for worse – I think will require global steering efforts (to ensure that “team humans” is on the same page about what kind of next-level intelligence we’re trying to create here), and global transparency efforts (to ensure that nobody is tinkering with intelligence and consciousness in ways that seem likely to cause massive and unnecessary conflict).

In the long term, Emerj was created for one reason:

To proliferate that conversation about the trajectory of intelligence itself.

Currently, we mostly cover the industry impact and applications of AI. We do that because:

  • It’s valuable, and will allow us to sustain Emerj as a growing business without needing to ask for donations or handouts.
  • It draws the attention of business and government leaders – exactly the folks who will be helping to develop and adopt AI technologies that will shape our world (I unabashedly aim to eventually draw these same leaders into a conversation beyond business implications, into the discussion of the future of intelligence itself – and how humanity will manage that transition).
  • It provides a platform for us to gather consensus-level thought about not just the business implications of AI, but its moral and social impact (see our previous examples of this with our “Conscious AI” researcher poll, or our “AI Risk” researcher poll) – something we’ll be doing more and more of.

Do you have ideas about the grand trajectory of consciousness or intelligence?

Do you have ideas about how we might approach this challenge as a species without destroying ourselves in the process?

Feel free to reach out (dan [at] Emerj [dot] com)

 

*1 – I don’t have the space to argue in this essay for why transhumanism is impossible to prevent, but the following article by Nayef Al Rodhan is a good start. https://isnblog.ethz.ch/security/inevitable-transhumanism-how-emerging-strategic-technologies-will-affect-the-future-of-humanity. I believe that in the next 10 years Al Rodhan will be known widely – in the same way that Nick Bostrom’s ideas have recently risen from relative obscurity.

*2 – John Stuart Mill has argued that even Kant’s “Categorical Imperative” eventually boils down to a utilitarian ethic. I happen to think that this critique is generally quite strong, but you come to your own conclusion. https://www.utm.edu/staff/jfieser/class/300/categorical.htm. The topic of consequentialist vs deontological moral theory is much more complex than I could cover in this article – but feel free to email me if you’d like a rousing debate on the matter.

*3 – I’ve written in greater depth about the idea of a “Strong Phronetic AI”, drawing on Aristotle’s use of the word “Phronesis”. http://danfaggella.com/strong-phronetic-artificial-intelligenc/

*4 – I go into much greater depth on this concern of creating “more morally worthy entities” in my 2015 TEDx talk titled “What will we do when the robots can feel?” https://www.youtube.com/watch?v=PjiZbMhqqTM

*5 – This concept of ranking the “value” of moral entities is something that I grapple with in my 2014 TEDx talk titled “Tinkering with consciousness”: https://www.youtube.com/watch?v=d5VNkRpgvns&t=37s. I’ve also explored a few competing ideas about “scales” of moral value on my personal blog, where I cover the larger dynamics of AI and the future of consciousness in much greater depth: http://danfaggella.com/threshold-vs-scala-moral-status-in-a-post-human-world/

*6 – I’ve written previously about the difficulty in assuring human safety (or indeed human dignity or worth) in a landscape of super-intelligent moral evolution. http://danfaggella.com/morality-in-a-transhuman-future/

*7 – Lucretius has a saying, “the vessel is flawed,” in reference to the incapability of humans to hold onto happiness. When yet again surprised at my own inability to stave off anxieties, I often recall the phrase and am in some way comforted. http://classics.mit.edu/Carus/nature_things.html

**7 – In my opinion there is little doubt that the best source for ideas around “uplifting” and the topic of hedonism at large is David Pearce, whose myriad writings on this subject deserve a hundred times more attention than they currently garner (hedweb.com).

In this talk I agree with David about the imperative to reduce (or eliminate) suffering, and to enhance and expand wellbeing. David’s phrase “gradients of bliss” is one that I quote often, and that I wish I’d included more in this actual talk. 

In this talk I also express serious pessimism about the idea of animal “uplifting” (which David has written about at length), or genetically programming species to be blissful. I unfortunately see the machinery of nature as vastly too complex to tinker with. Arguably (I am not certain here – nobody is), the teetering balance of animals and chemicals and physical forces that keeps up a natural ecosystem is more complex than understanding and replicating consciousness itself. I am of the belief that if “doing good” (on a grand utilitarian scale) is “the point,” then it would likely be easier to create some kind of mega-blissful computronium (Yudkowsky is a good source for greater depth on this scenario: https://intelligence.org/files/CFAI.pdf), packing in as much intelligence and bliss per square millimeter as possible – rather than optimizing the wellbeing of all 350,000 known species of beetles, and all mammals, and everything else under the sun.

*8 – Hugo de Garis is one among many thinkers who have argued that it is probably best for super intelligent entities (not run-of-the-mill humans) to populate the galaxy. His work on “The Artilect War” is in part a prediction about the great future human conflict that will arise when some humans aim to remain “human,” while others aim to enhance humanity and/or create super intelligence.

I believe that he is right in presuming that this conflict will arise (so long as we don’t blow ourselves up with nukes beforehand). He also believes that it is likely best for humans to make way for the super intelligent entities that come after them. I – fortunately or unfortunately – believe that there is great credence to this argument (presuming that these entities are indeed vastly more intelligent and blissful than humans), and that this is an issue we will have to grapple with deeply in the next 3-6 decades. https://agi-conf.org/2008/artilectwar.pdf

 

Header image credit: TEDx
