[Survey response timeframes: Before 2021 | 2021 - 2035 | 2036 - 2060 | 2061 - 2100 | 2101 - 2200 | Before 3000 | Likely never | No timeframe given]
Dr. Stephen Thaler
PhD Physics, University of Missouri-Columbia | Founder and Board Chairman at Imagitron, LLC (Inventor of the Creativity Machine Paradigm)
"Both consciousness and sentience have already been implemented within machines beginning with the reductions to practice behind US Patent 5,659,666, “Device for the Autonomous Generation of Useful Information.” This ‘recipe’ for creative, emotional, and self-aggrandizing cognition has formed the basis of military, commercial, and national intelligence applications for decades, and is now being implemented within trillion neuron synthetic brains just a few feet from where I now sit."
Dr. Peter Boltuc
PhD in Moral and Political Philosophy, Bowling Green, and in Philosophy and Sociology, Warsaw University | Professor of Philosophy at the University of Illinois, Springfield | Associate Professor, Warsaw School of Economics | Member, Committee on Philosophy and Computers of the American Philosophical Association | Co-Editor, H-Net for Online Education in the Humanities | International Co-Editor, Dialogue and Universalism | Editorial Board Member, E-Mentor (Warsaw School) | Editorial Board Member, Filosofia i Egzystencja (University of Szczecin)
"The engineering argument for first-person machine consciousness: 1. Some day we should learn how the stream of phenomenal consciousness is generated in human brains. 2. To understand this is to have an engineering blueprint how to engineer first-person consciousness. 3. If we use this blueprint to build first-person machine consciousness we should be able to build it."
Dr. Pieter Mosterman
PhD, Vanderbilt University | Senior Research Scientist at MathWorks in Natick, Massachusetts | Adjunct Professor at School of Computer Science, McGill University | Associate Editor of Applied Intelligence: The International Journal of AI, Neural Networks, and Complex Problem-Solving Technologies; of International Journal of Critical Computer-Based Systems; and of International Journal of Control and Automation
"The combination of machine classification and reasoning creates a functional semblance of human intelligence. The results of such reasoning may serve as the starting point for a layer of meta classification and meta reasoning, which will appear as consciousness in the same way as humans (but not like humans)."
Dr. Mehdi Dastani
PhD Humanities, University of Amsterdam | Associate Professor at the Intelligent Systems Group of the Department of Information and Computing Sciences at Utrecht University | Member of Editorial Review Board of International Journal of Agent Technologies and Systems
"It is obvious that computer systems are getting increasingly powerful and will gradually take over complex human activities that we tend to consider as requiring some kind of consciousness. In this sense, computer systems are becoming increasingly conscious and sentient. However, the question whether computer systems will become conscious in the way that humans are requires a clear and explicit definition of human consciousness, which would be the subject of continuous change at least as long as human knowledge, challenges, abilities, perceptions, lifestyle, etc. are changing."
Dr. Helgi Helgason
PhD in Artificial Intelligence, Reykjavik University (Iceland) | VP Operational Intelligence at Activity Stream
"Since human intelligence (and consciousness) occurs in nature, it must be a process emerging from physics and chemistry; I see no theoretical reason that would prevent us from eventually reproducing it in man-made systems if we so desired."
Dr. Massimiliano Versace
PhD in Cognitive and Neural Systems, Boston University | Co-founder and CEO of Neurala Inc | Founder and Director Neuromorphics Lab, Boston University | Research Assistant Professor, Boston University | Co-Director CELEST Catalyst, Boston University
"Machines will be able to have something similar to what we call consciousness in the next 10-20 years, but we will not need that to achieve machines with narrow (task-specific) super-human capabilities. This is achieved already today and this is the real use of AI for our society."
Dr. Pei Wang
PhD Computer Science, Indiana University | Associate Professor of Computer Science at Temple University | Primary project: NARS (Non-Axiomatic Reasoning System) | Executive Editor of Journal of Artificial General Intelligence
"I think it can be done, and we already have a preliminary design for it in the NARS project."
Peter Voss
Artificial Intelligence Researcher | Founder of SmartAction, LLC
"Machines *will* reach human-level intelligence – which inherently includes the ability to conceptualize abstractly. Consciousness is a direct consequence, or by-product, of this ability."
Dr. Michael (Mishka) Bukatin
PhD, Computer Science, Brandeis University | Senior Software Engineer at Nokia | Robotics/AI Board Member, Lifeboat Foundation
"I don't know of any reason which should prevent us from developing machine consciousness, and moreover I hope that smarter-than-human machines would eventually solve the Hard Problem of Consciousness, even if humans keep failing at solving it on their own."
Dr. Andras Kornai
PhD in Linguistics, Stanford | Professor at the Budapest Institute of Technology | Senior Scientific Advisor at Computer and Automation Research Institute of the Hungarian Academy of Sciences | Research associate at Boston University | Board Member on ACL SIGFSM
"Since we have an existence proof that such things are possible to build from protein, it is evident that no magic will be required."
Dr. Jim Hendler
Professor of Computer, Web and Cognitive Science, as well as Director of Data Exploration and Applications, at Rensselaer Polytechnic Institute | Author of multiple books, including Spinning the Semantic Web (2002)
"I do believe that we are seeing the beginning of an increasing autonomy that will be operationally non-differentiable from awareness some day – but the border between non-aware and aware is not a sharp one, so I don’t expect this to be a sudden change."
Dr. Bruce MacLennan
PhD in Computer Science, Purdue | Associate Professor, Dept. of Electrical Engineering & Computer Science, University of Tennessee | Author of multiple books | Founding Editor-in-Chief of the International Journal of Nanotechnology and Molecular Computation
"I think that the issue of machine consciousness (and consciousness in general) can be resolved empirically, but that it has not been to date. That said, I see no scientific reason why artificial systems could not be conscious, if sufficiently complex and appropriately organized."
Dr. Daniel Berleant
PhD in Computer Science, University of Texas at Austin | Professor of Information Science at the University of Arkansas at Little Rock | Member of the UALR / UAMS Joint Program in Bioinformatics | Member of the Association of Professional Futurists | Coordinator of the Technology Innovation Graduate Certificate program | Partner at DeepLit LLC | Partner at thingpagecentral.com/web search engine | Author of the book The Human Race to the Future: What Could Happen — and What to Do (3rd ed.)
"The real question is, "how can we know for sure if a machine is conscious?" The answer is we can't. We can't even know for sure if another person is conscious or if instead it just seems that way. The Turing test offers a way around that, suggesting that if we can't tell the difference between communications from an AI and communications from a person, then the AI is, for practical purposes, at least as intelligent or conscious as a person."
Dr. Ben Goertzel
PhD in Mathematics | Chief Scientist of Hanson Robotics, as well as of financial prediction firm Aidyia Holdings | Chairman of AI software company Novamente LLC and Biomind LLC | Chairman of the AI Society and the OpenCog Foundation; Vice Chairman of Humanity+ | Scientific Advisor of Genescient Corp. | Advisor to Singularity University and Institute | Research Professor in the Fujian Key Lab for Brain-Like Intelligent Systems at Xiamen University, China | General Chair of AGI Conference Series
"I think that as brain-computer interfacing, neuroscience and AGI develop, we will gradually gain a better understanding of consciousness — but this may require an expansion of the scientific methodology itself. I wrote a blog post titled “Second Person Science” considering this issue."
Dr. Tjin van der Zant
Cofounder and Executive of RoboCup@Home | Visionary at Brobotix | Director of Cognitive Robotics Laboratory at University of Groningen | Founder of Assistobot
"It is absurd to think that humans are the only ones that can have consciousness, since we know apes also have it. Anyone claiming that only biological machines, such as humans, can have consciousness is being a biochauvinist. It might be hard to imagine though, looking at the current technology."
Dr. Danko Nikolic
PhD in Psychology, University of Oklahoma | Max-Planck Institute for Brain Research | Research Fellow at Frankfurt Institute for Advanced Studies | Ernst Strungmann Institute | Professor at University of Zagreb
"Human level of consciousness is possible to gradually approach, but machines will never get quite there. The key limitation will be the lack of a biological body which will make it practically impossible to experience the qualia of hunger, sexual pleasure, fear, having a flu etc. Difficulties will arise also with things like envy or a sudden insight because our bodies play a role in conscious experiences of those too. So, these future machines--albeit conscious--will never quite understand us."
Dr. Lyle Ungar
Professor of Computer and Information Science, Bioengineering Science, Genomics and Computational Biology, Operations and Information Management, and Psychology at the University of Pennsylvania | Distinguished Research Fellow at Annenberg Public Policy Center | Member of Center for Cognitive Neuroscience | Member of Center for Pharmacoepidemiology Research and Training | Member of Institute for Research in Cognitive Science | Member of Institute for Translational Medicine | Member of Penn Center for BioInformatics | Member of Penn Positive Psychology Center | Member of Penn Research in Machine Learning
"Computers won't become conscious "in the same way humans are" -- e.g. using synchronized neural firing or activity in their prefrontal cortex or thalamus."
Dr. Blair MacIntyre
PhD in Computer Science from Columbia | Professor of Computer Science at Georgia Tech College of Computing and the GVU Center | Director of the Augmented Environments Lab | Co-founder and Co-Director of Georgia Tech Game Studio | Co-founder Aura Interactive (AR design and consulting firm)
"I believe that we will have conscious machines at some point. I believe our brains are incredibly complex, and our current machines are incredibly simple. Whether through biological or quantum computing, I expect we will dramatically increase the computational capabilities of our machines to exceed that of our brains."
Dr. Nils Nilsson
PhD in Electrical Engineering, Stanford | Kumagai Professor of Engineering (Emeritus) in the Department of Computer Science at Stanford University | Author of multiple books
"I think that, in principle, it will be possible to build machines that are conscious in much the same way that we are. My reasoning for this is that we humans are machines, and we are conscious, or at least claim to be, so we ought to be able to build machines like us (eventually)."
Dr. Eyal Amir
Chief Executive Officer and Chief Data Scientist at Parknav Technologies | Cofounder, CEO, and CDO of AI Incube | Associate Professor, Computer Science Dept., University of Illinois, Urbana-Champaign
"We already have conscious machines. Their degree of consciousness will evolve and become greater and greater as we progress in the technology and knowledge that we put into them. For example, autonomous cars will be very self conscious."
Dr. Keith Wiley
PhD in Computer Science, University of New Mexico | Research Scientist at Department of Astronomy, University of Washington | Author
"Some theories preclude the possibility of machine consciousness while others allow it. The simplest reason that I think machines can likely possess consciousness is that I favor the theories of consciousness that are compatible with machine consciousness. In particular, I suspect consciousness is a natural consequence of sufficiently complex and appropriately interconnected signal-processing networks. As to *how* such networks give rise to consciousness, well, that is one of the central questions of our ongoing explorations of the topic. Hopefully, progress will be made toward illuminating such questions as the natural sciences advance and as engineering itself advances."
Dr. Joscha Bach
Cognitive Scientist at MIT Media Lab and Harvard Program for Evolutionary Dynamics | Founder of the MicroPsi Project
"Our conscious mind is bootstrapped in the first months and years of our interaction with the world, yet all the information that governs that bootstrapping is encoded in a small fraction of the information content of our genome. Our current computers also begin to approach the computational complexity of our nervous systems, so I do not see a reason to believe that there are any insurmountable obstacles to creating artificial sentience."
Dr. Robin D. Hanson
PhD in Social Sciences, Caltech | Associate Professor of Economics at George Mason University | Chief Scientist at Consensus Point | Author
"Machines can be conscious, because we humans are conscious, and we ARE machines. Complex squishy machines, but machines nonetheless."
Dr. Sean Holden
PhD in Engineering, Corpus Christi | Senior Lecturer in Machine Learning at Cambridge University | Fellow of Trinity College | Member of Programme Committee for the First Conference on Artificial Intelligence and Theorem Proving, as well as for the 5th International Conference on Pattern Recognition Applications and Methods 2016 | Editorial Board for Artificial Intelligence Review
"Yes, it's possible. Humans are made from stuff that obeys the laws of physics - they constitute an existence proof. The difficulty is just that of working out how the machine (taken in a very wide sense) works and how to build an equivalent."
Dr. Richard Ennals
Emeritus Professor in Accounting, Finance and Informatics, Kingston Business School | Member of Royal Society of Arts Commonwealth Club | Consultant to UK Government Departments, European Commission Directorates-General, UN specialized agencies: UNESCO, ILO, WHO, World Bank
"For some years people have talked of machines as being "intelligent": we must expect this to continue. This confuses debates about human responsibility: we see increasing "artificial irresponsibility", compounding "artificial stupidity". We must expect people to continue the irresponsible use of machines, aided by uncritical reporting."
Dr. Roger Schank
Professor Emeritus at Northwestern University | Founder of Socratic Arts and XTOL
"Are dogs conscious? It seems so, but we really don’t know what that question means. We assume conscious awareness in people, but not in machines. We are “fleshists.” There is no way to build something when we don’t really know what it is."
Dr. Steve Omohundro
PhD in Mathematical Physics | Author, Scientist, Physicist, Entrepreneur | Currently involved in the following projects: Possibility Research, Self-Aware Systems, Design Economics advisory board, Dfinity Cryptocurrency advisory board, Cryptocurrency Research Group board, Institute for Blockchain Studies advisory board, Cognitalk advisory board, Center for the Study of Mind, Silicon Valley ACM Special Interest Group on AI
"Human consciousness has many aspects ranging from having a self model, to having subjective experiences or "qualia", to having a sense of being a unitary being with continuity through time. We don't yet understand the nature of these in us, so I think it's too early to predict whether machines will have them. Some aspects, like having a model of self or creating theories of other minds are already being exhibited by some AI systems and are likely to become much further developed in coming years."
Dr. Noel Sharkey
Emeritus Professor of Artificial Intelligence and Professor of Public Engagement, University of Sheffield, UK | Co-founder and Chair Elect of the NGO International Committee for Robot Arms Control | Co-director, Foundation for Responsible Robotics | Founding Editor-in-Chief, Journal of Connection Science | Formerly EPSRC Senior Media Fellow and Leverhulme Research Fellow on the ethics of battlefield robots | Head Judge on BBC’s Robot Wars
"This question is not possible to answer because consciousness is still shrouded in mystery with no adequate scientific theory or model. People who talk with certainty about this are delusional. There is nothing in principle to say that it cannot be created on a computer, but until we know what it is we don’t know if it can occur outside of living organisms."
Dr. Eduardo Torres-Jara
PhD in Electrical Engineering and Computer Science, MIT | Assistant Professor of Robotic Engineering at Worcester Polytechnic Institute | Founding Director of Sensitive Robotics Laboratory
"The scope of the context will increase as we learn more from the human brain. However, we are far away from anything with human capabilities. Just to have some perspective, let’s consider the visual perception problem. There has been great progress because of the amount of data available to implement machine learning algorithms. Computers can now recognize patterns like a cat with a great rate of success. This great success is not close to human visual capabilities yet. Moreover, it helps little to organize the information to make a computer understand the concept of a cat as humans do."
Dr. Philippe Pasquier
Founder of the ACM Movement and Computation | Chair of the Musical Metacreation Workshops | CEO of Metacreative Technologies | Advisor of Generate Inc.
"If we restrict sentience/consciousness to what logicians call positive introspection (I know what I believe, ...), then we are already there with some cognitive agent architectures. If we look to encompass all dimensions of human consciousness, then there is no evidence this is possible and feasible (for example, genuine intrinsic motivations are not present in artificial agents)."
Dr. Yoshua Bengio
PhD in Computer Science, McGill University | Professor, Department of Computer Science and Operations Research | Canada Research Chair in Statistical Learning Algorithms | Head of the Machine Learning Laboratory | Co-director of the CIFAR Neural Computation and Adaptive Perception program | NSERC-Ubisoft industrial chair | Action Editor for Journal of Machine Learning Research | Associate Editor for Neural Computation journal | Editor for Foundations and Trends in Machine Learning
"I believe that subjective self-awareness is possible, and not even particularly challenging, but I also believe that when that challenge is addressed, it will be something that humans can choose to put or not put in intelligent machines. We can have very intelligent machines without a sense of self."
Dr. Roman Yampolskiy
PhD Computer Science, University at Buffalo | Associate Professor of Computer Science at the Speed School of Engineering, University of Louisville | Founder and Director of the Cyber Security Lab | Author of multiple books | Senior Member of IEEE and AGI | Member of Kentucky Academy of Science | Research Advisor for MIRI and Associate of GCRI
"Consciousness is not a scientific concept; it can’t be detected or tested for in any way. It also doesn’t do anything so no reason exists to invest in research developing artificial consciousness."