AI Future Outlook Articles and Reports
Explore future perspectives on artificial intelligence applications and trends - including products and applications in marketing, finance, and other sectors.
In the first week of 2016, Facebook's Mark Zuckerberg announced in a post that his goal for the year was to "build a simple AI to run my home and help me with my work." He clarified, "You can think of it kind of like Jarvis in Iron Man." Zuckerberg went on to describe his plan to explore currently available smart home technologies, install them in his home, and train the system to coordinate with his family life and workday. (Interestingly, although Zuckerberg's AI may rely on a number of devices, he refers to the technology as a singular system, implying that he intends to develop a unified AI to oversee the many individual devices.)
Before we welcome a new technology into our lives, it's wise to consider what effect it will have on us as human beings. What might a technologically disruptive app do to our innate empathy or self-esteem? When that technology is sophisticated enough to actually resemble human beings, this forethought is that much more important. Robots and artificial intelligence will disrupt the very fabric of society, and dramatically change the way we relate to technology and to each other. (In fact, they already are.) So preparing for this change is perhaps as important as – if not more important than – the development of the technology itself.
In this vein, Brown University recently put its support behind the Humanity-Centered Robotics Initiative (HCRI), a faculty-led effort to explore, uncover, and report on the many facets of integrating robotics into our everyday lives. As anyone who's read Isaac Asimov or Arthur C. Clarke can attest, even if we're very cautious, this integration has the potential to augment or to destroy. HCRI hopes to anticipate this disruption and help engineers, researchers, and social scientists steer robotics in the right direction.
"We want to leverage the atmosphere, interests, and talent at Brown University with the goal of creating robotic systems that work with people for the benefit of people," computer science professor Michael Littman told Emerj. "And we're dedicated to understanding what the actual problems are – not just to create fancy technology, but actually to try to understand where the difficulties and shortcomings are and to focus on those."
HCRI's work will be split into six collaborative research areas: robots for scientific research; motion systems science; design and making; perception and decision-making; robots for independent living; and ethics, policy, and security. Combining elements of design and making with ethics, policy, and security, one DARPA-funded project plans to explore ways of engineering robots that have some awareness of social norms.
Littman co-founded HCRI with Professor Bertrand Malle three years ago, with the intent to focus a number of academic perspectives on robotics and collaborate in the process. Brown’s recent support now allows Littman and Malle to bring an associate director and a postdoctoral researcher on board, as well as offer seed funds to new robotics research and symposia. Already, two HCRI-sponsored symposia have brought more than 60 Brown faculty members from 20 teams together in the interest of a better robotic future.
How many times have you heard tales of automation and job loss? How fervently have these harbingers announced the arrival of robot-run industries? The International Federation of Robotics recently projected that about 1.3 million industrial robots will be put into operation within the next couple of years. From experts to laymen, automation is too tempting and unsettling a topic to ignore.
Beyond the ongoing "pop culture" debate over whether the pursuit of AI will result in Terminators that destroy humanity, many more informed and nuanced discussions are occurring in academic and business circles about the consequences and implications of continued advances in AI. A surprising number of legitimate AI researchers believe that many of us will live to see "conscious" artificial intelligence.
In 2015, more than a billion dollars was spent on artificial intelligence research. That's more than in the field’s entire history combined. AI systems saw advancements in aspects as diverse as consciousness and comedy. Even the entertainment industry seemed to ride the wave with films like Ex Machina and Chappie performing well with critics and fans. And yet, according to venture capital database CB Insights, last year business investment (particularly on the corporate side) in AI slowed from the six-year high it found in 2014. Investment, it seems, didn’t match public and private interest.
Just a few weeks ago, the artificial intelligence community and board game community alike were shocked when an AI system named AlphaGo defeated an expert player five games straight in the sophisticated game of Go. The surprise wasn't so much that the system won, but that it was capable of winning nearly a decade before experts expected.
Go can colloquially be called the "chess of East Asia," but Go actually trumps chess in terms of complexity and the intuition required of players. By alternating black and white pieces on a grid of 19 horizontal and 19 vertical lines, players are challenged to surround and trap their opponents' pieces. The result is a game with many more potential moves at any given time and no apparent method for determining any player's specific advantage. Thus, Go demands immense practice and subtle, human-like intuition. Likewise, the system must be developed to process data more like a human than like a machine. Demis Hassabis, head of the Google team that developed AlphaGo, told a press briefing after the win, "Go is the most complex and beautiful game ever devised by humans."
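The gulf in complexity between the two games can be made concrete with a back-of-the-envelope game-tree estimate. The sketch below uses commonly cited approximate figures – a branching factor of roughly 35 and game length of roughly 80 moves for chess, versus roughly 250 and 150 for Go – which are illustrative assumptions, not exact counts:

```python
def game_tree_size(branching_factor: int, game_length: int) -> int:
    """Rough game-tree size estimate: b^d, where b is the average
    number of legal moves per turn and d is a typical game length."""
    return branching_factor ** game_length

# Commonly cited approximations (assumptions for illustration):
chess = game_tree_size(35, 80)
go = game_tree_size(250, 150)

# len(str(n)) - 1 gives the order of magnitude of a positive integer
print(f"chess ~ 10^{len(str(chess)) - 1}")  # on the order of 10^123
print(f"go    ~ 10^{len(str(go)) - 1}")     # on the order of 10^359
```

The point of the comparison is not the exact exponents but their gap: a Go game tree is hundreds of orders of magnitude larger than chess's, which is why brute-force search alone could not crack it and AlphaGo's more intuition-like, learned evaluation was needed.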
In case we haven’t been cautioned enough about the threats of emerging artificial intelligence, a panel of academics addressed the American Association for the Advancement of Science (AAAS) on Sunday with a warning that advancements in intelligent and semi-intelligent automation could lead to overwhelming unemployment across many industries.
Machines' ability to recognize patterns has yet to match our own, but their increasing sophistication at tasks like speech recognition and data analysis has seen AI applied in real-world settings such as autonomous driving. In this vein, Bart Selman, professor of computer science at Cornell University, said, "For the first time, we're going to see these machines and systems as part of our everyday life."
The predicted success of self-driving cars may prove to be a blessing that greatly reduces car accidents, but – with 10% of U.S. jobs requiring some degree of vehicle operation – the technology will also undoubtedly affect the labor market. Moshe Vardi, professor of computer science and director of the Ken Kennedy Institute for Information Technology at Rice University, told AAAS, "We can expect the majority of these jobs will simply disappear." He went on to suggest that the disconnect between manufacturing output and job growth is a result of automation: though U.S. manufacturing volume is currently at its peak, U.S. manufacturing employment is below its levels from the 1950s. He pointed to the 250,000 industrial robots in the U.S. and the accelerating growth in their use.
What Vardi suggests will happen is “job polarization”, a phenomenon that emerges when high-skilled jobs demand complex human intelligence and low-skilled jobs are too expensive to automate. Thus, the middle ground jobs will be the easiest to automate, leading to greater economic inequality. Vardi also noted that although this issue is widely regarded as a threat that could make a huge impact on American economic life, there is no discussion of it in politics, particularly not in the presidential election. “We need to start thinking very seriously: What will humans do when machines can do almost everything?” he said. “We have to redefine the meaning of good life without work.”
Furthermore, Wendell Wallach, an ethicist at Yale University's Interdisciplinary Center for Bioethics and the Hastings Center, said, "There's a need for concerted action to keep technology a good servant and not let it become a dangerous master." He also proposed that 10% of AI research funding be put toward studying the impact that AI machines will have on society, echoing Vardi's concern that politics has failed to address this tremendous issue. "We need strong, meaningful human control," he said.
How emotions influence consumer buying habits has long intrigued and eluded the business sector. Face recognition technology, once limited to security and surveillance systems, has made it possible to gauge more specific metrics, allowing companies to predict consumer behavior and accelerate revenue growth.