Episode Summary: We talk a lot about the future of technology on Emerj – the long-road potentials and ethical considerations that intersect the various paths of artificial intelligence. But keeping the conversation real and present necessitates looking through binoculars rather than a telescope from time to time. In this episode, Eyal Amir, a tech entrepreneur and associate professor of Computer Science at University of Illinois, gives his zoomed-in perspective of the types of technological progress that he believes will be relevant in the next 5 to 10 years.
Guest: Eyal Amir
Expertise: Data Science and Artificial Intelligence
Recognition in Brief: Eyal Amir received his PhD in Computer Science from Stanford University in 2002 and went on to work as a Postdoctoral Researcher at University of California, Berkeley. His first startup, Faspark, which found open street parking in real time, won State Farm’s Amplify’d event at NextDoor and was featured in numerous media outlets. More recently, he has served as Co-founding CEO and CDO of Parknav and Co-founding CEO of AI Incube, Inc. His articles have been published in numerous scientific publications and other media, and his thesis – Dividing and Conquering Logic – won best thesis in the Computer Science Department at Stanford in 2002.
Current Affiliations: Associate Professor of Computer Science at University of Illinois, Urbana-Champaign; Co-founder, CEO and CDO of ParkNav, and Co-founder and CEO of AI Incube, Inc.
The Significant Leaps: Near Past Tense
When I ask about tangible progress made in the past decade or so, one of the first advances Amir points to is the ability of AI to recognize images and content, i.e. deep learning and neural nets. Given the many media headlines that have covered technologies following these exact trends, this does not come as too much of a surprise. Deep machine learning has many ramifications in development, from autonomous cars, which are becoming ever more likely in the foreseeable future, to algorithms that better understand data on the web.
The latter may be getting slightly less attention, though the underlying technology is penetrating our daily lives. “There’s another kind of technology that hasn’t really attracted as much attention, but it is happening – it’s happening a lot in industry…that is the ability to connect a lot of data together,” Eyal explains. The advertising industry is one of the most obvious sectors using these advanced technologies to connect the dots between swaths of data. Ever feel like advertisers are reading your mind? Close, but it’s more akin to pointed inferences about the types of ads that might interest an individual or group. “Companies connect the dots between who you are, what you’re doing, where you’re going…we already see a lot of this, you know the term big data – it usually refers to that (idea).”
Amir brought up an important point about this ‘big data’ phenomenon, one that surprised me and reminded me of the power of assumption. Data is not available in the way we often think. For example, companies often don’t know whether a user is a man or a woman, how much money he or she made in a year at employer X, and so on; instead, AI now has the capability to make very good inferences by connecting the dots between a string of buyer actions on the web over time (Facebook certainly has a large hand in feeding this equation). “In the end,” Amir comments, “they are all inferences.”
Data points could include everything from having children, to moving, to product likes and dislikes. When advertisers are targeting groups, they’re not using what many would envision as a set list with detailed demographic information; instead, advanced algorithms are taking bits and pieces of data and forming deeper correlations in order to make more accurate inferences.
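The kind of inference Amir describes can be illustrated with a toy probabilistic model. The sketch below is purely hypothetical – the signals, probabilities, and the “new parent” segment are invented for illustration – but it shows the naive Bayes-style logic of combining weak behavioral signals into a stronger inference about a hidden attribute:

```python
# Toy naive Bayes-style inference: combine weak behavioral signals
# into a posterior over a hidden attribute (here, "new parent").
# All signals and probabilities are invented for illustration.

SIGNALS = {
    # signal: (P(signal | new_parent), P(signal | not_new_parent))
    "searched_strollers":   (0.60, 0.05),
    "bought_coffee_weekly": (0.50, 0.40),
    "late_night_activity":  (0.70, 0.30),
}

def infer_new_parent(observed, prior=0.10):
    """Return P(new_parent | observed signals) via Bayes' rule,
    assuming the signals are conditionally independent."""
    p_yes, p_no = prior, 1.0 - prior
    for signal in observed:
        likelihood_yes, likelihood_no = SIGNALS[signal]
        p_yes *= likelihood_yes
        p_no *= likelihood_no
    return p_yes / (p_yes + p_no)

# No single signal is conclusive, but together they shift the odds sharply.
posterior = infer_new_parent(["searched_strollers", "late_night_activity"])
print(f"P(new parent | signals) = {posterior:.2f}")  # -> 0.76
```

Note how the posterior (about 0.76) far exceeds the 10% prior even though neither signal alone is decisive – which is exactly the “connecting the dots” effect Amir describes, just at industrial scale and with far more data points.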
This same concept applies to image recognition. “Many pictures will say, ‘this is a man on a horse’, many programs will detect that it’s a man on the horse, but many of them will miss that it’s actually a statue of a man on a horse,” Amir explains. This kind of common error is one reason autonomous cars are not driving around by themselves yet: the inferences are not precise enough.
This makes sense when you realize that making extremely accurate inferences is really what’s at work in the human brain i.e. the evolved ability to make sense of our surroundings based on putting together a bigger picture from lots of different pieces of information. Today’s AI scientists are diligently working to bring machines up to speed – and perhaps beyond human capabilities – at some point in the more distant future.
The Significant Leaps: Near Future Tense
Since I’m always looking ahead to anticipate future possibilities, I had to ask Eyal where he thinks AI will be in the next 5 to 10 years. Linking predictions to the technologies previously mentioned, Amir believes we will see autonomous cars in some shape or form – though they won’t yet drive by themselves to get the groceries or find parking while we get a head start inside. The autonomous cars of the near future will be more along the lines of precise and continuous cruise control on freeways and other predictable roads that are generally free of potential obstructions.
More developed digital neural networks will also lead to new big data applications. Consumers are a huge market, and hence many apps will be geared toward their wants and needs. For example, apps that make inferences precise enough to know where parking spots are available in the absence of direct sensors (Amir originally founded Parknav around this idea, as covered by Crain’s Chicago Business), that can give a pretty accurate estimate of how long the line is at your favorite coffee house, or that can tell you the likelihood that a particular store has the shirt you want. The information isn’t going to be given directly by producers – it’s not generally worth their time and effort – but big data can make this possible.
Greater computer autonomy is the more general trend that will take hold in the next decade. Eyal already sees society starting to trust in the ability of computers to do basic tasks and to hold knowledge that we do not. In the near future, we’ll likely allow them even more choice. We might make a comparison to TiVo, which today can “guess” which shows will appeal to a user and record them at will. Perhaps one day soon, we’ll be able to set alerts on the contents of our fridge, which will detect when we’re out of a staple like milk and send out for a doorstep delivery.
As with any new advancement in technology, there are legitimate concerns underpinning these realities. AI is often stigmatized in the media, particularly in light of recent developments in military drone technologies and comments about malevolent AI made by tech moguls such as Elon Musk and Bill Gates (though these weren’t the only views they expressed). There are risks, but Amir thinks they should be split into distinct camps. “Half of what is happening in AI, relating to trends like big data, is due to market forces alongside advancing technology. While we label these developments as AI, it’s really AI controlled by the market.”
This poses some valid, if conceptual, threats, such as loss of privacy. We might adapt, but AI also seems likely to erode certain abilities in our daily routines and to displace jobs. In that respect, if this is not the future a person or group of people desires, then such technologies are intimidating. “I don’t think computers will terminate us because they want to, but maybe because some hackers want to use the technology to do so.” Bottom line – the human component is still the scariest variable amid AI developments.