Dyllan Furness
Dyllan explores technology and the human condition for Tech Emergence. His interests include but are not limited to whiskey, kimchi, and Catahoulas.
Articles by Dyllan
30 articles
Horse betting is harder than it looks. At the 142nd Kentucky Derby last week, only one of five experts from Churchill Downs Racetrack correctly predicted the winner. None of them correctly predicted the top four horses. Known as a superfecta, this latter bet came with 540-to-1 odds, meaning $100 down would return $54,000. And although the experts failed to predict the finishing order, an anonymous group of internet users did.
If you aren’t yet convinced of the real-world potential of artificial intelligence, Microsoft’s chief envisioning officer, Dave Coplin, has a few words for you. Speaking at an AI conference in London on Tuesday, Coplin emphatically told business leaders that AI is “the most important technology that anybody on the planet is working on today,” reports Business Insider.
The inventors of Apple’s virtual assistant software, Siri, have just demonstrated their secret, next-generation artificial intelligence assistant. Created by Dag Kittlaus and Adam Cheyer, Viv is four years in the making. She offers an open platform through which queries can connect with third-party merchants. And yesterday, during Kittlaus’s demo at TechCrunch Disrupt NY, Viv performed flawlessly as her creator gave her tasks that would likely see Siri fumbling for answers.
Viv’s talent is in analyzing natural language queries, breaking them down into their components to determine intent, and handing these requests off to bots that can easily process them. In this sense, Viv is a flexible “top bot” that commands a number of very inflexible specialized bots, which enables her to satisfy many unique requests by delegating tasks vertically. Whereas the highly specialized personal assistant X.ai is capable of competently managing a calendar, Viv – like Siri – is designed to handle multiple tasks at various times. However, where Siri often falls back on a search engine to answer complicated questions, Viv seems capable of finding an answer by relying on the support of the company’s partners. See Kittlaus’s demonstration and interview at Disrupt NY below.
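To make that delegation pattern concrete, here is a minimal Python sketch. Every name, intent label, and handler in it is hypothetical – Viv’s actual natural-language analysis and partner integrations were never published in this form.

```python
# Minimal sketch of a flexible "top bot" delegating to inflexible
# specialized bots. All names and handlers here are illustrative,
# not Viv's actual architecture or API.

def parse_intent(query: str) -> str:
    """Stand-in for natural-language analysis: break the query down
    just far enough to label its intent."""
    q = query.lower()
    if "weather" in q:
        return "weather"
    if "send" in q or "flowers" in q:
        return "commerce"
    return "search"  # complicated questions fall back to search

# Each specialized bot handles exactly one kind of task.
SPECIALIZED_BOTS = {
    "weather": lambda q: "Forecast lookup for: " + q,
    "commerce": lambda q: "Routing order to a partner merchant: " + q,
    "search": lambda q: "Falling back to a web search for: " + q,
}

def top_bot(query: str) -> str:
    """Determine intent, then delegate the task vertically to the
    one specialized bot that can easily process it."""
    return SPECIALIZED_BOTS[parse_intent(query)](query)

print(top_bot("Send flowers to my mother for her birthday"))
```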
Artificial intelligence may be making the world smarter, safer, more functional and accessible. But can it make the world more beautiful? A number of researchers hope to do so by developing AI systems that can paint, write, and colorize photographs.
Big data is big business. But in an age of digital privacy paranoia, it isn’t always easy for tech companies to get their hands on information – particularly when some of the most potentially beneficial data is also confidential, locked up in healthcare and finance companies that aren’t comfortable sharing.
How secure is your company’s online data?
Probably not as secure as you think. Recent statistics from SecurityScorecard, a security risk benchmarking startup, suggest that the United States federal government ranks dead last in cybersecurity among major industries, despite having spent $100 billion on cybersecurity measures over the past decade.
Despite what the media tends to depict, artificial intelligence is being put to better use than winning video games and board games. In fact, two of the world’s leading tech giants have begun using AI to help the blind perceive the world in helpful new ways.
Last week, Goldman Sachs led a $30 million investment round into Persado, a company that offers AI-based copywriting and marketing services. Persado claims its system “outperforms man-made messages 100% of the time” in a process they call “persuasion automation.” In other words, according to Persado’s promotional video below, human marketers hardly stand a chance.
Persado’s unique selling point is its ability to create and test the strength of words, phrases, and entire sentences used in marketing content. The company claims to have tagged, scored, and categorized some 1 million of these terms to determine their effectiveness in marketing copy.
Backed by this database, Persado says its software can “effectively parse hundreds of thousands of ways to convey emotions” and apply those expressions to augment marketing campaigns. Thus, the software can alternate copy between feelings of safety, intimacy, and anxiety, depending on a client’s needs. (See below for their "Wheel of Emotions" infographic). This automated system enables the company to generate and test more texts than human-managed marketing departments, which tend to rely on individually created A/B test samples. For international clients, Persado boasts that its software can translate texts into 23 languages.
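As a rough illustration of how copy might be assembled from such a database, consider the toy Python sketch below. The phrases, emotion tags, and scores are invented for this example and bear no relation to Persado’s proprietary data or scoring methods.

```python
# Toy model of picking marketing copy from a database of phrases that
# have been tagged with an emotion and scored for effectiveness.
# All entries and scores below are invented for illustration.

PHRASE_DB = [
    {"text": "Don't miss out",       "emotion": "anxiety",  "score": 0.81},
    {"text": "Act now",              "emotion": "anxiety",  "score": 0.74},
    {"text": "Just for you",         "emotion": "intimacy", "score": 0.88},
    {"text": "You're in safe hands", "emotion": "safety",   "score": 0.69},
]

def best_phrase(emotion: str) -> str:
    """Return the highest-scoring phrase that conveys the emotion a
    client's campaign calls for."""
    candidates = [p for p in PHRASE_DB if p["emotion"] == emotion]
    return max(candidates, key=lambda p: p["score"])["text"]

# Alternate copy between feelings, depending on the client's needs.
for mood in ("safety", "intimacy", "anxiety"):
    print(mood, "->", best_phrase(mood))
```

Unlike a hand-built A/B test, every candidate in such a database arrives pre-scored, which is what makes generating and testing new variants so cheap.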
Science Magazine’s report on Friday that an artificial intelligence system was caught stealing banking customers’ money may have made you rethink investing your funds in the burgeoning technology. But have no fear – the article was an April Fools’ joke.
Picture this: Mad Men returns for a final season set in the near future. The advertising agency Sterling Cooper Draper Pryce is still a powerhouse, though its namesakes have since retired. Actually, the entire human staff has been reduced to just a few account men, managers, and technicians. Where are the creatives? They're in the computers.
Artificial intelligences are becoming better storytellers by the day. Last week, a novella written by an AI program nearly won a Japanese literary contest. “The Day a Computer Writes a Novel” (Konpyuta ga shosetsu wo kaku hi) is a surprisingly human tale of an AI that recognizes its writing skills and abandons its programmed task of aiding humanity in order to satisfy an artistic urge. The Japan News reports (in an article that appears to have been taken down at the time of this article update, September 2017) that this meta-novella and 10 other AI-authored submissions faced competition from over 1,400 human-penned manuscripts for the Hoshi Shinichi Literary Award.
If the story of Cyc were written by Aesop, it would probably read something like The Tortoise and the Hare. The 30-year-old artificial intelligence engine's slow, steady, and idiosyncratic development is set to challenge the recent pattern recognition methods that have seen AI algorithms conquer centuries-old board games and rush-hour traffic. Where those methods find success by building statistical models from troves of data on their own, Cyc’s professed skill comes from hardcoded rules and logic that allow it to understand how and why data points are related.
Cyc is a common sense engine, which over the past three decades has been fed thousands and thousands of encyclopedic facts. Since computers lack human-level inference, Cyc’s creators also fed it background knowledge – facts that we’d consider self-evident – to help connect the dots between what, how, and why things happened.
So, if Cyc is told that in 1492 Columbus sailed the ocean blue, the system is also informed that Columbus sailed on the Santa Maria, the Santa Maria is a ship, a ship is a boat, and boats float. This degree of specificity is designed to make Cyc a comprehensive and unique resource with real-world applicable knowledge; it also helps explain why the knowledge base took so long to develop.
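The toy Python sketch below shows that kind of chained inference. Real Cyc encodes its knowledge in its own representation language, CycL, and its inference engine is far richer; the handful of facts and the single transitive “is a” rule here only illustrate the idea.

```python
# Simplified sketch of chaining background facts, mirroring the
# Columbus example above. (Cyc itself uses the CycL language and a
# much richer inference engine; this is only an illustration.)

FACTS = {
    ("Columbus", "sailed_on", "Santa Maria"),
    ("Santa Maria", "is_a", "ship"),
    ("ship", "is_a", "boat"),
    ("boat", "can", "float"),
}

def is_a(thing: str, category: str) -> bool:
    """Follow "is_a" links transitively: a ship is a boat, so the
    Santa Maria must be a boat too, even though that fact was never
    stated directly."""
    if (thing, "is_a", category) in FACTS:
        return True
    parents = [o for (s, r, o) in FACTS if s == thing and r == "is_a"]
    return any(is_a(p, category) for p in parents)

print(is_a("Santa Maria", "boat"))  # True, by inference
```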
From Silicon Valley to South Korea, artificial intelligence has been one of the hottest tech topics of the year. In fact, 2016 was meant to be “the year that virtual reality becomes reality”, and yet AI seems to be dominating the discussion. Now, top business schools around the world – from University of California, Berkeley to National University of Singapore – are turning to AI to help bolster their programs and train MBA students to apply machine learning processes to business problems.
If you’re sick of selfie sticks, Boston-based software company Neurala may have an alternative for you. The Selfie Dronie is a paid mobile application compatible with Parrot Bebop drones that offers users a relatively hands-free way to record selfies and dronies (those aerial shots often associated with extreme sports and Red Bull advertisements).
In the first week of 2016, Facebook’s Mark Zuckerberg announced in a post that his goal for the year was to “build a simple AI to run my home and help me with my work.” He clarified, “You can think of it kind of like Jarvis in Iron Man.” Zuckerberg went on to describe his plan to explore presently available smart home technologies, implement them into his home, and train the system to coordinate with his family life and workday. (Interestingly, Zuckerberg’s AI may utilize a number of devices, but he refers to the technology as a singular system, implying that he intends to develop a unified AI to oversee the many individual devices.)
Before we welcome a new technology into our lives, it’s wise to consider what effect it will have on us as human beings. What might a technologically disruptive app do to our innate empathy or self-esteem? When a technology is sophisticated enough to actually resemble human beings, this forethought is that much more important. Robots and artificial intelligence will disrupt the very fabric of society and dramatically change the way we relate to technology and to each other. (In fact, they already are.) So preparing for this change is perhaps as important as – if not more important than – the development of the technology itself.
In this vein, Brown University recently put its support behind the Humanity-Centered Robotics Initiative (HCRI), a faculty-led effort to explore, uncover, and report on the many facets of integrating robotics into our everyday lives. As anyone who’s read Isaac Asimov or Arthur C. Clarke can attest, even if we’re very cautious, this integration has the potential to augment or destroy. HCRI hopes to anticipate this disruption and help engineers, researchers, and social scientists steer robotics in the most reasonably right direction.
“We want to leverage the atmosphere, interests, and talent at Brown University with the goal of creating robotic systems that work with people for the benefit of people,” computer science professor Michael Littman told Emerj. “And we’re dedicated to understanding what the actual problems are – not just to create fancy technology, but actually to try to understand where the difficulties and shortcomings are and to focus on those.”
HCRI’s work will be split into six collaborative research areas: robots for scientific research; motion systems science; design and making; perception and decision-making; robots for independent living; and ethics, policy, and security. Combining elements of design and making with ethics, policy, and security, one DARPA-funded project plans to explore ways of engineering robots that have some awareness of social norms.
Littman co-founded HCRI with Professor Bertrand Malle three years ago, with the intent to focus a number of academic perspectives on robotics and collaborate in the process. Brown’s recent support now allows Littman and Malle to bring an associate director and a postdoctoral researcher on board, as well as offer seed funds to new robotics research and symposia. Already, two HCRI-sponsored symposia have brought more than 60 Brown faculty members from 20 teams together in the interest of a better robotic future.
How many times have you heard tales of automation and job loss? How fervently have these harbingers announced the arrival of robot-run industries? The International Federation of Robotics recently projected that about 1.3 million industrial robots will be put into operation within the next couple of years. From experts to laymen, automation is too tempting and unsettling a topic to ignore.
In 2015, more than a billion dollars was spent on artificial intelligence research – more than was spent in the field’s entire prior history combined. AI systems saw advancements in areas as diverse as consciousness and comedy. Even the entertainment industry seemed to ride the wave, with films like Ex Machina and Chappie performing well with critics and fans. And yet, according to venture capital database CB Insights, business investment in AI (particularly on the corporate side) slowed last year from the six-year high it reached in 2014. Investment, it seems, didn’t match public and private interest.
Just a few weeks ago, the artificial intelligence community and the board game community alike were shocked when an AI system named AlphaGo defeated an expert player five games straight in the sophisticated game of Go. The surprise wasn't so much that the system won, but that it was capable of winning nearly a decade before experts expected.
Go is colloquially called the “chess of East Asia,” but it actually trumps chess in terms of complexity and the intuition it demands of players. By alternately placing black and white stones on a grid of 19 horizontal and 19 vertical lines, players are challenged to surround and trap their opponents’ pieces. The result is a game with many more potential moves at a given time and no apparent method to determine any player’s specific advantage. Thus, Go demands immense practice and subtle, human-like intuition – and any system built to win it must process the game more like a human than like a machine. Demis Hassabis, head of the Google team that developed AlphaGo, told a press briefing after the win, “Go is the most complex and beautiful game ever devised by humans.”
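To put rough numbers on that complexity gap: a game tree with branching factor $b$ and typical game length $d$ contains on the order of $b^d$ lines of play, and the commonly cited ballpark averages for the two games differ astronomically:

\[
\text{chess: } 35^{80} \approx 10^{123} \qquad\qquad \text{Go: } 250^{150} \approx 10^{360}
\]

No computer can exhaustively search a space of that size, which is why Go was long expected to resist brute-force methods.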
In case we haven’t been cautioned enough about the threats of emerging artificial intelligence, a panel of academics addressed the American Association for the Advancement of Science (AAAS) on Sunday with a warning that advancements in intelligent and semi-intelligent automation could lead to overwhelming unemployment across many industries.
Machines’ ability to recognize patterns has yet to match our own, but their increasing sophistication at tasks like speech recognition and data analysis has seen AI applied to real-world uses such as autonomous driving. In this vein, Bart Selman, professor of computer science at Cornell University, said, “For the first time, we’re going to see these machines and systems as part of our everyday life.”
The predicted success of self-driving cars may prove to be a blessing that greatly reduces car accidents, but – with 10% of U.S. jobs requiring some degree of vehicle operation – the technology will also undoubtedly affect the labor market. Moshe Vardi, professor of computer science and director of the Ken Kennedy Institute for Information Technology at Rice University, told the AAAS, “We can expect the majority of these jobs will simply disappear.” He went on to suggest that the disconnect between the manufacturing industry and job growth is a result of automation: though manufacturing output is currently at its peak, U.S. manufacturing employment is below its 1950s levels. He pointed to the 250,000 industrial robots in the U.S. and the increasing growth rate of their use.
What Vardi suggests will happen is “job polarization”, a phenomenon that emerges when high-skilled jobs demand complex human intelligence and low-skilled jobs are too expensive to automate. Thus, the middle ground jobs will be the easiest to automate, leading to greater economic inequality. Vardi also noted that although this issue is widely regarded as a threat that could make a huge impact on American economic life, there is no discussion of it in politics, particularly not in the presidential election. “We need to start thinking very seriously: What will humans do when machines can do almost everything?” he said. “We have to redefine the meaning of good life without work.”
Furthermore, Wendell Wallach, an ethicist at Yale University’s Interdisciplinary Center for Bioethics and the Hastings Center, said, “There’s a need for concerted action to keep technology a good servant and not let it become a dangerous master.” He also proposed that 10% of AI research funding be put towards studying the impact that AI machines will have on society, echoing Vardi’s concern that politics has failed to address the tremendous issue. “We need strong, meaningful human control,” he said.
Despite the progress made in artificial intelligence over the past few years, deep learning software still lags far behind the pattern recognition and learning capabilities of the mammalian mind. Where a human might be able to recognize an apple after seeing just a couple apples, even the most sophisticated deep learning software has to review hundreds of thousands of apples to identify one.
At the heart of our present day sharing economy is the often lauded, sometimes corrupted, and occasionally controversial open source model. Though the open source model has its roots in the early days of automobile development, our Internet age has proved an ideal medium for free licensing and distribution.
The world’s biggest names in technology – particularly those in Silicon Valley – have released their artificial intelligence technology via the open source model over the past few months in a domino effect that has made some of the most sophisticated AI programs available to anyone with an Internet connection. In huge maneuvers, Google, Facebook, Microsoft, and China’s search engine giant Baidu have taken deep learning even deeper.
In November of last year, Google open sourced the software library for TensorFlow, the tech giant’s perceptual and language comprehension program. Though TensorFlow wasn’t the first open source AI software out there – Torch, Caffe, and Theano preceded it – its algorithms are widely regarded as among the most advanced in the world. Thus Google’s move to make TensorFlow open source marked an unparalleled step forward, one its competitors couldn't resist following.
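For a sense of what working with the newly open-sourced library looked like, here is a minimal example in the TensorFlow 1.x style of that era; note that the API has changed substantially in later versions.

```python
# Minimal TensorFlow example in the 1.x style of the era: first build
# a computation graph, then execute it inside a session. (Later
# TensorFlow versions run eagerly; this exact API no longer applies.)
import tensorflow as tf

a = tf.constant(3.0)
b = tf.constant(4.0)
total = a + b  # adds an op to the graph; nothing is computed yet

with tf.Session() as sess:
    print(sess.run(total))  # 7.0 -- the graph executes only here
```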
White collar professions were once considered safe from automation. It was blue collar work, such as labor and manufacturing jobs, that appeared at risk of becoming redundant in the wake of advancing technologies. But according to the World Economic Forum – which held its conference last week in Davos, Switzerland – white collar work is not as secure as it seemed. AI systems continue to advance and challenge the status quo.
The "fourth industrial revolution" is upon us and, according to the World Economic Fund, it is set to drastically disrupt business modes, labour markets, and economies across the world. In fact, in a report released this week, the Swiss foundation gave a conservative estimate of 7.1 million jobs that could vanish due to redundancy and automation by 2020. Some 2.1 million jobs will be created and marginally offset that loss – but the 5 million remaining, mostly white collar jobs, will see themselves performed by one or more machine.
Where previous industrial revolutions were powered by tools that workers could control, the current revolution is led by machines that may well control themselves. The WEF lists artificial intelligence and machine learning among the most disruptive technologies to date, predicting that advancements in these fields will cause “enormous change…in the skill sets needed to thrive.”
The report comes just one day before the WEF’s annual forum in Davos, Switzerland – a gathering of over 2,500 business leaders, governmental figures, and members of society convened to discuss the state of the global economy. This year, the focus will be on jobs, with a particular emphasis on the effects of potentially widespread automation.
To formulate its report, the WEF conducted a broad survey representing 65 percent of the global workforce, polling senior executives from 350 companies across nine industries and 15 economies.
The report found that the healthcare, energy, and financial services and investors sectors will take the biggest hit from automation. We’ve seen how AI and robots already perform as surgeons and caregivers. Earlier this year, the Financial Times and the BBC reported how AI programs are transforming the financial industry.
China entered 2016 with a struggling stock market that made many analysts question the strength of the world’s largest economy. Despite this economic omen, China closed 2015 with a pretty stellar year in artificial intelligence and robotics, sparking what may be the beginning of a revolution.
One of the world's oldest and most prestigious universities is offering a new study focus that sheds light on its progressive approach to academia. With a grant from the non-profit Leverhulme Trust, academics at England’s University of Cambridge will be able to study the ethics of artificial intelligence over the next ten years.
Machines like IBM’s Deep Blue and Watson are already capable of beating chess champions and Jeopardy! champions, respectively, proving that strategy and trivia can be conquered by a machine. But this knowledge doesn’t necessarily transfer over into everyday use.
Most people aren’t kept up at night for fear of a robot apocalypse – but, in our economically unstable society, many people do worry about their future employment. Robots and AI aren’t yet dominating our lives but they are taking our jobs.
In business and capitalism, the value of an idea is initially measured by the investment it earns. Keen investors expect financial profit from their economic commitments – profits that fatten their wallets but don't always coincide with the betterment of society. Meanwhile, a $1 billion joint investment by some of business and technology’s biggest names has shed that principle of return and, in the act, validated artificial intelligence as one of today's most important topics.
The original Star Wars film is nearly 40 years old – but the hologram buzz the films inspired has never settled down. In recent years, some researchers have even sought to bring Dante back in 3D.