Using Wearable Data for Artificial Intelligence Applications – Current Use Cases

Ayn de Jesus

Ayn serves as AI Analyst at Emerj - covering artificial intelligence use-cases and trends across industries. She previously held various roles at Accenture.

International Data Corporation reports that the global wearables market continued to grow in the second quarter of 2018, with shipments reaching 27.9 million units, an increase of 5.5% year-on-year. This growth translated to $4.8 billion for the quarter. Smartwatches remained the most popular wearables.

We researched the use of AI in wearable technology to better understand where AI comes into play in the wearables sector with an aim to answer the following questions:

  • What types of wearables are currently in use and what AI technologies drive them?
  • What tangible results has AI driven for wearable technology vendors?
  • What common trends can be gleaned from these innovation efforts? How could these trends affect the future of wearable technologies?

This report covers vendors offering software across two applications in the healthcare space:

  • Assistive Technology
  • Health Monitoring

This article intends to provide business leaders with an idea of what they can currently expect from AI in this industry. We hope that this report allows business leaders to garner insights they can confidently relay to their executive teams so they can make informed decisions when thinking about AI adoption. At the very least, this report intends to act as a method of reducing the time business leaders spend researching AI companies with whom they may (or may not) be interested in working.

Assistive Technology

OrCam

OrCam developed OrCam MyEye, which the company claims helps blind and vision-impaired people read by way of computer vision, gesture recognition, and natural language processing.

OrCam MyEye can be attached to a person's eyeglasses. When the wearer points MyEye at a piece of text in the physical world, the camera captures the text, and the computer vision behind MyEye matches the visual text information to its pre-installed database of physical text images. The system then seems to use natural language processing to convert the recognized text into speech audio. MyEye then "speaks" the text it is pointed at to the wearer.
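
For readers curious about the mechanics, the pipeline OrCam describes resembles a standard OCR-to-speech flow. Below is a minimal sketch of that general technique in Python, using the open-source pytesseract and pyttsx3 libraries; it illustrates the approach only and is not OrCam's actual system.

```python
# Illustrative OCR-to-speech pipeline: a sketch of the general technique
# described above, NOT OrCam's proprietary system.
from PIL import Image
import pytesseract  # wrapper around the open-source Tesseract OCR engine
import pyttsx3      # offline text-to-speech

def read_aloud(image_path: str) -> str:
    """Recognize printed text in a captured image and speak it aloud."""
    text = pytesseract.image_to_string(Image.open(image_path))
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()  # blocks until the speech audio finishes
    return text

# Hypothetical usage with a photo of a restaurant menu:
# print(read_aloud("menu_photo.jpg"))
```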

OrCam claims that MyEye can “read” from books, newspapers, menus, signs, product labels, and computer screens. The company also claims MyEye can be used to assist the wearer in shopping, identifying items and announcing their colors and specifications to the wearer. According to OrCam, MyEye also uses facial recognition technology to announce to the wearer the names of people they encounter. Additionally, when the user raises their wrist to check their watch, the wearable will tell the time and date.

Below is a short 2-minute video demonstrating how OrCam MyEye works:

The company has raised $86.4 million in funding. It has not made any case studies available, and it does not list any marquee clients, likely because it sells B2C.

At the moment, the OrCam MyEye is available in English, Spanish, German, Italian, Dutch, French, Hebrew, Danish, Polish, Norwegian, Portuguese, Romanian, Finnish, Swedish, Czech, Mandarin, and Japanese. Additional language support is in development, according to the company.

Amnon Shashua is the co-founder, Chairman of the Board, and CTO of OrCam. He holds the Sachs Chair in Computer Science at the Hebrew University of Jerusalem, where he specializes in computer vision and machine learning. He also co-founded Mobileye and, following its acquisition by Intel Corp., served as Senior Vice President leading the Autonomous Driving Group.

Brain Power

Brain Power developed Empower Me, a coaching application that runs on Google Glass hardware and Affectiva emotion-recognition software. The wearable is targeted at children and adults with autism and aims to aid in teaching social and cognitive skills.

The company claims Empower Me uses a combination of augmented reality (AR), virtual reality (VR), and artificial intelligence to address social interaction, language, behavior, self-control, and job skills. Google Glass captures images of what the wearer is seeing and the voices of people the wearer is interacting with, while the AR and VR technologies layer information about those images and people in the wearer's field of vision.

Affectiva’s Emotion AI software uses facial recognition technology to map key regions on the face, such as the corners of the eyebrows, the corners of the mouth, and the tip of the nose. The company claims that its machine learning algorithms analyze these regions of the face to classify facial expressions and correlate them to emotions based on a database of faces labeled with different emotions. Afectiva also states that its software is able to analyze vocal signals such as tone, loudness, tempo, and voice quality to distinguish emotions and determine gender.

Below is a short two-minute video demonstrating how the coaching application works through a gamified format:

Brain Power has not published case studies but claims Amazon Web Services, Harvard, MIT, Massachusetts General Hospital, and Augmate as partners, in addition to Google and Affectiva.

Ned Sahin is CEO and Founder of Brain Power. He holds a PhD in Cognitive Neuroscience from Harvard University. Rana el Kaliouby, CEO and Co-founder of Affectiva, was a research scientist at MIT, where she developed technologies that measure and communicate emotions and that help individuals with ASD. She holds a PhD in Computer Science from the University of Cambridge.

Starkey Hearing Technologies

Starkey Hearing Technologies offers Livio AI, a multipurpose hearing aid fitted with gesture-recognition technology and natural language processing.

The company reports that hearing impairment or loss affects a person's sense of space and balance, potentially increasing the risk of falls, injury, and dementia. For this reason, the company has fitted the hearing aid with 3D motion- and gesture-detection sensors that capture physical activity data. This data is passed to an accompanying app that allows wearers to track their physical activity.
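
To give a sense of how raw motion data becomes an activity metric, the sketch below counts steps as peaks in accelerometer magnitude. The thresholds and sampling rate are illustrative assumptions, not Starkey's parameters.

```python
# Sketch: turning 3D motion-sensor data into a step count via peak
# detection. Thresholds here are illustrative assumptions.
import numpy as np
from scipy.signal import find_peaks

def count_steps(accel_xyz: np.ndarray, fs: float = 50.0) -> int:
    """accel_xyz: (n_samples, 3) accelerometer readings in g; fs: sample rate in Hz."""
    magnitude = np.linalg.norm(accel_xyz, axis=1)
    magnitude -= magnitude.mean()                 # remove the gravity offset
    # Each step appears as a peak; require at least ~0.3 s between steps.
    peaks, _ = find_peaks(magnitude, height=0.1, distance=int(0.3 * fs))
    return len(peaks)
```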

The company claims the Livio AI hearing aid is also capable of translating incoming speech in 27 languages, including Arabic, Chinese, Danish, English, Finnish, French, Italian, and Japanese. This is made possible by natural language processing technology that seems to take incoming auditory speech data and match it to data labeled as corresponding to words and phrases in different languages.
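
The claim suggests a two-stage pipeline: speech recognition followed by translation. Below is a rough sketch of such a pipeline; the SpeechRecognition library call is real, but the translate function is a hypothetical placeholder, and none of this reflects Starkey's actual code.

```python
# Sketch of a translate-incoming-speech pipeline: speech-to-text, then
# machine translation. `translate` is a hypothetical placeholder.
import speech_recognition as sr

def translate(text: str, src: str, dst: str) -> str:
    """Hypothetical stand-in for a machine translation model or API."""
    return f"[{src}->{dst}] {text}"  # placeholder output

def translate_speech(wav_path: str, src: str = "fr-FR", dst: str = "en") -> str:
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)                     # load the audio clip
    text = recognizer.recognize_google(audio, language=src)   # speech-to-text
    return translate(text, src=src, dst=dst)
```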

We could not find any available videos demonstrating how the software works. Starkey does not make available any case studies reporting success with their software, but the company reports an annual revenue of around $10 million.

As CTO and Executive Vice President of Engineering, Achin Bhowmik leads the company's research and product development efforts. He earned a PhD from Auburn University, although his LinkedIn profile does not specify in which field. Prior to joining Starkey, he was Vice President and General Manager of the Perceptual Computing Group at Intel Corp., where he focused on 3D sensing and interactive computing, computer vision and artificial intelligence, autonomous robots and drones, and immersive virtual and merged reality devices.

Health Monitoring

Sensoria Health

Sensoria Health developed what it claims are intelligent socks targeted at the elderly. The socks are made of fabric embedded with sensors that detect the wearer's pressure points and balance, using machine learning technology to predict and prevent falls.

We were unable to find information on what the system does with the sensor data to yield an output, but we can infer that the sensors collect data about the user's physical coordination, posture, and gait in relation to the environment, as well as the angles between adjacent body parts. This data is transmitted to the Sensoria Core, the hardware integrated into the socks, which contains the machine learning algorithms that compare the wearer's data with data in its database. If the algorithms find similarities between the wearer's coordination and posture data and the data one might expect of someone falling, the company claims the system can accurately determine whether the wearer is actually falling.
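
Based on the inference above, a plausible (and purely illustrative) implementation would classify short windows of sensor features as fall-like or not. The features, labels, and model below are assumptions, not Sensoria's system.

```python
# Sketch of window-based fall detection from sock-sensor features;
# features, labels, and model choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row summarizes a short sensor window: e.g. mean plantar pressure,
# left/right pressure asymmetry, sway amplitude, gait-angle change.
rng = np.random.default_rng(0)
X_train = rng.random((500, 6))                 # placeholder features
y_train = rng.integers(0, 2, size=500)         # 1 = fall-like pattern

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

latest_window = rng.random((1, 6))
if model.predict(latest_window)[0] == 1:
    print("Possible fall detected; notify the caregiver")
```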

Knowing whether or not a wearer has actually fallen could reduce unnecessary ambulance calls to the wearer's home or needless alerts to hospital staff to check on the patient in their hospital room.

The company also reports on the importance of gait monitoring, especially in patients with neurological disorders such as Parkinson's disease, multiple sclerosis, and Alzheimer's.

We could not find a video demonstrating how the socks send alerts to whoever is monitoring the patient. The videos Sensoria makes available are mainly for marketing purposes.

Sensoria does not make available any case studies but claims to have partnered with Genesis Rehab Services to deploy the technology to about 450 nursing homes and senior living communities owned by Genesis. Under the agreement, Sensoria and Genesis will work together to monitor the activities of the elderly and help with fall prevention where needed. The partnership is co-branded "Sensoria Health powered by Genesis Rehab Services."

Maurizio Macagno is the CTO and co-founder at Sensoria Health Inc. He holds a Master’s degree in Telecommunications Technology from the Politecnico di Torino. Prior to Sensoria, he served at Microsoft Xbox Live for eight years as a senior program manager.

Tinylogics

Tinylogics offers Foci, an AI sensor that clips to a user’s waist to track their breathing and gauge their state of mind using machine learning technology.

The tool is accompanied by a smartphone app called AI Mind Coach, which provides the user advice for achieving a calm and focused state. To use the tool, the user clips it to their waist. The tool then collects respiratory data on the wearer and sends it to the mobile app. The company claims Foci's machine learning algorithms detect and analyze tiny movements in the user's breathing, compare this movement data to labeled data in its database, and determine whether the user is focused, distracted, stressed, fatigued, or calm.
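
As an illustration of this compare-to-labeled-data approach, the sketch below classifies breathing-derived features into mental states with a nearest-neighbor model. The features and model are assumptions, not Tinylogics' algorithm.

```python
# Sketch of inferring mental state from respiration features; the
# features and k-NN model are illustrative assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Features per breathing window: breaths per minute, breath-interval
# variability, inhale/exhale depth ratio (all illustrative).
rng = np.random.default_rng(0)
X_train = rng.random((300, 3))
y_train = rng.choice(["focused", "distracted", "stressed", "calm"], size=300)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)

current_window = rng.random((1, 3))
print(knn.predict(current_window)[0])  # e.g. "distracted"
```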

If the app determines that the user is distracted, stressed, or tired, it alerts the wearer or vibrates. Foci also displays color-coded orbs on the screen to show the wearer their current state and gives them tips on how they can regain or achieve focus and perform better. Below is a 2-minute video demonstrating how Foci works:

Our research yielded no results when we tried to find case studies for the software.

Meichen Lu is the co-founder and Lead Data Scientist at Tinylogics and is responsible for leading algorithm development for Foci. She has a Master’s degree in mechanical and chemical engineering from Cambridge University.

Takeaways for Business Leaders in Wearable Technology

Most of the companies covered in this report are supported by C-level executives who hold PhDs in hard sciences or computer science. This indicates that these companies have AI at the forefront of their offerings, which is a positive sign that AI in this space is legitimately providing value.

At the moment, Tinylogics is the least established of the companies covered in this report, having launched through crowdfunding on Kickstarter.

Many of the companies we covered in this report offer products that make use of a variety of AI technologies, including computer vision and natural language processing. These technologies are applied to assist users in performing daily tasks that require functions related to vision, hearing, emotions, touch, and cognition. Moving forward, we believe the trend of multifunctionalism will continue, enabling users to do more with one gadget and providing additional value per dollar.

Another trend among these companies is their B2C nature. Except for Sensoria Health, which has partnered with Genesis Rehab Services, none of the companies covered in this report seem to have enterprise clients, which makes sense given that these products are designed for individual consumers.

This focus on individual customers means that these AI products operate at the "tool" level, requiring no integration. In most cases, the only setup required of the individual customer is downloading an accompanying app on their smartphone.

 

Header Image Credit: Revtechno
