According to a Deloitte study, 90 percent of worldwide retail sales are still done in physical stores. To compete with the convenience and endless aisle assortment offered online, this research suggests that meaningful customer experiences and brand engagement are crucial.
The study adds that technologies that are anticipated to enhance the shopping experience include augmented and virtual reality (AR/VR).
A virtual or smart mirror is a two-way mirror with an electronic display behind the glass. The display can show the viewer different kinds of information in the form of widgets, such as weather, time, date, and news updates. Virtual mirrors may also be called magic mirrors, interactive mirrors, or digital mirrors.
Virtual mirrors are usually fitted with computer vision technology, which according to the British Machine Vision Association and Society for Pattern Recognition, is the science that aims to give machines or computers the ability to analyze visual patterns — much like how humans see. The technology uses algorithms to collect and analyze image data and draw meaningful inferences from one or more images.
In retail fashion, virtual mirrors could combine sensors, displays, and computer vision-equipped cameras to offer shoppers clothing recommendations based on the data collected as well as fashion trends.
The potential benefits of virtual mirrors in retail stores could include enhanced brand experiences and lighter workloads for sales associates.
Research from Cisco suggests that these technologies also benefit shoppers as they can view several outfits without waiting in line for the fitting area.
In this report, we explore how virtual mirrors and computer vision claim to offer new business use cases and how they could change the industry landscape. The technologies we discovered in this space seem to fit broadly into five major categories:
- Virtual Fitting Rooms
- In-Home Fashion Assistant
- In-Store Navigation
- Measuring Foot Traffic
- Computer Vision for Play
In the article below, we’ll examine each of these five categories of applications, along with examples, demos, and case studies of each.
Virtual Fitting Rooms
Virtual mirrors simulate the process of trying on clothes in a retail store while still offering the possibility of a clothing purchase. These technologies use a computer vision-enabled camera or scanner that captures the shopper’s shape and size, which are then reflected as a virtual mannequin on the screen display, allowing shoppers to see how the clothes will fit before deciding on a purchase.
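None of the vendors below has published its capture pipeline, but the body-measurement step can be sketched with open-source tools. The snippet below uses the MediaPipe library (not a product of any company in this report) to estimate a shopper’s shoulder width from a single camera frame — the kind of measurement a virtual mannequin could be scaled from. The landmark choice and single-frame approach are our own illustrative assumptions:

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def shoulder_width_px(frame_bgr):
    """Estimate shoulder width in pixels from one camera frame.

    A production fitting-room system would fuse many such measurements
    (often with depth data) to size a virtual mannequin; this is a sketch.
    """
    with mp_pose.Pose(static_image_mode=True) as pose:
        result = pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not result.pose_landmarks:
        return None  # no person detected in the frame
    h, w = frame_bgr.shape[:2]
    lm = result.pose_landmarks.landmark
    left = lm[mp_pose.PoseLandmark.LEFT_SHOULDER]
    right = lm[mp_pose.PoseLandmark.RIGHT_SHOULDER]
    # Landmarks are normalized to [0, 1]; convert to a pixel distance.
    return ((left.x - right.x) ** 2 * w ** 2 +
            (left.y - right.y) ** 2 * h ** 2) ** 0.5
```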
TopPlus
Along with apparel, virtual mirrors can also be used for fitting and choosing eyewear. TopPlus is a Chengdu-based computer vision startup that offers TopGlasses, which it describes as a software development kit for eyeglass and contact lens retailers who want to add a try-on experience to their eCommerce platforms or mobile apps.
To use this mobile application, a retailer must first take a short video of the shopper’s face, making sure to get shots of the front, right, and left sides. As the computer vision algorithms capture the shape and measurements of the face from the recorded video, the application displays catalog eyewear that fits the shopper’s face at the bottom of the screen. When the user taps a product, it is superimposed on the shopper’s face.
The user can keep swiping through and tapping the products until they select a product they like. The process is demonstrated in this video.
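TopPlus has not published how TopGlasses positions frames on the face, but the general pattern of landmark-based try-on can be sketched with open-source tools. The example below uses MediaPipe’s face mesh to locate the outer eye corners and alpha-blends a transparent glasses image across them; the landmark indices, scale factor, and blending logic are our illustrative assumptions, not TopPlus’s implementation:

```python
import cv2
import mediapipe as mp
import numpy as np

mp_face = mp.solutions.face_mesh

def overlay_glasses(frame_bgr, glasses_rgba):
    """Superimpose a transparent glasses PNG across the detected eye line."""
    h, w = frame_bgr.shape[:2]
    with mp_face.FaceMesh(static_image_mode=True) as mesh:
        res = mesh.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not res.multi_face_landmarks:
        return frame_bgr  # no face found
    lm = res.multi_face_landmarks[0].landmark
    # Indices 33 and 263 are the outer eye corners in MediaPipe's face mesh.
    right = np.array([lm[33].x * w, lm[33].y * h])
    left = np.array([lm[263].x * w, lm[263].y * h])
    eye_w = np.linalg.norm(left - right)
    scale = (eye_w * 1.6) / glasses_rgba.shape[1]  # widen frames past the eyes
    g = cv2.resize(glasses_rgba, None, fx=scale, fy=scale)
    gh, gw = g.shape[:2]
    cx, cy = ((left + right) / 2).astype(int)
    x0, y0 = cx - gw // 2, cy - gh // 2
    if x0 < 0 or y0 < 0 or x0 + gw > w or y0 + gh > h:
        return frame_bgr  # glasses would fall outside the frame
    roi = frame_bgr[y0:y0 + gh, x0:x0 + gw]
    alpha = g[:, :, 3:] / 255.0  # per-pixel transparency of the PNG
    roi[:] = ((1 - alpha) * roi + alpha * g[:, :, :3]).astype(frame_bgr.dtype)
    return frame_bgr
```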
The shorter demo below shows how shoppers can try on glasses with TopPlus:
The company claims that the application uses MagicBar, a camera module with depth-sensing and visual simultaneous localization and mapping (VSLAM) algorithms. The company says this enables tablets or mobile phones to sense the environment and build a 3D map of the face in real time.
The company claims to serve Qualcomm, Sunny Group, Tmall, ThunderSoft, ZeroTech and Kedo.com as clients for its other computer vision technologies but did not reveal clients for its TopGlasses application.
The company recently announced the release of its Vision of Me (Voome) software development kit, which developers can download without cost from Voome.cn. Voome is a face alignment and tracking algorithm engine that can be integrated into gaming, live streaming, marketing and eCommerce applications.
The company was recently invited to join the Nvidia Inception Program, which aims to support AI and deep learning startups in product development, prototyping, and deployment. It is not clear if the company joined the program.
Ruixiang (Ray) Gu is the Chief Strategy Officer of TopPlus, leading initiatives in artificial intelligence and computer vision technologies. He earned his doctorate in earth system engineering from the Colorado School of Mines. Beyond this, we could not find any robust AI experience on TopPlus’ leadership team.
However, we can tell from LinkedIn searches that the company has at least one algorithms engineer.
Fitnect
Fitnect claims to be an augmented reality 3D fitting room platform that uses an external depth camera based on Microsoft’s Kinect. The application pairs the camera’s depth and infrared sensors with Azure, Microsoft’s AI cloud service. Previously used in Xbox gaming technology, the fourth-generation Kinect combines depth sensing with AI capabilities, which Fitnect says allows near and far objects to be captured and projected clearly onto the display.
The 1-minute video below shows how a shopper selects and “fits” clothes in a fitting room with a virtual mirror:
Fitnect is connected to a database of 3D images of clothes and accessories hosted on Azure’s remote cloud servers. Shoppers can search and select clothes and accessories from a retailer’s catalog with various hand gestures. They can also gesture toward a specific item to try it on virtually.
If the user likes the look, Fitnect says shoppers can take a snapshot and share the image using the built-in social sharing tools. The system also features real-time cloth physics, which the company claims can show what the clothing item looks like when a user moves. It also allows full-body motion capture and can be embedded in websites using a browser plugin.
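Fitnect’s gesture code is proprietary, but hover-to-select logic of this kind is straightforward to sketch. Assuming hand coordinates arrive each frame from a depth camera’s skeleton tracking (as a Kinect provides), a selector might look like the following; the tile layout and dwell time are hypothetical:

```python
class GestureSelector:
    """Selects a catalog item when the tracked hand lingers over its tile."""

    def __init__(self, tiles, dwell_frames=30):
        self.tiles = tiles          # {item_id: (x0, y0, x1, y1)} in screen px
        self.dwell = dwell_frames   # frames the hand must linger (~1 s @ 30 fps)
        self.current, self.count = None, 0

    def update(self, hand_x, hand_y):
        """Feed one frame's hand position; returns an item once selected."""
        hit = None
        for item, (x0, y0, x1, y1) in self.tiles.items():
            if x0 <= hand_x <= x1 and y0 <= hand_y <= y1:
                hit = item
                break
        # Count consecutive frames over the same tile; reset on movement.
        self.count = self.count + 1 if (hit and hit == self.current) else 0
        self.current = hit
        return hit if self.count >= self.dwell else None

# Usage with a hypothetical two-item catalog column on the mirror's edge:
selector = GestureSelector({"red_dress": (50, 100, 200, 250),
                            "blue_jacket": (50, 300, 200, 450)})
```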
The Fitnect mirrors have been used at events such as the Elle Fashion Show in April 2018, as well as at Aggreko and Expo Noivas in Brazil, according to its website. It is also installed in stores including Bilshirt and La Praille in Geneva. While the website listed Fitnect clients, it did not note any case studies.
Adam Horvath serves as Founder and CEO of Fitnect. Prior to Fitnect, he was Head of IT at Verlag Dashofer Ltd. He earned an IT Engineering degree at the Gabor Denes University in Budapest. His profile does not indicate AI skills and expertise. We could find no other robust AI background on the Fitnect team.
It was unclear which Kinect team members, if any, were involved with Fitnect.
FindMine
FindMine claims to offer virtual fitting rooms as part of an array of retail marketing solutions that include AI-driven eCommerce, mobile, in-store and personalized email services.
The in-store virtual fitting room is rooted in the flagship “Complete the Look” recommendation technology, which the company claims creates complete outfits around one product from the curated catalog of clothing and accessories.
For instance, a female shopper might walk into the fitting room wearing a coat. The virtual mirror reportedly recognizes the coat and recommends a matching pair of shoes, top, bag, jewelry, scarf, and other items to complement the look, according to FindMine. If shoppers like the look, they can tap on the mirror, and associates will collect the items from inventory and deliver them to the fitting room.
In a 2-minute video interview, FindMine CEO Michelle Bacharach explains how the AI application determines what the human perceives as stylish:
According to the company, image processing plays a key role in analyzing the products by color, pattern, titles, description, price, gender targets, and other parameters.
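FindMine has not detailed its models, but a toy version of the idea — scoring two products for compatibility by color and metadata — can be sketched with OpenCV histograms. The features and weighting below are our own simplification, not FindMine’s algorithm:

```python
import cv2

def color_signature(image_bgr, bins=16):
    """Compact hue histogram used as a simple color descriptor."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [bins], [0, 180])
    return cv2.normalize(hist, hist)

def outfit_compatibility(item_a, item_b):
    """Toy score mixing color similarity with metadata agreement.

    Each item is a dict with an 'image' (BGR array) and a
    'gender_target' string -- two of the parameters FindMine says
    it analyzes (color, pattern, titles, price, gender targets...).
    """
    color_sim = cv2.compareHist(
        color_signature(item_a["image"]),
        color_signature(item_b["image"]),
        cv2.HISTCMP_CORREL,
    )
    meta_sim = float(item_a["gender_target"] == item_b["gender_target"])
    return 0.7 * color_sim + 0.3 * meta_sim  # arbitrary illustrative weights
```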
Along with its virtual mirror, FindMine also offers a recommendation engine which retailers can integrate into their eCommerce site. We highlighted it in a recent report on recommendation engines in fashion.
While we could not find a demonstration of the in-store virtual mirror, this 1-minute video walks through how FindMine can be used on mobile eCommerce platforms:
Among the company’s clients are brands such as Adidas and John Varvatos. A case study of recommendation engine use with the latter showed that deploying FindMine on the eCommerce site for a pilot phase resulted in a 74-percent increase in the average amount a customer spends per order on a website or mobile app.
The pilot also resulted in an 83-percent increase in conversion rate, a 107-percent increase in time on site, and a 6.5-percent increase in revenue per customer. No other figures were provided in the deck.
The company has not made other case studies available specifically about its virtual mirror but claims Adidas and Perry Ellis as among its clients. The company also recently partnered with Perry Ellis International to build an automated Complete-the-Look solution for its eCommerce site.
According to FindMine, the partnership also tasks the company with building an application that will facilitate outfit creation and provide better customer experiences for Perry Ellis brands such as Original Penguin, Callaway, Cubavera, and Laundry by Shelli Segal.
Konstantin Itskov was Co-Founder and CTO of FindMine. However, his LinkedIn profile shows he left FindMine in February 2018 to found his own company. He graduated magna cum laude with a Bachelor of Arts in computer science from Columbia University in New York, where he focused on machine learning.
In-Home Fashion Assistant
Echo Look
Another popular product is Amazon’s Echo Look, a spinoff from the Echo home speaker that is equipped with the voice of Amazon’s virtual assistant Alexa. Echo Look comes with the Style Check computer vision technology.
To start using it, the user instructs Alexa by voice to take a full-body photo or a 6-second video, which are stored in a gallery.
Combined with Alexa’s AI capabilities, the application then analyzes and judges if the outfit looks good on the wearer. Over time, Amazon says machine learning enables the device to compare two outfits and judge which looks better on the user. Amazon says the algorithm will continue to improve based on input from users and the Amazon style team.
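Amazon has not published Style Check’s architecture. As a generic sketch of the underlying pattern — a pretrained image backbone feeding a learned scoring head — a two-outfit comparison could look like the following in PyTorch. The linear head here is an untrained placeholder standing in for whatever Amazon’s style team trained, and is purely illustrative:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# Pretrained backbone reused as a generic image-feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # expose the 512-d embedding
backbone.eval()

# Hypothetical style-scoring head; Amazon's actual model is not public.
style_head = torch.nn.Linear(512, 1)

prep = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                  T.Normalize(mean=[0.485, 0.456, 0.406],
                              std=[0.229, 0.224, 0.225])])

def style_score(pil_image):
    """Score one outfit photo; higher would mean 'more stylish'."""
    with torch.no_grad():
        feat = backbone(prep(pil_image).unsqueeze(0))
        return style_head(feat).item()

# A Style Check-like comparison then reduces to:
# winner = "outfit A" if style_score(img_a) > style_score(img_b) else "outfit B"
```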
This 2-minute video review shows how Echo Look’s Style Check feature captures the user’s image wearing an outfit and judges if the outfit looks good, based on color, fit, trend and style:
At the moment, the Echo Look is for home use and is priced at $200. It is not clear if Amazon intends to develop and market a version for retail businesses. Because Amazon built the product specifically for home shopping, we found no case studies of retailers using it to market their own clothing.
Hassan Sawaf is the Director for Artificial Intelligence at Amazon Web Services, where he leads the building of service and technology initiatives related to human language technology and machine learning. He earned his doctorate in computer science, focusing on speech and language processing, from the RWTH Aachen University in Germany.
SenseMI
SenseMI claims to offer a style and home assistant, virtual fitting room, and smart home speaker all in one device.
The device looks like a projector with a tablet screen. To activate it, the user must look at the screen and unlock it. A computer vision-enabled camera lock is also able to recognize and distinguish a specific user’s face, according to the company.
At a user’s home, SenseMI claims its device projects images of outfits owned by the user, or items that can be purchased from the company’s retail partners, onto a smart TV screen or a nearby wall. The clothes can also be projected onto an image of the user.
To begin using the application for fitting, a user can take photos of their wardrobe with the device and store them in a gallery. The user will then be prompted to stand in front of the machine as it scans a full body photo. From there, the user can place their own clothes or a retailer’s clothes on to their body in the photo.
This 1-minute video below demonstrates how a user can activate the item to see what clothes will look like on them:
The machine, which can connect to WiFi, tracks the user’s wardrobe and can recommend outfits. The company says the device can also show a user different clothes based on an occasion, such as a wedding, or on upcoming weather. The AI-powered virtual assistant is also embedded with voice recognition technology, which allows it to understand and respond to the user’s navigation commands.
The in-store retail version of this application projects clothing onto a larger virtual mirror, where shoppers can choose from the store’s catalog. The application recognizes swiping hand gestures, so shoppers can wave a hand to view and choose other outfits.
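SenseMI hasn’t described its gesture recognition, but a minimal swipe detector — given per-frame hand x-positions from an upstream hand tracker — can be sketched in a few lines. The window size and travel threshold are hypothetical tuning values:

```python
from collections import deque

class SwipeDetector:
    """Detects left/right hand swipes from per-frame hand x-positions."""

    def __init__(self, window=10, min_travel=200):
        self.xs = deque(maxlen=window)  # recent hand x-coords (pixels)
        self.min_travel = min_travel    # horizontal travel required for a swipe

    def update(self, hand_x):
        """Feed one frame's hand x-position; returns a catalog action or None."""
        self.xs.append(hand_x)
        if len(self.xs) == self.xs.maxlen:
            travel = self.xs[-1] - self.xs[0]
            if abs(travel) >= self.min_travel:
                self.xs.clear()  # avoid double-counting the same swipe
                return "next_outfit" if travel < 0 else "previous_outfit"
        return None
```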
The 1-minute video below shows a shopper in Sri Lanka using the technology.
The company claims that the in-store virtual mirror’s face recognition capabilities save the shopper’s likeness so the system can recognize them the next time they shop. The company also claims its system saves each shopper’s try-on and shopping history to remember their preferences.
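SenseMI’s recognition pipeline is not public, but the returning-shopper step can be sketched with the widely used open-source face_recognition library. The matching tolerance and the idea of a per-shopper encoding store are our assumptions:

```python
import face_recognition

def recognize_returning_shopper(frame_rgb, known):
    """Match a face in the frame against stored shopper encodings.

    `known` maps shopper IDs to 128-d face encodings saved on a
    previous visit; a hit lets the mirror load that shopper's
    try-on history. Returns the matching ID, or None.
    """
    encodings = face_recognition.face_encodings(frame_rgb)
    if not encodings:
        return None  # no face detected
    ids, refs = list(known), list(known.values())
    matches = face_recognition.compare_faces(refs, encodings[0], tolerance=0.6)
    return next((i for i, m in zip(ids, matches) if m), None)
```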
To date, SenseMI has been installed at the Sharjah Tourism Booth at GITEX, Carrefour hypermarkets, the Arabic Kandora Store in Dubai, the Abu Dhabi Culture and Arts Gallery, and Avirate Fashion Stores. No case studies were available on the product’s use.
Arash Ahmadi, Founder and CTO of SenseMI, claims he led the invention of the company’s virtual mirror.
Prior to SenseMI, he also founded ArmaxTech where he also served as Software Development Manager. He was also the Head of IT Department at Eaton. It is not clear how robust his AI experience is.
In-Store Navigation
These applications claim that studying customer behavior helps retailers understand the factors that influence buying decisions. Customer behavior could include where in the store customers go, the route they take, and how much time they spend inside, among other factors. Understanding this behavior enables store owners to respond to customer needs and potentially increases the chance of a purchase.
Lowe’s
Lowe’s combines augmented reality with geolocation technology in its mobile In-Store Navigation app. The in-house application is powered by Google Tango, a computer vision technology that uses mobile devices to determine the user’s position relative to their surroundings. The technology is used to create applications for indoor navigation, 3D mapping, physical space measurement, environmental recognition, augmented reality, and windows into a virtual world.
To use the navigation software, the customer must download the Tango app on their smartphone and create a list of items to buy in the app. The app surfaces product reviews and information, enabling customers to make informed purchasing decisions. From there, the application directs the customer to the products inside the store with turn-by-turn instructions.
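Tango handles the indoor positioning; the routing layer on top can be sketched as a shortest-path search over a graph of store waypoints. We don’t know Lowe’s actual routing method, and the waypoints and distances below are hypothetical, but Dijkstra’s algorithm is the standard approach:

```python
import heapq

# Hypothetical store graph: aisle waypoints and walking distances in meters.
STORE_GRAPH = {
    "entrance": {"aisle_1": 5, "aisle_2": 8},
    "aisle_1":  {"entrance": 5, "paint": 12},
    "aisle_2":  {"entrance": 8, "paint": 6, "plumbing": 10},
    "paint":    {"aisle_1": 12, "aisle_2": 6},
    "plumbing": {"aisle_2": 10},
}

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: cheapest path from start to goal."""
    frontier = [(0, start, [start])]  # (cost so far, node, path taken)
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, step in graph[node].items():
            if nxt not in seen:
                heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return float("inf"), []

# shortest_route(STORE_GRAPH, "entrance", "plumbing")
# -> (18, ['entrance', 'aisle_2', 'plumbing'])
```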
The application allows store associates to spend more time advising customers on home improvement projects.
The 30-second video below demonstrates how the app, using a mobile device’s camera, guides shoppers through the store along the quickest possible route. The application is in a pilot stage at the Sunnyvale, California and Lynnwood, Washington stores:
Mason Sheffield, Director of Technology at Lowe’s Innovation Labs, took over from Kyle Nel, who left the company in May 2018 to found his own company. Sheffield has a Bachelor’s degree in Computer and Video Imaging from Cogswell Polytechnical College, but his AI background is unclear.
Walmart
Another retailer that is using computer vision in-store is Walmart, which recently announced that it is testing shelf-scanning robots developed by Bossa Nova Robotics in 50 of its stores.
The robots are equipped with computer vision to navigate the aisles inside the stores. The six-minute video below shows how the robot moves along the aisles, scanning the shelves and capturing images, which the algorithms analyze to identify out-of-stock items, missed price changes, and missing labels:
In a case study, Walmart describes the in-store robots as standing about two feet tall, fitted with a six-foot arm that can be extended above the robot’s body. The arm carries computer-vision cameras that capture data about the products and labels on the shelves.
In addition, the robot has light detection and ranging (LiDAR) sensors, a remote sensing method that uses laser light to measure distances to surrounding objects. This technology enables the robot to be aware of its surroundings, allowing it to navigate past poles, customers, associates, and other obstacles in the aisles.
The data from the robot is compressed and processed to analyze the images for discrepancies against the data it has been trained with. The robot then feeds this data back to store associates, who restock shelves, correct labels and prices, or take whatever other action is needed.
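Bossa Nova has not published its detection pipeline, but the downstream comparison step — matching what the cameras saw against what each shelf slot should hold — reduces to logic like the sketch below. The planogram and detections here are invented for illustration:

```python
# Hypothetical planogram: what each shelf slot should hold and at what price.
planogram = {
    "slot_A1": {"sku": "paint_1gal", "price": 24.98},
    "slot_A2": {"sku": "brush_set", "price": 9.97},
}

# What the robot's cameras detected on this pass (None = empty slot).
detected = {
    "slot_A1": {"sku": "paint_1gal", "shelf_price": 22.98},
    "slot_A2": None,
}

def find_discrepancies(planogram, detected):
    """Flag out-of-stocks and price mismatches for store associates."""
    issues = []
    for slot, expected in planogram.items():
        seen = detected.get(slot)
        if seen is None:
            issues.append((slot, "out of stock", expected["sku"]))
        elif seen["shelf_price"] != expected["price"]:
            issues.append((slot, "price mismatch", seen["shelf_price"]))
    return issues

# find_discrepancies(planogram, detected)
# -> [('slot_A1', 'price mismatch', 22.98), ('slot_A2', 'out of stock', 'brush_set')]
```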
In the study, Walmart claims that the robot will serve as the eyes of the employees and give them more time to care for customers.
As of March 2018, Walmart reports that the robots have covered more than 620 miles in-store without mishap and identified tens of thousands of out-of-stock items. According to the company, the data collected by the robots will be used to train other AI-driven store applications, reducing the time store employees spend on manual, repetitive tasks and enabling them to focus on helping customers.
Claude Jones is the Senior Director for Software Engineering at Walmart Labs, where he leads initiatives to automate, test, and build software for Walmart’s web, internal applications, and store experiences. Prior to Walmart, he spent nine years at Yahoo in management roles. His AI experience is not evident.
Measuring Foot Traffic
Density
Density developed a door-mounted sensor that counts people as they enter and exit. The company claims it uses a mix of computer vision, deep learning algorithms, and depth-sensing technology to measure foot traffic anonymously. The company says its algorithms filter out surrounding noise, such as chatter, squeaking doors, or footsteps, while capturing the images.
The company explains that the quality of the images captured is rendered in gray to maintain people’s privacy. The link below shows a sample of the images captured by the camera. This image is also featured on the company website.
In interviews, CEO Andrew Farah said that, unlike a security camera, the sensor can’t determine the gender or ethnicity of the people it tracks. Because of this, Density claims its hardware can be installed in places where a video camera is not allowed, such as public bathrooms, churches, secure office spaces, schools, and dressing rooms.
Farah also claims the technology can handle complex human behavior such as group collisions, bi-directional movement, lines, and lingering.
Retailers can integrate the application into their pre-existing software solutions, the company website claims. No case studies are available, but the company says its customers include homeless shelters, Ivy League colleges, theme parks, realtors, hospices, and businesses examining the use of office space.
According to the RFID Journal, Density was also piloted by a startup mobile app company called Workfrom. The Portland-based startup, which aimed to help remote workers find coffee shops with Wi-Fi, used Density to determine when seating might be available based on door traffic at partner coffee shop locations. It is not clear if the pilot was successful.
The hardware can be installed above a door or plugged into a wall and tracks movement frame-by-frame with two infrared beams that bounce off of the floor.
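Density has not published its counting algorithms, but the generic direction-aware line-crossing logic that depth-based people counters commonly use can be sketched as follows. The track IDs are assumed to come from an upstream tracker that follows each person frame-to-frame:

```python
class DoorCounter:
    """Counts entries/exits as tracked people cross a virtual door line."""

    def __init__(self, line_y):
        self.line_y = line_y           # y-coordinate of the doorway in the frame
        self.entries = self.exits = 0
        self.last_y = {}               # track_id -> previous centroid y

    def update(self, track_id, centroid_y):
        """Feed one frame's position for one tracked person."""
        prev = self.last_y.get(track_id)
        if prev is not None:
            if prev < self.line_y <= centroid_y:
                self.entries += 1      # crossed the line moving inward
            elif prev >= self.line_y > centroid_y:
                self.exits += 1        # crossed the line moving outward
        self.last_y[track_id] = centroid_y

# counter = DoorCounter(line_y=240)
# counter.update(track_id=7, centroid_y=230); counter.update(7, 250)  # one entry
```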
In the video below, Farah explains that the hardware counts people by collecting unique Wi-Fi or MAC addresses from smartphones carried through the door:
Use of the hardware is free, but the company says it sells the data collected. The data is sent to a cloud-hosted server, where it is processed and analyzed. Farah says that the data collected is aggregated to show what other establishments shared customers visit. The video also shows a simulation of the dashboard to visualize the data.
In the above video, Farah also explains that customers entering establishments can opt out of being tracked.
The company has raised $16.2 million in three rounds of funding; the most recent of which totaled $12 million in June 2018.
Jordan Messina is Co-Founder and CTO at Density. He holds a bachelor’s degree in computer science from the State University of New York at Binghamton. We could not find anyone on the leadership team with higher computer science degrees or robust AI backgrounds.
Computer Vision for Play
Playfusion
Playfusion created a computer vision-based game for the former Toys R Us, which announced the AR application at its stores in October 2017. Augmented reality (AR) in computer vision is the process by which computer-generated content is overlaid onto images or video streams.
Users could download the game, called Play Chaser, on their smartphones or tablets. Players then used the device to scan signage throughout the store to activate different AR games and activities.
The 1-minute video below shows how the Play Chaser game displays on screen when it is played:
Playfusion’s game technology features simultaneous localization and mapping, which plots the environment in relation to the position of the camera that scans it. It also uses object tracking, a technology that follows a moving object, such as an animated character, with the aid of the computer vision system. Combined with image recognition, these technologies provide the basis of an augmented reality experience, the company says on its website.
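Play Chaser recognizes printed store signage rather than fiducial markers, but the trigger mechanic — scan a known visual target, launch an AR activity — can be approximated with OpenCV’s ArUco module (the 4.7+ API). The marker-to-activity mapping below is hypothetical:

```python
import cv2

# Markers placed on store signage; each ID would launch a different activity.
# (Hypothetical mapping -- Play Chaser's actual triggers are proprietary.)
MARKER_ACTIONS = {7: "launch_racing_game", 12: "launch_scavenger_hunt"}

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(aruco_dict, cv2.aruco.DetectorParameters())

def scan_signage(frame_bgr):
    """Return the AR activity triggered by any recognized marker in view."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = detector.detectMarkers(gray)
    if ids is None:
        return None  # no marker in this frame
    for marker_id in ids.flatten():
        if int(marker_id) in MARKER_ACTIONS:
            return MARKER_ACTIONS[int(marker_id)]
    return None
```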
Playfusion is responsible for AR games such as Lightseekers and Warhammer Age of Sigmar.
Because Toys R Us announced store closings shortly after implementation, it is not clear if this application had any results related to in-store shoppers.
Mark Gerhard is Co-Founder, CEO, and CTO of Playfusion, but it is not clear who is responsible for AI development at the company. Prior to Playfusion, he served as CEO, CTO, and Director of Jagex Ltd. He has also held C-level and other leadership positions at GTech, Ministry of Data, and Serious Global.
Concluding Thoughts
In the clusters that we discussed above, the aim of the AI technologies seems to be to enhance the shopping experience by providing convenience or increasing engagement.
Virtual mirrors such as Fitnect claim to expedite the process of fitting clothes, which could also speed up purchase decisions. In the case of Lowe’s navigation app and Walmart’s shelf-scanning robots, the artificial intelligence is claimed to help customers navigate the aisles or to ensure shelves are stocked with correctly priced and labeled products.
While most retail stores we covered use mirrors to help shoppers choose clothing and fashion accessories, the technology can be used in other ways. For instance, Lululemon reportedly uses virtual mirrors to provide information about community activities.
According to news reports, the retailer’s flagship stores in Manhattan and Vancouver use the mirrors as a directory to display community events such as film screenings, places to exercise, and information about healthy living.
Data privacy remains an issue as these AI systems collect information about individuals, a concern panelists voiced during the pitch of Density CEO Andrew Farah. The concern applies whether the technologies are used in-store or at home, as with virtual style assistants that take photos of private spaces.
We also noticed that most of the startups we covered appear to lack AI experts on their leadership or management teams. Some companies were also not transparent about the results of their technology in real-world use cases. However, this may be due to the nascency of the companies and the technology.
Header image credit: Bloomberg News