Machine Vision in Insurance – Current Applications

Niccolo Mejia

Niccolo is a content writer and Junior Analyst at Emerj, developing web content and helping with quantitative research. He holds a bachelor's degree in Writing, Literature, and Publishing from Emerson College.

Person Taking Picture Of Damaged Car

When it comes to the possibilities for AI in the insurance space, it may seem like machine vision is an inevitable choice for any insurer looking to have as much information as possible on the insured property, car, or people. However, this does not yet seem to be the case for many insurers. The insurance space has a relative lack of case studies for this type of AI software, which may indicate either a lack of demand from the insurance industry or a lack of traction for the technology in insurance.

In this report, we’ll investigate some of the applications of machine vision in insurance, where it’s helping insurers the most, and where demand may need more time to grow. Our report covers four AI vendors claiming to offer machine vision to insurers.

We’ll be investigating the machine vision software offered by the following companies:

  • Ant Financial offers a smartphone app they claim recognizes vehicle damage and can estimate repair costs in reports to the insurer.
  • Tractable.ai also offers software in the form of a desktop application. The software can purportedly use multiple images of the same vehicle to create an all-around image of the car’s damage before reporting damage results to an insurance agent.
  • Cape Analytics offers machine vision software that analyzes satellite images of homes and buildings to evaluate their condition for insurers at the time of underwriting. They claim this helps insurers by giving them access to property data and precluding the need for scheduled inspections.
  • Nexar offers smartphone and dashcam-enabled computer vision and crash detection for insurers to offer their customers. They claim the software can analyze the speed and force of collisions and use this information to create detailed incident reports.

Before we begin our investigation into these AI vendors, we’ll share some findings about the state of machine vision applications in the insurance industry.

Machine Vision in Insurance – Insights Up Front

The vendors explored in this report claim to offer machine vision software for claims processing, underwriting, and driving assistance. We’ve taken a look at the AI talent each company employs, along with their venture capital and any case studies about their software, to determine their traction. Ant Financial, the vendor that seems to have the most traction, and Tractable.ai, which has the least, both offer claims processing solutions. Cape Analytics claims their service facilitates the underwriting process, while Nexar states their software can record driving incidents and assist drivers in real time by providing alerts about distracted driving and objects on the road.

Of the companies we explored for this report, Ant Financial is the most likely to actually be leveraging AI. This is because they employ numerous people who hold PhDs in computer science or machine learning. Cape Analytics and Nexar each show less promise than Ant Financial, but may be roughly equivalent to each other in their chance of truly leveraging AI in their solutions; they have comparable rosters of AI talent, though Cape Analytics has slightly more than Nexar. Finally, Tractable.ai may be using AI for their software, but they may still be honing their machine learning models, given their relative lack of PhD-level AI staff.

All of the companies in this report except for Ant Financial have raised between $30 million and $50 million in venture capital. Ant Financial is, of course, a large enterprise.

Our research yielded few to no results when searching for case studies regarding a client’s success with any of the software investigated in this report. Only Cape Analytics’ software had case studies available online, and they lacked the detail necessary to trust the claims they asserted. Combined with the fact that Ant Financial is the only company that lists major companies as past clients, this leads us to believe the space for machine vision in insurance is nascent, at least in the West.

We’ll start our investigation of machine vision in insurance with Ant Financial and Tractable.ai, two vendors claiming to offer claims processing solutions.

Machine Vision for Claims Processing

Ant Financial

Ant Financial, an affiliate of Alibaba Group, is a Chinese fintech vendor with over 7,000 employees. They offer an app called Ding Sun Bao that they claim can recognize damage to vehicles and facilitate claims using machine vision.

The app can purportedly analyze smartphone pictures of vehicle damage, assess the severity of that damage, and report on its findings to a human insurance agent. These reports would include a list of the damaged parts, plans to repair them, and how the accident affects the driver’s premiums moving forward.

The machine learning model at the core of Ding Sun Bao was likely trained on thousands of images of damaged cars, labeled according to the severity of the damage and paired with the cost to repair it. The pictures would need to have been taken with smartphones, in various lighting conditions, and from various angles so the model could still discern the damage to a car in different weather conditions and at different times of day.

An Ant Financial data scientist would then need to run all of this data through the machine learning model in order to train the algorithm to recognize vehicle damage and estimate how much different severities of damage cost to repair. For example, the software could recognize the difference between two cars of the same model that each have a damaged bumper when one of the cars also has a flat tire. The report would then factor in the cost of the tire for the car that needs one replaced.
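
To make this hypothetical training step more concrete, below is a minimal sketch of how a damage-severity classifier could be fine-tuned from a pretrained network. This is a generic illustration rather than Ant Financial's actual pipeline; the damage_photos/ folder, the severity labels, and the hyperparameters are all assumptions.

```python
# Minimal sketch: fine-tuning a pretrained CNN to classify damage severity.
# Assumes a hypothetical damage_photos/ folder with one subfolder per
# severity label (e.g. minor/, moderate/, severe/) -- not Ant Financial's data.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),      # smartphone photos vary in size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder("damage_photos/", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))  # severity classes

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```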

A user could then upload an image of their damaged car into the app. The algorithm behind it would correlate the damage with a database of every damaged car it has been exposed to and determine the severity of the damage. The app would produce a report detailing which parts are damaged and how much the repairs are likely to cost, along with an estimate of how much the driver’s premium will rise as a result of the accident. The software would then send the report to an insurance agent for approval so that the insurer could pay the customer.
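
Continuing the sketch above, the report-generation step might look something like the following. The severity-to-cost mapping and premium adjustments here are invented placeholders, not Ant Financial's figures, and the model, class list, and transform are the ones from the training sketch.

```python
# Minimal sketch of turning a single prediction into a claim report.
# Severity labels, cost figures, and premium adjustments are hypothetical.
from PIL import Image
import torch

SEVERITY_COSTS = {"minor": 300, "moderate": 1200, "severe": 4500}     # placeholder USD
PREMIUM_INCREASE = {"minor": 0.03, "moderate": 0.08, "severe": 0.15}  # placeholder rates

def build_report(model, classes, photo_path, transform, current_premium):
    model.eval()
    image = transform(Image.open(photo_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        severity = classes[model(image).argmax(dim=1).item()]
    return {
        "severity": severity,
        "estimated_repair_cost": SEVERITY_COSTS[severity],
        "estimated_new_premium": round(current_premium * (1 + PREMIUM_INCREASE[severity]), 2),
        "status": "pending agent approval",
    }
```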

We couldn’t find a video demonstrating this particular application of Ding Sun Bao, and Ant Financial does not make any case studies available for the software. That said, Ant Financial claims China Taiping, China Continent Insurance, Sunshine Insurance Group, and AXA Tianping are some of their past clients.

Yuan Alan Qi is the Vice President and Chief Data Scientist at Ant Financial. He holds a PhD in Computer Science from MIT. Previously, Qi served as Vice President and Founder of the Institute of Data Science and Technologies at Alibaba.

Tractable.ai

Tractable.ai is a 2014 startup based in the UK that claims their software helps insurers automate car insurance claims processing with machine vision.

Tractable.ai claims that their software can detect which parts of the vehicle are damaged across multiple images and estimate repair costs based on that damage. This means the software would be less likely to miss minor damage on parts of a car that are more easily seen from unusual angles, such as the curve of a car’s fender around a front wheel.

We can infer the machine learning model behind Tractable.ai’s software was trained on millions of pictures of damaged cars. The damage would need to be labeled according to severity and which parts are damaged. The pictures were likely also labeled according to which parts of the car are shown so that the software can compare pictures of the same car at different angles. This would allow the software to effectively “map” the pictures across what it recognizes as the shape of a car for the purpose of detecting damage that may be hard to see from just one angle.

Each set of images of a given car would also be labeled with the cost to repair its damage, and the costs would likely be itemized across all damaged parts. The images would need to be taken in various lighting conditions and from various angles to make sure the system can detect damage in images taken at different times of day or in different weather. An employee would then expose the machine learning model to all of these labeled images, training it to discern different areas of a vehicle, differences in the severity of vehicle damage, and repair costs for different types of damage.
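
The labeled training examples described above could be represented with a simple schema like the one below. The field names and values are assumptions for illustration, not Tractable.ai's actual data format.

```python
# Hypothetical label schema for one photo in a multi-image claim.
# Field names are illustrative, not Tractable.ai's actual format.
from dataclasses import dataclass, field

@dataclass
class PartDamage:
    part: str           # e.g. "rear_bumper", "front_left_fender"
    severity: str       # e.g. "minor", "moderate", "severe"
    repair_cost: float  # itemized cost to repair this part

@dataclass
class LabeledPhoto:
    claim_id: str                  # groups photos of the same car
    angle: str                     # e.g. "front_left_three_quarter"
    visible_parts: list[str]       # parts of the car shown in this photo
    damage: list[PartDamage] = field(default_factory=list)
    lighting: str = "daylight"     # e.g. "daylight", "dusk", "overcast"
```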

A user could then take ten pictures of their damaged car and upload all of them into the software. The algorithm would then detect the parts of the car shown in each image and determine which images cover which parts of the car. If multiple images show a certain part of the car, such as a back bumper, the software would recognize this and detect any damage to the bumper across all of those images. Once all of the damage is assessed, the software would produce a report of the estimated cost to repair all of the damaged parts.
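
A simplified version of this aggregation step, merging per-image damage findings into one itemized estimate, might look like the following. It reuses the hypothetical PartDamage and LabeledPhoto schema from the earlier sketch.

```python
# Minimal sketch: merge damage detected across multiple photos of one car.
# If the same part appears in several photos, keep the most severe finding.
SEVERITY_RANK = {"minor": 0, "moderate": 1, "severe": 2}

def aggregate_claim(photos: list[LabeledPhoto]) -> dict:
    worst: dict[str, PartDamage] = {}
    for photo in photos:
        for finding in photo.damage:
            current = worst.get(finding.part)
            if current is None or SEVERITY_RANK[finding.severity] > SEVERITY_RANK[current.severity]:
                worst[finding.part] = finding
    return {
        "damaged_parts": {part: d.severity for part, d in worst.items()},
        "estimated_total_cost": sum(d.repair_cost for d in worst.values()),
    }
```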

Below is a video of Tractable.ai co-founder and CCO Adrien Cohen demonstrating how the software works. The demonstration begins at 2:18 and ends at 9:45.

Tractable.ai does not make any case studies available showing success with their software. They also do not list any major companies as clients, but they have raised $34.9 million in venture capital and are backed by Ignition Venture Partners, Insight Venture Partners, and Zetta Venture Partners. We caution readers about vendors who claim to use AI but do not have case studies with statistics relating to a client’s success.

Razvan Ranca is CTO at Tractable. He holds an MS in Computer Science with a machine learning distinction from the University of Cambridge.

Underwriting

Cape Analytics

Cape Analytics is a 2014 German startup with 38 employees. They offer an AI-enabled service for analyzing satellite imagery so that insurers can view relevant property attributes before underwriting. Their software purportedly uses machine vision to find property features like the condition of an outdoor pool or a building’s dimensions.

The company claims client insurers can provide them with the address of a property they want to analyze. Cape Analytics’ software then uses satellite images of the property to find attributes the insurer may be interested in. These could include knowledge of an existing pool or trampoline, as well as how many square feet a yard takes up. In addition, these attributes could include the condition of areas prone to damage over time, such as a roof or garage.

It’s likely that the machine learning model behind Cape Analytics’ software was trained using millions of satellite images of homes and buildings. Each image would have all property attributes labeled, including the dimensions of the building, the condition of parts like the roof or back porch, and any expansions to the property such as a shed or children’s clubhouse. A Cape Analytics data scientist would then need to upload all of these labeled images into the software. This would train the algorithm to recognize individual parts of a property and discern the condition of those parts.
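
As a rough illustration of the kind of multi-label training this would involve, the sketch below computes a loss for one aerial image against a binary vector of property attributes. The attribute names and the tiny example are assumptions, not Cape Analytics' actual system.

```python
# Sketch: multi-label training target for property attributes in one aerial image.
# Attribute names and the toy example are illustrative placeholders.
import torch
from torch import nn
from torchvision import models

ATTRIBUTES = ["pool", "trampoline", "shed", "roof_damage", "solar_panels"]  # placeholders

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(ATTRIBUTES))  # one output per attribute

criterion = nn.BCEWithLogitsLoss()   # each attribute is an independent yes/no label

# One labeled example: an image tensor and a binary vector of its attributes.
image = torch.rand(1, 3, 224, 224)              # stand-in for an aerial photo
labels = torch.tensor([[1., 0., 1., 0., 0.]])   # has a pool and a shed

loss = criterion(model(image), labels)
loss.backward()
```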

For example, the software would be able to determine the condition of a property’s roof by comparing it with every previously uploaded image of a roof. If it resembles past roofs that had no damage and were not labeled as damaged, the software would conclude that the roof is in good condition.
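
One simple way to implement this "compare against previously seen roofs" idea is nearest-neighbor classification over image embeddings from a pretrained network. The sketch below is a generic approach, not Cape Analytics' method, and the reference images and labels are placeholders.

```python
# Sketch: classify roof condition by comparing an aerial crop's embedding
# to a labeled reference set. Reference data is hypothetical.
import torch
from torchvision import models, transforms
from PIL import Image
from sklearn.neighbors import KNeighborsClassifier

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # use the network as a feature extractor
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(path: str) -> torch.Tensor:
    with torch.no_grad():
        return backbone(preprocess(Image.open(path).convert("RGB")).unsqueeze(0)).squeeze(0)

# reference_paths / reference_labels would come from a labeled roof dataset
reference_paths = ["roof_good_01.jpg", "roof_damaged_01.jpg"]   # placeholders
reference_labels = ["good", "damaged"]

knn = KNeighborsClassifier(n_neighbors=1, metric="cosine")
knn.fit(torch.stack([embed(p) for p in reference_paths]).numpy(), reference_labels)

condition = knn.predict(embed("new_property_roof.jpg").numpy().reshape(1, -1))[0]
```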

A client insurer could then ask Cape Analytics to analyze a new property for them. The algorithm behind the software would then find satellite images of the property and itemize each attribute. It would then assess the condition of each attribute, such as any weather damage to a deck or porch or damage to a shed. The software would then present these images and information to the client via a user dashboard so that the client can make an informed decision about underwriting the property without scheduling an in-person inspection.

Below is a short video from Cape Analytics explaining how their software may help insurance companies quickly access property details:

Cape Analytics does not have any case studies accessible from their website; however, there are some case studies produced by Oxbow Partners showing success with their software. These case studies are brief and do not go in depth about each client’s situation, so we caution readers to consider this when reading their claims. Cape Analytics also does not list any major companies as clients, but they have raised $31 million in venture funding and are backed by CSAA Insurance Group, Montage Ventures, and XL Innovate.

Suat Gedikli is CTO at Cape Analytics. He holds a PhD in Computer Science, Image Processing, and Probability State Estimation from the Technical University of Munich. Previously, Gedikli served as Chief Architect at NavVis GmbH.

Ride Recording for Insurance Adjustment

Nexar

Nexar is a Tel Aviv-based startup with 88 employees. They offer namesake software they claim can help insurers automate claims processing for auto insurance with machine vision.

Nexar’s software is built to work with the company’s dashcams. These dashcams can record a driver’s car ride in the form of a time-lapse. If anything out of the ordinary happens, such as a hard brake or an accident, the software will produce a 40-second video of the incident, separate from the time-lapse format used for the rest of the ride.

Users can also select a "create an incident" option in Nexar’s mobile app to have the software produce the 40-second video on demand. Whether the software records automatically or at the user’s request, the video will consist of the 20 seconds prior to the incident or request and the 20 seconds after it.
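
The "20 seconds before, 20 seconds after" behavior described above is commonly implemented with a rolling buffer of recent frames. Below is a simplified sketch of that pattern; the frame rate, trigger logic, and output step are assumptions rather than Nexar's implementation.

```python
# Sketch: keep a rolling window of recent frames so an incident clip can
# include footage from before the trigger. Parameters are illustrative.
from collections import deque

FPS = 30
PRE_SECONDS = 20
POST_SECONDS = 20

class IncidentRecorder:
    def __init__(self):
        # Holds only the most recent PRE_SECONDS worth of frames.
        self.buffer = deque(maxlen=FPS * PRE_SECONDS)
        self.post_frames_left = 0
        self.clip = None

    def add_frame(self, frame):
        if self.post_frames_left > 0:
            self.clip.append(frame)
            self.post_frames_left -= 1
            if self.post_frames_left == 0:
                self.save_clip(self.clip)      # the 40-second incident video
                self.clip = None
        self.buffer.append(frame)

    def trigger_incident(self):
        # Called on a detected collision, hard brake, or a manual request.
        self.clip = list(self.buffer)           # the 20 seconds before the trigger
        self.post_frames_left = FPS * POST_SECONDS

    def save_clip(self, frames):
        print(f"saving incident clip with {len(frames)} frames")
```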

Nexar also claims their software can create detailed collision reports within the mobile app and provide drivers with advanced driver assistance systems (ADAS) alerts in real time. These alerts come from Nexar’s network of Nexar-enabled vehicles that share road safety updates with each other as they drive.

It’s likely that the machine learning algorithm at the core of Nexar’s software was trained on thousands of videos of recorded car rides. These videos would need to be labeled with the car’s speed at each point in time, as well as when in the video any accidents or unusual events happen. The videos would have to be recorded with both Nexar’s proprietary dashcams and smartphones in order to prepare the algorithm to accept videos from both. Then, a Nexar data scientist would need to expose the machine learning model to all of this labeled video data to train the algorithm to detect when something out of the ordinary happens on the road, such as an accident or a hard brake to prevent an accident.

A user could then drive while using their smartphone or a Nexar dashcam to run Nexar’s software. The software would then be able to detect unusual occurrences like road obstructions, hard brakes, or accidents, and report on them if they result in a collision. Collision reports include images and video of the incident, the location of the collision, analyses of the force and speed of the collision, weather conditions, and driver information. The videos would be time-stamped at the moment the incident occurs.
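
As a simplified illustration of how an event like a hard brake could be flagged from ride telemetry, the sketch below computes deceleration from a series of speed samples and flags moments where it exceeds a threshold. The sampling rate and threshold are illustrative assumptions, not Nexar's parameters.

```python
# Sketch: flag hard-brake events from speed samples (m/s) taken at a fixed rate.
# Threshold and sample rate are illustrative placeholders.
SAMPLE_INTERVAL_S = 1.0          # one speed sample per second (real telemetry would be denser)
HARD_BRAKE_DECEL_MS2 = 4.0       # flag decelerations stronger than roughly 0.4 g

def detect_hard_brakes(speeds_ms: list[float]) -> list[dict]:
    events = []
    for i in range(1, len(speeds_ms)):
        decel = (speeds_ms[i - 1] - speeds_ms[i]) / SAMPLE_INTERVAL_S
        if decel > HARD_BRAKE_DECEL_MS2:
            events.append({
                "time_s": round(i * SAMPLE_INTERVAL_S, 1),
                "deceleration_ms2": round(decel, 2),
                "speed_before_ms": speeds_ms[i - 1],
            })
    return events

# Example: steady driving followed by a sudden stop.
print(detect_hard_brakes([15.0, 15.0, 14.5, 8.0, 2.0, 0.0, 0.0]))
```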

Nexar claims all of these images, videos, and reports about the statistics of the incident are created and sent to an insurance agent within minutes. The chief value they can provide their clients is their software’s ability to instrument nearly every aspect of a car ride and transfer that visual and numerical data quickly.

Nexar records rides in the form of a time-lapse unless something unexpected happens. Watch the 3-minute video below for an example of one of these time-lapses:

Nexar does not show any of their clients’ success with their software in the form of a case study. They also do not list any major companies as past clients, but they have raised $44.5 million and are backed by Aleph, Ibex Ventures, Nationwide Ventures, and True Ventures.

Eran Shir is Co-founder and CEO of Nexar. He holds a PhD in Electrical Engineering and Computer Science. Previously, Shir served as Senior Director and Head of the Creative Innovation Center at Yahoo.

 

Header Image Credit: NearSay