Machine Vision in Finance – Current Applications and Trends

Raghav Bharadwaj

Raghav serves as an Analyst at Emerj, covering AI trends across major industries and conducting qualitative and quantitative research. He previously worked for Frost & Sullivan and Infiniti Research.

According to the Automated Imaging Association (AIA), machine vision is a combination of hardware, such as cameras and image sensors, and image processing software that can help automate applications like inspection and analysis by allowing machines (such as robots) to ‘see’ their surroundings.

Machine vision is starting to be applied in finance, from using satellite imagery to gauge high-level economic trends (such as identifying a retail firm’s actual traffic levels) to reading human emotion.

For example, looking at satellite images of containers in a shipping port and analyzing their movement in and out of the port might help investors gain insights into import and export trends. Similarly, in-store cameras in a retail store may be capable of gauging customer satisfaction levels based on human expressions (classified, for instance, as happy or sad) or non-verbal cues captured through machine vision.

We aim to highlight the current applications of machine vision in the financial domain. Through our research, we classify these into the following broad segments:

  • Analyzing Property Attributes for Insurance
  • Satellite Image Analysis for Investment Strategies
  • Automating Document Information Extraction

For more information on how AI solutions can help financial institutions and banks improve business operations, download the Executive Brief for our AI in Banking Vendor Scorecard and Capability Map report.

We look at case studies undertaken by AI vendors for finance applications starting off with Cape Analytics below:

Analyzing Property Attributes for Insurance

Cape Analytics

Cape Analytics was founded in 2014, with headquarters in Mountain View, California, and Munich, Germany. The 38-employee company claims to use AI to analyze satellite imagery, allowing insurers to view valuable property attributes at the time of underwriting. The company claims to use image recognition and machine learning to identify physical property features such as roof condition or building footprint.

Cape Analytics offers a database of property information that can be accessed by their clients in the insurance space to:

  • Validate property features such as the current state of the house exteriors and paint jobs. When an insurance agent needs to provide a quotation, being able to immediately view property features data might drastically improve quote speed and accuracy since the traditional method is to schedule physical inspections.
  • Help re-insurers monitor client portfolios and test the accuracy of the data these clients provide (for example, the company claims it can identify whether a property has the same number of gable roofs as reported).

Insurance companies might use the Cape Analytics software to access specific property details, like roof or building size and condition, through the software’s user dashboard. Insurance agents might be able to quickly verify details about properties to see if they are giving clients an optimal quote, according to Cape.

Some readers may notice that the capabilities Cape Analytics advertises take a different approach to some common services in the insurance space. This has a lot to do with how AI startups develop their products and where they get their inspiration. We spoke with Lee Smallwood, COO of Markets and Securities, North America at Citi about this on our podcast, AI in Banking.  When asked about how AI startups tend to create their offerings, Smallwood said:

Typically the model is a startup will identify a very specific vertical, and kind of unbundle the offering that a large financial institution has and work towards that. And so … any sort of high dimensional, multi-barrier problem I think is one where there is a lot of opportunity for banks to adopt more AI. That might be things like looking at alternatives to credit scores when issuing loans [or making] trading decisions based on a variety of factors. I think the applications are myriad but what’s really important is what’s driving those.

When using the Cape Analytics dashboard, an agent would enter the property address, or the latitude and longitude coordinates for the property, in a search bar. The software then displays a report of the property’s features, including satellite images that can be used to gauge the veracity of insurance claims or to suggest appropriate quotes.
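For readers who want a concrete picture of how such a lookup might work programmatically, the sketch below shows a hypothetical address- or coordinate-based property-attribute query. The endpoint, parameter names, and response fields are illustrative assumptions, not Cape Analytics’ actual API.

```python
# Hypothetical sketch of an address-based property-attribute lookup.
# The endpoint, parameter names, and response fields are illustrative
# assumptions, not Cape Analytics' actual API.
import requests

API_BASE = "https://api.example-property-data.com/v1"  # placeholder host
API_KEY = "YOUR_API_KEY"                                # placeholder credential


def lookup_property(address=None, lat=None, lon=None):
    """Fetch property attributes by street address or lat/long coordinates."""
    if address:
        params = {"address": address}
    elif lat is not None and lon is not None:
        params = {"lat": lat, "lon": lon}
    else:
        raise ValueError("Provide an address or lat/lon coordinates")

    resp = requests.get(
        f"{API_BASE}/properties",
        params=params,
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    resp.raise_for_status()
    # e.g. {"roof_condition": "good", "building_footprint_sqft": 2150, ...}
    return resp.json()


if __name__ == "__main__":
    report = lookup_property(address="123 Main St, Mountain View, CA")
    print(report.get("roof_condition"), report.get("building_footprint_sqft"))
```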

Below is a 3-minute video from Cape Analytics which explains how the software could help insurance firms gain quick access to property details:

We were unable to find any detailed case studies from Cape Analytics. However, there seems to be some evidence that the company has worked on projects with insurance firms like Security First Insurance, XL Catlin, and Nephila.

Cape Analytics’ Work with Current Insurance Carriers

According to Oxbow Partners, Cape Analytics worked alongside a leading US regional carrier to improve efficiency and customer experience and to reduce the costs of its traditional, human-based inspection process.

Cape Analytics claims it provided the insurer with access to data on its clients’ properties, such as roof geometry and roof condition, to help the company analyze which client properties might need an inspection.

Cape also claims that the insurer reduced inspection spending by over 50 percent and was able to eliminate customer involvement from the loop for physical inspections, while still ensuring that underwriting was accurate.

In another project, Cape claims to have worked with XL Catlin’s reinsurance business to assist with improving reinsurance purchasing and pricing decisions, although further details on the project were unavailable at the time of writing.

Cape Analytics currently provides services to insurance firms dealing with individual or personal houses and plans to expand its available property features to commercial insurance and mortgage inspections.

No further details were given by Oxbow about Cape’s work with the above-listed companies.

Suat Gedikli, CTO and founder of Cape Analytics, earned a PhD in computer science, image processing, and probabilistic state estimation from the Technical University of Munich. He also served as a research engineer at Willow Garage (which spawned several spin-off companies, including Industrial Perception and hiDOF Inc., later acquired by Google).

Satellite Image Analysis for Investment Strategies

Orbital Insight

Orbital Insight, founded in 2013 in Palo Alto, Calif., claims to be using AI to analyze geospatial imagery from satellites, UAVs, and airplanes to coax out socio-economic trends at global and regional scales.

Orbital Insight claims that it can help clients such as hedge funds or Fortune 500 companies make use of the multitude of geospatial data sources using image processing, machine learning algorithms, and cloud computing to generate tradable insights.

An investment firm, for example, might use the Orbital Insight platform to assess the number of containers moving through the port of Rotterdam, the rate of construction for a project in China, or the number of cars in Walmart parking lots using images captured at spatial resolutions from 30 meters to 20 centimeters. It can also analyze images on time scales ranging from multi-year to hourly. Orbital Insight claims that it looks at historic imagery and known economic trends to train its image analysis models and then forecasts results for the current period using satellite imagery.
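As a rough illustration of the approach the company describes, the sketch below fits a simple linear model relating historical parking-lot car counts (the kind of figure an image-analysis model might produce) to reported quarterly sales, then estimates the current quarter from fresh counts. All numbers are invented for illustration, and a real pipeline would also need the object-detection step that produces counts from raw imagery.

```python
# Minimal sketch of the "nowcasting" idea described above: fit a simple model
# relating historical parking-lot car counts (derived from imagery) to reported
# quarterly sales, then estimate the current quarter from fresh counts.
# All figures below are made up for illustration.
import numpy as np

# Historical quarters: average cars counted per store vs. reported sales ($M)
car_counts = np.array([118, 124, 131, 127, 140, 146, 152, 149])
sales_musd = np.array([410, 428, 455, 441, 488, 507, 531, 520])

# Ordinary least-squares fit: sales is approximately slope * cars + intercept
slope, intercept = np.polyfit(car_counts, sales_musd, deg=1)

# Current quarter: counts are available from imagery before sales are reported
current_count = 158
estimated_sales = slope * current_count + intercept
print(f"Estimated sales for current quarter: ${estimated_sales:.0f}M")
```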

While we could not find specifics on how the application interface looks to users, from our research we believe this platform can be integrated into financial institutions’ existing content management systems. Investment firm employees could then use an insights tab in their CMS to see Orbital’s generated insights and the satellite imagery being analyzed.

This 3-minute video below explains what led to the development of Orbital Insight’s technology:

According to their website, Orbital Insight might help a client in the financial services space with:

  • Investment Research – Equities, Commodities, Derivatives and Physical Assets
  • Multi-Asset Class Trading – Consumer Demand, Production and Distribution, Crop Health and Yield Estimates
  • Real Estate Investment Trusts (REITs) – Asset Utilization and Monitoring, Traffic Density
  • Investment Banking and Private Equity – Mergers and Acquisitions (M&A), Consolidations or Roll-ups, Divestitures, Carve-outs, and Due Diligence

The company claims they can offer these services by:

  • Monitoring and tracking global oil inventories for investors and stakeholders in the oil and gas domain
  • Identifying the level of activity in steel plants across a particular region or country
  • Quantifying traffic at a retail store as a proxy for sales or revenue for each day or week
  • Measuring and predicting the yield of certain crops in farms to gauge the profits for one crop growing cycle

Although we could find evidence of Orbital Insight being used in various industry sectors, we could not find details about any specific projects that the company undertook for stakeholders in the financial domain. Orbital Insight has, however, entered into a partnership with Meyers Research, a provider of home construction data. The partnership involves Meyers integrating Orbital Insight’s geospatial analytics into its housing market data app, Zonda.

Dave Story, Chief Development Officer at Orbital Insight, earned BS and MS degrees in electronics and computer science from MIT and previously served as CTO of Lucasfilm and VP of Strategy at Tableau Software.

The company’s Chief Software Architect, Javier Barreiro, has a degree in Operations Research from the University of Texas at Austin and previous experience as a Computer Scientist at NASA Ames Research Center.

Automating Document Information Extraction

HyperScience

HyperScience, founded in 2014 in New York, has around 61 employees. The company offers machine vision products under the brand name HyperEXTRACT, which it claims can help finance and insurance firms automate document data extraction. This can help businesses save the time and costs of assessing large numbers of incoming paper documents or forms, a task traditionally done by human workers.

  • HyperScience first signs a contract with financial firms allowing them to automate their data collection processes. In many cases, before adding this service, mailrooms of financial institutions or insurance firms might receive large volumes of documents that need organizing, which could include handwritten loan applications or claims forms.
  • HyperScience claims that their software can help businesses to digitize these incoming paper documents by using machine vision and then segregate them for review by human employees. Traditionally this process was done through manual data entry or optical character recognition technology, both of which are tedious and error-prone.
  • HyperScience works with clients to integrate its image recognition system into the existing mailroom processes at financial institutions, ensuring that the system continues to function even after the initial integration period.
  • When a document is fed into the system by a financial institution employee, the company claims that HyperEXTRACT can automatically identify and understand each type of incoming document and sort it to be sent to the appropriate processing team.
  • Data entry teams, which originally made sense of these documents, could potentially reduce manual organization work, according to the company. They note that these teams could then adjust their roles to involve reviewing and ensuring the accuracy of the fields entered by the machine learning software.

While we could not find a demonstration video or high-resolution product shots, we can tell from the company’s materials that HyperScience’s software analyzes typed documents or handwritten words. It then converts this data into digital text documents, which it categorizes and sends to the proper company staff.
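To make the workflow described above more concrete, here is a minimal, illustrative sketch of a classify-extract-route pipeline. It is not HyperScience’s actual system or API: the OCR step is stubbed out, and the document types, routing table, and confidence threshold are assumptions.

```python
# Illustrative sketch of a classify-extract-route document pipeline.
# Not HyperScience's actual API; the OCR step is stubbed and the document
# types, routing table, and confidence threshold are assumptions.
from dataclasses import dataclass

ROUTING = {
    "loan_application": "lending-ops",
    "claims_form": "claims-processing",
    "unknown": "manual-review",
}
CONFIDENCE_THRESHOLD = 0.85  # fields below this go to human review


@dataclass
class ExtractedField:
    name: str
    value: str
    confidence: float


def ocr_text(image_bytes: bytes) -> str:
    """Stub for the machine-vision step that turns a scanned page into text."""
    return "LOAN APPLICATION  Name: Jane Doe  Amount: 25000"


def classify(text: str) -> str:
    """Very simple keyword-based document classifier."""
    text = text.lower()
    if "loan application" in text:
        return "loan_application"
    if "claim" in text:
        return "claims_form"
    return "unknown"


def process_document(image_bytes: bytes) -> dict:
    text = ocr_text(image_bytes)
    doc_type = classify(text)
    # In a real system these fields would come from the extraction model.
    fields = [
        ExtractedField("name", "Jane Doe", 0.97),
        ExtractedField("amount", "25000", 0.78),
    ]
    needs_review = [f.name for f in fields if f.confidence < CONFIDENCE_THRESHOLD]
    return {
        "type": doc_type,
        "route_to": ROUTING[doc_type],
        "fields": fields,
        "review": needs_review,
    }


print(process_document(b"...scanned page bytes..."))
```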

In the video below, beginning at 3:33, Anatola He, VP of Business Development at HyperScience, demonstrates how the software can ‘read’ handwritten text with more accuracy than traditional optical character recognition technology:

HyperScience’s Work with Current Insurance Carriers

HyperScience claims to have worked with insurance firm QBE, which is listed on the Australian Securities Exchange, to streamline its claims processing by improving accuracy and lowering costs. QBE had the following challenges with its existing claims processing:

  • The data extraction from the claims was highly manual and time-consuming.
  • The slow progress meant that extracting full data-sets from the forms was highly expensive and rarely completed. (In most cases only the important data points were extracted).
  • The extracted data was not organized and ‘clean,’ making it very difficult to gain insights from data analytics.

We could find no quantifiable results for HyperScience’s collaboration with QBE in terms of the monetary or operational benefits achieved after the integration. HyperScience claims that QBE has entered into a multi-year agreement with HyperScience to roll out its solution across QBE offices globally.

HyperScience claims that its software was fully implemented at QBE within seven weeks of signing the agreement. According to HyperScience, the integration required only minor configuration changes to connect the system with QBE’s content management system.

HyperScience claims that the integration helped QBE collect full data sets from customer documents at no increased cost (aside from paying for the service) as compared to when the task was being done by human workers, although we could not independently verify this claim.  HyperScience also claims that QBE was able to improve insurer-to-customer response times after the integration of their system in QBE’s mailroom.

Peter Brodsky, CEO of HyperScience, earned a bachelor’s degree in Computer Science from Cornell. He was the Director of Engineering at SoundCloud before moving to HyperScience. We could find no other instances of HyperScience having worked on projects with marquee companies.

Captricity

Captricity was founded in 2011 in Oakland, CA and has around 58 employees. The company offers an AI solution that it claims can help businesses extract data from customer documents, including handwritten documents, much faster than human workers doing the same task. Captricity also claims to have worked on machine vision projects in the insurance, healthcare, and government sectors.

The company says that fully automating paper-to-digital strategies dramatically increases processing abilities for businesses. Businesses can potentially reduce operational costs and increase the throughput of data extraction from paper documents using Captricity’s machine vision solution.

Below is a 2-minute explainer video showing how Captricity can help businesses automate paper document digitization for integration with CRM tools like Salesforce:

Captricity works alongside such CRM systems, using robotic process automation (RPA) to help open up processes that were previously closed off to automation.
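As a hedged illustration of what handing extracted form fields off to a CRM could look like, the sketch below posts a record to Salesforce’s standard REST sobjects endpoint. The instance URL, API version, access token, and field values are placeholders, and the extracted lead itself is invented; this is not Captricity’s documented integration.

```python
# Hedged sketch of pushing an extracted form record into a CRM (here, a
# standard Salesforce REST "sobjects" create call). Instance URL, API version,
# token, and field values are placeholders; the record is invented.
import requests

INSTANCE_URL = "https://yourInstance.my.salesforce.com"  # placeholder
ACCESS_TOKEN = "YOUR_OAUTH_ACCESS_TOKEN"                 # obtained via OAuth elsewhere
API_VERSION = "v52.0"

# Fields as they might come out of the digitization step for one reply card
extracted_lead = {
    "FirstName": "Jane",
    "LastName": "Doe",
    "Company": "N/A",  # Salesforce Lead objects require Company and LastName
    "Phone": "555-0100",
    "LeadSource": "Business Reply Card",
}

resp = requests.post(
    f"{INSTANCE_URL}/services/data/{API_VERSION}/sobjects/Lead",
    json=extracted_lead,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
print("Created lead:", resp.json().get("id"))
```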

American insurance firm New York Life Direct wanted to process business reply cards that were part of its lead generation campaign. However, it was having a hard time finding software that could capture handwriting accurately, and simply hiring a team of human data entry specialists to do the task wasn’t scalable or cost-effective.

Captricity claims it helped New York Life Direct launch an automated data digitization solution with the ability to sort between over 100 different versions of the firm’s insurance promotions. Captricity claims its solution could also integrate with New York Life’s existing legacy systems.

As a result, New York Life Direct reportedly improved accuracy to 99.5 percent (versus less than 94 percent with third-party card validation), reduced total operations cost by 50 percent, and decreased processing time from days to hours. It was unclear how long the period of integration lasted or how soon after the integration these results were achieved.

Near-Term Trends of Machine Vision in Finance

As part of our research, we took note of trends and patterns that businesses looking to use machine vision might want to be aware of. We list these trends below and attempt to condense the need-to-know facts for business leaders in the finance domain.

What Businesses Need to Know:

  • One of the most common limiting factors for business process automation in finance seems to be the continued use of paper-based forms and documents.
    • As seen with companies like HyperScience and Captricity, AI can today help businesses overcome these paper-trail limitations. Image recognition can automate the process of data collection from documents and forms at a speed and scale impossible for humans.
  • Although the accuracy of these systems gets better over time, in most cases there is a three- to four-week training period before any tangible results can be expected.
    • HyperScience claims that its integration with QBE took around seven weeks. Business leaders in finance may need to understand that machine vision integration does not work like a ‘black box’ where a business can buy software and start using it immediately.
    • The integration requires input from team members in different departments who deal with the forms, for context on sorting and identifying the correct department for each incoming form.
  • Satellite imagery can give investors insights into the actual boots-on-the-ground work happening in new projects.
    • For example, companies like Orbital Insight or Cape Analytics might be able to provide financial institutions and investors with insights like the state of construction for a project or the customer count for a particular store based on the number of vehicles in its parking lot.

 

Header image credit: TechCrunch