
Automating Vehicle Damage Claims for Allstate


About Allstate

  • $44.79B annual revenue (2020)
  • Fourth-largest auto insurance provider in the U.S., with a 9% market share
  • Global corporation with companies based in four countries (US, Canada, UK, India)
  • 45,000+ employees

Problem: Manual, time-consuming car damage claims processing

The company contacted super.AI to help scale the processing of its car damage claims. Allstate was looking for a reliable partner with high-quality video processing capabilities. At super.AI, we work with several customers in the insurance industry. For them, using AI to automate cost estimates in auto claims for damaged vehicles significantly improves the customer experience by reducing cycle times. It also drives efficiency and cost reductions for their business.

Solution: Scaling claims processing with super.AI

To help the customer achieve their automation objectives, we adapted our image segmentation data program to identify damaged parts of vehicles. This data program is designed to identify and process, at the level of individual pixels, each instance of predetermined objects within an image. In a street scene, for example, each pedestrian is a separate instance of the `pedestrian` class, and each vehicle, lamppost, and so on is likewise a single instance of its own class.

Adapting the image segmentation data program to the task of vehicular damage evaluation illustrates the flexibility of the data program paradigm. Simply by setting the classes to `Window` and `Body` and automatically creating the subclasses `Damaged` and `Not damaged` for both, we could create a project suited to the customer's needs.
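To make this concrete, a configuration of this kind could be sketched roughly as follows; the structure and field names are hypothetical and do not represent super.AI's actual configuration format.

```python
# Purely illustrative: a hypothetical class/subclass configuration for the
# vehicle damage segmentation project (not super.AI's real config schema).
segmentation_config = {
    "annotation_type": "instance_segmentation",   # pixel-level masks per instance
    "classes": ["Window", "Body"],
    # Each top-level class automatically receives the same condition subclasses.
    "subclasses": {cls: ["Damaged", "Not damaged"] for cls in ["Window", "Body"]},
}

print(segmentation_config["subclasses"]["Body"])  # ['Damaged', 'Not damaged']
```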

Once configured, the data program is ready to do its part in breaking down the customer’s task into simplified chunks.

The stages in breaking up and reassembling the task are as follows:

  • AI pre-processing masks each car instance in the image
  • The data program routes the masked images to our data processing team
  • Data processors identify and segment the key areas of the vehicle
  • Once all vehicle parts have been processed, the segmented images are sent out again, this time to be assessed for damage
  • Final QA via the AI compiler’s combiner module

1. AI pre-processing masks each car instance

We use a Mask R-CNN model to process the input images and produce a collection of masked images, each mask covering everything but one car in the image. It’s the assembly-line nature of the system that allows for this automated step. Simplifying the task in this way and breaking it into multiple tasks per input image reduces the cognitive and physical load placed on the data processors.
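As a rough sketch of this step, the snippet below uses an off-the-shelf pretrained Mask R-CNN from torchvision to produce one masked image per detected car. The model choice, score threshold, and masking logic are assumptions for illustration, not the production pipeline.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

CAR_CLASS_ID = 3  # "car" in the COCO label map used by torchvision's detection models

# Off-the-shelf pretrained Mask R-CNN; the model run in production may differ.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def masked_images_per_car(path, score_threshold=0.7):
    """Return one tensor per detected car, with everything but that car blacked out."""
    image = to_tensor(Image.open(path).convert("RGB"))
    with torch.no_grad():
        pred = model([image])[0]

    masked = []
    for label, score, mask in zip(pred["labels"], pred["scores"], pred["masks"]):
        if label.item() == CAR_CLASS_ID and score.item() >= score_threshold:
            keep = (mask[0] > 0.5).float()   # binary mask for this single car instance
            masked.append(image * keep)      # keep the car, zero out the rest
    return masked
```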

2. Route the masked images

The AI compiler sends the masked images to the router, which sits on top of a database of processing sources. The router decides, according to the project’s quality, cost, and speed requirements, which processing sources are best placed to handle the task and sends it to them for processing. The router can choose multiple sources for a single task, whether people, machines, or a combination of both.
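The sketch below shows one simple way such routing logic could work, assuming the project expresses its requirements as minimum quality, maximum cost, and maximum turnaround time. The `ProcessingSource` structure and the cheapest-first selection rule are hypothetical, not the actual router implementation.

```python
from dataclasses import dataclass

@dataclass
class ProcessingSource:
    name: str
    kind: str        # "human" or "machine"
    quality: float   # expected accuracy, 0-1
    cost: float      # cost per task, USD
    latency: float   # expected turnaround, minutes

def route(task, sources, min_quality, max_cost, max_latency, n_sources=1):
    """Pick the cheapest sources that satisfy the project's constraints."""
    eligible = [
        s for s in sources
        if s.quality >= min_quality and s.cost <= max_cost and s.latency <= max_latency
    ]
    chosen = sorted(eligible, key=lambda s: s.cost)[:n_sources]
    return [(s.name, task) for s in chosen]

sources = [
    ProcessingSource("ml-segmenter", "machine", quality=0.92, cost=0.02, latency=1),
    ProcessingSource("crowd-team-a", "human", quality=0.95, cost=0.30, latency=45),
    ProcessingSource("expert-team", "human", quality=0.99, cost=1.20, latency=120),
]

# A single task may be routed to both a machine and a human source.
print(route("masked-image-001", sources, min_quality=0.9, max_cost=0.5, max_latency=60, n_sources=2))
```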



3. Data processors identify and segment key areas

The data program generates a data processing interface that our human data processors use to segment the image. The task instructions tell them how to segment the image, and a selection of tools makes the process as painless as possible. They can use the line, bucket fill, or super pixel tools to easily add or subtract parts of the mask.
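To give a sense of what a tool like bucket fill does conceptually, here is a minimal flood fill over a label mask. The production annotation interface is a graphical tool; this snippet is only an illustration of the underlying idea.

```python
import numpy as np
from collections import deque

def bucket_fill(mask, seed, new_value):
    """Fill the 4-connected region of `mask` containing `seed` with `new_value`."""
    mask = mask.copy()
    old_value = mask[seed]
    if old_value == new_value:
        return mask
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if 0 <= r < mask.shape[0] and 0 <= c < mask.shape[1] and mask[r, c] == old_value:
            mask[r, c] = new_value
            queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return mask

mask = np.zeros((5, 5), dtype=np.uint8)
mask[2, 2] = 1                          # an already-labelled pixel is left untouched
print(bucket_fill(mask, (0, 0), 2))     # fills the surrounding background region with class 2
```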

All data processors involved in the project complete a general image segmentation training program and undergo training on data specifically related to the project before processing production data.

4. Segmented images processed for damage

The router now performs its job again, sending the segmented and still pre-masked images out so that a new set of data processors can identify damaged and undamaged parts of the now segmented body and windows. Being able to chain together different data processing tasks through different data programs in this way is a core strength of the assembly-line structure.
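A minimal sketch of this chaining step is shown below: each segmented part from the previous pass becomes its own damage-assessment task, masked so that assessors only see the relevant region. The task structure and field names are hypothetical.

```python
import numpy as np

def damage_tasks(image, part_masks):
    """Build one damage-assessment task per segmented part.

    part_masks maps a part name ("Body", "Window") to a boolean mask of the image.
    """
    tasks = []
    for part, mask in part_masks.items():
        tasks.append({
            "part": part,
            "choices": ["Damaged", "Not damaged"],
            "masked_image": image * mask[..., None],   # hide everything but this part
        })
    return tasks

image = np.random.rand(64, 64, 3)
masks = {"Body": np.zeros((64, 64), bool), "Window": np.zeros((64, 64), bool)}
masks["Body"][10:40, 5:60] = True
masks["Window"][12:20, 20:45] = True
print([t["part"] for t in damage_tasks(image, masks)])   # ['Body', 'Window']
```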

The data processors involved in this task, as in the previous step, have qualified through both general and specialized data training programs.


5. Final QA

Once all the processing sources have completed a task, the combiner produces a trust-weighted combination of their outputs as the final result. This is one of the most complex parts of the system. The result is a high-quality segmented image with all body and window parts of every vehicle segmented, and the damaged and undamaged sections of each identified. This end product is achieved at the lowest cost by drawing on lower-cost labour where possible, an option available to us thanks to the assembly-line system we’ve created.
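One simple way to picture a trust-weighted combination is a per-pixel weighted vote over the sources’ masks, as in the sketch below. The weighting scheme is an assumption for illustration; the combiner in the AI compiler is more sophisticated.

```python
import numpy as np

def combine_masks(masks, trust_scores, threshold=0.5):
    """Combine boolean masks from several sources into a consensus mask.

    Each source's vote is weighted by its (normalised) trust score.
    """
    weights = np.asarray(trust_scores, dtype=float)
    weights /= weights.sum()
    stacked = np.stack([m.astype(float) for m in masks])     # (sources, H, W)
    weighted_vote = np.tensordot(weights, stacked, axes=1)   # per-pixel weighted average
    return weighted_vote >= threshold

a = np.zeros((4, 4), bool); a[1:3, 1:3] = True   # source with trust 0.9
b = np.zeros((4, 4), bool); b[0:3, 1:4] = True   # source with trust 0.5
c = np.zeros((4, 4), bool); c[1:3, 1:3] = True   # source with trust 0.7
print(combine_masks([a, b, c], [0.9, 0.5, 0.7]))
```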

Results

The company was able to review thousands of images and video footage of vehicle damage claims and cut the claims adjustment process from weeks to a few days.

The solution was integrated via API, allowing data to be uploaded programmatically and results to be fetched automatically.
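A hypothetical sketch of such an integration is shown below. The endpoint URLs, payload fields, and authentication header are placeholders, not super.AI’s documented API.

```python
import requests

API_BASE = "https://api.example.com/v1"          # placeholder base URL
HEADERS = {"Authorization": "Bearer <API_KEY>"}  # placeholder credential

def upload_claim_image(image_url):
    """Create a processing task for one claim image and return its ID."""
    resp = requests.post(f"{API_BASE}/tasks", json={"image_url": image_url}, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["task_id"]

def fetch_result(task_id):
    """Fetch the segmentation and damage output once the task is complete."""
    resp = requests.get(f"{API_BASE}/tasks/{task_id}", headers=HEADERS)
    resp.raise_for_status()
    return resp.json()
```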


Get a customized demo with your documents

Book a free consultation with our experts.
