Industry: Insurance
Location: Northbrook, USA
Company size: 60,000+

"We were looking for a scalable and cost-effective way to automate our car damage claims processing. We were impressed with super.AI's video labeling capabilities. Thanks to their platform, we were able to expedite the claims adjustment process from weeks to a few days."

— Data Scientist

Starting point: Manual, time consuming car damage claims processing

The company contacted super.AI for help with scalable processing of their car damage claims. They were looking for a reliable partner with high-quality video labeling capabilities. At super.AI, we work with several customers in the insurance industry. For them, automating cost estimates in auto claims for damaged vehicles using AI significantly improves customer experience by reducing cycle times. It also drives efficiency and cost reduction for their business.

Scaling claims processing with super.AI

To help the customer achieve their automation objectives, we adapted our image segmentation data program for the purpose of identifying damaged parts of vehicles. This data program is designed to identify and label (at the level of individual pixels) each instance of predetermined objects within an image. In a typical street scene, for example, each pedestrian is a separate instance of the `pedestrian` class, and each vehicle, lamppost, etc., is likewise a single instance of its own class.

Adapting the image segmentation data program for the task of vehicular damage evaluation is an illustration of how the data program paradigm allows for flexibility. Simply by setting the classes to `Window` and `Body` and automatically creating subclasses `Damaged` and `Not damaged` for both, we could create a project suited to the needs of the customer.
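As a rough sketch of what this configuration step amounts to, the snippet below expands two top-level classes into damage subclasses. The names and data shapes are illustrative only; they are not super.AI's actual API.

```python
# Hypothetical sketch of configuring a label schema for this project.
# Class and subclass names come from the case study; the structure is assumed.
classes = ["Window", "Body"]
subclasses = ["Damaged", "Not damaged"]

# Automatically create the damage subclasses for every top-level class.
label_schema = {
    cls: [f"{cls} - {sub}" for sub in subclasses]
    for cls in classes
}

print(label_schema)
# {'Window': ['Window - Damaged', 'Window - Not damaged'],
#  'Body': ['Body - Damaged', 'Body - Not damaged']}
```

The point is that no new data program had to be written: only the class list changed.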


Once configured, the data program is ready to do its part in breaking down the customer’s task into simplified chunks.

The stages in breaking up and reassembling the task are as follows:

  • AI generates pre-labels and masks for each instance of a car
  • Data program routes the masked images to our labelers
  • Labelers identify and segment the key areas of the vehicle
  • Once all vehicle parts have been labeled, the segmented images are sent for labeling again, this time for damage
  • Final QA via the AI compiler’s combiner module
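The stages above can be sketched as a chain of functions, one per step. Everything here (function names, data shapes, the pretend detection of two cars) is an illustrative assumption, not super.AI's implementation.

```python
# Illustrative sketch of the assembly-line flow described above.

def pre_label(image):
    """Stage 1: AI produces one masked copy per detected car."""
    return [f"{image}:car_{i}" for i in range(2)]  # pretend two cars found

def route(task):
    """Stage 2: pick a labeling source based on project requirements."""
    return {"task": task, "source": "human_labelers"}

def segment(routed):
    """Stage 3: labelers segment body and window regions."""
    return {**routed, "parts": ["Body", "Window"]}

def label_damage(segmented):
    """Stage 4: a second pass labels each part as damaged or not."""
    return {**segmented, "damage": {"Body": "Damaged", "Window": "Not damaged"}}

def combine(results):
    """Stage 5: the combiner merges per-car outputs into one final answer."""
    return {"image_results": results}

final = combine([label_damage(segment(route(m)))
                 for m in pre_label("claim_photo_001")])
print(final)
```

Each stage only needs to understand its own simplified chunk of the task, which is what makes the pipeline easy to reconfigure.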

1. AI pre-labels and masks instances of cars

We use a Mask R-CNN model to process the input images and produce a collection of masked images, each mask covering everything but one car in the image. It's the assembly-line nature of the system that allows for this automated step. By simplifying the task in this way and breaking it up into multiple tasks per image input, the cognitive and physical load placed on the labelers is reduced.
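Given per-instance masks from a model such as Mask R-CNN, the masking step itself is simple: blank out every pixel that doesn't belong to the car in question. The toy arrays below stand in for a real image and real model output.

```python
import numpy as np

# Sketch of the masking step with toy data. In practice the instance
# masks would come from a Mask R-CNN model's predictions.
image = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 "image"

instance_masks = [np.zeros((4, 4), dtype=bool), np.zeros((4, 4), dtype=bool)]
instance_masks[0][:2, :2] = True   # pretend car 1 occupies the top-left
instance_masks[1][2:, 2:] = True   # pretend car 2 occupies the bottom-right

# One masked copy per detected car: pixels outside the car are zeroed,
# so each downstream labeling task sees exactly one vehicle.
masked_images = [np.where(m, image, 0.0) for m in instance_masks]

print(len(masked_images))  # → 2
```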

2. Route the masked images

The AI compiler sends the masked images to the router, which sits over a database of labeling sources. The router decides, according to the project's quality, cost, and speed requirements, which labeling sources are best to handle the task and sends it there for labeling. The router can choose to use multiple sources for a single task, be that people, machines, or a combination of both.
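A minimal sketch of that routing decision is below. The source catalog, scores, and selection rule are all invented for illustration; the real router's logic is not described in this case study.

```python
# Hypothetical router: filter labeling sources by quality and cost
# constraints, then prefer the fastest of the eligible ones.
sources = [
    {"name": "ml_model",       "quality": 0.80, "cost": 0.01, "speed": 0.99},
    {"name": "crowd_labelers", "quality": 0.90, "cost": 0.10, "speed": 0.60},
    {"name": "expert_team",    "quality": 0.99, "cost": 1.00, "speed": 0.30},
]

def route(task, min_quality, max_cost):
    """Return eligible sources, fastest first; returning several lets a
    task go to a combination of people and machines."""
    eligible = [s for s in sources
                if s["quality"] >= min_quality and s["cost"] <= max_cost]
    return sorted(eligible, key=lambda s: -s["speed"])

picks = route("segment_car_42", min_quality=0.85, max_cost=0.50)
print([s["name"] for s in picks])  # → ['crowd_labelers']
```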

3. Labelers identify and segment key areas

The data program generates a labeling interface that our human labelers use to segment the image. The task instructions tell them how to segment the image, and a selection of tools make the process as painless as possible. They can use the line, bucket fill, or super pixel tools to easily add and subtract parts of the mask.
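Under the hood, the "add" and "subtract" operations behind tools like bucket fill or super pixel selection reduce to boolean set operations on the mask. The arrays below are a toy illustration of that idea, not the actual tool implementation.

```python
import numpy as np

# Toy sketch of mask editing as boolean set operations.
mask = np.zeros((4, 4), dtype=bool)

brush = np.zeros_like(mask)
brush[1:3, 1:3] = True     # region the labeler paints in
eraser = np.zeros_like(mask)
eraser[2, 2] = True        # region the labeler erases

mask |= brush    # add the painted region to the mask
mask &= ~eraser  # subtract the erased region

print(mask.sum())  # → 3 pixels selected
```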

All labelers involved in the project complete a general image segmentation training program and undergo training on data specifically related to the project before labeling production data.

4. Segmented images labeled for damage

The router now performs its job again, sending out the segmented—and again pre-masked—images so that a new set of labelers can label damaged and undamaged parts of the now segmented body and windows. Being able to chain together different labeling tasks through different data programs in this way is a core value of the assembly line structure.

The labelers involved in this task, as in the previous step, have qualified through both general and specialized data training programs.


5. Final QA

Once all the labeling sources have completed a task, the combiner produces a trust-weighted combination of their outputs as the final result. This is one of the most complex parts of the system. The result is a high-quality segmented image with all body and window parts of all vehicles segmented, and the damaged and undamaged sections of each labeled. This end product is achieved at the lowest cost by drawing on lower-cost labor, an option available to us thanks to the assembly line system we've created.
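One simple way to picture a trust-weighted combination is per-pixel weighted voting: each source votes a class for every pixel, and votes are weighted by that source's trust score. The trust values, label maps, and voting rule below are invented for illustration; the actual combiner is more sophisticated.

```python
import numpy as np

# Three sources each label a 2x3 "image" with class ids (0 or 1).
labels = np.array([
    [[0, 1, 1], [0, 0, 1]],
    [[0, 1, 0], [0, 1, 1]],
    [[1, 1, 1], [0, 0, 1]],
])
trust = np.array([0.9, 0.5, 0.7])  # one trust weight per source
n_classes = 2

# Accumulate trust-weighted votes per class, then take the winner per pixel.
votes = np.zeros((n_classes,) + labels.shape[1:])
for src_labels, w in zip(labels, trust):
    for c in range(n_classes):
        votes[c] += w * (src_labels == c)

combined = votes.argmax(axis=0)
print(combined)  # → [[0 1 1]
                 #    [0 0 1]]
```

Note how the low-trust middle source is outvoted wherever it disagrees with the other two.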

Results

The company was able to review thousands of images and video footage of claim damage in cars and expedite the claims adjustment process from weeks to a few days. 
