
Modernizing Optical Character Recognition (OCR) with Artificial Intelligence

Christopher Marshall
Data science technical writer
SUMMARY

With constant pressure to increase efficiencies and conserve resources, many organizations are turning to intelligent process automation to reduce labor costs, prevent human error, and free up human workers to focus on other high-level tasks. According to the Bureau of Labor Statistics, data entry and forms processing takes up to 60% of the average office worker’s day. 

Process automation techniques powered by machine learning and artificial intelligence can automate these highly repetitive business processes, effectively "taking the robot out of the human." As a result, organizations can save money, boost efficiency, and innovate in ways that provide a real competitive advantage.

What is optical character recognition (OCR)?

Optical character recognition (OCR) is the process of converting text, whether printed or handwritten, into a digital format. OCR is a form of computer vision, a field of study concerned with how machines see. Computer vision is one of the more established and obviously useful applications of machine learning (ML), as it enables computers to process images and video to extract meaning from them.

Optical mark recognition (OMR) is a related technology that involves reading and compiling information marked on surveys, tests, and other paper documents. Scantron, the familiar bubble-sheet test-scoring system, is a well-known application of optical mark recognition, though it is gradually being phased out in favor of more advanced solutions. Other related techniques include barcodes, which encode information in patterns of bars paired with alphanumeric characters. Pairing these technologies with AI and a series of simple business rules can result in automated workflows that eliminate paper-pushing in essential business processes.

How does OCR work?

OCR extracts text from scanned images by analyzing the patterns of light and dark that make up letters and numbers. OCR programs recognize text almost instantaneously as long as clear images or videos are used for processing. Blurred text or marks on the copy can create errors and reduce accuracy. In the right conditions, however, OCR software achieves close to 99% accuracy.
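As a minimal sketch of what this looks like in practice, the open-source Tesseract engine (accessed here through the pytesseract Python wrapper) can pull text out of a reasonably clean scan in a few lines. The file name below is a placeholder, and this is an illustration rather than a production setup.

```python
# Minimal OCR sketch using the Tesseract engine via the pytesseract wrapper.
# Requires the Tesseract binary plus `pip install pytesseract pillow`.
# "scanned_page.png" is a placeholder for any reasonably clean document scan.
from PIL import Image
import pytesseract

image = Image.open("scanned_page.png")      # load the scanned document
text = pytesseract.image_to_string(image)   # analyze light/dark patterns and return text
print(text)
```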

The proof of concept for OCR was invented in 1912

OCR has a longer history than its computer vision compatriots, dating back to 1912 and the invention of the optophone, a device that converted printed text into sound for interpretation by blind readers. Sadly, the optophone's slow reading speed and exhausting manual operating mechanism prevented it from achieving mainstream adoption.

It wasn’t until the 1970s that OCR staked its enduring claim with software that worked alongside scanners to convert physical documents into a digital format. Although OCR was originally intended for text-to-speech processing for the blind and visually impaired, the technology ultimately saw much wider adoption. Thanks to omni-font OCR, it became possible for almost any document to be scanned and converted into digital text.

Problems with traditional approaches to OCR

Until recently, the standard OCR technique remained largely unchanged and followed a few simple steps (sketched in code after the list):

1. Create a digital image of the text (e.g., a document scan).
2. Filter the image to enhance the contrast between light and dark areas.
3. Identify text through contour detection.
4. Compare each character or word against a library of examples to find the closest match.
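The pipeline above can be approximated with standard image-processing tools. The sketch below uses OpenCV, with placeholder file names and an arbitrary match threshold, to show roughly how thresholding, contour detection, and template matching fit together; it illustrates the idea rather than a real OCR engine.

```python
# Rough sketch of the traditional OCR pipeline using OpenCV.
# File names and the 0.8 match threshold are placeholders for illustration.
import cv2

# 1. Load a digital image of the text as grayscale.
img = cv2.imread("document_scan.png", cv2.IMREAD_GRAYSCALE)

# 2. Filter the image to sharpen the contrast between light and dark areas.
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# 3. Identify candidate characters through contour detection.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# 4. Compare each candidate region against a library of template characters.
template = cv2.imread("template_b.png", cv2.IMREAD_GRAYSCALE)  # e.g., a reference 'b'
for contour in contours:
    x, y, w, h = cv2.boundingRect(contour)
    region = binary[y:y + h, x:x + w]
    resized = cv2.resize(region, (template.shape[1], template.shape[0]))
    score = cv2.matchTemplate(resized, template, cv2.TM_CCOEFF_NORMED)[0][0]
    if score > 0.8:  # arbitrary closest-match threshold
        print(f"Possible 'b' found at ({x}, {y})")
```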

Alternatively, feature detection can be used to identify consistent traits of each character (e.g., a lowercase ‘b’ is a straight line with a semicircle on the right side of its base). This rule-based approach is effective at the task it was designed for, but it’s also quite restrictive. Feature detection OCR is limited by the number of character sets available to match against. The process is largely confined to the conversion of structured text, such as text formatted in paragraphs or columns.

The accuracy of these techniques drops off rapidly as artifacts from second-generation documents (e.g., scans of scans) and coffee mug stains are introduced. Furthermore, the rule-based approach has prevented OCR from being applied to a huge range of real-world applications, where text is not always neatly structured.

Real-world applications of optical character recognition

The real-world applications of OCR extend far beyond the familiar digitization of dusty documents, and the advent of machine learning means OCR has a long and lucrative road ahead of it. Anyone dealing with large quantities of unstructured text data, no matter what form, stands to benefit from OCR.

Frequently, companies that were established before digitization was the norm seek to convert vast document archives into digital formats. Google’s ill-fated endeavor to digitize every book on Earth is another prominent example of OCR in action. Here are some applications of OCR that are already in use today:

Banking and finance

  • Automated check deposits
  • Annual report collation, translation, and information extraction

Transport and logistics

  • License plate recognition
  • Mail and shipment sorting
  • Automated street sign reading

Healthcare

  • Searchable, exchangeable hospital records
  • Automated remittance filing
  • Automated processing of notes into HIPAA-compliant formats

Critical tasks across industries are already being reshaped by OCR, and this trend is likely to accelerate further in the coming decades. Corporations, academics, and individuals should be excited by the prospect of widespread optical character recognition and the wealth of data it produces. More accessible data brings with it the potential to identify previously undiscovered patterns and create fresh insights.

The future of OCR

Artificial intelligence has allowed OCR to progress more in the past few years than it did in the previous hundred. The critical advancement is that OCR is no longer limited to scans of documents and can now be applied to any image or video of text. As long as enough accurate training data is used, an OCR ML algorithm can be applied to any imaginable real-world scenario that requires the identification and conversion of text. The enormous breadth of possible specialized use cases means OCR has a bright future ahead of it, as illustrated by the recent use of machine learning to transcribe an ancient Japanese cursive script, Kuzushiji.
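As one illustration of this learning-based approach, a pretrained transformer OCR model such as Microsoft's TrOCR (distributed through the Hugging Face Transformers library) can read text from an ordinary photo rather than a clean scan. The image path below is a placeholder, and TrOCR expects an image cropped to a single line of text.

```python
# Sketch of learning-based OCR with a pretrained TrOCR model (Hugging Face Transformers).
# "street_sign.jpg" is a placeholder; TrOCR works best on an image of one line of text.
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-printed")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-printed")

image = Image.open("street_sign.jpg").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values  # preprocess image
generated_ids = model.generate(pixel_values)                              # generate text tokens
text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(text)
```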

The power of ML usually resides in its ability to provide generalized solutions: a single model that works across a wide variety of instances, so long as it's trained on enough accurate, relevant data. OCR is unusual in this sense, as the text that needs to be identified comes in such a breadth of possible forms, arrangements, and types that a more specialized approach often provides better results. More exciting problems mean more research, more experimentation, and more incentive for investment in creative solutions.

What’s more, with deep learning, producing a digitized version of text is only the beginning: extracting meaning from it is where the real fun begins, with a wealth of insight to be gained from identifying hidden patterns.

Additional optical character recognition AI resources:

Thanks to rising compute power and declining compute costs, artificial intelligence has never been more accessible. No-code AI solutions like those offered by super.AI make it possible for non-technical business users to take advantage of artificial intelligence. For more information about applied AI and how to get started with the technology, book a personalized demo with one of our experts.

