Oil & Gas Meter Data Processing with Artificial Intelligence and Computer Vision

Azati developed an AI-powered service for a Canadian oil & gas customer to automate the reading and processing of data from meters that measure produced oil & gas resources. The service used machine learning and computer vision technologies to extract valuable information from scanned graphs, barcodes, and handwritten notes.

  • 98% barcode/label processing accuracy
  • 4x faster data-ingestion throughput
  • 70% reduction in manual correction workload

All Technologies Used

Python
Kubernetes
Tesseract OCR
TensorFlow
Keras
PyTorch
OpenCV
Pylibdmtx
Docker

Motivation

The customer faced inefficiencies and human errors when reading meter data manually; the source material included scanned graphs, barcodes, and handwritten notes. Azati's goal was to automate data extraction, increase accuracy, and reduce dependence on manual intervention.

Main Challenges

Challenge 01
Round Disc Data Processing

The equipment printed data on round discs, which were scanned and sent to the system for reading and processing. Azati developed an algorithm that unfolds each round disc into a rectangular image so the coordinates of the curves can be traced and read accurately.
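
For illustration, a minimal unrolling sketch using OpenCV's polar remapping is shown below. The file names, output resolution, and the assumption that the disc is roughly centred in the scan are ours for this example; the production algorithm also detected the disc centre and radius automatically.

```python
# A minimal sketch of "unrolling" a circular chart scan into a rectangle.
# Assumes the disc is roughly centred; a real pipeline would detect the
# centre and radius (e.g. with circle detection) instead of guessing them.
import cv2

def unroll_disc(image_path: str):
    img = cv2.imread(image_path)
    h, w = img.shape[:2]
    center = (w / 2, h / 2)
    max_radius = min(w, h) / 2

    # Map the circular chart to polar coordinates: columns follow the radius,
    # rows follow the angle, so each recorded curve becomes a long trace that
    # is easy to follow with standard line-tracing code.
    polar = cv2.warpPolar(
        img,
        (int(max_radius), 1440),   # output size: radius bins x angle steps
        center,
        max_radius,
        cv2.INTER_LINEAR + cv2.WARP_POLAR_LINEAR,
    )
    # Rotate so time (angle) runs along the x-axis like an ordinary strip chart.
    return cv2.rotate(polar, cv2.ROTATE_90_COUNTERCLOCKWISE)

if __name__ == "__main__":
    # Hypothetical input and output files.
    cv2.imwrite("disc_unrolled.png", unroll_disc("disc_scan.png"))
```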

Challenge 02
Curve Highlighting on Graphs

The graphs contained multiple curves in different colors that sometimes blended into the background. Azati trained a neural network to select and highlight curves of each color accurately, even in the presence of visual interference, ensuring a reliable reading of every curve.
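
The case study does not disclose the network architecture, so as a rough baseline illustration of the same idea, the sketch below isolates a single pen colour with a classical HSV mask in OpenCV and reads off the curve's coordinates column by column. The colour bounds and file name are assumptions, not details of the trained model.

```python
# Rough illustration of curve extraction: isolate one pen colour with an HSV
# mask, clean up scanner noise, and read the curve's y-coordinate per column.
import cv2
import numpy as np

def trace_red_curve(graph_bgr: np.ndarray) -> list[tuple[int, int]]:
    hsv = cv2.cvtColor(graph_bgr, cv2.COLOR_BGR2HSV)

    # Red wraps around the hue axis, so combine two ranges (illustrative values).
    mask = cv2.inRange(hsv, (0, 80, 80), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 80, 80), (180, 255, 255))

    # Remove speckle noise left by the scanner.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

    # For each column, take the mean row of the masked pixels as the curve's y.
    points = []
    for x in range(mask.shape[1]):
        ys = np.flatnonzero(mask[:, x])
        if ys.size:
            points.append((x, int(ys.mean())))
    return points

if __name__ == "__main__":
    img = cv2.imread("graph_scan.png")   # hypothetical unrolled chart scan
    curve = trace_red_curve(img)
    print(f"traced {len(curve)} points")
```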

Challenge 03
Data Format Variability

The customer had multiple partners using different types of equipment, generating data in various formats. Azati created a neural network that identified the format of the incoming data, categorized it, and routed it to the correct data processor.
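
A minimal sketch of the routing idea follows, assuming a small TensorFlow/Keras image classifier has already been trained to tell the partners' formats apart. The model file, class names, preprocessing, and processor functions are placeholders, not the customer's actual configuration.

```python
# Classify the format of an incoming scan and hand it to the matching processor.
import numpy as np
import tensorflow as tf

CLASS_NAMES = ["circular_chart", "strip_chart", "barcode_label", "handwritten_form"]

model = tf.keras.models.load_model("format_classifier.h5")  # hypothetical model

# Placeholder processors; in a real pipeline these call the dedicated services.
def process_circular_chart(img): ...
def process_strip_chart(img): ...
def process_barcode_label(img): ...
def process_handwritten_form(img): ...

ROUTES = {
    "circular_chart": process_circular_chart,
    "strip_chart": process_strip_chart,
    "barcode_label": process_barcode_label,
    "handwritten_form": process_handwritten_form,
}

def route(image: np.ndarray):
    # Resize to the classifier's input and predict the format class
    # (assumes the model was trained on 224x224 inputs scaled to [0, 1]).
    x = tf.image.resize(image, (224, 224))[tf.newaxis, ...] / 255.0
    probs = model.predict(x, verbose=0)[0]
    fmt = CLASS_NAMES[int(np.argmax(probs))]
    # Dispatch the scan to the processor registered for that format.
    return ROUTES[fmt](image)
```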

Challenge 04
Handwriting Recognition

Handwritten data, such as dates and numbers, presented challenges due to variability in legibility. Azati used Google Tesseract along with a trained neural network to recognize handwritten data from multiple regions of the scanned images, overcoming issues caused by human factors like haste or poor handwriting.
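
A sketch of the Tesseract step is given below: crop the form regions where dates and readings are written, then OCR each crop with a digit-oriented configuration via the pytesseract wrapper. The field coordinates and file names are assumptions; the real system located the regions with a trained detector before handing crops to the recognizer.

```python
# OCR handwritten fields from fixed regions of a scanned form with Tesseract.
import cv2
import pytesseract

# (x, y, w, h) boxes for fields on a hypothetical form layout.
FIELDS = {
    "date": (120, 40, 300, 60),
    "reading": (120, 130, 300, 60),
}

def read_fields(scan_path: str) -> dict[str, str]:
    img = cv2.imread(scan_path, cv2.IMREAD_GRAYSCALE)
    # Light cleanup (Otsu binarization) helps Tesseract with faint pen strokes.
    img = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]

    results = {}
    for name, (x, y, w, h) in FIELDS.items():
        crop = img[y:y + h, x:x + w]
        text = pytesseract.image_to_string(
            crop,
            # Treat the crop as a single line and restrict to date/number characters.
            config="--psm 7 -c tessedit_char_whitelist=0123456789./-",
        )
        results[name] = text.strip()
    return results

if __name__ == "__main__":
    print(read_fields("handwritten_form.png"))   # hypothetical scan
```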


Our Approach

Prototype Development
The project began with the creation of a pilot prototype for reading curves on the graphs. After successfully achieving this, Azati expanded the system to recognize and process additional aspects of the input data, including barcodes and handwritten notes.
Coordination and Communication
Azati maintained continuous communication with the Canadian team for project management, prioritization, and delivery schedule coordination. This ensured that the project goals and timeline were aligned with the client’s expectations.
Iterative Development
The development process followed an iterative model, with data recognition services being built and tested in parallel according to the schedule agreed upon with the client. The team worked to refine the recognition accuracy with each iteration.


Solution

01

High-Accuracy Barcode Processing

The system detects and reads barcodes from scanned images with over 90% accuracy, ensuring that critical information such as equipment IDs and labels is reliably extracted for processing and reporting. A brief decoding sketch follows the capability list below.
Key capabilities:
  • Automated barcode extraction from scanned images
  • Recognition of multiple barcode types and formats
  • Integration with downstream processing pipelines
  • Error detection and correction for damaged or incomplete barcodes
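
The sketch below uses pylibdmtx from the project's stack to decode Data Matrix labels on a scanned image. The file name and preprocessing choices are illustrative, and other barcode symbologies would need a different decoder.

```python
# Decode Data Matrix barcodes from a scanned label with pylibdmtx.
import cv2
from pylibdmtx.pylibdmtx import decode

def read_datamatrix(scan_path: str) -> list[str]:
    img = cv2.imread(scan_path, cv2.IMREAD_GRAYSCALE)
    # Low-resolution scans often decode better after a modest upscale
    # and Otsu thresholding (illustrative preprocessing choices).
    img = cv2.resize(img, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
    img = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]

    # decode() returns one result per detected symbol; data arrives as bytes.
    return [result.data.decode("utf-8") for result in decode(img)]

if __name__ == "__main__":
    for code in read_datamatrix("meter_label.png"):   # hypothetical scan
        print(code)
```
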
02

Curve Recognition on Graphs

Curves on meter graphs are processed with over 80% accuracy. Neural networks identify each curve's coordinates and distinguish overlapping lines, allowing extraction of key performance metrics.
Key capabilities:
  • Identify and separate multiple curves on a single graph
  • Handle overlapping lines and noise in scanned images
  • Extract coordinates for quantitative analysis
  • Support various graph formats from multiple equipment sources
03

Handwriting Recognition

The solution recognizes handwritten numbers and dates from scanned images using Tesseract and custom neural networks. Accuracy ranges from 30% to over 70% depending on input quality, and the automation still significantly reduces manual data entry.
Key capabilities:
  • Detect handwritten regions within images
  • Transcribe dates, numbers, and notes
  • Handle variations in handwriting style and legibility
  • Support multi-region recognition for complex forms
04

Custom Neural Network Routing

The system identifies the input data format and automatically routes it to the appropriate processing pipeline, accommodating diverse equipment outputs from multiple partners.
Key capabilities:
  • Detect and classify data format automatically
  • Route data to the correct processing service
  • Maintain high recognition accuracy across formats
  • Adapt to new equipment and partners without system redesign

Business Value

Reduced Manual Effort: Automated reading of meter data decreased the need for human operators and reduced errors.

Improved Accuracy: High-accuracy recognition of curves, barcodes, and handwriting increased reliability of operational data.

Operational Efficiency: Faster data processing improved decision-making and resource tracking for oil & gas operations.

Scalable Automation: The solution accommodates multiple equipment types and data formats, ensuring scalability across partners.

Foundation for Further Digitalization: Handwriting recognition improvements and pipeline automation set the stage for fully digital workflows.
