“Uncertain” is not how an engineer determining the potential consequences of a pipeline threat should feel. Accurate analysis that supports confident integrity predictions requires high-quality data. Yet pipeline operators still invest in software that delays the cleanup of integrity data until the last minute.

By opting to own an on-premises integrity management (IM) solution, some companies are forced to wait until implementation is complete to clean their pipeline data. At OneBridge, we believe modern IM software should shrink data-use delays, not extend them.

Our Cognitive Integrity Management (CIM) is a SaaS solution that lowers costs and shortens the time needed to clean pipeline data. Let’s look at the details.

The cost of poor-quality integrity data

Information is the basis for integrity work—and the value of an integrity engineer’s decisions is directly proportional to the quality of the data they can access.

According to Gartner, “organizations believe poor data quality to be responsible for an average of $15 million per year in losses.” For pipeline operators, those losses are more than numbers on a ledger, because poor data also affects safety.

Inadequate data quality leads to:

  • Digging in the same place twice (unearthing a previous repair)
  • Missed opportunities to address threats such as interacting anomalies
  • Exaggeration of minor integrity issues and a failure to flag others

Factors affecting data cleanup

Several parameters influence the scope and timing of data cleanup:

Number of records – The more records there are, the longer cleanup takes. Pipeline integrity deals with an enormous amount of information—for example, a single in-line inspection (ILI) run can “produce thousands upon thousands of records for just a few miles of pipeline.” On top of ILI information, integrity work requires GIS, PODS, dig data, construction information and more.

Age of records – Inaccuracy increases with a record’s age. This can be due to multiple updates, inadequate record-keeping practices or the inability to verify accuracy.

Location of records – When data is “siloed” (that is, kept in isolated places), the complexity of cleaning increases. It is not easy to align records from a personal spreadsheet with a dataset saved to a shared drive.

Integrity software choice – When companies adopt on-premises software, there is a significant delay between entering data into the new system and checking its quality. Data cleaning doesn’t begin until the software is implemented and in use; only then does poor data quality become apparent. By contrast, with a machine learning-driven SaaS solution, cleanup happens up front.

Easing the burden of data cleanup

OneBridge’s Cognitive Integrity Management (CIM) employs advanced algorithms, machine learning and data science to clean integrity records in a fraction of the time it would take a human workforce.

With CIM, data cleanup and data usability are immediate. As illustrated in the sketch after this list, CIM’s automation capabilities simplify data entry and harmonization by:

  • Removing duplicates
  • Appending missing data
  • Reformatting data
  • Validating data and flagging bad records
  • Using templates to ingest data from different silos
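
To make these steps concrete, here is a minimal, hypothetical sketch of this kind of cleanup in Python with pandas, applied to a toy ILI dataset. The column names, vocabulary mapping and validation rule are assumptions made purely for illustration; they are not CIM’s actual schema or logic.

    # Hypothetical cleanup of a toy in-line inspection (ILI) dataset. Column names
    # and rules are illustrative only and do not reflect CIM's implementation.
    import pandas as pd

    records = pd.DataFrame({
        "odometer_ft": [1050.0, 1050.0, 2310.5, 4875.2, None],   # position along the line
        "depth_pct":   [12.0,   12.0,   37.5,   None,   55.0],   # wall loss, percent
        "anomaly":     ["ext corr", "ext corr", "DENT", "ext corr", "int corr"],
    })

    # 1. Remove duplicates (e.g., the same feature reported twice).
    records = records.drop_duplicates()

    # 2. Append missing data where a sensible default or lookup exists.
    records["depth_pct"] = records["depth_pct"].fillna(records["depth_pct"].median())

    # 3. Reformat inconsistent labels into a common vocabulary.
    records["anomaly"] = records["anomaly"].str.strip().str.lower().replace({
        "ext corr": "external_corrosion",
        "int corr": "internal_corrosion",
    })

    # 4. Validate and flag bad records rather than silently dropping them.
    records["flagged"] = records["odometer_ft"].isna() | (records["depth_pct"] > 100)

    print(records)

A production system would apply far richer, learned rules across many data sources; the sketch only shows the shape of the work CIM automates.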

And as CIM ingests data, its cloud-computing algorithm continues to learn: the more records it consumes, the more efficiently those records are cleaned and made ready for use, a topic we’ll discuss in another blog post.

Watch for our upcoming post examining the timing of data utilization in integrity management software.