Everyone in our business knows, or ought to know, about the pipeline maintenance crisis that puts billions of dollars, lives, property, and the reputation of the midstream oil & gas industry at risk, leading some in the public to call it a "ticking time bomb." Statistics indicate tens of thousands of miles of pipe operating decades beyond its predicted end of life, scattered so widely and buried so deep that simply locating it on a map can be a challenge.
If you have ever looked through 20 years of inline inspection tally sheets, you will understand why it takes a machine learning technique (e.g., a random forest or a Bayesian classifier) to ingest and normalize them into a database effectively. Attempted manually, it would be a monumental task, not to mention a source of endless errors. Trained on enough log data, however, a machine learning classifier turns this into exactly the kind of scenario where data science can drastically improve integrity management practices.
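To make the idea concrete, here is a minimal sketch of one of the Bayesian methods mentioned above: a tiny naive Bayes classifier, written with only the Python standard library, that sorts free-text tally-sheet anomaly descriptions into defect categories. The labels ("corrosion", "dent", "crack") and the sample descriptions are hypothetical illustrations, not real inspection data, and a production system would train on thousands of labeled records rather than a handful.

```python
# Minimal naive Bayes text classifier for tally-sheet lines.
# All training examples and labels below are hypothetical.
import math
from collections import Counter, defaultdict


def tokenize(line):
    """Lowercase and split a free-text description into tokens."""
    return line.lower().split()


class NaiveBayesClassifier:
    """Multinomial naive Bayes with Laplace (add-one) smoothing."""

    def __init__(self):
        self.class_counts = Counter()              # label -> number of examples
        self.token_counts = defaultdict(Counter)   # label -> token frequencies
        self.vocab = set()                         # all tokens seen in training

    def train(self, labeled_lines):
        for label, line in labeled_lines:
            self.class_counts[label] += 1
            for tok in tokenize(line):
                self.token_counts[label][tok] += 1
                self.vocab.add(tok)

    def classify(self, line):
        total = sum(self.class_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.class_counts:
            # Log prior for the class...
            score = math.log(self.class_counts[label] / total)
            # ...plus smoothed log likelihood of each token under the class.
            denom = sum(self.token_counts[label].values()) + len(self.vocab)
            for tok in tokenize(line):
                score += math.log((self.token_counts[label][tok] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label


# Hypothetical labeled tally-sheet entries: description -> defect class.
training = [
    ("corrosion", "external metal loss 23% wall thickness"),
    ("corrosion", "internal corrosion pitting cluster"),
    ("dent",      "dent with metal loss top of pipe"),
    ("dent",      "plain dent 2% depth near girth weld"),
    ("crack",     "seam weld crack-like indication"),
    ("crack",     "longitudinal crack field detected"),
]

clf = NaiveBayesClassifier()
clf.train(training)
print(clf.classify("metal loss external corrosion 15%"))  # → corrosion
```

In practice the same pattern scales up: each classified line becomes a normalized database row, and richer models (random forests over engineered features, or modern NLP pipelines) replace this toy likelihood calculation.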
Do you live within 200 yards of an oil or gas pipeline? More than 60% of Americans do, but no one, not public agencies, not commercial customers, and not even the energy companies that own the pipes, could tell you exactly where the defects in those pipes are. As that infrastructure ages far beyond its intended lifespan, the cost of maintaining and servicing pipelines poses a $68 billion headache for the industry and a ticking time bomb for the public.