
It seems that every time an article or presentation highlights some fancy new improvement to oil & gas pipeline integrity management related to inline inspection (ILI) and external data analysis, it comes bundled with professional services to operationalize it. This is not surprising: historically, the industry has been conditioned to compartmentalize everything on a project basis. It’s simply the way things have always been.

Outside of data analysis, this approach probably makes sense. Identifying integrity threats from ILI and external data, however, requires a new paradigm. We think it’s time to address this challenge with modern data science techniques that free up human capital, so engineers can focus on high-value engineering. We believe it’s the only way the industry will reach its goal of zero pipeline failures.

We subscribe to Wikipedia’s definition of data science, a “concept to unify statistics, data analysis and their related methods” in order to “understand and analyze actual phenomena” with data.[3] Once you begin to apply these data-driven scientific principles to the work being done within integrity management, the effect is disruptive to current paradigms and practices.

Data science is a broad field and can be applied to many facets of integrity management; the possibilities are limitless. That’s why, over the next couple of months, we are going to release a series of blog posts sharing our vision of how data science applies to specific areas of integrity management, how we’re deploying it today, and the immediate benefits the industry can leverage so organizations can identify threats they might otherwise miss.
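To make that a little more concrete, here is a minimal sketch, built entirely on synthetic data and hypothetical feature names, of the kind of model one might train to flag ILI anomalies for review. It does not represent our production approach; it simply illustrates the shape of the problem: tabular ILI and external-data features in, a prioritized set of potential threats out.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)

# Hypothetical ILI / external-data features for 5,000 anomalies:
# metal-loss depth (% wall), anomaly length (mm), distance to nearest
# river crossing (m), coating age (years), and soil resistivity (ohm-cm).
n = 5000
X = np.column_stack([
    rng.uniform(0, 80, n),       # metal-loss depth (% of wall thickness)
    rng.uniform(5, 300, n),      # anomaly length (mm)
    rng.uniform(0, 5000, n),     # distance to river crossing (m)
    rng.uniform(0, 50, n),       # coating age (years)
    rng.uniform(500, 20000, n),  # soil resistivity (ohm-cm)
])

# Synthetic "requires action" label: deeper, longer anomalies under older
# coating in low-resistivity soil are more likely to be flagged.
risk = 0.03 * X[:, 0] + 0.005 * X[:, 1] + 0.02 * X[:, 3] - 0.0001 * X[:, 4]
y = (risk + rng.normal(0, 0.5, n) > 2.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# A tree ensemble handles mixed-scale features and nonlinear interactions
# without heavy preprocessing, which is why it makes a reasonable baseline.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
```

In a real deployment, the features would come from aligned ILI runs, dig verifications, and GIS layers rather than random numbers, and far more attention would go to label quality, class imbalance, and validation against field results.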

Our mission is to use machine learning to predict pipeline failures, saving lives and protecting the environment. We’d love it if you joined us.