Opportunities and challenges for the digital oilfield

Oilfield Technology


Mark Claxton, Energy Sector Director at data analytics company Tessella, explains how data can be used to optimise the drilling process.

It will surprise no-one that data is transforming the way we find, drill for, and extract oil - improving efficiency and avoiding errors.

But how is this data explosion actually being used by the industry? Tessella work with a wide range of oil and gas companies, and we find data being employed for a variety of end goals - identifying where to drill, optimising extraction processes, spotting problems before they occur - but many of the techniques and challenges are similar across the board.

What’s in your data?

One example of data making big savings is adverse event detection during drilling. By capturing data from the drilling process we can understand what is happening in real time, so experts can make the best decisions. To be effective here, we need to turn terabytes of data into clear visualisations, such as 3D maps of lithology and casing/drill position, or indicators of problems, such as changing pressure or the increased energy required to raise a drill string. One such project we work on has saved the client US$200 million by helping to avoid stuck drill strings.
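As a minimal sketch of what such an indicator can look like, the Python snippet below flags hook-load readings that deviate sharply from the recent trend. It assumes a simple stream of readings, and the channel, window size and threshold are purely illustrative; a production system would fuse many channels at once.

```python
import statistics
from collections import deque

def hookload_alerts(readings, window=30, threshold=3.0):
    """Flag hook-load readings that deviate sharply from recent history.

    A rising hook load while pulling out of hole can indicate a string
    starting to stick. This uses a rolling z-score as a crude detector;
    real systems combine many channels (pressure, torque, RPM, depth).
    """
    history = deque(maxlen=window)
    for i, load in enumerate(readings):
        if len(history) == window:
            mean = statistics.fmean(history)
            stdev = statistics.stdev(history)
            if stdev > 0 and abs(load - mean) / stdev > threshold:
                yield i, load  # index and value of the anomalous reading
        history.append(load)

# Illustrative stream: steady load around 180 klbf, then a sudden jump.
loads = [180.0 + 0.5 * (i % 5) for i in range(60)] + [195.0]
for idx, value in hookload_alerts(loads):
    print(f"reading {idx}: hook load {value} klbf deviates from recent trend")
```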

This process is about presenting vast and diverse data in a usable form. We can take this further - adding additional layers of complexity - by analysing geological and drilling data to provide insights into improving future processes. Using a process called history matching, we can look at data taken from comparable past events to inform future decisions.

For example, if you are drilling lots of similar wells across a region with modest geological variation, you can compare the performance of each rig to identify the team that gets to target depth most efficiently at minimum cost. This data can be used to explore what the best performing team does, and thus spread best practice, reducing non-productive time, minimising environmental impact and optimising the use of expensive resources and consumables. Here we often use Bayesian statistics, in which multiple complex parameters are assigned probabilities to identify the most likely outcomes, and neural networks, computer programmes that mimic aspects of the brain.
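As a hedged illustration of the Bayesian side of this, the sketch below shrinks each team's raw success rate towards a prior, which keeps estimates sensible for teams that have drilled only a few wells. The teams, counts and the definition of a 'trouble-free well' are invented for illustration.

```python
def beta_posterior_mean(successes, trials, alpha=1.0, beta=1.0):
    """Posterior mean of a success rate under a Beta(alpha, beta) prior.

    Treats each well as a trial and 'reached target depth without a
    non-productive-time event' as a success. The prior keeps estimates
    sensible for teams with only a handful of wells behind them.
    """
    return (alpha + successes) / (alpha + beta + trials)

# Illustrative records: (team, trouble-free wells, wells drilled)
records = [("Rig A", 18, 20), ("Rig B", 4, 4), ("Rig C", 30, 40)]
for team, ok, total in records:
    est = beta_posterior_mean(ok, total)
    print(f"{team}: raw rate {ok / total:.2f}, "
          f"Bayesian estimate {est:.2f} over {total} wells")
```

Note how Rig B's perfect record over just four wells is pulled towards the prior: with so little data, a raw rate of 100% would be misleading.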

Data collected from seismic surveys and probe measurements has long been used to build up detailed models of sub-surface rock structure, oil quality, water levels and so on. Using ever-advancing statistical methods, we can then model the reservoir and work out how to get the oil out economically.

For example, we might inject water, gas or heat to provide the energy to drive out the oil, and surfactants to change the rock chemistry. All of this has to be incorporated into the geological model. Combining this information with the laws of flow and energy conservation, we can then predict the right amount of pressure, heat and chemicals to apply for optimal extraction.
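As a toy instance of those flow laws, the sketch below applies single-phase Darcy flow to estimate a production rate from a pressure difference. The permeability, viscosity and pressure figures are illustrative; real reservoir simulators solve multiphase flow in three dimensions.

```python
def darcy_flow_rate(permeability_m2, area_m2, viscosity_pa_s,
                    pressure_drop_pa, length_m):
    """Volumetric flow rate through porous rock via Darcy's law:
    Q = k * A * dP / (mu * L).

    The single-phase, linear form shown here is a teaching example;
    it shows how permeability, viscosity and pressure drop relate.
    """
    return (permeability_m2 * area_m2 * pressure_drop_pa
            / (viscosity_pa_s * length_m))

# Illustrative numbers: 100 mD sand, light oil, 5 MPa drawdown over 100 m.
MILLIDARCY = 9.869e-16  # m^2
q = darcy_flow_rate(
    permeability_m2=100 * MILLIDARCY,
    area_m2=50.0,
    viscosity_pa_s=2e-3,   # ~2 cP
    pressure_drop_pa=5e6,
    length_m=100.0,
)
print(f"Flow rate: {q:.4e} m^3/s ({q * 86400:.1f} m^3/day)")
```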

Getting it right

In these and other diverse projects, we consistently come up against similar issues.

Data quality is one of the biggest. The better your data, the better your decisions.

Seismic surveys in particular collect many terabytes of data to build up sub-surface models. These recordings need to be deconvolved to remove noise, and combined with probe measurements at key points, which arrive in different formats. Cleaning up the data and matching the different sources and formats is a big task.
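The snippet below is a minimal numerical sketch of that deconvolution step, assuming the source wavelet is known and using synthetic data; production seismic processing is far more elaborate than this.

```python
import numpy as np

def wiener_deconvolve(trace, wavelet, noise_level=0.01):
    """Recover reflectivity from a seismic trace by Wiener deconvolution.

    The trace is modelled as reflectivity convolved with a source wavelet
    plus noise; division in the frequency domain is stabilised by a small
    noise term so that quiet frequencies do not blow up.
    """
    n = len(trace)
    W = np.fft.rfft(wavelet, n)
    T = np.fft.rfft(trace, n)
    # Wiener filter: conjugate over power spectrum plus regulariser.
    R = T * np.conj(W) / (np.abs(W) ** 2 + noise_level)
    return np.fft.irfft(R, n)

# Synthetic check: two reflectors convolved with a short wavelet.
rng = np.random.default_rng(0)
reflectivity = np.zeros(256)
reflectivity[[60, 150]] = [1.0, -0.6]
wavelet = np.array([0.2, 0.9, 0.2])
trace = np.convolve(reflectivity, wavelet, mode="full")[:256]
trace += 0.02 * rng.standard_normal(256)
recovered = wiener_deconvolve(trace, wavelet)
print("largest recovered spikes at samples:",
      np.argsort(np.abs(recovered))[-2:])
```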

Exploration allows you time to process the data; during drilling, fast decisions can be critical to avoiding costly errors. Here in particular there remains much room for improvement in sensors and in transmitting data for processing. In a perfect world we would even process the data on the rig in real time. However, we can still extract maximum value from sub-standard data by using mathematical methods that weight each data value according to our confidence in its accuracy.
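One common way to do this, sketched below, is inverse-variance weighting: each reading of the same quantity is weighted by how confident we are in it. The pressure readings and their uncertainties here are invented for illustration.

```python
def confidence_weighted_mean(values_with_sigma):
    """Combine measurements of the same quantity, weighting each by
    its confidence (here, inverse variance). A noisy downhole reading
    still contributes, but a precise surface reading dominates.
    """
    weights = [1.0 / (sigma ** 2) for _, sigma in values_with_sigma]
    total = sum(w * v for (v, _), w in zip(values_with_sigma, weights))
    return total / sum(weights)

# Illustrative: three mud-pressure readings with differing accuracy.
readings = [(312.0, 2.0), (305.0, 10.0), (310.5, 1.5)]  # (value, std dev)
print(f"Combined estimate: {confidence_weighted_mean(readings):.1f} psi")
```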

There is also a lot of data to deal with from different suppliers in different formats. To do something useful with it, we need to convert it into standardised formats with unified time and date stamps. In well management, for example, Energistics' WITSML standard allows diverse data - from hook load to mud pressure - to be processed in a consistent and comparable format. This is effective, but it requires that the data being fed into it is reliable - and this is always the biggest challenge.
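The sketch below shows the normalisation idea on a simplified, WITSML-like XML snippet: timestamps from different time zones are converted to UTC so that channels can be compared directly. The element names are simplified for illustration and are not the actual Energistics schema, which is a much richer, namespaced standard.

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

# Simplified, illustrative log snippet; real WITSML uses a richer,
# namespaced schema defined by Energistics.
SNIPPET = """
<log well="A-12">
  <point time="2015-12-03T08:15:00+01:00" mnemonic="HKLD" uom="klbf" value="182.4"/>
  <point time="2015-12-03T07:15:30+00:00" mnemonic="SPP" uom="kPa" value="18250"/>
</log>
"""

def normalise(xml_text):
    """Parse log points into plain records with UTC timestamps, so data
    from different suppliers and time zones can be compared directly."""
    for point in ET.fromstring(xml_text).iter("point"):
        ts = datetime.fromisoformat(point.get("time")).astimezone(timezone.utc)
        yield {
            "time_utc": ts.isoformat(),
            "channel": point.get("mnemonic"),
            "unit": point.get("uom"),
            "value": float(point.get("value")),
        }

for record in normalise(SNIPPET):
    print(record)
```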

Underlying all of this is one major issue: oil and gas data is vast and complex, and few people really understand how to use it properly. Even the best data is full of uncertainties, and it exists in a world of almost infinite variables. Data analytics platforms alone will not deliver. Getting the most out of your data requires an understanding of the data and its limitations - only then can it be used to its full effect.


An article by Mark Claxton.


Read the article online at: https://www.oilfieldtechnology.com/digital-oilfield/03122015/opportunities-and-challenges-for-the-digital-oilfield/
