Apr 4, 2017

How managing risk in the oil and gas industry can prevent a large-scale disaster

Oil and Gas
Lana Ginns
6 min

The US Bureau of Labor Statistics suggests more than half a million people in the US work in the oil and gas industry – with the number of workers worldwide estimated at between five and six million. The nature of working with hazardous materials in hostile environments means oil and gas workers are often exposed to dangerous conditions. Though many oil and gas companies are taking responsible steps to eradicate possible dangers, reducing risk must be at the centre of all new projects going forward. New technology and an increased awareness of safety must be the driving force behind operational reform. How can oil and gas companies ensure they are fulfilling their duty of care by protecting the welfare of their employees? In this article, Lana Ginns, Marketing Manager at Fluenta, discusses how managing risk through continuous asset monitoring and connected technologies can highlight equipment failures and operational issues before they contribute to disaster.

Safety first

Although rare, there is a risk of on-site explosions in the oil and gas industry. The Deepwater Horizon oil spill (also referred to as the BP oil spill) began on April 20, 2010, in the Gulf of Mexico on the BP-operated Macondo Prospect. At approximately 9:45 pm, methane gas from the well expanded into the drilling riser and rose into the drilling rig, where it ignited and exploded, engulfing the platform. 126 crew members were on board: seven BP employees, 79 Transocean workers and employees of various other companies. Despite a three-day US Coast Guard search, eleven missing workers were never found. The Deepwater Horizon sank on the morning of April 22, 2010. Considered the largest accidental marine oil spill in the history of the petroleum industry, the spill's total discharge was estimated by the US Government at 4.9 million barrels.

There are many potential causes of such an accident, including the sudden release of gas under pressure or the introduction of an ignition source into an explosive or flammable environment. The Hydrocarbon Releases (HCRs) that cause explosions like these are, in simple terms, leaks.

Leaks will inevitably happen during operations, and while significant efforts are being made to reduce HCRs, innovations in remote monitoring technology can still be further exploited to reduce risk. The recent Step Change in Safety campaign was supported by stakeholders in the UK offshore industry and demonstrated what could be achieved by an increased focus on safety procedures. In 2010 the total number of HCRs was 187. The Step Change in Safety campaign aimed to reduce this by 50 percent over a three-year period. Whilst the campaign fell just short – reducing the number of leaks by 49 percent – the approach demonstrated what is possible when more attention is given to on-site safety. Following an investigation into the Deepwater Horizon disaster, it was discovered that a number of asset failures had contributed to the explosion and that a dangerous HCR had gone undetected.

Hostile working environments

Monitoring potentially explosive environments for the presence of flammable vapours is an important health and safety practice and should be supported by the use of accurate measuring equipment. Historically, equipment checks were the responsibility of on-site personnel. This meant placing a human being into a potentially dangerous environment. 

Due to the nature of fossil fuel location and extraction, critical infrastructure for the Oil & Gas Industry is often located in remote environments: in the middle of the ocean or in extreme heat or cold. The Deepwater Horizon oil rig was a semi-submersible mobile offshore drilling rig that operated in waters up to 10,000 feet deep. Seawater could impact asset performance, underwater debris could affect flow and cold deep-water temperatures could freeze equipment. Monitoring assets in hostile environments is crucial for risk reduction, but these punishing conditions can mean maintenance is a dangerous task.

The management of plants located in dangerous and extreme environments is traditionally provided by a large and expensive workforce. A North Sea oil rig will typically station between 50 and 100 permanent staff on board – each working 12-hour shifts. Should an incident such as an explosion occur, access to deep-water rigs is only possible by helicopter. However, new connected technologies can help counteract the effect of hostile environments and identify or deal with potential issues before they become a catalyst for disaster.

In late 2015 a fire aboard a rig in the Caspian Sea resulted in the deaths of a number of oil and gas workers. This was caused by a gas pipeline that was damaged in high winds. By recording critical data to the cloud, companies can understand the impact of extreme weather on oil rigs and implement procedures to reduce the risk of an incident occurring again. Had the owners of the rig been more aware of the likelihood of such an incident happening, the site could have been evacuated earlier.

Connecting technology and separating risk

Cloud technology and the availability of internet connectivity now enable remote asset management. Cloud infrastructure is able to support the constant monitoring and storage of data on remote servers anywhere in the world, in real time, via the Internet of Things (IoT). Monitoring equipment installed on local assets transmits information to software hosted on central servers, rather than physically on an oil and gas site. If an asset is malfunctioning – or is about to do so – oil and gas companies will be alerted. When this real-time data is fed into software such as a continuous emission monitoring system (CEMS), organisations can collect, record and report data remotely.
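As a rough illustration of the kind of alerting such monitoring software performs, the sketch below compares streamed readings against expected baselines and flags deviations. The asset IDs, units and 5 percent tolerance are invented for the example, not taken from any real CEMS product.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    asset_id: str     # hypothetical asset identifier, e.g. a valve
    flow_rate: float  # measured mass flow, kg/h

def check_readings(readings, expected, tolerance=0.05):
    """Flag assets whose measured flow deviates from the expected
    baseline by more than the given fractional tolerance."""
    alerts = []
    for r in readings:
        baseline = expected[r.asset_id]
        if abs(r.flow_rate - baseline) > tolerance * baseline:
            alerts.append(r.asset_id)
    return alerts

# Valve V2 reads well below its expected flow, so it is flagged
expected = {"V1": 100.0, "V2": 80.0}
readings = [Reading("V1", 101.0), Reading("V2", 60.0)]
print(check_readings(readings, expected))  # ['V2']
```

In a real deployment the baselines and tolerances would come from historical data rather than fixed constants, and alerts would feed an operations dashboard rather than a print statement.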

With internet connectivity available almost anywhere, businesses can access the CEMS data feeds of remote assets from multiple sites around the world. It is not necessary to store and run the software on a machine on-site, which removes the need for on-site staff. Additionally, the data is stored securely on multiple remote servers with backup, and is not dependent on the health and reliability of an on-site machine.

The remote measurement and testing of assets can almost eliminate human risk. With continuous measurement, operators can discover leaks through a process called mass balancing. By accounting for material entering and leaving pipes, mass flows can be identified which might have been unknown, or previously difficult to measure. For example, operators can use mass balancing to identify faulty valves within their pipe systems that may be causing dangerous leaks. Remote action can be taken to update software, shut down failing or faulty systems, and if there is a danger of explosion, extract on-site personnel immediately.
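The mass-balancing check described above can be sketched in a few lines: if the mass entering a pipe section exceeds the mass leaving it by more than measurement tolerance allows, material is escaping somewhere. This is an illustrative simplification, not any vendor's actual method – the flow values and the 1 percent tolerance are invented, and a real system would also account for sensor uncertainty, compressibility and timing.

```python
def mass_balance_leak(inflows, outflows, tolerance=0.01):
    """Compare total mass entering a pipe section with total mass
    leaving it. A shortfall beyond the fractional tolerance suggests
    a leak; returns the unaccounted-for flow, or None if balanced.

    inflows/outflows: mass flow rates (e.g. kg/s) measured at the
    section boundaries."""
    total_in = sum(inflows)
    total_out = sum(outflows)
    imbalance = total_in - total_out
    if imbalance > tolerance * total_in:
        return imbalance  # mass is going missing within the section
    return None

# 0.7 kg/s entering this section never reaches an outlet
leak = mass_balance_leak(inflows=[12.0, 3.5], outflows=[10.0, 4.8])
print(round(leak, 3))  # 0.7
```

The same accounting can be run per valve or per section, which is how an operator might narrow a detected imbalance down to a single faulty component.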

By implementing connected technologies, the oil and gas industry is mitigating risk and reducing the threat of a large-scale oil and gas disaster. Through cloud technology, oil and gas companies can remotely manage assets and reduce the number of personnel having to maintain equipment in dangerous environments. IoT and cloud technologies empower oil and gas companies to pre-empt an incident before it occurs, significantly reducing human risk. Mitigating risk in the oil and gas industry could be the difference between a well-managed oil rig and a large-scale disaster.

Lana Ginns is the Marketing Manager at Fluenta, the global leader in ultrasonic flow measurement for the Oil & Gas and Chemicals Industries. Lana has a passion for languages and is fluent in German, English, Spanish and French. Her extensive marketing experience, which covers both B2B and B2C, ranges from complex engineering equipment to essential oils. 


Read the March 2017 edition of Energy Digital magazine


Jun 12, 2021

Why Transmission & Distribution Utilities Need Digital Twins

Petri Rauhakallio
6 min
Petri Rauhakallio at Sharper Shape outlines the benefits of digital twins for energy transmission and distribution utilities

As with any new technology, digital twins can create as many questions as answers. There can be natural resistance, especially among senior utility executives who are used to the old ways and need a compelling case to invest in new ones.

So is a digital twin just a fancy name for modelling? And why do many senior leaders and engineers at power transmission & distribution (T&D) companies have a gnawing feeling they should have one? Ultimately it comes down to one key question: is this a trend worth our time and money?

The short answer is yes, if approached intelligently and accounting for utilities’ specific needs. This is no case of runaway hype or an overwrought name for an underwhelming development – digital twin technology can be genuinely transformational if done right. So here are six reasons why in five years no T&D utility will want to be without a digital twin. 

1. Smarter Asset Planning

A digital twin is a real-time digital counterpart of a utility’s real-world grid. A proper digital twin – and not just a static 3D model of some adjacent assets – represents the grid in as much detail as possible, is updated in real time and can be used to model ‘what if’ scenarios to gauge the effects in real life. It is the repository in which to collect and index all network data, from images, to 3D point clouds, to past reports and analyses.
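As a loose illustration of the repository idea, the toy sketch below indexes records (images, reports, scans) against the assets they describe, so any record can be retrieved in context. The class names, asset IDs and record format are invented for the example; a production twin would use a spatial database and real sensor feeds rather than an in-memory dictionary.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    asset_id: str
    kind: str                                     # e.g. "pylon", "transformer"
    records: list = field(default_factory=list)   # images, reports, point clouds

class DigitalTwin:
    """Toy repository: every piece of network data is tagged to the
    asset it describes, so nothing sits in an unconnected silo."""
    def __init__(self):
        self.assets = {}

    def register(self, asset):
        self.assets[asset.asset_id] = asset

    def attach(self, asset_id, record):
        self.assets[asset_id].records.append(record)

twin = DigitalTwin()
twin.register(Asset("PY-041", "pylon"))
twin.attach("PY-041", {"type": "drone_image", "date": "2021-05-30"})
print(len(twin.assets["PY-041"].records))  # 1
```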

With that in mind, an obvious use-case for a digital twin is planning upgrades and expansions. For example, if a developer wants to connect a major solar generation asset, what effect might that have on the grid assets, and will they need upgrading or reinforcement? A seasoned engineer can offer an educated prediction if they are familiar with the local assets, their age and their condition – but with a digital twin they can simply model the scenario and find out.

The decision is more likely to be the right one, the utility is less likely to be blindsided by unforeseen complications, and less time and money need be spent visiting the site and validating information.

As the energy transition accelerates, both transmission and distribution utilities will receive more connection requests for anything from solar parks to electric vehicle charging infrastructure, to heat pumps and batteries – and all this on top of normal grid upgrade programs. A well-constructed digital twin may come to be an essential tool for keeping up with the pace of change.

2. Improved Inspection and Maintenance

Utilities spend enormous amounts of time and money on asset inspection and maintenance – they have to in order to meet their operational and safety responsibilities. In order to make the task more manageable, most utilities try to prioritise the most critical or fragile parts of the network for inspection, based on past inspection data and engineers’ experience. Many are investigating how to better collect, store and analyze data in order to hone this process, with the ultimate goal of predicting where inspections and maintenance are going to be needed before problems arise.  

The digital twin is the platform that contextualises this information. Data is tagged to assets in the model, analytics and AI algorithms are applied and suggested interventions are automatically flagged to the human user, who can understand what and where the problem is thanks to the twin. As new data is collected over time, the process only becomes more effective.

3. More Efficient Vegetation Management

Utilities – especially transmission utilities in areas of high wildfire risk – are in a constant struggle with nature to keep the vegetation that surrounds power lines and other assets in check. Failure risks outages, damage to assets and even fire. A comprehensive digital twin won’t incorporate just the grid assets – a network of powerlines and pylons isolated on an otherwise blank screen – but the immediate surroundings too. This means local houses, roads, waterways and trees.

If the twin is enriched with vegetation data on factors such as the species, growth rate and health of a tree, then the utility can use it to assess the risk from any given twig or branch neighbouring one of its assets, and prioritise and dispatch vegetation management crews accordingly. 
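A toy example of how such prioritisation might work: the score below rises for trees that are close to a line, fast-growing, or unhealthy (and so more likely to shed branches). The formula, weights and tree data are invented for illustration; real programmes use LiDAR-derived clearances and far richer growth models.

```python
def vegetation_risk(distance_m, growth_rate_m_per_yr, health_factor):
    """Toy priority score for a tree near a power line.
    health_factor: 0.0 (healthy) .. 1.0 (dead or diseased)."""
    # rough years until the canopy could plausibly reach the line
    years_to_contact = max(distance_m, 0.1) / max(growth_rate_m_per_yr, 0.01)
    return (1.0 / years_to_contact) + health_factor

trees = {
    "T1": vegetation_risk(4.0, 0.5, 0.1),  # slow-growing, healthy, 4 m away
    "T2": vegetation_risk(1.5, 0.8, 0.6),  # close, fast-growing, poor health
}
# Dispatch crews to the highest-scoring trees first
worst_first = sorted(trees, key=trees.get, reverse=True)
print(worst_first)  # ['T2', 'T1']
```

The feedback loop mentioned below fits naturally here: each inspection updates the tree attributes in the twin, and the ranking improves with every pass.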

As with expansion planning, inspection and maintenance, the value here lies in less labor-intensive and more cost-effective decision making and planning – essential in an industry of tight margins and constrained resources. What’s more, the value only rises over time as feedback allows the utility to finesse the program.

4. Automated powerline inspection

Remember, though, that to be maximally useful a digital twin must be kept up to date. A larger utility might blanch at the resources required not just to map and inspect the network once in order to build the twin, but to update that twin at regular intervals.

However, digital twins are also an enabling technology for another technological step-change – automated powerline inspection.

Imagine a fleet of sensor-equipped drones empowered to fly the lines almost constantly, returning (automatically) only to recharge their batteries. Not only would such a set-up be far cheaper to operate than a comparable fleet of human inspectors, it could provide far more detail at far more regular intervals, facilitating all the above benefits of better planning, inspection, maintenance and vegetation management. Human inspectors could be reserved for non-routine interventions that really require their hard-earned expertise.

In this scenario, the digital twin provides the ‘map’ by which the drone can plan a route and navigate itself, in conjunction with its sensors.

5. Improved Emergency Modelling and Faster Response

If the worst happens and emergency strikes, such as a wildfire or natural disaster, digital twins can again prove invaluable. The intricate, detailed understanding of the grid, assets and its surroundings that a digital twin gives is an element of order in a chaotic situation, and can guide the utility and emergency services alike in mounting an informed response.

And once again, the digital twin’s facility for ‘what-if’ scenario testing is especially useful for emergency preparedness. If a hurricane strikes at point X, what will be the effect on assets at point Y? If a downed pylon sparks a fire at point A, what residences are nearby and what does an evacuation plan look like?

6. Easier accommodation of external stakeholders

Finally, a digital twin can make lighter work of engaging with external stakeholders. The world doesn’t stand still, and a once blissfully-isolated powerline may suddenly find itself adjacent to a construction site for a new building or road.

As well as planning for connections (see point 1), a digital twin takes the pain out of processes that require interfacing with external stakeholders, such as maintenance contractors, arborists, trimming crews or local government agencies. It breaks down the silos between these groups and allows them to work from a single version of the truth – in future it could even be used as part of the bid process for contractors.

These six reasons why digital twins will be indispensable to power T&D utilities are only the tip of the iceberg; the possibilities are endless given the constant advancement of data collection and analysis technology. No doubt they will invite even more questions – and we relish the challenge of answering them.

