How to minimise the IT and business impact of a power outage
There are some things in life that can’t be avoided. Death, taxes, and software updates to name but a few. Power failures, when caused by natural disasters, would seemingly fit in this category too – a familiar story in this era of more severe and unusual weather events. However, while the weather can’t be controlled, what can be prevented is the lasting business impact these unfortunate occurrences can bring.
As the recent case with Delta Airlines shows, power outages can span anything from minutes to weeks, shutting down an entire IT infrastructure in the process and costing organisations time, money and brand reputation. In order to quickly recover, businesses need a stringent disaster recovery (DR) and business continuity (BC) plan to minimise the impact to operations.
For many, disaster recovery is becoming part of their everyday reality: Gartner’s “2015 Business Continuity” survey reported that 72 per cent of organisations had used their DR plan. That such a large proportion of organisations have invoked their DR plan should serve as a wake-up call for anyone without a DR/BC plan in place. Far too often, DR planning happens in the wake of a disaster, when it is too late to ensure business continuity. The best plans are put in place long before data is lost, business is halted and the organisation suffers.
Even with a DR plan in place, business continuity might not be guaranteed. What happens if the DR site goes down or the DR plan isn’t executed well because everyone has gone into crisis mode? Whether you are creating a DR plan for the first time or updating an existing one, there are three key considerations to include to ensure the plan is robust, effective and allows true recovery to take place.
Automate the failover process
A clearly documented failover process is a must for any DR plan, but an automated one is better. Manual failover processes have many steps, and each manual step is an opportunity for a mistake. If you are executing a production failover, you are in crisis mode, and panic elevates the probability of human error. An automated failover ensures consistency and repeatability, removing the manual steps while delivering fast, seamless and predictable recovery.
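To make the idea concrete, here is a minimal sketch of what an automated failover runbook can look like. All step names and thresholds are hypothetical, not any particular product’s API; the point is that the steps are codified in order, each one halts the run on failure, and nothing depends on a person remembering what comes next.

```python
# Hypothetical automated failover runbook. Each step is a function that
# raises on failure, so the run either completes fully or stops at a
# known point -- no skipped or misremembered manual steps.

def check_replica_health():
    """Verify the DR site's replica is in sync before failing over."""
    replication_lag_seconds = 0  # placeholder: would query the replica
    if replication_lag_seconds > 30:
        raise RuntimeError("replica too far behind; aborting failover")

def stop_production_writes():
    """Fence the production site so no writes are lost mid-failover."""
    pass  # placeholder: would disable the production write path

def promote_dr_site():
    """Promote the DR replica to act as the new primary."""
    pass  # placeholder: would issue the promotion command

def redirect_traffic():
    """Repoint DNS or the load balancer at the DR site."""
    pass  # placeholder: would update routing records

FAILOVER_STEPS = [
    check_replica_health,
    stop_production_writes,
    promote_dr_site,
    redirect_traffic,
]

def run_failover():
    """Execute every step in order; return the names of completed steps."""
    completed = []
    for step in FAILOVER_STEPS:
        step()  # any exception halts the runbook at a known point
        completed.append(step.__name__)
    return completed
```

A documented failback would follow the same pattern with the steps reversed, which is why testing both directions together is so important.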
Don’t just failover, failback
Once organisations have an automated and well-documented failover process, they should create an automated and well-documented failback process. During downtime companies want business continuity, but once they are back up and running, they want to be able to transfer the data from the DR site back to the production site.
Many organisations will not execute a failover because they do not have a documented failback process and/or they know they cannot successfully failback. This leaves them more vulnerable to a variety of disruptions that can significantly hurt business operations. The failback should not be more cumbersome than the original failover. To be completely confident it will work, organisations should fully test production failover and failback.
Have a DR plan for your DR
What do you do if the DR site is down? There are instances where DR sites – even those located far apart – can themselves fail during a disaster. For example, if a blackout spanned from one end of the country to the other, it wouldn’t matter that your business had located its DR site “far away”; you still couldn’t recover because of the sheer range of the power outage. This is why it is critical to have a Plan A, Plan B and Plan C. Recovering to a public cloud provider, for example, can offer the geographical diversity needed to avoid such situations.
Some things in life can be avoided. By implementing the right strategy – such as an automated plan that takes over in a crisis – downtime, along with damage to brand and company reputation, is entirely escapable. With an additional plan to protect against disasters at the DR site itself, you can rest assured that your business will thrive come wind, rain, sleet or snow.
Peter Godden is VP, EMEA at Zerto
Why Transmission & Distribution Utilities Need Digital Twins
As with any new technology, digital twins can raise as many questions as they answer. There can be natural resistance, especially among senior utility executives who are used to the old ways and need a compelling case to invest in new ones.
So is a digital twin just a fancy name for modelling? And why do many senior leaders and engineers at power transmission & distribution (T&D) companies have a gnawing feeling they should have one? Ultimately it comes down to one key question: is this a trend worth our time and money?
The short answer is yes, if approached intelligently and accounting for utilities’ specific needs. This is no case of runaway hype or an overwrought name for an underwhelming development – digital twin technology can be genuinely transformational if done right. So here are six reasons why in five years no T&D utility will want to be without a digital twin.
1. Smarter Asset Planning
A digital twin is a real-time digital counterpart of a utility’s real-world grid. A proper digital twin – not just a static 3D model of some adjacent assets – represents the grid in as much detail as possible, is updated in real time and can be used to model ‘what if’ scenarios to gauge their effects in real life. It is the repository in which to collect and index all network data, from images to 3D point clouds to past reports and analyses.
With that in mind, an obvious use case for a digital twin is planning upgrades and expansions. For example, if a developer wants to connect a major solar generation asset, what effect might that have on the grid assets, and will they need upgrading or reinforcement? A seasoned engineer can offer an educated prediction if they are familiar with the local assets, their age and their condition – but with a digital twin they can simply model the scenario and find out.
The decision is more likely to be the right one, the utility is less likely to be blindsided by unforeseen complications, and less time and money need be spent visiting the site and validating information.
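In its simplest form, that kind of ‘what if’ connection check can be sketched as below. This is a toy illustration, not a real power-flow study: the feeder names, capacities and loadings are invented, and the 90 per cent headroom threshold is a hypothetical planning rule, but it shows how a twin turns a connection request into a query against recorded asset data.

```python
# Toy 'what-if' connection check against a digital twin's asset records.
# All feeder names and MW figures are invented for illustration.

LINES = {
    "feeder_A": {"capacity_mw": 50.0, "peak_load_mw": 38.0},
    "feeder_B": {"capacity_mw": 30.0, "peak_load_mw": 27.5},
}

def needs_reinforcement(line_id, added_mw, margin=0.9):
    """Flag a line if the new connection would push projected peak
    loading past 90% of thermal capacity (a hypothetical threshold)."""
    line = LINES[line_id]
    projected = line["peak_load_mw"] + added_mw
    return projected > margin * line["capacity_mw"]
```

For instance, under these invented figures a 5 MW solar connection fits comfortably on feeder_A but would push feeder_B past its headroom, flagging it for reinforcement before any site visit takes place.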
As the energy transition accelerates, both transmission and distribution utilities will receive more connection requests for anything from solar parks to electric vehicle charging infrastructure, heat pumps and batteries – and all this on top of normal grid upgrade programmes. A well-constructed digital twin may come to be an essential tool to keep up with the pace of change.
2. Improved Inspection and Maintenance
Utilities spend enormous amounts of time and money on asset inspection and maintenance – they have to in order to meet their operational and safety responsibilities. To make the task more manageable, most utilities try to prioritise the most critical or fragile parts of the network for inspection, based on past inspection data and engineers’ experience. Many are investigating how to better collect, store and analyse data in order to hone this process, with the ultimate goal of predicting where inspections and maintenance will be needed before problems arise.
The digital twin is the platform that contextualises this information. Data is tagged to assets in the model, analytics and AI algorithms are applied and suggested interventions are automatically flagged to the human user, who can understand what and where the problem is thanks to the twin. As new data is collected over time, the process only becomes more effective.
3. More Efficient Vegetation Management
Utilities – especially transmission utilities in areas of high wildfire risk – are in a constant struggle with nature to keep in check the vegetation that surrounds power lines and other assets. Failure risks outages, damage to assets and even fire. A comprehensive digital twin won’t just incorporate the grid assets – a network of powerlines and pylons isolated on an otherwise blank screen – but the immediate surroundings too: local houses, roads, waterways and trees.
If the twin is enriched with vegetation data on factors such as the species, growth rate and health of a tree, then the utility can use it to assess the risk from any given twig or branch neighbouring one of its assets, and prioritise and dispatch vegetation management crews accordingly.
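A simple way to picture that prioritisation is a risk score per tree, combining growth rate, health and distance to the line. The species growth rates, distances and scoring formula below are invented for illustration; a real programme would calibrate against the utility’s own inspection history.

```python
# Hypothetical vegetation-risk ranking from a twin's vegetation data.
# Growth rates (m/year) and distances (m) are invented for illustration.

GROWTH_RATE_M_PER_YEAR = {"eucalyptus": 1.8, "oak": 0.5, "pine": 0.9}

def risk_score(species, distance_to_line_m, health=1.0):
    """Higher score = dispatch a crew sooner. A fast-growing, healthy
    (i.e. vigorously growing) tree close to the line scores highest."""
    rate = GROWTH_RATE_M_PER_YEAR.get(species, 0.7)  # default for unknowns
    return health * rate / max(distance_to_line_m, 0.1)

# Trees recorded near one span: (species, distance to line in metres).
trees = [("oak", 2.0), ("eucalyptus", 4.0), ("pine", 1.5)]
ranked = sorted(trees, key=lambda t: risk_score(*t), reverse=True)
```

In this invented example the nearby pine outranks the faster-growing but more distant eucalyptus, which is exactly the kind of trade-off the enriched twin lets the utility make systematically rather than by eye.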
As with expansion planning, inspection and maintenance, the value here is less labour-intensive, more cost-effective decision making and planning – essential in an industry of tight margins and constrained resources. What’s more, the value only rises over time as feedback allows the utility to finesse the programme.
4. Automated Powerline Inspection
Remember, though, that to be maximally useful a digital twin must be kept up to date. A larger utility might blanch at the resources required not just to map and inspect the network once in order to build the twin, but to update that twin at regular intervals.
However, digital twins are also an enabling technology for another technological step-change – automated powerline inspection.
Imagine a fleet of sensor-equipped drones empowered to fly the lines almost constantly, returning (automatically) only to recharge their batteries. Not only would such a set-up be far cheaper to operate than a comparable fleet of human inspectors, it could provide far more detail at far more regular intervals, facilitating all the above benefits of better planning, inspection, maintenance and vegetation management. Human inspectors could be reserved for non-routine interventions that really require their hard-earned expertise.
In this scenario, the digital twin provides the ‘map’ by which the drone can plan a route and navigate itself, in conjunction with its sensors.
5. Improved Emergency Modelling and Faster Response
If the worst happens and emergency strikes, such as a wildfire or natural disaster, digital twins can again prove invaluable. The intricate, detailed understanding of the grid, assets and its surroundings that a digital twin gives is an element of order in a chaotic situation, and can guide the utility and emergency services alike in mounting an informed response.
And once again, the digital twin’s facility for ‘what-if’ scenario testing is especially useful for emergency preparedness. If a hurricane strikes at point X, what will be the effect on assets at point Y? If a downed pylon sparks a fire at point A, what residences are nearby and what does an evacuation plan look like?
6. Easier Accommodation of External Stakeholders
Finally, a digital twin can make lighter work of engaging with external stakeholders. The world doesn’t stand still, and a once blissfully isolated powerline may suddenly find itself adjacent to the building site for a new development or road.
As well as easing connection planning (see point 1), a digital twin takes the pain out of processes that require interfacing with external stakeholders, such as maintenance contractors, arborists, trimming crews or local government agencies. It breaks down the silos between these groups and allows them to work from a single version of the truth – in future it could even be used as part of the bid process for contractors.
These six reasons why digital twins will be indispensable to power T&D utilities are only the tip of the iceberg; the possibilities are endless given the constant advancement of data collection and analysis technology. No doubt these will invite even more questions – and we relish the challenge of answering them.