SAFER, SMARTER, GREENER
Date: October 28, 2016 Authors: Bert Taube, Paul Leufkens, Jim Weik, Jesse Dill
WHITEPAPER
PROACTIVE TRANSMISSION AND DISTRIBUTION ASSET MANAGEMENT Utilizing advanced data management and predictive analytics
Reference to part of this report which may lead to misinterpretation is not permissible.
No. | Date | Reason for Issue | Prepared by | Verified by | Approved by
1 | 2016-10-28 | First issue | Bert Taube, Paul Leufkens, Jim Weik, Jesse Dill | Jesse Dill | Jesse Dill
Date: October 2016
Prepared by DNV GL - Software
© DNV GL AS. All rights reserved
This publication or parts thereof may not be reproduced or transmitted in any form or by any means, including copying or recording, without the prior written consent of DNV GL.
Table of Contents
1 ABSTRACT
2 KEYWORDS
3 EXECUTIVE SUMMARY
4 ADVANCED FIELD TESTING AND ONLINE MONITORING METHODOLOGIES FOR T&D ASSET MANAGEMENT & OPTIMIZATION
   Asset diagnostic categories
   Examples of non-intrusive asset diagnostics
5 DATA MANAGEMENT AND ANALYTICS SOLUTIONS FOR T&D ASSET MANAGEMENT AND OPTIMIZATION
   Risk-based maintenance
6 MAXIMIZING THE VALUE OF ASSET MANAGEMENT AND OPTIMIZATION THROUGH ADVANCED DATA MANAGEMENT AND PREDICTIVE AND PRESCRIPTIVE ANALYTICS
   The transformation from condition to risk based asset management
   Embrace data analytics
   Where are utility data analytics today?
   Utility big data capabilities to increase value from utility data analytics
7 PROACTIVE ASSET MANAGEMENT AND OPTIMIZATION DRIVEN BY PREDICTIVE AND PRESCRIPTIVE ANALYTICS IN COMBINATION WITH ADVANCED DATA MANAGEMENT, FIELD TESTING AND ONLINE MONITORING METHODOLOGIES
   Risk-based maintenance – case study
8 REFERENCES
1 ABSTRACT
This paper will merge the concepts of asset field testing and online monitoring with asset criticality-health-
risk (CHR). The goal is to design and deploy predictive top-down and bottom-up asset management
(AM) and optimization programs for power transmission and distribution. It will show how such programs
can be enhanced with scalable situational awareness (SA) enabled through data driven software capabilities,
such as advanced predictive and prescriptive analytics and big data processing. This development will drive
next-generation asset management & optimization with informed, event-driven and real-time decision-
making.
2 KEYWORDS
Predictive Asset Management & Optimization, Asset Field Testing and Online Monitoring Methodologies,
Distributed Energy Resources (DER), Energy Storage Systems (ESS), Asset Criticality-Health-Risk (CHR),
Asset Management Top-Down and Bottom-up Strategy, Asset Data Management & Analytics, Big Data, Asset
Data Driven Scalable Situational Awareness, Predictive Data, Test and Online Monitoring Driven Asset
Maintenance
3 EXECUTIVE SUMMARY
Utilities work continuously to leverage their assets. They are challenged to grow earnings even when they do
not have the corresponding revenue growth. For this there are no standards, only best practices. And
everything is performed under the strict and severe supervision of a public commission while at the mercy of
local circumstances and considerable history. As a result, questions come up: “What field testing can be
done to predict asset lifetime and support a maintenance methodology? How can a testing program be put
together to ensure an outcome of solutions and real data leading to more accurate conclusions about the
remaining lifetime of components and necessary efforts and investments into maintenance?”
Asset management is the name of the game. It maximizes the lifetime of the assets, prevents outages and
other disturbances from happening, and optimizes the maintenance effectiveness and efficiency. NERC
compliance represents only a minimum requirement in asset management. In addition, utilities get new
responsibilities such as safely and securely integrating and operating new distributed energy resources (DER)
composed of renewable sources as well as energy storage systems (ESS) including the necessary power
electronics devices that monitor and control these systems. This all happens while there is still so much
uncertainty about lifetime performance and efficiency of these new disruptive technologies and how they
combine with traditional generation as well as the existing T&D infrastructure. In addition to that, storms
such as Katrina and Sandy have challenged utilities to provide a proper response and demonstrate grid
resilience under abnormal weather conditions. All too often such catastrophic events are claimed to be an
act of God while in many cases weather-related outages can be avoided by applying a tight quality
assurance system to the equipment that is impacted and under distress.
Besides DER, utilities are also faced with a number of new and innovative software technologies to deal with
an exponentially growing variety of networked data sources. Wide-area situational awareness enabled by
better data integration and advanced analytics offers opportunities, but a substantial problem is that the
current utility workforce has not been trained for it. There is huge upside potential in leveraging these
innovative software technologies that bring powerful capabilities such as big data processing as well as
predictive and prescriptive analytics. This will hugely impact the effectiveness and efficiency of asset
management and will change the way it is done. Real-time automation enabling event-driven informed
decision making in asset operation and maintenance is at our fingertips. The necessary hardware and
software technologies are available today. The challenge is to integrate them into the existing information
systems infrastructure such that reliable and effective grid operation and maintenance are guaranteed at the
same time.
4 ADVANCED FIELD TESTING AND ONLINE MONITORING
METHODOLOGIES FOR T&D ASSET MANAGEMENT & OPTIMIZATION
What should be the role of testing of aged asset components? Refurbishment and retrofit are a viable
alternative to investments in new equipment, once a sample test of the refurbished asset demonstrates the
capability for starting a new life. In quite a few cases, experience shows that “vintage” equipment far
exceeds its projected lifetime because at the time of its design, much more margin was included than
nowadays. Also, intelligent use of temporary overloading practices (e.g. dynamic loading of cables and lines)
can be considered as an AM solution.
The first part of AM, acquiring new material, is largely covered by global industry standards, manufacturers’
type-tests, and effective commissioning tests. The reliability of the assets during usage depends upon their
age, conditions at the moment of purchase, specific wear and tear, weather circumstances at their location,
and the maintenance in the field. So far, field testing mainly consists of oil measurement for transformers,
some lubricating and mechanical maintenance, and condition checks on critical assets.
Condition monitoring and advanced maintenance strategies further reinforce reliability. Reliability surveys on
aged components, such as the one recently carried out by Cigré on HV switchgear (Cigre, Oct 2012) and
power transformers can provide major input on failure modes at advanced age and thus help to prioritize
maintenance targets.
The general problem is that both in transmission and distribution there is no real opportunity to take assets
out of service for a condition check. There are too many, it is too costly, the objects and connections are too
critical in their function, the traditional condition check is not sufficiently forward looking, and with
traditional means the economics are not proven.
Asset diagnostic categories
The adjectives intrusive/non-intrusive and invasive/non-invasive are commonly used in the technical
literature. The CIGRE working group WG A3.32 recommends using non-intrusive in the context of electrical
equipment because it is more specific and refers to the fact that there is no intrusion into the system.
In medicine, non-intrusive procedures are well defined and known to have clear advantages over other
procedures, as they respect the fundamental principle of "first, do no harm." Adapting this to the
domain of electricity is not straightforward. There are two major criteria to classify an asset diagnostic
method as non-intrusive:
1. How the integrity of the asset itself could be potentially affected by the diagnostics and
2. How the grid is affected by the diagnostics.
CIGRE working group WG A3.32 proposes to consider the usefulness of a diagnostic method as its cost
effectiveness, i.e. a comparison of its value (benefits) versus its cost. The value of a diagnostic method is
expressed in terms of condition indicators and the potential diagnosis which one can get using it. The cost of
a diagnostic method equals the total of expenses and effort needed to be able to apply it. WG A3.32
provides guidelines for evaluating value and cost in order to help grid operators appreciate non-intrusive
diagnostic methods.
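To make this value-versus-cost comparison concrete, the short sketch below ranks candidate diagnostic methods by a simple usefulness ratio (value score divided by cost score). It is a minimal illustration under assumed scores and method names, not the WG A3.32 guideline itself.

```python
# Minimal sketch of ranking diagnostic methods by cost effectiveness
# (value of condition indicators versus total cost of applying the method).
# Method names and scores are illustrative only, not from WG A3.32.

from dataclasses import dataclass

@dataclass
class DiagnosticMethod:
    name: str
    value_score: float   # benefit of the condition indicators / diagnosis obtained
    cost_score: float    # total expense and effort needed to apply the method

    @property
    def usefulness(self) -> float:
        # Simple value-over-cost ratio; higher means more cost effective.
        return self.value_score / self.cost_score

methods = [
    DiagnosticMethod("On-line partial discharge monitoring", value_score=8.0, cost_score=4.0),
    DiagnosticMethod("Thermographic survey", value_score=5.0, cost_score=2.0),
    DiagnosticMethod("Off-line timing test (outage required)", value_score=7.0, cost_score=9.0),
]

for m in sorted(methods, key=lambda m: m.usefulness, reverse=True):
    print(f"{m.name}: usefulness = {m.usefulness:.2f}")
```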
Examples of non-intrusive asset diagnostics
Examples of non-intrusive asset diagnostics are manifold as asset management and optimization develops
further. The introduction of sensor and measurement components in existing assets, as well as in new asset
solutions, enjoys growing popularity due to the increasing expectations and possibilities of data-driven
approaches to unveil significant value with new and innovative utility business models.
Non-intrusive diagnostics for MV and HV switchgear
MV and HV switchgear is composed of costly circuit breakers and represents an important asset
category in power delivery. It is no surprise that CIGRE WG A3.32 has placed particular focus on
this asset class. More than a hundred diagnostic methods, mostly non-intrusive, have been identified. The
methods generate a multitude of condition indicators using diagnostic tests, diagnostic measurements and
sensing, signal processing, data analysis as well as soft- and firmware.
The following (Figure 4.1) illustrates the distribution of the different types of diagnostic methods (non-
intrusive, minimally-intrusive, intrusive). For further detail, please see Uzelac, Pater, Heinrich (CIGRE 2016).
Figure 4.1 – Distribution of diagnostic methods for each intrusion and voltage category of
switchgear
As Figure 4.1 shows, the vast majority of diagnostic methods (95%) are non- or minimally intrusive and
can be used for proper high- and medium-voltage switchgear diagnostics without intrusion during power
delivery service. This makes it possible to apply data-driven analytics to test and identify major
indicators of asset health without service interruption. As a result, the asset conditions can be permanently
monitored and analytics applied in real time.
(Figure 4.1 panels: distribution of the number of diagnostic methods per intrusion category – non-intrusive 69 %, minimally intrusive 26 %, strongly intrusive 5 %; per voltage category – medium voltage 26 %, high voltage 28 %, medium + high voltage 46 %.)
Figure 4.2 – Typical setup of a smart cable guard sensor placed around the earth leads of a three phase XLPE MV power cable in a substation.
Non-intrusive diagnostics with smart cables
New test methodologies offer solutions for a part of the problems. A good example is the Smart Cable Guard
(SCG), an approach that has also proven useful for other asset types in addition to cables. It is an
instrument to monitor underground power cable systems while the cable is in service (on-line).
It uses two inductive sensors around the cable ends and synchronized fast
communication to a central data acquisition system (Figure 4.2 and 4.3).
SCG’s ability to locate weak spots and to create an on-line PD map has
resulted in many interesting cases of avoided faults, showing its ability to
reduce the system average interruption duration as well as its frequency. On
top of that the collected information describes the health condition at all
cable points to support the correctness of the maintenance strategy.
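For illustration, the arithmetic behind locating a weak spot from the two synchronized sensors can be sketched in a few lines: a partial-discharge pulse reaches both cable ends, and the difference in arrival time fixes its position along the cable. The cable length, propagation velocity and timing values below are illustrative assumptions, not SCG product parameters.

```python
# Sketch of locating a partial-discharge (PD) site on a cable from the
# difference in pulse arrival times at sensors on both cable ends.
# Values (cable length, propagation velocity, timings) are illustrative only.

def pd_location(cable_length_m: float, velocity_m_per_us: float,
                dt_us: float) -> float:
    """Distance of the PD site from end A.

    dt_us = arrival time at end A minus arrival time at end B (microseconds).
    A pulse starting x metres from A reaches A after x/v and B after (L - x)/v,
    so dt = (2x - L)/v and x = (L + v * dt) / 2.
    """
    return (cable_length_m + velocity_m_per_us * dt_us) / 2.0

# Example: 2,000 m XLPE cable, assumed propagation velocity ~170 m/us.
# The pulse arrives 4 us later at end A than at end B.
x = pd_location(cable_length_m=2000.0, velocity_m_per_us=170.0, dt_us=4.0)
print(f"Estimated PD site: {x:.0f} m from end A")   # ~1340 m
```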
Figure 4.3 – Typical setup of a Smart Cable Guard sensor placed around the earth leads of a three phase XLPE MV power cable in a substation.

Non-intrusive diagnostics with smart wires
Another good example is Smart Wires' distributed PowerLine Guardian technology (see Figure 4.4). The device,
similar to a current transformer with on-board computing and cellular connectivity, is mounted directly on the
conductor near the transmission structures. It adds impedance as needed to "choke" the flow of electrons
through overloaded lines and redirect it to other transmission corridors. The technology represents part of an
evolving grid optimization toolkit to help utilities alleviate congestion, improve network utilization, manage
changing generation profiles and maintain reliable electric service. In addition to the previously mentioned
direct operational benefits, the device collects fast data to describe the dynamic electric profile of the
overhead lines and adjacent components. This technology provides another valuable source for a new class of
asset monitoring information acquired in real-time with the assets in service. It can be leveraged to improve
asset management for overhead transmission lines and related asset components, including the monitoring of
DER-related impact on more flexible power conduits for an increasingly solar- and wind-powered grid.

Figure 4.4 – PowerLine Guardian technology for power flow control on high voltage line
In addition to the PowerLine Guardian device, Smart Wires also developed the PowerLine Router. Its
objective is to directly increase the throughput of underutilized transmission lines, just as the larger and
more capital-intensive flexible AC transmission systems (FACTS), but at much lower cost. The router provides
digital power flow control on the transmission grid, just as similar devices from companies such as GridCo and
Varentec do on distribution grids (see Figure 4.5).
Interestingly, all the new monitoring and indicative
signals available from these different technologies
now turn out to be a challenge for traditional data
acquisition systems due to lack of standard
interoperability. If this problem can be solved
through adequate design and integration of data
acquisition, communication and collection solutions
to feed existing and new utility information systems
it will result in valuable contributions to better asset
health and predictive maintenance strategies.
In addition to the well-known and still emerging
advanced metering and synchrophasor
infrastructures, the above new and innovative solutions are available to measure, monitor and control
specific points and areas of the power delivery network. These technologies provide access to fast regional
data in the second and millisecond range, sampling rates at which information capture is not supported
by the currently available and deployed AMI communication systems. While the hardware and firmware products
available from various vendors represent valuable options for utilities to improve monitoring and control at
the grid edge (e.g. secondary feeder side of power distribution infrastructures) the development of larger
centralized big data management and analytics solutions fed by the massive amount of newly available data
from a wider range of data points is still in its infancy. This is by and large due to the fact that wide-area
communication technologies to transport all this data over larger distances to central data center locations
(i.e. data is moved to and processed at the utility head-end where the main utility information systems are
located) have not yet sufficiently matured to justify their costs and support the needed real-time, event-
driven data solutions. In addition, today's trend is clearly toward more distributed grid intelligence with
decentralized grid monitoring and control options. This not only avoids extra time and cost of data
transportation but also enables distributed real-time, event driven monitoring and control performance as
expected from the growing number of intelligent nodes in the transformation toward a more intelligent and
smarter power grid. Nevertheless, an integrated centralized asset data management and analytics solution
will be a critical part of the overall concept of distributed intelligence to enable and manage the single
version of the asset data truth.
Figure 4.5 – ENGO device for decentralized sensing, monitoring and control of the grid edge

The integration of renewable energy sources and energy storage systems currently provides utilities with
new concerns. The first question is what requirements to establish for a product to be purchased, particularly
when it represents a first-generation development. Today, there are no or inadequate standards available to
do so. As a consequence, utilities must make difficult technology choices given the lack of opportunity to
find proof of performance. Another critical aspect is the necessary interoperability between the new
components as there is no or little validation. For instance, it is not yet clear whether the best choice for
storage is lithium or flow batteries. Testing technology needs to develop aligned with the technology
evolution itself. However, this is often not the case. In addition, the multi MW size of renewable installations
makes field testing a technically and financially challenging option due to necessary investments in high
power installations.
Part of the solution to the above problems can be found in a so-called "telescope" approach, which is based
on the principle of testing as much as possible at the smallest scale and working up in size toward integrated
modules wherever an option exists. This way, only reliability testing of integrated modules is necessary. Two
considerations are to be made. One is that proper functioning of power electronics is heavily related to the
interaction within the immediate grid vicinity. Power flow ripples and electromagnetic surges can produce
responses depending on the specific circuit in which the inverter is positioned. This condition can only be
tested at a specific location and at various circuit loading conditions. The second problem is that proper
functioning of inverters in the grid is highly impacted by their controls and software. This represents again a
local interaction with the grid. As a result, the development of adequate test methodology is critical.
5 DATA MANAGEMENT AND ANALYTICS SOLUTIONS FOR T&D ASSET MANAGEMENT AND OPTIMIZATION
When properly applied, a mature, predictive asset management strategy works and provides
numerous benefits to implementing organizations. Chief among these benefits, it maximizes the
value of physical assets to the company's bottom line. This means back-office systems working in
continuity with, and as a complement to, accurate and critical field work such as inspections and
maintenance.
To develop this type of predictive asset management program, a company must understand what asset
management is and how to get the most out of it. Asset management treats the company and all of its
assets holistically. Asset management is both a top-down and bottom-up endeavor. It is a top-down process
because for asset management to work there has to be a philosophical shift and change leadership at the
top levels. Departments and divisions that used to focus solely on maintaining equipment in their territory
will need to start looking at assets as parts in a company-wide system (Figure 5.1).
It is also a bottom-up system, in so far as equipment data is of paramount importance. To implement an
effective, evolving asset management program, a utility will need to identify and evaluate each maintainable
asset and then develop a comprehensive maintenance strategy to increase the reliability and maximize the
performance results of that asset. Field personnel must be engaged and involved.
Second, asset management brings information from diverse sources (nameplate data, online monitoring
information, conditional information including periodic diagnostic test results, repair activities and so forth)
into one locus of information. All analysis and decisions are derived from this master data set. Having a
current, normalized data source helps eliminate ‘turf wars’ between departments and allows a utility to make
financial decisions based on current, accurate data.
Third, a mature asset management program monitors equipment health (H) and determines a device’s
criticality (C) to the overall performance of the company. By combining criticality and health, a utility can
evaluate the risk (R) to the organization’s operation, represented by a given piece of equipment.
Using the CHR approach, a utility can effectively identify which devices should be temporarily but
purposefully ignored, which should be maintained, and where and when replacements are required. This
cuts down on unnecessary maintenance and directs capital expenditures to where they are needed and
most beneficial.
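A minimal sketch of how criticality (C) and health (H) scores can be combined into a risk (R) figure and mapped to an ignore/maintain/replace decision is shown below. The scales, weighting and thresholds are hypothetical illustrations, not a DNV GL scoring model.

```python
# Minimal CHR sketch: combine a criticality score and a health score into a
# risk figure and map it to an action. Scales and thresholds are hypothetical.

def risk(criticality: float, health: float) -> float:
    """criticality: 0 (unimportant) .. 10 (mission critical)
    health:      0 (as new)      .. 10 (end of life / poor condition)
    Risk grows with both criticality and poor health."""
    return criticality * health  # 0 .. 100

def recommended_action(r: float) -> str:
    if r < 20:
        return "ignore (temporarily but purposefully)"
    if r < 60:
        return "maintain"
    return "plan replacement"

fleet = {
    "recloser_R-101":        (3.0, 8.0),   # unhealthy but not critical
    "substation_TX_ST-07":   (9.0, 7.5),   # critical and degrading
    "feeder_breaker_FB-220": (6.0, 2.0),   # critical but healthy
}

for asset, (c, h) in fleet.items():
    r = risk(c, h)
    print(f"{asset}: C={c} H={h} R={r:.0f} -> {recommended_action(r)}")
```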
Fourth, asset management provides flexibility, so categories of devices can be evaluated based on individual
corporate situations and goals. A category might be as broad as all oil-filled reclosers or as specific as
substation transformers made by a specific manufacturer in the 1960’s. A category can also include all
devices on a critical transmission line. As more equipment data is collected, it will become easier to identify
trends and, therefore, target equipment groups with similar characteristics and levels of importance.
An asset management system can only be truly considered a predictive maintenance program
when health and criticality can be quantified and used to determine when to ignore, maintain or
replace a given device.
Figure 5.1 - Vertical Enterprise Asset Reliability System conceptual map
Risk-based maintenance
Risk-based maintenance (RbM) has many guises and comes in many forms. The bottom line is this:
Maintenance programs move from being reactive to being proactive. The focus shifts from preventing
failures to predicting what the optimal maintenance schedules are – when maintenance work is most cost
effective. This may seem like a minor difference, but it has powerful ramifications.
To begin, criticality is now included in the decision making process. This is vitally important. Using criticality,
work can be prioritized based on the impact to the corporation upon a specific asset’s failure. Through the
monitoring of operational stress and the measurement of key electrical and mechanical parameters, utilities can
identify when a device crosses a performance threshold that would negatively impact grid operations.
RbM, which is necessary to support organizational reliability goals, is enabled only by a robust predictive
maintenance (PdM) system that allows utilities to identify those assets which, if they fail, have the highest
impact on the enterprise. PdM uses all available equipment health data. As a result, there has to be one
comprehensive, trustworthy source of data. All decisions are made based on this common source of truth.
The advantages
Predictive maintenance is the most efficient and effective way to schedule maintenance. It also maximizes
the value of diagnostic and monitoring data which produce the most reliable results. This includes the high
volume of data collected from diverse sources, like Smart Grid technologies, such as Smart Meters, or IoT
devices, such as new online monitoring sensors.
PdM allows a utility to view the company as a single entity, without separating goals by department (e.g.
Operations, IT, Budgeting, Financial). By using PdM, a utility can develop risk-based maintenance plans.
Maintenance triggers can be created and alerts sent to allow just-in-time maintenance.
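A trigger of this kind can be as simple as comparing a monitored condition indicator against a limit and emitting a work request when it is exceeded, as sketched below. The parameter name and limit value are illustrative assumptions.

```python
# Sketch of a just-in-time maintenance trigger: compare a monitored condition
# indicator against a limit and raise a work request when it is exceeded.
# The parameter name and limit value are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class MaintenanceTrigger:
    asset_id: str
    parameter: str
    limit: float

    def evaluate(self, reading: float) -> Optional[str]:
        # Return an alert message when the reading crosses the limit.
        if reading > self.limit:
            return (f"ALERT {self.asset_id}: {self.parameter} = {reading} "
                    f"exceeds limit {self.limit} - schedule maintenance")
        return None

trigger = MaintenanceTrigger(asset_id="TX-042", parameter="dissolved acetylene (ppm)", limit=35.0)

for reading in (12.0, 28.0, 41.0):
    alert = trigger.evaluate(reading)
    if alert:
        print(alert)
```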
The disadvantages
Moving from a condition-based to a predictive maintenance approach requires a philosophical shift in the
way everyone in the utility thinks about equipment and the purpose of maintenance. For example, line
workers normally change out oil-filled reclosers every three years. Before PdM or RbM, they thought they
were maintaining the lines. With PdM, they should be thinking, ‘I am ensuring the revenue stream from the
customers on this line, by maintaining or improving this line’s reliability.’ Substation crews might find that
the normally scheduled outage in the spring has been cancelled, because the risk of equipment failure is low
and the loss of revenue does not justify shutting down the substation.
Depending on the previous maintenance system, a PdM system may or may not require training. It may or
may not require the integration of new monitoring systems to get data into a central data storehouse. If
various departments and divisions were used to working autonomously, there may be some resistance to
sharing data and giving up decision making power. However, the cost savings, improved reliability, and
increased organizational efficiency make overcoming these challenges worthwhile and critical to continued
organizational growth and success.
Once a PdM system is in place, a utility can develop a risk and condition-based maintenance system, adding
more sources of data and fine-tuning work and capital expenditure plans, to meet corporate goals.
Figure 5.2 – Large substation infrastructure requires better analytic and maintenance tools than historical time based methods can provide.
6 MAXIMIZING THE VALUE OF ASSET MANAGEMENT AND
OPTIMIZATION THROUGH ADVANCED DATA MANAGEMENT AND PREDICTIVE AND PRESCRIPTIVE ANALYTICS
The transformation from condition to risk based asset management
As elaborated in the previous sections, the current objective of utilities is to move from reactive condition-
based to proactive risk-based asset management. In order to do so, utilities need to introduce the concept
of asset criticality as illustrated in Figure 6.1.
But what does this transformation mean from the perspective of innovative data solutions driven by
capabilities such as advanced analytics or big data? Reactive, condition-based asset management is driven
by the actual asset health identified through field testing and asset online monitoring. Proactive risk-based
asset management introduces the concept of asset criticality in addition to asset health, to also weigh in the
impact and importance of each asset on the overall performance of the utility enterprise. This new predictive
approach not only needs to introduce the advanced concepts of predictive and prescriptive analytics in order
to identify and perform forward-looking maintenance strategies, it also requires far more granularity to move
from the asset class to the individual asset level, which essentially requires big data capabilities to allow for
the necessary scalability and flexibility to handle both top-down and bottom-up asset management.
Embrace data analytics
Electrical utilities are in the process of moving into the data analytics business. This is the result of several
global forces – one being the proliferation of less expensive electronic monitoring technologies and the
speed and availability of communications systems.
Also, everyone wants to have the ‘smartest’ grid possible. As a result, an unprecedented amount of raw data
is being collected by utilities each day. On the one hand, all that data creates a real opportunity for utilities
to better monitor and understand how a device or system is operating. On the other hand, converting that
sea of data into actionable information can be a daunting task. Therefore, it is imperative to have an asset
management system that can handle, integrate, and verify the data to maximize its value.
Figure 6.1 – Transforming from reactive to pro-active Asset Management
A major strength of a mature asset management system is the ability to bring all the data into one ‘store
house’ and develop algorithms that can analyze the data and predict which devices should be ignored,
maintained, or replaced.
Where are utility data analytics today?
At this point, most utilities are still in the first, information-based phase of descriptive and diagnostic
analytics. In other words, the data sets are used to answer questions such as "What happened?" or "Why did
it happen?", while some utilities do not even use the data to explore those vital concerns. Figure 6.2
(Gartner's value curve) illustrates this.
Only a few utilities leverage available datasets to design optimizing predictive and prescriptive analytics
solutions that address questions such as "What will happen?" or "How can we make it happen?" This is not
surprising given the increasingly difficult nature of the problems as well as the need for more advanced data
scientists, which utilities do not usually have in their own workforce. While those would still be available from
top consulting firms, utilities are also mandated to protect the privacy of their customers as well as the cyber
security of their infrastructure. That makes it difficult for them to provide the collected data to external
parties and have those perform the necessary data discovery as well as the development and deployment of
the desired data analytics. There is still plenty to do in order to achieve true value from the collected data at
all levels of difficulty. Unlike the scenario anticipated by many analysts in the last few years, utilities by and
large are still in the first phase, where:
- data is only collected without specific objectives ('Yikes! - we have a lot of data')
- data is stored, secured and made available (data fortress)
- data is used in basic reporting to deliver information about what happened, with limited data representation and without intuitive explanations (basic reporting)
- data is feeding simple dashboards using dynamic data representation to answer the question "What happened?" in a more intuitive manner (business intelligence)
Figure 6.2 – Analytics capabilities framework
While the above types of value extracted from data are certainly helpful to increase situational awareness at
some enterprise levels, they do not support comprehensive analysis that leads to closed-loop automation
with elements such as actionable triggers and real-time decision making.
Tomorrow's utility data analytics will execute on real-time and near real-time data. It will be predictive and
prescriptive in nature to support the necessary modeling and planning based on historic data. And it will
drive business transformation where business process change is initiated by analytics-derived information.
Utility big data capabilities to increase value from utility data analytics
The concept of big data has been around for more than a decade. Its potential to transform the
effectiveness, efficiency, and profitability of virtually any enterprise has been well documented. Yet, despite
the concept of big data being well-defined, and the general enormity of its opportunity well-understood, the
means to effectively leverage big data and realize its promised benefits still eludes many.
Big data’s remaining challenge that prevents the realization of these benefits comes in two parts. The first is
to understand that the true purpose of leveraging big data is to take action - to make more accurate
decisions, more quickly. We call this situational awareness. Regardless of industry or environment, situational
awareness means understanding what you need to know, having control of it, and being able to analyze it in
real time to identify anomalies in normal patterns or behaviors that can affect the outcome of a business or
process. With these capabilities in place, making the right decision in the right amount of time in any context
becomes much easier.
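As a simplified illustration of spotting an anomaly in a normal pattern, the sketch below flags readings that deviate strongly from a rolling baseline. It is a generic rolling z-score check, intended only to make the idea tangible, not a description of any particular vendor's analytics.

```python
# Simplified situational-awareness sketch: flag readings that deviate strongly
# from a rolling baseline (z-score test). Data and thresholds are illustrative.

from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=10, threshold=3.0):
    """Yield (index, value) for readings far outside the recent baseline."""
    history = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield i, value
        history.append(value)

# Feeder load in MW, mostly stable with one abnormal excursion.
load = [5.1, 5.0, 5.2, 5.1, 5.0, 5.1, 5.2, 5.0, 5.1, 5.2, 5.1, 9.4, 5.1]
for i, v in detect_anomalies(load):
    print(f"sample {i}: {v} MW deviates from recent baseline")
```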
The second part is scalability. Achieving situational awareness used to be much easier because data volumes were smaller, and new data
was created at a slower rate, which meant our worlds were defined by a much smaller amount of
information. But new data is now created at an exponential rate, and therefore any data management and
analysis system that is built to provide situational awareness today must also be able to do so tomorrow.
Thus, the imperative for any enterprise is to create systems that manage big data and provide scalable
situational awareness.
The utilities industry is in particular need of scalable situational awareness so that it can realize benefits for
a wide range of important functions that are critical for enabling smart grid paradigms. Scalable situational
awareness for utilities means knowing where power is needed, and where it can be taken from, to keep the
grid stable. When power flow is not well understood, the resulting consequences can quite literally leave
utilities and their customers in the dark: a fitting-though-ironic analogy considering the goal of awareness.
Utilities can learn much about how to achieve scalable situational awareness from other industries, most
notably building management and telecommunications, which have learned to deal well with big data’s
complexity and scale.
The utility industry's time scales vary over 15 orders of magnitude due to the unique diversity of sensors
and critical business processes, often at much faster intervals than in other industries. When trying
to create scalable situational awareness, this impacts all five V's of the industry's big data pressures.
Analyzing huge volumes of data that span multiple orders of timescale magnitude exceeds the abilities of
traditional data management technologies. Traditional methods of data management, such as relational
databases (RDB) or time-serialized databases, may not have the capability to capture the causal effects of
events occurring in the millisecond or microsecond range over years or decades, and therefore cannot
meet the scalable situational awareness needs of real-time smart grids. Additionally, such an array of devices
and processes create an especially-wide variety of data types and formats that must be considered when
making any decision, and thus for enabling scalable situational awareness. The following figure (Figure 6.3)
summarizes the complexity of utility big data use cases.
Figure 6.3 – Definition of the utility big data problem

A typical utility asset infrastructure is composed of thousands of networked asset components which result
in petabytes of rich and linked grid asset data with deep inheritances (Figure 6.4).

Figure 6.4 – Illustration of the Utility Assets Big Data Challenge

The datasets are not only large in volume but also vary substantially due to the variety in data types and
several orders of magnitude in terms of sample rates. There is a spectrum of data velocity, variety, validity
and veracity. In addition, the base of data generating technology is growing at an exponential rate. Taking
all that into account it can only be concluded that a predictive and prescriptive asset management and
optimization problem for a complete utility enterprise asset infrastructure with asset monitoring, control and
maintenance at the individual component level would greatly benefit from big data management and
analytics capabilities.
Routine maintenance and repairs to power lines and other grid infrastructure account for a substantial
portion of utilities’ ongoing costs. With a sophisticated data management system that enables advanced
analytics, fault locations can be identified more precisely and characterized even before a truck is sent to fix
it. This can also allow utilities to determine if a truck and crew are needed to fix a problem at all, resulting in
immediate cost savings. Given what has been laid out in previous sections it should be clear that a
comprehensive predictive asset management and optimization solution should leverage big data capabilities
to utilize the power of asset information at the individual as well as collective level. It should take advantage
of advanced parallel computing capabilities (grid and cluster) as well as virtualization and cloud
infrastructure. The utility asset infrastructure is evolving more and more into a network of networks. Monitoring,
controlling, modeling and simulating this infrastructure cannot be done without advanced big data engines that
leverage top-down and bottom-up approaches, a trend that will continue in general and is
essential for predictive asset management and optimization.
Data analytics systems requirements for scalable situational awareness in utility asset networks
The underlying data management and analytics solutions required to provide scalable situational awareness
for intelligent utility asset networks must have five key characteristics: flexibility; interoperability through
connectivity; a control network; open, standards-based data management technologies; and scalable data
analysis.
Flexibility - Unlike many industries, power delivery is notoriously variable, with daily, weekly, and annual
variations due to variability in customer load, generation dispatch, delivery system outages, and other
reasons. This variability has challenged the industry to discern patterns that can be used to identify
abnormal conditions and anomalies that spur critical decision-making processes. Advanced object-oriented
database technologies can handle voltage and current rate data just as easily as any
other type of data from any other industry. By embedding a variety of different data object models to
capture the different energy data types, as well as corresponding sample rates, object-oriented
programming allows for an integrated data management and analytics concept. It creates the necessary
flexibility to deal with the challenging characteristics of big energy data in real-time. Fast and reliable data
retrieval, suitable data formats for data analysis, one object-oriented programming language (for DDL and
DML), connectivity between objects without application code, direct use and storage of object identities, and
advanced, as well as traditional data management, features merged together represent critical values of a
fully-integrated object-oriented data management and analytics solution. This is what gives you the
situational awareness that is needed for utilities: understanding the immediate value of making a decision to
solve an abnormality in normal data patterns within a relevant time frame.
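A minimal sketch of what such embedded data object models might look like is given below, with separate objects for measurement streams of different sample rates, diagnostic records and the asset itself. The class names and fields are hypothetical and not tied to Versant's or any other vendor's API.

```python
# Minimal sketch of object models for heterogeneous asset data with different
# sample rates. Class names and fields are hypothetical, not a vendor API.

from dataclasses import dataclass, field
from typing import List

@dataclass
class MeasurementStream:
    asset_id: str
    quantity: str          # e.g. "voltage", "current", "top-oil temperature"
    unit: str
    sample_rate_hz: float  # streams may differ by many orders of magnitude
    samples: List[float] = field(default_factory=list)

@dataclass
class DiagnosticRecord:
    asset_id: str
    test_name: str         # e.g. periodic oil analysis, breaker timing test
    result: str
    timestamp: str

@dataclass
class AssetRecord:
    asset_id: str
    asset_class: str
    nameplate: dict
    streams: List[MeasurementStream] = field(default_factory=list)
    diagnostics: List[DiagnosticRecord] = field(default_factory=list)

tx = AssetRecord("TX-042", "power transformer", {"rating_MVA": 40, "year": 1987})
tx.streams.append(MeasurementStream("TX-042", "top-oil temperature", "degC", 1 / 60))
tx.streams.append(MeasurementStream("TX-042", "load current", "A", 50.0))
tx.diagnostics.append(DiagnosticRecord("TX-042", "oil DGA", "acetylene elevated", "2016-08-14"))
print(f"{tx.asset_id}: {len(tx.streams)} streams, {len(tx.diagnostics)} diagnostic records")
```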
Interoperability and connectivity – The intelligent utility asset network of the future will be a massive
collection of devices, sensors, actuators, and systems, all of them creating ever-larger data volumes and
ever greater analytics complexity. In this form, this will be a hugely complex network that must have full
accessibility of all these devices and sensors. Central to enabling this is Internet connectivity, something
again that Versant’s technologies have proven highly capable of by managing data and analysis for many
global telecommunications service providers.
Control network - Not only is collecting all of the asset data that asset sensors and devices produce a
challenge, but all of these devices must be fully communicative, interconnected, and critically controllable.
The decisions made based on having full situational awareness must be rapidly translated into the
functioning grid, which, like enabling interoperability, requires a single, cohesive control system enabled
heavily through Internet connectivity.
Open, standards-based data management systems – A network as complex, variable, and fast-moving
as the intelligent asset grid requires billions of devices, sensors, and machines. It is impossible to expect
that any one data management technology vendor's systems will be used across every grid application and
scenario. But more to the point, smart grids will be integral to the everyday life of billions of people, so as
new technologies are developed and adopted over time the smart grid must be able to adjust and change
the data management systems to meet new requirements. To enable this, utilities must leverage open
system architectures across five specific areas to permit ease of adoption and avoid costly vendor lock-in:
- Network infrastructure: Includes protocol, routers, media type, IT connectivity, etc.
- Control devices: Heavily-utilized devices that produce, consume, and manipulate data, as well as control and monitor the energy grid network.
- Network management and diagnostic tools: Enable configuration, commissioning, and maintenance for the system.
- Human-machine interface (HMI): Includes the visualization tools through which users and managers obtain a view into the system, including both PC software and instrumentation panels.
- Enterprise/IT level interface: Connects the control network into the data network. No gateways other than open systems standards-based routers and IT-based data exchange mechanisms are used.
A critical sixth factor is the data management system itself, which must also be considered part of this open
standards-based architecture. It serves as the configuration database for the complete network of the
grid, storing the configuration profile data of every device participating in the open, fully interoperable and
integrated control network, and enabling effective communication and control between them all.
Scalable data analysis - Utilities will face immense data volume increases over the next several decades,
making the job of ensuring the validity and veracity of data analysis ever harder. Open architectures and
data management technologies will play a pivotal role in enabling data analysis that scales to these new
volume demands. These systems must not only be capable of dynamically scaling to account for and
manage increased data complexity, but also sheer volume as new types of devices are deployed on the grid
network.
7 PROACTIVE ASSET MANAGEMENT AND OPTIMIZATION DRIVEN
BY PREDICTIVE AND PRESCRIPTIVE ANALYTICS IN COMBINATION WITH ADVANCED DATA MANAGEMENT, FIELD
TESTING AND ONLINE MONITORING METHODOLOGIES
Today, asset management is one of the most critical components of the utility business model. The
identification of asset health is instrumental in the approach. It is driven by asset field testing as well as
asset online monitoring. While field testing has only limited possibilities of application, online monitoring
becomes more and more important because the asset infrastructure can remain in service.
While current asset management is reactive in nature for most utilities, the newly available data streams
from asset online monitoring offer tremendous opportunity for development and deployment of more
advanced proactive predictive and prescriptive analytics solutions supported by capabilities such as big data
engines and advanced computing. As a result, top-down and bottom-up concepts can be applied to asset
management, going from the asset class to the individual asset level; the predictive and prescriptive concept,
embraced by asset criticality and risk, can be integrated into the asset management approach to move from
reactive to proactive asset management; situational awareness in the asset infrastructure becomes more
and more real-time and event driven; and informed decisions can be taken without excessive delay.
One of the key elements in this transformation toward a more proactive and data driven asset management
is a properly defined asset management system software which can model the asset infrastructure, identify
bottlenecks, and act where needed. If a utility is collecting more data, it only makes sense to put that data
to use in as many ways as possible to maximize ROI. The most obvious use is to evaluate the criticality,
health and risk of individual devices. Engineers can use standard industry evaluation criteria, such as
performing maintenance on breakers after ‘X’ number of operations or when a single event had a fault
current above ‘Y.’ With the right asset management system, utilities can also create their own evaluation
criteria quite easily.
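As an illustration, such criteria translate into a few lines of code: flag a breaker for maintenance after a configurable number of operations, or after any single fault interruption above a chosen current. The thresholds 'X' and 'Y' below are placeholders, exactly as in the text.

```python
# Sketch of configurable evaluation criteria for breakers: flag maintenance
# after X operations or after any single fault current above Y kA.
# X and Y are utility-chosen placeholders, as in the text.

def needs_maintenance(operation_count: int, fault_currents_ka,
                      x_operations: int = 2000, y_fault_ka: float = 25.0) -> bool:
    if operation_count >= x_operations:
        return True
    return any(i >= y_fault_ka for i in fault_currents_ka)

print(needs_maintenance(operation_count=2150, fault_currents_ka=[8.2, 12.4]))   # True (operations)
print(needs_maintenance(operation_count=340, fault_currents_ka=[31.0]))         # True (fault current)
print(needs_maintenance(operation_count=120, fault_currents_ka=[6.5, 9.1]))     # False
```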
Risk-based maintenance – case study
The following case study demonstrates risk-based maintenance leveraging a study titled “Evaluating oil-filled
Circuit Breakers using CHR Criteria” that can be found in [5]. In this study, engineers at a large investor-
owned utility (IOU) identified the most important risk factors associated with the failure of oil-filled circuit
breakers. They created an algorithm to calculate the chance of failure and rated each of its approximately
20,000 oil-filled circuit breakers in the following four areas:
1. Overstress (A)
2. High maintenance (B)
3. Bushing type (C)
4. Manufacturer (D)
In each category, every breaker was given a score of ‘0’ through ‘3’. The higher the score, the greater the
concern. For example, certain bushing types had a history of failure, so that any breaker with that type of
bushing automatically received a score of ‘3’ for “Bushing Type.”
Also, historical data showed that overstressed breakers were at significantly greater risk of failure. This was
addressed by creating an algorithm which weighted in the “Overstress” criterion by a factor of 6.
A final score (0…3) was calculated for each breaker using the following algorithm:
$$\text{Final Score} = \frac{6A + B + C + D}{9}$$
Based on the calculated final score the following recommended maintenance activity was triggered for every
breaker:
$$\text{Maintenance Action} = \begin{cases} \text{No Action} & \text{if Final Score} = 0 \text{ or } 1 \\ \text{Close Monitoring} & \text{if Final Score} = 2 \\ \text{Replace Breaker} & \text{if Final Score} = 3 \end{cases}$$
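The scoring rule translates directly into code. The sketch below implements the weighted final score and the action mapping described above; rounding the fractional score to the nearest integer is an assumption, since the source lists only integer score cases.

```python
# Sketch of the case-study scoring rule: each breaker is rated 0-3 on
# Overstress (A), High maintenance (B), Bushing type (C) and Manufacturer (D),
# the overstress criterion is weighted by 6, and the final score drives the action.
# Rounding the fractional score to the nearest integer is an assumption here;
# the source only lists integer score cases.

def final_score(a: int, b: int, c: int, d: int) -> float:
    return (6 * a + b + c + d) / 9.0

def maintenance_action(score: float) -> str:
    rounded = round(score)
    if rounded <= 1:
        return "No Action"
    if rounded == 2:
        return "Close Monitoring"
    return "Replace Breaker"

# Example breaker: heavily overstressed (A=3), otherwise moderate concerns.
s = final_score(a=3, b=1, c=2, d=1)
print(f"final score = {s:.2f} -> {maintenance_action(s)}")   # 2.44 -> Close Monitoring
```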
As a result of this evaluation, the utility scheduled the replacement of 800 of its oil-filled breakers (4%) over
a ten year period. Roughly 1,400 breakers (7%) were monitored more closely. About 89% of the breakers
did not require any action. Figure 7.1 illustrates the percentage split of the identified
maintenance actions:
Figure 7.1 – Oil-filled circuit breaker CHR results
By using the CHR approach, the utility identified where the greatest risk existed and took action to reduce it.
This capability represents one of the benefits of a robust AM system.
Also, predictive and prescriptive maintenance systems have the capability to determine and set thresholds
that trigger maintenance (or replacement) to reduce the risk of failure. For example, a transformer can be
operated under heavy-load conditions for a long time without suffering undue damage. But, if a transformer
is overheated once, its life span can be reduced to essentially zero. Preventing a transformer from crossing
the threshold (from ‘hot’ to ‘too hot’) can mean the difference between regular maintenance and potential
replacement.
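A minimal sketch of such a threshold check is shown below: when the monitored temperature approaches a limit, an alarm is raised before the transformer crosses from 'hot' to 'too hot'. The temperature limits are illustrative assumptions, not values from a loading standard.

```python
# Sketch of a simple 'hot' versus 'too hot' threshold check for a transformer.
# Temperature limits are illustrative assumptions, not standard loading limits.

def temperature_status(top_oil_c: float, alarm_c: float = 95.0, trip_c: float = 105.0) -> str:
    if top_oil_c >= trip_c:
        return "TOO HOT - reduce load / de-energize, inspect for insulation damage"
    if top_oil_c >= alarm_c:
        return "HOT - raise alarm, schedule maintenance before further loading"
    return "normal"

for reading in (82.0, 97.5, 108.0):
    print(f"{reading:.1f} degC -> {temperature_status(reading)}")
```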
In addition, a benefit of moving to a predictive/prescriptive or reliability-centered maintenance system is the
ability to use CHR to optimize non-operational aspects of the corporation. This can include required reports on
reliability metrics (SAIDI, SAIFI, MAIDI, MAIFI) and on regulatory compliance.
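For reference, the sustained-interruption indices follow standard definitions: SAIFI is the total number of customer interruptions divided by the number of customers served, and SAIDI is the total customer interruption duration divided by the number of customers served. A small sketch with made-up outage data:

```python
# SAIDI / SAIFI calculation from outage records using the standard definitions.
# Outage data below is made up for illustration.

outages = [
    # (customers interrupted, duration in minutes) for each sustained interruption
    (1200, 95),
    (430, 40),
    (2500, 180),
]
customers_served = 50_000

saifi = sum(n for n, _ in outages) / customers_served          # interruptions per customer served
saidi = sum(n * dur for n, dur in outages) / customers_served  # minutes per customer served

print(f"SAIFI = {saifi:.3f} interruptions per customer served")
print(f"SAIDI = {saidi:.1f} minutes per customer served")
```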
Asset management systems can provide a host of benefits to utilities wanting to capitalize on their data
systems and maximize asset health and reliability. Asset management, when systematically applied:
- Collects and analyzes available data and uses it to make informed decisions about the conditions of the equipment.
- Identifies and schedules necessary maintenance on the most critical assets, while reducing or eliminating unnecessary work.
- Reduces device, personnel and system risk by eliminating unnecessary maintenance and inspection work.
- Determines the most cost-effective capital replacement plan.
- Provides regulatory compliance information and reporting capabilities.
- Improves reliability by managing system risk, thereby improving customer satisfaction and increasing revenue.
8 REFERENCES
1. “A Case for Best of Breed Technical Asset Management and Predictive Maintenance Utility Software –
A Solution for Engineering Operations and Asset Management”, White Paper 2013, Digital
Inspections.
2. “Asset Management of T&D Equipment and Integration of Renewables Needs Advanced Field Testing
Methodology”, Paul Leufkens, February 2016.
3. “Data Correlation – Effectively Combining Grid Data with Public Data and Social Media Data to
Maximize Forecasting Accuracy,” T. Borst and P. Myrseth, DNV GL Presentation, 2016.
4. “Flexibility in Wind Power Interconnection Utilizing Scalable Power Flow Control,” P. Jennings, F.
Kreikebaum, and J. Ham. CIGRE Grid of the Future Symposium, 2015.
5. “Fundamentals of CIM for Big Data Integration and Interoperability,” S. Pantea, N. Petrovic and I.
Kuijlaars, Presentation, Grid Analytics Europe, April 2016.
6. “Growing an Asset Management Program – Steps to Take and Advantages along the Way”, White
Paper 2014, DNV GL AS.
7. “Leveraging Big Data and Real-Time Analytics to achieve Situational Awareness for Smart Grids”.
White Paper 2012, Versant.
8. “Overview of Non-intrusive Condition Assessment of T&D Switchgear,” N. Uzelac, R. Pater and C.
Heinrich, Paper AS-101, CIGRE Symposium, 2016.
9. “Smart Cable Guard – A Tool for On-Line Monitoring and Location of PDs and Faults in MV Cables –
Its Application and Business Case,” Fred Steennis et al., Paper 1044, CIRED 23rd International Conference on
Electricity Distribution, June 2015.
10. “Final Report of the 2004–2007 International Enquiry on Reliability of High Voltage Equipment, Part 2 –
Reliability of High Voltage SF6 Circuit Breakers,” CIGRE Technical Brochure 510, CIGRE Working Group A3.06,
October 2012.
About the authors
Bert has spent more than 20 years with technology and consulting companies such as DNV GL, Siemens, General Electric, Versant and Supertex creating, leading and delivering projects for high-voltage power transmission and electric transportation networks, industrial manufacturing as well as big data analytics and automation software to
serve large-scale, mission-critical infrastructures. He earned a Masters and Ph.D. in Technical Cybernetics and Automation from the University of Rostock and an MBA from the Kellogg School of Management at Northwestern University.
Bert Taube Contact Info: [email protected] 408 307 4424
Paul Leufkens, President of the consulting firm Power Projects Leufkens, has more than 20 years of experience in the power sector. He has worked internationally in Business Development and Leadership for consulting and testing companies, including 13 years with KEMA in the Netherlands as well as in Chalfont, PA. Previously, Paul directed product development for the T&D cable industry and switchgear manufacturing. He holds an MS EE
degree from Delft Technical University in the Netherlands.
Paul Leufkens
Contact Info:
267 963 8812
Jim Weik is Regional Sales Manager for DNV GL Software’s Electric Grid product center. For the past six years, he has managed sales of asset management solutions for electric utilities in North America. He has over
30 years' experience in sales management of engineered solutions, with 17 years' experience in Asia. He holds an undergraduate degree in Mechanical Engineering from Washington University in St. Louis and an MBA from Webster University in St. Louis.
Jim Weik Contact Info: [email protected] 541.752.7233 x 76115
Jesse Dill is the Global Marketing Manager for DNV GL Software’s Electric
Grid product center. He manages digital campaigns and outreach designed
to help electric utilities adapt their business processes and systems to meet the challenges of the modern power market. He has over a decade of business consulting and marketing experience, with 4+ years in the electric utility industry. He holds an undergraduate degree in Business Management as well as an MBA from Oregon State University.
Jesse Dill Contact Info: [email protected] 541 752 7233 x 76114
ABOUT DNV GL Driven by our purpose of safeguarding life, property and the environment, DNV GL enables organizations to advance the safety and sustainability of their business. We provide classification and technical assurance
along with software and independent expert advisory services to the maritime, oil and gas, and energy industries. We also provide certification services to customers across a wide range of industries. Operating in more than 100 countries, our 16,000 professionals are dedicated to helping our customers make the world
safer, smarter and greener.
SOFTWARE
DNV GL is the world-leading provider of software for a safer, smarter and greener future in the energy, process and maritime industries. Our solutions support a variety of business critical activities including design and engineering, risk assessment, asset integrity and optimization, QHSE, and ship management. Our worldwide presence facilitates a strong customer focus and efficient sharing of industry best practice
and standards.