Reliability of future power grids


EXECUTIVE SUMMARY

Our present-day society has become highly dependent on a reliable electricity supply due to increasing networking and electronic information exchange requirements. This dependency will further increase and, together with the shift in the fuel mix – the substitution of electricity for other energy sources – and the growing share of variable renewables in an ageing infrastructure, creates a 'double risk trend'. Investors, owners and operators of electrical power systems are becoming increasingly uncertain about whether the system is adequate for the task, performs as intended (fit for purpose now and in the future), is capable of recovering quickly after failures and outages – and, of course, is acceptable to society and sustainable.

To mitigate this double risk trend, stakeholders and customers require independent quantitative assessments performed with a validated suite of tools. The stakes are high, and roles and objectives are sometimes incompatible or even conflicting. Technology and regulatory developments can have both positive and negative consequences for reliability. A single tool is not sufficient for addressing all types of reliability issues relating to future power grids. Expert judgment is needed to select the right tools, validate models, measure relevant data and translate the results into information that answers the questions and allows the correct decisions to be taken.

This paper summarizes DNV GL's position on the reliability of future power grids. We are already working in different areas, for and with customers, to improve reliability and mitigate the double risk trend. We take a holistic, integrated approach, combining electrical engineering technology with mathematical techniques, our knowledge and experience of testing and risk assessment, and our understanding of markets and regulation.

Contact Details: Theo Bosma, Arnhem, Netherlands






CONTENTS

Development of Reliability of Power Systems
Considering Past Events; Examples of Blackouts
Transmission
Distribution
Customer View
Developments in Generation, Loads, and Markets
Technical Developments for the Grid
Markets, Society, and Regulation Developments
Summary of Developments
Developments in Analysis and Tools
Reliability Does Not Stand Alone
Testing of (E)HV Components
Testing LV and MV Components for Smart Grids
Testing of Systems
Optimising Distribution Automation
Optimisation of Storage
Reliability of Offshore Grids
Stochastic Power Flow
Power-Flow Simulation Allowing Temporary Current Overloading
Rare Event Simulation Using a Splitting Technique
Using Structural Reliability Methods
Setup Data Collection
Cooperation with Academia and Key Clients
Position





INTRODUCTION

Our society is dependent on access to a reliable electric energy supply. With ever-increasing information exchange and networking, this dependency will increase further and, together with the shift in the fuel mix – the substitution of electricity for other energy sources – we are heading towards unprecedented changes in the power system. Largely precipitated by the unstoppable transition to a sustainable energy supply, this is a rather volatile process. Numerous illustrative examples are available, including the German "Energiewende", the discovery of large quantities of shale gas in the US, the North Sea grid initiative for connecting large numbers of offshore wind power plants, and the Chinese target of 35 GW of installed solar power generation capacity by 2015. The scene is set for change. Among the various examples of this are: the exponential increase of installed HVDC converter capacity, the first thoughts of a meshed hybrid (AC+DC) grid to strengthen Europe's ageing transmission system and, simultaneously, the deployment of smart meters, the introduction of active demand and demand response through agents, the rise of electric vehicles (EV), and the massive deployment of solar (PV) systems in the distribution grid.

As we are in a global transition to a sustainable energy system, the nature and function of the future power grid will change drastically due to new technology in generation, transmission & distribution, and end-use, and due to new stakeholders, markets and policies. Our increased dependency on electric energy has been recognised by the general public and has also attracted the attention of writers and Hollywood, with books, films, and TV series being produced that have "blackout" as a theme. It is a huge challenge to ensure the reliability of an affordable future power grid with large amounts of fluctuating renewable energy sources (RES) and to ensure that power is delivered to the right place, at the right time, and in the right quantity. There is considerable uncertainty, which necessitates novel thinking and new analysis tools, along with validation and de-risking of the operation and control of the grid. DNV GL is taking the initiative in this area. Together with other companies and clients, DNV GL is working on the development of suitable future tools and is open to further collaboration.


RELIABILITY OF POWER GRIDS

DEVELOPMENT OF RELIABILITY OF POWER SYSTEMS

Our present electric power system has evolved over more than 100 years. However, increased reliability came at the cost of a more complicated network configuration. In order to improve reliability and availability, the transmission grids of different power companies were interconnected so that generating resources for peak shaving and backup capacity could be pooled. The outcome was a complicated transmission infrastructure in which the power-flow calculation became similarly complicated. Nevertheless, because of the relatively small number of generators with predictable generation and loads, the relatively simple grid topology, and the relatively few scenarios to be checked, the calculation tools available were sufficient to determine the power flow in the different transmission lines. The main reliability requirement was that the power system be able to handle the failure of a single component and adjust the generation and transmission schedules such that the effect of the failure was not felt by the end consumers; this is referred to as the deterministic N-1 reliability criterion. The power-flow calculations to check the grid contingencies for all possible failures remained manageable.

Deregulation of the power generation, transmission, and distribution industry resulted in economic considerations becoming the driving factors in the loading of the power system. Some lines are loaded to their limits, and power is transmitted over longer distances than in a regulated and hierarchical grid system. The grid has also grown in complexity, and so have the power sources. When a system experiences a single component outage that is not cleared, it is no longer N-1 secure: another outage during the downtime of the first component could have catastrophic results for the power system. The increased penetration of variable renewable energy sources (RES), such as wind and solar, combined with the highly interconnected power grid, means that considerable computing power and time are required to cover all possible scenarios and satisfy the N-1 security criterion. The problem is complicated by the fact that it becomes increasingly difficult to schedule maintenance due to the increased loading of the lines and cables; expansion is not the answer, because obtaining permission for new transmission lines takes many years, so upgrading the existing transmission lines is the alternative solution. More and more calculations must be performed in shorter time intervals (to cope with the fluctuations), for grids that have become bigger and more complex, that have to accommodate more (market-driven) power flows, and that also have to cover many more scenarios to ensure secure and stable operation.
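The deterministic N-1 check can be illustrated with a toy example: solve a lossless DC power flow for a small network, then drop each line in turn and test the surviving lines against their thermal limits. The three-bus system, reactances, injections and limits below are invented for the sketch and do not come from any study cited in this paper.

```python
import numpy as np

# Hypothetical 3-bus system (all values invented for illustration).
# Each line: (from_bus, to_bus, reactance_pu, thermal_limit_pu)
LINES = [(0, 1, 0.1, 1.0), (1, 2, 0.1, 1.0), (0, 2, 0.2, 0.8)]
P = np.array([1.0, -0.5, -0.5])  # net injections in pu; bus 0 is the slack

def dc_flows(lines, p, slack=0):
    """Solve a lossless DC power flow and return per-line active-power flows."""
    n = len(p)
    B = np.zeros((n, n))
    for i, j, x, _ in lines:
        b = 1.0 / x
        B[i, i] += b; B[j, j] += b
        B[i, j] -= b; B[j, i] -= b
    keep = [k for k in range(n) if k != slack]  # remove slack row/column
    theta = np.zeros(n)
    theta[keep] = np.linalg.solve(B[np.ix_(keep, keep)], p[keep])
    return [(theta[i] - theta[j]) / x for i, j, x, _ in lines]

def n_minus_1_overloads(lines, p):
    """Drop each line in turn and collect post-contingency overloads."""
    violations = []
    for k in range(len(lines)):
        remaining = lines[:k] + lines[k + 1:]
        for (i, j, _, limit), f in zip(remaining, dc_flows(remaining, p)):
            if abs(f) > limit:
                violations.append((lines[k][:2], (i, j), round(abs(f), 3)))
    return violations

print(n_minus_1_overloads(LINES, P))
# Losing line 0-1 forces line 0-2 to carry the full 1.0 pu, above its 0.8 pu limit.
```

Even in this tiny example the cost of the check is one power flow per contingency; for a real interconnected grid with thousands of elements and many generation scenarios, the combinatorics explain the computing burden described above.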



Figure 1.  Large blackouts since 1965 (up to 2012). The chart plots year of occurrence against duration (hours, logarithmic scale), with bubble size indicating the number of people affected, for events in America, Asia-Pacific and Europe: the US Northeastern blackout (Nov 1965), New York City blackout (Jul 1977), geomagnetic storm at Hydro-Québec (Mar 1989), São Paulo blackout (Mar 1999), Northeastern USA/Canada blackout (Aug 2003), Italy blackout (Sep 2003), Southern Greece blackout (Jul 2004), Java-Bali blackout (Aug 2005), UCTE system disturbances (Nov 2006), winter storm in central and southern China (Jan 2008), Zanzibar blackout (2008), Brazil-Paraguay system blackout (Nov 2009), Fukushima aftermath (Mar 2011), India blackout (Jul 2012), and Hurricane Sandy in New York (Oct 2012).

A probabilistic approach to power system reliability is imperative; the research community has indicated that it is the only viable way to cope with the growing complexity of the power system.
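As a minimal sketch of what a probabilistic approach adds over a deterministic one, the following Monte Carlo snippet estimates a loss-of-load probability (LOLP) by sampling the up/down state of each generator; the capacities, availabilities and peak load are invented for the illustration.

```python
import random

# Hypothetical generator fleet: (capacity in MW, availability).
GENERATORS = [(100, 0.95), (100, 0.95), (50, 0.90)]
PEAK_LOAD = 150  # MW (assumed)

def estimate_lolp(gens, load, samples=100_000, seed=42):
    """Estimate the loss-of-load probability by sampling generator states."""
    rng = random.Random(seed)
    shortfalls = 0
    for _ in range(samples):
        # each generator is available with its given probability
        available = sum(mw for mw, avail in gens if rng.random() < avail)
        if available < load:
            shortfalls += 1
    return shortfalls / samples

print(estimate_lolp(GENERATORS, PEAK_LOAD))
# Converges on the analytical value of 0.012 (both 100 MW units down,
# or one 100 MW unit and the 50 MW unit down).
```

With 100 000 samples the standard error here is about 0.0003; for the much rarer events that matter in grid security assessment, plain sampling becomes infeasible, which is why techniques such as the rare-event splitting method listed in the contents of this paper are needed.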

CONSIDERING PAST EVENTS; EXAMPLES OF BLACKOUTS

A blackout is an unplanned, temporary loss of electricity in a large area. Usually many customers are affected, and very often a blackout is characterized by cascading failures in the generation, transmission, and/or distribution system. Major blackouts are usually caused by cascading contingencies, such as a short circuit, an overloaded single component, a generator outage, etc., with complicated interactions. The vulnerability of the system to these low-probability incidents that expand into a cascading outage (the domino effect) increases when the system is already stressed by other causes. These can include operation close to the limits or bulk exchange of power between parts of the system through congested transmission corridors. Forces of nature or extreme weather conditions, such as storms, high temperatures, forest fires, etc., often initiate a cascading outage. The sequence of events leading to a large blackout is diverse, but the result is always the same: an interruption of the electric power supply for many customers. As more people gain access to electric power, consumption increases and the power system becomes more complex; under these conditions, coupled with the increase in extreme weather events, the risk of disruptive blackouts increases. A comprehensive list of major blackouts since 1965 is provided in [1], from which a number of representative power blackouts are plotted in Figure 1. Here the year of occurrence (x-axis), duration (y-axis), and number of people impacted (relative size of the bubbles) are compared with each other. The most common cause of these blackouts was "natural phenomena" (six times), while design and application errors, communication failures, and operator errors were the second largest contributors, each contributing to five blackouts. In the following paragraphs some blackouts are highlighted. It is clear that all blackouts are different and, as the numbers are low, statistical analysis and forecasting are difficult. However, there are a few common characteristics of major blackouts:


- Development is fast, usually catastrophic within 10 to 15 minutes after the triggering event; grid operators are pressed for time, and control and protection settings are not adapted quickly enough to the new "emergency" situation.
- System (state) awareness of the grid operators is often significantly reduced prior to the event.
- A blackout is very seldom caused by one single event. While this demonstrates the effectiveness of the prevailing N-1 criterion in power system planning and operation, it also implies that it is not enough to analyse only the first-order contingencies (events involving the failure of one single component); the number of potential combinations that should be checked is huge.

Event development normally includes the following stages, characterized by different physical phenomena; it is worth mentioning that the first four stages impact relatively limited areas, whereas the last two can have impact over a huge geographical region.

1. Local events rooted in, for example, human error or weather conditions cause lines to trip in a limited area.
2. Power that was previously carried by the tripped lines is automatically re-routed through alternative paths; this, in turn, causes even more lines to be tripped.
3. The healthy part of the system becomes heavily loaded and some improperly configured protections trip even more lines.
4. The widespread line tripping results in a significant imbalance of reactive power in certain areas, and this develops into local voltage collapse.
5. The tripping introduces a strong imbalance in active power for the whole system; this develops into wide-area frequency excursions, and more generators and loads are tripped by under- and over-frequency relays.
6. The topology change in the grid causes large amounts of power to be transmitted over long distances through weak connections; this triggers inter-area power oscillations that spread quickly into very large areas and eventually result in network splitting and/or blackout.

India blackout, summer 2012
The July 2012 India blackout [3] was the largest power outage in history, occurring as two separate events on 30th and 31st July 2012. The outage affected over 620 million people, about 9 % of the world population or half of India's population, spread across 22 of India's 28 states. An estimated 32 GW of generating capacity was taken offline during the outage. Prior to the events, the grid was operated in an insecure state due to a number of planned or forced outages of key transmission lines. Overdraw of power by some local utilities caused a specific line to become strongly overloaded; protection reacted and tripped this connection. This event quickly developed into a widespread cascading event, in which the grid underwent frequency excursions and inter-area power oscillations, resulting in a network split and blackout of a vast area (the Northern Region in the first event and the Northern, Eastern and Northeastern Regions in the second event).

UCTE disturbance, 2006
In the evening of November 3rd 2006, the German transmission system operator (TSO) E.ON Netz disconnected a 400 kV overhead line, Conneforde-Diele, over the Ems River in order to allow the cruise ship "Norwegian Pearl" to enter the North Sea [4]. Immediately after the disconnection, the situation changed rapidly, with unexpected load flows threatening the N-1 safety limit. Soon an electrical blackout had cascaded across Europe, extending from Poland in the northeast, to the Benelux countries and France in the west, through to Portugal, Spain, and Morocco in the southwest, and across to Greece and the Balkans in the southeast.

The UCTE system was split by the tripping of interconnection lines into three separate areas (West, North-East, and South-East) with significant power imbalances in each area. The power imbalance in the Western area induced a severe frequency drop that caused an interruption of supply for more than 15 million European households. The UCTE system was fully resynchronized within 40 minutes of the split, and power was restored to most customers within 2 hours.

Northeastern USA/Canada blackout, 2003
The Northeastern blackout of 2003 [5] was a widespread power outage that occurred throughout parts of the northeastern and midwestern United States and the Canadian province of Ontario on August 14, 2003, just after 4 p.m. EDT (UTC−04).







Figure 2.  System Restoration (from: ENTSO-E RG CE OH, 2nd release). The diagram shows the deterioration of system states towards blackout: the (n-0) state (no loss of elements, no load mismatch, etc.), the (n-1) state (no violation of operational limits), violation of operational limits (global security endangered), and finally system collapse/blackout.

While some power was restored by 11 p.m., for many customers power was not restored until two days later. This blackout affected an estimated 10 million people in Ontario and 45 million people in eight U.S. states. Before the blackout, the system in the Cleveland-Akron area was operating in a secure condition, but with a reduced safety margin due to sustained high temperatures and planned outages. In addition, the energy management system/supervisory control and data acquisition (EMS/SCADA) system of the inter-regional system operator (MISO) had become ineffective due to a flaw in the software design. The triggering event was the failure of a few 345 kV lines due to contact with overgrown trees below them. These failures led to a domino effect, as every failure caused the remaining lines to carry more power and sag down even further; the Cleveland-Akron area became "dark" within an hour of the initial failures. Following this local blackout, the resulting heavy loading and lower voltages on the remaining system triggered a cascade of interruptions, causing power oscillations and generation tripping. As a result, within seven minutes the blackout had spread from the Cleveland-Akron area across much of the northeastern United States and Canada.

TRANSMISSION

There is a clear distinction between transmission and distribution grids. The basic function of transmission grids is to get the required (bulk) power from A (where it is generated) to B (where it is further distributed to the customers). Transmission grid operators have to provide sufficient transmission capacity to ensure reliable operation. They are responsible for keeping the balance between load and generation, and thus for maintaining the frequency close to 50 Hz or 60 Hz (North America/Japan). Regional long-term planning of transmission grids is based on developments and scenarios. In Europe this is done by the European Network of Transmission System Operators for Electricity (ENTSO-E); individual TSOs cover their own grids. ENTSO-E addresses planning in its ten-year network development plans, the TYNDP series [28]. In the main report for 2012 onwards, the method is also described – that is, the "what", not the "how". Separate volumes describe the TYNDP outcomes for each of six regions in Europe. Operation of transmission grids is extensively covered by ENTSO-E, and an impressive amount of information is published on their site [32]. Relevant documents include the Policies, in which, amongst others, both the N-1 approach and "blackout" are defined and described. Many details are available in the Operational Handbook, which,


Figure 3.  Planning process, from: ENTSO-E Ten-Year Network Development Plan 2012 [32] (appendix 3). The diagram combines scenarios (X, Y) and cases (1–3, e.g. low load) with the technical issues to be assessed: voltage levels, cascade tripping, loss of load, thermal loading, curative measures, and short circuit.

as the title says, concerns Operations (starting from a year in advance down to minutes or faster). A typical operating procedure prescribes an N-1 analysis every 15 minutes. Figure 2 shows the system restoration process schematically and Figure 3 shows the planning process. In the planning process, the following technical issues must be addressed:

- Voltage levels: voltage levels in the system should be maintained at the appropriate level and should not collapse under foreseeable load increases in a certain area.
- Cascade tripping: a series of equipment trippings (line, transformer, etc.) in which the first tripping triggers the subsequent ones in a cascade-like manner.
- Stability: the property of a power system that enables it to remain in a state of operating equilibrium under normal operating conditions, and to regain an acceptable state of equilibrium after being subjected to a disturbance.
- Loss of load: the causes of loss of load in power systems can be lack of generation capacity, lack of transfer capacity, and low frequency (usually a consequence of power imbalance).
- Thermal loading: the actual power transferred by specific equipment should not exceed its thermal capacity.
- Curative measures: also known as corrective measures, these are the measures taken by a system operator to mitigate the impact of a given failure/outage; for example, rescheduling the power flow or reducing power demand in one area.
- Short circuit: the maximum and minimum short circuit currents should be calculated at all major stations. The maximum short circuit currents are used to ensure that the equipment can withstand the initial stress caused by the most severe failure.

The general methodology implies a grid analysis in which the base case topology (all network elements available) and different types of events (failures of network elements, loss of generation, etc.) are considered, depending on their probability of occurrence. (Note: in this context "depending" means that a subset is used; the cases with lower probabilities are excluded.) In the evaluation of the results, the consequences are checked against the main technical issues given previously. Risk is implicitly introduced, as acceptable consequences



can depend on the probability of the occurrence of the event. Deterministic criteria are currently used in the planning of the grid. ENTSO-E has already formulated some opinions about non-deterministic approaches. The key challenges for transmission are improving the observability of the grid, achieving "near real-time" power-flow capabilities, and introducing probabilistic techniques to cope more effectively with the vast number of scenarios, grid configurations, and components that may be involved in contingencies.
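The step from a deterministic pass/fail criterion to the risk-based view described above can be sketched as weighting each contingency's consequence by its probability of occurrence; all names, outage rates and load-shed figures below are invented placeholders, not results from any referenced study.

```python
# Hypothetical contingency list: (name, outage probability per year,
# post-contingency load shed in MW). Illustrative values only.
CONTINGENCIES = [
    ("line A-B", 0.20, 0.0),          # frequent but harmless
    ("line B-C", 0.05, 40.0),         # occasional, moderate load shed
    ("transformer T1", 0.01, 300.0),  # rare but severe
]

def risk_ranking(contingencies):
    """Rank contingencies by expected load shed (probability x consequence)."""
    ranked = sorted(contingencies, key=lambda c: c[1] * c[2], reverse=True)
    return [(name, round(p * shed, 2)) for name, p, shed in ranked]

print(risk_ranking(CONTINGENCIES))
# [('transformer T1', 3.0), ('line B-C', 2.0), ('line A-B', 0.0)]
```

A deterministic N-1 screen treats all three events identically; the risk view makes explicit that the rare transformer outage dominates, which is exactly the information a probabilistic planning approach adds.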

DISTRIBUTION

In the decades ahead, distribution network operators will have to invest heavily in the medium voltage grids and their MV/LV distribution substations in order to cope with the increased share of distributed renewable generation. These investments are estimated to be of the same order of magnitude as those needed for transmission, or even bigger; in the ten-year transmission development plan of ENTSO-E, 100 billion euro is mentioned as being needed for the transmission grid. At the distribution level the quantity of grid equipment increases tremendously. In contrast with the transmission level, where power flows, voltages and currents are measured in "near real-time", information about the momentary status of the distribution grid is limited or not available at all. As we move down towards medium and low voltage distribution, the status information becomes less and less; in many places values are only measured once per year at the end-user level (the annual electricity consumption). Figure 4 shows the (estimated) number pyramid for Europe.

There are about 150 connections between countries, some 1,200 extra high voltage (EHV) substations, 15,000 HV substations, 5 million MV/LV substations, more than 300 million customer connections, and more than 5 billion appliances. Everything at the top of the pyramid is automated, whereas only a very small percentage of the 5 million MV/LV substations are equipped to take measurements, and an even smaller percentage have actuators/controllers.

Figure 4.  Estimated number pyramid for Europe

One of the big questions concerns the degree of automation and control that is needed for the various voltage layers of the grid. Obviously, for transmission this factor is 1. Due to the increase of RES at the distribution level, it is estimated that as many as 30 % of the MV/LV substations will have to be automated, upgraded or replaced by 2020 in order to cope with the capacity and voltage requirements. Or should we strive for full automation and control at all levels? This would undoubtedly be expensive – and what about security? Do these developments at the distribution level impact on the transmission level? There is no need to increase transmission capacity, and distribution capacity can even be reduced (or operated for a longer period of time), if generation at small customers' sites is mainly used to cover their own demand and is complemented with balancing/storage at the community level or the local distribution substation. However, renewables are also being placed for massive future production for the market; this is already the case in Germany, with huge amounts of wind power in the north and many medium-sized PV systems (tens of kW up to MW size) on barn roofs and in land parks in rural areas. In such cases, the energy generated locally has to be collected and transported to remote load centres (e.g. the German Ruhr area) and, as a consequence, more transmission capacity is needed. These developments call for an integral smart approach to the transmission and distribution grids, taking into account developments in generation. A key element will be the smart distribution substations of the future, with the ability to manage and control the local low voltage grids and also serve as an intelligent MV-layer interface connected to the HV transmission grid. One issue that will have a very large impact in the future is that when the grid becomes "inverter-rich", and more and more automatic switching (reconfiguration) and active controls of voltage, load, generation and power flow are introduced, its behaviour with respect to short-circuit protection and control will change.

For distribution, the basic reliability theory [11] relies on assuming negative exponential (independent) failures, an underlying Markov process that can be solved analytically and has been extended to Weibull distributions. Approximation formulae are used throughout for failure frequency and duration. The approximations yield results that are expressed as averages.

As long as distribution grids are radially operated (or even designed), reliability analysis can be done by enumeration of all single events. Higher-order events are ignored (minimal cut set philosophy). The duration of outages is influenced by restorative actions, such as switching (automatic, tele-controlled, or manual) or rapid repair/replacement response teams. Additionally, common mode or common cause failures (e.g. bad weather) are accommodated by extending the enumeration or by changing the input parameters. Local circumstances, operator interventions, settings or switching limits mean that this is often a tailor-made approach. Implicitly, many assumptions have to be made, such as assuming that the protection functions correctly, that dimensioning is adequate, that the generation/demand profile is known (or gives no problems), etc. If these assumptions cannot be made, as might be expected for future grids, then techniques used in transmission become necessary, such as state estimation, real-time measurements using phasor measurement units (PMU), automatic protection setting configuration, etc. A new element may result from smart controls at the distribution level (MV/LV), like software agents rescheduling demand, controlling energy storage, or performing fast self-healing restoration procedures.
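The single-event enumeration described above can be sketched for a radial feeder with series sections: every fault trips the feeder breaker and interrupts all customers, customers upstream of the faulted section are restored after a switching (sectionalising) time, and those at or downstream of it wait for repair. The failure rates, repair and switching times, and customer counts below are invented for the example.

```python
SWITCH_H = 0.5  # assumed sectionalising/tele-control time in hours

# Sections from source to feeder end:
# (failures per year, repair time in hours, customers on section)
SECTIONS = [(0.1, 4.0, 200), (0.1, 4.0, 300), (0.1, 4.0, 100)]

def saifi_saidi(sections, switch_h):
    """Enumerate all single-section faults and accumulate the indices."""
    total = sum(n for _, _, n in sections)
    interruptions = 0.0   # customer interruptions per year
    customer_hours = 0.0  # customer hours lost per year
    for k, (lam, repair_h, _) in enumerate(sections):
        for m, (_, _, n) in enumerate(sections):
            # every customer sees the fault; restoration depends on position
            duration = switch_h if m < k else repair_h
            interruptions += lam * n
            customer_hours += lam * n * duration
    return interruptions / total, customer_hours / total

saifi, saidi = saifi_saidi(SECTIONS, SWITCH_H)
print(round(saifi, 3), round(saidi, 3))  # 0.3 interruptions/yr, 0.792 h/yr
```

Reducing `switch_h` (for example by automating the sectionalisers) only shortens the interruptions seen by upstream customers; quantifying that marginal benefit against its cost is precisely the kind of trade-off addressed by the distribution automation optimisation listed in the contents of this paper.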

Huge numbers and many different timescales are involved: from 30 years for long-term infrastructure planning and optimisation, via quality and capacity plans (2 to 5 years), maintenance planning (weeks/months), and power flows for operational support (15 minutes), down to adaptive protection settings and switching operations (seconds) based on real-time measurements. This means that one of the key challenges, apart from deciding on the amount of automation, will be accepting the change of operation with advanced tools: from a passive, stable grid to an actively held stable grid with a safe "fallback mode".










(Shares of LV-customer interruption indices per originating voltage level, per country; country names other than the Netherlands were not recoverable, rows are identified by their MV voltage level.)

MV level                   HV        MV        LV
25 kV                      10.2 %    75.9 %    13.9 %
45–63 kV                    7.9 %    76.8 %    15.3 %
35–120 kV                   0.8 %    69.9 %    29.3 %
20–38 kV                   12.0 %    78.9 %     9.0 %
35 kV                       4.5 %    64.3 %    31.2 %
36 kV (The Netherlands)    21.5 %    61.1 %    17.4 %
Overall average             9.5 %    71.1 %    19.4 %

Figure 5.  Incidents per customer, according to voltage level (CEER)




















Figure 6.  Customer minutes lost per voltage level, the Netherlands, 2003–2012 (min/year); from [20]

CUSTOMER VIEW From the point of view of the customer, it does not matter whether an outage originates in transmission or distribution; both are equally inconvenient, and the customer cannot even tell the difference. Although few countries have provided reliable data regarding the voltage level of incidents, the data available clearly indicate that around 70 % of both SAIDI and SAIFI (System Average Interruption Duration Index and System Average Interruption Frequency Index) for LV users are caused by incidents on MV networks, as illustrated by the table in Figure 5 [21]. Similar results from the Netherlands [20] are illustrated in Figure 6.

The differences are plausible. HV grids are relatively short in length and highly redundant; few failures occur, and those that do mostly do not lead to outages. Furthermore, HV grids are well monitored, fully protected and actively operated, so outages tend to be short. Low outage frequency and short outage duration result in a low value for customer minutes lost (CML), even though an HV outage can affect many customers. In contrast, LV failures are frequent, and fault localisation time and necessary repairs mean their duration may be long. However, as few customers are affected per failure, the overall contribution is low.
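The indices themselves are straightforward to compute from outage records. A minimal sketch, using invented records:

```python
# SAIFI, SAIDI and customer minutes lost (CML) from outage records.
# Hypothetical records: (customers_interrupted, duration_minutes).
outages = [
    (1200, 90),   # MV cable fault, manual switching
    (40, 240),    # LV joint failure, long repair
    (25000, 12),  # HV incident, fast automatic restoration
]
customers_served = 100_000

saifi = sum(n for n, _ in outages) / customers_served      # interruptions/customer
saidi = sum(n * d for n, d in outages) / customers_served  # minutes/customer (CML)
print(f"SAIFI = {saifi:.3f} interruptions/yr, SAIDI = {saidi:.2f} min/yr")
```

Note how the HV incident dominates SAIFI (many customers) while contributing modestly to SAIDI (short duration), and the LV failure barely registers in either, matching the pattern discussed above.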


MV grids are, indeed, in the middle; they are often radially operated with (some) alternative routing available, leading to a moderate frequency of failures and outages, with durations that often depend on manual switching and a significant number of affected customers. This does not mean that anything is wrong, but it raises the question of whether marginal investment in reliability may give better returns for MV grids than for HV or EHV. Answering this question requires that the reliability analysis yields comparable results based on equal assumptions. Values for outage risks should include the “value of non-delivered energy” or similar, either explicitly or implicitly. Interdependencies with other infrastructures (IT, gas, heat, transportation, etc.) must also be included. Customers take an uninterrupted power supply for granted. Electric power grids differ from other infrastructures in physics and regulation: they exhibit faster dynamics, contain fewer controls and so far have no (significant) buffers. They also form a multi-dimensional, multi-stakeholder system connected with other infrastructures. Therefore a holistic approach is needed.



GRIDS DEVELOPING FAST Power grids are developing very quickly. This is due to the new challenges associated with renewable integration, the availability of new technologies like power electronics and storage, and the evolving mind-set of customers (awareness and involvement). Furthermore, there are the regulations that channel development towards a sustainable future energy system and thereby address the energy trilemma (Figure 7). These rapid developments in different areas, combined with increased uncertainty, inevitably lead to an increase in risks.

DEVELOPMENTS IN GENERATION, LOADS, AND MARKETS The growth of the renewable share creates more, larger, and faster fluctuations in generation, leading to a paradigm shift from “generation matches load” to “load matches generation”. Increased dependency on electricity and the democratisation of power generation – the emergence of prosumers – gives rise to new actors, new roles, and new markets.

Affordable & Available

Secure & Reliable

Green & Clean

Figure 7.  The energy trilemma


Figure 8.  DC in the present AC grid [diagram showing: HVDC interconnector; offshore wind farm DC connections; (U)HVDC bulk power link; HV meshed AC grid; HVDC back-to-back connector to another synchronous area; MV ring AC grid; DC power supply for transportation; DC applications; LV radial AC grid; local DC grid]

New actors may provide (distributed) storage, reliability, and automated efficient energy use systems and services. New markets may emerge and develop: global markets for capacity and ancillary services, or local markets (region, city, even neighbourhood) providing flexibility in load and generation, local trading and balancing, reliability services, power quality services, and new types of Service Level Agreements. The new roles that may emerge include, for example, aggregators.

TECHNICAL DEVELOPMENTS FOR THE GRID Transmission Transmission grids are required to provide more and bigger power flows over longer distances. Nowadays, for example, some 15 % of all electricity used in Europe crosses one or more national borders. This calls for an increase in transmission capacity, assessment of its availability under many changing constraints, and more and better steering and control tools (HVDC and FACTS components). Fast-fluctuating generation, such as that from wind farms or PV installations, has a much quicker rate of change than the ramp rate of conventional generating units. This presents new challenges for operations, and should be considered during the planning phase as well. Dynamic rating, presently for components and in future for (sub)systems, e.g. thermal loading, increases utilisation and provides additional flexibility for transmission lines, cables, and transformers. Adaptive protection and control also help: future grids may, for example, be designed for automatic islanding, with autonomous parts that have fewer dependencies and assist in preventing cascading failures.

Large, centrally operated storage, notably hydro, will be needed to “absorb” the higher volatility caused by renewable generation and new, highly coincident loads. A distinction can be made between TSO-controlled and market-controlled storage, and new regulations will be required. The (partial) shift of generation to the distribution level and local balancing will challenge TSOs to seek more flexibility in the distribution grid (aggregated inverter control) and to increase their visibility of, and insight into, what is happening at the lower voltage levels. This is necessary because those levels can have an impact on the operation of the transmission grid.
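The ramp-rate mismatch between fluctuating generation and conventional units can be made concrete with a small calculation. The wind profile and unit ramp rate below are hypothetical:

```python
# Compare the worst ramp in a fluctuating in-feed with the ramp
# capability of a conventional unit (all figures hypothetical).
wind_mw = [300, 290, 240, 170, 160, 200, 260, 310]   # 15-min samples
step_h = 0.25

ramps = [(b - a) / step_h for a, b in zip(wind_mw, wind_mw[1:])]
worst_down = min(ramps)                               # MW/h, negative = falling
conventional_ramp = 120.0                             # MW/h one thermal unit can add

units_needed = -worst_down / conventional_ramp
print(f"worst wind ramp: {worst_down:.0f} MW/h -> "
      f"{units_needed:.1f} such units needed to cover it")
```

Even this toy profile shows why fast-responding resources (hydro, storage, aggregated demand) are attractive: a single deep in-feed drop can outrun the combined ramping of several conventional units.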



The connection of large amounts of remote renewables, e.g. offshore wind farms, introduces HVDC connections into the existing AC grid, but also provides additional control capability and capacity for the grid. This “creeping in” of DC also happens at the low-voltage distribution level, see Figure 8, and will lead to new operational challenges: DC links (of the VSC technology type) exhibit superior control features but do not automatically take over (part of) the power flow if a parallel AC line is tripped. The aim of the e-Highway2050 project is the development of a methodology to support the planning of the Pan-European transmission network and to ensure the reliable delivery of renewable electricity. Pan-European market integration and HVDC technology are seen as key elements for the realisation of this project.


Distribution Increased local generation and greater “horizontal” power flows at the distribution level will probably result in distribution grids developing into more transmission-like systems. Functionality that becomes incorporated includes, for example:
- Greater redundancy, e.g. 20 kV transmission above 10 kV distribution,
- Increased observability through more sensors, e.g. smart meters,
- More protection functionality, like distribution automation or equivalent,
- Smart software agents that will emerge to replace human operators for trading, but will also resolve grid constraints and contribute to local system services.

Other developments that will influence the reliability analysis of distribution grids include:
- More market effects, caused by Demand Response/Active Demand and virtual power plant operation,
- Added storage, through EVs or stationary units. When “second-hand” EV batteries become available for stationary use at more affordable prices, this phenomenon may increase steeply,
- The distribution grid becoming inverter-rich, with grid-connected PV generation, EVs and even (partly) electronic transformers; this has consequences for the protection philosophy and power quality,
- The emergence of more microgrids and autarkic systems,
- DC creeping in as a replacement or alternative for LV AC connections.

For distribution too, self-organising criticality may emerge, since controls, protection, software agents, or customer behaviour may push the grid to its limits. It should be noted that the older heuristic methods of estimating maximum demand for planning are no longer valid. Methods such as Strand-Axelsson or Rusck assume homogeneous, independent loads. In modern grids the loads behave differently, come in small numbers per type, and may be strongly correlated, either through physics (wind, sun) or market effects. Moreover, maximum demand alone is not enough when generation is distributed, nor is the set of combinations (maximum load & minimum generation) / (minimum load & maximum generation), etc. A meaningful analysis needs the full patterns for each load or generator.

Storage again Energy storage may become a truly disruptive development for power grids and has important implications for grid reliability and its analysis. Storage will influence both transmission and distribution and, because it introduces chronology, has a major impact on the methodology of power system analysis. Many questions on storage have yet to be answered: where should energy storage be installed, and at what sizes? Should the focus be on distributed storage (up to a capacity of a few hundred kW), or on the larger MW scale? In both cases, energy storage can be used for balancing the fluctuating renewable generation and load. It can also serve as virtual (spinning) reserve and improve reactive power and voltage control, as well as power system availability. Another trend in energy storage is seen at the (very) small, individual customer level, where home solar PV or micro CHP (combined heat and power) systems are combined with electric energy storage. This is becoming increasingly prevalent as storage technology becomes more affordable. Inverter prices are dropping due to solar developments, while battery prices are also going down due to developments in electric and hybrid vehicles. This trend means that single households can now start to install a grid-connected storage system (as part of an Uninterruptible Power Supply, UPS) in the power range of a few kW for four to eight hours. The consequence of integrating storage into reliability (and other) analysis is huge. Not only can a storage device act as a generator, a load, or a null component, but it also has control functionality through the grid-connected inverter to provide the specific ancillary functions mentioned previously. The cumulative effect of having storage means that chronology becomes important. Furthermore, the behaviour of storage must be included, and the



type and setting of the control must be modelled. The ownership of the control is also relevant: is the storage controlled by a grid operator, i.e. does it follow technical grid conditions, or does it follow market rules, steered by an economy-driven stakeholder? Since adding storage may either improve or degrade grid behaviour, location is also very relevant, and the analysis must consider topology for storage too. Storage also adds extra constraints, as it can become full or empty, and its behaviour will alter based on its own condition.
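The state dependence just described is why chronological (sequential) simulation becomes necessary. A minimal sketch, with all figures hypothetical:

```python
# Sequential (chronological) simulation of a storage unit: the device
# discharges on deficit and charges on surplus, but its response is
# constrained by its power rating and state of charge (SoC).
# All figures hypothetical.

capacity_kwh = 10.0
power_kw = 3.0
soc = 5.0            # initial state of charge, kWh
dt_h = 1.0

net_load_kw = [2.0, 4.0, -3.0, -5.0, 1.0, 6.0]  # +deficit / -surplus
residual = []
for p in net_load_kw:
    if p > 0:   # deficit: discharge within power and energy limits
        out = min(p, power_kw, soc / dt_h)
        soc -= out * dt_h
        residual.append(p - out)
    else:       # surplus: charge within power and headroom limits
        inp = min(-p, power_kw, (capacity_kwh - soc) / dt_h)
        soc += inp * dt_h
        residual.append(p + inp)
print(residual, f"final SoC = {soc:.1f} kWh")
```

The non-zero residuals occur exactly where the device hits its power or energy limits; a snapshot (non-chronological) analysis, which cannot track the SoC from hour to hour, would miss them.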


Timing is also relevant: is there automatic, immediate control, or is there a delay? That is, is the power flow known, will it lead to new settings, and is the behaviour real-time? When multiple storage units are present, they should be designed such that counterproductive behaviour is avoided. However, when stakeholders are independent, such optimisation may not be obvious. When using reliability tools, settings are not always known beforehand; in order to optimise them, tools should be able to handle “all” types and combinations of controls. When storage is included in the grid on a wider scale, it is obvious that conventional reliability and power flow analysis methods are no longer adequate.

Due to rapid developments in the power system, new (sub)markets develop for reliability, (scarce) capacity, quality, and balancing power at the distribution level. Other markets include congestion management and reactive power support. New phenomena also become apparent due to the shorter timescales, e.g. increased imbalance during the hourly interval changeover.

Growth in renewable energy sources (RES) is mainly driven by non-traditional investors:
- Only 12 % of installed RES capacity is owned by traditional utilities,
- Private entities (residential & farmers) account for nearly 50 % of RES capacity.

Strategic investors are often foreign, neighbouring transmission system operators/owners. Their rationale for investment is synergies, political influence, and optimised investments. Examples include TenneT (NL), E.ON transmission (DE), Elia (BE), and Vattenfall transmission (SE). Financial investors expect stable and reasonable returns for perceived low-risk grid investments, and are looking for long-term commitments and diversification in the investment portfolio. Examples are Commerzbank (DE), RWE transmission (DE), IFM (AU), and Vattenfall transmission (SE). Another trend is “re-municipalisation”, which is driven by municipal energy policies, the wish to establish a “greener” energy supply, becoming self-supporting (energy autarky), and obtaining greater citizen involvement in “public service”.

As society (customers) becomes more aware and engages more in disputes and liabilities concerning power system issues, greater transparency is demanded, and this leads to a need for more monitoring and benchmarking. New codes may be developed for future power grids that will cover a broad array of issues, including: operation of grid-connected inverters with RES and storage; requirements for smart, adaptive and autonomous control systems; and knowing who is allowed to control what. However, waiting for new codes is not an option, and before decisions about new codes are made the expected consequences should be analysed. Thus, to assist in the development process it is important to have dedicated tools for different types of codes. This should also enable tools to be applied in different parts of the world where different codes may be observed. The many stakeholders will sometimes have conflicting interests, and therefore independent and impartial expertise is essential.

SUMMARY OF DEVELOPMENTS There is considerable unavoidable and inherent uncertainty, but developments may nevertheless happen faster than TSOs and distribution network operators (DNOs) are able to accommodate, e.g. regarding investments. Thus, power grids need robustness and flexibility, and these should be demonstrated in advance. As uncertainty increases, so does grid complexity. In combination with the many (new) stakeholders, sometimes with conflicting interests, this means that risks will unavoidably arise, but opportunities will also increase. These developments demand action as well as sound, independent, impartial decisions.



CONCERNS ABOUT RELIABILITY Power grid reliability is, like most other practical risk analysis problems, characterised by a large set of interrelated uncertain quantities and alternatives. Risk analysis problems constitute complex systems and generally require modelling of the interrelationships between technical disciplines, humans, and organisations. Within conventional risk analysis, different methods (such as fault tree analysis and event tree analysis) have been developed to model and analyse these complex problems. A fault tree analysis seeks the causes of a given event, while an event tree analysis seeks its consequences. The two techniques are complementary and, when applied correctly, the formulated model may reveal the entire probability structure. Both fault tree and event tree analysis – applied separately and in combination – have previously been used successfully in evaluating the risk of various hazardous activities. Unfortunately, both have their drawbacks. Firstly, it is difficult to include conditional dependencies and mutually exclusive events in a fault tree analysis (a conditional dependency is, for example, the dependence of visibility on the weather; mutually exclusive events are, for example, good weather and storm). If conditional dependencies and mutually exclusive events are included in a fault tree analysis, the implementation and ensuing analysis must be performed with the utmost care. Secondly, the size of an event tree increases exponentially with the number of variables. Thirdly, the global model, which is a combination of fault tree and event tree analyses, often becomes so large that it is virtually impossible for third parties (and sometimes even for first parties) to validate it. It is not uncommon for completed fault and event tree analyses to be submitted to public decision makers with serious flaws incorporated into the modelling of the system. Graphical modelling with Bayesian networks provides a superior tool for validating the system interpretation.
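The conditional-dependency drawback can be illustrated numerically. In the sketch below, two failure causes share a common parent (the weather); conditioning on that parent, as a Bayesian network does by construction, gives a different top-event probability than a fault tree that treats the causes as independent. All probabilities are hypothetical:

```python
# Why conditional dependence matters: two failure causes (tree contact,
# flashover) both depend on the weather. An OR-gate fault tree that
# assumes independence mis-states the top-event probability; conditioning
# on the shared parent (as in a Bayesian network) does not.
# All probabilities hypothetical.

p_storm = 0.05
p_fail = {  # P(cause | weather) for (tree_contact, flashover)
    "storm": (0.20, 0.10),
    "calm":  (0.001, 0.0005),
}

# Correct: condition on weather, combine via OR within each weather state.
p_top = 0.0
for weather, p_w in (("storm", p_storm), ("calm", 1 - p_storm)):
    a, b = p_fail[weather]
    p_top += p_w * (a + b - a * b)

# Naive fault tree: marginal cause probabilities, independence assumed.
pa = p_storm * 0.20 + (1 - p_storm) * 0.001
pb = p_storm * 0.10 + (1 - p_storm) * 0.0005
p_naive = pa + pb - pa * pb

print(f"conditioned: {p_top:.5f}  naive independent: {p_naive:.5f}")
```

The positively correlated causes overlap more often than the independence assumption allows, so the naive fault tree overstates the top-event probability here; with other numbers the bias can go the other way, which is precisely why the dependency must be modelled explicitly.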

DEVELOPMENTS IN ANALYSIS AND TOOLS There are many (partly related) reliability methods, e.g. fault tree analysis, event tree analysis, failure mode, effects and criticality analysis (FMECA), Monte Carlo simulation (MCS), etc. The choice of analysis method depends on the problem to be solved. Power grids, with their typical physics and multiple simultaneous purposes, make the subject more complex. A single power grid reliability standard does not exist. Decisions in the ENTSO-E system planning studies [28] are generally based on deterministic analysis, in which several representative planning cases are considered. Additionally, studies based on a probabilistic approach may be carried out. This


approach aims at assessing the likelihood of risks associated with grid operation throughout the year, and determining the uncertainties associated with such risks. The objective is to cover several transmission system states across the entire year, taking many cases into account. Thus, it is possible to:
- Detect “critical system states” that are not detected by other means,
- Estimate the probability of occurrence of each case assessed, thereby forecasting the Energy Not Supplied (ENS), the Loss of Load Expectation (LOLE), and congestion costs, and
- Facilitate the priority evaluation of the new assets that are needed.

Probabilistic assessment of other variables, like short circuit current, could also be very useful for planning decisions.

The basic idea of probabilistic methods is to create multiple cases depending on the variation in particular (uncertain) variables. Many uncertainties can result in multiple cases being built: demand, generation availability, renewable production, exchange patterns, network component availability, etc. The general method consists of the following steps:
- Definition of the variables to be considered (e.g. demand),
- Definition of the values to be considered for each of the variables, and
- Estimation of the probability of occurrence.

In cases in which a variable with many possible values is considered (e.g. network unavailability), the number of possible combinations may call for a random sampling approach. Building a set with all the planning cases is necessary; the number of cases required will depend on the number of variables and the number of different values for each of them. Each case is then analysed separately and the results are assessed. Depending on the number of cases, a probabilistic approach could be required to assess the results, and a prioritised list of actions could result from this assessment. If the variables used to build the multiple cases are estimated in a purely probabilistic way, then a statistical tool is needed for the assessment. In this case, the probabilistic approach also helps to prioritise the actions needed in a development plan and to identify critical cases that are not known to be critical in advance.

RELIABILITY DOES NOT STAND ALONE

Reliability embedded in the decision process Two key methods presently being used to address all kinds of uncertainties are:
- Collaborative planning,
- Multi-scenario and probabilistic methods.

Figure 9.  Diagram of a typical decision process

Figure 9 shows a typical decision process in which, following generation of the alternatives that are checked against the constraints, the objectives are calculated, a decision is taken, and risk avoidance options (least cost or minimal regret) are compared. The process can be repeated if necessary.
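The multi-case probabilistic method described above can be sketched by enumerating the combinations of a few uncertain variables, weighting each case by its probability and accumulating LOLE and ENS. The variable values and probabilities below are hypothetical:

```python
# Sketch of the multi-case probabilistic method: enumerate combinations
# of uncertain variables, weight each case by its probability, and
# accumulate loss-of-load expectation (LOLE) and expected energy not
# supplied (ENS). All figures hypothetical.
from itertools import product

demand = [(900, 0.3), (1100, 0.5), (1300, 0.2)]        # MW, probability
generation = [(1400, 0.7), (1150, 0.2), (950, 0.1)]    # available MW, probability

lole_h, ens_mwh = 0.0, 0.0
hours = 8760
for (d, pd), (g, pg) in product(demand, generation):
    p = pd * pg                       # case probability (variables independent)
    shortfall = max(0.0, d - g)
    if shortfall > 0:                 # a "critical system state"
        lole_h += p * hours
        ens_mwh += p * shortfall * hours
print(f"LOLE = {lole_h:.0f} h/yr, ENS = {ens_mwh:.0f} MWh/yr")
```

With more variables the case count grows multiplicatively, which is exactly where exhaustive enumeration gives way to the random sampling (Monte Carlo) approach mentioned in the text.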



Figure 10.  Value of non-delivered energy per type of industry [chart: damage per outage (€/kW) for individual industrial companies, plotted against outage duration (h); the companies range from breweries, starch, textile and paper producers to oil refining, chemicals, cement, aluminium, zinc and steel production, medical equipment manufacturing and transport terminals]

This process is often augmented with sensitivity analysis and break-even point analysis. Typical reliability tools can be applied for checking constraints or evaluating objectives. The type or dimension of the answer and its accuracy should be aligned with the questions to be answered. Changes in the regulatory environment have had a profound effect on transmission companies by shifting the focus of regulatory scrutiny from the generation side of the business to the transmission side. This change resulted in greater transparency of transmission costs and enabled the structuring of performance and efficiency measures for transmission companies. As a result, utilities have been motivated to develop more rigorous and quantitative methods for business case analysis in order to justify investments in assets. The examples of business case analyses provided suggest that a fully monetised approach is used alongside other approaches that consider impacts on business values in a quantitative, but not fully financial, manner, such as multi-criteria decision analysis [36].

Value of reliability to society The value of non-delivered energy (kWh) differs per stakeholder; it should be noted that this value is not regulated compensation. It excludes market contracts, disconnectable loads, demand side management, etc. It also does not show the value for generators, although these too have a value. The value of non-delivered energy for residential customers in European urban areas is, on average, 10 €/kWh. Figure 10 shows the value of non-delivered energy for industry.
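The 10 €/kWh figure from the text translates directly into societal outage damages. The outage parameters below are hypothetical:

```python
# Monetising an outage via the value of non-delivered energy.
# The ~10 EUR/kWh figure for European urban residential customers is
# taken from the text; the outage parameters are hypothetical.
voll_eur_per_kwh = 10.0
customers = 5000
avg_load_kw = 0.8          # assumed average residential demand per customer
duration_h = 2.0

ens_kwh = customers * avg_load_kw * duration_h
cost_eur = ens_kwh * voll_eur_per_kwh
print(f"ENS = {ens_kwh:.0f} kWh -> societal damage ~ EUR {cost_eur:,.0f}")
```

Such a figure can be set against the annualised cost of a reliability investment (extra cable route, automation) to support the marginal-benefit comparison between voltage levels raised earlier.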



DNV GL AND COMPONENT RELIABILITY Testing individual components and systems is an essential element in assessing or avoiding risk in power grids. The possibilities of testing should be in-line with, or, better still, ahead of, installation of new components and systems in the grid. This holds true for all voltage levels. DNV GL has a long history of testing and is preparing to maintain this lead by investing in new laboratories, as well as developing new testing methods.

TESTING OF (E)HV COMPONENTS Development of new technology is bringing unfamiliar and unpredictable risks to power systems. There are no reliability data for new components, so reliability is often estimated based on similar objects; this is not a precise method. Testing is one way to assess the reliability of an object. The main causes of operational failure include:
- Mistakes during the design and testing stage,
- Inappropriate production methods or materials,
- Mistakes during commissioning and on-site testing,
- Insufficient maintenance or too much time between scheduled maintenance,
- Ageing of equipment.

High-power transformers are a good example of equipment with a high demand for testing. They are often produced on client request and their number is limited. This means that it is difficult to manufacture, test, and ship a spare transformer at short notice, so there can be a considerable recovery time after a failure. At the same time, transformer failure affects a large area and can cause high risks in the power system. The failure frequency of transformers during short circuit testing in high-power laboratories has been studied and indicates an overall failure rate of 23 % for a total of 3934 tests [7]. These data confirm the experience from the KEMA laboratories of DNV GL, where short circuit testing has demonstrated that about 25 % of equipment does not fulfil requirements and fails the tests. New equipment designs or technologies generally show lower reliability at first (burn-in effects or teething troubles), which improves over time based on collective experience from operations. Competition in the market forces manufacturers to optimise designs for the minimum price, resulting in lower margins in equipment parameters and the use of less, or cheaper, material. Failures in such cases will usually happen during abnormal situations or under critical conditions. This ‘design on the edge’ situation can decrease the reliability of equipment and pose a risk to the grids. Testing helps to mitigate such risks; by developing new tests, or even new laboratories, DNV GL will be able to test new equipment for Super and Smart Grids.
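The statistical weight of such test statistics can be checked with a standard Wilson score interval. This sketch assumes the 23 % rate corresponds to roughly 905 failed tests out of 3934 (the exact count is not given in [7]):

```python
# How precise is a failure rate estimated from laboratory tests?
# Wilson score interval for the ~23 % failure rate over 3934 short
# circuit tests cited above (z = 1.96 for a 95 % interval).
import math

n, z = 3934, 1.96
k = round(0.23 * n)        # assumed failure count (~905), not given in [7]
p = k / n

denom = 1 + z**2 / n
centre = (p + z**2 / (2 * n)) / denom
half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
print(f"failure rate {p:.3f}, 95% CI [{centre - half:.3f}, {centre + half:.3f}]")
```

The interval is only a few percentage points wide: with thousands of tests, the laboratory experience that roughly a quarter of equipment fails is statistically solid, not an artefact of small samples.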


TESTING LV AND MV COMPONENTS FOR SMART GRIDS The Flex Power Grid Laboratory (FPG Lab) in Arnhem, the Netherlands is an independent, medium-voltage laboratory equipped for testing and researching innovative control and (grid-connected) power electronics under complex realistic conditions [26].

Figure 11.  Detail of FPG lab

The FPG Lab is capable of supplying industrial medium voltages (400 V – 24 kV) at a continuous high power rating (1 MVA), in combination with complex, realistic ‘bad grid’ conditions, including pre-programmable distortion (harmonics or dynamic network phenomena up to 2.4 kHz). This facility is one of the few installations worldwide capable of ascertaining whether distributed energy resource (DER) units function correctly under realistic system conditions, as the hardware (power components) and the controls (ICT) are verified simultaneously. The FPG Lab enables realistic testing for clients of:
- Whether their distributed generation equipment will interfere with grid performance,
- (Distributed generation) equipment under severe future grid conditions, to ensure efficiency and proper functionality,
- Whether the grid will operate reliably and stably under extreme circumstances (high penetration of certified DGs),
- Possible abnormal equipment responses to simulated grid events.

DNV GL uses this facility to research and develop innovative solutions for future grid applications. The benefits are:
- Ensuring the effectiveness of new controls and operating guidelines prior to full-scale deployment, and a significant reduction in field-testing time when integrating any type of new control;
- A significant decrease in DG integration time and market introduction, facilitated by testing for compliance with standards such as EN 50438, EN 50160, IEEE 1547, BDEW (FGW TR3), IEC 61683 and IEC 62116;
- Avoidance of field installation surprises, and prevention of unplanned outages, decreased production income, lost man-hours, and negative publicity;
- Provision of insights into the impact of high (certified) DG penetration levels on the grid, and verification of stability;
- Observation of equipment behaviour under abnormal grid conditions.



Figure 12.  Relative share of testing & simulation

TESTING OF SYSTEMS The main goal of validation (testing) is to de-risk a component, system, or technology for its intended purpose in a ‘risk-free’ environment before installing it in the real world. Experience teaches that with new low voltage equipment, two out of three units fail the first time they are tested (for certification); for medium and high voltage equipment, the failure rate is roughly one out of two. Human behaviour is difficult to predict and cannot yet be simulated. Both these components – the many new types of equipment being developed and integrated into the power grid, and human behaviour – are essential for a successful transition to a sustainable energy system for the future. Generally, two types of tests are needed in order to guarantee reliable and stable operation of the grid: component testing and system testing. Component research and testing is usually not a problem, because it is covered by the manufacturers and independent test laboratories. Only for the highest transmission voltages, e.g. UHVDC, will the testing be so costly that collaboration is needed. For system testing, the effort of rebuilding “a sufficient part” of the system, to perform experiments that are not normally conducted or not permitted in the actual grid (e.g. creating short circuits and blackouts that affect many people), can be expected to become very expensive (or even impossible) as the voltage level increases. Therefore, for system testing at transmission level we must rely on simulation. As the consequences of incorrect estimates and predictions can be disastrous, validated models have to be developed and made available for new transmission equipment, e.g. HVDC links, voltage source converters and their controls. People must be trained in transmission system operation, including the energy market that is an integral part of it, with large, validated simulators. Experiments are very costly and must not compromise grid integrity and stability; therefore only wide-area, real-time measurements are made, and system behaviour is collected from switching and transient analysis. As a consequence, the actual transmission system becomes part of the simulation model. For the distribution grid, rebuilding a part of the grid is possible. This can be done relatively easily at low voltage levels, but is more costly at medium voltage distribution level. The main questions to be answered here are: what is a representative part of the grid, and how can we integrate customer interaction? For continental or transnational transmission research, system testing, and validation, large validated simulators are needed for system


Figure 13.  Highlighting distributed testing in the set of four DNV GL de-risking activities [diagram: system studies & consultancy; in-house testing (laboratory power source, electrical power interface, device under test, control interface); remote laboratory and distributed testing; RTDS hardware in the loop; modelling, simulation and gaming]

integration in which the grid itself and the market are both embedded, together with component testing for the highest voltage levels. For distribution research, the infrastructural need is collaboration on, and mutual access to, rebuilds of the distribution grid. These should reflect local situations, include validated smart components, and allow for customer interaction. A future in-house component test facility will conduct fewer tests; some tests will be virtual or performed in the modelling, simulation & gaming environment, but two types of test are still needed: 1) performance and integrity testing of the design, addressing the question “does it work as intended?”; and 2) the interaction/integration test with the power system, in order to validate the correct functioning of the interfaces, including control and protection. Future test facilities will also be able to “play back” recorded disturbances that have occurred in the grid so that the effects can be analysed and learned from in a safe environment. Real-time grid measurements of load and generation can also be transferred to the test facility and converted to real “power and load” in order to study their effects and interactions with test objects. By using actual measurements it even becomes possible to study human behaviour and its effect on power system operation.

Testing facilities that can cope with distributed equipment and systems are needed for several reasons. First, more and more equipment will become interconnected and controlled by information technology. Such equipment includes many small PV power plants operating together in a virtual power plant (VPP), charging of a fleet of electric vehicles (EV), and active demand (AD) control of many electric appliances. Second, different laboratory facilities, with their own specialisations, can be effectively combined to cover the multilevel, multi-actor approach for actual power system situations with their various timescales (long-term stability – control actions – transients). For these reasons a distributed test infrastructure is envisioned that would consist of a number of laboratories working together and covering all the test layers, ranging from systems of large DER equipment, through VPP/EV/AD controllable generation & loads, down to mini- and microgrids. The connected facilities should be equipped with power “hardware” testing equipment, a real-time, real-power simulator, sensors, and communication with the outside-world power system to allow for interaction.



DNV GL AND GRID RELIABILITY DNV GL constantly strives to develop insights and knowledge for our customers’ benefit. In order to improve its reliability-related services to its clients, DNV GL has instigated several initiatives and follows a holistic approach, including all of the following:
- Simulations for overall analysis and planning;
- Laboratories, preferably independent, for testing hardware, communication and interoperability;
- Pilots to gain practical experience, including facilitating this for the relevant staff of clients;
- Advice regarding regulation for regulators or other stakeholders.
And also:
- Post-fault analysis (in labs) to learn from failures for future application and to guide improvements in components or system design;
- Outage registration (for future data) - near misses are important (e.g. component failure without system problem);
- Independent benchmarking of performance.

Our general approach is to combine engineering expertise with mathematical methods, both proven, but in new (innovative) combinations. Thus, expert knowledge is needed to ensure selection of the appropriate approach for each separate study or provision of advice. Simply having a tool or a set of tools is not enough; the purpose, methodology, and limitations must also be understood. The independent meta-knowledge of choosing the right tool is essential for delivering answers that can be trusted. Our ambition is typically advanced by applying selected mathematical methods that have already proven their strengths in the power grid environment. These are then aimed at solving a practical problem, preferably in collaboration with one of our clients. DNV GL is currently working on strengthening its services regarding grid reliability in several complementary directions. Below are several examples that are in different stages of progress. For each, the application and purpose are described first, then the approach or methods; cooperation with partners and the current status and outlook of each project are also provided.


OPTIMISING DISTRIBUTION AUTOMATION An example of DNV GL using mathematical techniques to extend typical electrical engineering practice is a Distribution Automation (DA) development project.

Figure 14.  Optimal locations of DA-devices differ as more DA-devices are included (panels for 1 to 6 devices)

This was originally intended for EDP Distribuição (the Portuguese Distribution System Operator), but can be used generally [22]. The project is essentially about investment planning, while also considering new protection settings and operator procedures. The aim of DA here is to improve the reliability of supply by applying additional circuit breakers in overhead medium voltage distribution grids. By dividing the grid into more distinct sections, the average customer outage frequency (CAIFI) will decrease, and by supporting the failure search process, resulting in faster fault isolation, the average duration of outage (CAIDI) will also be reduced. Decisions regarding the number and location of the new devices must be taken, and this is essentially a combinatorial optimisation problem. A sequential approach does not guarantee finding the overall optimum, because allowing one more new device would also change the optimal locations for those that have been already selected. Figure 14 provides an illustrative example. The DA project decisions are based on a balance between technical performance (reduction in outage minutes) and economic performance (total costs of DA deployment), which are related through the monetary Value of Energy Not Delivered. The optimum number of DA devices and the most effective locations are determined using two types of new tools.
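The observation that a sequential approach does not guarantee the overall optimum is easy to demonstrate on a toy model. The sketch below is purely illustrative and not the project's actual tool: the per-section fault rates, customer counts, and the assumption that a fault interrupts its own protection zone plus everything downstream are all hypothetical. It compares greedy one-device-at-a-time placement with exhaustive search over all joint placements.

```python
from itertools import combinations

# Toy radial feeder: per-section fault rates (faults/yr) and customer counts.
rates = [0.8, 0.1, 0.6, 0.2, 0.9, 0.3]
custs = [200, 50, 400, 100, 300, 150]

def expected_interruptions(splits):
    """Expected customer interruptions per year for given breaker positions.

    `splits` are gap indices (1..n-1) where an extra breaker is placed; a
    fault in a zone interrupts that zone and every zone downstream of it.
    """
    bounds = [0] + sorted(splits) + [len(rates)]
    zones = [range(a, b) for a, b in zip(bounds, bounds[1:])]
    total = 0.0
    for zi, zone in enumerate(zones):
        zone_rate = sum(rates[i] for i in zone)
        affected = sum(custs[i] for z in zones[zi:] for i in z)
        total += zone_rate * affected
    return total

def best_combinatorial(k):
    """Exhaustive search over all placements of k extra breakers."""
    return min(combinations(range(1, len(rates)), k), key=expected_interruptions)

def best_sequential(k):
    """Greedy: add one breaker at a time, keeping earlier choices fixed."""
    chosen = []
    for _ in range(k):
        gaps = [g for g in range(1, len(rates)) if g not in chosen]
        chosen.append(min(gaps, key=lambda g: expected_interruptions(chosen + [g])))
    return tuple(sorted(chosen))

for k in (1, 2, 3):
    seq, opt = best_sequential(k), best_combinatorial(k)
    print(k, seq, opt, expected_interruptions(seq), expected_interruptions(opt))
```

Because placements interact, the greedy choice for k devices can differ from the jointly optimal set, which is exactly why a combinatorial method is needed.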



Figure 15.  Example results for one feeder (Feitosa-Portulezo, optimal number of OCR3 switches): total costs = END value + investments (in €), as the number of DA-devices (switches) varies

The first is a heuristic easy-to-use tool that is applied for most of the feeders. It is restricted to radial feeders and does not guarantee an optimal solution. It is intended for use by the typical planning engineer and requires minimal practical data that can easily be obtained from the DNO’s database. The second tool is based on genetic algorithms and a dedicated reliability analysis (electrical and geographical computer model built in PowerFactory); it is used for extraction of rules and for validation of the first tool. Outcomes are also used to benchmark the results per feeder in order to achieve an investment portfolio that is based on relative marginal benefits. Validation for a pilot network shows that both tools yield consistent results. In Figure 15 the results obtained with the two tools are compared.
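As an illustration of the second tool's approach, the following is a minimal genetic-algorithm sketch for the same kind of 0/1 placement decision. Everything here is hypothetical: the benefit figures, the device cost, and the assumption that adjacent devices partly overlap in benefit. The real tool evaluates fitness with a full PowerFactory reliability model rather than a closed-form cost function.

```python
import random
from itertools import product

random.seed(1)

N_POS = 12           # candidate device positions along a feeder (hypothetical)
DEVICE_COST = 40.0   # investment cost per device (illustrative figure)

# Hypothetical stand-alone benefit of a device at each position (avoided
# END value); in the real tool this comes from the reliability model.
benefit = [random.uniform(10.0, 90.0) for _ in range(N_POS)]

def total_cost(genome):
    """Investment minus avoided END value; adjacent devices overlap, so the
    second of an adjacent pair only delivers half of its benefit."""
    cost = DEVICE_COST * sum(genome)
    for i, g in enumerate(genome):
        if g:
            factor = 0.5 if i > 0 and genome[i - 1] else 1.0
            cost -= factor * benefit[i]
    return cost

def evolve(pop_size=40, generations=80, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(N_POS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=total_cost)              # lower cost = fitter
        parents = pop[: pop_size // 2]        # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_POS)  # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([1 - g if random.random() < p_mut else g
                             for g in child])
        pop = parents + children
    return min(pop, key=total_cost)

ga_best = evolve()
# The toy problem is small enough to verify against exhaustive search:
true_best = min(product((0, 1), repeat=N_POS), key=total_cost)
print(total_cost(ga_best), total_cost(list(true_best)))
```

The interaction term between adjacent positions makes the cost non-separable, so the genome must be optimised jointly, mirroring the combinatorial nature of the real placement problem.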

The project is now completed (see [22] for details). Based on the results obtained, a field pilot was started before the progressive rollout of DA in order to improve the national reliability performance. Application to cable-based grids or low voltage grids is straightforward, and the approach has been further used within DNV GL.

This development could be continued in the future by including momentary interruptions or by investigating other concepts for DA besides the use of circuit breakers. The second tool is already prepared for this development.

Considering momentary interruptions It is anticipated that Power Quality, particularly momentary interruptions, will become more important in the future. MAIFI (Momentary Average Interruption Frequency Index) is the relevant quality indicator (or MAIFIE, which is the same but just counting events). This is an issue when considering sectionalisers instead of reclosers, due to the limitations from a protection coordination perspective. Although the total number of events does not change, the number of momentary interruptions may increase, because interruptions that would otherwise be sustained become very short. Considering only MAIFI this could appear to be a disadvantage. However, from a broader perspective it is mostly considerably advantageous. During the initial development of the tool there was no economic value for MAIFI, and therefore it was not included in decisions in the heuristic tool.

More innovation in DA – the MSDA concept Compared with the chosen DA concept based on reclosers and sectionalisers, DA could alternatively be based on simple non-load breaking switches




that communicate with a central “intelligent” unit, a concept known as MSDA (Master-Slave DA). Fault switching is done with the substation circuit breaker alone. Fault isolation is very selective, with many slave devices switching in a non-energized condition. A low price allows many devices, yielding very small sections and reducing the time for fault location and power restoration. MSDA has several advantages: it is cheap, more efficient, and yields larger reliability improvements. Additionally, although it relies on telecom, it is robust in design: when a slave device fails due to a telecom fault, the isolated section will be only slightly larger than optimal; when the master device fails due to a telecom fault, performance reverts to the original (pre-DA) level. Only in the (rare) case of telecom faults is the performance level reduced. Maximum benefit is usually achieved and safety is never sacrificed. The present disadvantage of the MSDA concept is that, although it is based on proven technologies, the typical application is new to the energy industry, thus demanding a change in thinking. MSDA can be used in combination with the more conventional way of applying DA (local automation), leading to a mixed concept that minimises the risks associated with dependency on telecom and central systems. This may be another step towards smart(er) grids.

OPTIMISATION OF STORAGE Storage systems can have a beneficial effect on reliability, in particular when decentralised power generation may result in operational problems in electricity distribution networks, such as current overloads and voltage deviations being so large that reliability of supply is affected. However, storage systems are still relatively expensive and have not yet been applied much in electricity grids. Adding electrical energy storage to power systems involves decisions about the types of storage to use, the amount of storage to use, the optimal size, and the best location of the storage devices. One of the tasks of the EU-funded GROWDERS project was to develop tools for this purpose [25]. The first approach within the GROWDERS project was to start bottom up, using the usual engineering tools (here: power flow analysis using PowerFactory). A library of storage components and storage controls was added, and the annual time patterns were then analysed. Storage was modelled as a means to alleviate grid problems or to support energy trading. In addition, the economic dimension was included to cover investments and operational costs. In order to support further decision-making,

alternative measures were included, e.g. using tap changers or the conventional approach - add cables or lines. Solving the combinatorial problem of finding optimal combinations of storage was undertaken using the same genetic algorithm approach as was used in the DA project described in the previous section. Another task within GROWDERS was to validate the tool, PLATOS, using results from field tests [24]. The results demonstrated PLATOS to be a very useful tool for “what if” analyses (analysing one “solution”, in this case a proposed set of storages), but too slow practically for determining full optimisation (finding the best set of storages). The main causes lie in the detailed power analysis, the intensive user interfacing, and the iterative character of repeating the analysis over a time series in a combinatorial setting. A second approach was undertaken, this time with a top-down approach, looking at the same problem as a mixed-integer optimisation problem with some power constraints. This is another example of using mathematical techniques to extend typical electrical engineering practice, and is an example of cooperation with a university (Utrecht, the Netherlands). Development of the model is described in more detail in a publication [23]. The model can also be used to support the analysis of the operational benefits and investment costs of storage systems. It uses a simulated annealing approach to find suitable storage configurations, with a linear programming model to determine the load and optimal storage control, maintaining all the power-flow constraints. The linear programming model is solved using a commercially available solver (here Cplex). In contrast to PLATOS, the model addresses all power-flow constraints at once, throughout the year, rather than iterating through a time pattern and provides optimal results very fast. Not only are the optimal number, set of locations, types, and sizes determined, but also the optimal state-of-charge. 
Thus, the controls for the storage are available without the need to specify them in advance. This model seems to be an interesting approach to solving storage location problems. Furthermore, this way of modelling seems a promising approach to solving other investment problems. In future analysis we may investigate the potential use of information from the so-called dual solution. This sensitivity information may further enhance the decision-making process.
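A minimal sketch of the simulated annealing idea follows, under strongly simplified and entirely hypothetical assumptions: three candidate buses, a fixed feeder rating, and an analytic stand-in (residual overload energy) for what the real model computes with a linear program over the full power-flow constraints.

```python
import math
import random

random.seed(7)

LIMIT = 100.0          # feeder rating in MW (illustrative)
COST_PER_MWH = 3.0     # annualised storage cost per MWh (assumed)
PENALTY = 10.0         # cost per MWh of overload the storage cannot absorb

# Hypothetical hourly net loads (MW) at three candidate buses.
loads = [[random.uniform(60.0, 140.0) for _ in range(24)] for _ in range(3)]

def cost(sizes):
    """Investment plus penalty on residual daily overload energy per bus."""
    total = COST_PER_MWH * sum(sizes)
    for load, e in zip(loads, sizes):
        overload = sum(max(0.0, l - LIMIT) for l in load)
        total += PENALTY * max(0.0, overload - e)
    return total

def anneal(steps=4000, t0=50.0):
    current = [0, 0, 0]                # storage size (MWh) per bus
    cur_cost = cost(current)
    best, best_cost = current[:], cur_cost
    for k in range(steps):
        temp = t0 * (1.0 - k / steps) + 1e-9
        cand = current[:]
        cand[random.randrange(3)] += random.choice([-1, 1])  # resize one unit
        if min(cand) < 0:
            continue
        c = cost(cand)
        # Metropolis acceptance: always take improvements, sometimes worse moves
        if c < cur_cost or random.random() < math.exp(-(c - cur_cost) / temp):
            current, cur_cost = cand, c
            if c < best_cost:
                best, best_cost = cand[:], c
    return best, best_cost

best, best_cost = anneal()
print(best, round(best_cost, 1), "vs no storage:", round(cost([0, 0, 0]), 1))
```

In the actual model the inner evaluation is a linear program (solved with Cplex) that also yields the optimal state-of-charge profile; here it is replaced by a one-line analytic surrogate to keep the sketch self-contained.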


Figure 16.  Sketch of how the PoweRisk tool works: discrete event simulations (time-sequential random sampling / Monte Carlo, driven by MTTF, MTTR, time series and logics) generate system states (V, P, Q; r, X, B; €/MWh, etc.), which feed an optimal power flow with automatic and remedial actions to determine the load served.

This top-down part of the research provided proof of principle that optimisation really is feasible for genuine problems. The ideal combination would be to use conventional tools as a pre- and post-processing environment, with optimisation embedded. Currently, after GROWDERS, the PLATOS tool is being further developed for “what if” analyses in projects like SOPRA (development of an off-grid renewable-power station including storage for rural applications) and CSGriP (development of a smart grid concept with connected SOPRA units). One of the other research goals is to investigate whether the two approaches can be combined into one single practical tool that can perform the optimisation both sufficiently rapidly and within a practical setting. Our ambition is to develop this new tool in such a way that it is independent of size and topology, and therefore is suitable for both distribution and transmission grids.

RELIABILITY OF OFFSHORE GRIDS An on-going development project within DNV GL Research & Innovation is aimed at assessing the reliability of HVDC offshore grids. Paper [18] presents a methodology to quantify the contribution of wind power to the adequacy of power systems, as well as violations of operating reserve requirements. It can deal with AC systems including radial DC connections. The methodology uses a novel power system simulation and analysis tool developed at DNV GL (PoweRisk), and is based on discrete event simulations combined with optimal power-flow simulations. A case study has been performed using the Institute of Electrical and Electronics Engineers (IEEE) Reliability Test System as the model of the power grid, with wind power generation data from Denmark. Results show that wind power’s relative contribution to system adequacy drops as wind power penetration increases. This is in line with reported studies of real systems. In addition, the violations of reserve margins tend to increase with increasing wind penetration.
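The core of such a discrete event simulation – time-sequential sampling of generator up/down cycles from MTTF/MTTR figures, followed by an adequacy check per interval – can be sketched as follows. The fleet data and constant load are hypothetical, and the adequacy check is a simple capacity-versus-load comparison standing in for PoweRisk's optimal power flow with remedial actions.

```python
import random

random.seed(42)

# Hypothetical fleet: (capacity MW, MTTF h, MTTR h) per unit.
UNITS = [(200, 1500, 50), (200, 1500, 50), (150, 1200, 40), (100, 900, 30)]
LOAD = 420.0          # constant demand in MW (toy simplification)
HORIZON = 8760.0      # one simulated year, in hours

def simulate_year():
    """Time-sequential sampling of unit up/down cycles; returns loss-of-load hours."""
    events = []  # (time, capacity change when the event fires)
    for cap, mttf, mttr in UNITS:
        t, up = 0.0, True
        while t < HORIZON:
            t += random.expovariate(1 / (mttf if up else mttr))
            if t < HORIZON:
                events.append((t, -cap if up else cap))
            up = not up
    events.sort()
    capacity = sum(u[0] for u in UNITS)  # all units start available
    shortfall, prev_t = 0.0, 0.0
    for t, delta in events:
        if capacity < LOAD:              # accumulate deficit interval
            shortfall += t - prev_t
        prev_t, capacity = t, capacity + delta
    if capacity < LOAD:
        shortfall += HORIZON - prev_t
    return shortfall

years = [simulate_year() for _ in range(200)]
lole = sum(years) / len(years)   # expected loss-of-load hours per year
print(round(lole, 1))
```

In the real tool, each sampled system state is passed to an optimal power flow so that network constraints and remedial actions, not just total capacity, determine the load served.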



The following points highlight some of the current limitations of the PoweRisk tool, and hence suggest ideas for further research.
- A power-flow simulation assumes that the system is in steady state. In fact, transients (due to, e.g., large generator or line trips) could cause instability even if the post-fault state is considered steady. A stability analysis must be done in order to test the system security for a given transient. It is worth noting that no commercial tool presently exists for probabilistic security studies of power systems [9]. Including functionality to capture the more severe transients and performing stability analysis on these is a topic for further work.
- A power-flow simulation is a snapshot of the system state and does not consider the history of events leading up to the present situation. Hence, limitations on generator ramping and re-dispatch are not taken into account.
- Load shedding is conducted by dispatching loads continuously downwards. In reality, loads would be shed in discrete steps. Therefore the results tend to be optimistic with respect to load shedding. Block dispatching of loads would, however, lead to a mixed-integer problem that is substantially more complex and time-consuming to solve than the linear Optimal Power Flow (OPF).
- The generator maintenance schedule lacks intelligence. A more sophisticated model for maintenance should be developed that postpones maintenance on main components if the system is severely stressed.

The following points list some main shortcomings of the study and some recommendations for studies of real systems:
- Correlation between wind and load is not captured, as the IEEE-RTS is a hypothetical power system. In studies of real systems it is essential that time-correlated wind and load data are used for the same area. Wind-load correlation could significantly impact wind power’s contribution to system adequacy.
- The study assumed equal wind conditions at all wind parks in the system. Studies of real systems should use wind generation data for individual parks or, at least, for parks that are in relatively close proximity to each other. This would capture the actual wind power infeed at each bus and the resulting power flows in the grid more accurately. This is especially important if the transmission grid is constrained and if the system under study spans a large geographical area.

Further development of the PoweRisk tool is foreseen to include:
- Implementation of all relevant functionalities for solving AC power flow (steady-state) problems, with the possibility of including point-to-point HVDC transmission lines.
- Development of a conceptual methodology for solving power flow in multi-terminal HVDC (MTDC) grids and consideration of the possibility of simulating combined AC/MTDC power systems.
- Assessment of the possibility of combining the AC-based tool with dynamic (time-domain) simulations.

It is important to note that the PoweRisk tool will not be developed as commercial software. The intention is that it is to be used and maintained internally in DNV GL for consultancy services.

STOCHASTIC POWERFLOW In 2013 DNV GL started a research project, Stochflow, which aimed at exploring the use of stochastics in power system analysis and developing tools for future services. Using stochastics is seen as one way forward in addressing the increasing amount of uncertainty in planning and operating conditions. The primary objective of the work for 2013 was defining the way ahead for DNV GL’s research in this field. As a part of this a proof-of-principle tool and case study were developed for power-flow calculations. It should be noted that the method used could be equally applicable to other types of analyses, for example, a short circuit calculation. The objective of the proof-of-principle case was to find the full cumulative distribution functions of all currents and voltages in a generic grid, given stochastic inputs for wind generation, solar generation, and demand. The brute force Monte Carlo Simulation (MCS) approach is traditionally used to perform “standard” power-flow calculations for each combination of inputs. The large number of calculations (100,000 – 1,000,000) that are required for the desired accuracy requires

Figure 17.  Excerpt from Stochflow results: stochastic collocation with 125 sampling points (left) versus Monte Carlo with 50,000 sampling points (right).

prolonged computation times, and therefore this method is not well suited for inclusion in further complex decision processes. The Stochflow project is supported by CWI (Center for Mathematics and Computer Science in Amsterdam, the Netherlands) in applying mathematical methods that have already been proven elsewhere. Stochastic collocation, with pre- and post-processing, is used to avoid the high number of iterations (e.g. the right graph of Figure 17) and associated long computation times typical for MCS, while obtaining results of similar or better accuracy (reliability). In the left graph of Figure 17 each point represents one intermediate simulation result; in this project each point requires one power-flow analysis. The number of points provides an indication of the effort required in the full simulation. The present status of Stochflow is that the proof-of-principle test case (based on the IEEE test grid with 300 buses and 825 connections) gives promising results (these will be published in a future paper). One advantage of this approach is that it is fast (in the test case it was more than 1000 times faster than brute force MCS) and provides at least the same level of accuracy. Another advantage is that, as the full probability density functions become available, the post-processing allows sensitivity studies regarding constraints on current capacity or voltage limits afterwards. A limitation of the stochastic collocation method is that the chronological character is lost, as only probability functions are used as input for generation and demand. Another limitation is that failures of the grid have not yet been modelled. The Stochflow project has proven that it is possible to obtain useful results using existing mathematical methods and commercial power system analysis software. It is also clear that solving the broad range of problems, of which determining cumulative distribution functions is only one, will probably require the use of multiple methods. Stochflow will develop further, firstly by improving the tool that has been built, including proper validation of the outcomes (using full MCS as a reference method) and of the remaining variance, and more specific (and more efficient) post-processing. Subsequently, the focus of Stochflow will shift to developing more examples and solving other types of problems (such as ones requiring chronology), probably using other methods. The roadmap compiled in 2013 supports this further development.
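The contrast between the two approaches can be illustrated on a toy response function. Below, a cheap polynomial stands in for one power-flow run with three standard-normal inputs (wind, solar and demand deviations); brute-force MCS needs tens of thousands of evaluations, while a tensor grid of three Gauss-Hermite collocation points per dimension needs only 27. The actual Stochflow implementation differs; in particular, it reconstructs full distribution functions rather than just moments.

```python
import itertools
import math
import random

random.seed(0)

def g(x1, x2, x3):
    """Toy 'power-flow' response to three standard-normal inputs -
    a stand-in for one load-flow analysis run."""
    return x1 * x1 + 2.0 * x2 + 0.5 * x1 * x3 + 3.0

# --- Brute-force Monte Carlo: many g-evaluations --------------------------
N_MC = 50_000
mc_mean = sum(g(random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1))
              for _ in range(N_MC)) / N_MC

# --- Stochastic collocation: tensor grid of 3 Gauss-Hermite points --------
# Probabilists' 3-point rule for N(0,1): nodes +-sqrt(3) and 0.
nodes = [(-math.sqrt(3), 1 / 6), (0.0, 2 / 3), (math.sqrt(3), 1 / 6)]
sc_mean = sum(w1 * w2 * w3 * g(n1, n2, n3)
              for (n1, w1), (n2, w2), (n3, w3)
              in itertools.product(nodes, repeat=3))   # only 27 evaluations

print(round(mc_mean, 3), round(sc_mean, 6))  # both near the exact mean 4.0
```

For this polynomial response the 27-point collocation grid recovers the mean essentially exactly, while the Monte Carlo estimate still carries sampling noise after 50,000 runs, which is the effort gap the Stochflow results illustrate.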

POWER-FLOW SIMULATION ALLOWING TEMPORARY CURRENT OVERLOADING An ongoing PhD project [19] is considering a probabilistic power-flow model subject to connection temperature constraints. Renewable power generation is included and modelled stochastically in order to reflect its intermittent nature. In contrast to conventional models that





Figure 18.  Fewer violations of temperature constraints than of current constraints (snapshots at 0, 6, 12, 18, 24 and 30 days).

enforce connection current constraints, short-term current overloading is allowed. Temperature constraints are weaker than current constraints, and hence the proposed model quantifies the overload risk more realistically. Using such a constraint is justified by the intermittent nature of the renewable power source. Allowing temporary current overloading necessitates the incorporation of a time domain in the model. This substantially influences the choice of model for the renewable power source. Wind power is modelled by use of an autoregressive moving average (ARMA) model, and appropriate accelerations of the power-flow solution technique are chosen. Several IEEE test case examples illustrate more realistic risk analyses. One example, Figure 18, shows that a current constraint model may overestimate these risks, and this may lead to unnecessary investments in network assets by the grid operator.
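A minimal sketch of the ARMA idea behind such a wind model: an ARMA(1,1) recursion generates temporally correlated deviations, which are then mapped onto a bounded power output. The coefficients and wind-park figures below are invented for illustration; the project's actual model and its fitting are described in [19].

```python
import random

random.seed(3)

# Hypothetical ARMA(1,1) model of wind-power deviations around a mean:
#   y[t] = PHI * y[t-1] + eps[t] + THETA * eps[t-1],  eps ~ N(0, SIGMA)
PHI, THETA, SIGMA = 0.9, 0.4, 1.0
MEAN_MW, CAP_MW = 30.0, 60.0   # illustrative wind-park figures

def simulate(hours):
    y, eps_prev, series = 0.0, 0.0, []
    for _ in range(hours):
        eps = random.gauss(0.0, SIGMA)
        y = PHI * y + eps + THETA * eps_prev
        eps_prev = eps
        # map the correlated deviation onto a bounded power output
        p = min(CAP_MW, max(0.0, MEAN_MW + 5.0 * y))
        series.append(p)
    return series

power = simulate(8760)
avg = sum(power) / len(power)
print(round(avg, 1), round(min(power), 1), round(max(power), 1))
```

The autoregressive term makes high-output hours cluster together, which is what makes temporary overloading (and hence conductor heating over time) meaningful to model, unlike independent hourly draws.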

RARE EVENT SIMULATION USING A SPLITTING TECHNIQUE DNV GL is supporting a second PhD project at CWI that aims at speeding up Monte Carlo simulation (MCS). MCS is a robust and popular technique to estimate various grid reliability indices, but the

computational intensity involved is very high for typical reliability analyses. Various reliability indices can be expressed as expectations, depending on the rare event probability of a so-called power curtailment. Rare event simulation techniques have been developed to provide an accurate estimate of very small probabilities. Importance sampling and importance splitting are two well-known variants of these. In importance sampling, samples are taken from an alternative distribution, then the estimator is multiplied by an appropriate likelihood ratio in order to correct for the induced bias. For variance reduction it is essential to find a distribution that increases rare event occurrences. Adaptive importance sampling techniques have been developed to learn this distribution iteratively. However, in common power grid structures various typical paths may lead to rare events, especially when considering a large number of stochastic sources and a large time domain. Changing the distribution of random variables may then have counterproductive effects. This is further elaborated in paper [35], in which an importance splitting approach is pursued. Splitting techniques do not change the distribution, but resample simulation trajectories as soon as they are presumed substantially closer to the rare event.


In this way, variance reduction (and hence computational efficiency) is achieved through an increased occurrence of rare events, without the need to understand the most likely occurrences a priori. In the literature, splitting techniques have rarely been applied to power systems. Our research, however, considers the rare event of power curtailments over a certain time domain due to (and given) the uncertain nature of generation. The model developed considers Markov processes with a continuous state space, and allows for the assessment of general reliability indices. It speeds up an MCS method for grid reliability estimation with an existing splitting technique called Fixed Number of Successes (FNS). It uses a stochastic model for the intermittent energy sources and maps these to the outcome of a power curtailment. Comparing the computational intensity of a brute force MCS approach with that of MCS-with-FNS shows that the required workload is orders of magnitude lower, while the relative variance of the estimate remains controlled.
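The splitting idea can be sketched on a toy problem. Below, a Gaussian random walk stands in for the stochastic infeed process, and the "rare" event is the walk crossing a top level within the horizon. A fixed-effort splitting scheme (a simpler relative of the FNS technique named above) estimates the probability as a product of stage-wise crossing fractions, and is compared against crude MCS on the same model; the levels are kept modest here so that both estimators can be compared directly.

```python
import random

random.seed(5)

STEPS = 20                  # horizon of each trajectory
LEVELS = [2.0, 4.0, 6.0]    # intermediate levels; the last one is the event
N = 2000                    # fixed effort: trajectories per stage

def run_until(level, t, s):
    """Advance a Gaussian random walk from state (t, s) until it first
    reaches `level`; return the crossing state, or None if time runs out."""
    while True:
        if s >= level:
            return (t, s)
        if t >= STEPS:
            return None
        t += 1
        s += random.gauss(0.0, 1.0)

# --- Fixed-effort splitting ------------------------------------------------
estimate = 1.0
starts = [(0, 0.0)] * N
for level in LEVELS:
    hits = [h for h in (run_until(level, t, s) for t, s in starts)
            if h is not None]
    if not hits:
        estimate = 0.0
        break
    estimate *= len(hits) / len(starts)               # stage crossing fraction
    starts = [random.choice(hits) for _ in range(N)]  # resample entrance states

# --- Crude Monte Carlo on the same model ------------------------------------
def hits_top():
    s = 0.0
    for _ in range(STEPS):
        s += random.gauss(0.0, 1.0)
        if s >= LEVELS[-1]:
            return True
    return False

crude = sum(hits_top() for _ in range(N)) / N
print(f"splitting: {estimate:.3f}  crude MCS: {crude:.3f}")
```

For genuinely rare events the crude estimator sees almost no hits, while the splitting estimator keeps a healthy sample at every stage; this is the mechanism behind the orders-of-magnitude workload reduction reported for MCS-with-FNS.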

USING STRUCTURAL RELIABILITY METHODS An obvious research line to investigate is whether the vast experience that DNV GL has gained in risk analysis in the maritime and oil & gas businesses can be re-used in the fields of renewable generation (with legacy GL experience) and power grid reliability (using legacy KEMA expertise). Currently two approaches are being investigated. Bayesian networks and power grid reliability The first approach investigates the application of Bayesian networks to power grid reliability. A Bayesian network is a graphical representation of uncertain quantities (and decisions) that reveals explicitly the probabilistic dependence between the set of variables and the flow of information in the model. A Bayesian network is a directed network without cycles. The nodes (to which the arrows point) represent random variables and decisions. Arrows pointing at random variables indicate probabilistic dependence, while arrows pointing at decisions specify the information that is available at the time of the decision. A Bayesian network is most effectively built by focusing on the causal relationships among the variables in the system. This means that a Bayesian network becomes a reasonably realistic representation of the problem domain; this is useful when the modeller’s intention is to reach a common understanding about a specific problem domain.

In addition, knowledge of the causal relationships allows us to make predictions in the presence of interventions. Last, but not least, model building through causal relationships makes it much easier to validate and convey the model to third parties. Building a Bayesian network consists of two main steps. The first step is to elicit the graphical model that displays the conditional independence assertion in the model; this is the qualitative model. This is most easily achieved by considering the causal effects between the variables. The second step is to build the joint distribution of the variables in the model; this is the quantitative model and is done by specifying a sequence of conditional probability distributions. Because the Bayesian network has been formulated as a knowledge representation of the problem being considered, assessment of the relevant (prior) probabilities is relatively straightforward. Combining the prior knowledge with Bayesian statistical techniques enables combination of domain knowledge and data. In recent years, there has been an increasing interest not only in learning Bayesian networks from data, but also in learning or updating the conditional probability distributions. A potential drawback of Bayesian networks is that they generally require that the state space of nodes is countable and discrete. Thus, their application requires the random variables (the nodes) to be discretised. Although this has been claimed to be disadvantageous, neither fault trees nor event trees offer a better alternative. The main consequences of the discretisation are that the result of the Bayesian network may be sensitive to the selected discretisation and that the calculations involved in the evaluation of the Bayesian network may grow almost exponentially with the number of states of the nodes. The latter is because a Bayesian network encodes the entire probabilistic structure of the problem. 
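The two construction steps can be made concrete with a minimal, hand-specified network: a qualitative model Storm → Fault → Outage, and a quantitative model given by invented conditional probability tables. Inference is done by brute-force enumeration of the joint distribution, which is feasible only because the network is tiny; real Bayesian network engines use far more efficient algorithms.

```python
# Qualitative model: Storm (S) -> ComponentFault (F) -> Outage (O), all boolean.
# Quantitative model: conditional probability tables (all figures invented).
P_S = 0.05                              # prior P(storm)
P_F = {True: 0.20, False: 0.01}         # P(fault | storm)
P_O = {True: 0.90, False: 0.001}        # P(outage | fault) vs spurious outage

def joint(s, f, o):
    """Joint probability, factorised along the causal arrows."""
    p = P_S if s else 1 - P_S
    p *= P_F[s] if f else 1 - P_F[s]
    p *= P_O[f] if o else 1 - P_O[f]
    return p

bools = (True, False)
p_outage = sum(joint(s, f, True) for s in bools for f in bools)
# Diagnostic inference: update the storm probability given an observed outage.
p_storm_given_outage = sum(joint(True, f, True) for f in bools) / p_outage
print(round(p_outage, 4), round(p_storm_given_outage, 3))
```

Observing an outage raises the probability of a storm well above its prior, which illustrates the diagnostic (effect-to-cause) reasoning that makes Bayesian networks attractive for post-fault analysis.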
However, the efficient algorithms developed for inference in Bayesian networks may limit the computational consequences of the exponential growth of the state space. First and second order reliability methods (FORM/SORM) For some classes of problems it is necessary to include accurate information about the full distribution of the random variables. Different procedures are available that may be applied in such cases: Monte Carlo simulation (MCS) and the analytically based first and second order reliability



methods (FORM/SORM) are alternatives that are commonly applied in engineering.

In this section we briefly describe the principles of FORM/SORM analysis. Within the field of structural reliability methods, a limit state function is defined such that it divides the space spanned by the random variables into a failed set and a safe set. For grid reliability analysis, the limit state function is constituted by the power-flow constraints.

Standard numerical integration techniques are generally not suitable for solving the high-dimensional integral problem and, in general, either MCS or the analytically based FORM/SORM must be used. Although it is straightforward to apply MCS to obtain the cumulative density function, a drawback is that MCS may require a significant number of function calls to evaluate the power-flow constraints, especially for small probability levels. FORM/SORM are analytical probability integration methods and are thus, when they can be used, quite fast. FORM and SORM are appropriate for random-variable reliability problems in which the set of basic variables is continuous. For in-depth coverage of FORM/SORM, the reader is referred to Madsen, Krenk and Lind (1986) and/or Ditlevsen and Madsen (1996) [31].

One advantage of using FORM/SORM is that, by locating a sampling density at or near the identified design point, very efficient MCS importance sampling may be performed. Importance sampling can significantly reduce the variance of the probability estimator, theoretically to zero if the sampling density is perfectly located.
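The variance-reduction idea of sampling near the design point can be illustrated on a deliberately simple, assumed toy limit state g(x) = β − x with x standard normal (not a grid model), for which the design point is x* = β and the exact failure probability is Φ(−β):

```python
import math
import random

def failure_probability_is(beta, n=20_000, seed=1):
    """Estimate P(g(X) <= 0) for g(x) = beta - x, X ~ N(0, 1),
    by importance sampling from N(beta, 1), i.e. a sampling
    density centred at the FORM design point x* = beta."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(beta, 1.0)      # sample near the design point
        if beta - x <= 0:             # failure indicator g(x) <= 0
            # likelihood ratio phi(x) / phi(x - beta)
            total += math.exp(-beta * x + 0.5 * beta * beta)
    return total / n

beta = 4.0
exact = 0.5 * math.erfc(beta / math.sqrt(2.0))   # Phi(-beta) ~ 3.2e-5
estimate = failure_probability_is(beta)
print(f"exact = {exact:.3e}, importance sampling = {estimate:.3e}")
```

With 20,000 samples the importance-sampling estimate lands within a few per cent of the exact value; crude MCS with the same budget would typically see no failures at all at this probability level.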

SETUP DATA COLLECTION In order to conduct independent reliability assessments, DNV GL will also collect an independent set of data to be used as default or reference data. This will involve grid models and probabilistic data on demand and all types of generation, and may include time series, probability distribution functions, specifications of components, control and protection schemes, etc. The collection will be accompanied by conversion tools for changing formats, performing standard calculations, and interpolating or extrapolating data.
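As a hypothetical illustration of the kind of interpolation utility mentioned (the function name and demand figures are invented for this sketch, not an actual DNV GL tool), a demand time series with a gap can be resampled by linear interpolation, holding boundary values for extrapolation:

```python
def interpolate_series(times, values, query_times):
    """Linearly interpolate a measured series (e.g. demand in MW)
    at new time points; extrapolation holds the boundary values."""
    result = []
    for t in query_times:
        if t <= times[0]:
            result.append(values[0])
        elif t >= times[-1]:
            result.append(values[-1])
        else:
            # find the bracketing interval and interpolate linearly
            for (t0, v0), (t1, v1) in zip(zip(times, values),
                                          zip(times[1:], values[1:])):
                if t0 <= t <= t1:
                    result.append(v0 + (v1 - v0) * (t - t0) / (t1 - t0))
                    break
    return result

# Hourly demand samples (hour, MW) with a missing measurement at t=3.
hours = [0, 1, 2, 4, 5]
load_mw = [310.0, 305.0, 330.0, 360.0, 345.0]
print(interpolate_series(hours, load_mw, [3, 4.5, 6]))  # -> [345.0, 352.5, 345.0]
```

In practice such utilities would also handle time zones, unit conversion and data validation; the sketch shows only the interpolation and extrapolation step.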


COOPERATION WITH ACADEMIA AND KEY CLIENTS We collaborate with several universities and research institutes on research topics related to power systems, including the risks and reliability of power grids. DNV GL already cooperates closely with technical universities and mathematical institutes such as, in the Netherlands, CWI in Amsterdam, TU Delft and TU/e in Eindhoven, and, in Norway, NTNU in Trondheim and Statistics for Innovation (sfi)² in Oslo. In addition, DNV GL collaborates with the University of Strathclyde in Glasgow, Scotland. DNV GL has been proposed as a partner to support several newly proposed research projects under a recent Dutch NWO (Netherlands Organisation for Scientific Research) programme called URSES (Uncertainty Reduction in Smart Energy Systems) [9]. These include:

- Distributed intelligence for smart power routing and matching;
- Smart and robust support and use of electric vehicles in neighbourhoods with a high penetration of photovoltaic systems;
- Robust market mechanisms for uncertainties in smart energy systems;
- Decision support for multi-party power systems planning;
- Stable and scalable decentralised power balancing systems using adaptive clustering.

Each proposal involves a different combination of Dutch universities. The proposals are awaiting final approval, and several PhD students are expected to start on these topics in 2014/2015. The role of DNV GL in these projects is to ensure that the results obtained are valid and practical. Some of these projects already involve key customers. We welcome the opportunity to invite new clients to participate in our research or in demonstration projects.



CONCLUSION In order to mitigate the double risk trend of increased uncertainty and increased dependency on electric energy in power grids, independent quantitative assessments, performed with a validated suite of tools, are required by stakeholders and clients. The stakes are high, and roles and objectives are sometimes incompatible or even conflicting; technology and regulatory developments can have both positive and negative reliability consequences. A single tool is not sufficient for addressing all types of reliability issues for future power grids. Even as more off-the-shelf tools become available, expert judgment is needed to select the right tool, validate models, choose relevant data, and translate the results into information that answers the questions and allows the correct decisions to be taken. It is not only engineering experts who need to understand power systems, but also professionals in regulation, economics and decision-making. This calls for optimisation that includes stochastics, and moreover necessitates transparency, objectivity and independence. DNV GL is already working in different areas for and with clients to improve the reliability of future power grids. We follow a holistic, integrated approach, combining electrical engineering technology with mathematical techniques, our knowledge and experience in testing and risk assessment, and our understanding of markets and regulation.

POSITION DNV GL is aware of the complex reliability issues facing future power grids, and is preparing fundamental research, developing new tool suites and integrating disciplines. The role of DNV GL Research & Innovation regarding future grid reliability is to be a partner and catalyst for transforming innovative ideas into practical implementations. In other words, DNV GL will collaborate with, coordinate and support the development path from academia to TSOs/DSOs and other power grid stakeholders such as manufacturers and regulators. This requires a holistic approach that combines the capabilities of DNV GL Energy in power grid knowledge, engineering skills, and mathematical and economic insight, embedded in a decision support framework that addresses uncertainties. The full development process is covered, from understanding new analytical methods, through proof-of-principle studies, laboratory validation and de-risking, to benchmarking and implementation of pilot projects in the field.




REFERENCES

List of major power outages. Wikipedia. From:


Li, Ben. Report from WG C2.21 – Lessons Learnt. Workshop on Large Disturbances – CIGRE Session Paris 2012. August 27, 2012.


Bakshi, Shri A.S., et al. Report of the Enquiry Committee on Grid Disturbance in Northern Region on 30th July 2012 and in Northern, Eastern & North-Eastern Region on July 31st 2012. 2012.


Planning our electric future: a White Paper for secure, affordable and low‑carbon electricity, Secretary of State for Energy and Climate Change, 2011, from:


E-energy, ICT-based Energy System of the Future, from Leuchtturm_E-Energy_E_s4.pdf


What comes next for Power Generation & Grid Management, EPRI Journal summer 2013. From aspx?ProductId=000000003002001742


Evidence for Self-organized Criticality in Electric Power System Blackouts, B.A. Carreras et al., Hawaii International Conference on System Sciences, IEEE, January 2001.


UCTE. Final Report - System Disturbance on 4 November 2006. 2006.


The U.S.-Canada Power System Outage Task Force. Final Report on the August 14, 2003 Blackout in the United States and Canada. 2004.


UCTE. FINAL REPORT of the Investigation Committee on the 28 September 2003 Blackout in Italy. 2004.


DNV GL Technology Outlook 2020, from


Giorgio Bertagnolli: “Results of short-circuit tests carried out by high-power Laboratories”. CIGRE TC 12 colloquium, Preferential subject 2: Short-circuit performance of transformers, Workshop 1: Tests and Failures”; Budapest, 1999.



Risk related to Large Scale Implementation of Wind Power into a Regional Power Transmission System, Christopher J. Greiner, Johan Solvik, Yongtao Yang, Tore Langeland, ESReDA Conference 2012, Risk and Reliability for Wind Energy and other Renewable Sources, 15-16 May 2012, Glasgow, UK.

[9] magw/urses/urses.html



P. Pourbeik et al, “Review of the current status of tools and techniques for risk-based and probabilistic planning in power systems”. Cigré, Tech. Rep. WG C4.601, publication 434, Oct 2010.

Probabilistic Power Flow Simulation allowing Temporary Current Overloading, W.S. Wadman, G. Bloemhof, D. Crommelin, J. Frank, Proceedings PMAPS 2012, Istanbul, Turkey, June 10-14, 2012.


Betrouwbaarheid van elektriciteitsnetten in Nederland, Resultaten 2012 (Reliability of electricity grids in the Netherlands, 2012 results), Netbeheer Nederland (in Dutch), RMME-13L10440006, 26 April 2013.


Reliability Evaluation of Power Systems, R. Billinton, R.N. Allan, 2nd ed., 1996, Plenum Press.



IEEE Guide for Electric Power Distribution Reliability Indices, IEEE Std. 1366-2003.

CEER, 5th CEER Benchmarking Report on the Quality of Electricity Supply 2011, from EER_HOME/CEER_5thBenchmarking_Report.pdf



Increasing Quality of Supply of EDP through optimal and strategic Distribution Automation design, R. Oliveira, G.A. Bloemhof, A. Blanquet, CIRED 20th International Conference on Electricity Distribution, Prague, 8-11 June 2009, Paper 0459.


Optimizing storage placement in electricity distribution networks, J.M. van den Akker, S.L. Leemhuis, G.A. Bloemhof, International Annual Conference of the German OR Society 2012, September.


Storage Optimization in Distribution Systems, R. Cremers, G.A. Bloemhof, paper 180, 21st Cired June 2011.


Growders project, see also


Flex Power Grid Lab, from:


DSO Priorities For Smart Grid Standardisation, EDSO & Eurelectric, from: index.php?page=edso-s-publications


10-Year Network Development Plan 2012, ENTSO-E, from:


The energy island – an inverse pump accumulation station, W.W. de Boer, F.J. Verheij, D. Zwemmer, R. Das, EWEC 2007.


Why investments do not prevent blackouts, Daniel Kirschen, Goran Strbac, UMIST, Manchester, UK. The Electricity Journal, Vol. 17, No. 2, March 2004, pp. 29-36.




Cascading Failures: Extreme Properties of Large Blackouts in the Electric Grid, Paul D.H. Hines, Benjamin O’Hara, Eduardo Cotilla-Sanchez, Christopher M. Danforth, SIAM Mathematics Awareness Month 2011, From: complexsystemsHines.pdf


Requirements for advanced decision support tools in future distribution network planning, M.O.W. Grond, J. Morren, J.G. Slootweg, 22nd CIRED 2013, June, Stockholm, Sweden.


Applying A Splitting Technique To Estimate Electrical Grid Reliability, Wander Wadman, Daan Crommelin, Jason Frank, Proceedings of the 2013 Winter Simulation Conference.


Asset Management Decision Making using different Risk Assessment Methodologies, Technical Brochure 541, Cigré WG C1.25, June 2013.




DNV GL AS NO-1322 Høvik, Norway Tel: +47 67 57 99 00

DNV GL Driven by its purpose of safeguarding life, property and the environment, DNV GL enables organisations to advance the safety and sustainability of their business. DNV GL provides classification and technical assurance along with software and independent expert advisory services to the maritime, oil & gas and energy industries. It also provides certification services to customers across a wide range of industries. Combining leading technical and operational expertise, risk methodology and in-depth industry knowledge, DNV GL empowers its customers’ decisions and actions with trust and confidence. The company continuously invests in research and collaborative innovation to provide customers and society with operational and technological foresight. DNV GL, whose origins go back to 1864, operates globally in more than 100 countries with its 16,000 professionals dedicated to helping their customers make the world safer, smarter and greener. DNV GL Strategic Research & Innovation The objective of strategic research is through new knowledge and services to enable long term innovation and business growth in support of the overall strategy of DNV GL. Such research is carried out in selected areas that are believed to be of particular significance for DNV GL in the future. A Position Paper from DNV GL Strategic Research & Innovation is intended to highlight findings from our research programmes.

The trademarks DNV GL and the Horizon Graphic are the property of DNV GL AS. All rights reserved. ©DNV GL 12/2014 Design and print production: Erik Tanche Nilssen AS