
IEEE Power & Energy Magazine

Volume 10 • Number 5 • September/October 2012

for electric power professionals

http://magazine.ieee-pes.org


From the Editor By Mel Olken

Streams of Data Automating Knowledge and Information The ever-increasing sophistication required to operate the electric grid demands the rapid assimilation of captured measurements from installed management systems to ensure system reliability while reducing operating and maintenance costs. This paradigm is the focus of our issue devoted to automating knowledge and information. At the IEEE Power & Energy Magazine (P&E) Editorial Board meeting in Detroit at the PES General Meeting in 2011, Mladen Kezunovic of Texas A&M University offered a presentation related to the issues and challenges that are present as our industry moves forward and, as a result of that presentation, the Board accepted his proposal to guest edit a P&E issue devoted to the subject. As an adjunct to the presentations in the issue, it should be noted that the U.S. National Institute of Standards and Technology (NIST) is in the process of undertaking the development of protocols to assist in the preparation of standards to accomplish this integration. This work is coordinated through the Smart Grid Interoperability Panel (SGIP).

In This Issue This issue presents five feature articles, covering the myriad issues associated with this subject. Though Mladen will offer detailed descriptions of the articles in his guest editorial, allow me to introduce them in their order of appearance in the issue:
“The Situation Room” by Jay Giri, Manu Parashar, Jos Trehern, and Vahid Madani
“Operating in the Fog” by Patrick Panciatici, Gabriel Bareux, and Louis Wehenkel
“Metrics for Success” by Murat Göl, Ali Abur, and Floyd Galvan
“Measures of Value” by Tomo Popovic and Mladen Kezunovic
“One Step Ahead” by Ganesh Kumar Venayagamoorthy, Kurt Rohrig, and István Erlich.

The Berkshires, Part 2 Our issue’s “History” column returns to the Berkshire Mountains of Massachusetts, which, I might offer, are magnificent in September. The column by Tom Blalock, edited by Carl Sulzberger, is a continuation of that which appeared in our previous issue. It again focuses on William Stanley’s contributions to the electrification of Berkshire County in the late 19th and early 20th centuries. And yet again Tom Blalock has graced our pages, and it is hoped that he will continue to offer his contributions.

Cause and Effect The “In My View” column, authored by Stipe Fustar, president and CEO of Power Grid 360, is a fitting conclusion to the issue theme. In his column, Dr. Fustar looks at architecture and technology perspectives, interoperability perspectives, and the standards efforts underway to develop an understanding of the complexities associated with this all-important, though difficult, analysis. He concludes, “A common semantic model should be used to describe data and cause-effect action unambiguously.” Amen!

P&E Wins Grand Award IEEE Power & Energy Magazine won a Grand Award for the July/August 2011 issue in the 2012 APEX Awards for Publication Excellence. Even though this magazine was entered in the “Green” Magazines and Journals category for the issue titled “Getting Around: Transportation Goes Electric,” it was chosen as one of the 100 best entries in the competition and was honored with a Grand Award. According to APEX, only 100 Grand Awards, in 11 major categories, were presented. This is the highest recognition that APEX judges can present. This magazine is produced by the IEEE Power & Energy Society, and this award recognized Mel Olken as the editor-in-chief. IEEE staff for this magazine are Geri Krolin-Taylor, senior managing editor; Janet Dudar, senior art director; and Gail Schnitzer, assistant art director.



Guest Editorial By Mladen Kezunovic

Data Analytics Creating Information and Knowledge The feature articles in this issue are devoted to the emerging field of data analytics, a computational capability for extracting a cause–effect understanding of power system events. This knowledge is extracted from field measurements through analytical methods and, in many cases, involves the use of various data and power system models. Because it links the cause of an event with its consequence, it may be readily used by operators or in designing controllers to enhance power system operation. Data analytics solutions use field data obtained from various intelligent electronic devices located in substations and a variety of databases spread across the utility enterprise. The data analytics tools then convert the data to information and eventually information to knowledge. In this process, the knowledge of experts is formulated as a set of rules or equations and combined with computational models to provide the match between measured data and an event hypothesis. Once the data and the hypothesis are matched, the desired knowledge about the cause–effect relationship is inferred. Since the process of matching prior experience with measured data to obtain knowledge is often automated, the results are typically made available online and may be used in real-time decision making. The described process of converting data to knowledge using data analytics is illustrated in Figure 1.

Many existing applications in power systems are also focused on processing data, but only a few use innovative monitoring and control concepts enabled by data analytics solutions. To illustrate the trend, this issue of IEEE Power & Energy Magazine provides several examples of the advanced solutions. Since this is an emerging field, the articles are selected from user and research groups that closely collaborate with industry in demonstrating the benefits. The examples that follow are an important step forward and illustrative of the new trend, but there are many other similar ideas that did not make it into this issue due to practical publishing limitations. A careful selection of article authors and topics illustrates emerging data analytics for control center applications, enhanced security assessment and management, tuned state estimation, automated fault analysis, and renewable resource integration. The topics of the articles, shown in the context of automated data analytics, are depicted in Figure 1.

The first article, “The Situation Room,” discusses a suite of advanced data analytics solutions based on phasor measurement unit (PMU) measurements: a) angular separation, b) oscillatory stability, c) disturbance location identification, and d) islanding and resynchronization. The authors illustrate how such advanced solutions may be integrated with the legacy energy management system (EMS) design to provide a major enhancement in operators’ ability to make decisions. This requires advanced graphical representation of the data analytics results.

Figure 1. Data analytics for the conversion of data to knowledge.

The article provides operator views that incorporate combined graphical and geographical views. Correlating the electrical and spatial components of decision making enhances the ability to make prudent decisions. As examples of the synergies that have resulted from the advanced analytics and visualization framework, the operators’ ability to monitor operating limits, understand complex events, and enhance post-mortem analysis is discussed. This development leads to the new concept of enhanced situational awareness. The authors indicate that such an improvement “maximized human understanding and comprehension without increasing operator stress.” This is achieved through analytics that offer enhanced perception, comprehension, and projection, leading to better-informed decision making and action. Since the implementation of new EMS solutions carries substantial risk, the authors use an example from a utility company deployment to illustrate how the risk may be managed. The company decided to first implement a proof of concept (POC) and then proceed with full implementation. The POC included a PMU, as well as the stability and EMS analytics, and allowed for ample testing of equipment and software. As a result, a long list of POC benefits was realized and is shared with readers.

The second article, “Operating in the Fog,” provides a broad user perspective on the new data analytics. The pan-European network plans are outlined, and the main conclusion is that the uncertainty in short-term planning requires new tools to support operating decisions. This additional knowledge for decision making is envisioned as coming from better descriptions of neighboring systems, improved forecasting, and enhanced model accuracy. This leads to a discussion of the overall toolbox structure for future operator needs, which includes existing applications and new data analytics for security assessment. The security assessment tools are envisioned as being used for online decisions, but they will be widely supported by offline tools that help define security rules, validate dynamic models, and outline defense plans and restoration strategies.
To achieve this new way of handling uncertainties, a framework for contingency assessment including corrective and preventive actions is proposed. This approach is illustrated through several examples of how the tools may be used in critical operating conditions. While described at a high level of abstraction, the new data analytics clearly rely on better models, more up-to-date data, and knowledge from past experience.

The next article, “Metrics for Success,” illustrates data analytics applied to evaluate state estimator (SE) performance. As is well known, the SE is an indispensable tool for matching measurements with models to account for erroneous and missing data. The knowledge the authors use to evaluate SEs comes from combining experience with measurement design in existing SEs with new experience in using synchrophasor data. This leads to the concept of synchrophasor-assisted state estimation (SPASE), which allows for improvements based on the statistical properties of the measurements while taking into account model uncertainties. To make the point about how the new approach differs from the traditional one, the article explores network observability and bad data detection, two key design components of an SE. This leads to an analysis of what affects the accuracy of an SE, and measurement design and the selection of critical measurements are recognized as the key factors. As noted by the authors, these two issues become difficult to handle as the size of the SE design grows. The scaling up of the SE design happens when attempts are made to represent the entire transmission and distribution system, or an entire electricity market with all the participating players, using one unified power system model and a generalized SE. As the critical need to improve existing SEs is elaborated upon, the authors point to the importance of metrics that should be used to evaluate any improvements. The recommended metric for an SE solution is the number of iterations needed for the results to converge. While this was always known, additional insight is given to “distinguish such reasons,” and the quality of measurements and their design are pinpointed as the focus of additional metrics. The impacts quantified by the metrics are a) the objective function and the largest normalized residual, which quantify the impact of measurement quality, and b) the measurement system vulnerability, the pseudo-measurement ratio, and the SE accuracy, which quantify the impact of measurement design. With such insight, data analytics for the extensive evaluation of SE solutions have been developed and demonstrated using cases from a utility company. It becomes obvious that this type of data analytics is quite useful in making decisions about SE improvements using new measurements and their optimal placement. The use of the proposed data analytics as a metric for assessing measurement quality and design enables SE designers to make the right choices as new measurement infrastructures such as PMUs become available for use in SEs in the future.
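To make the metrics discussion above concrete, the following is a minimal numerical sketch of the classical weighted-least-squares building blocks those metrics rest on: the objective function and the largest normalized residual used for bad data detection. It is not the SPASE method described in the article; the measurement matrix, error covariances, and the injected gross error are illustrative assumptions.

import numpy as np

# A linearized (DC-style) measurement model z = H x + e with more
# measurements than states; H, R, and the injected bad data value are
# illustrative assumptions, not data from the article.
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, -1.0],
              [1.0, 1.0]])
R = np.diag([1e-4] * 4)                          # measurement error covariance
x_true = np.array([0.10, 0.05])
rng = np.random.default_rng(0)
z = H @ x_true + rng.normal(0.0, 1e-2, size=4)
z[2] += 0.3                                      # one gross (bad) measurement

W = np.linalg.inv(R)
G = H.T @ W @ H                                  # gain matrix
x_hat = np.linalg.solve(G, H.T @ W @ z)          # WLS state estimate
r = z - H @ x_hat                                # measurement residuals
S = R - H @ np.linalg.solve(G, H.T)              # residual covariance
r_norm = np.abs(r) / np.sqrt(np.diag(S))         # normalized residuals
print("objective J(x_hat):", round(float(r @ W @ r), 1))
print("largest normalized residual at measurement index:", int(np.argmax(r_norm)))

Running the sketch flags the measurement carrying the injected error, which is exactly the kind of quantity the article's measurement-quality metrics build on.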
The following article, “Measures of Value,” points out how the understanding applied in manual disturbance analysis can be translated into a data analytics solution executed automatically online. The core benefit is the ability to determine a cause–effect relationship between an event, such as a transmission line fault, and a consequence, such as an incorrect relay or breaker operation, in a matter of seconds. In this way, the nonoperational data obtained from digital relays and transient recorders are turned into operational knowledge available for operator decision making. The results of the data analytics processing can tell operators the basic information about the fault type and location, as well as whether the fault-clearing sequences were executed correctly and whether they included autoreclosing that cleared a temporary fault or a circuit breaker operation that isolated a permanent fault. Based on this result, obtained seconds after the event has occurred, operators are able to make key decisions about whether to restore the line or whether to issue a work order for a repair crew to go to an accurately located site and repair the damage. To provide such a powerful processing capability, this data analytics function uses the knowledge of experts to develop a model of expert reasoning that links cause–effect rules in a software solution called an expert system, which, in this case, is the core of the data analytics approach. The article illustrates how, once the experts’ knowledge is embedded in a software solution, the rules formulated by the experts are “fired” automatically for each new set of measurements. The measurements come from intelligent electronic devices (IEDs) located in substations that are triggered by such events. The firing of the rules results in a cause–effect analysis that presents operators with clear decision-making options to react in case inferior performance of the relaying system and/or circuit breakers requires their action. This data analytics benefit should be compared with the events of the 2003 blackout, when it took days and weeks to perform a post-mortem analysis of events that could have been identified in a matter of seconds with the proposed data analytics, enabling operators to react and perhaps contain the cascade that led to the blackout.
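The rule-firing idea described above can be pictured with a toy sketch: expert cause–effect rules encoded as predicates over a single event record assembled from triggered IED data. The field names, rules, and conclusions are illustrative assumptions, not the rule base of the actual expert system discussed in the article.

def rule_temporary_fault(ev):
    return ev["fault_detected"] and ev["breaker_opened"] and ev["reclose_success"]

def rule_permanent_fault(ev):
    return ev["fault_detected"] and ev["breaker_opened"] and not ev["reclose_success"]

def rule_breaker_failure(ev):
    return ev["fault_detected"] and not ev["breaker_opened"]

RULES = [
    ("Temporary fault cleared by autoreclose; line back in service", rule_temporary_fault),
    ("Permanent fault isolated; issue work order for the located section", rule_permanent_fault),
    ("Breaker failed to operate; backup protection review required", rule_breaker_failure),
]

def fire_rules(event):
    """Return the conclusion of every rule whose premises match the event record."""
    return [conclusion for conclusion, premise in RULES if premise(event)]

event = {                      # assembled from the triggered IED records (illustrative)
    "fault_detected": True,
    "fault_type": "A-G",
    "fault_location_km": 42.7,
    "breaker_opened": True,
    "reclose_success": False,
}
print(fire_rules(event))       # -> permanent fault: dispatch repair crew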
The final article, “One Step Ahead,” focuses on the data analytics needed for the integration and use of renewable resources such as wind power. Since wind is intermittent, the authors propose new data analytics for wind power forecasts that may be used for predictive control. This idea is already attracting several research groups, and many approaches using different forecasting techniques are being proposed. The authors introduce a simplified forecasting method that uses just the active and reactive power outputs of wind turbines to predict the next control action. A neural-network-based data analytics tool is developed and tested using data from multiple wind farms in Germany. An optimization scheme that takes into account load tap changers and shunt reactors is developed and tested using several cases of reactive power controllers embedded with the wind generators. This new data analytics tool for predictive control is incorporated in a system solution that, besides the wind farm, also has access to battery storage and a wind power balancing controller. The authors acknowledge the need for new data analytics to perform short-term wind power prediction on the order of seconds, minutes, and a few hours and its application in control centers. They also state, “This will become critical for the real-time operation of the electricity supply system as more and more wind power penetrates into it. The value of short-term wind power forecasting is high considering the reduction in power losses, as is maximizing the security and stability of the power system, especially when stochastic security-constrained optimal power flow is far from reaching control centers in the near future.” They also recognize that this solution may become quite attractive to wind power providers once short-term wind power forecast-based system applications become common in control centers, as the results “enable the maximization of revenue by minimizing penalties.”

In summary, all the articles have something in common that paves the way for future thinking about new data analytics:
Almost all of the applications use some new data not used in legacy solutions.
The analytics take advantage of the formulation of experts’ knowledge and improved models.
The advantages are obtained from being able to better understand cause–effect relationships.
The combined physical, electrical, and data model views of the results enhance decision making.
The applications help operators achieve more accurate planning and robust operations.
Since the software tools for data analytics are new, their integration into legacy solutions is critical.

In closing, this special issue has targeted data analytics as a promising development that will enhance future EMS solutions. It will, however, require close attention to the methods for capturing experts’ knowledge and translating it into analytical tools that can produce new value out of the abundance of data about the power system.



The Situation Room By Jay Giri, Manu Parashar, Jos Trehern and Vahid Madani

Control Center Analytics for Enhanced Situational Awareness The modern power grid is one of the most complex engineering machines in existence. Its millions of components comprise the entire electricity supply chain, from point of generation to the end consumer. Each of these pieces must work together, reliably, 24 hours a day, seven days a week, to power our homes and businesses. In 2001, the U.S. National Academy of Engineering voted to recognize the grid as the supreme engineering achievement of the 20th century. Making matters more complex is the reality that grid conditions are continually changing—every second, every minute, and every hour of the day. Changes in demand for electricity necessitate instantaneous changes in electricity production; consequently, voltages, currents, and power flows are dynamically changing at all times across the electricity supply chain. The challenge is to ensure these changing power system operating conditions stay within safe limits in the present and during potential future contingencies as well. Timely visualization of real-time grid conditions and response guidance are essential for successful grid operations.

Figure 1. An EMS operator in the early 1970s. (Photo courtesy of the Irish control center, Eirgrid.)

The largest blackout in the history of the North American power grid occurred on 14 August 2003. The subsequent investigation identified four root causes for this historic collapse: inadequate system understanding, inadequate situational awareness, inadequate tree trimming, and inadequate reliability coordinator diagnostic support. Wide-area situational awareness is vital to enabling a coordinated response among operators within a large interconnection. Large-scale disturbances (such as the August 2003 event) involve multiple, cascading aberrations that gradually weaken the grid and lead to eventual grid failure. Visibility of the early signs of the grid approaching a vulnerable state requires both a wide-area view and analytics to recognize the condition.

The 2003 event report gave a sudden new prominence to the term “situation awareness” or “situational awareness” (SA). Put very simply, SA means being constantly aware of the health of changing power system grid conditions. An advanced analytics and visualization framework (AAVF) is required to present the grid operator with real-time conditions in a timely, prompt manner. A good AAVF provides the ability to efficiently analyze and present data for decision making; this includes the ability to navigate and drill down to discover additional information, such as the impact and specific location of a problem. More important, AAVF provides the ability to identify and implement corrective actions, in order to mitigate potential risks to successful grid operations. In other words, operators do not just want to know “we have a problem”; operators want to know how to fix the problem. This article describes current and emerging capabilities and trends in control center analytics, operator visualization, and solution recommendations.

Evolution of Energy Management Analytics and Visualization

Figure 2. An EMS operator today. (Photo courtesy of Alstom.)

An energy management system (EMS) monitors and manages flows in the higher-voltage transmission network. Early control centers were hardwired analog systems with meters and switches; thumbwheels were used to change operating set points in the field. Modern-day EMS functions were initially developed in the 1970s with the advent of digital computers. The fast processing capabilities of computers were exploited to efficiently solve large, complex mathematical problems. Over the past several decades, these EMS functions have continually evolved. Figures 1 and 2 illustrate how an EMS operator’s reality has evolved from a hardwired, analog system to today’s digital system. EMS applications have evolved in line with the following business and operational objectives:


Real-time monitoring of grid conditions: The first EMS application implemented was supervisory control and data acquisition (SCADA). SCADA lets the operator visually monitor grid conditions from a central location and take manual action, if warranted.
Maintaining system frequency: The objective of load frequency control (LFC) is to automatically maintain system frequency as load changes by automatically modifying generation to meet demand. When it was introduced in the early 1960s, LFC was a pioneering “smart grid” EMS automation application.
Minimizing electricity production costs while following system load changes: Economic dispatch is used in conjunction with LFC to satisfy multiple objectives simultaneously, such as maintaining normal frequency, maintaining tie-line flows at contractual values, and dispatching generators to minimize total systemwide generation cost (a minimal dispatch sketch follows this list).
Real-time monitoring of network grid conditions: State estimation (SE) runs about every ten to 30 seconds and uses SCADA measurements with a network model to calculate a “best guess” of system conditions across the entire network, especially for network nodes not measured by SCADA. SE has become a must-run, critical EMS function, since it forms the foundation for subsequent network analytics and also provides a coherent gridwide view of conditions.
Performing what-if studies: Contingency analysis (CA) uses SE to carry out a series of “what-if” studies by simulating the effects of potential user-defined contingencies. A contingency is an unplanned loss of key grid components. CA assesses the potential overloads or problems that could consequently result. Operators usually have a dedicated display screen to show these results, as a heads-up to warn them of what may be lurking ahead.
Optimizing grid operating conditions: Optimization applications use SE to achieve desired operational objectives, such as minimizing transmission losses, flattening the grid voltage profile, and recommending corrective and preventive control actions.
Assessing grid stability: Recently, operators have been pushing the grid to operate closer to its limits. This is due in part to deregulated electricity markets that want to maximize utilization of all available transmission capacity. In some situations, this has resulted in the grid being pushed closer to its dynamic voltage and transient stability limits. Therefore, stability limits need to be updated in real time, a requirement that has led to the implementation of advanced dynamic simulation applications that simulate grid dynamic behavior in the face of disturbances. These stability applications also use SE and are computationally very intensive. They provide alerts to warn of situations that could cause dynamic instability of the grid.
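As a small illustration of the economic dispatch objective listed above, the sketch below dispatches three units with quadratic cost curves to a common incremental cost, ignoring unit limits and network losses. The cost coefficients and the load level are illustrative assumptions, not data from any particular EMS.

def equal_incremental_cost_dispatch(units, load_mw):
    """units: list of (b, c) from cost C(P) = a + b*P + c*P^2 in $/h; no limits or losses."""
    lam = (load_mw + sum(b / (2.0 * c) for b, c in units)) / sum(1.0 / (2.0 * c) for _, c in units)
    return lam, [(lam - b) / (2.0 * c) for b, c in units]

units = [(20.0, 0.05), (25.0, 0.10), (30.0, 0.02)]   # (b, c) for three units (assumed)
lam, outputs = equal_incremental_cost_dispatch(units, load_mw=600.0)
print(f"system incremental cost: {lam:.2f} $/MWh")   # every unit runs at this marginal cost
print("unit outputs (MW):", [round(p, 1) for p in outputs])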

Today’s EMS Figures 3 and 4 depict a typical modern-day EMS control center. Operators must monitor data on their consoles, coordinate with other staff members within the control center, coordinate with plant operators, and periodically exchange information with neighboring EMS operators. The various functional applications described above fall under the application domains indicated in Figure 3: SCADA, generation, transmission, dynamics, and so on. Figure 4 shows a typical EMS one-line diagram that features details relating to substations, lines, and circuit breakers; operators can manually open and close breakers from these displays.

Advanced EMS Analytics Using Synchrophasor Data

Figure 3. EMS applications: a functional overview. (Photo courtesy of Alstom.)

What Is a Synchrophasor?

Figure 4. Grid one-line diagram.

The North American Synchrophasor Initiative (NASPI) defines synchrophasors as follows: “Synchrophasors are precise grid measurements now available from field device monitors called phasor measurement units (PMUs). PMU measurements are taken at high speed (typically 30 observations per second—compared to one every 4 s using conventional technology). Each measurement is timestamped according to a common time reference. Time stamping allows synchrophasors from different utilities to be time-aligned (or “synchronized”) and combined together, providing a precise and comprehensive view of the entire interconnection. Synchrophasors enable a better indication of grid stress and can be used to trigger corrective actions to maintain reliability.”

Synchronized phasor measurements (synchrophasors) provide a phasor representation of voltage and current waveforms that shows a sinusoidal signal simply as a magnitude and phase angle, with an associated time stamp. Subsecond data rates mean the dynamic behavior of the grid can be readily assessed; this was not possible using the EMSs of the past. Accurate GPS time stamping allows voltage phase angle differences to be compared at two different parts of the grid, providing a summary indicator of power system stress.

Synchrophasor Analytics Wide-area management systems (WAMSs) that use synchronized measurement technologies can generate very high volumes of synchrophasor and synchroscalar measurement data. This massive amount of data needs to be converted into useful, operator-actionable information. The range of advanced analytics currently available includes:


angular separation
oscillatory stability
disturbance location identification
islanding and resynchronization.
One focus of analytics for synchronized measurement data is extracting information independently from a system model and without full observability. This approach is particularly valuable for large interconnections in which the individual system operators do not have full observability or models of the entire system.

Angular Separation Angle differences are indicative of the steady-state stress of the system. The angle difference across a transmission corridor increases either because of increasing power flow or because of weakening of the corridor through loss of transmission lines. Angle difference is in some cases a better measure of the corridor’s capability than power flow because it is related to both the steady-state and dynamic limitations of the corridor. Figures 5 and 6 are angle-separation displays indicating limit violation alarms and system stress.

Figure 5. Synchrophasor angle differences.

Figure 6. Wide-area angle monitoring showing areas of net supply (pink) and demand (blue). Strong contrast shows stress (e.g., southwest).
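A minimal sketch of this angle-separation analytic follows: wrap the difference between the time-aligned voltage angles reported by two PMUs at the ends of a corridor and compare it against alert and alarm thresholds. The threshold values and the sample angle series are illustrative assumptions, not operational limits.

import math  # not strictly needed here, kept for any further trig on the phasors

def corridor_angle_deg(theta_send_deg, theta_recv_deg):
    """Wrap the PMU voltage-angle difference into (-180, 180] degrees."""
    d = (theta_send_deg - theta_recv_deg) % 360.0
    return d - 360.0 if d > 180.0 else d

def classify(diff_deg, alert_deg=25.0, alarm_deg=40.0):
    mag = abs(diff_deg)
    return "ALARM" if mag >= alarm_deg else "ALERT" if mag >= alert_deg else "normal"

# Time-aligned angle samples (degrees) from PMUs at the two corridor ends (assumed values).
sending = [12.0, 14.5, 21.0, 33.0, 47.0]
receiving = [-5.0, -6.0, -7.5, -9.0, -10.0]
for a, b in zip(sending, receiving):
    d = corridor_angle_deg(a, b)
    print(f"corridor angle {d:6.1f} deg -> {classify(d)}")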

Oscillatory Stability The power system has several natural modes of oscillation. In normal operation, the modes are damped and of no concern for the operator. Situations can arise, however, in which oscillations become poorly damped or even negatively damped, leading to growing power swings that can split the system. The 1996 blackout in the western United States and Canada was a result of unstable oscillations. Oscillatory stability analytics allow the dynamic characteristics of observable modes to be extracted continuously from measurements of the ambient perturbations of the grid. These characteristics (mode frequency, mode damping, mode amplitude, and mode phase) can be used to alert operators to stability conditions that have been degraded, either because oscillations become poorly damped or the amplitude becomes large. This approach was first developed for the interconnection between Scotland and England in the United Kingdom in 1995 and has been used for real-time control room monitoring and alarming for more than 15 years. Operational guidelines are provided that recommend actions to be taken when an oscillation alarm occurs. Since synchronized measurements are accurately time-stamped, it is possible to compare the amplitude and phase of oscillations across a network. This mode shape of the grid can be presented for each mode of oscillation, indicating the groups of generators that are oscillating in phase or out of phase and showing where the amplitude of the mode is most significant. Figure 7 is an oscillatory stability management display that shows oscillation frequencies, damping, mode shape, and alarm status. Measurement-based analytics are available that enable operators to identify whether an area is contributing to an oscillation or is just passively responding. This technology can be used at the regional level to enable an operator to understand if anything can be done about a problem or if the best course of action is simply to ensure that neighboring regions are aware of it.
The technology can also be used on smaller regions of the grid to identify whether a specific generator or group of generators is the primary contributor to a mode and is therefore the location at which remedial action should be taken. This analytic lets the operator mitigate an oscillation problem in real time, without the burden of extensive offline analytical studies. Figure 8 shows the specific location of the greatest oscillation stress.
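As a simplified illustration of extracting mode characteristics from synchronized measurements, the sketch below fits a single damped sinusoid to a synthetic ringdown sampled at a 30 samples-per-second PMU rate and reports the mode frequency and damping ratio. Production oscillatory stability tools estimate modes continuously from ambient data with more robust algorithms; the signal, the curve-fit approach, and the initial guesses here are illustrative assumptions.

import numpy as np
from scipy.optimize import curve_fit

def damped_sine(t, amp, sigma, f_hz, phase):
    return amp * np.exp(-sigma * t) * np.cos(2.0 * np.pi * f_hz * t + phase)

t = np.arange(0.0, 15.0, 1.0 / 30.0)                  # 30 samples/s PMU rate
clean = damped_sine(t, 50.0, 0.05, 0.35, 0.4)         # synthetic 0.35-Hz inter-area swing (MW)
measured = clean + np.random.default_rng(1).normal(0.0, 1.0, t.size)

popt, _ = curve_fit(damped_sine, t, measured, p0=[40.0, 0.1, 0.3, 0.0])
amp, sigma, f_hz, phase = popt
zeta = sigma / np.sqrt(sigma**2 + (2.0 * np.pi * f_hz) ** 2)   # damping ratio from sigma, omega
print(f"estimated mode: {f_hz:.2f} Hz, damping ratio {100.0 * zeta:.1f} %")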

Disturbance Location Identification The angular changes in the system can be used together with frequency changes to determine whether a disturbance has occurred, the magnitude of its impact, and which measurement point is closest to the triggering disturbance. Even with a relatively sparse penetration of measurements in an interconnection, the disturbance can be identified and observed at a wide-area system level. Thus, an operator in the interconnection has information about external events such as proximity to his own system, impact on his system, and the level of risk posed. Figures 9 and 10 illustrate how disturbance location alerts on the grid are displayed.

Figure 7. Oscillatory stability in the Western Electricity Coordinating Council (WECC) region of North America.

Figure 8. Oscillatory stability drill-down in the Canada region showing raised mode amplitude.

Islanding and Resynchronization Synchronized measurement information allows for a very rapid identification of system separation by observing excessive frequency deviation and/or by freely rotating voltage angle vectors. The operator immediately sees the creation of electrical islands and can visually assess the extent of the imbalances in the islands. Identification includes out-of-step conditions that are not identified through topology-based approaches. The analytic provides an indication of a problem, and also recommended actions that are necessary to ensure the safe and effective restoration of the grid. Following resynchronization, the operator has immediate feedback on the success of the operation. If successful, the angles stay aligned; if not, they will drift apart, and the system will split again. Figures 11 and 12 are displays that indicate normal and islanded conditions of the grid; an island is indicated by a group of vectors that rotate at a different, independent speed.
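A highly simplified sketch of this islanding indication follows: if the mean frequency reported by the PMUs in one area drifts away from nominal while the rest of the interconnection does not (equivalently, if that area's voltage angles rotate continuously relative to the others), the area is flagged as a likely island. The regions, frequencies, and threshold are illustrative assumptions.

FREQ_THRESHOLD_HZ = 0.1   # sustained deviation from nominal that suggests separation (assumed)

def detect_islands(freq_by_region, nominal_hz=60.0):
    """freq_by_region: {region name: mean PMU-reported frequency in Hz}."""
    return {r: f for r, f in freq_by_region.items()
            if abs(f - nominal_hz) > FREQ_THRESHOLD_HZ}

snapshot = {"North": 60.02, "Central": 59.98, "South": 59.62}   # Hz (illustrative)
for region, f in detect_islands(snapshot).items():
    imbalance = "generation surplus" if f > 60.0 else "generation deficit"
    print(f"{region}: {f:.2f} Hz -> likely islanded ({imbalance})")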

The AAVF A key feature of an AAVF is its ability to synthesize information from multiple sources and quickly and selectively mix and match that information to render a composite display of multiple overlays on a single, unified display screen. These sources of information could be advanced analytics or data feeds from external sources. The goal is to present the operator with a comprehensive, holistic portrayal of current grid conditions.

Figure 9. Disturbance location view showing triggering points.

Synergies of Emerging and Traditional EMS Analytics Figure 13 illustrates today’s advanced EMS. With the introduction of fast synchrophasor measurements in the control center, the EMS now has real-time visibility into the dynamics of the power system. This complements the visibility of the steady-state behavior of the grid provided by traditional SCADA measurements. The functions that appear on the left are the traditional EMS functions that have evolved over the past few decades. The functions on the right are the new, synchrophasor-based analytics being introduced to augment the EMS. They do not require a user-built network model since the actual power grid serves as the model. These emerging synchrophasor applications are capable of quickly detecting and alerting the operator when sudden disturbances occur in the grid. More important, they can characterize the dynamic stability (i.e., mode frequency and damping information) based purely on synchrophasor observations following a disturbance. This type of “measurement-based,” real-time stability analysis could help avoid major, widespread outage events resembling the 1996 blackout in the western U.S. interconnection, where poorly damped oscillations were observable for several minutes prior to eventual system separation.

Figure 10. Disturbance location showing regional level detail.

Beneficial Synergy from the Marriage of Control Center Analytics Many of the new synchrophasor analytics complement and corroborate traditional EMS analytics and can therefore be used together to jointly validate and fine-tune the analytics themselves for improved precision and accuracy. For example, the oscillation-monitoring analytic using a network model can be “married” with its counterpart measurement-based analytic to compare results and gradually improve the network dynamic model parameters. Other model- and measurement-based analytic “marriages” can also provide beneficial synergies; these include SCADA and WAMS, state estimator and state measurement, and voltage stability monitoring. An additional key benefit of model-based analysis is that it can perform “what-if” studies on potential contingencies and simulate transmission stress to determine the operational limit (OL) for a particular transmission corridor. This OL can then be used in the faster measurement-based analytic to quickly alert the operator when the limit is being approached. Working in tandem over time, the various EMS and synchrophasor analytic “marriages” will produce more accurate and reliable results, as well as up-to-date real-time operational limits and faster limit alerts. This will further enhance operator trust in the analytics; this trust is an absolute necessity if we want the operator to subsequently “pull the trigger” and implement controls to protect the grid.

Figure 11. Normal interconnected operation shown in islanding view.

Synthesized Alarms and Events: Composite Event Processing The fastest rate of traditional EMS alarms has been the two- to four-second SCADA rate. Synchrophasor data are retrieved at a much faster, subsecond rate. It is therefore possible that operators will be overwhelmed with too many subsecond alarms during a system disturbance. Such challenges are addressed in an integrated alarm system framework that creates a single higher-level composite alarm message that intelligently summarizes the situation. For example, if there is a sudden increase in the volume of low bus voltage and high reactive power flow alarms from a particular region, this is a strong indicator of an imminent voltage collapse in that region. The composite event processing (CEP) framework synthesizes and corroborates EMS and SCADA information, along with PMU and synchrophasor analytics data, to quickly highlight the nature and location of a grid disturbance more accurately and reliably. When a disturbance is detected, the challenge is to quickly provide actionable information so as to promote prompt operator decision making.

Figure 12. Islanded condition. Canada phasors rotate relative to the United States, and colors show frequency deviation.

Enhanced Postevent Analysis: Synthesis of EMS and Historical Synchrophasor Data One of the earliest benefits realized from PMU measurements was their use in after-the-fact forensic analysis of major system disturbances, since PMU data have a key advantage (over SCADA) of being precisely time-stamped. Time-stamped synchrophasor data also provide the ability to observe the fast dynamic response of the grid, so that off-line calibration studies can be performed to refine and improve dynamic model data. Unlike SCADA data, subsecond synchrophasor data are voluminous data streams and pose a major storage challenge. The most common types of data storage schemes are:
Continuous rolling archive: all the incoming data are stored at their native data rate (typically 50 Hz or 60 Hz) over a predefined duration (typically six to 12 months).
Long-term event archive: a window of data recording a disturbance event, i.e., a contiguous time slice of preevent and postevent data.
Down-sampled rolling archive: a long-term archive of down-sampled data (typically once per second), for more efficient storage of older data.
If a sudden disturbance occurs, the operator can use the PMU continuous rolling archive, along with the online EMS data historian, to quickly recreate the immediate past on the fly to discover more about what just happened. This promotes efficient problem identification and improved, confident decision making.
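To give a feel for the storage challenge just described, here is a back-of-the-envelope sizing sketch for a continuous rolling archive. The PMU count, channel count, bytes per value, and retention period are illustrative assumptions; real PDC archives add status and quality fields and usually compress the data.

pmus = 200                 # PMUs streaming to the control center (assumed)
channels_per_pmu = 8       # e.g., voltage/current phasors plus frequency (assumed)
bytes_per_value = 8        # magnitude and angle stored as two 4-byte floats (assumed)
rate_hz = 60               # native reporting rate
retention_months = 6       # rolling-archive retention

seconds = retention_months * 30 * 24 * 3600
stream_mb_per_s = pmus * channels_per_pmu * bytes_per_value * rate_hz / 1e6
archive_tb = stream_mb_per_s * seconds / 1e6
print(f"input stream ~ {stream_mb_per_s:.2f} MB/s, "
      f"{retention_months}-month raw rolling archive ~ {archive_tb:.1f} TB")

Under these assumptions the raw archive runs to roughly a dozen terabytes, which is why the down-sampled and event-window schemes listed above are used alongside the full-rate store.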

Situation Awareness for Improved Grid Operations “Situation awareness (SA) is, simply put, understanding the situation in which one is operating.” SA can be more comprehensively defined as the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future. The inherent challenge of SA is to maximize human understanding and comprehension without increasing operator stress. SA consists of the following stages:
1. perception of elements in the environment
2. comprehension of the current situation
3. projection of future status.
These stages are followed by decision making and, finally, action. Power system SA visualization accounts for multiple streams, or axes, of information, including:
spatial and geographical information
voltage levels
temporal information
functional information
substation information
team SA with neighbors.

Figure 13. Holistic advanced EMS functions.

Figure 14. Wide-area system overview.

If these visualization axes were spokes of a wheel, rich information on each spoke would make the wheel stronger, and decisions could be made more confidently to steer away from potential adversity. The intelligent synthesis of information from various perspectives improves the operator’s capabilities and confidence to make prompt and correct decisions. This forms the very foundation of the AAVF. A frequently cited human limitation has been described as Miller’s “magical number seven, plus or minus two.” According to Miller, the operator can typically handle only five to nine such “chunks” of information. As the saying goes, “A picture is worth a thousand words.” Taking that a step further, the correct picture is worth a million words. In other words, delivering relevant, actionable information that demands immediate operator attention is infinitely more useful. The path to developing the “correct” picture starts with organizing the visual presentation around operator goals. In other words, what result is the operator seeking by performing this task? Use cases need to be developed to document the specific actions an operator takes in order to achieve a specific goal. Recent trends that are advancing visualization capabilities for power grid operators include:

Figure 15. Call center crew information.

geospatial displays and geographic information systems (GISs)
use of MS Virtual Earth, Google Maps, and Google Earth
visual correlation techniques
using Citrix to share common displays
using iPad displays for mobile, decentralized decision making.

Perception, Comprehension, and Projection The following are displays of recent developments in advanced visualization capabilities that have been developed using Alstom’s e-terravision product. This visualization framework synthesizes data from multiple diverse sources and provides composite displays of multiple layers on a single display screen. This represents a major stride in advancing SA for the operator as it significantly enhances perception, comprehension, and projection capabilities. Figure 14 is a typical wide-area overview display of conditions across the grid, including neighbors. Figure 15 shows input from a crew in the field, overlaid on a map, to indicate the location and nature of a grid problem. Figure 16 shows an overlay of an incoming weather front and the regions with the highest probabilities of a contingency.

Figure 16. Moving weather fronts: radar and lightning.

Decision Making and Action To reiterate, operators do not just want to know that there is a problem; they want to know how to fix the problem. Model-based analytics can perform what-if studies to examine alternate projected scenarios of future grid conditions. These analytics determine the current critical system operating limits online, the security margin, and how the margin is trending over time.


Measurement-based analytics can quickly identify a problem in the grid and its location, and model-based analytics can determine the appropriate corrective actions. A key benefit of model-based analytics over measurement-based analytics is that they can identify and recommend solutions to problems, i.e., propose a course of action to mitigate problems. This provides operators with actionable information— precisely what they seek. For example, Figure 17 shows (in orange) regions of potential voltage instability and (in magenta) the regions where controls should be implemented to alleviate the problem. Figure 18 shows how an operator can use the cursor to lasso a geographical region around a problem area free-hand (the dotted line) so as to determine what controls are currently available to fix the problem in that region.

Shared Decision Making Another example of facilitating enhanced decision making is providing utility staff and their neighbors shared, common displays of current grid problems so as to jointly reach a consensus on actions to be taken, if needed. Tablet computers let neighbors quickly view the same, shared SA display screens, enabling decentralized problem solving and decision making. Shared iPad displays can be used by neighboring utility staff members (operators, engineers, and managers) to jointly assess a problem condition and synergistically decide on the actions to take. This facilitates prompt, transparent, shared decision making by a larger group of affected stakeholders. Figure 19 shows such a shared iPad display.

Figure 17. Voltage instability and resources.

Figure 18. Lasso area to identify available controls.

Risk Management for Production System EMS Deployment Establishing a proof of concept (POC) facility is a vital step toward accelerating the deployment of a production-grade power grid monitoring system incorporating the latest synchrophasor technology. Figure 20 presents simplified process flow requirements demonstrating the interdependency of compliance and risk management. Implementation of a POC facility should follow these guiding principles.

The POC Facility Figure 21 presents an overview of a POC facility. The POC streamlines implementation of systemwide, synchrophasor-based monitoring and advance warning systems that will significantly enhance existing grid-monitoring capabilities and improve grid efficiency, reliability, and security. The POC serves multiple purposes and accelerates modernization of electric transmission functions. In addition to facilitating the development of utility standards, the setting of point templates, and the debugging of the platform, the POC serves as a training facility for grid operators, similar to flight simulators for pilots and astronauts. POCs are engineered to act as small-scale replicas of the power system. They consist of PMUs provided by multiple manufacturers, fault recorders capable of generating and streaming synchrophasor measurement data in accordance with the IEEE C37.118.1 protocol, network switches, routers, clock manufacturers, and IEEE 1588 client devices. They utilize IEEE 1588 and Inter-Range Instrumentation Group format B (IRIG-B) time synchronization as well as various methods for mitigating signal impairment.

Figure 20. Risk management process overview.

Figure 19. iPad display for shared, decentralized decision making.

The POC validates synchrophasor system interoperability and functional performance. By engineering, testing, and demonstrating a production-grade system at the POC, the industry benefits by addressing solutions to interoperability issues among various products and applications that need to comply with evolving industry standards. The POC facility needs to be flexible enough to allow compatibility and interoperability testing for industry standards such as IEEE C37.118.1, IEEE C37.118.2, and IEEE C37.238 (IEEE 1588) and to facilitate ongoing efforts with the IEEE phasor data concentrator (PDC) guide in parallel with PDC production deployment efforts.

Figure 21. POC overview.

One such test facility has been established at Pacific Gas and Electric Co. (PG&E) in San Ramon, California, in collaboration with academic institutions, manufacturers, consultants, reliability coordinators, and utility experts. Tests are underway at this facility on PMUs from four PMU product manufacturers in addition to PDCs; parallel tests are taking place on PG&E’s upgraded EMS. The POC, along with other established test facilities, has provided a platform for gathering the knowledge needed to provide the industry with direction and a fast-track process for maturing standards such as IEEE C37.118.2, IEEE C37.238, IEEE C37.242, IEEE C37.244, and IEC 61850-90-5. Preliminary lessons learned from the POC facility have been shared with the appropriate industry communities and government agencies to inform them of the development of best practices for enhancing grid SA. The POC will also be used as a training ground for operators and dispatchers learning to work with this advanced new technology. The testing, performed by a closely integrated project team made up of industry experts and PG&E staff members, has thus far advanced thinking around the use of PMUs by:
identifying the need for more comprehensive standards for PMUs and PDCs (these findings have been coordinated with the appropriate industry liaisons for timely inclusion in establishing industry standards; IEC 61850-90-5 was recently approved as a standard for PMU data communications and is implemented at the POC)
identifying field equipment and network compatibility issues that affect time synchronization
performing precision time testing and performance compatibility for IEEE 1588 and IRIG-B (both telecommunication and power profiles) requirements in a multifacility (substation) environment
conducting cognitive task analysis using operator use cases to ensure that synchrophasors provide operators with actionable information
performing preliminary validation and correction of field measurements using synchrophasor applications
establishing a hands-on training and learning environment for diverse groups of users, technical staff members, and management executives
developing a framework for preparation of “set points,” templates, training modules, and so on
providing grid operators, engineering, and operators with early exposure to EMS visualization tools and an opportunity to recommend system enhancements
helping vendors gain a better understanding of the anticipated system performance of a fully integrated product.

Figure 22. Enhanced control center analytics at the POC.

Figure 22 shows the suite of enhanced control center analytics that has been implemented at the PG&E POC facility. Figures 23 and 24 show sample displays from the POC. Figure 23 is a high-level view of groups of PG&E generators that are coherently swinging in relative unison with each other: the northern and central generators swing together, while the southern generators form their own coherent group. This shows that in an emergency, the south-central boundary is a natural cut set to consider for partitioning the grid without loss of synchronism. Figure 24 shows a wide-area view of voltage violations in the California grid; part (a) shows the predisturbance, normal voltage conditions, and part (b) shows the postdisturbance voltage violation regions. Red and yellow denote low voltages and blue denotes high voltages.

Summary This article provides an overview of an analytics and visualization framework for a control center grid operator and the implementation of emerging advancements in technology and analytics. These advancements provide grid operators with wide-reaching visibility into the status of the grid, as well as the ability to predict and plan for potential problems that may be lurking around the corner. Speed is of the essence when it comes to assessing the cause of grid problems and—more important—quickly implementing corrective actions. As the volume and frequency of grid measurement data grows—especially with the growth of subsecond synchronous PMU measurements—it is critically important to transform this imminent data tsunami into actionable information that can be concisely visualized on an operator’s screen. Today’s AAVF is a major step in this direction. It collects data from a variety of disparate sources; filters, collates, and correlates this vast amount of data; and provides effective, concise display visuals for EMS operators. This further empowers grid operators to do what they do best: keep the lights on.

Figure 23. Coherent generator regions.

Acknowledgments Douglas Wilson (Psymetrix, United Kingdom), Jean-Louis Coullon (Alstom Grid, France), and Adam Pratt (Alstom USA) are acknowledged for their review contributions.


For Further Reading
M. R. Endsley and D. J. Garland, Eds., Situational Awareness Analysis and Measurement. Mahwah, NJ: Lawrence Erlbaum Associates, July 2000.
J. Giri, “Wanted: A more intelligent grid,” IEEE Power Energy Mag., Jan./Feb. 2009, pp. 34–40.
M. Kezunovic, C. Zheng, and C. Pang, “Merging PMU, operational, and non-operational data for interpreting alarms, locating faults and preventing cascades,” in Proc. 43rd Hawaii Int. Conf. System Sciences (HICSS), Jan. 5–8, 2010, pp. 1–9.
V. Madani, D. Novosel, P. Zhang, A. Meliopoulos, and R. King, “Vision in protection and control area—Meeting the challenges of 21st century,” in Proc. IEEE PSCE, 2006, pp. 1345–1355.

Figure 24. Voltage violation regions caused by a disturbance.

D. Wilson, K. Hay, P. McNabb, J. W. Bialek, Z. Lubosny, N. Gustavsson, and R. Gudmansson, “Identifying sources of damping issues in the Icelandic power system,” PSCC paper, Glasgow, UK, 2008.

Biographies Jay Giri is with Alstom Grid, Redmond, Washington. Manu Parashar is with Alstom Grid, Redmond, Washington. Jos Trehern is with Psymetrix, Edinburgh, United Kingdom. Vahid Madani is with Pacific Gas and Electric Co., San Francisco, California.



Operating in the Fog By Patrick Panciatici, Gabriel Bareux and Louis Wehenkel

Security Management Under Uncertainty Over the last ten years, we have heard so often in conferences, seminars, and workshops that the power system will soon be operated very near to its limits that this statement has become a cliché. Unfortunately, it is no longer possible to comply with the classical preventive N-1 security standards during all of the hours in a year. The system is indeed no longer able to survive all single faults without postfault actions. More and more corrective (i.e., postfault) actions are defined and prepared by operators, and the “cliché” is now a reality. To be more precise, it is no longer possible to maintain the N-1 security of the system at all moments by using only preventive actions, and the number of hours during which the system requires corrective actions to be secure is increasing. More and more new special protection schemes (SPSs) are being deployed to implement some of these corrective actions automatically. Devices such as phase-shifting transformers (PSTs) and static var compensators (SVCs) are added to the system to increase its controllability. As a result, the system becomes more and more complex.

This state of affairs has various causes that will not disappear in the near future. One is that it is more difficult than ever to build new overhead lines because of the “not in my backyard” (NIMBY) attitude. People are more and more afraid of hypothetical electromagnetic effects, or they just don’t like to see big towers in the landscape. This is particularly the case in protected areas, which are becoming more and more numerous around Europe. It is very difficult to explain the need for new interconnection lines to people who already have access to electricity at a reasonable price and with high availability. An increase in European social welfare with a positive feedback for the European economy, and hopefully for all European citizens, is a concept that is too theoretical compared with the negative local impact. Alternative solutions are technically complex, costly, and need even more time to be deployed.

The second main reason is the massive integration of renewable but generally intermittent generation in the system. Power flows in the grid are created by differences in location between power sinks and sources. With a significant amount of intermittent power generation, the predictability of the sources (location and levels of power injections) decreases and strongly affects the predictability of power flows. Furthermore, these new power plants are generally small units connected to the distribution grid. Transmission system operators (TSOs) therefore have difficulty observing these power injections, and they have no direct control over them. Another factor is the inconsistency between the relatively short time needed to build new wind farms (two or three years) or install photovoltaic panels (months) and the time it takes to go through all the administrative procedures required to build new lines (more than five years). Some TSOs have proposed that their regulators implement mechanisms that encourage the installation of these new generators in areas where the grid has enough spare capacity to accommodate the new injections. Unfortunately, changing the regulatory framework is difficult. Such mechanisms could only take the form of incentives, and each producer must find the optimal balance between the cost of accessing the grid and the cost of the primary energy. In addition, these mechanisms would only solve some local problems.
At the European level, the best locations for wind farms are mostly along the coasts and offshore, while for photovoltaic generation they are in the south of Europe. Since these locations do not generally match those of the large load centers, a transmission network is still required, and this network will still have to cope with the variability of the power flows.

The third reason is linked to the liberalization of the electricity markets. Generators, retailers, and consumers view the transmission system as a public resource to which they should have unlimited access. This approach has the desirable effect of pushing the system toward maximization of the social welfare and an optimal utilization of the assets. This optimization is limited by security considerations, however, because large blackouts are unacceptable in our modern societies due to their huge economic and social costs. Since TSOs are responsible for maintaining the security of the supply, they must therefore define the security limits that should be respected. As in any constrained optimization problem, the optimal solution toward which the market evolves tends to be limited by these security constraints. The stakeholders therefore perceive them as constraining their activities and reducing European social welfare. A transparent definition and assessment of the distance to these security limits thus becomes of paramount importance.

To maintain the security of the supply in this context, TSOs must adapt the architecture of their transmission systems by considering the following technologies:
long-distance HVac underground cables with large reactive compensators
HVdc underground cables in parallel with the ac grid, with smart controls of the ac/dc converters
HVdc grids, first to connect offshore wind farms efficiently and then to provide cheaper interconnections between distant areas.
Meanwhile, TSOs will try to optimize the existing systems by adding more and more special devices such as the PSTs and SVCs mentioned above, along with advanced controls and protection schemes. While demand response could offer new ways to control the system, this flexibility will require a rethinking of current operating practices; TSOs will have to assume that part of the generation is an uncontrollable exogenous stochastic variable while part of the load is controllable.

The Need for New Methods and Tools


The previous section suggests that the complexity of transmission system management will continue to increase. In this context, defining security limits and measuring the distance between an operating point and the nearest security limit become more and more difficult. In this article, we present some ideas about new methods and tools that can reliably assess the security of the pan-European transmission network as its complexity increases. The basic functional needs for these tools are:
- the ability to obtain or construct realistic pictures of the state of the system over different time frames (in real time, intraday, a day ahead, and so on)
- the ability to define reliability criteria and security limits
- the ability to assess the security of the system by running time domain simulations.

Probabilistic methods (e.g., Monte Carlo methods) could be applied to define these limits because they can deal with the complexity of a system that is nonlinear, nonconvex, and discontinuous. The fast algorithms that will build base cases and run time domain simulations could be at the core of these Monte Carlo methods. As previously mentioned, the complexity of the system is increasing, and we want to have a more robust and accurate assessment of the security margins. Some of the approximations used in the standard methods and some of the tools used to assess security should therefore be reviewed. We have identified three main categories, which we discuss below.

Description of the Neighboring Systems

The operational tools used by each TSO rely on rather poor descriptions of the neighboring systems, in particular for the estimation of the state of the systems and of their security in real time. In the past, these descriptions were sufficiently accurate because the variability of the external systems was very low and because it was therefore possible to define time-invariant equivalent models that gave sufficiently accurate results when assessing security. Moreover, the system was operated most of the time with security margins that were large enough to ensure that some inaccuracies in the state estimation were of no consequence. But this is no longer the case: the optimization of resource utilization produced by the liberalized electricity markets causes much larger power transfers over longer distances, and these interchanges make the national systems much more interdependent. Electric phenomena no longer stop at administrative borders. Today, TSOs are trying to improve their state estimators by incorporating larger and larger parts of neighboring networks in their real-time IT systems, both for the supervisory control and data acquisition (SCADA) system and the energy management system (EMS). In the medium to long term, this approach is not very realistic. It is indeed nearly impossible, even with a large amount of manpower, to maintain a valid model of the neighboring systems because of all the maintenance and upgrade activities taking place in these systems. Certain alternative solutions that use the concept of a hierarchical state estimator should be more efficient and more robust.

Forecast States

For the day-ahead and intraday security assessments, TSOs must build base cases from which they can run “what if” analyses. The standard method for developing these base cases relies on forecasts of load and generation at each bus of the network. A power flow combined with a contingency analysis is then run on the basis of these forecast injections. Potential corrective actions are difficult to take into account with this approach, however. For example, the contingency analysis usually does not take into account the optimal redispatch of generation after a contingency to suppress an overload. The generation forecast is generally produced using a very poor model based on a static merit order. With more and more active intraday electrical markets, producers maximize their profits by playing on the short-term market, making the schedules of power plants across Europe more difficult to predict a day in advance. Moreover, wind power injections are difficult to forecast accurately more than a few hours ahead. With massive integration of wind power in Europe, conventional generators have to be adjusted to balance the system. Each individual generation node thus becomes more volatile. We propose using advanced optimization methods to build realistic base cases, in particular mixed-integer programming to deal with discrete variables such as the status of generating units. We will not only propose more accurate models for devices such as capacitor banks and on-load tap changers but also models that take into account possible remedial actions such as topology changes.
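To make the mixed-integer idea concrete, here is a deliberately tiny sketch of base-case construction with discrete unit statuses. It is not the tool proposed in the article: it assumes the open-source PuLP modeler is available, uses invented unit data, and ignores the network, ramping constraints, and remedial actions entirely.

```python
# Illustrative only: a toy base case with discrete unit statuses, assuming the
# open-source PuLP MILP modeler is installed. All unit data are invented.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

units = {  # name: (Pmin MW, Pmax MW, cost per MWh) -- hypothetical
    "nuclear": (400.0, 900.0, 10.0),
    "ccgt":    (100.0, 400.0, 45.0),
    "peaker":  (20.0, 150.0, 90.0),
}
forecast_load = 1150.0  # MW, forecast for the studied hour

prob = LpProblem("base_case", LpMinimize)
u = {g: LpVariable(f"on_{g}", cat=LpBinary) for g in units}   # unit status (discrete)
p = {g: LpVariable(f"p_{g}", lowBound=0) for g in units}      # dispatch (MW)

prob += lpSum(units[g][2] * p[g] for g in units)              # minimize generation cost
prob += lpSum(p[g] for g in units) == forecast_load           # power balance
for g, (pmin, pmax, _) in units.items():
    prob += p[g] <= pmax * u[g]                               # produce only if committed
    prob += p[g] >= pmin * u[g]                               # respect technical minimum

prob.solve()
for g in units:
    print(g, "on" if value(u[g]) > 0.5 else "off", round(value(p[g]), 1), "MW")
```

The point is only that on/off statuses enter naturally as binary variables, something a purely power flow based base case cannot represent.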

Model Accuracy

In static security assessment, a power flow computation gives the stabilized postcontingency state, and the assessment of security is done on the basis of that state. The underlying assumption is that a stable trajectory exists between the initial state and the stabilized postcontingency state. This means that all dynamics are stable and can be neglected. In several countries, however, the dynamic behavior of the system can no longer be neglected. When operating the system close to its limits, unstable dynamic phenomena can appear after a contingency, before the static overload problems that are more “easily” manageable using remedial actions. A robust security assessment should check that dynamic phenomena are acceptable. Moreover, when many dynamic devices with complex controllers (dead band, limiters, and so on) interact, it is nearly impossible to predict the stabilized postcontingency state without running a time domain simulation. The proposal is therefore to use time domain simulations to assess security and—if needed from a computational point of view—to use simplified time domain simulations rather than purely static approaches. For detailed time domain simulations, the challenge is to find the right balance among different objectives for very large
power systems. The first objective is simulation fidelity. The time domain simulation method must accurately simulate the system described by a set of differential algebraic equations. In particular, numerical integration methods that approximate unstable dynamics by stable ones are not acceptable. The second objective is computational speed. The time domain simulation must run as quickly as possible. Some parallelism in numerical methods should be found in order to use the new multicore computers efficiently. Improving computational performance is a prerequisite for the following applications:
- using detailed time domain simulation for dynamic security assessment in real time
- making possible probabilistic approaches to the definition of the security limits.

The third objective is to offer modeling flexibility. As mentioned above, more and more special devices (such as PSTs, SVCs, and HVdc devices) are being installed in the European power system along with advanced controllers and protection schemes. The model for each piece of equipment can be very specific. The only possible approach, therefore, is to give users the capability to define these models themselves. A review of the modeling requirements will need to be done, and a method of exchanging detailed models in a standard framework will have to be prototyped. These three objectives are clearly in conflict. Depending on the application, particular tradeoffs may have to be made.
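As a minimal illustration of what running such a time domain simulation involves, the sketch below integrates the classical single-machine swing equation through a temporary fault with SciPy's general-purpose ODE solver. It is a didactic stand-in for the full differential-algebraic simulation discussed here, and every numerical value is assumed.

```python
# A minimal time domain simulation sketch: classical swing equation of a single
# machine against an infinite bus, with a temporary fault. Values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

H, D, f0 = 4.0, 1.0, 50.0          # inertia constant (s), damping (pu), frequency (Hz)
Pm = 0.9                            # mechanical power (pu)
Pmax_pre, Pmax_fault, Pmax_post = 1.8, 0.4, 1.5   # electrical transfer limits (pu)
t_fault_on, t_fault_off = 1.0, 1.15               # fault applied then cleared (s)

def pmax(t):
    if t < t_fault_on:
        return Pmax_pre
    if t < t_fault_off:
        return Pmax_fault
    return Pmax_post

def swing(t, y):
    delta, omega = y                       # rotor angle (rad), speed deviation (pu)
    ddelta = 2.0 * np.pi * f0 * omega
    domega = (Pm - pmax(t) * np.sin(delta) - D * omega) / (2.0 * H)
    return [ddelta, domega]

delta0 = np.arcsin(Pm / Pmax_pre)          # prefault equilibrium angle
sol = solve_ivp(swing, (0.0, 10.0), [delta0, 0.0], max_step=0.005)

stable = np.max(np.abs(sol.y[0])) < np.pi  # crude first-swing stability check
print("max rotor angle (deg):", np.degrees(np.max(sol.y[0])), "stable:", stable)
```

A production-grade simulator would, among other things, handle the switching discontinuities and stiff controller dynamics explicitly rather than relying on a small maximum step size.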

The New Approach to TSO Decision Making

In power system operation, some actions must be decided in advance because the means to implement these actions must be reserved ahead of the actual operation. For instance, the time needed to start up a thermal power plant is often several hours. Likewise, the maintenance of overhead lines and substations must usually be scheduled several days in advance so as to leave enough room to optimize the deployment of maintenance teams. Certain decisions must therefore be made from a few days to a few hours ahead of the moment when the corresponding actions are actually launched. In order to make these decisions, the TSO must analyze the security of the anticipated steady state of the system at the time when these actions must be implemented. To this end, the TSO takes into account uncertainties about exogenous factors (and their correlations) that may influence the future system state as well as all possible preventive and corrective actions that could be applied in the meantime in order to adapt the system state to these exogenous factors. The problem formulation proposed to handle this issue is a generalization of the current practice of certain European TSOs, such as RTE in France and ELIA in Belgium. Because most of the preventive actions are costly and irreversible for the TSO (e.g., keeping must-run generators operating, generation rescheduling, activation of demand-side responses, postponing of maintenance, and so on), the basic rationale of the proposed formulation consists in postponing for as long as possible the moment a particular decision must be made so as to take advantage of all the information that can be collected in the meantime. The additional information will help reduce the level of uncertainty about the system state at the moment the action is applied. The decision-making process is no longer a two-step process (making day-ahead and then real-time decisions) but more and more a continuous, multistage process with a number of different time slots (i.e., intraday) available for deciding and/or applying different possible actions, depending on the market and on regulatory rules. The whole framework is actually applied “indefinitely,” with a receding horizon of analysis corresponding to the longest delay of action implementation relevant for the TSO.

A Functional Architecture for the Proposed Approach

We propose the integration of these various computational engines and data management tools into a unified “toolbox” for performing static and dynamic security assessments of the European transmission network. The security assessment requires a definition of reliability criteria. The proposed implementation clearly uses a probabilistic approach, the most demanding type from the computational and modeling points of view. We propose to use screening methods based on the integration of various computational engines to address this very complex problem. We propose a risk-based approach that takes into account the probabilities of faults and of other events influencing the possible states of the system, the estimation of the associated impacts of faults when applied to system states, and a catalog of possible corrective and preventive actions.

The Overall Toolbox Architecture

We propose to have a layer of data management tools to provide inputs to the computational toolbox (see Figure 1), including pan-European state estimation and data-mining features. The toolbox itself is composed of four main parts:
- online security assessment
- off-line definition of security rules
- off-line validation of dynamic models
- off-line defense plan and restoration plan design and assessment.

Online Security Assessment


Our proposal is to develop a screening method based on a worst-case state (WCS) approach as suggested by the Pan-European Grid Advanced Simulation and State Estimation (PEGASE) project. The online security assessment does not address only real-time security assessment. It is a sliding process that starts from two days ahead and ends in real time. On the one hand, certain actions must be anticipated (e.g., the start-up of a thermal power plant), whereas on the other hand, the level of uncertainty decreases as we approach the real-time context. The period from two days ahead to real time will be divided into various time horizons. The exact splitting of this period will depend heavily on a precise definition of the “last time to decide” principle for every preventive action.

Figure 1. Overall toolbox architecture.

Real-Time Security Assessment

Our main assumption is that real-time security assessment is based on snapshots, and we don’t take into account any uncertainties. Our approach is a rather classical one. It uses a conservative load flow that computes, for every contingency to be considered (low-probability contingencies may be set aside at this stage and will be taken care of in the defense plan), the postcontingency state of the grid, taking into account all possible corrective actions. For each contingency where the conservative load flow doesn’t find any solution—or if there are remaining constraints after the use of corrective actions or if corrective actions are not fast enough—a less conservative simplified dynamic simulation is performed to analyze whether or not the grid situation is secure. After this step, if the grid situation is deemed insecure, drastic actions (e.g., load shedding) may be needed. An alternative to this first approach would be to skip the static security assessment and to rely only on a simplified dynamic security assessment. This approach is realistic if we have access to:
- high-performance computing facilities, e.g., 10,000 cores (one for each contingency)
- a reliable and sufficiently simple dynamic model.
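The screening sequence just described can be summarized in a few lines of control flow. The two engines below (run_conservative_load_flow and run_simplified_dynamics) are hypothetical placeholders, not real tools; only the order of the checks reflects the text.

```python
# Sketch of the per-contingency screening loop described above. The two engines
# are hypothetical placeholders passed in as callables; only the control flow
# (conservative load flow first, simplified dynamics as fallback) follows the text.
from typing import Callable, Iterable, List

def screen_contingencies(
    snapshot: dict,
    contingencies: Iterable[str],
    run_conservative_load_flow: Callable[[dict, str], dict],
    run_simplified_dynamics: Callable[[dict, str], bool],
) -> List[str]:
    insecure = []
    for ctg in contingencies:
        result = run_conservative_load_flow(snapshot, ctg)
        ok = (
            result["converged"]
            and not result["remaining_constraints"]
            and result["corrective_actions_fast_enough"]
        )
        if not ok:
            # Less conservative check: simplified time domain simulation.
            ok = run_simplified_dynamics(snapshot, ctg)
        if not ok:
            insecure.append(ctg)  # drastic actions (e.g., load shedding) may be needed
    return insecure

# Toy usage with stand-in engines (always secure), just to show the call pattern.
print(screen_contingencies(
    {"name": "snapshot"},
    ["loss_of_line_A", "loss_of_line_B"],
    lambda s, c: {"converged": True, "remaining_constraints": [],
                  "corrective_actions_fast_enough": True},
    lambda s, c: True,
))
```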

Forecasting-Mode Security Assessment

In the real-time security assessment just described, the actions of the operators are limited to corrective actions. Those actions may not be sufficient to ensure the security of the grid, and therefore drastic actions such as load shedding may be needed to avoid dramatic consequences like blackouts. TSOs are not limited to this online security assessment, however. They can also rely on forecast security assessments, so as to take advantage of possible preventive actions to ensure the security of the grid. After predicting the uncertainties that could affect the system, the TSO may seek answers to the following questions:
- What is the WCS for each postulated contingency?
- Are postcontingency corrective actions sufficient to satisfy the postcontingency operational limits in the WCS?
- If not, what are the optimal preventive actions, taking into account the corrective actions available to deal with contingencies in the WCS?

For one time horizon, the whole process can be schematized as shown in Figure 2.

Off-Line Definition of Security Criteria and Thresholds

Figure 2. Security assessment process.

The online security assessment described above will require inputs such as the probability of occurrence of a fault, the definition of complex uncertainties, and simplified conservative criteria and thresholds that can be assessed using static or dynamic computation. The efficiency of the online security assessment will clearly depend on the accuracy of these inputs. We propose the use of enhanced Monte Carlo techniques such as importance sampling to make extensive use of the data sets archived in our system. The archived grid data will be patched in order to represent all the uncertainties (load patterns, generation patterns, and so on) and all the contingencies. We will then use optimal power flow computations in order to construct the resulting anticipated states of the grid. All these grid situations will be assessed using high-fidelity time domain simulations. The results of these simulations will then be processed using machine-learning algorithms (for example, decision tree–based methods) so as to obtain simple static conservative criteria and thresholds (e.g., bounds on the generation balance in a zone) sufficient to ensure reliable operation of the grid. These rules will be given to the optimization engines of the online security assessment tools as constraints of the optimization problem they are intended to solve. It is an elegant, pragmatic means of taking dynamic security into account in extended static security assessment. This procedure represents a generalization of ideas
proposed in the past by Dy-Liacco (1997) and Wehenkel et al. (1994) and developed in the European Seventh Framework Program (FP7) Twenties project as the Netflex work package for interarea oscillations, in which the objective is to derive security rules for operators through PMU measurements.
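A toy version of the rule-extraction step can be assembled with scikit-learn, assuming that library is available; the operating-point features and security labels below are synthetic stand-ins for what would come from the importance-sampled scenarios and the time domain simulations.

```python
# Deriving simple, conservative security rules from labeled simulation results,
# assuming scikit-learn is available. Features and labels are synthetic stand-ins.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 2000
zone_balance = rng.uniform(-1500.0, 1500.0, n)   # MW generation-load balance in a zone
tie_flow = rng.uniform(0.0, 2000.0, n)           # MW flow on a critical corridor

# Pretend label from the time domain simulations: insecure when the zone exports
# too much while the corridor is already heavily loaded (plus a little noise).
insecure = ((zone_balance > 600.0) & (tie_flow > 1200.0)) | (rng.random(n) < 0.02)

X = np.column_stack([zone_balance, tie_flow])
tree = DecisionTreeClassifier(max_depth=3, class_weight="balanced").fit(X, insecure)

# The resulting thresholds read as static rules (e.g., bounds on the generation
# balance in a zone) that can be handed to the online optimization engines.
print(export_text(tree, feature_names=["zone_balance_MW", "tie_flow_MW"]))
```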

Validation of Dynamic Models

Dynamic models are tremendously complex, and their validation can be a very intensive task requiring a very high level of expertise. Nevertheless, the secure operation of the grid depends heavily on the accuracy of the dynamic models used. These models therefore need to be reassessed on a regular basis. We propose to provide tools to help operators evaluate the models. Based on accurate PMU records of significant events on the grid (e.g., loss of a significant power plant), these new tools will identify the inaccuracies of the models and point out to operators the parameters responsible for these discrepancies. They will provide distance indicators that will highlight discrepancies between the results of the simulation and the measurements.

Data Management

All the main functions described above rely on the availability of:
- detailed and accurate grid descriptions and states
- precise definitions and levels for the uncertainties that need to be taken into account
- accurate probabilities of the contingencies taken into account
- a catalog of possible corrective and preventive actions associated with a constraint or contingency.

The grid descriptions will come from pan-European state estimation for real-time snapshots and from the optimal merging of individual TSO forecast base cases from two days ahead to one hour ahead. The PEGASE project has already demonstrated the feasibility of a hierarchical state estimator and identified the possible advantages of using PMU data to improve the pan-European estimated state.

Online security assessment depends heavily on the definition and the levels of the uncertainties that our system will have to face. One challenge is to build realistic forecast states of the grid that capture the uncertainties associated with individual and collective forecast errors of loads and generations. Building up accurate and realistic probability density functions using the huge database available from the TSOs and efficiently sampling from them could enable robust and—hopefully—useful Monte Carlo approaches for power system reliability assessment. Standard methods exist to address such issues, but they can’t handle problems of such a large size; the power grids consist of tens of thousands of nodes. To address this, we have developed a pragmatic approach. It consists of an initial dimension reduction stage using principal and independent component analysis (PCA and ICA) and classification techniques. These choices were naturally suggested by the existing links among electrical nodes (wind generation in a given area, urban load similarities, and so on) that highlight the valuable generation and load patterns. We then use multivariate probability density function estimation in the reduced space, first trying a conventional kernel method and then adopting a copula and pair-copula decomposition approach for its analytical properties and its associated efficient sampling techniques. We assessed the estimation methods visually through comparative scatter plots of initial and generated samples, first using simulated data (a mix of Gaussians and other distributions) and then data from the French SCADA system for the 2010–2011 winter (compare Figures 3 and 4 for an idea of the differences between the initial and the generated data). We also tested a quantitative goodness-of-fit measure for our samples, based on an attempt to generalize the classical one-dimensional Kolmogorov-Smirnov test to higher dimensions. After several attempts, we found that the additional computational burden introduced by ICA is not compensated for by sufficient gains in the quality of the estimation. The conclusions of these tests led us to choose an algorithm (labeled “Algorithm #3” in Figure 5) that first applies a PCA and then estimates parameters based on pair-copulas for every class identified, along with the efficient sampling associated with these pair-copulas.

Figure 3. Scatter plots for actual data.
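The dimension-reduction-plus-copula idea can be sketched in a few lines. For brevity, the sketch below substitutes a single Gaussian copula for the pair-copula decomposition retained above, uses synthetic injections instead of SCADA archives, and assumes NumPy, SciPy, and scikit-learn are available.

```python
# Sketch of the sampling idea: PCA for dimension reduction, then a copula fit and
# sampling in the reduced space. A single Gaussian copula stands in for the
# pair-copula decomposition used in the article; data are synthetic.
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Synthetic "historical" injections at 50 nodes with strong spatial correlation.
base = rng.normal(size=(5000, 5))
history = base @ rng.normal(size=(5, 50)) + rng.normal(scale=0.3, size=(5000, 50))

# 1) Dimension reduction.
pca = PCA(n_components=5).fit(history)
scores = pca.transform(history)

# 2) Fit a Gaussian copula on the PCA scores: map each margin to normal scores
#    through its empirical ranks, then estimate the correlation matrix.
u = (stats.rankdata(scores, axis=0) - 0.5) / scores.shape[0]
z = stats.norm.ppf(u)
corr = np.corrcoef(z, rowvar=False)

# 3) Sample new reduced-space points, map them back through the empirical
#    quantiles of each component, then back to node injections via inverse PCA.
z_new = rng.multivariate_normal(np.zeros(corr.shape[0]), corr, size=1000)
u_new = stats.norm.cdf(z_new)
scores_new = np.column_stack([
    np.quantile(scores[:, k], u_new[:, k]) for k in range(scores.shape[1])
])
samples = pca.inverse_transform(scores_new)
print(samples.shape)  # (1000, 50) plausible injection patterns
```

Comparing scatter plots of the history and the samples, as in Figures 3 and 4, is then a quick visual sanity check on the fit.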
The probabilities of each contingency are also important inputs of the proposed risk-based approach. For example, events whose probability is too low will be disregarded, on the assumption that even in the case of a high impact, defense plans are in place to minimize the most dramatic consequences. Our goal is to use relevant probabilities that take into account several phenomena and their possible correlations (e.g., weather conditions, technology, age of components, and so on).

Figure 4. Scatter plots: generated data.


TSO operators have at their disposal a catalog of possible corrective and preventive actions to relieve constraints. These remedial actions should be shared at the European level to optimize their use and assess their pan-European efficiency. We think that the definition of a framework to exchange this information is required, perhaps in the form of a Common Information Model (CIM) extension. As the probability of failure of remedial actions could become an important factor in the assessment of the risk, some estimated value of these probabilities is also required.

Defense and Restoration Plan Design and Assessment

Figure 5. Fitting indices.

As mentioned above, certain low-probability contingencies will be discarded from our proposed online security assessment, under the assumption that even in cases of high impact, defense plans are in place to prevent excessively dramatic consequences. The restoration process after a blackout is a complex problem, and the dynamic behavior of the system is a critical factor for the success of the restoration process. The process becomes very complex indeed when a blackout affects areas controlled by more than one TSO. We are proposing tools that will help to design and assess pan-European defense and restoration plans. One of our main ideas is that defense and restoration plans are rather difficult to design and assess; we will therefore try to avoid overly complex designs that may prove to be insufficiently reliable.

Conclusions

In this article, we have proposed an overall approach to deal with the security management of electric power systems from two days ahead to real-time operation. We have described all the basic functional needs for this new approach and proposed innovative yet pragmatic ways to implement them. We believe that this new approach will permit a systematic security assessment of the grid from operational planning to real time. Substantial work remains on the actual implementations of the four identified blocks, but this article may pave the way for the work that still needs to be done.

Acknowledgments

This article presents ideas developed in the European FP7 projects PEGASE and Innovative Tools for Electrical System Security Within Large Areas (iTESLA). Scientific responsibility rests with the authors.

For Further Reading

T. E. Dy-Liacco, “Enhancing power system security control,” IEEE Computer Applications in Power, vol. 10, no. 3, pp. 38–41, July 1997.

L. Wehenkel, M. Pavella, E. Euxibie, and B. Heilbronn, “Decision tree based transient stability method: A case study,” IEEE Trans. Power Syst., vol. 9, no. 1, pp. 459–469, Feb. 1994.

P. Panciatici, Y. Hassaine, S. Fliscounakis, L. Platbrood, M. A. Ortega-Vasquez, J. L. Martinez-Ramos, and L. Wehenkel, “Security management under uncertainty: From day-ahead planning to intraday operation,” in Bulk Power System Dynamics and Control (IREP)—VIII, 2010 IREP Symp., Rio de Janeiro, Brazil, Aug. 1–6, 2010, pp. 1–8.

F. C. Schweppe, “Power systems ‘2000’: Hierarchical control strategies,” IEEE Spectrum, vol. 15, no. 7, pp. 42–47, July 1978.

Biographies

Patrick Panciatici is with RTE (French TSO), Versailles, France.

Gabriel Bareux is with RTE (French TSO), Versailles, France.

Louis Wehenkel is with the University of Liège, Belgium.



Metrics for Success By Murat Göl, Ali Abur, and Floyd Galvan

Performance Metrics for Power System State Estimators and Measurement Designs

Power system state estimators (SEs) have come a long way since the introduction of the concept nearly four decades ago by Fred Schweppe. Over the years, the concept’s initial formulation, implementation techniques, computational requirements, data manipulation and storage capabilities, and measurement types have changed significantly. Today, SEs are instrumental in facilitating the security and reliability of power system operation and play an important role in the management of power markets, where transactions have to be carefully evaluated for feasibility and determination of real-time prices. One of the most recent developments in SEs has been the availability of synchronized phasor measurements and their introduction into the state estimation process. Synchrophasor-assisted state estimation (SPASE) is changing the way we view and operate the grid. As such, the ability to monitor and maintain SE performance within known performance standards (metrics) is a new practice. Unlike deterministic applications such as power flow, the state estimation solution is not deterministic and depends on the statistical characteristics of the measurements as well as the level of certainty of the assumed network model.

This article will define methods for determining the performance of a given SE, taking into account not only the problem’s formulation, its numerical solution, and the actual implementation of the solution but also the impact of measurement types and quality and the design of the overall measurement system. These methods will incorporate metrics currently in use by existing SEs and users and will also present new metrics that take into account new types of measurements and measurement designs and levels of vulnerability to loss of measurements or bad data. The source of such bad data may be a “natural” cause such as communication noise or failure, but it could also be intentional manipulation of data by malicious third parties. The issues of performance evaluation and the definition of appropriate metrics to facilitate this process have been studied in the past. The main goal of this article is to expand the existing metrics in order to address not only the performance issues related to the state estimation solver but also the measurement quality and design. It will be suggested that metrics be defined in three categories: the solution algorithm, measurement quality, and measurement design. These metrics will also be useful, therefore, in making investment decisions related to the installation of new meters.

Role of the State Estimator

Substations are equipped with different monitoring devices that can provide voltage and current samples captured through instrument transformers. These samples are then processed by the SCADA system to produce real and reactive power flow measurements in addition to voltage and current magnitude measurements. Moreover, the introduction of phasor measurements that are synchronized via the Global Positioning System (GPS) and provided by the phasor measurement units (PMUs) allows direct measurement of phase angles associated with voltage and current measurements. All of these different types of measurements are transmitted to control centers, where they are processed by the SE. State estimation is a function that takes advantage of the inherent redundancy among the available measurements in order to determine the best estimate of the system state, which is defined as the set of all the bus voltage magnitudes and phase angles in the system.

Figure 2. Combining existing measurements with “pseudomeasurements” makes the entire system observable.

Figure 1. Observable islands and unobservable branches.

The SE makes use of the network model in estimating the system state. This network model is built based on the assumed or monitored status of circuit breakers as well as the entire set of network parameters, including all transmission line parameters such as resistance, reactance, and line charging capacitance; all taps associated with off-nominal tap transformers, shunt capacitors, and reactors; and power control devices such as thyristor-controlled series capacitors (TCSCs), static var compensators (SVCs), and unified power flow controllers (UPFCs).

The number, type, and location of the measurements will determine whether it will be possible to estimate the state of the entire system. Measurement systems with insufficient or poorly placed measurements will be “unobservable,” i.e., their operating states cannot be estimated. In such cases, the SE will identify several subsystems whose states can be estimated independently of the rest of the network. These subsystems are referred to as “observable islands,” since their states are defined with respect to an internal reference bus and the flows along the branches that connect them to
their neighboring subsystems cannot be estimated. For that reason, these branches are referred to as “unobservable branches” (see Figure 1). Most SEs introduce what are called “pseudomeasurements” in order to make these branches observable, i.e., to merge the observable islands into a single large island including all the buses in the system (see Figure 2). Typical pseudomeasurements used for this purpose are the scheduled generation and forecast bus loads. Though these are not actual measurements, they provide good estimates and facilitate restoration of full observability in the system.
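Numerical observability analysis of the kind performed before merging islands can be illustrated on a tiny dc (linearized) model: the system is observable if the measurement-to-angle Jacobian has rank n - 1 once a reference angle is removed. The four-bus example below is invented.

```python
# Numerical observability check on a made-up four-bus dc model: the system is
# observable if the measurement Jacobian H (flows/injections vs. bus angles)
# has rank n-1 after dropping the reference angle column.
import numpy as np

n_bus = 4  # branches (unit reactance): 0-1, 1-2, 2-3, 0-3

def flow_row(i, j):
    row = np.zeros(n_bus)
    row[i], row[j] = 1.0, -1.0        # dc flow on i-j is proportional to theta_i - theta_j
    return row

# Measurement set: two line flows and one bus injection.
H = np.array([
    flow_row(0, 1),                    # flow measurement on line 0-1
    flow_row(1, 2),                    # flow measurement on line 1-2
    flow_row(2, 1) + flow_row(2, 3),   # injection at bus 2 = sum of its incident flows
])

H_red = H[:, 1:]                       # drop reference bus angle (bus 0)
rank = np.linalg.matrix_rank(H_red)
print("rank:", rank, "observable:", rank == n_bus - 1)
```

With fewer or differently placed measurements the rank drops, and the buses split into the observable islands discussed above.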

Figure 3. “Bad” measurements (the red circles) are detected and corrected by the SE.

Another important function of the SE is to detect, identify, and remove incorrect measurements. Measurements calculated using the estimated state will produce values that will generally be close but not exactly equal to the measured values. The differences between the calculated and measured values are referred to as measurement residuals. Very large differences imply the existence of large errors in one or more of the measurements. State estimators are expected to be able to detect, identify, and correct such erroneous measurements and thus provide an unbiased estimate of the system state (see Figure 3).

Factors Affecting State Estimator Performance

The performance of an SE depends on a chain of factors that involves both hardware and software elements (see Figure 4). Primary measurements of voltages and currents are captured via instrument transformers (voltage and current) and converted into digital samples for further processing before being sent to control centers. Given the finite accuracy of measuring devices and processors used for this purpose, errors are inadvertently introduced into the measurements. Therefore, measurement error constitutes one of the major factors affecting SE performance. Improving measurement quality will have a direct impact on the SE output as well as on all the other energy management system (EMS) applications that rely on the output of the SE.

In order to monitor large, interconnected power systems, a sufficient number of substations will have to be populated with measuring devices. No matter how accurate and reliable these measuring devices are, unless their numbers, configuration, and specifications meet certain requirements, SE performance may be quite poor. The placement of different types of measuring devices at various locations in a given power system is referred to as the measurement design. Measurement design is another significant factor affecting the performance of SEs. This factor is more difficult to manipulate because of the legacy measurement systems currently installed and in operation today. But as the monitoring of power systems extends to lower voltage levels and as advanced measuring devices such as PMUs are deployed in transmission systems, measurement design considerations will become quite important to maximizing the value of investments in these new measuring devices. In the extreme case, poor measurement design will lead to an unobservable system that can only be solved with the use of pseudomeasurements. Given the uncertainty of pseudomeasurements, the SE results will also be uncertain. Even when the measurement design is sufficient to produce a fully observable system, it may still contain vulnerable zones in which measurement errors go undetected. Such measurements are referred to as critical measurements. A desirable measurement design will ensure that such measurements are absent and the system is fully observable based on the existing measurements. If the use of pseudomeasurements is required because of poor visibility in certain parts of the system (such as external areas), then such measurements should be introduced carefully, in such a way that the state estimation results for the existing observable islands are unaffected.

Finally, it is important to note the performance of the SE’s computational algorithm, in particular when solving very large-scale systems with a large variety of measurements. The past several decades since the introduction of state estimation have produced a large volume of research that has enabled the development of sophisticated numerical applications to address the issues affecting computational performance. A well-designed SE is expected to handle different system sizes and measurement configurations with ease and with similar convergence performance. Well-designed SEs will be those with convergence patterns that remain insensitive to changes in measurement numbers, configurations, and types.

Figure 4. Factors affecting SE performance.

Metrics for Evaluating the Performance of State Estimators

The accuracy of the solution for a power flow problem is commonly evaluated by checking the so-called real and reactive power mismatches at each system bus, with the exception of the slack bus and of the reactive power mismatches at power-voltage (PV) buses (generator buses). Such a direct measure of accuracy does not exist for the SEs, since the true
values of the measurements are unknown. Although one can artificially create error-free measurements using a power flow solution and test the SE’s solution algorithm by comparing its estimates against this power flow solution, such a test cannot be carried out using actual measurements. Metrics that can be used to evaluate SE performance during online operation are therefore needed. Quantifying the performance of an SE involves the development of metrics in three categories: the state estimation solution, the measurement quality, and the measurement design.

Metrics for the State Estimation Solution

The state estimation solution is commonly obtained by iteratively solving an optimization problem where the system state is estimated in a way that minimizes some measure of the difference between measured and calculated values for all the available measurements. The main consideration in evaluating the solution method and its implementation is the convergence behavior of this iterative solution. The number of iterations it takes the solution algorithm to converge to a prespecified threshold, e.g., 10−3 per unit or radian, can be used for this purpose. The typical number of iterations needed for convergence is independent of the system size and remains below ten iterations. Values deviating significantly from this expected value indicate problems associated with the solved case. While the reasons may be related to the specific SE implementation, they may also be related to the quality of the measurements and/or their configuration in the system. Metrics that make it possible to distinguish among these causes are discussed below.
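For a concrete, if tiny, picture of what the iteration count measures, the following sketch runs a Gauss-Newton weighted least squares estimation on a two-bus example and reports the number of iterations needed to meet the 10−3 threshold mentioned above. All values are invented, and a real SE works on the full network model.

```python
# Gauss-Newton WLS state estimation on a two-bus toy example, recording the
# number of iterations (the convergence metric discussed above). The line is
# purely reactive with x = 0.1 pu; bus 1 is the reference (V1 = 1.0, theta1 = 0).
import numpy as np

x_line = 0.1
z = np.array([0.96, 0.29, 0.40])            # measured V2, P12, Q12 (pu), invented
sigma = np.array([0.004, 0.01, 0.01])        # assumed measurement standard deviations
W = np.diag(1.0 / sigma**2)

def h(state):
    th2, v2 = state
    p12 = -(v2 / x_line) * np.sin(th2)
    q12 = (1.0 - v2 * np.cos(th2)) / x_line
    return np.array([v2, p12, q12])

def jac(state):
    th2, v2 = state
    return np.array([
        [0.0, 1.0],
        [-(v2 / x_line) * np.cos(th2), -np.sin(th2) / x_line],
        [v2 * np.sin(th2) / x_line, -np.cos(th2) / x_line],
    ])

state = np.array([0.0, 1.0])                 # flat start
for iteration in range(1, 21):
    H = jac(state)
    r = z - h(state)
    dx = np.linalg.solve(H.T @ W @ H, H.T @ W @ r)
    state += dx
    if np.max(np.abs(dx)) < 1e-3:            # prespecified convergence threshold
        break
print("iterations:", iteration, "estimate (theta2 rad, V2 pu):", state)
```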

Metrics for Measurement Quality

The results of state estimation are mainly dependent on the quality of the measurements used. Accurate, well-placed measurements of the required type will facilitate the estimation procedure, yielding unbiased state estimates. Existing SEs commonly use postprocessing methods based on calculated measurement residuals in order to detect, identify, and correct measurement errors. There are two well-accepted and commonly used metrics to evaluate the quality of measurements:

1. Objective function: This metric is defined as the weighted sum of the squared residuals of all measurements. The weights are chosen to be inversely proportional to the assumed variances of measurement errors. Measurements known or assumed to be quite accurate are given high weights while measurements with uncertain or low accuracies are given low weights. The expected value of the objective function varies depending on the number of measurements and states for the given system. Values that are larger than the expected value χc imply the existence of errors in the measurement set.

2. Largest normalized residual (rNmax): The objective function defined above is useful in detecting the existence of errors in the measurement set but does not permit identification of the erroneous measurement so that it can be removed or corrected. This is accomplished by another metric, namely rNmax. Measurement residuals evaluated at the converged state estimation solution are normalized by the corresponding standard deviations, and normalized values are sorted in absolute value from largest to smallest. The largest normalized residual rNmax will point to the erroneous measurement if its value is larger than a certain threshold, commonly set at 3.0.

The two metrics defined above are useful in evaluating measurement quality, which is independent of the SE solution algorithm and its implementation. These metrics are therefore useful in identifying issues related to the measuring instruments, communication medium, instrument transformers, assumed values of network parameters, and status of circuit breakers.
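Both metrics can be computed directly from the residuals of a converged solution. The sketch below does so for a small synthetic linear example, assuming NumPy and SciPy are available; the chi-square quantile plays the role of χc, and a normalized residual above 3.0 flags the suspect measurement.

```python
# Computing the two measurement-quality metrics (weighted objective function and
# largest normalized residual) on a small synthetic linear example.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
n_state, n_meas = 4, 10
H = rng.normal(size=(n_meas, n_state))           # linear(ized) measurement model
sigma = np.full(n_meas, 0.01)
R = np.diag(sigma**2)

x_true = rng.normal(size=n_state)
z = H @ x_true + rng.normal(scale=sigma)
z[3] += 0.08                                      # inject bad data in measurement 3

W = np.linalg.inv(R)
G = H.T @ W @ H                                   # gain matrix
x_hat = np.linalg.solve(G, H.T @ W @ z)
r = z - H @ x_hat                                 # residuals

# 1) Objective function vs. its chi-square detection threshold (95% here).
J = float(r @ W @ r)
chi_c = chi2.ppf(0.95, df=n_meas - n_state)

# 2) Largest normalized residual: |r_i| / sqrt(Omega_ii), Omega = R - H G^-1 H^T.
omega = R - H @ np.linalg.solve(G, H.T)
r_norm = np.abs(r) / np.sqrt(np.diag(omega))
i_max = int(np.argmax(r_norm))

print(f"J = {J:.1f}, chi_c = {chi_c:.1f}, bad data suspected: {J > chi_c}")
print(f"largest normalized residual {r_norm[i_max]:.1f} at measurement {i_max}")
```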

Metrics for Measurement Design

The performance of an SE will also be affected by the existing measurement design, i.e., the configuration, type, and number of measurements placed in the power system being monitored. It is important to develop metrics that will facilitate evaluation of the impact of measurement design on the performance of the SE. A good indicator of a poorly designed measurement set is the existence of the critical measurements mentioned above. As we said earlier, these measurements are known to create vulnerable zones for SEs, since their errors are impossible to detect. If they carry bad data, then the system state will be incorrectly estimated and the operator will have no way of detecting the error. It is possible to identify the critical measurements in a given measurement set, however. Furthermore, there are measurement placement strategies that will enable the transformation of such measurements into noncritical measurements by adding a few new measurements to the system.

The network observability function is a preprocessor to the SE. It analyzes the existing measurement types and their locations to ensure that the state estimation solution can actually be carried out, i.e., to ensure that the system is observable with respect to the given set of measurements. This analysis does not take into account the actual values of the measurements but only their types and locations. If the analysis returns a negative verdict and the system is found to be unobservable, then a number of observable islands will be identified. These islands will be connected to each other by means of the unobservable branches discussed at the beginning of this article. In this case, the measurement system needs to be expanded by adding some pseudomeasurements so as to merge these observable islands and transform the unobservable branches into observable branches. This procedure is known as pseudomeasurement placement. It is important to place these pseudomeasurements in such a way that they will not have any impact on the existing estimates of observable islands. This is accomplished by placing them strategically so that they are all critical measurements. When
they are “critical,” they will have no impact on the rest of the measurement residuals. This is desirable since pseudomeasurements are, in general, inaccurate and we do not want their errors to spread. By deliberately choosing them as critical, we make sure that their errors are confined to themselves and do not spread to the rest of the system states.

Unlike the case of the power flow, where the accuracy of the result can be evaluated based on the maximum absolute mismatch of power balance equations at system buses, the accuracy of SEs cannot be readily evaluated since the true state and the measurement errors remain unknown. The accuracy of the estimated states may, however, be gauged statistically by evaluating their error variances. An estimate with a low error variance will be preferable to one with a high variance. The variance of the state estimates depends strongly on the measurement system design rather than the measurement values themselves and can therefore be used as an accuracy metric. Based on the above considerations, three metrics can be defined in order to quantify the evaluation of a given measurement design. Each is described below.

The Measurement System Vulnerability (MSV) Ratio

This metric is defined to quantify the vulnerability of a measurement design against loss of measurements and/or bad measurements. A large number of critical measurements will indicate vulnerability to bad data. Their locations reveal vulnerability zones and also provide clues as to which areas would benefit from new meters. This metric is defined as follows

Note that MSV can be defined with respect to geographical areas (zones) and/or voltage levels. A robust measurement system is recommended to have an MSV ratio < 3%.

The Pseudomeasurement Ratio (PMR)

This metric is defined in order to quantify the effectiveness of the pseudomeasurements placed in a given system. Pseudomeasurements are not to be trusted, and their locations provide information about zones of low redundancy. They should also remain critical to avoid spreading their errors to existing measurements. The metric is defined as:

Redundant pseudomeasurements increase the chances of corrupting actual (good) measurements. The PMR should be close to 1.0 for optimal results.

State Estimation Accuracy (SEA)

This metric is used to quantify the accuracy of an SE. Smaller values imply better accuracy. Changes in this metric are mainly a function of measurement configuration and not measurement values. SEA is defined as follows: SEA = max {variance of estimated states}. Calculation of the variance of estimated states is done by the SE as a by-product of its solution algorithm. The SEA value should remain below the acceptable variance of errors in estimated states. A typical threshold is on the order of 10−6. SEA can be calculated for a given voltage level or geographical zone, in which case the max function will apply to the states in the designated zones.
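The published definitions of MSV and PMR rely on formulas not reproduced above, so the sketch below only illustrates the ingredients they build on: identifying critical measurements (those whose residual covariance is numerically zero) and taking SEA as the largest estimated-state error variance. The critical-to-total ratio printed at the end is an assumed stand-in that is consistent with the 14/37 ≈ 0.38 value quoted in Example 1 below, not necessarily the exact published MSV formula.

```python
# Illustrating ingredients behind the measurement design metrics on a small linear
# example: critical measurements (zero residual covariance) and SEA as the largest
# estimated-state error variance. The critical/total ratio is an assumed stand-in.
import numpy as np

# Five measurements of a four-dimensional state; rows 2 and 3 are the only ones
# sensing states 3 and 4, so they are critical by construction.
H = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
    [1.0, 1.0, 0.0, 0.0],
])
sigma = 0.01
R = np.eye(H.shape[0]) * sigma**2
W = np.linalg.inv(R)
G = H.T @ W @ H                                          # gain matrix

omega_diag = np.diag(R - H @ np.linalg.solve(G, H.T))    # residual covariances
critical = omega_diag < 1e-12                            # (numerically) zero => critical
sea = float(np.max(np.diag(np.linalg.inv(G))))           # max state error variance

print("critical measurements:", np.flatnonzero(critical))    # -> [2 3]
print("critical/total ratio:", critical.sum() / H.shape[0])  # -> 0.4
print("SEA:", sea)
```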

Illustrative Examples

Two examples will be given to illustrate how these performance metrics can be computed and used to evaluate different measurement configurations. The IEEE 30-bus system will be used as the test system on which different measurement configurations will be applied. In order to simulate the zones (which could be defined with respect to voltage levels or geography), the system will be divided into three zones, defined as follows:
- Zone 1: buses 1–16
- Zone 2: buses 17–24
- Zone 3: buses 25–30.

The SEA metric will be calculated for each zone, while the remaining metrics will be shown for the entire system. Two examples will be discussed, representing one poorly designed and one well-designed measurement system. The proposed metrics will be calculated and discussed comparatively for these two example cases.

Example 1

Figure 5. IEEE 30-bus system with poor measurement design. (a) Observable islands for Example 1. (b) Pseudomeasurements placed to restore observability. (c) Critical measurements.


A poorly designed measurement configuration is studied in this example. An IEEE 30-bus system was used to design such a measurement system, shown in Figure 5(a). The given measurement configuration did not have sufficient measurements to carry out a full state estimation solution for the entire system, i.e., the system was not observable. Network observability analysis yielded seven observable islands whose boundaries are identified by dotted lines in the figure.


It can be shown that five strategically placed pseudomeasurements can restore observability; one possible arrangement is shown in Figure 5(b). Note that these five pseudomeasurements are not unique, but the minimum required number to make the system observable is uniquely defined. The PMR of the system is found to be 0.8333. This number is less than one due to the fact that one of the boundary injections (at bus 10) will become relevant and help to merge observable islands upon placement of pseudomeasurements. Once the pseudomeasurements are placed, the critical measurements of the system are determined. Out of 37 power measurements, there are 14 critical measurements identified, five of which are pseudomeasurements, as shown in Figure 5(c). The MSV metric for this case is 0.38, significantly higher than the threshold, which is set at 0.03.

Example 2

This example presents a well-designed measurement configuration containing no critical measurements; it is illustrated in Figure 6. In this case, both the PMR and MSV metrics are equal to zero. In both examples, an intentional error is introduced in the injection measurement at bus 13. Table 1 shows the various metrics evaluated for the two example cases. It should also be noted that the increase in the number of measurements in the second example as compared with the first results in a higher expected value of the objective function, χc. Since the injection power measurement at bus 13 was intentionally changed to simulate bad data in both cases, both objective function values are also higher than the corresponding detection thresholds. The SEA metric, which is a measure of state estimation accuracy, can be seen to improve from the first example to the second, validating the benefits of having redundant measurements in the measurement design for improving state estimation performance. In both cases, the reported SEA metrics correspond to zone 3.

Figure 6. IEEE 30-bus system with well-designed measurements.

Table 1. Test results for the IEEE 30-bus system.

Category | Metric | Example 1 | Example 2
Metrics for SE solution | Maximum iterations | 5 | 3
Metrics for measurement quality | Objective function | 1,405.2 | 119,692.4
Metrics for measurement quality | χc | 33.9 | 50.9
Metrics for measurement quality | rNmax | 37.5 | 345.9
Metrics for measurement design | MSV | 0.38 | 0
Metrics for measurement design | PMR | 0.83 | 0
Metrics for measurement design | SEA (zone 3) | 0.29 | 3.61 × 10−7

Moving Forward

This article introduces a set of metrics to evaluate the performance of an SE, its measurements, and its measurement design. It is worth noting that the performance of SEs is as much a function of the solution algorithm as of measurement design and quality. This is highlighted by defining performance metrics for all three categories and showing how changes in measurement design and quality manifest themselves via these metrics. The metrics could be useful in comparatively evaluating the benefits of alternative metering investments with respect to their impact on state estimation function performance. As new types of measurements begin populating power system substations, these metrics will be able to evolve accordingly so as to capture the new measurements’ novel properties.

For Further Reading

KEMA Report, “Metrics for determining the impact of phasor measurements on power system state estimation,” Eastern Interconnection Phasor Project (EIPP), Jan. 2006.

A. Abur and A. Gómez-Expósito, Power System State Estimation: Theory and Implementation. New York: Marcel Dekker, 2004.

F. F. Wu, “Power system state estimation: A survey,” Int. J. Elect. Power Energy Syst., vol. 12, no. 2, pp. 80–87, 1990.

Biographies


Murat Göl is with Northeastern University, Boston.

Ali Abur is with Northeastern University, Boston.

Floyd Galvan is with Entergy Services Inc., New Orleans, Louisiana.



Measures of Value By Tomo Popovic and Mladen Kezunovic

Data Analytics for Automated Fault Analysis

The power industry is experiencing an enormous expansion of computer and communication devices in substations. As a result, a massive amount of measurement data is being continuously collected, communicated, and processed. This is partly due to the need for much better monitoring capability as power system loading and complexity of operation have increased. The installation of a large number of intelligent electronic devices (IEDs) to accomplish the monitoring task has created new challenges, such as cyber and physical security, time-synchronized data storage, configuration management, and efficient visualization. Automated data analytics solutions are the key to efficient use of IED recordings. This automated process includes conversion of measurements to data, processing data to obtain information, and extraction of cause-and-effect knowledge. This article provides real-life implementation examples of data analytics developed to handle measurements from digital fault recorders (DFRs) and digital protective relays (DPRs). The discussion addresses the implementation challenges and business benefits of such solutions.

Figure 1. Converting field measurements into digital data records.

Converting Field Measurements to Digital Data

When triggered, substation IEDs capture signals in a small time window that typically contains a few cycles of the prefault and up to three dozen cycles of the postfault data. These recordings consist of digital samples of multiple analog and status channels. A diagram of typical data sampling and processing in a modern IED is given in Figure 1. Prior to analog-to-digital (A/D) conversion, the input signals are sampled using the sample-and-hold (S/H) circuit at the times defined by the sampling clock. Synchronous sampling of all the input signals allows determination of phase angles among different analog input signals. This can be accomplished either by using one A/D converter serving all channels but having separate S/H circuits on each channel and a multiplexer that feeds another S/H circuit in front of A/D conversion (see Figure 1) or by using a separate S/H circuit and A/D converter on each channel. Some older designs use a scanning method in which each channel is sampled and converted one at a time, causing a time skew among the corresponding samples on different channels. The quality of the data is affected by the conversion process and also by wiring, input transformer characteristics, clock accuracy, internal signal propagation, sampling rate, antialiasing filters, and so on. When implementing or using data analytics, it is very important to understand how the measurements are obtained and what the expected impact on the quality of the data is.
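To make the point about synchronized sampling concrete, the short sketch below estimates the fundamental-frequency phasors of two simultaneously sampled channels with a one-cycle discrete Fourier transform; because the channels share one sampling clock, the angle difference between them is meaningful. The waveform parameters are invented.

```python
# Estimating fundamental-frequency phasors from synchronously sampled channels
# with a one-cycle DFT; the shared sampling clock is what makes the phase angle
# between channels meaningful. Waveform parameters are invented.
import numpy as np

f0, fs = 60.0, 1920.0                  # nominal frequency and sampling rate (32 samples/cycle)
n = int(fs / f0)                       # samples per cycle
t = np.arange(n) / fs

va = 1.0 * np.cos(2 * np.pi * f0 * t + np.deg2rad(0.0))     # voltage channel
ia = 0.8 * np.cos(2 * np.pi * f0 * t + np.deg2rad(-30.0))   # current channel, lagging

def phasor(samples):
    # One-cycle DFT at the fundamental, scaled to peak magnitude.
    ph = (2.0 / n) * np.sum(samples * np.exp(-1j * 2 * np.pi * f0 * t))
    return np.abs(ph), np.degrees(np.angle(ph))

for name, ch in (("V", va), ("I", ia)):
    mag, ang = phasor(ch)
    print(f"{name}: {mag:.3f} at {ang:.1f} deg")
```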

Data Analytics Starts with Data Integration

As the large-scale deployment of substation IEDs began to produce a “data explosion,” it became obvious that data analytics solutions require several data integration functions, as shown in Figure 2.

Interface to IEDs

Software for interfacing with IEDs allows automated retrieval of newly recorded event data. The communication is typically implemented using vendor-specific software, which sometimes results in data being stored in nonstandard and proprietary file formats. File conversion into a nonproprietary format is then required when importing the retrieved data into the file repository to be accessed by a variety of data analytics applications.

Data Warehouse

A flexible and standardized data repository called a data warehouse is used to support manual analysis needs as well as automated data analytics solutions. The data warehouse must be implemented using nonproprietary and standard formats. It should contain measurement data, configuration settings, and data analytics reports.

Figure 2. Substation data integration as the foundation for data analytics.

Data Analytics

The analytics functions can be implemented as stand-alone programs operated manually or in fully automated mode. The simplest form of data analytics reads the data from the repository, creates an output without corrupting the original data, and then sends the output results back to the data warehouse for storage. The data analytics sometimes utilize their own databases, which may be decoupled from the substation data warehouse shown in Figure 2. This creates a challenge when synchronization and integration of multiple data analytics functions are needed.


Visualization

While each data analytics function may have its own user interface, a universal approach for viewing results from all data analytics functions may also be desirable. Typical user interface options for fault analysis solutions include Web-based portals and event viewers; desktop-based event viewers and configuration editors; and various options for report dissemination, such as pagers, e-mails, text messages, printers, or faxes.

A Data Analytics Example: Fault and Disturbance Analysis

Fault and disturbance analysis entails taking measurements from IEDs triggered by the fault events and converting them to data, processing data into information, and then using this information to extract knowledge about the fault event. The fault analysis can automatically provide various details, including identification of the affected circuit, whether the disturbance was a fault, fault type, fault location, duration, and evaluation of protection performance. All of this knowledge can be presented to the users and will help them take actions and make decisions more efficiently. This is especially important when there is a need for quick restoration of the system. The implementation framework for an automated fault analysis is shown in Figure 3. The main components of the implementation example are the data analytics (fault analysis), the data warehouse, and visualization. Between the data analytics and the data warehouse we have the following interfaces:
- IED data import/export
- configuration import/export
- analytics reports import/export.

Figure 3. Data analytics example: automated fault and disturbance analysis.

The implementation of these interfaces should be independent of the types of IEDs used, and the same data warehouse should serve various data analytics functions. In this example, the fault analysis converts IED file formats, maps the configuration to imported IED data, performs digital signal processing, applies expert system logic for cause-effect analysis, and finally, prepares customized reports. The reports are then stored at the data warehouse and made available for later use. Visualization enables the user to directly communicate with and configure the particular analytics function. As shown in Figure 3, the visualization piece can interface with the data warehouse directly or, in some cases, be an extension of the data analytics function itself.
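The expert-system portion can be pictured with a drastically simplified rule set: given per-phase fault-current phasors, threshold rules pick the most likely fault type. Real implementations use much richer logic (waveform features, relay targets, breaker statuses), and the thresholds below are invented.

```python
# A drastically simplified, rule-based fault type classifier of the kind an
# automated fault analysis might apply after signal processing. Thresholds and
# example values are invented.
import cmath

def classify_fault(ia, ib, ic, i_load=1.0, pickup=2.5):
    """Rule-based classification from complex per-unit phase current phasors."""
    mags = {"A": abs(ia), "B": abs(ib), "C": abs(ic)}
    faulted = [ph for ph, m in mags.items() if m > pickup * i_load]
    i0 = abs(ia + ib + ic) / 3.0              # zero-sequence current magnitude
    ground = i0 > 0.3 * i_load                # ground involvement heuristic

    if not faulted:
        return "no fault detected"
    if len(faulted) == 1:
        return f"single-phase-to-ground fault on phase {faulted[0]}"
    if len(faulted) == 2:
        kind = "double-phase-to-ground" if ground else "phase-to-phase"
        return f"{kind} fault on phases {'-'.join(faulted)}"
    return "three-phase fault" + (" (ground involved)" if ground else "")

# Example: A-phase-to-ground fault; B and C stay near balanced load current.
a = cmath.rect(1.0, 2 * cmath.pi / 3)         # 120-degree rotation operator
print(classify_fault(ia=6.0 + 0j, ib=1.0 * a**2, ic=1.0 * a))
```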

Figure 4. Data analytics results can be sent to PI Historian and SCADA.

Sometimes the data analytics can be used to enable connections among different data management and processing systems. One such architecture that includes the fault analysis solution is given in Figure 4. There can be several substations, and each substation can have multiple IEDs: DFRs, DPRs, power quality meters (PQMs), and others. Substation data are collected and communicated via the substation PC and security gateways to the data integration system that belongs to the utility transmission group. The data are integrated, processed, and stored at the master station that also hosts the data warehouse. The data analytics can be used to connect to a wide-area measurement system, utility operations center, and other enterprise systems. If needed, it can even provide a connection to outside parties such as an independent system operator (ISO).

This example illustrates how the IED recordings that are traditionally considered nonoperational data can become operational. This is achieved by automatically downloading and processing the IED data and then sending the analysis reports to the plant information (PI) historian and SCADA. The reports provide information about affected circuits, fault types, calculated fault locations, assessments of fault-clearing performance, and conclusions as to whether faults were transient or permanent. Correlated with the rest of the operational data, this knowledge can be used to enhance the decision-making process when operating the system in real time.

The Configuration Challenge

Settings related to the power system include descriptions of monitored components such as transmission lines, buses, and transformers. There are also IED-specific settings, such as the details about the way particular IEDs are connected and configured. Finally, the analytics functions may have their own settings and configurations. All of these parameters are continuously experiencing both small and larger changes. These changes may be brought about by various upgrades in the system and equipment. In addition, there are changes to standards and recommendations issued by other entities such as IEEE, the North American Electric Reliability Corporation (NERC), and the Federal Energy Regulatory Commission (FERC), which are constantly evolving and affect various aspects and possible uses of substation data. Traditionally, such configuration changes have affected short-circuit study programs, simulation tools, the PI historian, and so on. Automated data analytics solutions are even more dependent on the correctness of these settings. All of the changes in the settings must be correctly captured in the configuration files using version control.

Figure 5. Measurement points in a typical bus breaker arrangement.

We will now provide an illustration of how substation monitoring with disturbance recording needs to be configured to automate the fault analysis application. A typical bus breaker arrangement for a transmission line is displayed in Figure 5. The example shows a breaker-and-a-half transmission line configuration. The measurement points of interest for fault analysis are marked with green labels. In order to enable automated operation of the fault and disturbance analysis for each circuit, we need to know the locations of the measurements of the voltage and current signals. We also need to map digital signals such as the breaker auxiliary statuses, relay trips, and protection scheme communication. The example includes monitoring of circuit breaker control status (element 52 in the IEEE standard naming convention), the associated protection of the transmission line (element 21), directional overcurrent protection (element 67), and the lockout relay (element 86). The protection scheme communication channels are presented with their respective transmitted and received carrier and carrier frequency signals (TC and TCF). A detailed list of signals is provided in Table 1.

Table 1. Input signals for the fault data analytics applied to transmission lines.

Signal   Description                                                    Type
I        Line currents: three phases or two phases and zero sequence   Analog
V        Bus voltage: three phases or two phases and neutral           Analog
PCB      Primary (bus) breaker contact status                          Digital
SCB      Secondary (middle) breaker contact status                     Digital
PRT      Primary relay trip status                                     Digital
BRT      Backup relay trip status                                      Digital
TCR      Blocking signal received status                               Digital
TCT      Blocking signal transmitted status                            Digital
TCFR     Breaker failure signal received status                        Digital
TCFT     Breaker failure signal transmitted status                     Digital

In the configuration settings, each transmission line is assigned the metadata describing how the signals from Table 1 are being monitored and mapped to corresponding analog or digital channels within the IED. The metadata may contain additional information, such as the line impedance and line length needed to automatically run the fault location calculation. Even the GPS position of the line may be useful for displaying calculated fault locations on geographical satellite maps. In general, the automated fault and disturbance analysis (see Figure 6) needs access to the following (a configuration sketch is given after this list):
fault and disturbance records as captured by substation IEDs
IED-specific settings such as channel assignments and scaling
power system transmission line parameters, such as line impedance, line length, mutual coupling, and GPS location

Figure 6. Handling of the configuration settings is critical for data analytics solutions.

the context in which the recorded disturbance and configuration will be analyzed, e.g., the circuit connection, the type of recording device, and the protection scheme used.
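One possible shape for this per-line configuration metadata is sketched below; the field names and example values are illustrative assumptions, not a standard or product format.

```python
# Hypothetical configuration record for one monitored transmission line; the
# field names are invented for illustration.
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class LineConfiguration:
    line_id: str
    # Maps logical signal names (I, V, PCB, SCB, PRT, ... from Table 1) to the
    # analog or digital channel numbers of the recording IED.
    channel_map: Dict[str, int] = field(default_factory=dict)
    line_impedance_ohm: complex = 0j                     # positive-sequence impedance
    line_length_km: float = 0.0
    gps_endpoints: tuple = ((0.0, 0.0), (0.0, 0.0))      # (lat, lon) of both line ends


# Example instance for an assumed line "L1" monitored by a DFR:
cfg = LineConfiguration(
    line_id="L1",
    channel_map={"IA": 1, "IB": 2, "IC": 3, "VA": 4, "VB": 5, "VC": 6, "PCB": 101},
    line_impedance_ohm=complex(2.5, 25.0),
    line_length_km=80.0,
)
```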

Figure 7. Obtaining configuration settings corresponding to an event occurrence time stamp.

Recordings coming from the IEDs need to be matched with the corresponding IED settings as well as with the correct current power system component parameters. IED-specific settings sometimes come with the IED recordings, but it is not unusual to see those placed in a separate file or even kept on a remote computer. Easy access to the IED settings is critical in order to enable the fault analysis. Some of these issues are being addressed in current IEEE standards development work, including Common Format for Transient Data Exchange (COMTRADE) and Common Format for Event Data Exchange (COMFEDE).


Handling of the configuration settings can be implemented by interfacing to other systems such as a short-circuit study program database, a relay-setting coordination database, the SCADA PI historian, or International Electrotechnical Commission (IEC) 61850 Substation Configuration Language (SCL) files. The data analytics solutions can also have their own management and version control for the configuration settings. Time stamping of the configuration settings is just as important as time stamping of the disturbance recordings. For each disturbance recording, we need to be able to locate the corresponding version of the configuration parameters to be used for the fault analysis. Figure 7 depicts an example of a simplified Unified Modeling Language (UML) sequence diagram for obtaining the configuration settings. In this case, each IED record is first converted into the nonproprietary COMTRADE file format. Then the preprocessing provides a unique IED identification (“id”) and an event time stamp (“time”). The two parameters, id and time, are then used to retrieve the corresponding version of the configuration settings used by the automated fault analysis.
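A minimal sketch of the Figure 7 lookup, assuming the settings versions are kept in an SQL table keyed by IED id and effective time; the schema, table name, and data are invented for illustration.

```python
# Minimal sketch of the id/time lookup using SQLite; table and column names
# are hypothetical.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE config_versions (
                ied_id TEXT, effective_time REAL, settings TEXT)""")
db.executemany("INSERT INTO config_versions VALUES (?, ?, ?)",
               [("DFR-7", 1000.0, "v1"), ("DFR-7", 2000.0, "v2")])


def get_settings(ied_id: str, event_time: float) -> str:
    """Return the configuration version in force at the event time stamp."""
    row = db.execute(
        """SELECT settings FROM config_versions
           WHERE ied_id = ? AND effective_time <= ?
           ORDER BY effective_time DESC LIMIT 1""",
        (ied_id, event_time)).fetchone()
    if row is None:
        raise LookupError(f"no configuration on file for {ied_id}")
    return row[0]


print(get_settings("DFR-7", 1500.0))  # -> "v1": the version valid when the event occurred
```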

Inner Intelligence of the Fault Analysis

As discussed earlier, the data analytics consists of: 1) converting substation measurements into data, 2) translating data into information, and 3) extracting cause-effect knowledge from that information. For the fault analysis based on substation IED data, these main steps can be described as follows.

Measurements to Data

Measurements are captured and recorded by substation IEDs. The recording is triggered by the occurrence of a fault or disturbance. The recording files are communicated and converted into nonproprietary formats defined by IEEE and IEC standards. The converted data are stored in the data warehouse. The event records are then matched with the configuration settings to perform extraction of the signal features.

Figure 8. Fault analysis rules based on analog signal inputs.

Data to Information

The analysis identifies the affected circuit, such as a faulted transmission line or transformer, which further focuses the processing on the signals relevant to the selected circuit. The analysis determines the start and end of the disturbance and, based on that information, calculates prefault, fault, and postfault values for all the relevant signals. For a typical transmission line, these signals include current and voltage for each phase, as well as statuses for the digital signals, such as relay trip, breaker auxiliary contacts, communication, and so on.
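A minimal sketch of this step for one analog channel, assuming the disturbance-detection logic has already supplied the fault start and end samples; the sampling rate and waveform below are synthetic.

```python
# Sketch of extracting prefault/fault/postfault values for one analog channel.
# The disturbance start/end indices would come from the disturbance-detection
# logic; here they are assumed for illustration.
import numpy as np


def window_rms(signal: np.ndarray, start: int, end: int) -> float:
    """RMS value of the samples in [start, end)."""
    window = signal[start:end]
    return float(np.sqrt(np.mean(window ** 2)))


fs = 4800                              # assumed sampling rate, samples/s
t = np.arange(0, 0.5, 1 / fs)
# Synthetic phase current: 1-kA load current rising to 8 kA during the fault.
i_a = 1000 * np.sqrt(2) * np.sin(2 * np.pi * 60 * t)
fault_start, fault_end = int(0.2 * fs), int(0.32 * fs)   # assumed detection result
i_a[fault_start:fault_end] *= 8

prefault = window_rms(i_a, 0, fault_start)
fault = window_rms(i_a, fault_start, fault_end)
postfault = window_rms(i_a, fault_end, len(i_a))
print(f"prefault {prefault:.0f} A, fault {fault:.0f} A, postfault {postfault:.0f} A")
```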

Information to Knowledge

Figure 9. Protection system performance evaluation rules.

The information extracted in the previous step is used to acquire knowledge about the event. This is accomplished by applying the expert system rules for various steps of the cause-effect analysis: detecting the disturbance and determining the fault type, analyzing fault clearing, and evaluating the performance of protection relays, auto-reclosing logic, circuit breaker operation, and so on. The inner intelligence of the expert system for automated fault analysis consists of the rules shown in Figures 8 and 9. A circle represents each rule subset, and each circle is related to a possible conclusion of the corresponding rule subset. Rule subsets are connected by directed lines, which means that one subset of rules produces a conclusion that is used by another subset of rules. In some cases, rules require both analog and digital quantities extracted from the fault recording files. The rules are designed to cover a wide range of possibilities, not to focus on special cases. As shown in Figure 8, the event can be identified as a “Not a Fault” disturbance or, through the Fault Type Detection, identified as a single line-to-ground, line-to-line, line-to-line-to-ground, or three-phase fault. The relay operation can further be identified as a reclosing attempt. A breaker at the monitored substation can clear the fault, or the fault can be cleared by the protection at a remote substation. The disturbance can be a temporary fault, often called a “self-clearing” fault. Detection of a fault that was not cleared indicates a protection system failure. A reclosing attempt can result in either failure or success in clearing the fault. Even if the breaker auxiliary status is not monitored, its state (open or closed) can be determined based on the change and levels of the analog quantities representing phase currents.

Figure 10. Report example for viewing signals from the affected transmission line.

The rules for protection system performance evaluation are shown in Figure 9. These rules are used to analyze the digital statuses from the breakers and communication gear used in protection schemes. The protection analysis evaluates the operation of primary and backup protection relays and the operation of the bus and middle breakers (shown earlier in Figure 5). Analyzing the states of the digital signals compared with the start and end times of the fault provides an in-depth evaluation of protection performance. A report from automated fault analysis is given as an example in Figures 10 and 11. The report format was customized for the users in the protection group. This actual field recording triggered by a fault was analyzed, and the event was identified as a phase C-to-ground fault. The fault analysis determined the affected circuit, the disturbance’s start and end times, the fault type, a fault location estimate, and protection performance. Figure 10 shows that the disturbance was about seven cycles long. The primary relay needed more than three cycles to trip, and it took the bus and middle breakers a little bit more than three cycles to open. The protection evaluation correctly indicated that the fault was cleared locally and also pointed out that the bus and middle breaker operations were slow (see Figure 11).
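As a much-simplified stand-in for one rule subset of the Figure 8 family, the sketch below classifies the fault type from per-phase fault-interval RMS currents and a residual current magnitude. The thresholds and logic are illustrative only and do not reproduce the authors’ expert system.

```python
# Grossly simplified fault-type rule: not the authors' rule base, only an
# illustration of the kind of conclusion the expert system draws.
def classify_fault(i_rms: dict, i_residual: float, pickup: float = 2000.0) -> str:
    """i_rms: per-phase fault-interval RMS currents (A); i_residual: 3*I0 magnitude (A)."""
    faulted = [ph for ph in ("A", "B", "C") if i_rms[ph] > pickup]
    ground = i_residual > 0.2 * pickup          # crude ground-involvement test
    if not faulted:
        return "not a fault"
    if len(faulted) == 1:
        return f"single line-to-ground fault, phase {faulted[0]}"
    if len(faulted) == 2:
        kind = "line-to-line-to-ground" if ground else "line-to-line"
        return f"{kind} fault, phases {'-'.join(faulted)}"
    return "three-phase fault"


print(classify_fault({"A": 300.0, "B": 280.0, "C": 7900.0}, i_residual=7500.0))
# -> single line-to-ground fault, phase C (consistent with the report example)
```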

Figure 11. Report example for viewing fault analysis results.

Data Analytics Visualization

Figure 12. Example of a Web-based user interface for fault and disturbance data analytics.

The data analytics converts a vast amount of raw substation data into useful information and subsequently into actionable knowledge, typically in the form of user reports. Both the substation data and reports have to be made available to different user groups in a timely fashion. Different needs may result in the use of customized user interfaces and report formats. Figure 12 illustrates a Web-based user interface where a user can access the data analytics results in the form of an events table, using a standard Web browser. This Web solution displays the data and reports that are stored in the data warehouse. The reports are kept in an easily readable and nonproprietary file format (e.g., ASCII, XML, HTML, DOC, or PDF), as other automated data analytics may later use these reports as their input data. In the fault analysis example, this was accomplished using an IEEE file-naming convention (IEEE Standard C37.232-2007) and a standard SQL database engine.
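A minimal sketch of the kind of events table such a Web view could sit on, using SQLite as a stand-in for any standard SQL engine; the schema and the example file path are assumptions, not the authors’ design.

```python
# Illustrative events table behind a Web "events" view; schema and values invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE events (
                event_time TEXT, station TEXT, circuit TEXT,
                fault_type TEXT, location_km REAL, report_file TEXT)""")
db.execute("INSERT INTO events VALUES (?, ?, ?, ?, ?, ?)",
           ("2012-05-01T14:03:22", "Station A", "Line L1",
            "C-G", 42.7, "reports/20120501_140322_StationA_L1.html"))

# Query used to populate the events table shown to the user, newest first.
for row in db.execute("SELECT * FROM events ORDER BY event_time DESC LIMIT 20"):
    print(row)
```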

The analysis reports can be sent automatically using SMS text, e-mail, mobile pagers, fax, and printers. Various notification options can be configured based on the event priority and user category. While the maintenance crew may appreciate a brief message identifying the substation, affected circuit, fault type, and location, the protection group may be interested in all the details about the fault and the related protection operations. When the automated reporting is combined with smartphone technologies, it can be a very powerful tool for “on the go” analysis (see Figure 13).
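The priority- and category-based notification routing described above might be captured in a small configuration table; the user groups, channels, and message content below are invented for illustration.

```python
# Hypothetical notification routing: which user groups get which message style
# for a given event priority. Groups, channels, and text are illustrative.
ROUTING = {
    "high": {"maintenance": "sms", "protection": "email"},
    "normal": {"protection": "email"},
}


def notify(priority: str, summary: str, full_report: str) -> list:
    """Return (group, channel, body) tuples for the dispatcher to send."""
    messages = []
    for group, channel in ROUTING.get(priority, {}).items():
        body = summary if group == "maintenance" else full_report
        messages.append((group, channel, body))
    return messages


for msg in notify("high", "Fault on L1 at 42.7 km, phase C-G", "<full report text>"):
    print(msg)
```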

Figure 14. Example of a desktop-based (rich client) user interface.

The examples shown in Figures 12 and 13 illustrate “thin client” (Web, mobile) visualization. Another approach is to implement a “rich client” as the universal desktop-based user interface for fault analysis; one such tool, called Report Viewer, is shown in Figure 14. The viewer starts via the Web, using Java Web Start technology, and runs locally on the user’s desktop or workstation. This tool may be used to manually inspect the signal waveforms as well as to access and read the fault analysis reports.

Figure 13. Data analytics “on the go” using text/e-mail messaging and the mobile Web.

The rich client visualization enables a more native experience for the user and frees some of the server’s resources. This is beneficial in situations when frequent user interaction and manual data manipulation are expected. Figure 15 depicts the use of manual fault location calculation. This tool lets the user interact with the results and modify the input parameters used in the fault location calculation algorithm. Parameters that can be changed include the channel selection and the prefault and fault measurement positions. The user can also change the line impedance and length, invert selected channels, and adjust scaling to further tune the data analytics results.
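The article does not state which fault-location algorithm the tool uses; a common single-ended reactance method is sketched below as a stand-in, with the user-adjustable line impedance and length appearing as inputs. The phasors and line data are assumed values.

```python
# Stand-in fault-location calculation: single-ended reactance method.
# This is not necessarily the algorithm used by the described tool.
import numpy as np

# Fault-interval voltage and current phasors at the local terminal (assumed).
v_fault = 35_000 * np.exp(1j * np.deg2rad(-5))     # volts
i_fault = 6_500 * np.exp(1j * np.deg2rad(-80))     # amperes

# User-adjustable line parameters, as in the manual tool of Figure 15.
line_reactance_ohm = 25.0        # total positive-sequence reactance (assumed)
line_length_km = 80.0

z_apparent = v_fault / i_fault
distance_km = (z_apparent.imag / line_reactance_ohm) * line_length_km
print(f"estimated fault location: {distance_km:.1f} km from the local terminal")
```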

Figure 15. A tool for manual fault location calculation.

In addition to human users, it is possible that consumers of the data analytics results may be other systems, such as SCADA or PI historian systems. The data analytics function could export its results to the SCADA visualization, as shown in Figure 16. Additionally, the data analytics results can be interfaced with third-party visualization solutions. One such example is displaying calculated fault locations on a Google Earth satellite map, as illustrated in Figure 17.


Benefits of Automated Data Analytics

There are several benefits of automated data analytics coming from fault analysis based on substation IED data. These include:
A major reduction of the time spent on substation data handling and analysis, either manual or automated, assures higher personnel productivity.
Automated integration and archiving of the substation data using nonproprietary and standard data formats facilitate future data analytics implementations.
Saving recordings from multiple IEDs corresponding to the same power system event provides the redundancy needed for improved data integrity checking.
A standard data warehouse design keeps the solution open for implementation of different user interface tools, including integration with third-party visualization.
The universal report-viewing and waveform inspection tools for accessing substation data regardless of IED type, model, and vintage make the data source transparent to all users.

Figure 16. Fault analysis integration with SCADA visualization.

Providing the data analytics reports to multiple user groups in a format customized to fit their needs allows for a more focused and efficient decision-making process.
Automated data analytics may be of great help in restoring the system by providing information and knowledge about the fault in a timely fashion.
The inherent scalability of the proposed data analytics concept allows for the future addition of new IEDs as well as the implementation of new data analytics functions.
The data analytics functions can be used to interconnect the systems within a utility enterprise, or even to connect with external entities, creating value for multiple users.
The data analytics value proposition is tied to many opportunities for return on investment, such as reliability, productivity, capital investment, regulatory compliance, and standardization (see Table 2). Combining this fact with the trend toward large-scale deployment of substation IEDs makes automated data analytics solutions highly desirable.

Table 2. The benefits of automated data analytics.

Category                Improvements
Reliability             Reliability of assets, resilience to random events, reliability of operating decisions, robustness of system wiring and data
Productivity            Data integration, analysis, viewing, and archiving; event reporting
Capital investment      New data collection does not require new wiring, IEDs, communications, procedures, and so on; substation data analysis software is not as costly to install as hardware; hardware will not become stranded due to an inability to produce useful data.
Regulatory compliance   NERC, FERC, public utility commissions (PUCs), reliability coordinators, large customers

For Further Reading

M. Kezunovic, B. Clowe, B. Ferdanesh, J. Waligorski, and T. Popovic, “Automated data retrieval, analysis and operational response using substation intelligent electronic devices,” in Proc. CIGRE Session, Paris, France, paper B5-206, pp. 1–8, Aug. 2012.

M. Kezunovic, “Translational knowledge: From collecting data to making decisions in a smart grid,” Proc. IEEE, vol. 99, no. 6, pp. 977–997, June 2011.

P. Myrda, M. Kezunovic, S. Sternfeld, D. R. Sevcik, and T. Popovic, “Converting field recorded data to information: New requirements and concepts for the 21st century automated monitoring solutions,” in Proc. CIGRE Session, Paris, France, paper B5-117, pp. 1–8, Aug. 2010.

T. Popovic and M. Kuhn, “Automated fault analysis: From requirements to implementation,” in Proc. IEEE PES General Meeting, Calgary, AB, Canada, July 2009, pp. 1–6.

Biographies

Tomo Popovic is with XpertPower Associates. Mladen Kezunovic is with Texas A&M University and XpertPower Associates.

Figure 17. Integration with a third-party visualization: displaying the calculated fault location on a satellite map.


One Step Ahead By Ganesh Kumar Venayagamoorthy, Kurt Rohrig and István Erlich

Short-Term Wind Power Forecasting and Intelligent Predictive Control Based on Data Analytics

The intelligent integration of wind power into the existing electricity supply system will be an important factor in the future energy supply in many countries. Wind power generation has characteristics that differ from those of conventional power generation. It is weather dependent in that it relies on wind availability. With the increasing amount of intermittent wind power generation, power systems encounter more and more short-term, unpredicted power variations. In the power system, supply and demand must be equal at all times. Thus, as levels of wind penetration into the electricity system increase, new methods of balancing supply and demand are necessary. Accurate wind power forecasting methods play an important role in addressing the challenge of balancing supply and demand. Forecasting is required to maximize the integration of a high level of wind power penetration into an electricity system because it couples weather-dependent generation with the planned and scheduled generation from conventional power plants and the forecast electricity demand. The latter is predictable with sufficient accuracy. Even with state-of-the-art wind forecasting methods, the hour-ahead prediction errors for a single wind plant are still around 10–15% with respect to actual production. Wind power prediction determines the need for balancing energy and, hence, the cost of wind power integration. In countries such as Denmark, Germany, Spain, and the United States, wind power prediction is a critical component of grid and system control. The short-term energy balancing of existing electricity supply systems depends on automatic generation control (AGC), which cannot regulate transmission line flows. Most regional voltage controllers (RVCs) are capable of regulating only the primary bus voltage and do not result in any voltage enhancement at other buses. With a high level of wind power penetration, short-term transmission line overloads and voltage violations may occur because of the limited adaptation capabilities of the AGCs and RVCs. A high degree of wind power integration without intelligent control may result in power system stability issues and penalties that cause wind farm owners to lose revenue. Real-time operation time frames require short-term wind power prediction on the order of seconds, minutes, and a few hours, as well as the integration of that prediction into the control room environment. Short-term wind power forecasting based on the current status of wind power plants (WPPs)—and the application of such forecasting in the development of intelligent predictive optimal control of reactive power and wind power fluctuations for real-time control center operations—are discussed in this article.

Short-Term Wind Power Prediction

Short- to medium-term wind power forecasting using numerical weather forecasts and computational intelligence methods has experienced enormous progress in recent years and represents an integral part of today’s energy supply. To exercise predictive control of wind farms, wind farm groups, and the associated transformers, the short-term prediction of active and reactive wind turbine power outputs is essential. In contrast to other applications of prediction models in the energy market, wind farm control requires a very short forecast horizon, from a few seconds up to 15 min. The approaches used with existing models, therefore, do not apply here. Weather pattern information will play no role in this task. Rather, it is important to estimate the electrical parameters for the near future based on recordings and analyses of the current situation of wind farms. Compiling this estimation using analytical approaches is very difficult and imposes a high computational cost; for these reasons, the use of computational intelligence methods is essential. The ability of neural networks to carry out short-term predictions from spatiotemporal information is well documented in several studies on wind power prediction.

Figure 1. Neural network inputs are the active and reactive power of the individual N wind turbines in a wind farm at the current time, t, and the outputs are the predicted active power and reactive power of the wind farm at time t + Δt.

In contrast to the previously used methods for very short-range forecasting, the proposed method uses no related numerical weather prediction (NWP) information. The active and reactive power are predicted solely based on power data measured from representative wind farms or wind turbines in a wind farm. Due to the spatial distribution of these wind farms, changes in grid areas are identified, and this information helps to predict the supply in the near future. The suitability of this spatial method for predicting wind power over very short forecast horizons is being investigated in detail. In Figure 1, the predicted outputs are active and reactive power of a wind farm at the next time interval. The short-term active wind power forecast for one of the German network regions (TenneT), based on the measured power data of selected wind farms, is shown in Figure 2. The input data for the neural network consist of the normalized output signals representative of individual turbines or wind farms.
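The Figure 1 mapping, from the normalized active and reactive power of N turbines at time t to the farm-level values one step ahead, can be prototyped with a generic multilayer perceptron. The sketch below uses scikit-learn and synthetic data as illustrative stand-ins; it is not the authors’ model.

```python
# Minimal sketch of the Figure 1 mapping with a generic multilayer perceptron.
# scikit-learn and the synthetic data are stand-ins, not the authors' setup.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_turbines, n_samples = 10, 2000

# Inputs: normalized P and Q of each turbine at time t (2*N features).
X = rng.uniform(0, 1, size=(n_samples, 2 * n_turbines))
# Targets: farm-level P and Q one step ahead; here a synthetic smooth function.
y = np.column_stack([X[:, :n_turbines].sum(axis=1) * 0.95,
                     X[:, n_turbines:].sum(axis=1) * 0.30])

model = MLPRegressor(hidden_layer_sizes=(30,), max_iter=2000, random_state=0)
model.fit(X[:1500], y[:1500])
print("test R^2:", round(model.score(X[1500:], y[1500:]), 3))
```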


Figure 2. Measured and predicted curve of active power of wind farms in a grid area of TenneT.

The figure shows the curves of the wind energy fed into a network region of TenneT compared with the one-hour forecast and the one-hour persistence. The neural network model using the spatial method clearly predicts large fluctuations significantly better than the persistence method. The root mean square error (RMSE) for the one-year period was 2.5% of the installed plant capacity, and the correlation coefficient was 0.989. Table 1 compares the forecast accuracy of the spatial method for prediction horizons from one to three hours. For the one-hour forecast, the spatial method provides a significantly better result than NWP-based models. In contrast, larger prediction horizons suffer from reduced quality compared with the NWP-based models. With these benchmarks, the neural network method based on spatial power data represents a very good solution for very short-term predictions for grid regions and wind farms.

Table 1. Accuracy (RMSE and correlation) of the spatial method.

Prediction Horizon (hours)   RMSE    Correlation
1                            2.5%    0.989
2                            4.2%    0.970
3                            5.7%    0.953
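The accuracy figures in Table 1 follow the usual definitions: RMSE normalized by the installed plant capacity and the linear correlation coefficient between forecast and measured series. A minimal sketch with synthetic data:

```python
# Defining the Table 1 metrics; the time series below are synthetic.
import numpy as np

installed_capacity_mw = 12_000.0
rng = np.random.default_rng(1)
measured = rng.uniform(0, 0.8, 8760) * installed_capacity_mw
forecast = measured + rng.normal(0, 0.025 * installed_capacity_mw, measured.size)

rmse_pct = 100 * np.sqrt(np.mean((forecast - measured) ** 2)) / installed_capacity_mw
correlation = np.corrcoef(forecast, measured)[0, 1]
print(f"RMSE = {rmse_pct:.1f}% of installed capacity, correlation = {correlation:.3f}")
```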

Predictive Wind Farm Reactive Power Control

With the increasing integration of WPPs, grid utilities require extended reactive power supply capabilities, not only during voltage dips but also during steady-state operation. According to the grid codes, the reactive power requirements are defined alternatively in terms of the power factor, the amount of reactive power supplied, or the voltage at the point of interconnection. To achieve the reactive power requirement optimally, WPP operators may consider performing reactive power optimization within their own facilities. The stochastic nature of the wind speed, however, poses a serious problem to the reactive power management of WPPs. To consider uncertainties caused by the wind, the optimization must be performed in a predictive manner for a certain future time horizon by taking into account the short-term wind forecast. This idea is depicted in Figure 3. In this approach, optimization of power flows is performed for a given scenario, which includes a set of future operating points. All of these operating points are optimized simultaneously using the objective function, which can be formulated in several different ways. The simplest technique is to minimize power losses within the wind farm area. Taking into account the stepwise movement of on-load tap changers (OLTCs), the power losses and costs of OLTC movements can be considered monetarily. The quality of the optimal wind farm operation depends on the accuracy of the wind power forecast. In the example presented herein, the forecast results shown in Figure 4 have been used.

Figure 3. Predictive wind farm reactive power optimization.

Figure 4. Results of the wind power forecast using a neural network.

The optimization is carried out over the predicted time period for n discrete time steps simultaneously. Then, the optimal power flow program suggests the optimal OLTC tap settings along with the optimal reactive power references for the entire wind farm for the next n time steps. By conducting this optimization every five minutes, it can be updated if new, improved forecast results become available. The proposed predictive control optimization was tested with a real wind farm model, as depicted in Figure 3. The results are shown in Figures 5 and 6.

For simplicity, in this case study, all wind turbines receive the same optimized reactive power reference set point. Different optimization methods can be used for the described problem, but the optimization task in general is nonlinear and nonconvex. The authors therefore used the heuristic optimization algorithm called mean-variance optimization (MVO), also referred to later as the mean-variance mapping optimization (MVMO), which demonstrates excellent convergence properties.
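The authors use MVMO; as a stand-in, the sketch below minimizes an invented loss-plus-switching-cost objective over n future operating points with a plain random search. The quadratic loss model, costs, and all numbers are assumptions and only illustrate the structure of the predictive optimization, not the actual algorithm or wind farm.

```python
# Predictive objective sketch: wind farm losses over n forecast operating points
# plus a cost per OLTC tap movement, minimized with a random-search stand-in for
# MVMO. The loss model and every number below are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
p_forecast = np.array([0.6, 0.7, 0.9, 0.8])       # forecast active power, p.u. (assumed)
q_demand = np.array([0.1, 0.1, -0.2, -0.2])       # grid var demand, p.u. (assumed)
tap_cost = 0.02                                   # assumed cost per tap step


def objective(taps: np.ndarray, q_ref: np.ndarray) -> float:
    # Invented quadratic loss model plus a switching penalty on tap movements.
    losses = np.sum(0.03 * p_forecast ** 2 + 0.05 * (q_ref - 0.05 * taps) ** 2)
    switching = tap_cost * np.sum(np.abs(np.diff(taps, prepend=taps[0])))
    return losses + switching


best, best_cost = None, np.inf
for _ in range(5000):                              # random-search stand-in, not MVMO
    taps = rng.integers(-5, 6, size=p_forecast.size).astype(float)
    q_ref = np.clip(q_demand + rng.normal(0, 0.05, q_demand.size), -0.4, 0.4)
    cost = objective(taps, q_ref)
    if cost < best_cost:
        best, best_cost = (taps, q_ref), cost

print("best tap schedule:", best[0], "cost:", round(best_cost, 4))
```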


Large wind farms connected to high-voltage transmission grids must either deliver a certain amount of reactive power or control the voltage at the point of interconnection. Often, the reactive power demand is derived from the voltage according to a given characteristic. Alternative methods may exist, but the basic task always remains the same and can be described by the reactive power demand that the wind farm has to supply. To adapt the reactive power generation, usually a wind farm controller is implemented. The output of this controller is the reactive power reference to individual wind turbines or, alternatively, the local voltage reference if a voltage controller is implemented at the wind turbine level. The question that arises is how the suggested wind farm optimization can be incorporated into the common wind farm control loops. Figure 7 illustrates the approach used. The optimization directly controls the OLTC positions and the shunt reactor connected to the bus bar to compensate for the capacitive charging power of the cable. The shunt reactor represents a discrete optimization variable, as it can only be switched on or off. The reactive power reference of the wind farm is usually distributed to the operating wind turbines equally, meaning that the output of the proportional-integral controller, ΔQtotal, is divided by the number of wind turbines. This value may now be modified by distribution factors calculated based on the optimization results. The distribution factors are usually close to unity. Deviating from 1.0 will result in different var references remotely communicated to the wind turbines. The distribution factors are calculated in such a way that even if they are not uniform, the total required power ΔQtotal will be supplied.

Figure 5. Optimization results: OLTC stepping.

Figure 6. Optimization results: wind farm var reference (sum of all wind turbine var reference set points).
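The distribution of ΔQtotal using factors close to unity can be illustrated in a few lines. The factor values below are assumed; the renormalization step simply enforces the property stated above, that the total required reactive power is still supplied.

```python
# Sketch of distributing the farm controller output, delta_q_total, to individual
# turbines using distribution factors close to unity (values assumed). The factors
# are renormalized so that the sum of the references equals delta_q_total.
import numpy as np

delta_q_total = 12.0                                   # Mvar demanded by the farm controller
factors = np.array([1.05, 0.95, 1.10, 0.90, 1.00])     # from the optimization (assumed)

n = factors.size
factors = factors * n / factors.sum()                  # enforce sum(factors) == n
q_refs = delta_q_total / n * factors                   # per-turbine var references

print(q_refs, "sum =", round(q_refs.sum(), 6))         # sum equals delta_q_total
```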

Figure 7. Integration of optimization in wind farm control for reactive power control and power loss reduction.

The suggested control and optimization methods have been tested by simulating their behavior over 24 hours. The optimization is carried out every 15 min, resulting in modified distribution factors. For the simulation, it is assumed that the wind is fluctuating and that the reactive power demand is changed by the operator in a stepwise manner in the range of maximum capacitive to maximum inductive values. Wind farm losses are shown for three different cases in Figure 8. Cases 1 and 2 represent operation with optimization. In Case 1, the var references of the wind turbines are different, whereas in Case 2, all of the wind turbines have the same (but optimized) var references. Case 3 represents the state of the art as implemented in most wind farms: no optimization, but with a wind farm controller in operation and classical voltage controllers applied to the OLTC. Clearly, the wind farm losses can be reduced considerably with optimization. On the right-hand side of the plots in Figure 8, Case 3 shows slightly smaller losses. In this case, however, the voltage limitations in the grid are violated (not shown here). Cases 1 and 2 show similar results. This is due to the fact that the wind turbines in this particular wind farm are close to each other (500–600 m), so different supplied var references will not result in considerable differences in the losses. Therefore, in this case, a uniform var generation distribution is acceptable. Optimization is required, however, for optimal control of the OLTC and the shunt reactor.

Figure 8. Wind farm losses over 24 hours for three different control scenarios.

Predictive Optimal Control of Wind Power Fluctuations

The dynamic and intermittent nature of wind power causes fluctuations in transmission line flows that may result in power system instability. Power system instability can lead to cascaded outages and, eventually, a blackout. Integrating battery energy storage systems (BESSs) reduces the uncertainty inherent in wind power generation and increases grid reliability and security. In other words, it minimizes the possibility of a blackout. Wind power varies continuously, however, and in order to effectively and continuously utilize limited energy storage to mitigate the power fluctuations, it is necessary to carry out a real-time optimal control of the state of charge (SOC) of the battery energy storage system with variations in wind speed over a moving time window. Based on the short-term predictions of wind power over any given time window, the optimal charge and discharge power commands for the BESS are determined. In other words, without optimal control, the BESSs will lose their function as shock absorbers once their SOCs charge to their maximum limit or discharge to their minimum limit. Adaptive critic design (ACD) is a powerful computational approach that can determine optimal control laws for a dynamic


system in a noisy, nonlinear, and uncertain environment, such as the power system. Compared with classical control and dynamic programming–based approaches, ACD is a computationally inexpensive method for solving infinite-horizon optimal control problems. With ACDs, no prior information is needed about the continuously changing system to be controlled, and optimal control laws can be determined based on real-time measurements. The ACD consists of two subsystems, an actor and a critic. The actor receives the states of the system (wind speed, power flows, and so on) and dispenses the control/decision signals (BESS charge and discharge commands). The critic learns the desired performance index for some function associated with that index and evaluates the overall performance of the system, like a supervisor.

Figure 9. A modified 12-bus, three-area, multimachine power system with a wind farm, BESS, and wind power balancing controller. The wind power balancing controller uses the predicted power output of the wind farm to command charging and discharging of the BESS.

The power system in Figure 9 is used to illustrate the need for intelligent optimal control of a BESS to provide maximum mitigation of transmission line power flows with wind farms. Figure 9 shows a modified 12-bus, multimachine power system with three generators (G2, G3, and G4), an infinite bus (G1), and three interconnected areas. Generator G4 is a wind farm. The BESS is connected to bus 13 in area 2 of the system. The BESS charges and discharges energy in order to reduce power fluctuations in the two transmission lines (lines 6-4 and 1-6) connected to the wind farm bus. The task of the BESS is to maintain steady-state power flows in lines 6-4 and 1-6 as much as possible with wind power variations. In order to implement this objective, a dynamic optimal SOC controller with the ability to forecast wind power variations was developed. The actor (see Figure 10) is an MVO algorithm, which generates charge and discharge power commands (P*comm(t)) based on the system states and feedback from the critic neural network regarding the actor’s performance. The system states are measurements from the power system, which consist of the following four elements: the current SOC of the BESS (SOC(t)), the varying wind power (Pwind(t)), and the active power flows through the transmission lines 1-6 (P1-6(t)) and 6-4 (P6-4(t)) connected to the wind farm. The critic network is a neural network whose output is an approximation of the cost-to-go function of Bellman’s equation of dynamic programming. The utility function in the approximation of the cost-to-go function is composed of the sum of three terms with different weightings. The first two terms are the transmission line active power fluctuations in lines 1-6 and 6-4. The third term represents the anticipated deviation in the BESS’s SOC from its maximum and minimum SOC limits, which is estimated based on the predicted wind power output over the next several seconds. If the SOC of the BESS falls below the predefined minimum, the BESS will not be able to compensate for any deficit in wind power. Similarly, if the SOC exceeds the predefined maximum, it will not be able to absorb any excess wind power. Therefore, it is necessary to maintain the SOC of the BESS within its chosen dynamic range at all times.

Figure 10. A dynamic optimal BESS charge-discharge power command (P*comm(t)) controller.
The actor based on the MVO algorithm determines the optimal charge or discharge command P*comm(t). The MVO algorithm is a new, population-based stochastic optimization technique. The MVO algorithm finds the near-optimal solution and is simple to implement. The anticipated SOC deviation of the BESS is obtained using its ampere-hour rating and the forecast wind power over the next several seconds or minutes. The active power flow fluctuations in transmission lines 1-6 and 6-4 caused by the variations in wind power over a few minutes, shown in Figure 11(a), are plotted in Figures 11(b) and (c), respectively. Without an ACD controller, significant power fluctuations occur in the lines, which may result in stability issues and penalties that cause the wind farm to lose revenue. The ACD controller reduces the fluctuations in the transmission lines from the reference line power flow values and, hence, minimizes the deviation penalty charged to the wind power provider. The results presented here use five steps of prediction, a total of 25 s, where each step is five seconds ahead.
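The utility function is described as a weighted sum of the two line-flow fluctuation terms and an anticipated SOC-limit deviation term. The sketch below gives one possible form of such a function; the weights, SOC band, BESS capacity, and the simple SOC-prediction rule are all assumptions, not the authors’ exact formulation.

```python
# One possible form of the three-term utility function described in the text.
# Weights, limits, capacity, and the SOC-prediction rule are assumed values.
import numpy as np

W1, W2, W3 = 1.0, 1.0, 0.5                 # assumed weightings
SOC_MIN, SOC_MAX = 0.2, 0.8                # assumed usable SOC band


def utility(p16, p16_ref, p64, p64_ref, soc_now, p_wind_forecast, p_wind_ref,
            capacity_mwh=10.0, dt_h=5 / 3600):
    """Lower is better. The BESS is assumed to absorb the forecast wind deviation."""
    # Anticipated SOC after compensating the forecast wind deviation over the window.
    soc_pred = soc_now + np.sum(p_wind_forecast - p_wind_ref) * dt_h / capacity_mwh
    soc_violation = max(0.0, SOC_MIN - soc_pred) + max(0.0, soc_pred - SOC_MAX)
    return (W1 * (p16 - p16_ref) ** 2 +
            W2 * (p64 - p64_ref) ** 2 +
            W3 * soc_violation ** 2)


# Five 5-s prediction steps (25 s total), matching the reported horizon; values assumed.
p_wind_forecast = np.array([52.0, 55.0, 58.0, 54.0, 50.0])   # MW
print(utility(105.0, 100.0, 98.0, 100.0, 0.55, p_wind_forecast, p_wind_ref=53.0))
```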

Conclusions

Short-term wind power prediction on the order of seconds, minutes, and a few hours, and its application in control centers, becomes critical for the real-time operation of the electricity supply system as more and more wind power penetrates into it. The value of short-term wind power forecasting is high considering the reduction in power losses it offers, as is its contribution to maximizing the security and stability of the power system, especially since stochastic security-constrained optimal power flow is far from reaching control centers in the near future. Even more attractive to wind power providers is that short-term wind power forecast–based system applications in control centers can result in the maximization of revenue by minimizing penalties.

For Further Reading

G. K. Venayagamoorthy, “Dynamic, stochastic, computational and scalable technologies for smart grid,” IEEE Comput. Intell. Mag., vol. 6, no. 3, pp. 22–35, Aug. 2011.

B. Lange, K. Rohrig, B. Ernst, B. Oakleaf, M. L. Ahlstrom, M. Lange, C. Moehrlen, and U. Focken, “Predicting the wind—Models and methods of wind forecasting for utility operations planning,” IEEE Power Energy Mag., vol. 5, no. 6, pp. 78–89, Nov.–Dec. 2007.

R. Jursa and K. Rohrig, “Short-term wind power forecasting using evolutionary algorithms for the automated specification of artificial intelligence models,” Int. J. Forecast., vol. 24, no. 4, pp. 694–709, Oct.–Dec. 2008.

I. Erlich, G. K. Venayagamoorthy, and N. Worawat, “A mean-variance optimization method,” in Proc. IEEE World Congress Computational Intelligence, Barcelona, Spain, July 18–23, 2010, pp. 1–6.

V. S. Pappala, I. Erlich, and K. Rohrig, “A stochastic model for the optimal operation of a wind-thermal power system,” IEEE Trans. Power Syst., vol. 24, no. 2, pp. 940–950, May 2009.

G. K. Venayagamoorthy, R. G. Harley, and D. C. Wunsch, “Comparison of heuristic dynamic programming and dual heuristic programming adaptive critics for neurocontrol of a turbogenerator,” IEEE Trans. Neural Networks, vol. 13, no. 3, pp. 764–773, May 2002.

Figure 11. (a) Wind power variations over a few minutes. (b) Comparison of power flow in transmission line 1-6 with and without ACD controller. (c) Comparison of power flow in transmission line 6-4 with and without ACD controller.

Biographies

Ganesh Kumar Venayagamoorthy is with Clemson University in South Carolina. Kurt Rohrig is with the Fraunhofer Institute for Wind Energy and Energy System Technology in Kassel, Germany. István Erlich is with the University of Duisburg-Essen in Germany.



Leader’s Corner By Miroslav Begovic

Strategic Goals
An update on long-range planning

In the spring of 2012, the IEEE Power & Energy Society (PES) undertook the task of evaluating its accomplishments over the previous five years and preparing to set goals for the future. We do not operate on a five-year plan, like some plan-based economies, but strategic planning and setting up a general direction are important, especially when membership is growing as fast as ours and when the Society’s operations are spread across the globe. During the last election for the PES officers, one of the voting members sent me an e-mail, indicating an intent not to vote as “there is little difference between candidates’ platforms to make it worth it.” There is some truth to that, in a good way, but there are also significant differences between the priorities of candidates. I became aware of that while going through the election process as a candidate for president-elect. PES is like a well-greased machine, and its growth and successes over the last decade speak for themselves. Radical changes without testing or fully knowing their consequences might not be the best way to test new initiatives. Minor adjustments and fine tuning of successful programs, as well as critical reevaluation of those initiatives that do not seem to produce the desired effects, are needed and represent a good way to keep the successful operations and adapt to the changes that we are going through. It is also important to know how funding for our membership value initiatives becomes available. Without going into a great deal of detail about the Society’s finances, it suffices to say that our funding and resources for continuous operations come primarily from conference and publications revenue, although other sources of income are also becoming important. Due to the biennial periodicity of the T&D Conference and Exposition, our largest income-producing meeting, our revenue oscillates within a two-year cycle, one year with a relatively large surplus followed by the subsequent year with a much smaller, or negative, surplus. Due to the fiscal policies of the IEEE, we are somewhat limited in our ability to undertake new initiatives in those years when our revenue is small or negative. Thus, we must carefully plan and prioritize those programs that are deemed high impact and low risk. One of our continuous goals is to maintain our successful ongoing programs and to secure the funds for their uninterrupted deployment, i.e., the training of Chapter chairs across all ten IEEE Regions. This program was initially limited to major events but, following our recent membership growth across all Regions, it increased in frequency and quality under the leadership of Chapters VP Meliha Selak and a group of ambitious and committed regional representatives who undertook the responsibility of organizing the events and making sure that they produce the desired effect, something that is a measure of the regional representatives’ own success on the job. We must continue such programs to be able to spread the benefits of PES membership across all Regions, and we must ascertain that we have a way of measuring their success and effectiveness. Among the significant actions regarding image change while asserting our role in the domain of electric energy that the Society has embraced within the last five years was the name change to Power & Energy and the adoption of a meaningful new logo.
And as the PES acronym remained the same, name recognition has not been diminished, although the new name addresses the important aspect of energy development as one of the interests of our organization. PES has operated these past five years, 2006–2011, driven by a broad strategic development plan. It was based on four major goals:
improving Society nimbleness and effectiveness by rapidly addressing emerging themes and technologies, developing a framework to efficiently enter into alliances, enhancing discussion opportunities for technical papers, reviewing meeting structure to increase attendance, expanding awards, and allocating limited resources for optimal output
developing and delivering accessible and relevant information by training PES members through classes and short courses, developing information availability online, achieving growth by targeting information of interest for technician and designer backgrounds and not only engineers, and by better packaging the information and its access
increasing interaction between academics and industry by promoting collaboration through boutique meetings to increase the rate of understanding for topical themes and encouraging Distinguished Lecturer and regional speaker events that address areas of interest by connecting with nontechnical professionals in the power industry
boosting the membership and our engineering image by attracting new members, which is critical to the success of the Society, by building membership worldwide and concentrating on growth in Regions 8–10, by increasing the involvement of GOLD members to attract and retain them, by enhancing the student image of power engineering and the student member value proposition to attract and build a diverse workforce for the future, and via collaborative efforts and opportunities to communicate the value of the power industry profession to attract the future workforce.
The overall activities of PES are designed to support the IEEE Technical Activities Board goal of becoming the global information resource where innovators meet, being essential to the global technical community and universally recognized, being a home for all technical professionals in all disciplines of interest, being recognized globally as the leading organization for forming new knowledge communities, and being the preferred place to go for scientific


information and promote lifelong activity of volunteers. As an example, the Society has strongly supported the activities of the IEEE Smart Grid Task Force, chaired by PES Past President Wanda Reder. That initiative has spawned a number of other useful activities, namely the recently activated Electric Vehicle Portal and the Smart Grid Clearinghouse. PES is actively participating in the IEEE-wide activity on electric vehicles and electric ships, which has resulted in opening yet another portal, and supporting a recent IEV conference in March 2012 in Greenville, South Carolina. PES activity on IEEE standard development is very prominent; recently adopted streamlined procedures for smart grid standard development are expected to greatly support industrial efforts while asserting the preeminence of IEEE and the Societies in that area. A recent successful series of Innovative Smart Grid Technology conferences around the world was held in Washington, D.C.; Anaheim, California; Manchester, United Kingdom; Gothenburg, Sweden; Perth, Australia; Medellín, Colombia; Kerala, India; Tianjin, China; and Jeddah, Saudi Arabia. This attests to the global reach and reputation of the Society, as the conferences have attracted authors and experts from 35 to 40 countries while bringing subjects of global importance within easy reach of the members in all Regions, especially in Regions 8–10. This year’s Power Africa in Johannesburg, held in July, offered the same in Africa. In addition, two of the Technical Council’s coordinating committees, Emerging Technologies and the recently formed Intelligent Grid, have been supportive of new themes of great interest for the wider IEEE membership. The Society has a Long-Range Planning (LRP) Committee, chaired by the president-elect. The objective of the LRP Committee is to develop and evaluate the strategic plans for the Society and to represent the interests of all members in its work. The committee is made up of 25–30 volunteer members, chosen to represent the diverse and wide-ranging interests and needs of the rapidly growing worldwide membership base, which increased by almost 50% during the last five years. The entire committee meets twice a year and continues an active development of the strategic plan throughout the year via phone conferences and e-mail. The new plan is under development. The work has started on the revision and assessment of the previous strategic plan, developed for the period 2006–2011 and beyond. The LRP Committee is planning to do the majority of its work by the end of 2012. We value our members’ diverse interests and ideas. If you would like to contribute to the development of the new LRP or influence the more immediate goals of developing current membership value initiatives and have them possibly included in the Society’s budget, please contact me (Miroslav.Begovic@ieee.org) and put “long-range planning ideas” in the subject line of your e-mail.



History By Thomas J. Blalock

In the Berkshires, Part 2
Stanley’s Early Work Expanded

The first part of guest “History” author Thomas J. Blalock’s article on electric power developments in the Berkshires of western Massachusetts appeared in the July/August 2012 issue of IEEE Power & Energy Magazine. In that part of the article, Tom’s focus was on William Stanley’s pioneering 1886 demonstration of an alternating current (ac) power system in the town of Great Barrington and on other significant early electric power advances in the area up to the mid-1890s. In this second and concluding part of his article, Tom picks up his account in 1895 and discusses the further electrification of Berkshire County during the late 1890s and well into the 20th century. Tom Blalock is well known to the readers of these pages and needs no further introduction here. Suffice it to say that we welcome him back as our guest history author for part 2 of the 14th history article that he has contributed to IEEE Power & Energy Magazine.
—Carl Sulzberger, Associate Editor, History

Our examination of significant developments in the introduction and expansion of electric power in southern Berkshire County, Massachusetts, began in the July/August 2012 issue of this magazine. We now continue that discussion with an account of further important advances leading to the comprehensive electrification of all of western Massachusetts.

Meanwhile, Back in Lenox

In early 1895, electric power from the Westinghouse powerhouse on Laurel Lake finally reached the center of Lenox via underground conduit. In June of that year, it was reported in a local newspaper that Mrs. Anson Stokes held a large party at Shadowbrook and that “the whole house was ablaze with electric lights, every one of the hundred rooms being lighted.” This feat must have been achieved with the hydroelectric generator installed on the estate because, in November 1895, it was reported that plans were being made for the extension of the electric conduit installation in the center of town to Shadowbrook and other outlying mansions. The lavish mansions built in Lenox were coyly referred to as “cottages,” as was the case with the mansions in Newport, Rhode Island. The New York Times regularly reported in its social pages about the comings and goings of the Lenox “cottagers.” This feature in the newspaper was titled “In Beautiful Lenox,” and on 14 May 1895 it was reported that George Morgan’s estate, Ventfort Hall, had been wired for electric lights. The Ventfort Hall estate was nearly lost to potential redevelopment over a decade ago but, fortunately, it was saved from demolition by a concerned group of Lenox citizens. Today, the mansion is undergoing a slow restoration, and the rooms that have been restored are open for public tours and events (see Figure 1).

Figure 1. Recent photograph of Ventfort Hall, the former home of George H. Morgan and family (photo courtesy of Thomas J. Blalock).

The author was very kindly allowed to explore the unrestored Ventfort Hall during the late 1990s for the purpose of documenting the remains of original lighting fixtures and other electrical and gas artifacts still in place. No physical evidence was found, nor was any anecdotal evidence uncovered, to indicate that any sort of electric generator was ever installed at Ventfort Hall. Therefore, it must be concluded that the house relied on gas lights supplied by a Springfield Gas Machine until 1895, when it was reported that the house had been wired. The remnants of well over 100 combination gas/electric lighting fixtures were found throughout the house. In addition, many remnants of gas-only fixtures were still in place (particularly in basement areas). This indicates that the Springfield Gas Machine remained in use as a back-up lighting supply after the house had been wired for electric lights. Such a


situation was not at all unusual in the very late 19th and very early 20th centuries. Since generators were most often driven by high-maintenance steam engines, electric power was not often available around the clock. When the generating station was out of service for routine maintenance, gas lights provided an alternative source of illumination (see Figure 2). In June of this year, the author had the pleasure of attending a lecture given at Ventfort Hall by Donald W. Linebaugh on the topic of his book, The Springfield Gas Machine (University of Tennessee Press, 2011). Linebaugh confirmed that Wyndhurst, Shadowbrook, and the Curtis Hotel, as well as Ventfort Hall, did indeed all have Springfield Gas Machines in operation during the 1890s.

The Laurel Lake Powerhouse

Unfortunately, no detailed information regarding the original equipment installed in the powerhouse at Laurel Lake for Erskine Park has been discovered. Undoubtedly, however, it was a more or less conventional power generating facility of that era. That is, it would have utilized a coal-burning steam boiler to operate a reciprocating steam engine that, in turn, would have been belted to a Westinghouse alternator.

Figure 2. Connection for a gas/electric lighting fixture in Ventfort Hall, including an insulating fitting on the upper end of the gas pipe (photo courtesy of Thomas J. Blalock).

It is known that the building itself was made as aesthetically acceptable as possible to the inhabitants of Lenox, considering that its smokestack would have belched some amount of black smoke into the pristine atmosphere of the countryside. Published reports regarding the appearance of the structure say that the exterior was sheathed in marble and that the tall smokestack was embellished with an ornate crenellated top.

The powerhouse was retired from active use in 1915. Both George and Marguerite Westinghouse had died the previous year, and the Erskine Park estate then was purchased by Margaret Vanderbilt. She had married Alfred Gwynne Vanderbilt, a great-grandson of the “Commodore.” Alfred perished in the sinking of the British Cunard Line steamer Lusitania in May 1915. Margaret hated the Erskine Park mansion and had no interest in the operation of the powerhouse. She had the house demolished and replaced with another that, today, is part of a residential condominium development.

Figure 3. Abandoned Laurel Lake powerhouse as it appeared in November 1946 (photo courtesy of the Berkshire Eagle newspaper).

The disused powerhouse building remained standing, derelict, until about 1949 (see Figure 3). When it was demolished, the marble exterior sheathing was salvaged and used by a local stone mason in the construction of a chapel in Stockbridge. That structure remains in use today.

Earlier, in 1894, when it had been decided that the Laurel Lake powerhouse would supply Lenox as well as Erskine Park itself, it was reported in the press that “Mr. Westinghouse will put in new dynamos and boilers.” Then, the following year, it was reported that a new Westinghouse dynamo was on exhibit at the Cotton States Fair in Atlanta, Georgia, and, subsequent to the closing of the fair, this machine would be installed in the Laurel Lake powerhouse. Again, unfortunately, no other details regarding this interesting development have been uncovered. Early in 1897, one of the agenda topics at a Lenox town meeting was “shall the streets be lighted by electric lights.” At the time, either kerosene or oil lamps would have been in use as Lenox never had a municipal gas works. The Curtis Hotel had been a fixture in the center of Lenox since the mid-19th century (today it serves as senior citizen housing). Undoubtedly, it was equipped with a Springfield Gas Machine, and, in 1897, it was reported in the press that “the hotel is lighted by gas and electricity” (see Figure 4). An Independence Day celebration was held at Erskine Park in 1898, and it was reported that “the residence, from parapet to basement, was ablaze with electric lights.” Also in that year, a second underground conduit supply was begun from the Laurel Lake powerhouse to the center of town by the Lenox Electric Light Company (the former Lenox Electric Company). It was said that “this arrangement will give the company two complete

Figure 4. The Curtis Hotel, located in the center of Lenox (photo courtesy of the Berkshire Athenaeum).


It was said that “this arrangement will give the company two complete and separate lines of conduits, and in case of accidents such as happened several times last year, the town can be lighted by either system.” In fact, this new conduit was connected to the original conduit so as to form a loop supply to the center of town. Through the turn of the 20th century, more and more of the “cottages” around the Lenox area were supplied with electric power. By 1902, it was reported that a total of 15 mi (24 km) of underground conduit had been installed.

In 1901, it had been reported that a gas engine was being installed in the Laurel Lake powerhouse “to assist the steam engine in running the dynamos,” and, in the following year, it was reported that additional gas engines and dynamos were being installed. Late in 1902, it was reported in the press that the powerhouse would henceforth use gas, rather than coal, as fuel and that the gas was to be “made on the premises.” It is not known what type of gas was being employed. An enriched form of coal gas commonly known as “producer gas” was created by passing steam through a hot bed of bituminous coal in a retort (called a “producer”). This type of gas was often used in industry, in steel plants, for example, to fire open hearth furnaces for the purpose of refining pig iron from blast furnaces into steel. At Laurel Lake, however, the use of producer gas would have meant that a supply of coal still would be required. Springfield Gas Machines might have been used, or acetylene gas, created by the action of water on calcium carbide, may have been employed. It is also possible that the term “gas engine” may actually have referred to a gasoline engine using the liquid fuel, but gasoline would not have been “made on the premises.”

Meanwhile, at Monument Mills
In 1902, the Great Barrington Electric Light Company purchased a site on the Housatonic River at the north end of town. This site, known as the Russell Water Power, had been used in connection with the former Berkshire Woolen Mills at that location. A combination hydroelectric and steam generating station was constructed there that used Stanley “S.K.C.” alternators (this designation stood for Stanley and his two business partners, John Kelly and Cummings Chesney). This facility would have supplemented the power being obtained from the Alger’s Furnace hydroelectric station at Monument Mills.

At the Alger’s Furnace station, a second Stanley inductor alternator had been installed. This was a 280-kW, 2,400-V, two-phase machine operating at the same frequency as the original machine, that being 66 2/3 Hz (presumably the iron losses in this newer machine were considerably less than in the original). This new alternator was driven by a second 325-hp turbine, identical to the first. These machines were manufactured in Massachusetts by the Holyoke Machine Company and were known as McCormick turbines.

In anticipation of additions to the Monument Mills complex, construction began on a second hydroelectric power station in 1906. The location was actually in the village of Glendale at an existing dam that had been constructed by the former Glendale Woolen Company 1 mi (1.61 km) or so up the Housatonic River from the Alger’s Furnace station. The original equipment in the Glendale powerhouse was similar to that in the Alger’s Furnace station but larger. A 600-hp McCormick turbine drove a 500-kW, 2,400-V, two-phase, 66 2/3-Hz Stanley alternator. This machine, however, was not of Stanley’s unique inductor design. It was of the more conventional revolving field design. By this time, the undesirable feature of the double effective air gap in the inductor alternator had become problematical in larger machines.

A rather interesting electric power situation existed at this time in the Monument Mills complex. Some of the older machinery was still being driven, via shafting, from water turbines located within the complex itself. In addition, however, a total of 14 motors of the induction type had been installed to drive other machinery. These, of course, were powered from both the Alger’s Furnace and the Glendale hydroelectric stations. Two large synchronous motors had also been installed, one 75 hp and the other 180 hp, that were belted to the same mill shafting normally driven by the water turbines. During times of high river flow, these machines were used as alternators to generate additional electric power for other uses. When the river flow was low, however, they were used as motors to assist the water turbines in driving the shafting.

The Monument Mills complex also supplied electric street lighting in the village of Housatonic. This was by means of a 7.5-kW constant-current type of transformer that fed a circuit of series-connected incandescent street lamps.

Constant expansion of the Monument Mills complex required the addition of a steam generating station within the mill complex itself in 1917. This powerhouse was notable for its 175-ft high smokestack. It had been constructed on the site of former repair shops, and it also housed water pumps for fire protection. By 1914, the Alger’s Furnace station had been retired, but this 1917 steam station and the Glendale hydroelectric station continued to produce electric power for the mills until at least 1948.
The Glendale powerhouse actually survives today, and its exterior looks much the same as it did when first constructed, in spite of having been abandoned and derelict for several decades (see Figure 5). In 1977, the Glendale powerhouse was rescued by Stockbridge resident Mary Heather and her brother Joseph Guerrieri,


who was an electrical engineer. The author was fortunate in being able to visit the powerhouse at that time. The only remaining artifact was a large cast iron base for a long-gone generator, which was embellished along its side with “Stanley Electric Manufacturing Company.” By 1988, Guerrieri had installed two induction (not “inductor”) generators with a combined capacity of about 300 kW, the electrical output of which he had arranged to sell to the local electric utility.

Figure 5. Glendale powerhouse as it appears today (photo courtesy of Thomas J. Blalock).

By 1994, the powerhouse had been taken over by an independent power producer from nearby Westfield, Massachusetts, and new generating equipment that increased the generating capacity to 1,050 kW had been installed.

During the 1990s, the Glendale operation was acquired by CHI Energy, an energy producer based in Stamford, Connecticut. The Enel Group acquired CHI Energy in 2000, and today the powerhouse is operated by Enel Green Power North America, Inc., an Enel Group subsidiary based in Andover, Massachusetts. The energy produced by the Glendale powerhouse is sold to local municipal electric utilities.

Electric Power to Stockbridge
The town of Stockbridge, located between Great Barrington and Lenox, did not receive electric power until 1906. A proposal to electrify the town had been made as early as 1891, but nothing ever came of this original plan (see Figure 6). The 1906 electrification was instigated by local engineer and entrepreneur Joseph Franz, who documented this effort via a detailed article in Electrical World magazine in 1909. Franz arranged to obtain electric power for Stockbridge from the Glendale powerhouse that had just been completed. The Stockbridge Lighting Company was formed to distribute this power, which was transmitted via a two-phase, 2,400-V overhead transmission line from the powerhouse about 2 mi (3.22 km) to the west. This arrangement was based on the fact that the power generated at Glendale could be used to operate the machinery at Monument Mills during the day and to light Stockbridge at night.

Figure 6. Main Street in Stockbridge, circa 1900, with the historic Red Lion Inn that has been welcoming guests continuously since the 18th century on the right side of the image (photo courtesy of the Berkshire Athenaeum).

The overhead transmission line terminated at a small concrete building just south of the Stockbridge town center. This structure served as a switch house to connect the incoming power to several single-phase, 2,400-V underground feeders that then ran throughout the town. Only half of this small building served as the switch house. The other half was actually used as a waiting room for riders of the local interurban streetcar operation, the Berkshire Street Railway. The building survives today and is used as an equipment storage facility for an adjacent recreation field (see Figure 7).

In Stockbridge, step-down transformers were usually located in brick vaults constructed in the cellars of houses and other buildings. The 2,400-V underground circuit would loop into the vault to feed the transformer and then loop out again to continue on to other customers. The vaults were equipped with locked iron doors for protection. This same arrangement was used with the underground supply in Lenox, and a few of these brick vaults can still be found in the basements of old buildings in both towns. However, they no longer house transformers (see Figure 8).

Figure 7. Former Stockbridge switch house and street railway station as it appears today (photo courtesy of Thomas J. Blalock).

An example of a large transformer vault still exists in the basement of Naumkeag, a mansion designed by noted architect Stanford White and constructed in 1886 in Stockbridge for Joseph Hodges Choate. Choate was a noted attorney and later served as the U.S. ambassador to Great Britain for six years.

The Stockbridge Lighting Company also supplied street lighting for the town by means of an 8.5-kW, constant-current transformer. The street lights were 32-candlepower tungsten incandescent lamps operating in series at a current of 5.5 A.


The electrical distribution system in Stockbridge was installed by the Rogers Electric Company of Lenox, a firm that was responsible for virtually all of the early electric work in Lenox as well. The installation was completed in 1907 at a total cost of US$23,000.

Later Developments
In 1912, the Stockbridge Lighting Company, still using power from the Glendale powerhouse, installed step-up transformers to energize an 11,000-V transmission line to Lenox. This additional source of power allowed Mrs. Vanderbilt to abandon the Erskine Park powerhouse on Laurel Lake three years later.

In 1920, the Stockbridge Lighting Company, the Great Barrington Electric Light Company, and the Lenox Electric Light and Power Company (as it was called by then) combined to become the Southern Berkshire Power and Electric Company. This new entity constructed an early “automatic” hydroelectric station in 1922 on the Williamsville River in the vicinity of the town of West Stockbridge. This generating plant had a capacity of 500 kW. Then, in 1927, a small 30-kW hydroelectric generator was installed in West Stockbridge itself. This was an outdoor unit, and, recently, a modern version of this generator has gone into operation at that same location.

Figure 8. Transformer vault in the basement of a private home in Lenox as it appears today (photo courtesy of Thomas J. Blalock).

In 1922, a 115-kV transmission line was constructed from a large hydroelectric station on the Connecticut River at Turner’s Falls, Massachusetts, to supply electric power to the city of Pittsfield. In 1932, the Southern Berkshire Power and Electric Company built a transmission line from the town of Housatonic to the town of Lee, where it connected with the line from Turner’s Falls. This would have supplemented the power from the still functioning Glendale powerhouse. By 1946, the Turner’s Falls transmission line had come under the jurisdiction of the Western Massachusetts Electric Company (which still exists today as a part of Northeast Utilities Corporation). A company brochure of that year still showed the transmission line from Lee to Housatonic as a “connection to Southern Berkshire Power and Electric Company.”

In 1961, the Southern Berkshire Power and Electric Company became part of the Massachusetts Electric Company, now a unit of National Grid, which still supplies electric power to the southern Berkshire County communities today.

Acknowledgments
The author is grateful for pertinent historical information supplied by local Berkshire County historians Bernard Drew, William Edwards, and James Parrish. The author is also indebted to Ann-Marie Harris in the local history room of the Berkshire Athenaeum in Pittsfield for her valuable research assistance and to Tjasa Sprague of Ventfort Hall for allowing him to explore the mansion and for providing important research material that she had gathered.

For Further Reading
C. B. Gilder and R. S. Jackson, Jr., Houses of the Berkshires (1870–1930). New York: Acanthus, 2006.
L. S. Parrish, A History of Searles Castle in Great Barrington, Massachusetts. Great Barrington, MA: Attic Revivals, 1985.
D. Drew, A History of Monument Mills in Housatonic, Massachusetts. Great Barrington, MA: Attic Revivals, 1984.
B. A. Drew and G. Chapman, William Stanley Lighted a Town and Powered an Industry. Pittsfield, MA: Berkshire County Historical Society, 1985.
F. L. Pope, “Notes on the reconstruction of a small central station plant,” AIEE Trans., vol. 12, pp. 454–469, June 1895.
J. Franz, “Housatonic River hydroelectric plants,” Elec. World, vol. 55, no. 22, pp. 1441–1442, June 1910.
J. Franz, “Underground cable system in a village of two thousand inhabitants,” Elec. World, vol. 54, no. 4, p. 191, July 1909.
F. B. Crocker, Electric Lighting, 6th ed. New York: Van Nostrand, 1906.



Calendar
PES Meetings
The IEEE Power & Energy Society (PES) website features a meetings section, which includes calls for papers and additional information about each of the PES-sponsored meetings.

September 2012 IEEE PES T&D Latin America (T&D-LA 2012), 3–5 September, Montevideo, Uruguay, contact Juan Carlos Miguez, j.miguez@ieee.org, http://ieee-tdla.org/

October 2012 IEEE PES Innovative Smart Grid Technologies Europe (ISGT Europe 2012), 14–17 October, Berlin, Germany, contact Kai Strunz, registration@ieee-isgt-2012.eu, http://www.ieee-isgt-2012.eu
IEEE PES International Conference on Power System Technology (POWERCON 2012), 30 October–2 November, Auckland, New Zealand, contact Nirmal Nair, ncnair@ieee.org, http://www.powercon2012.com

December 2012 IEEE International Conference on Power Electronics, Drives, and Energy Systems (PEDES 2012), 16–19 December, Bengaluru, India, contact Tomy Sebastian, Tomy.Sebastian@nexteer.com, http://www.pedes2012.in

January 2013 IEEE PES 2013 Joint Technical Committee Meeting (JTCM 2013), 13–17 January, Memphis, Tennessee, USA, contact Charles Henville, c.henville@ieee.org

February 2013 IEEE PES Innovative Smart Grid Technologies Conference (ISGT 2013), 24–27 February, Washington, DC, USA, contact Saifur Rahman, s.rahman@ieee.org, http://sites.ieee.org/isgt/

April 2013 IEEE PES Innovative Smart Grid Technologies Latin America (ISGT LA 2013), 15–17 April, Sao Paulo, Brazil, contact Prof. Nelson Kagan, nelsonk@pea.usp.br, www.ieee.org.br/isgtla2013

May 2013 IEEE International Electric Machines & Drives Conference (IEMDC 2013), 12–15 May, Chicago (Rosemont), Illinois, USA, contact Joyce Mast, jmast@illinois.edu, www.iemdc13.org

June 2013 IEEE PowerTech Grenoble (PowerTech 2013), 16–20 June, Grenoble, France, contact Bertrand Raison, powertech2013@grenoble-inp.fr, http://powertech2013.grenoble-inp.fr

July 2013 IEEE PES General Meeting (GM 2013), 21–25 July, Vancouver, British Columbia, Canada, contact Paula Traynor, ptraynor@epri.com, Bill Rosehart, rosehart@ucalgary.ca, http://www.ieee-pes.org

October 2013 IEEE PES Innovative Smart Grid Technologies Europe (ISGT Europe), 6–9 October, Copenhagen, Denmark, contact Rodrigo Garcia-Valle, rgv@elektro.dtu.dk

April 2014 IEEE PES Transmission & Distribution Conference & Exposition (T&D 2014), 14–17 April, Chicago, Illinois, USA, contact Tommy Mayne, t.w.mayne@ieee.org, www.ieeet-d.org



In My View By Stipe Fustar

Data Analytics
The Architectural Perspective
This issue of IEEE Power & Energy Magazine discusses a computational capability to extract a cause–effect understanding of power system events. It demonstrates associated data analytics for control center applications, enhanced security assessment and management, tuned state estimation, automated fault analysis, and renewable resource integration.

Because most utilities are forced to operate close to their security limits, they are constantly trying to better integrate data and processes and to make better decisions to reduce operating and maintenance costs as well as to improve overall reliability. Consequently, the volume of captured measurements across power grid management systems has increased dramatically. The concept of automating knowledge and information represents a paradigm shift in current thinking and is designed not only to improve the effectiveness of operational short-term planning and operators’ situational awareness but also to facilitate decision-making steps to ensure a reliable power supply to customers. This includes capturing measurements, converting them to data, converting the data to information, and then distilling the information into knowledge that can be used to make faster and better decisions. To support this process, several industry standards and the interoperability infrastructure must be defined to streamline actual implementations.

While the articles in this issue point to advanced knowledge extraction analytics, they do not address the overall implementation concept in which all the applications would interact with each other through a common standards-based framework, and more attention to that issue is needed. Besides the data analytics discussed in the articles, the applicability of the automating knowledge and information concept needs to be addressed from the architectural perspective. The interoperability of new solutions with legacy systems, including integration, technology choices, and the role of industry standards, is crucial to the wider use of new data analytics.
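As a rough illustration of the measurements-to-data-to-information-to-knowledge chain described above, the following Python sketch structures the four steps as explicit transformations; the device names, quantities, and thresholds are hypothetical and are not taken from the column or the articles:

from dataclasses import dataclass

@dataclass
class Measurement:                      # a raw sample captured by a field device
    device_id: str
    quantity: str                       # e.g., "bus_voltage_kV"
    value: float
    timestamp: float

def to_data(samples):
    # Convert raw measurements into validated data records (here: simply drop NaNs).
    return [m for m in samples if m.value == m.value]

def to_information(records, nominal_kv=230.0, band=0.05):
    # Derive information: flag records outside an assumed +/-5% operating band.
    return [r for r in records
            if r.quantity == "bus_voltage_kV" and abs(r.value - nominal_kv) > band * nominal_kv]

def to_knowledge(violations):
    # Distill knowledge an operator can act on: one summary per device.
    summary = {}
    for v in violations:
        summary.setdefault(v.device_id, []).append(v.value)
    return {dev: f"{len(vals)} voltage excursion(s), worst {max(vals, key=lambda x: abs(x - 230.0)):.1f} kV"
            for dev, vals in summary.items()}

samples = [Measurement("sub7_bus1", "bus_voltage_kV", 243.9, 0.0),
           Measurement("sub7_bus1", "bus_voltage_kV", 229.8, 1.0)]
print(to_knowledge(to_information(to_data(samples))))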

Architecture and Technology Perspectives
The National Institute of Standards and Technology (NIST) has been tasked to coordinate the development of architectural frameworks that include protocols and data model standards for information management to achieve interoperability. The coordination tasks are carried out through an organization called the Smart Grid Interoperability Panel (SGIP). The organization was created in 2009 and is now transitioning to SGIP 2.0 as a public-private partnership. Per SGIP, fundamental goals of the architecture include incorporating evolving technologies to work with legacy applications and devices in a standardized way. SGIP has also defined a conceptual model to support planning, requirements development, documentation, and organization of the diverse, expanding collection of interconnected networks and equipment that will compose the interoperable systems for power grid management.

Consistent with the SGIP framework, and to achieve high-performance levels of service for utility operations in real-time and short-term planning while leveraging automating knowledge and information concepts on a commodity platform, extreme transaction processing (XTP) is envisioned as an important technology choice. The key innovations of XTP include distributed, replicated memory spaces and persisted data storages, the use of event-driven architecture for intra- and intersystem communications, and the use of microkernel-style extensible modularity of platform technology as well as dynamic server networks (dynamic grid). These applications often share the same basic requirements and challenges: extreme, scalable performance with continuous availability for mission-critical systems.

There are several key areas where the benefits of the automating knowledge and information concept are obvious.

Improved quality of decisions: Computer and communication devices in substations can extract a huge amount of data, while operators can process only some of it and thus cannot consider all of the available data and information for making decisions. The article titled “The Situation Room” is an example where an advanced analytics and visualization framework enhances operators’ situational awareness, including an improved ability to monitor operating limits, a better understanding of complex events, and enhanced post-mortem analysis. As illustrated in “Measures of Value,” the results of the data analytics processing can tell operators not only the basic information about the fault type and location but also whether the fault clearing sequences were executed correctly. The software solution implements the experts’ knowledge through rules formulated by the experts.

Faster response: In the power grid management domain, sometimes human response is not fast enough. Therefore some decisions must be fully automated without human intervention. The concept of synchrophasor-assisted state estimation (SPASE), which allows improvements based on statistical properties of the measurements while taking into account model uncertainties, is seen as a prerequisite for more robust decision making.

Data overload prevention: The idea is also to reduce the amount of information brought to the operators’ attention.
The automating knowledge and information paradigm should be used in conjunction with visualization tools to implement management-by-exception strategies, where operators are notified or alarmed less often and only in those situations where their involvement is required (a minimal sketch of such exception-based filtering follows the list of benefits below). The “Measures of Value” article describes a solution that processes a huge volume of information; however, the operator is presented with just what is necessary, namely cause-effect-action information that is obtained within seconds after the event has occurred.


Improved reliability: This will enable operators to maintain a high reliability level through improved situational awareness and the ability to react promptly in complex situations that require, for example, corrective actions. As discussed in the “Operating in the Fog” article, a new way of handling uncertainties and security assessment tools is envisioned not only for online decisions but also as offline tools helping define security rules, validate dynamic models, and outline defense plans and restoration strategies.

Reduced cost: The integration of renewable resources such as wind power presents opportunities to reduce overall generation cost (new data analytics for wind power forecasts that may be utilized for predictive control are presented in this issue). In addition, improved asset management reduces outage time and unsupplied-energy indices.
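As a toy illustration of the management-by-exception strategy mentioned under data overload prevention, the following Python sketch forwards only actionable cause-effect-action items to the operator; the event types, fields, and suggested actions are hypothetical and are not drawn from the articles:

import time

ROUTINE = {"breaker_status_ok", "measurement_heartbeat"}           # assumed routine event types
ACTIONABLE = {"fault_cleared_incorrectly", "line_overload", "voltage_violation"}

def triage(event_stream):
    # Yield only events the operator must see, enriched with a suggested action.
    for ev in event_stream:
        if ev["type"] in ROUTINE:
            continue                                   # logged, but not alarmed
        if ev["type"] in ACTIONABLE:
            yield {"when": ev["time"],
                   "cause": ev["type"],
                   "effect": ev.get("effect", "unknown"),
                   "action": ev.get("suggested_action", "investigate")}

events = [
    {"time": time.time(), "type": "measurement_heartbeat"},
    {"time": time.time(), "type": "line_overload",
     "effect": "line above emergency rating",
     "suggested_action": "re-dispatch or curtail wind feed-in"},
]
for alarm in triage(events):
    print(alarm)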

Interoperability Perspective
In support of the automating knowledge and information concept, almost every new system or application, regardless of its physical location, will be required to interact with other applications or systems. Therefore, interoperability readiness is extremely important. Interoperability in this context, related to distributed systems, is defined by the following common factors:
an exchange of meaningful, actionable information between two or more systems across departmental and organizational boundaries
a shared meaning (semantics) of the exchanged information
an agreed expectation for the response to the information exchange
a requisite quality of service in information exchange: reliability, fidelity, security
an operationalized common semantic model at run time to achieve near-plug-and-play
key interoperability decisions made at the semantic layer.

Role of Standards and Common Semantic Understanding
To achieve the required level of interoperability readiness, a more rigid and disciplined approach to standards development and adoption is critical. Interoperability readiness is a necessary prerequisite to:
enhance the future grid’s reliability, interoperability, and extreme event protection for an increasingly complex system operation
increase transmission transfer capabilities and power flow control
use efficient, cost-effective, environmentally sound energy supply and demand
maximize asset use.

Per SGIP, the key step in defining industry standards is reaching an adequate level of semantic understanding for all data and information exchanged between various components. Therefore, to eliminate semantic ambiguities and set the foundation for defining industry standards at a syntactic level, a common semantic model should be developed and standardized as well. A common semantic understanding of raw and processed data as well as cause-effect-action knowledge information is seen as the key enabler of interoperability. A common semantic model that leverages existing industry standards as reference models [the International Electrotechnical Commission Common Information Model (IEC CIM) is the key reference model] is possible to operationalize at run time. For example, a common semantic model can be used as a vehicle to harmonize IEEE C37.118 with IEC 61850 and precision time synchronization.

To summarize, a common semantic model, as a common vocabulary and model, can be leveraged in the following ways:
to provide a basis for the design of endpoints such as interfaces and staging areas between functions, systems, and vendors (all applications discussed in this issue must provide endpoints where each data element is clearly described)
to standardize the design of data exchanges and convert data from a provider to a consumer using the semantic model as a logical intermediary (each element exchanged must have the same meaning to all integrated applications)
to serve as a logical model for all integration patterns, for example, service design (e.g., Web Service Description Language), message payload design, and database design (Data Definition Language, or DDL); precise endpoint syntax can be forward engineered from the semantic model
to provide a platform-independent logical model for the operational data store, data warehouse, data marts, staging area, and other data stores (a common semantic model that covers all data exchanges between the components discussed here can be used to design data stores as well, e.g., to generate DDLs)
to operationalize a semantic model at run time, allowing key interoperability decisions to be made at the semantic layer
to provide a basis for capturing experts’ knowledge and the development of related business rules
to provide a basis for effective network model management.
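The following hedged Python sketch shows one way a common semantic model could act as a single source of truth: a CIM-like class definition (the class and field names are illustrative, not actual IEC CIM names) is forward engineered into both a message payload schema and a DDL statement:

SEMANTIC_MODEL = {
    "AnalogMeasurement": {                 # CIM-like class name (illustrative only)
        "mrid": "string",                  # master resource identifier
        "measurementType": "string",
        "value": "float",
        "timeStamp": "datetime",
    }
}

SQL_TYPES = {"string": "VARCHAR(64)", "float": "DOUBLE PRECISION", "datetime": "TIMESTAMP"}

def to_json_schema(cls):
    # Forward engineer a message payload schema from the semantic definition.
    fields = SEMANTIC_MODEL[cls]
    return {"title": cls, "type": "object",
            "properties": {f: {"type": t} for f, t in fields.items()},
            "required": list(fields)}

def to_ddl(cls):
    # Forward engineer a relational table definition from the same semantic definition.
    cols = ",\n  ".join(f"{f} {SQL_TYPES[t]}" for f, t in SEMANTIC_MODEL[cls].items())
    return f"CREATE TABLE {cls} (\n  {cols}\n);"

print(to_json_schema("AnalogMeasurement"))
print(to_ddl("AnalogMeasurement"))

Because both artifacts are generated from one definition, every integrated application sees the same meaning for each exchanged element, which is the interoperability property the column argues for.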

Conclusion
The use of a cause-effect-action understanding of power system events for short-term operation planning and the real-time operation of the power grid is identified as a promising area where significant benefits can be achieved to enhance future power grid management solutions such as EMS. The automating knowledge and information concept can be applied wherever a stream of real-time event data is available from field devices, digital fault recorders, phasor measurement units, applications, the Web, and other sources. All potential solutions presented in this issue are in use either in real life or in lab environments. As the volume of event data increases, the automating knowledge and information concept becomes more important. To implement these concepts sooner rather than later, a common semantic model should be used to describe data and cause-effect-action information unambiguously.



Advertisers Index
The Advertisers Index contained in this issue is compiled as a service to our readers and advertisers: the publisher is not liable for errors or omissions although every effort is made to ensure its accuracy. Be sure to let our advertisers know you found them through IEEE Power & Energy Magazine. ASPEN, Inc. BCP Neplan Busarello+Cott+Partner, Inc. CRC Press CYME DIgSILENT GmbH Dow Electrical & Telecommunications Electrocon International ESA International LLC ETAP GE Digital Energy General Electric International, Inc. – Energy Consulting IEEE Energy Tech Manitoba HVDC Research Centre Nexant, Inc. Phoenix Contact Power World Corporation Powertech Labs Inc. RTDS Technologies, Inc. RuggedCom, Inc. Schneider Electric Schweitzer Engineering Laboratories, Inc. SES Ltd. Siemens Energy, Inc. Siemens Energy, Inc. Trachte USA V&R Energy Systems Research





One Step Ahead Short-Term Wind Power Forecasting and Intelligent Predictive Control Based on Data Analytics By Ganesh Kumar Venayagamoorthy, Kurt Rohrig, and István Erlich

The intelligent integration of wind power into the existing electricity supply system will be an important factor in the future energy supply in many countries. Wind power generation has characteristics that differ from those of conventional power generation. It is weather dependent in that it relies on wind availability. With the increasing amount of intermittent wind power generation, power systems encounter more and more short-term, unpredicted power variations. In the power system, supply and demand must be equal at all times. Thus, as levels of wind penetration into the electricity system increase, new methods of balancing supply and demand are necessary.

Accurate wind power forecasting methods play an important role in addressing the challenge of balancing supply and demand. Forecasting is required to maximize the integration of a high level of wind power penetration into an electricity system because it couples weather-dependent generation with the planned and scheduled generation from conventional power plants and the forecast electricity demand. The latter is predictable with sufficient accuracy. Even with state-of-the-art wind forecasting methods, the hour-ahead prediction errors for a single wind plant are still around 10–15% with respect to actual production. Wind power prediction determines the need for balancing energy and, hence, the cost of wind power integration. In countries such as Denmark, Germany, Spain, and the United States, wind power prediction is a critical component of grid and system control.

The short-term energy balancing of existing electricity supply systems depends on automatic generation control (AGC), which cannot regulate transmission line flows. Most regional voltage controllers (RVCs) are capable of regulating only the primary bus voltage and do not result in any voltage enhancement at other buses. With a high level of wind power penetration, short-term transmission line overloads and voltage violations may occur because of the limited adaptation capabilities of the AGCs and RVCs. A high degree of wind power integration without intelligent control may result in power system stability issues and penalties that cause wind farm owners to lose revenue. Real-time operation time frames require short-term wind power prediction on the order of seconds, minutes, and a few hours, as well as the integration of that prediction into the control room environment. Short-term wind power forecasting based on the current status of wind power plants (WPPs), and the application of such forecasting in the development of intelligent predictive optimal control of reactive power and wind power fluctuations for real-time control center operations, are discussed in this article.

Short-Term Wind Power Prediction
Short- to medium-term wind power forecasting using numerical weather forecasts and computational intelligence methods has experienced enormous progress in recent years and represents an integral part of today’s energy supply. For asserting predictive control of wind farms, wind farm groups, and the associated transformer, the short-term prediction of active and reactive wind turbine power outputs is essential. Contrary to other fields of application of the prediction models for the energy market, wind farm control requires a very short forecast horizon, from a few seconds up to 15 min. The approaches used with the existing model, therefore, do not apply here. Weather pattern information will play no role in this task. Rather, it is important to estimate the electrical parameters for the near future based on recordings and analyses of the current situation of wind farms. Compiling this estimation using analytical approaches is very difficult and imposes a high computational cost; for these reasons, the use of computational intelligence methods is essential. In several studies on wind power prediction, the ability of neural networks to carry out short-term predictions from spatiotemporal information is well known.

In contrast to the previously used methods for very short-range forecasting, the proposed method uses no related numerical weather prediction (NWP) information. The active and reactive power are predicted solely based on power data measured from representative wind farms or wind turbines in a wind farm. Due to the spatial distribution of these wind farms, changes in grid areas are identified, and this information helps to predict the supply in the near future. The suitability of this spatial method for predicting wind power over very short forecast horizons is being investigated in detail. In Figure 1, the predicted outputs are the active and reactive power of a wind farm at the next time interval.

Figure 1. Neural network inputs are the active and reactive power of the individual N wind turbines in a wind farm at the current time, t, and the outputs are the predicted active power and reactive power of the wind farm at time t + Δt.
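A minimal sketch of the spatial prediction idea in Figure 1, using synthetic data and scikit-learn's MLPRegressor as a stand-in for the authors' network; the farm size, hidden-layer width, and single-step horizon are assumptions, not values from the article:

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
N, T = 10, 2000                                   # assumed: 10 turbines, 2,000 historical samples
p = np.zeros((T, N))                              # per-turbine active power (MW), synthetic
p[0] = rng.uniform(0.5, 2.5, N)
for t in range(1, T):                             # simple random walk to mimic temporal correlation
    p[t] = np.clip(p[t - 1] + rng.normal(0.0, 0.15, N), 0.0, 3.0)
q = 0.2 * p + rng.normal(0.0, 0.05, (T, N))       # per-turbine reactive power (Mvar), synthetic

X = np.hstack([p[:-1], q[:-1]])                   # inputs: P_i(t), Q_i(t) for all turbines
Y = np.column_stack([p[1:].sum(axis=1),           # targets: farm totals one step ahead
                     q[1:].sum(axis=1)])

model = MLPRegressor(hidden_layer_sizes=(20,), activation="tanh", max_iter=3000, random_state=0)
model.fit(X[:1500], Y[:1500])

pred = model.predict(X[1500:])
persistence = np.column_stack([p[1500:-1].sum(axis=1), q[1500:-1].sum(axis=1)])

installed_mw = N * 3.0                            # assumed 3-MW turbines
rmse_nn = np.sqrt(np.mean((pred[:, 0] - Y[1500:, 0]) ** 2)) / installed_mw
rmse_persist = np.sqrt(np.mean((persistence[:, 0] - Y[1500:, 0]) ** 2)) / installed_mw
print(f"active-power RMSE as % of installed capacity: model {100 * rmse_nn:.1f}, persistence {100 * rmse_persist:.1f}")

On real data the inputs would be the measured, normalized per-turbine or per-farm signals, and accuracy would be reported, as in Table 1 below, as an RMSE relative to installed capacity alongside a persistence benchmark.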


The short-term active wind power forecast for one of the German network regions (TenneT), based on the measured power data of selected wind farms, is shown in Figure 2. The input data for the neural network consist of the normalized output signals representative of individual turbines or wind farms.

Figure 2. Measured and predicted curves of the active power of wind farms in a grid area of TenneT (measured output, one-hour prediction, and persistence, in MW, over 96 hours).

The figure shows the curves of the wind energy fed into a network region of TenneT compared with the one-hour forecast and the one-hour persistence. The neural network model using the spatial method is clearly predicting large fluctuations significantly better than the approximated persistence method. The root mean square error (RMSE) for the one-year period was 2.5% of the installed plant capacity, and the correlation coefficient was 0.989. Table 1 compares the forecast accuracy of the spatial method for prediction horizons from one to three hours. For the one-hour forecast, the spatial method provides a significantly better result than NWP-based models. In contrast, larger prediction horizons suffer from reduced quality compared with the NWP-based models. With these benchmarks, the neural network method based on spatial power data represents a very good solution for very short-term predictions for grid regions and wind farms.

Table 1. Accuracy (RMSE and correlation) of the spatial method.
Prediction horizon (hours) | RMSE | Correlation
1 | 2.5% | 0.989
2 | 4.2% | 0.970
3 | 5.7% | 0.953

Predictive Wind Farm Reactive Power Control
With the increasing integration of WPPs, grid utilities require extended reactive power supply capabilities, not only during voltage dips but also during steady-state operation. According to the grid codes, the reactive power requirements are defined alternatively in terms of the power factor, the amount of reactive power supplied, or the voltage at the point of interconnection. To achieve the reactive power requirement optimally, WPP operators may consider performing reactive power optimization within their own facilities. The stochastic nature of the wind speed, however, poses a serious problem to the reactive power management of WPPs. To consider uncertainties caused by the wind, the optimization must be performed in a predictive manner for a certain future time horizon by taking into account the short-term wind forecast. This idea is depicted in Figure 3. In this approach, optimization of power flows is performed for a given scenario, which includes a set of future operating points. All of these operating points are optimized simultaneously using the objective function, which can be formulated several different ways. The simplest technique is to minimize power losses within the wind farm area. Taking into account the stepwise movement of on-load tap changers (OLTCs), the power losses and costs of OLTC movements can be considered monetarily.

The quality of the optimal wind farm operation depends on the accuracy of the wind power forecast. In the example presented herein, the forecast results shown in Figure 4 have been used.

Figure 3. Predictive wind farm reactive power optimization. In the diagram, an optimization algorithm takes the wind power forecast for the next n time steps ahead and sets the OLTC tap position and the var reference of the total wind farm at the point of common coupling (PCC).

Figure 4. Results of the wind power forecast using a neural network (wind active power, in MW, over 24 hours).

The optimization is carried out over the predicted time period for n discrete time steps simultaneously. Then, the optimal power flow program suggests the optimal OLTC tap settings along with the optimal reactive power references for the entire wind farm for the next n time steps. By conducting this optimization every five minutes, it can be updated if new, improved forecast results become available. The proposed predictive control optimization was tested with a real wind farm model, as depicted in Figure 3. The results are shown in Figures 5 and 6. For simplicity, in this case study, all wind turbines receive the same optimized reactive power reference set point. Different optimization methods can be used for the described problem, but the optimization task in general is nonlinear and nonconvex.

Figure 5. Optimization results: OLTC stepping (tap position over 24 hours).
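To make the receding-horizon optimization described above concrete, here is a hedged Python sketch: a toy loss surrogate and a simple adaptive random search stand in for the optimal power flow program and the heuristic optimizer discussed next, and every numeric value is an assumption rather than a figure from the article:

import numpy as np

rng = np.random.default_rng(1)
forecast_mw = np.array([42.0, 55.0, 61.0, 48.0])       # assumed wind forecast for the next n steps
TAPS = list(range(-3, 4))                              # discrete OLTC positions
TAP_COST = 0.02                                        # assumed monetary weight per tap movement

def surrogate_losses(p_mw, q_mvar, tap):
    # Very rough stand-in for collector-system losses as a function of flow and tap position.
    v = 1.0 + 0.01 * tap                               # tap raises or lowers the collector voltage
    return (p_mw ** 2 + q_mvar ** 2) / (100.0 * v ** 2)

def objective(tap, q_ref, prev_tap):
    loss = sum(surrogate_losses(p, q_ref, tap) for p in forecast_mw)
    return loss + TAP_COST * abs(tap - prev_tap)       # losses plus cost of OLTC movement

def optimize(prev_tap, iters=200):
    # Adaptive random search: sample the var reference around the best solution found so far.
    best = None
    mean, std = 0.0, 5.0
    for _ in range(iters):
        tap = int(rng.choice(TAPS))
        q_ref = float(rng.normal(mean, std))
        f = objective(tap, q_ref, prev_tap)
        if best is None or f < best[0]:
            best = (f, tap, q_ref)
            mean, std = q_ref, max(0.5, 0.9 * std)
    return best

f, tap, q_ref = optimize(prev_tap=0)
print(f"objective {f:.3f}, tap position {tap}, farm var reference {q_ref:.1f} Mvar")

In the article's setup the decision variables would include the tap settings and var references for all n future operating points, re-optimized every few minutes as new forecasts arrive.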


The authors therefore used the heuristic optimization algorithm called mean-variance optimization (MVO), also referred to later as mean-variance mapping optimization (MVMO), which demonstrates excellent convergence properties.

Large wind farms connected to high-voltage transmission grids must either deliver a certain amount of reactive power or control the voltage at the point of interconnection. Often, the reactive power demand is derived from the voltage according to a given characteristic. Alternative methods may exist, but the basic task always remains the same and can be described by the reactive power demand that the wind farm has to supply. To adapt the reactive power generation, usually a wind farm controller is implemented. The output of this controller is the reactive power reference to individual wind turbines or, alternatively, the local voltage reference if a voltage controller is implemented at the wind turbine level.

The question that arises is how the suggested wind farm optimization can be incorporated into the common wind farm control loops. Figure 7 illustrates the approach used. The optimization directly controls the OLTC positions and the shunt reactor connected to the bus bar to compensate for the capacitive charging power of the cable. The shunt reactor represents a discrete optimization variable, as it can only be switched on or off. The reactive power reference of the wind farm is usually distributed to the operating wind turbines equally, meaning that the output of the proportional-integral controller, ΔQtotal, is divided by the number of wind turbines.

Figure 6. Optimization results: wind farm var reference (sum of all wind turbine var reference set points).

This value may now be modified by distribution factors calculated based on the optimization results. The distribution factors are usually close to unity. Deviating from 1.0 will result in different var references being remotely communicated to the wind turbines. The distribution factors are calculated in such a way that even if they are not uniform, the total required power ΔQtotal will be supplied.
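A short Python sketch of the distribution-factor idea (all numbers assumed): per-turbine factors near unity scale the equal share of ΔQtotal, and a final renormalization guarantees that the requested total is still delivered:

import numpy as np

dq_total = 12.0                                        # Mvar requested by the PI controller (assumed)
factors = np.array([1.05, 0.95, 1.10, 0.90, 1.00])     # optimization-derived factors, close to unity

refs = (dq_total / len(factors)) * factors             # modified equal share per turbine
refs *= dq_total / refs.sum()                          # enforce sum(refs) == dq_total exactly
print(refs, refs.sum())                                # with unit factors this reduces to an equal split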

Figure 7. Integration of optimization in wind farm control for reactive power control and power loss reduction.

Figure 8. Wind farm losses over 24 hours for three different control scenarios.

The suggested control and optimization methods have been tested by simulating their behavior over 24 hours. The optimization is carried out every 15 min, resulting in modified distribution factors. For the simulation, it is assumed that the wind is fluctuating and that the reactive power demand is changed by the operator in a stepwise manner in the range of maximum capacitive to maximum inductive values. Wind farm losses are shown for three different cases in Figure 8. Cases 1 and 2 represent operation with optimization. In Case 1, the var references of the wind turbines are different, whereas in Case 2, all of the wind turbines have the same (but optimized) var references. Case 3 represents the state of the art as implemented in most wind farms: without any optimization but with a wind farm controller in operation and classical voltage controllers applied to the OLTC.

Clearly, the wind farm losses can be reduced considerably with optimization. On the right-hand side of the plots in Figure 8, Case 3 shows slightly smaller losses. In this case, however, the voltage limitations in the grid are violated (not shown here). Cases 1 and 2 show similar results. This is due to the fact that the wind turbines in this particular wind farm are close to each other (500–600 m), so different supplied var references will not result in considerable differences in the loss. Therefore, in this case, a uniform var generation distribution is acceptable. Optimization is required, however, for optimal control of the OLTC and the shunt reactor.

Predictive Optimal Control of Wind Power Fluctuations
The dynamic and intermittent nature of wind power causes fluctuations in transmission line flows that may result in power system instability. Power system instability can lead to cascaded outages and, eventually, a blackout. Integrating battery energy storage systems (BESSs) reduces the uncertainty inherent in wind power generation and increases grid reliability and security. In other words, it minimizes the possibility of a blackout. Wind power varies continuously, however, and in order to effectively and continuously utilize limited energy storage to mitigate the power fluctuations, it is necessary to carry out a real-time optimal control of the state of charge (SOC) of the battery energy storage system with variations in wind speed over a moving time window. Based on the short-term predictions of wind power over any given time window, the optimal charge and discharge power commands for the BESS are determined. In other words, without optimal control, the BESSs will lose their function as shock absorbers once their SOCs charge to their maximum limit or discharge to their minimum limit.

Adaptive critic design (ACD) is a powerful computational approach that can determine optimal control laws for a dynamic system in a noisy, nonlinear, and uncertain environment, such as the power system. Compared with classical control and dynamic programming–based approaches, ACD is a computationally inexpensive method for solving infinite-horizon optimal control problems. With ACDs, no prior information is needed about the continuously changing system to be controlled, and optimal control laws can be determined based on real-time measurements. The ACD consists of two subsystems, an actor and a critic. The actor receives the states of the system (wind speed, power flows, and so on) and dispenses the control/decision signals (BESS charge and discharge commands). The critic learns the desired performance index for some function associated with that index and evaluates the overall performance of the system, like a supervisor.

The power system in Figure 9 is used to illustrate the need for intelligent optimal control of a BESS to provide maximum mitigation of transmission line power flow fluctuations with wind farms. Figure 9 shows a modified 12-bus, multimachine power system with three generators (G2, G3, and G4), an infinite bus (G1), and three interconnected areas. Generator G4 is a wind farm. The BESS is connected to bus 13 in area 2 of the system. The BESS charges and discharges energy in order to reduce power fluctuations in the two transmission lines (lines 6-4 and 1-6) connected to the wind farm bus. The task of the BESS is to maintain steady-state power flows in lines 6-4 and 1-6 as much as possible with wind power variations.

Figure 9. A modified 12-bus, three-area, multimachine power system with a wind farm, BESS, and wind power balancing controller. The wind power balancing controller uses the predicted power output of the wind farm to command charging and discharging of the BESS.

In order to implement this objective, a dynamic optimal SOC controller with the ability to forecast wind power variations was developed. The actor (see Figure 10) is an MVO algorithm, which generates charge and discharge power commands (P*comm(t)) based on the system states and feedback from the critic neural network regarding the actor’s performance. The system states are measurements from the power system, which consist of the following four elements: the current SOC of the BESS (SOC(t)), the varying wind power (Pwind(t)), and the active power flows through the transmission lines 1-6 (P1-6(t)) and 6-4 (P6-4(t)) connected to the wind farm.

Figure 10. A dynamic optimal BESS charge-discharge power command (P*comm(t)) controller.

The critic network is a neural network whose output is an approximation of the cost-to-go function of Bellman’s equation of dynamic programming. The utility function in the approximation of the cost-to-go function is composed of the sum of three terms with different weightings. The first two terms are the transmission line active power fluctuations in lines 1-6 and 6-4. The third term represents the anticipated deviation of the BESS’s SOC from its maximum and minimum SOC limits, which is estimated based on the predicted wind power output over the next several seconds. If the SOC of the BESS falls below the predefined minimum, the BESS will not be able to compensate for any deficit in wind power. Similarly, if the SOC exceeds the predefined maximum, it will not be able to absorb any excess wind power. Therefore, it is necessary to maintain the SOC of the BESS within its chosen dynamic range at all times. The actor based on the MVO algorithm determines the optimal charge or discharge command P*comm(t).
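To make the three-term utility described above concrete, here is a hedged Python sketch: the weights, battery rating, five-second step, and the way the residual wind deviation is split between the two lines are all assumptions, and a plain grid search over the command stands in for the MVO-based actor and the critic network:

import numpy as np

DT_H = 5.0 / 3600.0                       # 5-s control step in hours (assumed)
CAP_MWH = 20.0                            # assumed BESS energy rating
SOC_MIN, SOC_MAX = 0.2, 0.8               # assumed allowed SOC range
W1, W2, W3 = 1.0, 1.0, 50.0               # assumed weightings of the three utility terms

def utility(p_cmd, soc, wind_forecast, k16=0.6, k64=0.4):
    # Lower is better; p_cmd > 0 charges the BESS (absorbs power).
    # k16/k64 crudely split the residual wind deviation between lines 1-6 and 6-4.
    dev = (wind_forecast[0] - wind_forecast.mean()) - p_cmd     # fluctuation left on the grid
    term1 = W1 * (k16 * dev) ** 2                               # line 1-6 flow fluctuation
    term2 = W2 * (k64 * dev) ** 2                               # line 6-4 flow fluctuation
    soc_next = soc + p_cmd * DT_H / CAP_MWH                     # anticipated SOC after this step
    term3 = W3 * (max(0.0, SOC_MIN - soc_next) + max(0.0, soc_next - SOC_MAX)) ** 2
    return term1 + term2 + term3

def choose_command(soc, wind_forecast, p_max_mw=10.0):
    # Grid search over charge (+) / discharge (-) commands; stands in for the MVO actor.
    candidates = np.linspace(-p_max_mw, p_max_mw, 81)
    return min(candidates, key=lambda p: utility(p, soc, wind_forecast))

forecast = np.array([150.0, 162.0, 171.0, 165.0, 158.0])        # predicted wind (MW), five 5-s steps
print(choose_command(soc=0.55, wind_forecast=forecast))

In the article the critic network learns the cost-to-go over the whole prediction window and the MVO-based actor adapts from its feedback; the sketch above only evaluates an instantaneous utility for a single step.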



The MVO algorithm is a new, population-based stochastic optimization technique. The MVO algorithm finds the near-optimal solution and is simple to implement. The anticipated SOC deviation of the BESS is obtained using its ampere-hour rating and the forecast wind power over the next several seconds or minutes.

The active power flow fluctuations in transmission lines 1-6 and 6-4 caused by the variations in wind power over a few minutes, shown in Figure 11(a), are plotted in Figure 11(b) and 11(c), respectively. Without an ACD controller, significant power fluctuations occur in the lines, which may result in stability issues and penalties that cause the wind farm to lose revenue. The ACD controller reduces the fluctuations in the transmission lines from the reference line power flow values and, hence, minimizes the deviation penalty charged to the wind power provider. The results presented here use five steps of prediction, a total of 25 s, where each step is five seconds ahead.

Figure 11. (a) Wind power variations over a few minutes. (b) Comparison of power flow in transmission line 1-6 with and without the ACD controller. (c) Comparison of power flow in transmission line 6-4 with and without the ACD controller.


Conclusions
Short-term wind power prediction on the order of seconds, minutes, and a few hours and its application in control centers become critical for the real-time operation of the electricity supply system as more and more wind power penetrates into it. The value of short-term wind power forecasting is high considering the reduction in power losses it offers and its contribution to maximizing the security and stability of the power system, especially since stochastic security-constrained optimal power flow is still far from reaching control centers in the near future. Even more attractive to wind power providers is that short-term wind power forecast–based system applications in control centers can result in the maximization of revenue by minimizing penalties.

For Further Reading
G. K. Venayagamoorthy, “Dynamic, stochastic, computational and scalable technologies for smart grid,” IEEE Comput. Intell. Mag., vol. 6, no. 3, pp. 22–35, Aug. 2011.
B. Lange, K. Rohrig, B. Ernst, B. Oakleaf, M. L. Ahlstrom, M. Lange, C. Moehrlen, and U. Focken, “Predicting the wind—Models and methods of wind forecasting for utility operations planning,” IEEE Power Energy Mag., vol. 5, no. 6, pp. 78–89, Nov.–Dec. 2007.
R. Jursa and K. Rohrig, “Short-term wind power forecasting using evolutionary algorithms for the automated specification of artificial intelligence models,” Int. J. Forecast., vol. 24, no. 4, pp. 694–709, Oct.–Dec. 2008.
I. Erlich, G. K. Venayagamoorthy, and N. Worawat, “A mean-variance optimization method,” in Proc. IEEE World Congress Computational Intelligence, Barcelona, Spain, July 18–23, 2010, pp. 1–6.
V. S. Pappala, I. Erlich, and K. Rohrig, “A stochastic model for the optimal operation of a wind-thermal power system,” IEEE Trans. Power Syst., vol. 24, no. 2, pp. 940–950, May 2009.
G. K. Venayagamoorthy, R. G. Harley, and D. C. Wunsch, “Comparison of heuristic dynamic programming and dual heuristic programming adaptive critics for neurocontrol of a turbogenerator,” IEEE Trans. Neural Networks, vol. 13, no. 3, pp. 764–773, May 2002.

Biographies
Ganesh Kumar Venayagamoorthy is with Clemson University in South Carolina.
Kurt Rohrig is with the Fraunhofer Institute for Wind Energy and Energy System Technology in Kassel, Germany.
István Erlich is with the University of Duisburg-Essen in Germany.

