
DNV GL White Paper on Photovoltaic Module Degradation Document No.: RANA-WP-03-A Date: 3 February 2015


Table of contents

ABSTRACT
1 INTRODUCTION
2 MODULE LEVEL DEGRADATION THEORY
2.1 Short-term degradation
2.2 Long-term degradation
3 SYSTEM LEVEL DEGRADATION THEORY
3.1 Mismatch
3.2 Seasonality
3.3 Statistics
4 METHOD
4.1 Review of study quality
4.2 Review of studies with large or small degradation values
4.3 Review of climatological data
5 RESULTS
5.1 Impact of filtering on average degradation
5.2 Climatological correlation
5.3 Distribution tails analysis
6 CONCLUSION
7 FUTURE WORK
8 REFERENCES

DNV KEMA Renewables, Inc. RANA-WP-03-A



ABSTRACT

DNV GL has reviewed literature reports on observed rates of degradation of photovoltaic (PV) modules and systems, and the reported data were filtered for applicability and relevance to current PV systems. The data were also reviewed for the quality of the measurement method, to better account for uncertainty and to attempt to narrow the large range of reported degradation rates for crystalline silicon PV technologies. The results were statistically aggregated to calculate module and system level degradation rates. This analysis yielded a mean module-level degradation rate of -0.57%/year with a P80 confidence interval of -0.10 to -1.06%/year, and a median system-level degradation rate of -0.77%/year (P80 confidence interval of -0.05 to -1.45%/year). This suggests that an additional 0.20%/year beyond the module degradation rate may be attributed to a combination of system-level degradation impacts and measurement uncertainties. Based on this study, DNV GL has concluded that the long-term system degradation rate of 0.75%/year that we have assumed in the past is supported by the literature. We recognize that the performance of specific products may represent narrow subsets within the range of published results, but the limited availability of product-identified results currently makes it difficult to justify estimating product-specific long-term degradation. This paper describes relevant concerns in evaluating degradation estimates and shows how results from over 2,000 published instances of degradation results were reviewed to reach this finding.



1 INTRODUCTION

The long-term degradation of photovoltaic (PV) power generating systems has been a much-debated subject for many years. Long-term degradation is a slow, irreversible decline in output (as shown in Figure 1). It is distinguished from three other commonly reported and, in some cases, reversible forms of PV degradation, namely:

- Light-induced degradation (LID), a rapid and irreversible 1-3% drop associated with initial sun exposure that has been linked to cell defects/impurities;

- Staebler-Wronski degradation (SWD), seen in thin-film amorphous silicon PV; and

- Potential-induced degradation (PID), seen largely in ungrounded and positively-grounded conventional crystalline silicon (c-Si)¹-based systems.

Figure 1 Simulated monthly generation with long-term degradation

Some researchers contend that long-term degradation is negligible, while others report rates in excess of 4% per year. Some researchers link high degradation to one-off system or module design flaws, while others simply report their observations with no explanation of the underlying mechanism(s) behind the degradation.

National Renewable Energy Laboratory (NREL) researchers Osterwald et al. [1] asserted in 2006 that degradation rates assumed by PV system designers should most likely be between 0.5%/year and 1.0%/year, based primarily on NREL tests. This guidance was used by both Det Norske Veritas and Germanischer Lloyd Garrad Hassan to justify an assumption of 0.75%/year for many years prior to the merging of the companies in 2013. That paper also suggested that more accurate degradation rate information would be important to designers, though no guidance on how to obtain such information was offered.

In 2012, NREL's Jordan and Kurtz [2] compiled over 140 papers in the literature containing nearly 2,000 separate estimates of degradation rates. The Jordan and Kurtz results were divided by technology type, but a significant majority was associated with mono- or poly-crystalline silicon. They also divided the data into

¹ As used here, c-Si encompasses both mono-crystalline silicon and poly-crystalline silicon materials.



equipment fielded prior to or after the year 2000. They did not attempt to characterize the reported values by the quality of the research or the applicability of the design. The key result that DNV GL took from this paper was that the mean value of module degradation reports was approximately 0.8%/year, which was consistent with earlier assessments of degradation by NREL. Note that this report emphasized median rather than mean degradation rates, but the energy output of a large number of PV modules will be governed by their mean rather than their median degradation rate (regardless of whether the mean or median of the system-level degradation values is of interest).

A fundamental challenge of long-term degradation studies is that they take time. If the degradation is assumed to occur at an essentially constant rate, then in theory a short-term measurement can serve as an estimate for long-term behavior. However, two questions need to be answered with this approach: (1) is the rate really constant?, and if so, (2) are the measurements recorded over a short time interval accurate enough to project the long-term outcome? The question of whether degradation is constant is usually assumed to be of limited concern as long as the rate is small (perhaps under 1% per year). However, the question of measurement accuracy is less easily dismissed, because field measurements of irradiance are at best 3-5% uncertain², and if two points are used in computing the degradation rate, then the result will have an uncertainty of roughly 4.5-7.5% divided by the number of years between measurements. For the typical 3- or 5-year test duration, that is roughly 2-5 times the magnitude of the expected value of degradation. Clearly, careful review of equipment uncertainty is required before a reported value of degradation can be trusted.
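The uncertainty arithmetic above can be sketched as follows; the root-sum-square combination of two independent measurements is our assumption about the intended calculation, not a method stated in the text.

```python
import math

def rate_uncertainty(meas_unc_pct: float, years: float) -> float:
    """Uncertainty (%/year) in a degradation rate derived from two power
    measurements taken `years` apart, each with `meas_unc_pct` (%) uncertainty.
    Independent errors combine in quadrature: sqrt(2) * u / years."""
    return math.sqrt(2.0) * meas_unc_pct / years

# Field irradiance is at best 3-5% uncertain (95% CI); two such measurements
# give the "roughly 4.5-7.5%" combined figure before dividing by test length.
for unc in (3.0, 5.0):
    for years in (3.0, 5.0):
        print(f"+/-{unc}% measurements, {years:.0f}-year test: "
              f"+/-{rate_uncertainty(unc, years):.2f} %/year")
```

With a 3- to 5-year test this works out to roughly 0.8-2.4 %/year of rate uncertainty, i.e. several times the expected 0.5-1.0 %/year degradation, which is the paper's point.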
While the Jordan and Kurtz study is particularly interesting because it includes data spanning more than 30 years, some of the reports within the study are based on non-standard module and system construction. For example, the parallel-series-parallel connections described by De Lillo [3] expose modules to greater risk of reverse current flow than the series-parallel configuration normally used in modern systems. Contemporary manufacturers and developers of PV projects commonly highlight differences between present designs and manufacturing methods relative to past methods, arguing that "we don't make them like that anymore." However, contemporary modules share many cell and module design features with early c-Si modules, making well-performed degradation studies on older modules still relevant.

DNV GL decided to review the degradation papers referred to by Jordan and Kurtz, differentiate between technologies one could reasonably describe as obsolete or irrelevant versus those that are reasonably similar to modern PV module and power plant designs, and identify the resulting impact (if any) on long-term degradation. However, any approach to identifying a meaningful long-term degradation value must be based both on what is actually (or potentially) happening in the real world and on one's ability to measure it.

² Uncertainty values quoted correspond to expanded (95%) confidence intervals.



2 MODULE LEVEL DEGRADATION THEORY

Degradation in PV power plants is understood (per current theory) to be divided into short-term and long-term mechanisms, with the focus primarily within the envelope of the PV module.

2.1 Short-term degradation

While the focus of this document is long-term degradation, inappropriate inclusion of short-term degradation (such as LID or SWD) in the collected data has the potential to cause overestimation of long-term degradation. That is, if the initial power rating estimate was recorded prior to the initial short-term degradation, then the "before" versus "after" difference applied to long-term degradation will be inappropriately large.

The most common c-Si PV materials used in the market today are subject to LID. These materials typically use bulk doping of the silicon with boron to produce a P-type bulk material, which is then doped at the surface to produce an N-type layer at the light-exposed cell surface. Oxygen is introduced into the bulk material during the initial doping process along with the boron, and these two elements later react when exposed to light to produce an inactive complex that reduces the effective doping of the P-material [4][5]. The amount of light exposure required for this mechanism to stabilize is often assumed in test standards to be less than 30 kWh/m2, or 30 peak hours of sun. However, in the course of reviewing the stabilization behavior of several dozen PV modules from several manufacturers, we have observed in all cases an apparent "dip" in efficiency, on the order of one-tenth of the "stable" efficiency reduction, whose recovery completed between 60 kWh/m2 and 240 kWh/m2, or roughly 12 to 48 days.

We note that there is currently no evidence of an initial LID in c-Si PV materials that employ N-type bulk doping, though there has been limited research in that area to date, so this is not conclusive. However, this appears to be consistent with the absence of boron in the bulk material to react with oxygen.
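The exposure-to-days conversion implied above can be sketched as follows; the 5 kWh/m2/day figure is an assumed typical daily insolation (it reproduces the text's 12- to 48-day window), not a value the paper states.

```python
def exposure_days(exposure_kwh_m2: float, daily_insolation: float = 5.0) -> float:
    """Convert a cumulative irradiation dose (kWh/m2) into days of field
    exposure. daily_insolation (kWh/m2/day) is an assumed site-typical value."""
    return exposure_kwh_m2 / daily_insolation

# The test-standard stabilization dose and the observed recovery window:
print(exposure_days(30))                       # ~6 days for 30 kWh/m2
print(exposure_days(60), exposure_days(240))   # the 12- to 48-day window
```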
Regardless of how completely the initial degradation has progressed when the "initial" performance measurement is made, if an initial field measurement is made after as little as a week of exposure during construction, then that measurement should account for more than half of the LID impact. Thus, our primary criterion for avoiding data that included short-term degradation was to identify papers that based the initial capacity on nameplate power rather than field measurements.

2.2 Long-term degradation

Degradation of function in any system arises either from relatively few failures that occur at the "macro" scale (e.g., shattered glass) or as an accumulation of many small failures (e.g., migration of chemicals to undesirable locations, or thermal damage to solder bonds). The modular nature of PV arrays limits the propagation of most individual macro failures, so individual failures do not typically affect large portions of industrial or utility-scale PV systems. At this scale, even glass breakage can potentially be viewed as a "micro" scale effect on overall power production. Four broad areas of concern regarding long-term degradation are: chemical changes, micro-mechanical changes, macro-mechanical changes, and mismatch.



2.2.1 Component chemistry

Some investigators have asserted that, for practical purposes, all significant degradation mechanisms are external to c-Si PV cells [6]. PV technology does depend on very specific spatial segregation and mixtures of chemicals. In a poor quality product, this arrangement of materials may be insufficient or just marginal coming off the production line. A significant fraction of the performance gains available to manufacturers as they improve their products relates to improvements in dimensional control over the placement and extent of these chemicals. Because this sensitivity to the degree of segregation of materials is achieved by controlled chemical diffusion, reverse chemical diffusion in the field is a plausible cause of degradation. However, post-degradation analyses of c-Si PV modules rarely find that degradation of cell chemistry is significant [7]. Instead, encapsulation failures that allow acceleration of corrosion of the cells and conductors within the module are associated with the most commonly-observed chemical degradation mechanisms in the field.

One widely noted concern in module performance degradation is encapsulation "browning", where the encapsulant transmissivity decreases over time. This is widely considered to be a chemical change in the normally-transparent encapsulation that is triggered by exposure to ultraviolet light. Both the composition and purity of materials can affect susceptibility to browning; thus, as PV module costs are driven down, there is an increasingly real risk of encountering this problem in new products.

PID was discovered within several PV manufacturing companies around 2007 and became widely reported around 2010, though a different potential-related mechanism was reported by SunPower in 2005 [8].
The most typical PID phenomenon is characterized by voltage-driven migration of sodium ions present in the encapsulation system (primarily the glass) when the polarity and magnitude of the voltage across that encapsulation are configured to enhance that migration. If the encapsulant system or cell surface treatments are resistant to ion migration, then PID may not occur. In addition, the system integration conventions used in the design of many PV systems did not activate this degradation mechanism for many years, as bipolar, isolated, and positively-grounded systems were atypical. While some reports currently indicate that the impact of PID can be reversed by changing the polarity of voltage across the module encapsulation, other ion migration mechanisms may not be so reversible.

2.2.2 Micro-failures

Most PV reliability efforts focus on mechanical concerns such as cell-interconnect failures and encapsulation failures. Cell cracks, front-surface metallization, cell-to-cell interconnections, string-to-lateral-bus connections, and connections to the external wiring are all frequently noted as locations where partial failures occur. In most cases, the failure is small enough that the module can continue to function at reduced effectiveness. These types of mechanisms are generally considered in the literature to be the second most common causes (after browning) of observed performance reductions in the field.

2.2.3 Macro-failures

The transition between microscopic and macroscopic failures is somewhat arbitrary, but items visible to the naked eye and effects on the encapsulation are generally considered in this category. Some microscopic failures can have large impacts on performance (e.g., micro-cracks), and some macroscopic failures have little effect on performance (e.g., corrosion on the frame, or delamination within the laminate that does not breach the environmental seal). However, macroscopic failures such as junction box cracks or bypass diode failures are more likely to be associated with (to cause, or be caused by) a breach of encapsulation. Because this can lead to safety hazards or



significant loss of power, they are more likely to cross the line from being labeled as performance degradation to being labeled as a product defect and triggering a warranty response. This type of re-labeling of the problem from degradation to defect puts a cap on the magnitude of power loss attributed to degradation in the normal course of project planning. That is, if a power plant is losing power at 2% per year, then in 5 years the output will be reduced by 10%, and any competent asset manager will be looking for resolution via warranty or insurance. This business perspective on the use of the term degradation is not necessarily considered in the degradation literature, where a complete module failure is sometimes counted as a case of 100% degradation and aggregated with the rest of the data. To distinguish between the scientifically-correct definition of degradation and degradation as addressed in the business perspective, we introduced a filtering criterion, labeled "likely warranty", that excludes data with degradation rates greater than 2%/year.
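The 2%/year example works out as follows; the compound variant is our addition, included to show that the simple multiplication in the text slightly overstates the loss.

```python
def cumulative_loss(rate_pct_per_year: float, years: int, compound: bool = False) -> float:
    """Cumulative output loss (%) after `years` at the given degradation rate.
    The text's 2%/year x 5 years = 10% example uses the simple linear form;
    compounding each year's loss gives a slightly smaller figure."""
    if compound:
        return 100.0 * (1.0 - (1.0 - rate_pct_per_year / 100.0) ** years)
    return rate_pct_per_year * years

print(cumulative_loss(2.0, 5))                  # 10.0 (linear, as in the text)
print(round(cumulative_loss(2.0, 5, True), 2))  # about 9.6 with compounding
```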

2.2.4 Mismatch

The random occurrence of partial failures within each cell is one effect that can cause each cell to degrade at a different rate. The resulting variation in cell properties leads to weak cells dissipating part of the power produced by strong cells. (Note that cell-to-cell variations in available irradiance can temporarily contribute to mismatch as well, due to irradiance's strong effect on current in these series-connected cells. Temperature variations can also contribute, but to a much smaller degree, because temperature primarily affects voltage.) Bypass diodes are typically included in every PV module to limit the amount of power that a weak cell might dissipate, because high dissipation levels can act as fire initiation sources. There are typically (but not always) 3-4 bypass diodes per module, each limiting the effects of high levels of mismatch to within its group of cells.

Mismatch is an intrinsic component of the degradation observed in degradation testing of individual modules, and as such does not need to be accounted for separately from the reported overall degradation at the module level. However, while bypass diodes are effective at reducing the impact of high mismatch levels, they do not affect the propagation of smaller levels of mismatch. That is, small but growing discrepancies between individual cell output levels can affect the performance of cells located in other modules. This inter-module mismatch should be considered as a factor in PV system level degradation separate from module-level degradation, as discussed in Section 3.1.
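Why one weak cell penalizes a whole series string can be illustrated with a toy calculation. The cell parameters and the fixed-max-power-point model below are illustrative assumptions, not data from the paper; real I-V curves would soften the numbers.

```python
def string_power(cells):
    """Crude series-string estimate: the string operates at the lowest cell
    max-power current, with each cell contributing (approximately) its
    max-power voltage. cells: list of (i_mp_amps, v_mp_volts) tuples."""
    i_string = min(i for i, _ in cells)
    return i_string * sum(v for _, v in cells)

# Hypothetical cell max-power points: 10 matched cells vs one 10% low in current.
matched = [(9.0, 0.55)] * 10
one_weak = [(9.0, 0.55)] * 9 + [(8.1, 0.55)]

p_ideal = sum(i * v for i, v in one_weak)   # cells operated independently
p_string = string_power(one_weak)           # cells forced to share one current
print(f"matched string: {string_power(matched):.2f} W")
print(f"{100 * (1 - p_string / p_ideal):.1f}% string loss from one weak cell")
```

In this idealized model, one cell 10% down in current costs the series group roughly 9% rather than 1%, which is why bypass diodes exist to contain the severe cases.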

3 SYSTEM LEVEL DEGRADATION THEORY

It is commonly observed in the literature that there are notable differences between long-term module level degradation and array or system level observed degradation. This difference is often attributed to balance-of-system (BOS) losses, and system level degradation is considered to be the sum of module level degradation and BOS degradation effects. However, this offset observed in system level measurements could be due in part to the higher measurement uncertainty of typical array-level performance measurement.

3.1 Mismatch

Extending the discussion of mismatch from the module level, we note that each module can degrade at a different rate. Over time, the "spread" of these degradation rates is expected to lead to system mismatch,



where the optimum operating currents and voltages of modules may vary. For hardwired arrays, this can lead to none of the degraded modules working as well in the array as they would individually, and this impact is expected to increase with time. Figure 2 illustrates a theoretical difference in degradation rates depending on how PV modules are installed (ground-mount or rooftop) [9]. This theoretical difference arises from a combination of a heat-induced increase in failure rate when modules are exposed to environments with limited cooling, as well as increased cell mismatch within the modules.

It is important to keep in mind that while Figure 2 was presented by SunPower to demonstrate field validation of their PVLife model, the measured data points have quite large uncertainties, so we do not regard the magnitudes or even the shapes of the modeled curves as verified yet. That is, we agree with the logic leading to the conclusion that some difference may be present in the field later in life, but the given curves may present an exaggerated view of the impact of different thermal environments. This information does suggest that it may be prudent to wait longer than 5 years before drawing conclusions about the long-term system degradation rate, and that reports of low degradation coming from cool climates may not be appropriate to use for projects located in medium or hot climates.

Figure 2 Theoretical impact of mismatch (source: SunPower)



3.2 Seasonality

Field measurements of performance have been observed to vary seasonally in numerous papers. This behavior is most noticeable in thin-film technologies (e.g., [10] or [11]), but can also be observed with c-Si technologies (e.g., [12] or [13]) when the measurements are made in the field. The fact that apparent performance varies cyclically during each year suggests an additional reason (beyond measurement uncertainty) that short-interval data should not be used as a basis for drawing conclusions about long-term degradation. Alternatively, removal of modules for flash testing under controlled conditions at various times during the system lifetime can be used to distinguish module degradation from mismatch and BOS degradation.

For amorphous silicon, the three major mechanisms that affect seasonality in the field are Staebler-Wronski LID, thermal annealing (which can partially reverse the SWD), and material spectral sensitivity interacting with seasonal spectral changes in available irradiation [14][15]. For c-Si, spectral sensitivity is the only one of these that applies, and its impact is not as dramatic. Other effects that can confound field performance measurements for any PV technology include varying reflection losses due to different incidence angles on the module; uncorrected temperature variation (e.g., use of the performance ratio); and intrinsic efficiency variation over different power levels. As this report is focused on c-Si degradation rates, seasonality is viewed as predominantly impacting the accuracy of field measurements rather than as a module level degradation mechanism.
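A sketch of the temperature correction alluded to above (the parenthetical about uncorrected performance ratio). The -0.4%/degC coefficient is an assumed, typical c-Si power temperature coefficient, and the monthly numbers are invented for illustration; this is one common correction form, not necessarily the one used in any cited study.

```python
def performance_ratio(e_kwh, h_kwh_m2, kwp):
    """Plain performance ratio: generation over irradiation-scaled rating."""
    return e_kwh / (h_kwh_m2 * kwp)

def temp_corrected_pr(e_kwh, h_kwh_m2, kwp, t_cell_c, gamma=-0.004):
    """PR with a simple cell-temperature correction. gamma is an assumed
    c-Si power temperature coefficient (per degC); 25 degC is the STC
    reference temperature."""
    return performance_ratio(e_kwh, h_kwh_m2, kwp) / (1 + gamma * (t_cell_c - 25.0))

# Same hypothetical plant and monthly output, summer vs winter cell temperature:
# the raw PR would differ seasonally, while the corrected PR converges.
print(round(temp_corrected_pr(480, 6.0 * 30, 3.6, t_cell_c=50), 3))
print(round(temp_corrected_pr(480, 6.0 * 30, 3.6, t_cell_c=15), 3))
```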

3.3 Statistics

One recurring observation regarding degradation of the power output of PV modules is that it tends to affect each sample in a given study differently. Within a sample of modules of a particular model, a range of apparent degradation levels is often observed that may vary according to some distribution from negligible to some worst value [16]. If most of the modules in a particular PV system have small degradation values, then we should expect that the overall impact on the PV system will also be small. The distribution that should apply in any particular case is currently chosen (essentially) arbitrarily, due to the very limited amount of long-term data upon which to base that selection. The largest body of degradation data currently available is the compiled data from the Jordan and Kurtz paper [2], so we are currently using that instead of the limited manufacturer-specific distributions. (Where unbiased manufacturer- and model-relevant long-term degradation data are available, we welcome the opportunity to use those instead for the distribution.) The challenge we are tackling here is to determine whether the distribution obtained from the literature can be reasonably altered by more careful sorting of the data to simulate the results we might obtain with different exposures, increased type testing, or quality assurance (batch) testing.

Note that this random degradation behavior at the module level has several implications: first, that the mean value is an appropriate measure of central tendency; and second, that the variation in degraded performance will lead to some increasing level of mismatch as the array ages. Neglecting mismatch for the moment, we can expect that the total power output of a PV plant will be related to the sum of the power ratings of the individual modules used to construct it.
This sum is proportional to the mean value of module power ratings, so plant degradation should be most closely characterized by the mean value of the degradation behaviors of those constituent modules. This conclusion has not been observed in the literature, where median values have been used (presumably because that statistic is more robust in the presence of invalid data than the mean value).
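The mean-versus-median point can be demonstrated numerically. The lognormal shape and its parameters below are purely illustrative assumptions about a right-skewed rate distribution, not fitted to any data set.

```python
import random

random.seed(1)
# Hypothetical right-skewed module degradation rates (%/year): most modules
# degrade slowly, a minority degrade quickly (capped at 4 %/year).
rates = sorted(min(4.0, random.lognormvariate(-0.7, 0.8)) for _ in range(10_000))
median_rate = rates[len(rates) // 2]
mean_rate = sum(rates) / len(rates)

# Plant output after n years is the *mean* of the per-module remaining
# fractions, so the skewed tail pulls it below what the median rate implies.
n = 20
plant_fraction = sum((1 - r / 100.0) ** n for r in rates) / len(rates)
median_fraction = (1 - median_rate / 100.0) ** n
print(f"median {median_rate:.2f} vs mean {mean_rate:.2f} %/year")
print(f"plant after {n} y: {plant_fraction:.3f}; median-rate prediction: {median_fraction:.3f}")
```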



When the effect of mismatch between degraded modules is considered, an apparent increase in degradation rate over time should occur at the system level that would not be anticipated if only the mean value of module degradation rates were considered. In addition, corrosion or strand damage in the wiring connections outside of the modules can cause additional system-level losses. Some recent papers have attempted to address the problem of estimating degradation based on highly uncertain data by resorting to methods of “robust statistics” [17] [18]. DNV GL is not convinced that these methods are arriving at valid conclusions based on the amount of biased and randomly-inaccurate data that is judged to be present in these data sets [19].

3.3.1 Simulated examples Another statistical consideration is the difficulty of extracting long-term degradation from measured data even if you have no measurement error. Figure 1 shows simulated monthly performance from 20 distinct weather years “degraded” by applying a 0.75% per year degradation rate. The regression line through this simulated generation is shown with a 95% confidence band that shows increased uncertainty near the beginning and end of the data set. The spreading of this band graphically illustrates the uncertainty in the slope of the regression line. In fact, the slope obtained by regression in this case is -0.68%/year, with a statistically-estimated range of possible true values as small as -0.07%/year or as large as -1.3%/year (95% confidence). This clearly includes the (in this case known because of the simulated origin of the data) true value of -0.75%/year, but not very precisely. It also shows that the apparent rate of reduction in generation is not completely determined by the power production degradation rate. Figure 3 illustrates how extracting the degradation rate becomes easier if irradiance variation is accounted for by regressing the Performance Ratio PR

, where E is generation, H is irradiation in kWh/m2/day, and kWp is STC rated

power of the power plant. The estimated degradation of PR is -0.77%/year, or some value between 0.67%/year and -0.86%/year (95% confidence). Of course, that is assuming 20 years of data and perfect irradiance measurement. If we measure module temperature and correct for that (reducing uncertainty), and assume that we have 5 full years of data (greatly increasing uncertainty), and pyranometer drift is -0.5%/year (which is a bias that could have been partially mitigated by recalibration every 3 years), then we get a result something like Figure 4. In this case, the pyranometer drift causes the degradation to be underestimated at -0.23%/year. Keeping in mind that the true value entered into the simulation is still 0.75%/year, we also note that the still-uncomfortably-wide statistical confidence interval of -0.11%/year to -0.35%/year also fails to warn us that the measurement bias error is even present by not capturing the true degradation rate at all. This highlights the fact that caution is advised even when results appear consistent because measurement problems may not always be identified by statistical characterization of the results. These examples illustrate how an apparently straightforward use of measured data could be quite misleading even when the data are measured correctly. This observation leads us to prefer results obtained over at least 5 years, correcting for both irradiance and temperature, and to average the results from as many different equipment samples and test organizations as we can obtain before presuming to be confident in a result.
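A simulation in the spirit of the Figure 1 experiment can be sketched in a few lines. The seasonal shape, weather noise level, and random seed are our own assumptions, so the fitted value will not reproduce the -0.68%/year quoted above exactly; the point is only that the fit wobbles around the true rate.

```python
import math
import random

random.seed(42)
TRUE_RATE = -0.0075              # -0.75%/year, as in the simulated example
months = range(12 * 20)          # 20 years of monthly data

# Seasonal profile x random weather, with linear degradation applied.
y = [(1.0 + 0.3 * math.sin(2 * math.pi * m / 12.0))
     * random.gauss(1.0, 0.08)
     * (1.0 + TRUE_RATE * m / 12.0)
     for m in months]

# Ordinary least-squares slope of generation vs time, expressed in %/year.
xs = list(months)
n = len(xs)
mx, my = sum(xs) / n, sum(y) / n
slope = (sum((x - mx) * (v - my) for x, v in zip(xs, y))
         / sum((x - mx) ** 2 for x in xs))
fitted = 100 * 12 * slope / my
print(f"fitted {fitted:.2f} %/year (true {100 * TRUE_RATE:.2f})")
```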



Figure 3 Degradation rate extraction improves by accounting for irradiance variation

Figure 4 Feasible degradation example



4 METHOD

The Jordan and Kurtz Analytical Review paper [2] summarizes results by identifying one or more module or system degradation rate estimates from each original research paper and categorizing them by age of installation and technology type. Jordan and Kurtz shared their base data set with DNV GL, and we proceeded to reduce the data set by eliminating papers that did not meet a set of high-level criteria, as discussed below, and then reviewed the remaining papers in greater detail to categorize them according to whether the modules had been tested or qualified in a few ways.

4.1 Review of study quality

Table 1 presents the high-level filtering criteria that were used in this review. Note particularly that the potentially subjective "Priority" criteria were intended to limit scope to a reasonable level of effort.

Table 1 Initial exclusions (category: rationale)

String degradation: Removed because this tiny amount of data contains more than module-level effects but not all of the effects that are expected to affect systems.

Non-c-Si module technologies: Mono- and poly-c-Si technologies still make up a significant portion of the installed base, and by far the majority of questions about degradation are for c-Si technologies.

Concentrator technologies: Some reports included reflectors or lenses to increase the irradiance on the active surface of the cells. These were excluded due to the wide variety of accelerated degradation mechanisms and/or thermal dissipation capabilities of such designs, and their low market penetration.

Low priority: Papers for which (1) the abstract indicated the focus was on non-c-Si technologies (even if some comparative reports on c-Si were mentioned), (2) the evaluation date was prior to 1990, or (3) we judged the study methodology to be questionable were not examined in this review.

Medium priority: Papers for which only one (but not both) of the conditions (1) "technology appeared to be applicable to typical modern systems" and (2) "methodology appeared promising" was true.

Superseded: Where we were able to identify that multiple reports of degradation related to the same equipment, we discarded the earlier estimates in favor of estimates made using longer field exposure.

<5 years exposure: During short field exposures, measurement uncertainty increases the degradation estimate uncertainty. If the field exposure was for fewer than five years, we excluded the result. Note that even this period of time requires very high quality measurements in order to obtain accurate estimates of degradation.

Likely warranty: Remove all reports of degradation with rates greater than 2%/year, per discussion in Section 2.2.3.

DNV KEMA Renewables, Inc. RANA-WP-03-A

Page 13 of 28


4.2 Review of studies with large or small degradation values

After the initial elimination of data, DNV GL assessed the papers that reported the lowest and highest degradation values. Individual degradation values in the smallest 25% (first quartile) and largest 25% (fourth quartile) were categorized as shown in Table 2.
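The quartile screen described above can be sketched as follows. This is an illustrative sketch only: the rates and the resulting thresholds are hypothetical placeholders, not values from the reviewed data set.

```python
# Tag each reported degradation value as first-quartile, fourth-quartile,
# or interior; only the two extreme groups get the detailed review.
import numpy as np

rates = np.array([0.1, 0.3, 0.5, 0.6, 0.7, 0.8, 1.2, 2.0])  # %/year, hypothetical

q1, q3 = np.percentile(rates, [25, 75])

def tail_category(rate):
    """Label a degradation value for the detailed review step."""
    if rate <= q1:
        return "first quartile"   # smallest 25% of reported rates
    if rate >= q3:
        return "fourth quartile"  # largest 25% of reported rates
    return "interior"             # not reviewed in detail

labels = [tail_category(r) for r in rates]
```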

Table 2  Measurement quality categories

Account for binning: If the method described in the paper mentioned that the initial rating was in fact measured, or that the degradation was based on "continuous" (frequent) field measurements, this was marked TRUE.

Account for LID: If the method described in the paper mentioned that the initial rating occurred after some exposure, this was marked TRUE.

Number of measurements: The fewest rating measurements that can identify degradation is two; additional measurements add redundancy to the degradation estimate. Values found were "2", "4", "Continuous", or "NA" (not evident in the source paper).

Qualification testing: International Electrotechnical Commission (IEC) standard 61215 [20] is the current standard test protocol for qualifying module designs for field use. CEC501 was a European precursor to IEC 61215 with somewhat weaker exposure requirements. Modules reported as having been qualified were marked as such; modules confirmed as not having been tested were marked "None"; if this status could not be determined, it was marked "NA".

4.3 Review of climatological data

Because the data extracted from the papers do not in general specify the temperatures to which the equipment was actually exposed, we cannot accurately portray such information in aggregate. However, in most cases we do know the area where the equipment was installed, so we examined typical (climatological) weather in those locations [21]. Specifically, the module and system degradation rates were plotted against both the annual mean dry-bulb temperature for each testing location and the annual maximum of the monthly mean temperatures. Similar plots were generated for degradation against relative humidity.
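The climate comparison amounts to joining each study's degradation rate to the climatological statistics of its location and checking the correlation. A minimal sketch, with hypothetical sites and values:

```python
# Join degradation rates to location climatology, then compute the
# Pearson correlation against annual mean temperature.
import numpy as np

# site -> (annual mean dry-bulb temp in degC, max of monthly mean temps in degC)
climate = {
    "site_a": (11.0, 22.0),
    "site_b": (18.5, 30.0),
    "site_c": (25.0, 34.5),
}
studies = [("site_a", 0.5), ("site_b", 0.7), ("site_c", 0.9), ("site_a", 0.4)]

mean_temp = np.array([climate[site][0] for site, _ in studies])
rate = np.array([r for _, r in studies])

# Pearson correlation between degradation rate and mean temperature
r_temp = np.corrcoef(mean_temp, rate)[0, 1]
```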



5 RESULTS

5.1 Impact of filtering on average degradation

A waterfall analysis of data filtering examines the impact of sequentially applying filtering criteria to the data. Table 3 presents a waterfall analysis of the applied data filtering, including a count of the number of degradation rate values remaining at each stage. Note that the module mean values are quite similar for most filtering criteria, with the largest impact coming from the "Likely Warranty" criterion, which yielded a final module degradation rate of 0.57%/year. As argued earlier, the mean of the module degradation rates is expected to best represent the overall effect of module degradation on system degradation. However, the median of the system degradation rates would still logically be applied to obtain the P50 energy estimate that is typically used in pro-forma investment analyses.

The system degradation after the final, "Likely Warranty," filtering was 0.77%/year, indicating a BOS degradation rate of 0.20%/year. However, the count of degradation measurements remaining after filtering at the system level is low (36), which could increase the uncertainty in this value. P10 is provided to indicate the degree of variation for up-side analysis. The P90 is quoted as representative of the type of down-side analysis that is often applied for finance stress testing. However, as noted above, there may be increased uncertainty in the P10 or P90 for the system-level data due to the small count of data after filtering.
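The waterfall analysis can be sketched as below. The records and filter predicates are illustrative placeholders, not the actual data set or the exact filters of Table 3; the point is the mechanics of reporting P10 / mean / median / P90 / count after each sequential filter.

```python
# Apply filters in sequence and record summary statistics at each step.
import numpy as np

records = [
    # (rate %/yr, years_exposure, is_c_si, likely_warranty)
    (0.4, 8, True, False), (0.6, 10, True, False), (1.1, 6, True, False),
    (2.5, 7, True, True), (0.8, 3, True, False), (0.7, 12, False, False),
]

filters = [
    ("c-Si only",       lambda r: r[2]),
    (">=5 yr exposure", lambda r: r[1] >= 5),
    ("Likely Warranty", lambda r: not r[3]),   # drop rates flagged > 2%/yr
]

waterfall = []
remaining = records
for name, keep in filters:
    remaining = [r for r in remaining if keep(r)]
    rates = np.array([r[0] for r in remaining])
    waterfall.append((name,
                      np.percentile(rates, 10), rates.mean(),
                      np.median(rates), np.percentile(rates, 90),
                      len(rates)))
```

Each tuple in `waterfall` is one row of the table: (criterion, P10, mean, median, P90, count).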

Table 3  Data filtering impact on degradation (%/year)

                        ------------- Module -------------    ------------- System -------------
Category                P10    Mean   Median  P90    Count    P10    Mean   Median  P90    Count
No string degradation   0.17   0.82   0.60    1.60   1799     0.00   0.90   0.61    2.13   248
Minor technology types  0.17   0.82   0.60    1.60   1799     0.00   0.89   0.61    2.12   246
Low priority            0.17   0.79   0.57    1.57   1300    -0.03   0.75   0.60    1.81   170
Med priority            0.10   0.91   0.57    2.32   513     -0.03   0.81   0.64    2.10   130
Non-c-Si                0.10   0.79   0.52    1.91   450     -0.04   0.75   0.59    1.84   115
Superseded              0.10   0.79   0.52    1.86   444     -0.04   0.76   0.59    1.85   114
Concentrator            0.10   0.79   0.52    1.86   444     -0.04   0.76   0.59    1.85   114
No account for bins     0.10   0.79   0.52    1.86   444     -0.04   0.76   0.59    1.85   114
No account for LID      0.10   0.86   0.58    2.18   356     -0.05   0.76   0.60    1.85   113
<3 years                0.10   0.84   0.57    2.14   329     -0.03   0.78   0.65    1.68   89
<4 years                0.10   0.82   0.57    2.07   315     -0.02   0.80   0.71    1.63   67
<5 years                0.10   0.81   0.57    2.00   302      0.06   0.81   0.79    1.47   37
Likely Warranty         0.10   0.57   0.50    1.06   271      0.05   0.78   0.77    1.45   36



Note that Table 3 presents only one of the possible filter sequences that we examined. The only subset of the data we have tried for which the mean degradation decreases significantly is the Likely Warranty filter applied to modules. One interpretation of this result is that the criteria we chose do not correlate with degradation, which would imply that "we don't build them that way now" is not a relevant argument. Another is that unusual design features in some of the PV systems addressed in the remaining research papers are not being reported. Yet another is that significant measurement uncertainty in the remaining data obscures the true degradation rate in many of the reported values. Each of these explanations could account for some or all of the filtering results.

We also note that the reported system degradation rates are in many cases lower than the module degradation rates. This could be because the distribution of degradation rates in the literature does not accurately represent the distribution of degradation rates in individual systems. However, given the larger uncertainties in estimating system degradation rates, it could just as well be an artifact of measurement uncertainty.

One distinction that Jordan and Kurtz emphasized was the difference between older installations (installed prior to the year 2000) and newer installations (the rest). They found a statistically-significant difference only for thin-film technologies; results for c-Si were indistinguishable between older and newer installations. We found that when this distinction was applied in conjunction with our other filters, the number of remaining degradation estimates from recent installations dropped below 30, yet the mean and variance of the data were not changed. (That is, reports on modern installations include a similar proportion of high degradation rates as older reports.)
For P50 estimates of system degradation, the results in Table 3 are indistinguishable from 0.75%/year. While there remains concern that system performance measurements may have higher uncertainty than module measurements due to uncontrolled measurement conditions and lower-cost instrumentation, the system P50 and mean degradation values are remarkably similar to the module-level mean degradation for most module data filtering criteria.

Estimates of mean module degradation are noticeably reduced when the "Likely Warranty" criterion is applied to this data set. While most module warranties cover 1% or less degradation per year, in practice it is challenging to identify which specific modules in an array are exhibiting this degradation and qualify under the warranty criteria. If we assume that the high module degradation rates (>2%/year) excluded by the "Likely Warranty" criterion are typically associated with known design flaws, visible failure modes, and/or complete module failures, then individual module warranty claims should be identifiable from visual inspection or high-quality O&M monitoring practice. To further manage the risk of systematic (widespread) warranty claims for projects, a combination of qualification tests and batch testing can be used to help screen for design flaws and manufacturing deviations.

In earlier reviews of this data set, we examined only the subset of data applying to c-Si flat-plate modules and observed P90 stress-case values of -1.3%/year. In this analysis some data with very small or negative (improving) degradation rates are no longer included, and the P90 stress case now appears more severe at -1.45%/year. Note that identifying severe degradation on an individual-module basis in the field is not normally economically practical, so it may not be appropriate to assume that the -1.06%/year P90 module degradation rate applies at the module level.



5.2 Climatological correlation

Figure 5 shows how the reported degradation rates trend with the mean climatological temperature, as well as with the maximum of the monthly average climatological temperatures, at the installation locations for the filtered data described in Table 3 [22] [23]. Figure 6 shows similar correlations for climatological relative humidity. Although there does appear to be a positive temperature correlation for modules, the high degradation rates do not tend to occur where the highest temperatures occur, so the correlation is weak. That is, degradation rates still appear to be dominated by factors other than temperature. The correlations for mean and maximum mean relative humidity are insignificant.

Figure 5 Degradation rate vs. climatological temperature



Figure 6 Degradation rate vs. climatological relative humidity

Jordan et al. [24] analyzed an extended version of this data set against the Köppen-Geiger climate classification system and found wide ranges of degradation values in various climates. We view this as confirmation that climate is, at best, much less important than other factors (such as design, manufacturing process, or materials quality) in determining degradation outcomes.

5.3 Distribution tails analysis

Table 4 summarizes several potential distinguishing factors for the smallest (first quartile) and largest (fourth quartile) reported degradation rates among the filtered data. Along with the mean and median, the count of degradation rate values found and the number of papers involved are reported. Degradation rates are highest when the reported methods appear to avoid potential errors due to mis-handling of rating bins or LID. This is counter to our hypothesis that failure to account for these errors could be leading to inflated degradation values. In addition, we see that modules for which CEC501 qualification testing was performed correspond to both large and small values of degradation. This suggests that requiring the CEC501 test protocol to be passed before modules were installed did not prevent high degradation rates from occurring in some modules. To see whether the IEC 61215 testing protocol had an impact on degradation, we reviewed the data for instances where module designs had passed IEC 61215. No instances of the IEC 61215 protocol were observed in the first quartile, and only four such modules were in the fourth quartile. These data are sparse, but they do indicate that this testing is insufficient to identify fast-degrading modules.

In an additional examination beyond Table 4, we observed that six of the 20 papers reporting results in the extreme quartiles actually reported degradation rates in both the first quartile and the fourth quartile. Four papers reported in the first quartile but not the fourth, and ten papers reported in the fourth but not the first. This 30% overlap suggests that product-specific degradation rates could be significant because the study



method is the same for those cases. However, it could also reflect measurement uncertainty large enough to span both quartiles.
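The overlap check is simple set arithmetic per paper. In this sketch the paper IDs are hypothetical placeholders, chosen only so that the counts reproduce the figures reported above (six papers in both extreme quartiles, four first-only, ten fourth-only):

```python
# Papers contributing degradation values to each extreme quartile
# (hypothetical IDs; counts mirror the text).
first_q = {"p1", "p2", "p3", "p4", "p5", "p6", "p7", "p8", "p9", "p10"}
fourth_q = {"p5", "p6", "p7", "p8", "p9", "p10", "p11", "p12", "p13",
            "p14", "p15", "p16", "p17", "p18", "p19", "p20"}

both = first_q & fourth_q          # papers in both extreme quartiles
only_first = first_q - fourth_q    # first quartile only
only_fourth = fourth_q - first_q   # fourth quartile only
overlap_share = len(both) / len(first_q | fourth_q)
```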

5.3.1 Cited causes of high degradation and failures in the fourth quartile

From a more detailed review of the literature [25][26][27][28][29][30][31][32][33][34][35][36][37], the highest reported degradation rates were on modules or systems that displayed the following factors:

- Encapsulant discoloration
- Cell/encapsulant delamination
- Hot spots
- Poor system monitoring
- Inefficient design or system cooling

Many of these failure mechanisms are well known, and strategies for addressing them are part of known good PV module manufacturing processes today. IEC qualification testing such as damp heat (IEC 61215-10.13) and temperature cycling (IEC 61215-10.11) can indicate whether a module design is sensitive to these types of failures. However, there is still no convincing quantitative correlation between IEC qualification testing and performance degradation. Extended reliability testing and more extreme Highly Accelerated Life Test (HALT) / Highly Accelerated Stress Test (HAST) analyses have been used in other industries to reduce field failures, and the NREL PV "QC Plus" proposal would formalize a set of extended tests. There are two key challenges in developing these types of reliability tests for PV systems: (1) the ability to predict long-term (>25 years) reliability outcomes from short-term tests, and (2) the ability to predict performance.

In particular, we point out that the age of the modules cannot be directly correlated to high degradation rates and serial module failures, as many older modules also showed degradation rates in the first quartile (as well as in the bulk of the distribution). Likewise, some newer modules showed high degradation rates and quality problems. Therefore it is important not to make the leap that "newer is better" in the context of degradation, but rather to use completion of specific accelerated exposure tests on a statistically-significant random sample of product to support the assertion that warranty claims will be unlikely.

If we remove these possible serial module defects from the fourth quartile of the selected data, there is minimal impact on the mean, as the majority of the data points are near the center of the distribution. However, the P90 actually increases from 2.0%/year to 2.2%/year, because serial defect indicators were identified with slightly more instances of typical degradation than instances of extreme degradation.
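The counterintuitive P90 shift can be demonstrated directly: removing flagged points raises the P90 whenever the flags fall more often near the center of the distribution than in the extreme tail. The values below are illustrative, not the study's data.

```python
# Removing mid-distribution points shifts the 90th-percentile rank
# toward the surviving tail values, so the P90 can increase.
import numpy as np

rates = np.array([0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.9, 2.1])
flagged = np.array([False, False, False, True, True, True,
                    False, False, False, False])  # suspected serial defects

p90_all = np.percentile(rates, 90)
p90_screened = np.percentile(rates[~flagged], 90)  # higher than p90_all
```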
This illustrates the need to confirm "reasonable" correlations before using them. Another "reasonable" assumption could be that warranty claims will mitigate possible high degradation rates. A "reasonable" upper limit for degradation might be 2% per year (as assumed in our "Likely Warranty" case), for two reasons: typical warranty values and claim impediments. The typical warranted degradation rate is about 1% per year or lower, and a reasonable person would expect the manufacturer to target a mean degradation rate smaller than this value in order to minimize warranty liability. However, two claim impediments are that there is cost overhead in pursuing warranty claims, and that measurement uncertainty is normally assumed to be in the manufacturer's favor for purposes of warranty claims. Thus, for correction within the first five years of operation, degradation would have to exceed typical field equipment measurement uncertainty by enough margin to trigger a closer look by the owner within three years, to convince the manufacturer that there was a defect by the fourth year, and to allow replacement in the fifth year. For field-use pyranometers with 5% uncertainty, a 2%/year degradation rate should be convincing in the third year. For these reasons, we assume that degradation rates greater than 2% per year are likely to be recoverable,



while rates less than 2% per year are unlikely to be corrected by warranty in the first five years of operation. Correction after five years would represent a significant degradation impact on the system's lifetime energy generation. A 2%/year rate sustained over four years amounts to 8% cumulative degradation in the fourth year; cumulative degradation less than this would be challenging to distinguish from normal weather variation and irradiance measurement uncertainty.
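The detectability arithmetic above can be made explicit: cumulative degradation must exceed the field measurement uncertainty before a claim is plausible. A minimal sketch, using the 5% pyranometer uncertainty cited in the text:

```python
# First year in which cumulative degradation exceeds measurement
# uncertainty, for a given constant annual degradation rate.
measurement_uncertainty = 5.0   # percent, field-grade irradiance sensing

def years_to_detect(rate_pct_per_year, uncertainty_pct):
    """Return the first year where cumulative loss exceeds the uncertainty."""
    years = 1
    while rate_pct_per_year * years <= uncertainty_pct:
        years += 1
    return years

detect_2pct = years_to_detect(2.0, measurement_uncertainty)  # 2%/yr case
detect_1pct = years_to_detect(1.0, measurement_uncertainty)  # warranted-rate case
```

A 2%/year rate becomes convincing in year three, consistent with the text, while a 1%/year rate would not clear a 5% uncertainty band within the first five years.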

Table 4  Potential distinguishing factors in first and fourth quartiles of filtered degradation (%/year; "--" indicates a cell not recoverable from the source)

                                              ------ Modules ------      ------ Systems ------
Quartile  Category               Value        Mean  Median Count Papers  Mean  Median Count Papers
First     Account for bins?      NA           0.12  0.10   10    2       0.07  0.06   7     3
          Account for bins?      TRUE         0.09  0.11   67    4      -0.02 -0.02   20    1
          Account for LID?       NA           0.12  0.10   10    2       0.07  0.06   7     3
          Account for LID?       TRUE         0.09  0.11   67    4      -0.02 -0.02   20    1
          No. of meas.           2            0.10  0.10   67    2       0.00  0.00   1     1
          No. of meas.           4            0.19  0.19   1     1       --    --     0     0
          No. of meas.           Continuous  -0.09  0.16   5     2      -0.02 -0.02   20    1
          No. of meas.           NA           0.16  0.17   4     1       0.08  0.08   6     2
          Qualification testing  CEC501       0.11  0.11   61    1       --    --     0     0
          Qualification testing  IEC 61215    --    --     0     0       --    --     0     0
          Qualification testing  NA           0.06  0.10   16    --      0.01 -0.02   26    3
          Qualification testing  None         --    --     0     0       0.00  0.00   1     1
Fourth    Account for bins?      NA           1.34  1.17   13    2       1.16  1.12   4     3
          Account for bins?      TRUE         2.25  2.26   62    7       1.76  1.50   24    4
          Account for LID?       NA           1.34  1.17   13    2       1.16  1.12   4     3
          Account for LID?       TRUE         2.25  2.26   62    7       1.76  1.50   24    4
          No. of meas.           2            2.32  2.27   46    2       --    --     0     0
          No. of meas.           4            1.32  1.32   1     1       --    --     0     0
          No. of meas.           Continuous   2.09  2.09   15    4       1.76  1.50   24    4
          No. of meas.           NA           1.34  1.17   13    2       1.16  1.12   4     3
          Qualification testing  CEC501       2.38  2.29   44    1       --    --     0     0
          Qualification testing  IEC 61215    1.65  1.68   4     1       --    --     0     0
          Qualification testing  NA           1.74  1.32   25    6       1.68  1.44   28    7
          Qualification testing  None         1.07  1.07   2     1       --    --     0     0



6 CONCLUSION

Based on our statistical analysis of the degradation reports in the literature available to us at this time, our opinion is that the best estimate for the expected annual system degradation rate remains 0.75% per year. This is based on a module-level degradation rate of 0.57% per year that results from 271 data points after applying our filtering methodology. The same literature set and filtering methodology resulted in a system-level degradation rate of 0.77% per year, from which we infer a BOS degradation rate of 0.20% per year. It should be noted that the system-level degradation rate is based on a much smaller data set (36 points), so the uncertainty in the BOS degradation estimate is greater than that of the module-level degradation rate. We have treated degradation rates greater than 2% per year in this study as "likely warranty" cases; the 0.23% per year of incremental potential loss not included in these results should be appropriately addressed in the project documentation (warranties, EPC agreements, O&M agreements, and/or performance agreements) to ensure coverage.

While many claims have been made that system-level degradation assumptions lower than 0.75% should be used in developing and financing PV systems, we have not been able to verify or validate these claims based on the data available to us. This report presents a strong case for a module-level degradation rate consistent with industry expectations and warranties. DNV GL is aware that there have been advances in module manufacturing quality and control, as well as in system design and integration, that should be taken into account when considering what degradation rate should be used when financing a specific project. The data reviewed in this report include a wide range of manufacturers, many of whom are now bankrupt or no longer in business.
To distinguish likely performance within this recommended range, DNV GL would expect higher-quality modules from manufacturers that implement better quality control measures, on average, to have a narrower distribution (P10-P90) near the lower end of the range. However, the typical short-term data provided by module manufacturers to justify a lower-than-average degradation rate assumption is best applied to justifying an expectation of low warranty claim rates. There are reasonable arguments that improvements have been made in specific cases, but limited evidence exists from fielded equipment to support that conclusion; we therefore strongly encourage module manufacturers and others to share long-term data that could further the industry's understanding of this critical topic.

In addition to the expected value of system degradation, we also conclude that a downside stress-case P90 value of -1.45% per year is observed in the most applicable literature. As mentioned previously, there are many known failure or degradation mechanisms that could lead to more severe worst-case degradation rates. The risk of these failure mechanisms can be reduced by ensuring that modules targeted for specific projects are subjected to design and accelerated reliability testing. From traditional reliability theory (e.g., the bathtub curve), we do not expect that short-term tests will necessarily trigger long-term failure mechanisms at either the micro or macro scale. Where they do trigger a mechanism, one or more additional long-term degradation mechanisms could be introduced or worsened by the design changes that fixed the targeted mechanism. Therefore, while we certainly recommend that product qualification tests be repeated periodically on statistically-significant sample sizes, it is still challenging to quantify the effect this activity will have on reducing assumed system degradation rates.
Ideally there would be some set of short-term tests that would trigger all of the major long-term failure mechanisms that actually occur in fielded PV modules. The "Thresher Test" [38] and PVQA+ [39] programs are promising efforts in this direction, but exposure trials which start with designs that pass these test protocols



and measure actual degradation are needed to confirm the hypothesized relationship between short-term results and long-term behavior.



7 FUTURE WORK

A common weakness of degradation studies is the uncertainty analysis. A short duration of exposure combined with the use of low-quality or poorly-maintained irradiance sensors frequently leads to excessive uncertainty. Longer durations can incur a risk of obsolescence ("we don't make them that way anymore"), but they can also demonstrate whether rates are constant or changing, and what the magnitude of the worst case should be for a particular family of designs. We particularly prefer durations of 5 to 15 years, with samples at multiple times throughout the test to reduce the impact of individual measurement uncertainty, combined with a manufacturer's review of design similarities and differences when applying results to more recent designs.

Another factor that, in theory, should affect degradation rates is which design or design features were used. Compiling degradation reports that identify the specific materials and design features present in the tested equipment is necessary in order to justify applying measured degradation rates to new equipment. There are many reports asserting that high temperature and humidity or high mechanical loading environments lead to accelerated degradation. We have not yet found a significant correlation between degradation rate and temperature or humidity, even though specific failure mechanisms are known to depend on these factors. We were not able to correlate degradation rate with measured exposure in this analysis. An analysis incorporating measured temperatures could possibly show a better correlation with degradation.

Measurement of irradiance is a crucial point that future studies must address. Reliance on reference cell sensors in the field has the advantage of removing spectral mismatch. Silicon detector sensors have some similar properties, though at lower overall accuracy.
It is crucial not to allow sensor calibration drift to "hide" the PV array degradation, so a regular recalibration procedure should be documented. In addition, soiling due to dust can change rapidly in some locations, creating additional variability in field data. To combat these uncertainties in the field, a common approach is to remove designated sample modules from the array for flash-testing at a laboratory several times over the life of the array. The Skoczek et al. study [28] used a methodical approach to correlating initially-available information with eventual outcomes. We would look favorably on attempts to apply that methodology to current equipment while applying screening methods such as IEC 61215 or PVQA+.
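The drift effect can be illustrated with a toy model: if the irradiance sensor reads progressively low, the performance ratio (energy divided by measured irradiance) looks better than it is, so the apparent degradation rate is roughly the true rate minus the sensor drift rate. The rates below are illustrative assumptions, not measured values.

```python
# Toy model: sensor drift biases the apparent degradation rate downward.
true_array_rate = 0.7    # %/year actual array degradation (assumed)
sensor_drift_rate = 0.5  # %/year downward drift in sensor reading (assumed)

years = range(1, 6)
perf_true = [(1 - true_array_rate / 100) ** y for y in years]
irr_measured = [(1 - sensor_drift_rate / 100) ** y for y in years]  # reads low

# Performance ratio computed against the drifting sensor looks too good.
perf_apparent = [p / m for p, m in zip(perf_true, irr_measured)]
apparent_rate = (1 - perf_apparent[0]) * 100  # approx. 0.2 %/year in year 1
```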



8 REFERENCES

[1] C. R. Osterwald, J. Adelstein, J. A. del Cueto, B. Kroposki, D. Trudell, and T. Moriarty, "Comparison of Degradation Rates of Individual Modules Held at Maximum Power," presented at the 2006 IEEE 4th World Conference on Photovoltaic Energy Conversion, Waikoloa, HI, 2006, vol. 2, pp. 2085–2088 [Online]. Available: http://www.photonenergysys.com/osterwald%20wcpec.pdf
[2] D. C. Jordan and S. R. Kurtz, "Photovoltaic Degradation Rates—an Analytical Review," Prog. Photovolt. Res. Appl., vol. 21, no. 1, pp. 12–29, 2013.
[3] A. De Lillo, S. Li Causi, and S. Castello, "Long-term performance of Casaccia plant," presented at the 3rd World Conference on Photovoltaic Energy Conversion, Osaka, Japan, 2003, vol. 3, pp. 2231–2234.
[4] J. Schmidt and R. Hezel, "Light-Induced Degradation in Cz Silicon Solar Cells: Fundamental Understanding and Strategies for its Avoidance," presented at the 12th Workshop on Crystalline Silicon Solar Cell Materials and Processes, Breckenridge, CO, 2002 [Online]. Available: http://www.ilb-heliosreh.com/bibliothek/workshop.pdf
[5] B. Sopori, P. Basnyat, S. Devayajanam, S. Shet, V. Mehta, J. Binns, and J. Appel, "Understanding light-induced degradation of c-Si solar cells," in Photovoltaic Specialists Conference (PVSC), 2012 38th IEEE, Austin, TX, 2012, pp. 1115–1120 [Online]. Available: http://www.nrel.gov/docs/fy12osti/54200.pdf
[6] M. A. Quintana, D. L. King, T. J. McMahon, and C. R. Osterwald, "Commonly observed degradation in field-aged photovoltaic modules," presented at the Twenty-Ninth IEEE Photovoltaic Specialists Conference, New Orleans, Louisiana, 2002, pp. 1436–1439.
[7] S. W. Glunz, R. Preu, and D. Biro, "Crystalline Silicon Solar Cells - State-of-the-Art and Future Developments," in Comprehensive Renewable Energy, vol. 1, Elsevier, Inc., 2012 [Online]. Available: http://www.researchgate.net/publication/230603409_On_the_degradation_of_Czsilicon_solar_cells/file/9fcfd50279205ae8d2.pdf
[8] R. Swanson, M. J. Cudzinovic, D. M. DeCeuster, V. Desai, J. Jürgens, N. Kaminar, W. P. Mulligan, L. Rodrigues-Barbarosa, D. H. Rose, D. D. Smith, A. Terao, and K. E. Wilson, "The surface polarization effect in high-efficiency silicon solar cell," presented at the 15th International Photovoltaic Science & Engineering Conference, Shanghai, China, 2005.
[9] M. A. Mikofski, D. F. J. Kavulak, D. Okawa, Yu-Chen Shen, A. Terao, M. Anderson, S. Caldwell, D. Kim, N. Boitnott, J. Castro, L. A. L. Smith, R. Lacerda, D. Benjamin, and E. F. Hasselbrink, "PVLife: An integrated model for predicting PV performance degradation over 25+ years," presented at the 38th IEEE Photovoltaic Specialists Conference (PVSC), Austin, TX, 2012, pp. 1744–1749.
[10] D. C. Jordan and S. R. Kurtz, "Analytical improvements in PV degradation rate determination," presented at the 35th IEEE Photovoltaic Specialists Conference, Honolulu, HI, 2010, pp. 2688–2693 [Online]. Available: http://www.nrel.gov/docs/fy11osti/47596.pdf
[11] I. J. Muirhead and B. K. Hawkins, "Prediction of Seasonal and Long Term Photovoltaic Module Performance from Field Test Data," presented at the 9th Photovoltaic Science and Engineering International Conference, Miyazaki, Japan, 1996, pp. 457–458 [Online]. Available: http://www.telepower.com.au/PVsec96b.PDF
[12] W. Marion and J. Adelstein, "Long-term Performance of the SERF PV Systems," presented at the NCPV Solar Program Review Meeting, Denver, CO, 2003, pp. 199–201 [Online]. Available: http://www.nrel.gov/docs/fy03osti/33531.pdf
[13] G. Makrides, B. Zinsser, G. E. Georghiou, M. B. Schubert, and J. H. Werner, "Degradation of different photovoltaic technologies under field conditions," presented at the 35th IEEE Photovoltaic Specialists Conference, Honolulu, HI, 2010, pp. 2332–2337 [Online]. Available: http://www.researchgate.net/profile/Markus_Schubert4/publication/224186828_Degradation_of_different_photovoltaic_technologies_under_field_conditions/file/6a85e5342c46933e16.pdf?origin=publication_detail
[14] A. Kolodziej, "Staebler-Wronski effect in amorphous silicon and its alloys," Opto-Electron. Rev., vol. 12, no. 1, pp. 21–32, 2004.
[15] D. L. Staebler and C. R. Wronski, "Optically induced conductivity changes in discharge-produced hydrogenated amorphous silicon," J. Appl. Phys., vol. 51, no. 6, pp. 3262–3268, 1980.
[16] T. Roessler and K. J. Sauer, "Modeling of PV Module Power Degradation to Evaluate Performance Warranty Risks," presented at the 27th European Photovoltaic Solar Energy Conference and Exhibition, pp. 3142–3147.
[17] E. F. Hasselbrink, M. Anderson, Z. Defreitas, M. Mikofski, Y.-C. Shen, S. Caldwell, A. Terao, D. Kavulak, Z. Campeau, and D. DeGraff, "Validation of the PVLife Model Using 3 Million Module-Years of



Live Site Data,” presented at the 39th IEEE Photovoltaic Specialists Conference, Tampa, FL, 2013, pp. 7–12.

[18] N. H. Reich, A. Goebel, D. Dirnberger, and K. Kiefer, “System performance analysis and estimation of degradation rates based on 500 years of monitoring data,” presented at the 38th IEEE Photovoltaic Specialists Conference, Austin, TX, 2012, pp. 1551–1555 [Online]. Available: http://publica.fraunhofer.de/eprints/urn:nbn:de:0011-n-2254614.pdf

[19] O. Hossjer, P. J. Rousseeuw, and C. Croux, “Asymptotics of the Repeated Median Slope Estimator,” Ann. Stat., vol. 22, no. 3, pp. 1478–1501, Sep. 1994.

[20] International Electrotechnical Commission, “Crystalline silicon terrestrial photovoltaic (PV) modules – Design qualification and type approval,” IEC Standard 61215:2005, 2005.

[21] AIRS Science Team and J. Texeira, “Aqua AIRS Level 3 Standard Monthly Product using AIRS and AMSU without HSB V6.” NASA Goddard Earth Sciences Data and Information Services Center, 2013 [Online]. Available: http://dx.doi.org/10.5067/AQUA/AIRS/DATA319. [Accessed: 15-Oct-2014]

[22] R Development Core Team, R: A Language and Environment for Statistical Computing. Vienna, Austria, 2008 [Online]. Available: http://www.R-project.org

[23] H. Wickham, ggplot2: elegant graphics for data analysis. Springer New York, 2009 [Online]. Available: http://had.co.nz/ggplot2/book

[24] D. C. Jordan, J. H. Wohlgemuth, and S. R. Kurtz, “Technology and Climate Trends in PV Module Degradation,” presented at the 27th European Photovoltaic Solar Energy Conference and Exhibition, Frankfurt, Germany, 2012, pp. 3118–3124.

[25] H.-J. Chen, C.-M. Chiang, C.-C. Chan, J.-P. Lin, C.-M. Shu, and G.-W. Chang, “A BIPV Case Study on the PV Energy Research with Landscape Design in Taiwan – 3 Years Operation,” presented at the 25th European Photovoltaic Solar Energy Conference and Exhibition, Valencia, Spain, 2010, pp. 4890–4892 [Online]. Available: https://www.eupvsec-proceedings.com/proceedings/download/paper6474.pdf

[26] K. Kiefer, N. H. Reich, D. Dirnberger, and C. Reise, “Quality assurance of large scale PV power plants,” presented at the 37th IEEE Photovoltaic Specialists Conference, Seattle, WA, 2011, pp. 1987–1992.

[27] K. O. Davis and H. Moaveni, “Effects of module performance and long-term degradation on economics and energy payback: Case study of two different photovoltaic technologies,” in Proceedings of Society of Photo-optical Instrumentation Engineers (SPIE), San Diego, CA, 2009, vol. 7412–35 [Online]. Available: http://www.creol.ucf.edu/Research/Publications/6079.pdf

[28] A. Skoczek, T. Sample, and E. D. Dunlop, “The results of performance measurements of field-aged crystalline silicon photovoltaic modules,” Prog. Photovolt. Res. Appl., vol. 17, no. 4, pp. 227–240, Jun. 2009.

[29] K. O. Davis, S. R. Kurtz, D. C. Jordan, J. H. Wohlgemuth, and N. Sorloaica-Hickman, “Multi-pronged analysis of degradation rates of photovoltaic modules and arrays deployed in Florida,” Prog. Photovolt. Res. Appl., vol. 21, no. 4, pp. 702–712, 2013.

[30] F. E. Vignola, J. Krumsick, F. Mavromatakis, and R. Walwyn, “Measuring degradation of photovoltaic module performance in the field,” presented at the 38th American Solar Energy Society Annual Solar Conference, Buffalo, NY, 2009, vol. 5, pp. 2611–2618 [Online]. Available: http://solardat.uoregon.edu/download/Papers/MeasuringDegradationofPhotovoltaicModulePerformanceintheField.pdf

[31] D. C. Jordan and S. Kurtz, “Photovoltaic Degradation Risk,” in Proceedings of the 2012 World Renewable Energy Forum, Denver, CO, 2012, p. 7 [Online]. Available: http://www.nrel.gov/docs/fy12osti/53712.pdf

[32] S. Pulver, D. Cormode, A. Cronin, D. C. Jordan, S. R. Kurtz, and R. Smith, “Measuring degradation rates without irradiance data,” in Proceedings of the 2010 35th IEEE Photovoltaic Specialists Conference (PVSC), Honolulu, HI, 2010, pp. 1271–1276 [Online]. Available: http://www.nrel.gov/docs/fy11osti/47597.pdf

[33] T. Ishii, T. Takashima, and K. Otani, “Long-term performance degradation of various kinds of photovoltaic modules under moderate climatic conditions,” Prog. Photovolt. Res. Appl., vol. 19, no. 2, pp. 170–179, 2011.

[34] A. Adiyabat, K. Otani, N. Enebish, and N. Enkhmaa, “Long term performance analysis of PV module in the Gobi Desert of Mongolia,” presented at the 35th IEEE Photovoltaic Specialists Conference, Honolulu, HI, 2010, pp. 2656–2659.

[35] B. Marion, J. Adelstein, H. T. Hayden, R. (Bob) Hammond, T. Fletcher, B. Canada, D. Narang, D. Shugar, H. J. Wenger, A. Kimber, L. Mitchell, G. Rich, and T. U. Townsend, “Performance Parameters for Grid-Connected PV Systems,” in Proceedings of the 31st IEEE Photovoltaic Specialists Conference and Exhibition, Lake Buena Vista, FL, 2005, pp. 1601–1606 [Online]. Available: http://www.nrel.gov/docs/fy05osti/37358.pdf

[36] W. Vaassen, “Qualitätsmerkmale photovoltaischer Module” (“Quality characteristics of photovoltaic modules”), presented at the 6th Symposium Photovoltaïque National, Geneva, Switzerland, 2005.

[37] P. Sánchez-Friera, M. Piliougine, J. Peláez, J. Carretero, and M. Sidrach de Cardona, “Analysis of degradation mechanisms of crystalline silicon PV modules after 12 years of operation in Southern Europe,” Prog. Photovolt. Res. Appl., vol. 19, no. 6, pp. 658–666, Sep. 2011.

[38] H. Kuhn and A. Funcell, “The Thresher Test: Crystalline Silicon Terrestrial Photovoltaic (PV) Modules Long Term Reliability and Degradation,” presented at the International PV Quality Forum, San Francisco, CA, 2011 [Online]. Available: http://www.countrysolarnt.com.au/assets/files/Thresher%20Test.pdf

[39] S. R. Kurtz, J. H. Wohlgemuth, M. D. Kempe, N. Bosco, P. Hacke, D. C. Jordan, D. C. Miller, T. J. Silverman, N. Phillips, T. Earnest, and R. Romero, “Photovoltaic Module Qualification Plus Testing,” National Renewable Energy Laboratory, Golden, CO, NREL/TP-5200-60950, Dec. 2013 [Online]. Available: http://www.nrel.gov/docs/fy14osti/60950.pdf


Title: DNV GL White Paper on Photovoltaic Module Degradation
Presented to: N/A
Contact person: N/A
Date of issue: 3 February 2015
Project No.: 84440065
Document No.: RANA-WP-03
Version: A-Final

DNV GL - Energy
Renewables Advisory
2420 Camino Ramon, Suite 300
San Ramon, CA 94583, USA
Web: www.dnvgl.com/renewables
Tel: 1-925-867-3330

Task and objective: This paper discusses the current state of knowledge regarding photovoltaic degradation at the module and system level, and outlines the default P50 and downside assumptions used by DNV GL for project lifetime performance.

Prepared by: Jeff Newmiller, Principal Engineer
Verified by: Elizabeth Mayo, Manager, Solar Technology
Approved by: Ray Hudson, Service Line Leader, Solar

Keywords: Renewables Advisory, Solar Energy, Photovoltaic Module Degradation, Photovoltaic System Degradation

Distribution: ☐ Strictly Confidential ☐ Private and Confidential ☐ Commercial in Confidence ☐ DNV GL only ☐ Customer’s Discretion ☒ Published

Reference to part of this report which may lead to misinterpretation is not permissible.

For more information, please contact: Jeff Newmiller, jeff.newmiller@dnvgl.com, 1-925-327-3005

IMPORTANT NOTICE AND DISCLAIMER
Neither DNV GL nor any group company (the “Group”) assumes any responsibility, whether in contract, tort (including without limitation negligence), or otherwise, and no company in the Group, including DNV GL, shall be liable for any loss or damage whatsoever. This document is issued on a no-reliance basis, and nothing in this document guarantees any particular energy resource or system output.


ABOUT DNV GL
Driven by our purpose of safeguarding life, property and the environment, DNV GL enables organizations to advance the safety and sustainability of their business. We provide classification and technical assurance along with software and independent expert advisory services to the maritime, oil and gas, and energy industries. We also provide certification services to customers across a wide range of industries. Operating in more than 100 countries, our 16,000 professionals are dedicated to helping our customers make the world safer, smarter and greener.
