Lies, Damn Lies, and... (Another Look at LEED Energy Efficiency)

Maverick NYC mechanical systems designer Henry Gifford has long been a critic of LEED, arguing that it encourages the wrong things, and doesn't go far enough to ensure that certified buildings really save energy or provide good air quality. I have great respect for Gifford and the work he does to design and commission low-energy buildings with great ventilation on very tight budgets. Unlike too many practicing engineers, he knows exactly how much energy his buildings are using. Gifford is also a thorn in the side of many policymakers, because he has little patience for initiatives and programs that don't live up to his ideals.

Recently he's been distributing a paper attacking a study of actual energy use in LEED buildings. The study in Gifford's sights, Energy Performance of LEED for New Construction Buildings, comes from the New Buildings Institute (NBI) and the U.S. Green Building Council (USGBC). It analyzed actual energy usage in buildings that were certified based on predicted energy use.

The study compared actual to predicted energy use, and compared both to national average energy use in existing buildings as reported in the U.S. Department of Energy's Commercial Buildings Energy Consumption Survey (CBECS). USGBC and NBI reported on many interesting findings from that study, some of which were summarized in the December 2007 issue of EBN.


[Graphic from the NBI study]

Gifford's paper is especially critical of the primary finding that LEED buildings were shown to be, on average, 25% to 30% more efficient than the national average. He provides an alternate analysis of the same data and concludes that the LEED buildings are, on average, 29% less efficient than average U.S. buildings.

The differences between Gifford's analysis and those of USGBC and NBI are based on two areas of disagreement:

1) First, the LEED buildings are compared to the CBECS data set of all existing buildings, regardless of year of construction. Gifford argues that they should have been compared only to new buildings. The 2006 CBECS summary shows that buildings built between 2000 and 2003 use, on average, about 10% less energy than the complete data set for all existing buildings.

NBI's Mark Frankel disagrees, noting that some of the LEED buildings are actually renovations of older buildings, so it may not be fair to compare them to new buildings. Further, he notes that CBECS generally groups its buildings by decade, and those three years don't represent enough of a trend to rely on. Historically, he points out, when CBECS published data for just a few years it looked better, only to worsen when the full decade's data were compiled. And the trend for full decades or more since 1920 shows that new buildings use just as much energy as old ones.

2) Gifford's second adjustment is to use the mean of the LEED data set instead of the median used by NBI. (The LEED mean was not published, but NBI provided it to Gifford upon his request.) Depending on whom you choose to believe, NBI used the median either because it made the LEED data look better (Gifford's contention) or because it was statistically the more meaningful approach (more on this below).

Interestingly, the distinction between mean and median isn't all that significant if you omit the "high energy use" building types (labs and data centers, primarily) that constitute 13% of the LEED data set. Omitting these makes some sense, because the CBECS data set contains a negligible number of such high-energy buildings. But if you include those buildings, the difference between mean and median is huge:

    All buildings in the LEED data set, in kBtu/ft2/year:
      Median: 69; Mean: 105
    Without the high energy building types:
      Median: 62; Mean: 68
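
To see why the two measures diverge so sharply, here's a quick illustration with made-up EUI numbers (not the study's data, just hypothetical values in the same kBtu/ft2/year units): a few energy-intensive outliers drag the mean well above the median, and dropping them brings the two back together, much like the figures above.

    # Hypothetical EUIs (kBtu/ft2/year) for illustration only -- not the study's data.
    from statistics import mean, median

    euis = [45, 52, 58, 60, 63, 65, 70, 72, 75, 80, 240, 310, 400]  # last three: lab-like outliers

    print(f"All buildings -- median: {median(euis):.0f}, mean: {mean(euis):.0f}")
    # median 70, mean 122: the outliers barely move the median but drag the mean up

    typical = euis[:-3]  # drop the three high-energy outliers
    print(f"Typical only  -- median: {median(typical):.0f}, mean: {mean(typical):.0f}")
    # median 64, mean 64: with the skew removed, the two measures agree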

The CBECS numbers are means, so, Gifford argues, the LEED data should be analyzed based on means. (Actually, the CBECS numbers are averaged on a per square foot basis, meaning that larger buildings count for more. The LEED means are simple averages.)
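
That parenthetical is worth a quick illustration. With hypothetical buildings (again, not the study's data), an area-weighted average, where total energy is divided by total floor area as in CBECS, can land well below a simple average of each building's EUI when one large efficient building sits alongside several small inefficient ones:

    # Hypothetical buildings for illustration: (floor area in ft2, EUI in kBtu/ft2/year)
    buildings = [
        (500_000, 60),   # one large, relatively efficient building
        (20_000, 120),   # three small, less efficient buildings
        (20_000, 110),
        (20_000, 130),
    ]

    # CBECS-style figure: total energy divided by total floor area,
    # so the big building dominates the result.
    area_weighted = sum(a * e for a, e in buildings) / sum(a for a, _ in buildings)

    # Simple average of building EUIs, as in the LEED means quoted above:
    # every building counts equally, regardless of size.
    simple_mean = sum(e for _, e in buildings) / len(buildings)

    print(f"area-weighted mean: {area_weighted:.0f} kBtu/ft2/year")  # about 66
    print(f"simple mean:        {simple_mean:.0f} kBtu/ft2/year")    # 105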

By including all buildings in the LEED data set, using the mean instead of the median, and comparing it to the CBECS 2000-2003 mean, Gifford shows that the LEED buildings' energy use exceeds the CBECS baseline by 29% (105 divided by 81.6). On the other hand, the median is often "a better indication of central tendency" than the mean when the data is skewed (which the LEED data is). That's the same reason the authors give in their report for making that choice.

Also, the NBI study was peer reviewed by researchers from EPA, Pacific Northwest National Laboratory, and UC Berkeley, and none of them objected to this comparison. USGBC claims that other researchers who have since done further analysis of the data corroborate their approach as well. The NBI study used the median value rather than the mean, and compared it to the CBECS average for all existing buildings, to show that the LEED buildings use 24% less energy (69 divided by 91). I think they could just as easily have used the mean excluding the high-energy buildings (68) and gotten nearly the same result.
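
As a back-of-the-envelope check, here is the arithmetic behind the competing claims, using only the figures quoted above (the 81.6 is the CBECS 2000-2003 average Gifford uses as his baseline):

    # Figures quoted in the text, in kBtu/ft2/year
    leed_mean_all     = 105    # mean of all LEED buildings
    leed_median_all   = 69     # median of all LEED buildings
    leed_mean_typical = 68     # mean without the high-energy building types
    cbecs_all         = 91     # CBECS average, all existing buildings
    cbecs_2000_2003   = 81.6   # CBECS average, buildings built 2000-2003

    # Gifford's framing: LEED mean vs. recent-construction CBECS baseline
    print(f"{leed_mean_all / cbecs_2000_2003 - 1:+.0%}")   # +29% (LEED worse)

    # NBI's framing: LEED median vs. all-buildings CBECS baseline
    print(f"{leed_median_all / cbecs_all - 1:+.0%}")       # -24% (LEED better)

    # The alternative suggested above: mean without high-energy types
    print(f"{leed_mean_typical / cbecs_all - 1:+.0%}")     # -25%, nearly the same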

They did go much further, comparing building types in the LEED set with comparable buildings in the CBECS set, and found that the LEED buildings outperformed the CBECS buildings in every category except labs. (There is no category for labs in CBECS, but by any measure the LEED labs aren't performing very well.) In the case of offices, the most common building type in both data sets, the median LEED building uses 33% less energy than the CBECS average. Even without the labs and data centers the LEED buildings may be unfairly handicapped, because CBECS includes a lot of warehouses and vacant buildings, which use relatively little energy. But NBI chose not to adjust for that difference.

Gifford raises some other questions about the study, most notably the suggestion that the buildings for which actual data was provided likely performed better than those whose owners couldn't, or chose not to, provide data. Given that 552 projects were contacted but data from only 121 was included, this skepticism appears justified.

Frankel responds that at least some of those who supplied data had no idea how good or bad it was. (In one extreme case he contacted the owner right away to alert them to an energy hemorrhage.) He also notes that half of the 552 wanted to provide data, but some were rejected for various technical reasons, such as not having a full 12 months of data, or being located outside the U.S. Finally, they used statistical methods to test for this bias, but that's going over my head again.

In the end, I'm not entirely convinced on this one. Self-selection may have skewed the LEED results, at least a little. NBI's own responses to Gifford's challenges are posted here. Gifford doesn't raise the problem of first-year weirdness, although he does mention later in the paper that actual data should only be collected from year two of occupancy and beyond.

First-year data is often abnormally high, because systems haven't been fine-tuned. But it can also be low, if the building wasn't fully occupied for the entire year. I don't know how many of the 121 buildings in the study provided year-one data.

After attacking the NBI study on some good and some not-so-good grounds, Gifford gets back to addressing the core problem of predicted versus actual energy performance. On this front, he suggests that LEED plaques should be removable, and that someone should actually remove them if a building fails to live up to its promised performance.

That idea came up at early LEED meetings I attended, but was eventually abandoned as impractical. Gifford has an intriguing fall-back suggestion: rather than awarding points based on predicted energy use, he proposes mechanical system peak capacity as a better indicator of performance. He doesn't propose how the baseline for that metric should be determined, however.

It's too bad that Gifford concentrated so much on attacking the study, because it's a distraction from the more important points he makes about how LEED is being misused. The good news is that LEED insiders share many of those same concerns, and are working on them. Everyone agrees that it's the actual performance, not the prediction, that really matters, and that more has to be done to improve that actual performance.

Published September 2, 2008

Comments

September 2, 2008 - 8:57 pm

LEED is based on a compelling idea: that anyone can take an 8-hour class, pass a test to become an accredited professional, and use a checklist or points system to profoundly improve the way buildings are designed, built, and operated. Sorry, life isn't that simple, and neither are buildings. The point is not that LEED isn't being used properly, but that LEED creates the image of energy efficiency, not actual energy savings.

Any study that omits the worst-performing 16% or so of buildings from one data set, and then compares it to another data set that hasn't had any buildings removed, is like a tobacco company study that removes the people who died of lung cancer before doing the analysis. The office buildings studied look 33% better only after doing this, and only by comparing the median to the mean.

And nobody has anything to say about the 30% part of the 25-30% average savings claim, which is not supported by anything in the study - they just made it up.

The many people who sincerely care about the environment deserve an apology, and also deserve the truth, and a better approach, both of which can be found in my article.

Henry Gifford