Only time will tell if the industry is headed for a better code compliance method.
By now, you might have heard about the Home Energy Rating Variability Study, which was prepared for the Department of Energy (DOE) and conducted by the six regional energy efficiency organizations (REEOs). It was first summarized at the February 2018 RESNET Board meeting, but the complete report was not released until January 2019. The following is a summary and commentary on the study, as well as its potential ramifications for the building industry.
Let’s start with the premise of the study. Quoting directly from the report: “The U.S. Department of Energy (DOE) Building Energy Codes Program…commissioned a study in an attempt to better understand how home energy ratings might function as a code compliance mechanism, and to address the question of variability that could be expected if enlisting the HERS Index for the purpose of demonstrating code compliance via the ERI path.” Notably, DOE stressed “that the study sought data on the consistency of multiple ratings on a single home and not whether the resulting ratings complied with the code (via the ERI targets specified in the IECC).”
Given both its name and its premise, the study carried an expectation that variability would be found. That in itself shouldn’t surprise anyone, since the object of the study was the analysis of homes, an exercise that involves subjective judgment. It could be posited that some variability was inevitable. The key was to determine the level of consistency among HERS raters.
Next, the methodology: between four and six HERS raters were dispatched to each chosen home. The RESNET-certified raters were not made aware they were evaluating the same home, nor did their onsite visits overlap with one another. Each rater was given documentation in advance. From that, they were able to conduct a plan review, which they followed up with a field inspection/onsite verification based on RESNET protocol. The output of each analysis was a preliminary HERS Index and a Building Summary Report. Two homes per region were selected, though in the end, only 11 homes were rated. This methodology produced 56 home energy ratings in total.
The broad range of ratings for a single home produced through this study was wider than many expected. The study noted: “A majority of homes (7 of the 11) experienced variability of 10 or more points. Average variability across all homes studied was approximately 13 points.” The table below breaks down the ranges for each location:
A Deeper Dive
Digging deeper into the study’s findings, the inconsistencies were widespread. For example, five raters performed an analysis of the Seattle home. Three of the five raters counted five bedrooms, while the other two counted four. Keep in mind that each rater was given house plans, window schedules, insulation values and other default or non-observable information prior to their onsite assessment. Even more remarkably, the calculated shell area ranged from 6,096 to 7,107 square feet. No two raters calculated the same shell area for this home—and this home’s range of indices was one of the smaller ranges in the entire study!
For the three-bedroom house in Denver, three of the five raters counted five bedrooms, while one rater counted four bedrooms and the other rater counted the correct number. Only three of the five raters conducted a total duct leakage test, and only two of the five raters tested for duct leakage to the outside. Astonishingly, the total area of wall square footage reported by the raters ranged from 2,187 s.f. to 4,250 s.f. Window area ranged from 242 s.f. to 451 s.f. The number of returns ranged from five to nine. When it came to ceiling fan energy usage, three of the five raters didn’t mark anything down, and one rater entered zero for the refrigerator’s energy usage.
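The study’s headline numbers are simple range arithmetic: each home’s variability is the spread between its highest and lowest HERS Index, and the roughly 13-point figure is the average of those spreads. A minimal sketch of that calculation follows; the per-rater index scores below are hypothetical illustrations, not figures from the report:

```python
# Hypothetical HERS Index scores from multiple raters evaluating the
# same home. These values are illustrative only; the report does not
# publish per-rater indices in this form.
ratings = {
    "Home A": [62, 65, 70, 74, 68],
    "Home B": [55, 57, 71, 60],
    "Home C": [80, 83, 95, 88, 85, 91],
}

def variability(scores):
    """Spread between the highest and lowest index for one home."""
    return max(scores) - min(scores)

# Per-home spread, then the average spread across all homes.
spreads = {home: variability(s) for home, s in ratings.items()}
average = sum(spreads.values()) / len(spreads)

for home, spread in spreads.items():
    print(f"{home}: {spread}-point spread")
print(f"Average variability: {average:.1f} points")
```

A spread of 10 or more points, as seven of the 11 studied homes showed, can be the difference between a home passing or failing an ERI compliance target.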
The study noted that a wide range of software was used: the average home was rated using three different versions of software, and in one instance a single home was rated with five different versions of REM/Rate software among six raters. While it’s uncertain whether software contributed significantly to the range of variability, the inconsistencies noted above are independent of software. Those discrepancies can be attributed either to poor training or to a failure to apply that training in the field.
This study was incredibly important, because multiple jurisdictions (states and cities) have incorporated alternative code compliance paths built around energy ratings. As the study stated, “consistency and replicability of the rating process is crucial to the ERI path.” It went on to say “while the HERS Index was not originally specified within the ERI path of the 2015 IECC, the connection was made more explicit when ANSI/RESNET/ICC Standard 301 was incorporated by reference in the 2018 IECC.”
Many Questions, Few Answers
The study states that “it is based on a relatively small sample of homes, and should not be considered statistically representative.” Yet the very next sentence notes that “it…raises many important questions for further inquiry.” The authors call out five questions for further investigation, but the most significant questions can only be answered in retrospect. The first is: What reaction did this study elicit?
To its credit, RESNET reacted fairly quickly. On April 19, 2018 (approximately two months after the study’s results were conveyed), RESNET adopted the HERS Software Consistency Collaborative Modeling Process [https://bit.ly/2VPrJEW]. One facet of this new effort was to recruit a “technical consultant with extensive knowledge of building energy software modeling” to serve as the RESNET Energy Modeling Director.
Unfortunately, that recruitment process took almost six months to produce a new staff member [https://bit.ly/2NVAFWB]. Over the course of 2018, RESNET made the following revisions to the quality assurance aspects of the National Home Energy Rating Standards after the variability study was released:
Added a compliance path to achieve Quality Assurance Designee (QAD) status, whereby the minimum number of reviews increased from 25 to 30, though the type of reviews changed to allow file reviews (approved Aug. 9, 2018).
Modified software accreditation to generate more consistency, and allowed for appeals to the RESNET Standing Software Consistency Committee (approved Oct. 12, 2018).
Revised its original policy on the financial separation of Quality Assurance Designees (approved Nov. 15, 2018).
Provided default ventilation fans for improved consistency while citing ASHRAE 62.2-2013 (approved Nov. 29, 2018).
Another enormous question is: Will there be any damage done to the energy ratings industry, or to the concept of energy ratings in general? There are many people and entities, not the least of whom are the 1,900 HERS raters across the country, who hope not. However, there are some changes taking place. Some jurisdictions are moving away from citing HERS in their respective energy codes, and are instead adopting the ERI path in the IECC or ANSI/RESNET/ICC 301. While the difference is subtle, it is significant that RESNET involvement is not required to obtain a code-compliant ERI (RESNET is attempting to change that, but that’s a story for another day). A jurisdiction’s decision to transition from HERS to ERI could be driven by the desire to cite an ANSI standard, and might have nothing to do with the results of the variability study. Or the study’s results might simply reinforce such a decision. Without asking each jurisdiction, it’s hard to say what the motivation is.
The unfortunate part of this whole saga is that this could have been avoided. At the 2013 RESNET Conference in Orlando, keynote speaker (and Green Builder® Media President) Ron Jones shared this sage advice to a crowded room: “If you don’t keep your integrity as an organization and as an individual practitioner, you don’t amount to anything. You have to keep that integrity or we’re all sunk.”
He went on to say: “We are only going to be as effective as the person in this room, the person in this industry who cares the very least.”
By its own admission in the previously cited April 2018 press release, RESNET had been striving “for the past four years…to enhance the consistency of the calculation of HERS Index Scores.” That means it took the group a year to react to its own keynote speaker’s warning. Setting that delay aside, it appears the release of the variability study is what really sparked significant action. That is supported by a HERS provider (who wished to remain anonymous) who felt the result of the variability study “wasn’t a surprise.” If this issue was known to exist, why wait to fix it?
The months and years ahead will tell us whether the issues highlighted in this study are a dent that can be buffed out or a devastating gash in the hull. The good news is that the instruments of repair already exist. In addition to the steps RESNET has already taken, other potential areas of improvement include: increased quality assurance and quality control; consistent software standards; more-consistent training; reduced tolerance for errors; and increased enforcement including, if needed, suspension or revocation of an individual’s accreditation. That leaves us with the final question: Will the industry fully commit to making the necessary improvements? If not, these efforts might simply amount to rearranging deck chairs.
1. Department of Energy – Home Energy Rating Variability Study. September 30, 2018
2. Natural Resources Defense Council – RESNET Board Takes Action to Enhance the Consistency of How Accredited HERS Software Programs Calculate HERS Index Scores, by Jackie Wong and Madhur Boloor. December 10, 2018.
3. RESNET 2013 General Session. March 10, 2013.
Courtesy of The Green Builder® Coalition, a not-for-profit association dedicated to amplifying the voice of green builders and professionals, driving advocacy and education for more sustainable homebuilding practices. For more information, visit GreenBuilderCoalition.org