Abstract
The validation of a building simulation program or model is a daunting task, and one that should receive as much attention as algorithm and code development. Previous research in this field has led to a well-accepted approach composed of analytical verification, empirical validation, and inter-program comparative testing to diagnose model deficiencies, mathematical solution errors, and coding errors. Through a case study, this paper demonstrates the utility of inter-program comparative testing. It shows that by comparing program-to-program results, solution problems, coding errors, and deficiencies in mathematical model descriptions can be efficiently identified, diagnosed, and subsequently repaired.
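
The paper itself presents no code, but the core idea of inter-program comparative testing can be illustrated with a short sketch. The following is a minimal, hypothetical Python example (the file names, column layout, and tolerance are assumptions, not taken from the paper): it reads hourly load predictions exported by two simulation programs and flags the hours where they diverge beyond a tolerance, the kind of discrepancy that would prompt diagnosis of a solution error, coding error, or model deficiency.

```python
import csv

def load_hourly_results(path):
    """Read hourly values from a one-column CSV file (hypothetical format)."""
    with open(path, newline="") as f:
        return [float(row[0]) for row in csv.reader(f) if row]

def compare_programs(results_a, results_b, tolerance=0.05):
    """Flag hours where two programs' predictions differ by more than
    `tolerance` (fractional difference relative to program A)."""
    discrepancies = []
    for hour, (a, b) in enumerate(zip(results_a, results_b)):
        # Guard against division by zero at hours with zero load.
        ref = abs(a) if a != 0 else 1.0
        if abs(a - b) / ref > tolerance:
            discrepancies.append((hour, a, b))
    return discrepancies

if __name__ == "__main__":
    # File names are placeholders for hourly loads exported by two programs.
    program_a = load_hourly_results("program_a_hourly_loads.csv")
    program_b = load_hourly_results("program_b_hourly_loads.csv")
    for hour, a, b in compare_programs(program_a, program_b):
        print(f"hour {hour}: program A = {a:.1f} W, program B = {b:.1f} W")
```

In practice the flagged hours would be traced back to specific algorithms or inputs in each program, which is the diagnostic step the case study demonstrates.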
Original language | English |
---|---|
Title of host publication | Proceedings of eSim 2006 |
Subtitle of host publication | IBPSA-Canada's 4th Biennial Building Performance Simulation Conference |
Number of pages | 8 |
Publication status | Published - 2006 |
MoE publication type | Not Eligible |
Event | IBPSA-Canada's 4th Biennial Building Performance Simulation Conference, eSim 2006, Toronto, Canada, 4 May 2006 → 5 May 2006 |
Conference
Conference | IBPSA-Canada's 4th Biennial Building Performance Simulation Conference, eSim 2006 |
---|---|
Country/Territory | Canada |
City | Toronto |
Period | 4 May 2006 → 5 May 2006 |