A case study demonstrating the utility of inter-program comparative testing for diagnosing errors in building simulation programs

Ian Beausoleil-Morrison, Brent Griffith, Teemu Vesanen, Sebastien Lerson, Andreas Weber

    Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review

    Abstract

    The validation of a building simulation program or model is a daunting task, and one that should receive as much attention as algorithm and code development. Previous research in this field has led to a well-accepted approach composed of analytical verification, empirical validation, and inter-program comparative testing to diagnose model deficiencies, mathematical solution errors, and coding errors. Through a case study, this paper demonstrates the utility of inter-program comparative testing. It shows that by comparing program-to-program results, solution problems, coding errors, and deficiencies in mathematical model descriptions can be efficiently identified, diagnosed, and subsequently repaired.
    Original language: English
    Title of host publication: Proceedings of eSim 2006
    Subtitle of host publication: IBPSA-Canada's 4th Biennial Building Performance Simulation Conference
    Number of pages: 8
    Publication status: Published - 2006
    MoE publication type: Not Eligible
    Event: IBPSA-Canada's 4th Biennial Building Performance Simulation Conference, eSim 2006 - Toronto, Canada
    Duration: 4 May 2006 – 5 May 2006

    Conference

    Conference: IBPSA-Canada's 4th Biennial Building Performance Simulation Conference, eSim 2006
    Country: Canada
    City: Toronto
    Period: 4/05/06 – 5/05/06
