
Introduction

For each model of the Buildings library, there is at least one example model that illustrates its use. The example model is also used for running automatic unit tests. It is recommended to run the unit tests frequently when modifying the library to make sure no model breaks as changes are being made.

Example models that can be used for unit tests are stored in the directory Buildings/[package name]/Examples or Buildings/[package name]/Validation, where [package name] is the name of the package that contains the model to be tested. The scripts that run these examples are stored in the directory Buildings/Resources/Scripts/[simulator name]/[package name]/Examples or Buildings/Resources/Scripts/[simulator name]/[package name]/Validation.

The supported simulators include Dymola, JModelica and OpenModelica, and we plan to add support for other simulation environments once they support Modelica.Fluid and Modelica.Media (or the subset of these libraries that is used in the Buildings library).

The next sections describe how to run the unit tests.

Software requirements

The scripts require Python 3.8 or above. To install buildingspy, proceed as described in https://simulationresearch.lbl.gov/modelica/buildingspy/install.html. To see which version of buildingspy is used for a particular development branch, see the variable BUILDINGSPY_VERSION in modelica-buildings/.travis.yml.
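For example, a buildingspy release can be installed from PyPI with pip, and the version used by a branch can be looked up as follows (one possible workflow; see the link above for the authoritative installation instructions):

pip install buildingspy
# show which version is installed
pip show buildingspy
# show which version a particular development branch uses
grep BUILDINGSPY_VERSION modelica-buildings/.travis.yml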

Executing the unit tests

Prior to executing the unit tests, make sure

  1. the executable of the Modelica environment is on the system PATH, and
  2. the PYTHONPATH environment variable includes the directory that contains the buildingspy package as a subdirectory, as well as the directory Buildings/Resources/Python-Sources.

On Linux, if the directory structure is

|-- BuildingsPy
|   |-- buildingspy
|   `-- ...
|-- modelica-buildings
|   |-- bin
|   |-- Buildings
|   `-- ...

then type

cd [path_to_your_library]/Buildings
export PYTHONPATH=${PYTHONPATH}:`pwd`/../../BuildingsPy:`pwd`/Resources/Python-Sources
../bin/runUnitTests.py [-b]

The optional flag -b runs the script in batch mode. In this mode, no user input is required if the unit test results have changed. Changed results will not be stored as new reference results, but warning messages will be written to the console.
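To check that the prerequisites listed above are met (assuming Dymola is the Modelica environment and its executable is named dymola), one can for example run

# both commands should print a path
which dymola
python -c "import buildingspy; print(buildingspy.__file__)"

If either command fails, PATH or PYTHONPATH needs to be corrected before running the unit tests.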

On Windows, set the PYTHONPATH system variable to the directory BuildingsPy. Then execute from a shell or DOS prompt

cd [path_to_your_library]\Buildings
..\bin\runUnitTests.py [-b]
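For example, assuming the repositories are located under C:\github (a placeholder for the actual location), PYTHONPATH can be set for the current command prompt session with

set PYTHONPATH=%PYTHONPATH%;C:\github\BuildingsPy;C:\github\modelica-buildings\Buildings\Resources\Python-Sources

The second entry mirrors the Linux setup above and makes Buildings/Resources/Python-Sources available as well.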

The script runUnitTests.py calls a python script that

  1. creates a temporary directory for each processor,
  2. copies the directory Buildings into these temporary directories,
  3. creates run scripts that run all unit tests,
  4. runs these unit tests,
  5. compares all variables that are plotted by a *.mos script to the reference results that are stored in Buildings/Resources/ReferenceResults/[simulator name]/,
  6. if the results differ by more than 1E-3 absolute or relative error, writes a warning to the console and shows a plot with the reference results and the new results; unless the script is run with the -b flag, the user is asked whether to accept the changes,
  7. if the results are the same, writes no message to the console,
  8. collects the Dymola log files from each process,
  9. writes the combined log file unitTests.log in the current directory, and exits with the message 'Unit tests completed successfully.' or with an error message.

If no errors occurred during the unit tests, then the script returns 0. Otherwise, it returns a non-zero exit value.
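Because of this exit value, the script can be used directly in an automated build. A minimal sketch of such a wrapper, assuming the repository layout shown above, is

#!/bin/bash
# run the unit tests in batch mode and stop the build if they fail
cd modelica-buildings/Buildings
../bin/runUnitTests.py -b
if [ $? -ne 0 ]; then
  echo "Unit tests failed, see unitTests.log" >&2
  exit 1
fi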

For options, such as to select the simulator, run ../bin/runUnitTests.py -h.

How to include models as part of the unit tests

To enable an automatic unit test, a model developer needs to do the following:

  1. Provide a Modelica model in the Examples package.
  2. Provide a Modelica script in Buildings/Resources/Scripts/Dymola/.../Examples. This script needs to run the model and plot the results.
  3. Commit to git the new reference results that are generated in Buildings/Resources/ReferenceResults/Dymola when running the unit tests.

The Modelica script must be written in such a way that it can be run from the top-level directory of the Buildings library. See any of the example files in the Examples directories of the library.

Each example model must have an entry of the form

model Damper
  annotation(Diagram,
    experiment(Tolerance=1e-6, StopTime=3600.0),
    __Dymola_Commands(file="modelica://Buildings/Resources/Scripts/Dymola/Fluid/Actuators/Examples/Damper.mos"
                      "Simulate and plot"));

Furthermore, each model must fulfill the following requirements:

  1. The experiment annotation must be present for the JModelica and OpenModelica unit tests.
  2. The Tolerance annotation must be present for the JModelica unit tests.
  3. The Tolerance must be 1e-6 or smaller for the JModelica unit tests.
  4. The StopTime annotation must be present to add the model to the unit tests of OpenModelica.
  5. The StartTime and StopTime of the experiment annotation are not allowed to use multiplication (e.g. StartTime=6*86400); only literal values are allowed. This is required by JModelica and OpenModelica.
  6. The Tolerance, StartTime, and StopTime defined in the Modelica model must match the tolerance, startTime, and stopTime defined in the corresponding Modelica script.
  7. A blank line is needed at the end of each *.mos script for Dymola testing.
  8. Remember that protected instances are not accessible from *.mos scripts.
  9. The __Dymola_Commands annotation will add an item to Dymola's pull-down menu. The Modelica script needs to contain a plot command that plots model results. Only plotted results will be included when comparing the new results with the reference results. For example, the script to run the unit test for the model Damper is as follows:
simulateModel("Buildings.Fluid.Actuators.Examples.Damper", tolerance=1e-6, stopTime=3600.0, method="dassl", resultFile="Damper");
createPlot(id = 4,
   position = {73, 9, 598, 390},
   x = "res.y",
   y = {"res.m_flow", "res.v"},
   range = {0.15, 0.6, 11.0, 7.0},
   autoscale = true,
   autoerase = true,
   autoreplot = true,
   grid = true,
   color = true,
   filename = "Damper.mat",
   leftTitleType = 1,
   bottomTitleType = 1);

Note

If any of items 1-7 is missing, the unit test will fail with an error.

The scripts need to be stored in the sub-directories of Buildings/Resources/Scripts/Dymola that mirror the library package hierarchy. For example, for the damper model, the script is stored as Buildings/Resources/Scripts/Dymola/Fluid/Actuators/Examples/Damper.mos.

When running the unit tests, all variables and parameters that appear in a plot command as the y variable will be compared to reference results that are stored in the directory Buildings/Resources/ReferenceResults/[simulator name]/. If a unit test has no reference results, then the user is asked whether reference results should be stored. Also, if the simulation results are different from the reference results, the user is asked whether the new results should become the reference results. If the user accepts the new results, then they will be written to the directory Buildings/Resources/ReferenceResults/[simulator name]/.

For example, in the above script Damper.mos, the line

y = {"res.m_flow", "res.v"}

causes res.m_flow and res.v to be compared to values that are stored in Buildings/Resources/ReferenceResults/Dymola/Buildings_Fluid_Actuators_Examples_Damper.txt.

Finally, a model developer needs to use git to commit and push the new reference results to a remote repository. This can be done using the commands

git add Buildings_Fluid_Actuators_Examples_Damper.txt
git commit -m "Added new reference results." Buildings_Fluid_Actuators_Examples_Damper.txt
git push

Note that if other reference result files changed because of the new contribution, then these other changes must only be committed if they are intentional and if the reason for the change can be explained.
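To see which reference result files a test run has modified or added, one can, for example, inspect the reference result directory from the root of the repository:

git status Buildings/Resources/ReferenceResults/Dymola
git diff Buildings/Resources/ReferenceResults/Dymola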

To avoid introducing bugs in the library, do not commit changes that you cannot explain.