
X (Obsolete) Widget debugging



Prerequisites

Check out the module widgetDebugging from the trunk/testing repository. The module includes debugWidgets.py, the schemas to be tested, and a folder with data sets.

Creating the test

  1. Create and save the schema as a GUI application (.ows) in Orange Canvas. The schema should contain the widgets you want to test.
  2. Create a screenshot (.png) of the canvas schema (e.g., File / Print Schema / Save image) with the same file name.
  3. Optionally, add lines at the top of the schema file (.py) to alter the test. Use any of the following comments to specify who should be sent an email if regression testing fails, which types of data sets from the Datasets folder to include, which data sets to ignore, or which data sets are the only ones to be used (an illustrative parsing sketch follows these examples):
   # contact: email_address1, email_address2 ...
   # datasets: [class|noclass|class,noclass]
   # ignore: titanic.tab, iris.tab
   # useonly: imports-85.tab, zoo.tab
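
For illustration only, a header parser along the following lines could read these comments from the top of a schema file; the function name and the exact parsing rules are assumptions, not part of debugWidgets.py.

   # Illustrative sketch only -- not the actual debugWidgets.py parser.
   # Reads the "# contact:", "# datasets:", "# ignore:" and "# useonly:"
   # comments from the top of a schema .py file.
   def read_schema_header(path):
       settings = {"contact": [], "datasets": [], "ignore": [], "useonly": []}
       for line in open(path):
           line = line.strip()
           if not line.startswith("#"):
               break  # header comments end at the first non-comment line
           body = line.lstrip("#").strip()
           for key in settings:
               if body.startswith(key + ":"):
                   values = body[len(key) + 1:]
                   settings[key] = [v.strip() for v in values.split(",") if v.strip()]
       return settings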

Commit these files to the repository:

  • .py - GUI schema,
  • .ows - canvas schema,
  • .sav - settings of the widgets in schema, and
  • .png - schema's screenshot.

The script you commit to CVS will be executed automatically by the debugWidgets.py script. If your script fails (one or more exceptions occur during testing), a mail is sent to the contact authors. The mail contains the execution log, including all the exceptions that occurred.

Notes

debugWidgets.py randomly selects data sets from the Datasets folder. It also randomly changes GUI components created by the OWGUI module (check boxes, drop-down menus, ...) in the schema. For widgets built with the OWGUI module, a tuple containing all the necessary information is added to the widget's list called _guiElements. If you create a component using the OWGUI module and specifically don't want it to be used in debugging, add debuggingEnabled = 0 as one of the parameters, e.g.:

   self.optimizationDlgButton = OWGUI.button(self.optimizationButtons, self, "VizRank", 
      callback = self.optimizationDlg.reshow, debuggingEnabled = 0)

There are some cases where you have to set debuggingEnabled = 0. One example are components where the call of the callback function would disrupt normal message processing by opening a MessageBox dialog or something similar. Another example are time-expensive callback functions: VizRank's button, for instance, calls a function that could take hours (or even days) to compute all possible projections. The widget testing program, due to time constraints, should not attempt to debug such functions.
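
To illustrate the effect of the flag, here is a rough sketch, not the actual OWGUI code, of how registration could be gated: a control ends up in the widget's _guiElements list, and is therefore exercised by the debugger, only when debuggingEnabled is left on. The helper name and the tuple contents are assumptions.

   # Illustrative sketch, not the real OWGUI implementation. A control is
   # recorded in _guiElements only when debuggingEnabled is nonzero, so the
   # widget debugger skips excluded controls entirely.
   def register_for_debugging(widget, controlType, control, callback, debuggingEnabled=1):
       if debuggingEnabled:
           if not hasattr(widget, "_guiElements"):
               widget._guiElements = []
           # the stored tuple is only a guess at the kind of information kept
           widget._guiElements.append((controlType, control, callback))
       return control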

Invoking debugWidgets.py

The following invocations work; they test either all the schemas or a specific set of schemas:

   debugWidgets.py
   debugWidgets.py schema.py
   debugWidgets.py schema1.py schema2.py schema3.py

The following flags can also be used:

  • -sendmail
    notifies the authors (specified in the schema with the "# contact:" comment) in case of Python exceptions while testing the script,
  • -verbose
    includes detailed information on the invoked GUI events in the output report file (e.g., schema.txt),
  • -Verbose
    like above, but also includes the passing and processing of signals between widgets,
  • -changes=X
    specifies a custom number of changes to be tested while running the schema (default: 2000).

Here are a few examples of how you can call the debugWidgets script:

   debugWidgets.py visualizations.py -sendmail -verbose
   debugWidgets.py -Verbose
   debugWidgets.py -changes=10000

How to run a test locally?

Run debugWidgets.py with the name of the schema as an argument, e.g.:

> debugWidgets.py evaluation-simple

Running this creates a text file named after the schema (e.g., evaluation-simple.txt) that contains the reported exceptions.
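
As a purely hypothetical convenience, not part of the module, a small wrapper could run several schemas and flag the report files that mention a traceback; the file-naming convention is the one described above, while the helper name and invocation details are assumptions.

   # Hypothetical helper, not part of widgetDebugging: run debugWidgets.py for
   # several schemas and list the report files that contain a traceback.
   import os, subprocess

   def run_schemas(schemas):
       for schema in schemas:
           subprocess.call(["python", "debugWidgets.py", schema])
           report = os.path.splitext(schema)[0] + ".txt"
           if os.path.exists(report) and "Traceback" in open(report).read():
               print("%s contains exceptions" % report)

   run_schemas(["evaluation-simple.py"])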

Additional documentation

See readme.txt (located in the root directory of the module).