
Predictions: Allow choosing a target #5790

Merged (3 commits), Feb 25, 2022

Conversation

@janezd (Contributor) commented Jan 14, 2022

Issue

Fixes #5743.

Description of changes

It sure looks funny, especially for numeric targets, but.

I didn't want to put the combo into the main area, in order to save space; in Test and Score we have plenty of space on the right, but here it's precious. I have put the Restore Order button and the list view for selecting probabilities at the top, and the combo for choosing the target at the bottom (with rubber in between), so they are close to their respective tables.

Screenshot 2022-01-14 at 23 00 12

This is quite nice for discrete outcomes; less so for numeric.

Screenshot 2022-01-14 at 23 00 22

I've no idea what to do here except for perhaps having a different layout in this case: just put the checkbox and the button between the tables (in that order, with rubber between), and hide the control area completely.

Includes
  • Code changes
  • Tests
  • Documentation

@codecov bot commented Jan 14, 2022

Codecov Report

Merging #5790 (429fd85) into master (10ecebf) will increase coverage by 0.03%.
The diff coverage is 99.33%.

    @@            Coverage Diff             @@
    ##           master    #5790      +/-   ##
    ==========================================
    + Coverage   86.23%   86.27%   +0.03%     
    ==========================================
      Files         315      315              
      Lines       66617    66774     +157     
    ==========================================
    + Hits        57447    57608     +161     
    + Misses       9170     9166       -4     

@ajdapretnar (Contributor)

What if:

  • probabilities are always shown
  • model scores are always shown
  • target class can be selected above performance scores if problem is categorical
    Ok, I've come this far, but I have no idea where to put "Restore original order". 🙈

@janezd janezd marked this pull request as draft January 28, 2022 08:07
@janezd janezd self-assigned this Feb 4, 2022
@janezd (Author) commented Feb 11, 2022

What about this?

I replaced the list view with a combo and moved everything to the main area. This not only gets rid of the empty control area and makes everything nicer, it also simplifies the widget and its code.

The list view contained all classes that appeared in the data or in any of the models. So the user was able to choose any combination of classes, including, say, classes that only appeared in some models. I doubt anybody used this, so the list was a nuisance. Why would anybody be interested in setosa and virginica, but not versicolor?

Now the combo offers only sensible options: no probabilities, probabilities for classes that appear in the data, probabilities predicted by the models (which may differ from those in the data), and probabilities of individual classes that appear in the data. No combinations.

The widget did not work with contexts: it couldn't, because the context would have been an ordered combination of all classes appearing in any model (because that is what the list view offered). Now the context is simply the list of classes in the data.
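The fixed set of options described above can be sketched in a few lines. This is an illustration only: the constant names mirror those visible in the diff (NO_PROBS, DATA_PROBS, MODEL_PROBS, PROB_OPTS), but combo_options and the option labels are hypothetical, not Orange's actual code.

```python
# Illustrative sketch of the combo's fixed options; constant names mirror
# the diff, but combo_options and the labels are hypothetical.
NO_PROBS, DATA_PROBS, MODEL_PROBS = 0, 1, 2
PROB_OPTS = ["(None)",
             "Classes in data",
             "Classes known to the model"]

def combo_options(class_values):
    # Fixed options first, then one entry per class appearing in the data;
    # an individual class is recovered as index - len(PROB_OPTS).
    return PROB_OPTS + [str(value) for value in class_values]

options = combo_options(["setosa", "versicolor", "virginica"])
```

With a scheme like this, the context setting only needs the list of classes in the data, exactly as described above.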

Screenshot 2022-02-11 at 18 48 29

Screenshot 2022-02-11 at 18 49 06

@markotoplak (Member)

@janezd, I like the concept, but your new approach trades valuable vertical space for (cheap and collapsible) horizontal space. Substantially decreasing the margins around the components would likely make me stop complaining.

@janezd (Author) commented Feb 11, 2022

@markotoplak, you're right. Is this better?

For a numeric target, I could move Restore Original Order below the table to eliminate the components above it, but this could be confusing.

Screenshot 2022-02-11 at 19 34 53

@markotoplak (Member)

Looks good! I don't know which Qt quirk is behind this one, but the upper control pane still has larger margins than the lower one.

I agree that moving the button could be confusing.

@janezd janezd force-pushed the predictions-select-target branch 3 times, most recently from 9571d1f to 69c5f5c Compare February 12, 2022 13:32
@janezd janezd marked this pull request as ready for review February 12, 2022 13:33
@janezd (Author) commented Feb 12, 2022

It turns out that it was not a Qt quirk: gui.button adds some space.

    if is_macstyle():
        btnpaddingbox = vBox(widget, margin=0, spacing=0)
        separator(btnpaddingbox, 0, 4)  # lines up with a WA_LayoutUsesWidgetRect checkbox
        button.outer_box = btnpaddingbox

Ready for review.

@janezd janezd force-pushed the predictions-select-target branch 4 times, most recently from 70a313c to c1604c3 Compare February 12, 2022 17:45
@markotoplak (Member)

Hahaha, so all our buttons add padding that cannot be disabled, just so that they align with potential checkboxes? Who merged that? Wait, it was me. :)

@borondics (Member)

Hm... 🤪

@janezd janezd assigned VesnaT and unassigned janezd Feb 18, 2022
Orange/widgets/evaluate/owpredictions.py (several resolved review threads)
        self.shown_probs = self.DATA_PROBS
        self.target_class = self.TARGET_AVERAGE
    else:
        self.shown_probs = self.NO_PROBS
@VesnaT (Contributor) commented Feb 18, 2022

Why not add a MODEL_PROBS option in this case (for classification)?

In the previous version of the widget, the probabilities were shown despite the missing target.

janezd (Author):

Huh, I could, but this would complicate the code and the widget.

  • In terms of UX, allowing MODEL_PROBS when the data's class is continuous but some models are categorical would require showing the combo, which would then have two options, NO_PROBS and MODEL_PROBS. When all models are regressors, the combo would be hidden...
  • In terms of the code, I can now simply assume that shown_probs is NO_PROBS for numeric data, which simplifies some conditions, as I remember.

Though I could change this, I'd prefer not to. This situation is too strange; perhaps we could even ignore models of the wrong kind and show a warning.
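The invariant from the second bullet can be sketched as follows. This is a minimal illustration: the constants follow the names in the diff, but the helper function is hypothetical.

```python
# Minimal sketch of the invariant described above: for a numeric target,
# shown_probs is pinned to NO_PROBS, so no later condition has to handle
# probability options for regression. Hypothetical helper, not Orange's.
NO_PROBS, DATA_PROBS = 0, 1

def initial_shown_probs(target_is_discrete):
    return DATA_PROBS if target_is_discrete else NO_PROBS
```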

VesnaT (Contributor):

Ok, leave it as is. But be aware that in, say, six months, someone may report an issue: Predictions widget is missing probabilities. :)

janezd (Author):

We'll tell him that he should not mix classifiers and regressors. :)

        # because one commit button has only one dirty flag, and unnecessarily
        # making an unconditional commit of a small table is better than
        # conditionally committing all predictions.
        self._commit_evaluation_results()
@VesnaT (Contributor) commented Feb 18, 2022

Why? How does the output depend on target_class?

But, if it is dependent, the (target) combo box should not be hidden.

janezd (Author):

Good catch. I assumed that the output depends on what is shown, but it does not.

I have now fixed that. The output table now contains the same data as shown in the view, and thus depends on the combo. This is relevant only for classification, so the combo can still be hidden for regression.

VesnaT (Contributor):

I was actually referring to the Evaluation Results output; I don't see how it depends on the target combo box.

janezd (Author):

Oh, you're right. I removed the comment and the function call.
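For context, the trade-off stated in the now-removed comment (one commit button, hence one dirty flag) can be shown with a toy sketch; none of these names come from Orange's actual deferred-commit machinery.

```python
# Toy sketch of the single-dirty-flag pattern from the removed comment:
# one commit button covers all outputs, so any change marks the whole
# widget dirty, and commit resends everything rather than tracking which
# output actually changed. Entirely hypothetical names.
class ToyWidget:
    def __init__(self):
        self.dirty = False
        self.sends = 0

    def on_any_change(self):
        self.dirty = True          # no per-output granularity

    def commit(self):
        if self.dirty:             # resends everything, even small tables
            self.sends += 1
            self.dirty = False

w = ToyWidget()
w.on_any_change()
w.commit()
w.commit()                         # no-op: flag already cleared
```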

        "and are known to the model"
    else:
        class_idx = self.shown_probs - len(self.PROB_OPTS)
        text += f"'{self.class_var.values[class_idx]}'"
VesnaT (Contributor):

The target_class value is missing from the report.

janezd (Author):

Fixed.

VesnaT (Contributor):

Are you sure you want to display it for regression as well?

janezd (Author):

Sure not.

@janezd janezd force-pushed the predictions-select-target branch 2 times, most recently from 034ceab to 12ab88a Compare February 18, 2022 21:06
@janezd (Author) commented Feb 18, 2022

@VesnaT, thanks for this detailed review. Some changes are substantial (those related to output), so I've put all today's changes into a new commit. Please use Squash & Merge for merging.

@janezd (Author) commented Feb 19, 2022

Here's a workflow to explore the behaviour of the output.

test-predictions.ows.zip

@VesnaT (Contributor) commented Feb 21, 2022

I've found another minor issue, but it might have been there before:

  1. File (Iris) -> Logistic Regression -> Predictions
  2. Predictions (Show probabilities for Iris-setosa) -> Sort descending by Logistic Regression column
  3. Predictions (Show probabilities for Iris-versicolor) -> Logistic Regression column should be sorted descending (due to sort indicator), but it is not

@janezd janezd force-pushed the predictions-select-target branch 3 times, most recently from a2a4338 to 5fbd250 Compare February 21, 2022 16:09
@janezd (Author) commented Feb 21, 2022

I've found another minor issue, but it might have been there before: (...) Logistic Regression column should be sorted descending (due to the sort indicator), but it is not

Quite probably. The argument for order was missing when sorting after changing indices: https://github.com/biolab/orange3/pull/5790/files#diff-dd77f983a656d8fa3f453496f08aad0ae7e8c5a57e48406d5520836c76ce1690R1040.
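The bug can be illustrated in plain Python. This is a toy sketch with illustrative names; in the widget itself, the fix forwards the header's current sort order to the sort call.

```python
# Toy illustration of the missing-order bug: re-sorting after the shown
# class changed dropped the order argument, so a descending sort silently
# fell back to the ascending default. Names are illustrative.
ASCENDING, DESCENDING = 0, 1

def resort(rows, column, order=ASCENDING):
    # The fix: callers forward the current sort-indicator order instead
    # of relying on this default.
    return sorted(rows, key=lambda row: row[column],
                  reverse=(order == DESCENDING))

rows = [(0.1,), (0.9,), (0.5,)]
buggy = resort(rows, 0)               # order argument forgotten
fixed = resort(rows, 0, DESCENDING)   # order forwarded
```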

The changes were so minor (removed a call that resends signals; an if that checks for a discrete target in the report; an added argument) that I just amended the existing commit. Ready for rerereview.

@VesnaT (Contributor) commented Feb 24, 2022

Quite probably. The argument for order was missing when sorting after changing indices: https://github.com/biolab/orange3/pull/5790/files#diff-dd77f983a656d8fa3f453496f08aad0ae7e8c5a57e48406d5520836c76ce1690R1040.

The sort indicator is now shown on the first column by default, which is not ok.

@janezd (Author) commented Feb 24, 2022

The sort indicator is now shown on the first column by default, which is not ok.

True, but without "now". This misbehaviour is old.

This should fix it: https://github.com/biolab/orange3/pull/5790/files#diff-dd77f983a656d8fa3f453496f08aad0ae7e8c5a57e48406d5520836c76ce1690R523.

@VesnaT VesnaT merged commit 5c5c3be into biolab:master Feb 25, 2022
Successfully merging this pull request may close these issues.

Predictions is missing the "Target Class" combo for evaluation
5 participants