
Incorporate feedback from the Ethical ML workshops #20

Merged
anssiko merged 5 commits into main from workshop-feedback on Jun 9, 2022

Conversation

@anssiko (Member) commented May 4, 2022

  • Update Operationalization: Putting the Principles into Practice
  • Add Register of Risks and Mitigations


@anssiko anssiko requested a review from dontcallmedom May 4, 2022 15:16
- Update Operationalization: Putting the Principles into Practice
- Add Register of Risks and Mitigations
@anssiko (Member, Author) commented May 31, 2022

Gentle ping, PTAL @dontcallmedom :-)

Some areas to focus on, mostly presentational:

  • Proposals are welcome on how to better represent the Risks -> Possible Mitigations mapping. Maybe with some CSS tricks we could use a two-column layout for Risks and Possible Mitigations side by side (when screen real estate allows)? Someone with proper CSS-fu would need to help with that, or we could consider it future work. (A rough sketch follows at the end of this comment.)

  • "This bullet intentionally left blank" in Possible Mitigations. I was thinking of turning these empty bullets into links that open a new GH issue, to encourage contributions (also sketched at the end of this comment).

Comments:

  • I'd prefer to keep the source format as simple as possible and let CSS handle the presentational aspects. Complex tables are tedious to edit.

  • To help edit the lists and correlate Risks with Possible Mitigations in the source, maybe instead of:

    1.
    1.
    1.

    we should use:

    1.
    2.
    3.
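For illustration, here is a minimal sketch of the two ideas above, written as index.bs-style HTML. The class names, breakpoint, and wrapper element are assumptions made for this sketch, not part of the PR. First, the side-by-side layout using CSS grid, falling back to a single column on narrow screens:

```
<!-- Sketch only: hypothetical hooks (.risk-register, .risks, .mitigations);
     index.bs may use different markup, and the CSS could live in a separate stylesheet. -->
<style>
  .risk-register {
    display: grid;
    grid-template-columns: 1fr;        /* single column on narrow screens */
    gap: 1em 2em;
  }
  @media (min-width: 60em) {           /* side by side when screen real estate allows */
    .risk-register { grid-template-columns: 1fr 1fr; }
  }
</style>
<div class="risk-register">
  <section class="risks"><!-- Risks list --></section>
  <section class="mitigations"><!-- Possible Mitigations list --></section>
</div>
```

Second, one way an empty "This bullet intentionally left blank" item could become a call to contribute, using GitHub's pre-filled new-issue URL; the link text and suggested issue title are placeholders:

```
<!-- Hypothetical replacement for an empty mitigation bullet -->
<li>
  <a href="https://github.com/webmachinelearning/ethical-webmachinelearning/issues/new?title=Proposed%20mitigation">
    No mitigations listed yet: propose one by opening an issue
  </a>
</li>
```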

index.bs Outdated

So for example, if we consider the principle of “Fairness and non-discrimination”, a risk might be that biases in training data lead to model predictions that are less accurate for particular groups, resulting in real-world harms (e.g. denial of services). A mitigation might be to test the training data and model properly for bias, or indeed it might be to not make access to essential services dependent on ML systems.

To provide a more structured approach to this, we have developed a workshop format which you can adapt and use.
@dontcallmedom (Collaborator):

Suggested change
To provide a more structured approach to this, we have developed a workshop format which you can adapt and use.
To provide a more structured approach to this, we have developed a workshop format that can be adapted and used to help identify such risks & mitigations.

@dontcallmedom (Collaborator):

I wonder if the details about the workshop (until "it's worth noting" below) should be moved to an appendix - while very useful, it feels a bit out of place in the flow of the document.

@anssiko (Member, Author):

Moved to appendix. Text changes integrated into 13d34dc

index.bs Outdated

To provide a more structured approach to this, we have developed a workshop format which you can adapt and use.

The workshop is in two parts (please make a copies for your own use):
@dontcallmedom (Collaborator):

Suggested change
The workshop is in two parts (please make a copies for your own use):
The workshop is articulated around two documents that are expected to be completed interactively by participants:

@anssiko (Member, Author):

Integrated into 13d34dc

index.bs Outdated
Part One - [Ethical thinking workshop](https://docs.google.com/document/d/1f_PcByjW8-zXbYWeEyOl-RpZ3SKapm_MWSaNZ9ZOi4c/edit?usp=sharing) - is about using the principles to generate and prioritize potential risks

Part Two - [Ethical Risk Canvas](https://docs.google.com/document/d/1hTQnpWC5KC4qIJB9-Kkd46yMVgDuTCQlA3MffqtUbCI/edit?usp=sharing) - is about digging deeper into specific risks and thinking about who they might impact and how best to mitigate them.

@dontcallmedom (Collaborator):

We should probably post a PDF copy of these documents somewhere, if only for archival purposes.

@anssiko (Member, Author):

Opened #21


# Register of Risks and Mitigations

Note: The risk register is a work in progress and welcomes further review and tidying up.
@dontcallmedom (Collaborator):

I'm happy to leave this for a future pull request, but I think there is indeed substantive work to do to harmonize the phrasing of the risks, and possibly to merge or sort some of them as well.

index.bs Outdated

### Possible Mitigations

1. Browser-assisted mechanisms to find out about the limitations and performance characteristics of ML models used in a Web app. This could build on the approach published in Model Cards for Model Reporting; making such a report machine-discoverable would allow the web browser to offer a more integrated user experience. Another transparency tool is the [Open Ethics Transparency Protocol](https://github.com/webmachinelearning/ethical-webmachinelearning/issues/6).
@dontcallmedom (Collaborator):

In terms of mapping mitigations to risks, I would use a naming scheme for risks (e.g. FND-R1 for risk 1 under Fairness and Non-Discrimination), and then reference these names from the mitigations that address them.

@anssiko (Member, Author):

Done in b00171e
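To make the naming scheme discussed above concrete, here is a minimal sketch of how the risk dfns and cross-references could look in Bikeshed source. The ids, wording, and section markup are illustrative assumptions; they are not necessarily what commit b00171e contains:

```
<!-- Sketch: each risk gets a stable named <dfn>; mitigations point back via Bikeshed autolinks. -->
### Risks ### {#fnd-risks}

1. <dfn>FND-R1</dfn>: Biases in training data lead to model predictions that are
   less accurate for particular groups.

### Possible Mitigations ### {#fnd-mitigations}

1. Test the training data and model for bias before deployment.
   Addresses [=FND-R1=].
```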

Incorporate review feedback from Dom
- Creates dfns for all risks and mitigations
- Also adds Bikeshed includes for repeated notes/issue blocks
@anssiko (Member, Author) commented Jun 2, 2022

Thanks @dontcallmedom for your comments, PR updated.

@anssiko (Member, Author) commented Jun 9, 2022

Gentle ping, PTAL @dontcallmedom.

Your feedback has been addressed. I'd like to merge this after your approval so the WG can advance with its W3C Note publication plan.

@anssiko (Member, Author) commented Jun 9, 2022

TY @dontcallmedom!

@anssiko anssiko merged commit a023c87 into main Jun 9, 2022
@anssiko anssiko deleted the workshop-feedback branch June 9, 2022 07:31
github-actions bot added a commit that referenced this pull request Jun 9, 2022
SHA: a023c87
Reason: push, by @anssiko
