
The Open Ethics Canvas v1.0.2

modified: 2021-11-16


  • Prepared For
  • Prepared By
  • Date
  • Version

Contents


1. Scope

  • What is this product designed for?
  • What context does it operate in?

2. Users

  • What type of users does this product have?
  • What are their roles?

3. Training Data

  • How was the training data collected?
  • How do you ensure its representativeness?
  • Does your training dataset contain personal data?
  • Who annotates the data, and how is quality controlled?
  • What is the data labeling process that you employ?

4. Algorithms & Source Code

  • Do you use open or proprietary sources? Which?
  • Who in the team is setting the heuristics/rules that influence the output?
  • How do you ensure the quality of the third-party codebases you use?
  • What is your process for making key architectural choices?

5. Decision Space

  • What exactly does the product do?
  • Can you provide the list of all possible outputs?
  • How are incorrectly supplied inputs spotted?
  • Is there anomaly detection in place?
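A minimal sketch of one way to approach the anomaly-detection question above — a simple z-score check on numeric inputs. The function name and threshold are illustrative, not part of the canvas; real products would typically use domain-specific validation or a trained detector.

```python
import statistics

def flag_anomalies(values, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # all inputs identical; nothing stands out
    return [v for v in values if abs(v - mean) / stdev > threshold]
```

Such a check answers both bullets at once: it spots incorrectly supplied inputs and doubles as a first-pass anomaly monitor.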

6. Key Stakeholders

  • Who are the key stakeholders?
  • What influence do they have over the product?
  • How do stakeholders interact with each other?
  • How is the power distributed?

7. Values & Interests

  • What values do stakeholders/users have?
  • Where can these values clash or create tensions?
  • What is known at the moment, and how are assumptions tested?
  • How can you align your technology with the values you want to support and that people desire?

8. Personal Data Processing

  • Which personal data is collected by the product?
  • What is the purpose of collecting personal data?
  • How is this data processed? Used? Stored? Deleted?

9. Components & Subprocessing

  • Which third parties are engaged by the product?
  • How do you evaluate the potential impact of third-party APIs on the quality of your product’s output?
  • How do you check the reliability of your data processing contractors?

10. Failure Modes

  • How are failures detected and monitored?
  • What are the possible failures of the product?
  • What happens when the product fails?

11. Explainability

  • How is interpretability defined for the system?
  • What interpretability methods are used?
  • What metrics are used in result interpretation?
  • How are interpretations of the output communicated?

12. Human in the Loop (HITL)

  • What is the role of a human agent in the validation/verification of the outputs?
  • What is the role of a human agent in refining the model performance?
  • What is the decision-making power assigned to human agents responsible for the quality of output?

13. Model Performance Metrics

  • Which metrics are used to evaluate the product performance?
  • Which measures are used to re-evaluate accuracy, recall, precision, and F1 score?
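As a reference for the metrics named above, here is a minimal sketch of how accuracy, precision, recall, and F1 score are computed for binary labels. The function name is illustrative; in practice a library such as scikit-learn would typically be used.

```python
def classification_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall, and F1 for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}
```

Re-evaluating these metrics on fresh, representative data (rather than the original test set) is what the second bullet asks teams to plan for.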

14. Decision Feedback & Objection

  • How does the product allow for structured feedback?
  • How can the user challenge the application output?
  • Which third parties are involved in resolving claims and objections?

15. Impact Assessment

  • What potential harms can your product cause (loss of opportunity, discrimination, economic loss, social stigma, detriment, emotional distress, etc.)?
  • What are the risks of the product’s failure?
  • What impact can the product cause when deployed at scale?
  • How is the product influencing the existing markets?

16. Regulatory Landscape

  • What is the regulatory context in which the product operates?
  • Is the model portable to other market verticals?
  • What regulatory risks are involved?

17. Mitigation

  • How do you test for bias and fairness? What fairness definitions do you employ and why?
  • Does your team reflect a diversity of opinions, backgrounds, and thoughts?
  • Do you have a process for redress if people are harmed by the outputs?
  • How fast can you shut down your product in production if it behaves badly?
  • Who should be informed, and how?
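One concrete fairness definition the first bullet above might refer to is demographic parity — equal positive-outcome rates across groups. A minimal sketch of the parity gap (function and variable names are illustrative, and this is only one of many fairness definitions):

```python
def demographic_parity_gap(y_pred, groups):
    """Largest difference in positive-prediction rates between groups.

    y_pred: predicted binary labels (0/1); groups: group label per sample.
    A gap of 0 means perfect demographic parity.
    """
    rates = {}
    for g in set(groups):
        preds = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = sum(preds) / len(preds)
    return max(rates.values()) - min(rates.values())
```

The "why" part of the bullet matters as much as the metric: different fairness definitions (demographic parity, equalized odds, calibration) can be mutually incompatible, so the choice should be justified for the specific product.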

18. Changes in Behavior

  • Do the automated decisions have significant legal or similar effects on the users/stakeholders?
  • How might users change their behavior after using the product?
  • What is the potential for power imbalance?

19. Group Interactions

  • What group interactions can you anticipate?
  • What are potential changes in group behavior?
  • How is the product addressing group interests?
  • What new groups could emerge from the product’s deployment at scale?

20. Comments


The Open Ethics Canvas v1.0 © 2021 by Open Ethics contributors. Designed by Nikita Lukianets, Alice Pavaloiu, Vlad Nekrutenko. Licensed under Attribution-ShareAlike 4.0 International. https://openethics.ai/canvas