Wikimedia Developer Summit/2018/Keynote by Londa Schiebinger
Dev Summit '18 https://etherpad.wikimedia.org/p/devsummit18
Keynote by Londa Schiebinger https://phabricator.wikimedia.org/T183397
DevSummit event
- Day & Time: Monday, 1:30 pm – 2:00 pm
- Room: Tamalpais Room
- Notetaker(s): quiddity, c scott,
Session Notes:
- Gender and Fairness in machine learning. Gendered Innovations
- Impacted policy in the US, Canada, and the UN. Medical research must now use both sexes of mice in experiments (NIH policy), because previous studies used only male mice and missed sex-specific effects.
- Three fixes: fix the number of women, fix the institutions, fix the knowledge (the last is her focus here)
- Doing research wrong can cost lives and money; the most important examples come from biomedicine. 10 drugs were withdrawn from the US market because of life-threatening health effects, and 8 of those posed greater risks for women.
- Apple's HealthKit: a good example of "we track everything" that nonetheless missed female-specific aspects (e.g. it launched without menstrual-cycle tracking). We don't know the costs in terms of profit, bad publicity, or team morale.
- Goals of the Gendered Innovations project: (1) develop state-of-the-art methods of sex and gender analysis; (2) [?]
- http://genderedinnovations.stanford.edu/case-studies/nlp.html#tabs-2 - Google Translate defaults to the masculine pronoun because existing book content is about 2:1 male (down from 4:1 male in 1900-1968), according to ngram data from Google Books. [Presumably the training data for Google Translate follows the same trends as books.]
- Bias amplification: existing biased training data perpetuates the bias into the future.
- Google ads: men are about 5x more likely than women to be shown ads for high-paying executive jobs
- Standard machine learning can acquire human biases from big data.
- e.g. word embeddings, which infer word meanings from which words appear near each other in text, end up clustering man : computer programmer but woman : homemaker (see the word-embedding sketch below, after the citation lists)
- "Standard machine learning can acquire [...]"
- "The Word Embedding Factual Association [..]
- "Stereotypes about mens's and women's occupations are often exaggerated [..."]
- "Interdisciplinary teams of computer scientists [...]"
- Excellent lists of citations for related work:
- Cynthia Dwork: Fairness through awareness: https://arxiv.org/abs/1104.3913
- James Zou et al: Debiasing: https://arxiv.org/abs/1607.06520
- Equalized Odds: http://ttic.uchicago.edu/~nati/Publications/HardtPriceSrebro2016.pdf (see the equalized-odds check below, after the lists)
- Reducing Bias Amplification: https://arxiv.org/abs/1707.09457
- Multi-Calibration: https://arxiv.org/abs/1711.08513
- Big Data: Seizing Opportunities, Preserving Values (May 2014) https://obamawhitehouse.archives.gov/sites/default/files/docs/big_data_privacy_report_may_1_2014.pdf
- Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights (May 2016) https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/2016_0504_data_discrimination.pdf
- The National Artificial Intelligence Research and Development Strategic Plan (October 2016) https://www.nitrd.gov/PUBS/national_ai_rd_strategic_plan.pdf
- On-going efforts:
- NSF: Council for Big Data, Ethics, and Society (2014) https://bdes.datasociety.net/
- Fairness, Accountability, and Transparency in Machine Learning (2014) - Solon Barocas, https://www.microsoft.com/en-us/research/group/fate/ and https://www.fatml.org/
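- A minimal sketch of the word-embedding bias discussed above, plus the hard-debiasing idea from the Zou et al. paper cited above. The 3-dimensional vectors are invented for illustration (real embeddings have hundreds of dimensions and are trained on large corpora); only the technique, answering analogies by vector arithmetic and projecting gender-neutral words off a gender direction, comes from the cited work.

```python
# Toy illustration of word-embedding gender bias and of hard debiasing.
# The 3-d vectors are invented for illustration; real systems use embeddings
# trained on large text corpora.
import numpy as np

emb = {
    "man":                 np.array([ 1.0, 0.1, 0.2]),
    "woman":               np.array([-1.0, 0.1, 0.2]),
    "computer programmer": np.array([ 0.7, 0.9, 0.1]),
    "homemaker":           np.array([-0.7, 0.9, 0.1]),
    "doctor":              np.array([ 0.4, 0.2, 0.9]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def analogy(a, b, c, vectors):
    """Answer 'a is to b as c is to x' by nearest neighbour to v(b) - v(a) + v(c)."""
    target = vectors[b] - vectors[a] + vectors[c]
    candidates = {w: cosine(target, v) for w, v in vectors.items() if w not in (a, b, c)}
    return max(candidates, key=candidates.get)

# With these toy vectors, the biased analogy from the talk comes out directly:
print(analogy("man", "computer programmer", "woman", emb))          # homemaker
print(cosine(emb["computer programmer"], emb["man"]),
      cosine(emb["computer programmer"], emb["woman"]))             # closer to "man"

# Hard debiasing (simplified): remove each gender-neutral word's projection
# onto the gender direction, so occupations become equidistant from man/woman.
gender_dir = emb["man"] - emb["woman"]
gender_dir = gender_dir / np.linalg.norm(gender_dir)
debiased = {w: v if w in ("man", "woman") else v - (v @ gender_dir) * gender_dir
            for w, v in emb.items()}
print(cosine(debiased["computer programmer"], debiased["man"]),
      cosine(debiased["computer programmer"], debiased["woman"]))   # now equal
```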
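- A toy check of the equalized-odds criterion from the Hardt/Price/Srebro paper cited above: a classifier satisfies equalized odds when its true-positive and false-positive rates match across groups. The arrays below are invented illustrative data, not results from the paper.

```python
# Toy check of equalized odds: a classifier satisfies it when its true-positive
# rate (TPR) and false-positive rate (FPR) are equal across groups.
import numpy as np

y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0, 1, 0])    # actual outcomes
y_pred = np.array([1, 0, 0, 1, 1, 0, 0, 0, 1, 1])    # classifier decisions
group  = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

def rates(y_true, y_pred, mask):
    """TPR and FPR restricted to the rows selected by mask (one group)."""
    t, p = y_true[mask], y_pred[mask]
    tpr = (p[t == 1] == 1).mean()
    fpr = (p[t == 0] == 1).mean()
    return tpr, fpr

for g in np.unique(group):
    tpr, fpr = rates(y_true, y_pred, group == g)
    print(f"group {g}: TPR={tpr:.2f}  FPR={fpr:.2f}")
# Equalized odds holds when these numbers match across groups; large gaps mean
# the classifier makes different kinds of errors for different groups.
```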
- Matanya: are we talking about changing the algorithm to ignore the disproportion, and/or forcing a bias in the other direction to compensate?
- Londa: We don't want to naively return the existing bias - we don't want to relive the 1950s. There's the data, there's the labeling of the data, and a number of other steps. People are deciding what kind of outcome they want (which itself carries some dangers), and then tweaking the data/labeling/etc. We need to determine who will choose the values of the algorithms. Scientists? Governments? Ethics committees? Groups of informed people such as Wikimedians?
- Aaron: Talking about taking advantage of protected classes. They're an interesting way to look at biases and systems. But there are more general problems that don't involve protected classes. Do you have ideas on how we'd address this? How would we address non-fairness in a fair way? E.g. non-native English speakers vs. native speakers. Can we examine these problems before the individual instances each become obvious?
- Londa: Good question. I give these presentations around the world, and always give them in English. I used to have to simplify my language, but I don't anymore, because English has become so imperially distributed.
- [many questions missed :-( ]