Wikimedia Developer Summit/2017/AI ethics
Session Overview
- Title: Algorithmic dangers and transparency -- Best practices
- Day & Time: Tuesday, January 10th at 1:10 PM PST
- Room: Hawthorne
- Phabricator Task Link: https://phabricator.wikimedia.org/T147929
- Facilitator(s): Aaron Halfaker
- Note-Taker(s): Michael H, Ejegg
- Remote Moderator:
- Advocate: Pau
- Stream link: https://www.youtube.com/watch?v=myB278_QthA
- IRC back channel: #wikimedia-ai (http://webchat.freenode.net/?channels=wikimedia-ai)
Session Summary
Detailed Summary
https://etherpad.wikimedia.org/p/devsummit17-AI_ethics
Purpose
Advance our principles about what is acceptable or problematic in advanced algorithms.
Agenda
- What's an AI?
- What do we use them for?
- Why are we worried about AIs?
- Some thoughts:
- Stack protocol: One-hand (new), two-hands (continue thread)
- Gatekeeper: Please use your judgement
- What happens next?
- Victory! _o/ \o/ \o_
Style
Problem solving (discussion)
Discussion Topics
- People affected by a prediction model should have the means to talk about it and identify aspects that seem problematic.
- Training data given to models should be proper wiki artifacts so that they can be edited and discussed (see the first sketch after this list).
- How do we implement policies based on the "protected class" concept? (See the auditing sketch after this list.)
- Any AI model should account for change in the dynamics of the social system; models should reflect our values as they change.
- With vandalism tools, there is a tradeoff between educating users about why they were flagged and training vandals to be more subtle.
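
One way to make training data a proper wiki artifact, as the second item above suggests, is to keep the labels on an ordinary wiki page that the community can edit, revert, and discuss. A minimal sketch, assuming a hypothetical JSON page Project:Damage_labels.json holding {"rev_id": ..., "damaging": ...} records; the page title and record format are illustrative, not an existing convention:

```python
# Minimal sketch: load community-edited training labels from a wiki page.
# "Project:Damage_labels.json" and its record format are hypothetical.
import json
import requests

WIKI_INDEX = "https://en.wikipedia.org/w/index.php"
LABEL_PAGE = "Project:Damage_labels.json"  # hypothetical page title

def fetch_labels():
    """Fetch the raw content of the label page and parse it as JSON."""
    resp = requests.get(WIKI_INDEX,
                        params={"title": LABEL_PAGE, "action": "raw"})
    resp.raise_for_status()
    return json.loads(resp.text)

if __name__ == "__main__":
    labels = fetch_labels()
    training_set = [(rec["rev_id"], rec["damaging"]) for rec in labels]
    print(f"Loaded {len(training_set)} labeled revisions")
```

Because the labels live on a normal page, every change to the training data shows up in the page history and can be reverted or debated on the talk page, just like any other wiki edit.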
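
On the "protected class" question, one concrete check a policy could mandate is auditing whether the model's error rates differ across a protected boundary, for example anonymous vs. registered editors. A minimal sketch with made-up records and a hypothetical flagging threshold; a real audit would use held-out labeled data:

```python
# Minimal sketch: compare false-positive rates across a protected boundary
# (anonymous vs. registered editors). Records and threshold are made up.
from collections import defaultdict

# Each record: (is_anonymous, actually_damaging, model_score)
records = [
    (True,  False, 0.81),  # good-faith anon edit the model would flag
    (True,  False, 0.35),
    (True,  True,  0.88),
    (False, False, 0.12),
    (False, False, 0.44),
    (False, True,  0.92),
]
THRESHOLD = 0.5  # hypothetical flagging threshold

false_positives = defaultdict(int)
good_edits = defaultdict(int)

for is_anon, damaging, score in records:
    group = "anonymous" if is_anon else "registered"
    if not damaging:
        good_edits[group] += 1
        if score >= THRESHOLD:
            false_positives[group] += 1

for group in ("anonymous", "registered"):
    rate = false_positives[group] / good_edits[group]
    print(f"{group}: false-positive rate {rate:.2f}")
```

If the rates differ substantially between groups, the policy question becomes what to do about it: adjust thresholds, rebalance the training data, or surface the disparity to patrollers.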