Wikimedia Developer Summit/2017/AI ethics

Session Overview

Title: Algorithmic dangers and transparency -- Best practices
Day & Time: Tuesday, January 10th at 1:10 PM PST
Room: Hawthorne
Phabricator Task Link: https://phabricator.wikimedia.org/T147929
Facilitator(s): Aaron Halfaker
Note-Taker(s): Michael H, Ejegg
Remote Moderator:
Advocate: Pau
Stream link: https://www.youtube.com/watch?v=myB278_QthA
IRC back channel: #wikimedia-ai (http://webchat.freenode.net/?channels=wikimedia-ai)

Session Summary

Detailed Summary

https://etherpad.wikimedia.org/p/devsummit17-AI_ethics

Purpose

Advance our principles about what is acceptable or problematic in the use of advanced algorithms.

Agenda

Style

Problem solving (discussion)


Discussion Topics

  1. People affected by a prediction model should have the means to talk about it and to identify aspects that seem problematic.
  2. Training data given to models should be proper wiki artifacts so that it can be edited and discussed like any other wiki content.
  3. How do we implement policies based on the "protected class" concept? (A hypothetical auditing approach is sketched after this list.)
  4. Any AI model should account for change in the dynamics of the social system; models should reflect our values as they change. (A rough drift-monitoring sketch follows below.)
  5. With vandalism tools, there is a tradeoff between educating users about why their edits were flagged and training vandals to be more subtle.
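
One way to make the "protected class" question (topic 3) concrete is to audit a model's error rates across groups of editors, for example anonymous versus registered accounts. The sketch below is a minimal illustration in Python, not an existing ORES interface; the sample data, the group labels, and the `false_positive_rates` helper are all hypothetical.

```python
from collections import defaultdict

def false_positive_rates(predictions, group_of):
    """Per-group false-positive rates for a binary damage classifier.

    predictions: iterable of (edit_id, predicted_bad, actually_bad)
    group_of:    callable mapping an edit_id to a group label
    """
    flagged_good = defaultdict(int)  # good edits the model flagged as bad
    total_good = defaultdict(int)    # all good edits seen, per group
    for edit_id, predicted_bad, actually_bad in predictions:
        group = group_of(edit_id)
        if not actually_bad:
            total_good[group] += 1
            if predicted_bad:
                flagged_good[group] += 1
    return {g: flagged_good[g] / total_good[g] for g in total_good}

# Hypothetical labeled sample: (edit_id, model flagged it, humans judged it bad)
sample = [(1, True, False), (2, False, False), (3, True, True),
          (4, True, False), (5, False, False), (6, False, True)]
groups = {1: "anon", 2: "anon", 3: "registered",
          4: "anon", 5: "registered", 6: "registered"}

print(false_positive_rates(sample, groups.get))
# {'anon': 0.666..., 'registered': 0.0} -- a large gap signals disparate impact
```

A large gap between groups is exactly the kind of problematic-seeming aspect that topic 1 says affected people should be able to surface and discuss.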
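Topic 4 implies periodically re-checking a deployed model against fresh human judgments so that it tracks community norms as they change. Below is a rough sketch under assumed inputs: the event stream, the 30-day window size, and the `disagreement_by_window` helper are hypothetical, not part of any existing tooling.

```python
from statistics import mean

def disagreement_by_window(events, window_days=30):
    """Fraction of model/human disagreements per fixed-size time window.

    events: iterable of (day, model_says_bad, human_says_bad)
    """
    windows = {}
    for day, model_bad, human_bad in events:
        windows.setdefault(day // window_days, []).append(model_bad != human_bad)
    return {w: mean(flags) for w, flags in sorted(windows.items())}

# Hypothetical re-labeled predictions; disagreement rising across windows
# suggests the model no longer matches current norms and needs retraining
# on fresh, community-editable training data (see topic 2).
events = [(1, True, True), (5, False, False), (35, True, False),
          (40, True, False), (70, True, False), (75, False, True)]
print(disagreement_by_window(events))  # {0: 0.0, 1: 1.0, 2: 1.0}
```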