User:Tgr (WMF)/anti-abuse AI platform
Spam and other kinds of automated abuse are a major strain on the power users who handle anti-abuse tasks, even though we currently receive relatively little of it; those volunteers haven't received much help so far. If economic or political changes made spamming or otherwise attacking Wikipedia a more appealing target, it could cause serious disruption, up to the point where anonymous editing and registration of new users would have to be completely disabled. Spambots and similar attack tools are quickly evolving threats; major internet platforms handle them with some form of machine learning, while we are stuck in the early-2000s captcha-based paradigm. Not only are captchas not very effective, they are also disruptive for well-meaning users.[1]
We should build a machine learning platform that can automatically detect attacks by remembering and extrapolating request patterns (IPs, user agents, etc.) and behavioral patterns (such as mouse and keyboard input dynamics, or sequences of requests), guided by input from human spamfighters. This would free up power users' time for less mechanical tasks, build trust by showing that the Foundation helps core volunteers with their problems even when nothing is on fire at the moment, and increase editor retention by making the signup and editing experience of new, non-trusted accounts less frustrating.
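To make the idea concrete, here is a minimal sketch of such a detection loop in Python. It is illustrative only: the feature names, the example data, and the choice of a random-forest classifier are assumptions for the sketch, not a design decision. The point is simply that request-level and behavioral signals can feed a supervised model that is periodically retrained on labels supplied by human spamfighters.

```python
# Hypothetical sketch: hand-crafted request/behavior features feed a
# supervised classifier trained on labels from human spamfighters.
from dataclasses import dataclass
from sklearn.ensemble import RandomForestClassifier

@dataclass
class EditAttempt:
    requests_per_minute: float        # request-rate pattern for the source IP
    distinct_user_agents: int         # UA churn often signals botnets
    mean_keypress_interval_ms: float  # behavioral signal: typing dynamics
    mouse_path_entropy: float         # behavioral signal: pointer movement

def to_features(a: EditAttempt) -> list[float]:
    return [a.requests_per_minute, a.distinct_user_agents,
            a.mean_keypress_interval_ms, a.mouse_path_entropy]

# Toy labeled examples standing in for spamfighter feedback
# (1 = abusive, 0 = legitimate).
training = [
    (EditAttempt(120.0, 15, 2.0, 0.1), 1),  # fast, UA-churning, machine-like input
    (EditAttempt(0.5, 1, 180.0, 3.2), 0),   # slow, single UA, human-like input
    (EditAttempt(90.0, 8, 5.0, 0.3), 1),
    (EditAttempt(1.2, 1, 150.0, 2.8), 0),
]
X = [to_features(a) for a, _ in training]
y = [label for _, label in training]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def abuse_score(attempt: EditAttempt) -> float:
    """Estimated probability that the attempt is automated abuse."""
    return model.predict_proba([to_features(attempt)])[0][1]

print(abuse_score(EditAttempt(60.0, 5, 10.0, 0.5)))
```

In practice the labels would come from existing patrolling workflows (reverts, blocks, AbuseFilter hits) rather than a hand-built list, and a borderline score would trigger a challenge or human review rather than an outright block, keeping the experience unobtrusive for well-meaning users.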
- ↑ Current captchas are easily breakable with off-the-shelf OCR software and probably have a human failure rate of around 20%; see T152219#3405799 and T125132#4442590. They are also neither internationalized nor accessible.