User:Dantman/Anti-spam system

Spambots are getting smarter and smarter. They use innumerable IP addresses, bypass captchas, understand wikitext, register as users, upload images, wait until they're autoconfirmed, confirm email addresses, and even go through AbuseFilter warnings. Eventually we are going to have to give up on fighting spambots with the simple heuristics we have been using and instead handle them with machine learning that can answer the question "Is this spam?". This will require both an extension and a central service.

Service

  • A central anti-spam service is hosted with all wikis using the extension connected to it.
    • Unfortunately it takes a lot of data, both spam and not-spam, to teach a machine-learning engine what is and isn't spam. Many wikis have little activity, which means a lot of the wikis that need spam protection do not have the data necessary to teach themselves what is and isn't spam. As a result, doing machine-learning based anti-spam outside of Wikimedia (which already has bots for this) is going to require a central service that stores all the data and lets users assert what is and isn't generic spam.
    • Ideally this centrally hosted service would be funded by donations rather than becoming another closed service like Akismet.
  • Edits made to a wiki are submitted to the service, which responds saying whether the edit is spam or not-spam.
  • All submitted edits are stored as changes (title&delta) inside the spam system so they can be reviewed.
  • Users are given interfaces to review changes submitted to the service:
    • A spam / not-spam / don't-know interface that gives users random unasserted changes to review.
    • A log of recently submitted changes and the latest assertion if any.
  • When a user marks a change as spam or non-spam it is stored as a row of its own in the database (see the storage sketch after this list).
    • (change, user, is_spam?, timestamp, is_rejected?=False)
    • The latest non-rejected assertion is used as the is_spam? value for the change. But all old assertions are still kept.
    • Doing this means that after a user is banned for marking spam as not-spam, a range of (or all of) that user's assertions can be rejected automatically, making the system fall back to the remaining good assertions.
  • When a change is marked as spam or non-spam by a wiki's admins, it is sent as a report to the central service and saved.
    • Users are given an interface that allows them to accept or reject new reports.
    • A log of recent reports and their states is given so users can go over other users' assertions.
    • Accepting or rejecting a report is stored as an assertion against that report.
    • The latest non-rejected assertion against a report is used as the report's state.
    • If the report is considered accepted then it is used as an assertion against the change (i.e. spam or not spam).
    • Each wiki is given a favourability score based on the ratio of its reports that are accepted and the number of reports it has made (see the favourability sketch after this list); i.e. a wiki needs both a high ratio of accepted reports and a fair number of reports to be considered favourable.
      • If a wiki's favourability is near-perfect then new reports from it are auto-accepted.
  • Changes that have been asserted as both spam and not-spam are used to teach an engine what is spam and what isn't spam.
    • This engine is periodically rebuilt with fresh data. The rebuild happens from slave data on a machine separate from the answering service that uses the brain, so that rebuilding does not hinder the live service's performance (see the rebuild sketch after this list). After it is built it is quickly installed into the live system.
    • The web UI will probably have a footnote like "Brain last installed <date>, build time <duration>"
  • To be as efficient as possible and optimize the system we'll probably use Python with a multi-worker, gevent-based server like gunicorn (see the service sketch after this list).
  • We'll also need to decide what the best database storage is. We're going to end up storing a lot of data so we may want something that can scale horizontally. But we'll also want something supporting slave replication.
  • I'm unsure what format we'll want to use for storing changes in the database. We'll probably decide that after reviewing the engine we'll use for machine-learning.
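
As a rough illustration of the assertion storage above, here is a minimal sketch using sqlite3. The table layout and column names are assumptions rather than a decided schema; the point is the "latest non-rejected assertion wins" rule and the bulk-rejection fallback.

    import sqlite3

    conn = sqlite3.connect("antispam.db")
    conn.executescript("""
    CREATE TABLE IF NOT EXISTS assertion (
        assertion_id INTEGER PRIMARY KEY,
        change_id    INTEGER NOT NULL,              -- the submitted change being judged
        user_id      INTEGER NOT NULL,              -- who made the assertion
        is_spam      INTEGER NOT NULL,              -- 1 = spam, 0 = not-spam
        asserted_at  TEXT    NOT NULL,              -- timestamp
        is_rejected  INTEGER NOT NULL DEFAULT 0
    );
    """)

    def current_is_spam(change_id):
        """The latest non-rejected assertion decides the change's is_spam value."""
        row = conn.execute(
            "SELECT is_spam FROM assertion"
            " WHERE change_id = ? AND is_rejected = 0"
            " ORDER BY asserted_at DESC LIMIT 1",
            (change_id,),
        ).fetchone()
        return None if row is None else bool(row[0])

    def reject_assertions_by_user(user_id):
        """After a user is banned, reject their assertions; older good ones take over."""
        conn.execute("UPDATE assertion SET is_rejected = 1 WHERE user_id = ?", (user_id,))
        conn.commit()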
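
The favourability rule combines a wiki's acceptance ratio with its report volume. One hedged way to score it is sketched below; the weighting and thresholds are placeholders, not decided values.

    def favourability(accepted_reports, total_reports, min_reports=25):
        """A wiki needs both a high acceptance ratio and a fair number of reports."""
        if total_reports == 0:
            return 0.0
        ratio = accepted_reports / total_reports
        volume = min(1.0, total_reports / min_reports)  # ramps up to 1.0 once enough reports exist
        return ratio * volume

    def auto_accept(accepted_reports, total_reports, threshold=0.95):
        """Near-perfect favourability means new reports from the wiki are auto-accepted."""
        return favourability(accepted_reports, total_reports) >= threshold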
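
The rebuild step might look roughly like the following. Nothing on this page commits to a particular machine-learning engine; scikit-learn with a naive Bayes text classifier, and the brain.pkl file name, are purely illustrative assumptions.

    import pickle

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    def rebuild_brain(change_texts, labels, path="brain.pkl"):
        """change_texts: the stored deltas; labels: 1 = spam, 0 = not-spam.

        Run against slave data on a machine separate from the answering
        service, then the serialized brain is installed into the live system.
        """
        brain = make_pipeline(TfidfVectorizer(), MultinomialNB())
        brain.fit(change_texts, labels)
        with open(path, "wb") as f:
            pickle.dump(brain, f)
        return brain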
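
On the answering side, a minimal WSGI sketch that loads the latest brain and answers "is this spam?" could look like the block below; it can be run under gunicorn with gevent workers as suggested above. The /check route, the JSON field names, and the module name are assumptions.

    import json
    import pickle

    # Load the brain produced by the offline rebuild step (file name is an assumption).
    with open("brain.pkl", "rb") as f:
        brain = pickle.load(f)

    def application(environ, start_response):
        if environ["PATH_INFO"] == "/check" and environ["REQUEST_METHOD"] == "POST":
            size = int(environ.get("CONTENT_LENGTH") or 0)
            payload = json.loads(environ["wsgi.input"].read(size))
            verdict = bool(brain.predict([payload["text"]])[0])
            body = json.dumps({"is_spam": verdict}).encode()
            start_response("200 OK", [("Content-Type", "application/json")])
            return [body]
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"not found"]

    # Example invocation (module name "service" is assumed):
    #   gunicorn -k gevent -w 4 service:application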

Backend to wiki

  • Every edit that is made to the wiki is submitted to the central service, and a response is expected saying whether the edit is spam or not (the exchange is sketched after this list).
  • When an edit is said to be spam it will be rejected and the user will be notified.
  • Since early on the database will not know what is and isn't spam, a configuration option can be set to make the extension give an edit a "spam?" tag instead of rejecting the edit.
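
The real extension would hook into MediaWiki's edit pipeline in PHP; the sketch below only illustrates the exchange, in Python, against the hypothetical /check endpoint from the service sketch above. The service URL, field names, and configuration flag are assumptions.

    import json
    from urllib import request

    SERVICE_URL = "https://antispam.example.org/check"  # placeholder URL
    TAG_INSTEAD_OF_REJECT = True                        # the "spam?" tag mode described above

    def on_edit(title, text, user):
        req = request.Request(
            SERVICE_URL,
            data=json.dumps({"title": title, "text": text, "user": user}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with request.urlopen(req) as resp:
            verdict = json.load(resp)
        if not verdict["is_spam"]:
            return "save"
        # Early on the service won't know much, so a config option can tag the
        # edit as "spam?" instead of rejecting it outright.
        return "tag" if TAG_INSTEAD_OF_REJECT else "reject"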

Wiki UI

  • Users can flag/report edits made to the wiki as spam. This will make them show up in a queue that administrators can deal with.
  • When an edit is judged by the service to be spam, the wiki keeps all the data needed for the whole edit around (see the holding-table sketch after this list).
    • This data is stored in a special table and old entries are purged from it.
    • This data is exposed on a special page and in part acts as a log of recent spam coming into the wiki, so admins can see how much spam there is.
    • Additionally this data also helps administrators find edits being marked as spam which are not spam.
  • Entries sitting inside this table of spam-marked edits can also be reported as not-spam, allowing administrators to deal with false positives.
  • When a user's edit is marked as spam by the service they get a message telling them that their edit was considered spam. Below this they are given a button that allows them to quickly report the edit as non-spam.
  • When an admin goes over an item on the spam page they have two or three options: "Mark as not-spam and accept" to mark the edit as non-spam and attempt to apply the edit to the article if it can still be applied; "Mark as not-spam but reject" to mark the edit as non-spam but not apply it to the wiki; and, if they are dealing with a report, "Dismiss as spam" to reject reports claiming that spam is non-spam.
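
As a rough sketch of the wiki-side holding table mentioned above, again using sqlite3 for illustration; the table and column names are assumptions, and the purge window is a placeholder.

    import sqlite3
    import time

    db = sqlite3.connect("wiki_spam_queue.db")
    db.execute("""
    CREATE TABLE IF NOT EXISTS spam_held_edit (
        held_id    INTEGER PRIMARY KEY,
        page_title TEXT    NOT NULL,
        edit_text  TEXT    NOT NULL,   -- everything needed to re-apply the edit later
        user_name  TEXT    NOT NULL,
        held_at    INTEGER NOT NULL
    )
    """)

    def hold_edit(title, text, user):
        """Keep the full edit around after the service judges it to be spam."""
        db.execute(
            "INSERT INTO spam_held_edit (page_title, edit_text, user_name, held_at)"
            " VALUES (?, ?, ?, ?)",
            (title, text, user, int(time.time())),
        )
        db.commit()

    def purge_old_entries(max_age_days=30):
        """Old entries are purged from the table; 30 days is just a placeholder."""
        cutoff = int(time.time()) - max_age_days * 86400
        db.execute("DELETE FROM spam_held_edit WHERE held_at < ?", (cutoff,))
        db.commit()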