Help:New filters for edit review/Quality and Intent Filters/ha
Note: When you edit this page, you agree to release your contribution under CC0. See Public Domain Help Pages for more information.
New filters for edit review introduces two filter groups—Contribution Quality and User Intent—that work differently from other edit-review filters. The filters in these groups offer probabilistic predictions about, respectively, whether or not edits are likely to contain problems and whether the users who made them were acting in good faith.
Additionally, the language-agnostic revert risk model, enabled in 2024, provides a prediction about how likely an edit is to require reverting.
Knowing a bit about how these unique tools work will help you use them more effectively.
These filters are only available on certain wikis.
Based on machine learning
The predictions that make the Quality and Intent filters possible are calculated by ORES, a machine learning program trained on a large set of edits previously scored by human editors. Machine learning is a powerful technology that lets machines replicate some limited aspects of human judgement.
The Quality and Intent filters are available only on wikis where the “damaging” and “good faith” ORES “models” are supported. The ORES “damaging” model powers Quality predictions, while its “good-faith” model powers Intent.
Enabling ORES requires volunteers to score edits on the relevant wiki. This page explains the process and how you can get it started on your wiki.
The language-agnostic revert risk model supports all language Wikipedias and does not require manual training by volunteers.
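If you are curious about the raw predictions behind these filters, you can query the scores directly. The sketch below is a minimal example, assuming the public ORES v3 scoring endpoint is still reachable for your wiki (English Wikipedia is used here) and using a placeholder revision ID; the revert risk model is served from a separate API and is not shown.

```python
# Minimal sketch: fetch the "damaging" and "goodfaith" probabilities that the
# Quality and Intent filters are built on. Assumes the public ORES v3 scoring
# endpoint is available for "enwiki"; the revision ID is only a placeholder.
import requests

ORES_URL = "https://ores.wikimedia.org/v3/scores/enwiki/"
rev_id = 123456789  # hypothetical revision ID

response = requests.get(
    ORES_URL,
    params={"models": "damaging|goodfaith", "revids": rev_id},
    timeout=10,
)
response.raise_for_status()
scores = response.json()["enwiki"]["scores"][str(rev_id)]

# Each model returns a probability; the filters translate these probabilities
# into levels such as "May have problems" or "Very likely good faith".
print("P(damaging):  ", scores["damaging"]["score"]["probability"]["true"])
print("P(good faith):", scores["goodfaith"]["score"]["probability"]["true"])
```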
Choosing the right tool
Looking at the Quality, Intent, and Revert Risk filters, you may notice something different about them. Unlike filters in other groups, the various options don’t target different edit properties. Instead, many of them target the same property, but offer different levels of accuracy.
Why would anyone choose to use a tool that's less accurate? Because such accuracy can come at a cost.
Increase prediction probability (higher ‘precision’)

The more "accurate" filters on the menu return the highest proportion of correct predictions versus incorrect ones and, therefore, the fewest false positives. (In technical language, these filters have higher "precision".) When searching, they set a higher bar for probability. The downside is that they return a smaller proportion of what you are looking for.
- Example: The Very likely have problems filter is the most accurate of the Quality filters. Performance varies from wiki to wiki, but on English Wikipedia its predictions are right more than 90% of the time. The tradeoff is that this filter finds only about 10% of all the problem edits in a given set, because it passes over problems that are harder to detect. The problems this filter finds will often include obvious vandalism.
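To make the precision tradeoff concrete, here is a small worked calculation. The counts are invented to roughly match the percentages quoted above; they are illustrative, not measured.

```python
# Illustrative precision calculation for a high-precision filter such as
# "Very likely have problems". Counts are made up to roughly match the figures
# quoted above (over 90% precision, about 10% recall on English Wikipedia).
flagged_edits = 100          # edits the filter returns
true_problems_flagged = 92   # of those, edits that really have problems
all_problem_edits = 1000     # problem edits in the whole set under review

precision = true_problems_flagged / flagged_edits    # 0.92 -> most hits are real
recall = true_problems_flagged / all_problem_edits   # 0.092 -> but few problems found

print(f"precision: {precision:.0%}, recall: {recall:.1%}")
```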
Find more of your target (higher ‘recall’)
If your priority is to find all or most of your target, then you want a broader, less accurate filter. These find more of what they are looking for by setting the probability bar lower. The tradeoff here is that they return more false positives. (In technical language, these filters have higher "recall", defined as the percentage of the things you are looking for that your query actually finds.)
- Example: The May have problems filter is the broadest Quality filter. Performance varies on different wikis, but on English Wikipedia it catches about 82% of problem edits. On the downside, this filter is right only about 15% of the time.
- If 15% doesn’t sound very helpful, consider that problem edits actually occur at a rate of fewer than 5 in 100—or 5%. So 15% is a 3x boost over random. And of course, patrollers don’t sample randomly; they’re skilled at using various tools and clues to increase their hit rates. Combined with those techniques, May have problems provides a significant edge.
(As noted above, ORES performs differently on different wikis, which means that some are less subject to the tradeoffs just discussed than others. On Polish Wikipedia, for example, the Likely have problems filter captures 91% of problem edits, compared to 34% with the corresponding filter on English Wikipedia. Because of this, Polish Wikipedia does not need—or have—a broader May have problems filter.)
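The "3x boost over random" reasoning above can be checked with a quick calculation. The figures are the approximate English Wikipedia numbers quoted in this section, used as round values.

```python
# Why 15% precision is still useful: compare it with the base rate of problem
# edits. Approximate English Wikipedia figures quoted in this section.
base_rate = 0.05         # about 5 in 100 edits have problems
filter_precision = 0.15  # "May have problems" is right about 15% of the time
filter_recall = 0.82     # and it catches about 82% of all problem edits

boost_over_random = filter_precision / base_rate
print(f"A filtered edit is {boost_over_random:.0f}x more likely to be a "
      f"problem than one sampled at random.")  # -> 3x
```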
Get the best of both worlds (with highlighting)

The filtering system is designed to let users get around the tradeoffs described above. You can do this by filtering broadly while Highlighting the information that matters most.
To use this strategy, it’s helpful to understand that the more accurate filters, like Very likely have problems, return results that are a subset of the less accurate filters, such as May have problems. In other words, all “Very likely” results are also included in the broader May have problems. (The diagram above illustrates this concept.)
- Example: Find almost all damage while emphasizing the worst/most likely:
- With the default settings loaded,
- Check the broadest Quality filter, May have problems.
- At the same time, highlight (without checking the filter boxes) Likely have problems in yellow and Very likely have problems in red.
- Because you are using the broadest Quality filter, your results will include most problem edits (high “recall”). But by visually scanning for the yellow, red and orange (i.e., blended red + yellow) bands, you will easily be able to pick out the most likely problem edits and address them first. (Find help on using highlights without filtering.)
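One way to picture why the narrower filters return a subset of the broader ones is to think of the filter levels as nested cuts on the same damaging probability. The threshold values in this sketch are invented for illustration; the real thresholds are configured per wiki and differ from these.

```python
# Illustration of the subset relationship between Quality filter levels,
# modelled as thresholds on a single "damaging" probability. The threshold
# values are invented for this sketch, NOT the real per-wiki configuration.
THRESHOLDS = {
    "May have problems": 0.15,          # lowest bar -> broadest filter
    "Likely have problems": 0.60,
    "Very likely have problems": 0.90,  # highest bar -> narrowest filter
}

def matching_filters(damaging_probability: float) -> list[str]:
    """Return every filter level a given edit would fall into."""
    return [name for name, threshold in THRESHOLDS.items()
            if damaging_probability >= threshold]

# An edit that clears the highest bar necessarily clears the lower ones too,
# which is why highlighting narrow filters inside a broad one works.
print(matching_filters(0.95))  # all three levels
print(matching_filters(0.40))  # only "May have problems"
```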
Find the good (and reward it)

Good faith is easy to find, literally! So are good edits.
The Very likely good faith filter and the Very likely good (Quality) filter give you new ways to find and encourage users who are working to improve the wikis. For example, you might use the Very likely good filter in combination with the Newcomers filter to thank new users for their good work.
- Example: Thank good-faith new users
- Clear the filters by clicking the Trashcan. Then select the Page edits and Human (not bot) filters.
- Check the Quality filter Very likely good.
- Check the User Registration and Experience filters Newcomers and Learners (this has the hidden effect of limiting your results to registered users).
- Highlight the Newcomers filter, in green.
- All edits in your results will be good edits by Newcomers (users with fewer than 10 edits and 4 days of activity) and Learners (users with fewer than 500 edits and 30 days of activity). The green highlight lets you easily distinguish between the two.
Good is everywhere!
The “good” filters mentioned above are both accurate and broad, meaning they aren’t subject to the tradeoffs described in the previous section (they combine high “precision” with high “recall”). These filters are correct about 99% of the time and find well over 90% of their targets. How can they do that?
The happy answer is that the “good” filters perform so well because good is more common than bad. That is, good edits and good faith are much, much more plentiful than their opposites—and therefore easier to find. It may surprise some patrollers to hear this, but on English Wikipedia, for example, one out of every 20 edits has problems, and only about half those problematic edits are intentional vandalism.
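A quick calculation shows why abundance makes the "good" filters look so strong. The numbers below are rough, based on the figures cited in this section, and are only meant to illustrate the effect of the base rate.

```python
# Why finding "good" is easy: good edits dominate the stream. Rough figures
# from this section: about 1 in 20 edits (5%) has problems, so ~95% are fine.
total_edits = 10_000
good_edits = int(total_edits * 0.95)      # 9,500 problem-free edits
problem_edits = total_edits - good_edits  #   500 problem edits

# Even if the filter wrongly includes 1 in 10 problem edits, its precision
# barely suffers, because the pool it draws from is overwhelmingly good.
good_found = int(good_edits * 0.93)          # finds well over 90% of good edits
false_positives = int(problem_edits * 0.10)  # problem edits mistakenly included

precision = good_found / (good_found + false_positives)
recall = good_found / good_edits
print(f"precision: {precision:.1%}, recall: {recall:.0%}")  # ~99% / ~93%
```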
Filters list
On wikis where the Quality and Intent filters are deployed, some filters may be missing because the predictions there are of higher quality: the better ORES performs on a wiki, the fewer filter levels are needed.
Contribution quality predictions
- Very likely good
- Highly accurate at finding almost all problem-free edits.
- May have problems
- Finds most flawed or damaging edits but with lower accuracy.
- Likely have problems
- With high accuracy, finds most problem edits (on wikis that do not have the broader May have problems filter).
- With medium accuracy, finds an intermediate fraction of problem edits (on wikis that do).
- Very likely have problems
- Very highly accurate at finding the most obviously flawed or damaging edits.
User intent predictions
- Very likely good faith
- Highly accurate at finding almost all good-faith edits.
- May be bad faith
- Finds most bad-faith edits, but with lower accuracy.
- Likely bad faith
- With medium accuracy, finds an intermediate fraction of bad-faith edits.
Revert risk
- Uses the language-agnostic revert risk model.
Notes
- ↑ These figures come from research that went into training the “damaging” and “good faith” ORES models on English Wikipedia. That’s to say, when volunteers scored a large, randomly drawn set of test edits, this is what they found.