Help:Nya filter för redigeringsgranskning/Filter för kvalitet och avsikter

Note: By editing this page, you agree to release your contribution under CC0. See the Public Domain help pages for more information.

New filters for edit review introduces two filter groups, Contribution quality and User intent, which work differently from other edit-review filters. Filters in these groups offer probabilistic predictions about whether edits contain problems and about whether the users who made those edits were acting in good faith.

Additionally, the language-agnostic revert risk model, enabled in 2024, provides a prediction about how likely an edit is to require reverting.

Knowing a bit about how these unique tools work will help you use them more effectively.

These filters are available only on certain wikis.

Based on machine learning

The predictions that power the Quality and Intent filters are calculated by ORES, a machine learning program trained on a large set of edits previously scored by human editors. Machine learning is a powerful technology that enables machines to reproduce some limited aspects of human judgment.

The Quality and Intent filters are available only on wikis where the ORES "damaging" and "good faith" models are supported. The ORES "damaging" model powers the Quality predictions, while the "good faith" model powers the Intent predictions.
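To make the two models concrete, the sketch below shows how their probabilities could be read out of a score lookup. The nested structure mirrors the shape of an ORES v3 scores response, but the revision ID and all probability values here are invented for illustration.

```python
# Illustrative sketch: extracting "damaging" and "goodfaith" probabilities
# from an ORES-style score response. The structure mirrors the ORES v3
# scores API; the revision ID and probability values are invented.
sample_response = {
    "enwiki": {
        "scores": {
            "123456": {
                "damaging": {
                    "score": {
                        "prediction": False,
                        "probability": {"false": 0.92, "true": 0.08},
                    }
                },
                "goodfaith": {
                    "score": {
                        "prediction": True,
                        "probability": {"false": 0.03, "true": 0.97},
                    }
                },
            }
        }
    }
}

def extract_probabilities(response, wiki, rev_id):
    """Return (P(damaging), P(goodfaith)) for one revision."""
    scores = response[wiki]["scores"][rev_id]
    p_damaging = scores["damaging"]["score"]["probability"]["true"]
    p_goodfaith = scores["goodfaith"]["score"]["probability"]["true"]
    return p_damaging, p_goodfaith

p_damaging, p_goodfaith = extract_probabilities(sample_response, "enwiki", "123456")
print(p_damaging, p_goodfaith)
```

The "damaging" probability drives the Quality filter levels, and the "goodfaith" probability drives the Intent levels.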

Enabling ORES requires volunteers to score edits on the relevant wiki. This page explains the process and how to get started on your wiki.

The language-agnostic revert risk model supports all language Wikipedias and does not require manual training by volunteers.

Choosing the right tool

Looking at the Quality, Intent, and Revert Risk filters, you may notice something different about them. Unlike filters in other groups, the various options don’t target different edit properties. Instead, many of them target the same property, but offer different levels of accuracy.

Why would anyone choose to use a tool that's less accurate? Because such accuracy can come at a cost.

Get more accurate predictions (higher ‘precision’)

This conceptual diagram illustrates how the Quality filters relate to one another on many wikis (performance varies).
As you can see, the Har mycket troligt problem filter captures results composed almost entirely of problem edits (high precision). But it captures only a small portion of all problem edits (low recall). Notice how everything in Har mycket troligt problem (and Har troligtvis problem) is also included in the broader Kan ha problem, which provides high recall but low precision (because it returns a high percentage of problem-free edits).
You may be surprised to see that Kan ha problem overlaps with Mycket troligt bra. Both filters cover the indeterminate zone between problem and problem-free edits in order to catch more of their targets (broader recall).
For space reasons, the diagram doesn't accurately reflect scale.

The more “accurate” filters on the menu return a higher percentage of correct versus incorrect predictions and, consequently, fewer false positives. (In the lingo of pattern recognition, these filters have a higher “precision”.) They achieve this accuracy by being narrower, stricter. When searching, they set a higher bar for probability. The downside of this is that they return a smaller percentage of their target.

Example: The Har mycket troligt problem filter is the most accurate of the Quality filters. Performance varies from wiki to wiki, but on English Wikipedia its predictions are right more than 90% of the time. The tradeoff is that this filter finds only about 10% of all the problem edits in a given set, because it passes over problems that are harder to detect. The problems this filter finds will often include obvious vandalism.

Find more of your target (higher ‘recall’)

If your priority is finding all or most of your target, then you’ll want a broader, less accurate filter. These find more of what they’re looking for by setting the bar for probability lower. The tradeoff here is that they return more false positives. (In technical parlance, these filters have higher “recall”, defined as the percentage of the stuff you’re looking for that your query actually finds.)

Example: The Kan ha problem filter is the broadest Quality filter. Performance varies on different wikis, but on English Wikipedia it catches about 82% of problem edits. On the downside, this filter is right only about 15% of the time.
If 15% doesn’t sound very helpful, consider that problem edits actually occur at a rate of fewer than 5 in 100—or 5%. So 15% is a 3x boost over random. And of course, patrollers don’t sample randomly; they’re skilled at using various tools and clues to increase their hit rates. Combined with those techniques, Kan ha problem provides a significant edge.
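The arithmetic behind the "3x boost" claim can be checked directly, using the English Wikipedia figures quoted above (roughly 5% base rate of problem edits, 82% recall and 15% precision for Kan ha problem). Counts below are per 1,000 hypothetical edits.

```python
# Rough arithmetic behind the "3x boost" claim, using the English
# Wikipedia figures quoted above. Counts are per 1,000 hypothetical edits.
total_edits = 1000
base_rate = 0.05                          # ~5% of edits have problems
problem_edits = total_edits * base_rate   # ~50 problem edits

recall = 0.82     # "Kan ha problem" catches ~82% of problem edits
precision = 0.15  # ...but only ~15% of its hits are real problems

caught = problem_edits * recall   # ~41 problem edits found
flagged = caught / precision      # ~273 edits flagged in total
lift = precision / base_rate      # improvement over random sampling

print(f"flagged: {flagged:.0f}, real problems among them: {caught:.0f}")
print(f"lift over random: {lift:.0f}x")
```

So a patroller reviewing the filter's ~273 flagged edits finds ~41 of the 50 problems, instead of sampling 273 random edits and finding about 14.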

(As noted above, ORES performs differently on different wikis, which means that some are less subject to the tradeoffs just discussed than others. On Polish Wikipedia, for example, the Har troligtvis problem filter captures 91% of problem edits, compared to 34% with the corresponding filter on English Wikipedia. Because of this, Polish Wikipedia does not need—or have—a broader Kan ha problem filter.)

Get the best of both worlds (with highlighting)

You can get the best of both worlds by filtering broadly but highlighting using more accurate functions. Here, the user casts a wide net for damage by checking the broadest Quality filter, Kan ha problem. At the same time, she identifies the worst or most obvious problems by highlighting (but not filtering with) Har troligtvis problem, Har mycket troligt problem and Har troligtvis onda avsikter.

The filtering system is designed to let users get around the tradeoffs described above. You can do this by filtering broadly while Highlighting the information that matters most.

To use this strategy, it’s helpful to understand that the more accurate filters, like Har mycket troligt problem, return results that are a subset of the less accurate filters, such as Kan ha problem. In other words, all “Very likely” results are also included in the broader Kan ha problem. (The diagram above illustrates this concept.)
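One way to picture this subset relationship is as nested cutoffs on a single damage score. The threshold values and scores below are purely hypothetical; the point is only that a stricter cutoff always selects a subset of what a looser cutoff selects.

```python
# Hypothetical sketch: filter levels as nested cutoffs on one damage score.
# The threshold and score values are invented; only the nesting matters.
THRESHOLDS = {
    "Kan ha problem": 0.15,              # broad: low bar, high recall
    "Har troligtvis problem": 0.60,      # middle ground
    "Har mycket troligt problem": 0.90,  # strict: high bar, high precision
}

edits = {"a": 0.05, "b": 0.20, "c": 0.65, "d": 0.95}  # made-up damage scores

def matches(level, scores):
    """Return the set of edits a given filter level would select."""
    cutoff = THRESHOLDS[level]
    return {edit for edit, score in scores.items() if score >= cutoff}

broad = matches("Kan ha problem", edits)
strict = matches("Har mycket troligt problem", edits)
assert strict <= broad  # every "very likely" result is also a "may have" result
print(sorted(broad), sorted(strict))
```

Because the strict filter's results are always contained in the broad filter's results, highlighting the strict levels inside a broad filtered view loses nothing.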

Example: Find almost all damage while emphasizing the worst/most likely:
  1. With the default settings loaded, check the broadest Quality filter, Kan ha problem.
  2. At the same time, highlight (without checking the filter boxes) Har troligtvis problem, in yellow, and Har mycket troligt problem, in red.
Because you are using the broadest Quality filter, your results will include most problem edits (high “recall”). But by visually scanning for the yellow, red and orange (i.e., blended red + yellow) bands, you will easily be able to pick out the most likely problem edits and address them first. (Find help on using highlights without filtering.)

Find the good (and reward it)

This reviewer wants to thank new users who are making positive contributions. The Mycket troligt bra filter isolates problem-free edits with 99% accuracy. Filtering for Nykomlingar and Nybörjare limits the search to these two experience levels, while applying a green highlight to Nykomlingar (only) enables the reviewer to distinguish at a glance between the two levels.

Good faith is easy to find, literally! So are good edits.

The Har mycket troligt goda avsikter filter and the Mycket troligt bra (Quality) filter give you new ways to find and encourage users who are working to improve the wikis. For example, you might use the Mycket troligt bra filter in combination with the Nykomlingar filter to thank new users for their good work.

Example: Thank good-faith new users
  1. Clear the filters by clicking the Trashcan. Then select the Sidredigeringar and Mänskliga (inte bot) filters.
  2. Check the Quality filter Mycket troligt bra.
  3. Check the User Registration and Experience filters Nykomlingar and Nybörjare (this has the hidden effect of limiting your results to registered users).
  4. Highlight the Nykomlingar filter, in green.
All edits in your results will be good edits by Newcomers (users with fewer than 10 edits and 4 days of activity) and Learners (users with fewer than 500 edits and 30 days of activity). The green highlight lets you easily distinguish between the two.
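The experience cutoffs in parentheses can be written as a small rule. The function below encodes the edit-count and account-age limits stated above; how boundary and mixed cases are resolved, and the third label for everyone else, are my assumptions for illustration.

```python
# Sketch of the experience levels described above. The cutoffs come from
# the text; the boundary handling and the "Experienced" fallback label
# are assumptions made for this illustration.
def experience_level(edit_count, days_active):
    """Classify a registered user by edit count and days of activity."""
    if edit_count < 10 and days_active < 4:
        return "Nykomlingar"   # Newcomers
    if edit_count < 500 and days_active < 30:
        return "Nybörjare"     # Learners
    return "Experienced"       # everyone else (placeholder label)

print(experience_level(3, 1))     # a brand-new account
print(experience_level(120, 14))  # some history, still learning
```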

Good is everywhere!

The “good” filters mentioned above are both accurate and broad, meaning they aren’t subject to the tradeoffs described in the previous section (they combine high “precision” with high “recall”). These filters are correct about 99% of the time and find well over 90% of their targets. How can they do that?

The happy answer is that the “good” filters perform so well because good is more common than bad. That is, good edits and good faith are much, much more plentiful than their opposites—and therefore easier to find. It may surprise some patrollers to hear this, but on English Wikipedia, for example, one out of every 20 edits has problems, and only about half those problematic edits are intentional vandalism.
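This base-rate effect can be checked with a quick Bayes-style calculation. The ~95% share of good edits comes from the text above; the sensitivity and specificity figures are invented to represent a merely decent classifier.

```python
# Why "good" filters can be both broad and accurate: a base-rate sketch.
# The 95% share of good edits comes from the text; the 0.90
# sensitivity/specificity figures are assumed for illustration.
p_good = 0.95       # base rate of good edits
sensitivity = 0.90  # P(filter says good | edit is good), assumed
specificity = 0.90  # P(filter says bad | edit is bad), assumed

true_positives = p_good * sensitivity
false_positives = (1 - p_good) * (1 - specificity)
precision = true_positives / (true_positives + false_positives)

print(f"precision when predicting 'good': {precision:.3f}")
```

Even this modest classifier is right over 99% of the time when it predicts "good", simply because good edits dominate the pool; predicting "bad" against a 5% base rate is the hard direction.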



On wikis where the Quality and Intent filters are enabled, some of the filters described may be missing because predictions there are more accurate. The better ORES performs on a wiki, the fewer filter levels are needed.

Contribution quality predictions

Mycket troligt bra
Highly accurate at finding almost all problem-free edits.
Kan ha problem
Finds most flawed or damaging edits, but with lower accuracy.
Har troligtvis problem
With medium accuracy, finds an intermediate fraction of problem edits.
Har mycket troligt problem
Very high accuracy at finding the most obviously flawed or damaging edits.

User intent predictions

Har mycket troligt goda avsikter
Highly accurate at finding almost all good-faith edits.
Kan ha onda avsikter
Finds most bad-faith edits, but with lower accuracy.
Har troligtvis onda avsikter
With medium accuracy, finds an intermediate fraction of bad-faith edits.

Revert risk

Filter levels TBD. Uses the Language-agnostic revert risk model.


  1. These figures come from research that went into training the “damaging” and “good faith” ORES models on English Wikipedia. That’s to say, when volunteers scored a large, randomly drawn set of test edits, this is what they found.