Warning: The ORES infrastructure is being deprecated by the Machine Learning team. Please check wikitech:ORES for more info.
This talk page is intended to be used to discuss the development of ORES. For the bots/tools that use ORES, please contact their owners/maintainers. See ORES/Applications for a list of tools/bots that use ORES.
Our wikis are available in 25+ languages and have thousands of articles. We are using MediaWiki and many of the extensions used by the Foundation. Our most used wiki (the English version) has hundreds of edits per day.
This is a great reading spot!! Gotta love the wiki, I read from the wiki just about every night!!
w:Wikipedia talk:IPs are human too#Outdated study might be of interest to you? (P.S.: No preview button here?) --143.176.30.65 21:23, 5 June 2021 (UTC)
Imagine you’ve just spent 10 minutes working on what you earnestly thought would be a helpful edit to your favorite article. You click that bright blue “Publish changes” button for the very first time, and you see your edit go live! Weeee! But 10 seconds later, you refresh the page and discover that your edit has been reverted.
Actually, an AI system called ORES has contributed to the judgement of hundreds of thousands of edits on Wikipedia. ORES is a machine learning system that automatically predicts edit and article quality to support editing tools in Wikipedia.
I'm exploring strategies for tuning ORES predictions about quality and vandalism to your needs, and I'd like to work with you. I am looking for editors to discuss the values of Wikipedia as they relate to ORES.
If you are interested in participating, please fill out the short survey below. Thanks! https://docs.google.com/forms/d/e/1FAIpQLSe7itK8GM6Y7vgWdtcFXXnsJ8iWe9ysjQI8S1KVtomfonbkxw/viewform
We have to stay at least as supportive of beginners as we were in the beginning, when we had to grow from small to gigantic like we are now. Practically everyone has good will.
I made a minor edit to the ORES template to update the name of the team that maintains ORES. Despite refreshing the page, the changes are not reflected. Can someone look into this?
I edited both ORES/FAQ and ORES/Get support pages to mention FANDOM wikis as well.
The ORES AI service is used by the Wikimedia Foundation's projects, most notably English Wikipedia, but what about FANDOM's Unified Community Platform (UCP) wikis?
I'm curious about machine learning assisting local wiki moderators and admins, and even SOAP (formerly VSTF) members, in finding spam, vandalism, and disruptive edits.
ORES could totally be used on any wiki. Are the FANDOM wikis running MediaWiki? If not, there would need to be some engineering work to develop the API connectors that would allow ORES to pull data from whatever wiki platform FANDOM is running on. Either way, I think the hardest part is going to be getting a server to run it, as we couldn't host a model for FANDOM in Wikimedia's production installation. Depending on the # of edits, ORES can run on relatively minimal hardware.
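For anyone curious, here's a rough sketch of the kind of data pull such a connector does: fetching the wikitext of one revision via the standard MediaWiki Action API. This isn't ORES's actual extractor code, and the revision ID is made up for illustration; a non-MediaWiki platform would need to offer something equivalent.

// A rough sketch of the fetch an ORES connector performs: pulling the wikitext
// of one revision from the MediaWiki Action API. The revision ID below is
// hypothetical, not anything ORES hard-codes.
var API = 'https://en.wikipedia.org/w/api.php';
var params = new URLSearchParams({
  action: 'query',
  prop: 'revisions',
  revids: '123456',   // hypothetical revision ID
  rvprop: 'content',
  rvslots: 'main',
  format: 'json',
  origin: '*'         // allow anonymous cross-origin requests
});

fetch(API + '?' + params.toString())
  .then(function (response) { return response.json(); })
  .then(function (data) {
    var page = Object.values(data.query.pages)[0];
    var wikitext = page.revisions[0].slots.main['*'];
    console.log(wikitext.slice(0, 200));  // first 200 characters of the revision
  });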
Is it suggested to add keywords such as "damaging", "vandalism", or "good-faith" to your edit summaries when undoing other people's edits, to help train ORES?
We don't process edit summaries to look for such annotations. Instead, the team is developing mw:Jade, a system for explicitly saying what was going on in an edit while undoing it. They are pretty close to deploying a pilot.
As part of our work with the community wish list on SVWP, we're going to develop a gadget that gives an editor feedback using the article quality assessment. The idea is to show the quality before and after the edit. A question that has arisen is what changes to show and with what precision. Is it reasonable to show the difference for all probabilities, regardless of how small that difference is? My worry is that small changes to the probabilities may not be significant and could be misleading. Could someone give me some help with what changes are useful to show in a case like this?
I've developed a JavaScript gadget that does something very similar to what you are planning. See https://en.wikipedia.org/wiki/User:EpochFail/ArticleQuality I wonder if we could make a modification to this tool to support what you are working on.
I've been using a "weighted sum" strategy to collapse the probabilities across classes into a single value. See this paper and the following code for an overview of how it works for English Wikipedia.
// Weights for each article quality class on the English Wikipedia scale.
var WEIGHTED_CLASSES = {FA: 6, GA: 5, B: 4, C: 3, Start: 2, Stub: 1};

// Collapse an ORES articlequality score's per-class probabilities into a
// single weighted-sum value on a continuous 1-6 scale.
var weightedSum = function (score) {
  var sum = 0;
  for (var qualityClass in score.probability) {
    if (!score.probability.hasOwnProperty(qualityClass)) continue;
    var proba = score.probability[qualityClass];
    sum += proba * WEIGHTED_CLASSES[qualityClass];
  }
  return sum;
};
This function returns a number between 1 and 6 that represents the model's prediction projected on a continuous scale.
Now, how big of a change matters? That's a good question and it's a hard one to answer. I think we'll learn in practice quite quickly once we have the model for svwiki.
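One way to get a feel for that in practice: fetch the articlequality score for the parent revision and for the new revision, collapse each with weightedSum, and look at the difference. A sketch below, assuming the ORES v3 response layout; the revision IDs and the 0.05 cutoff are placeholders, not a recommendation.

// Fetch an articlequality score from the production ORES endpoint and collapse
// it with the weightedSum function above. Assumes the v3 response layout
// data[wiki].scores[revid][model].score; revision IDs are hypothetical.
function fetchWeightedSum(wiki, revid) {
  var url = 'https://ores.wikimedia.org/v3/scores/' + wiki + '/' + revid +
            '/articlequality';
  return fetch(url)
    .then(function (response) { return response.json(); })
    .then(function (data) {
      var score = data[wiki].scores[revid].articlequality.score;
      return weightedSum(score);
    });
}

Promise.all([
  fetchWeightedSum('enwiki', '123455'),  // parent revision (hypothetical)
  fetchWeightedSum('enwiki', '123456')   // edited revision (hypothetical)
]).then(function (sums) {
  var delta = sums[1] - sums[0];
  // Only surface a change above some minimum size, so tiny probability
  // wobbles aren't over-interpreted; 0.05 here is an arbitrary placeholder.
  if (Math.abs(delta) >= 0.05) {
    console.log('Estimated quality change: ' + delta.toFixed(2));
  } else {
    console.log('No meaningful quality change detected.');
  }
});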
That looks very interesting. I found (part of?) your script earlier, but I haven't had time to go figure out exactly what's going on there. I'll have a look and see what bits are reusable for this. I'd guess that the backend stuff (API interaction, weighting etc.) should be fairly similar.
I like the idea of just having one number to present to the user, along with the quality. From what I've understood, the quality levels aren't as evenly spaced on SVWP as on ENWP; it goes directly from Stub to the equivalent of B. I don't know if or how this would impact the weighting algorithm, but maybe that will become apparent once it's in use.
We can have non-linear jumps in the scale. E.g. {Stub: 1, B: 4, GA: 5, FA: 6}
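If it helps, the weight map can simply be passed in as a parameter, so each wiki can supply its own (possibly non-linear) scale. A sketch below; the svwiki labels are placeholders and would need to match whatever classes the deployed model actually emits.

// Same weighted-sum idea, but with the weight map passed in so non-linear
// scales are easy to plug in. The svwiki labels below are placeholders.
var SVWIKI_WEIGHTS = {Stub: 1, B: 4, GA: 5, FA: 6};

var weightedSumWith = function (score, weights) {
  var sum = 0;
  for (var qualityClass in score.probability) {
    if (!score.probability.hasOwnProperty(qualityClass)) continue;
    sum += score.probability[qualityClass] * (weights[qualityClass] || 0);
  }
  return sum;
};

// e.g. weightedSumWith(score, SVWIKI_WEIGHTS) for svwiki,
//      weightedSumWith(score, WEIGHTED_CLASSES) for enwiki.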
Dear all, I am not sure if this thread is still active; however, I have a student working on an interface for representing the quality of Wikidata items. I would be happy to meet and talk about it. We are based in Berlin :)
Hey folks! I've been debugging some issues with our experimental installation of ORES in Cloud VPS recently. It looks like we're getting *a lot* of traffic there. I just want to make sure that everyone knows that the production instance of ORES is stable and available at ores.wikimedia.org and that ores.wmflabs.org will be going up and down as we use it to experiment with ORES and new models we'd like to bring to production.
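If a tool is still pointed at the experimental host, switching it over is just a matter of swapping the base URL. A tiny sketch, with a made-up revision ID:

// Point tools at the stable production host; the Cloud VPS instance is for
// experiments and may go up and down. The revision ID below is hypothetical.
var ORES_BASE = 'https://ores.wikimedia.org';    // production (stable)
// var ORES_BASE = 'https://ores.wmflabs.org';   // experimental (may be down)

fetch(ORES_BASE + '/v3/scores/enwiki/123456/damaging')
  .then(function (response) { return response.json(); })
  .then(function (data) { console.log(data); });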
Hey folks,
We expect a couple of minutes of downtime while we restart a machine tomorrow (Tuesday, July 16th @ 1500 UTC).
The maintenance is done and it doesn't appear that there was any downtime.
Per https://phabricator.wikimedia.org/project/profile/2872/, JADE was for a time named "Meta-ORES". JADE stands for Judgement and Dialog Engine. Should we change the title of this page?
ORES is not Jade, and Jade is not ORES. They are related but not the same thing.
There is also a page for Jade.
But there is a request phab:T153143 that ORES query results should include JADE refutations.
OK, I see. I was trying to find some documentation for JADE; where can I find it? Thank you!