Growth/Personalized first day/Structured tasks/Add an image

This page describes work on an "add an image" structured task, which is a type of structured task that the Growth team will offer through the newcomer homepage. The Android team is also thinking about a similar task for the Wikipedia Android app using the same underlying components. Additionally, the Structured Data team is in the early stages of exploring something similar, targeted at more experienced users and benefiting from Structured Data on Commons. Discussion and updates on this page are relevant to the work of all teams.

This page contains major assets, designs, open questions, and decisions.

Most incremental updates on progress will be posted on the general Growth team updates page, with some large or detailed updates posted here.

Current status

  • 2020-06-22: initial thinking about ideas to create a simple algorithm to recommend images
  • 2020-09-08: evaluated a first attempt at a matching algorithm in English, French, Arabic, Korean, Czech, Vietnamese
  • 2020-09-30: evaluated a second attempt at a matching algorithm in English, French, Arabic, Korean, Czech, Vietnamese
  • 2020-10-26: internal engineering discussion of possible feasibility for image recommendation service
  • 2020-12-15: running initial round of user tests to start to understand whether newcomers might succeed at this task
  • 2021-01-20: Platform Engineering team begins building proof-of-concept API for image recommendations
  • 2021-01-21: Android team begins work on minimum viable version for learning purposes
  • 2021-01-28: posted user test results
  • 2021-02-04: posted summary of community discussion and coverage statistics

Summary

Structured tasks are meant to break down editing tasks into step-by-step workflows that make sense for newcomers and work well on mobile devices. The Growth team believes that introducing these new kinds of editing workflows will allow more new people to begin participating on Wikipedia, some of whom will learn to make more substantial edits and get involved with their communities. After discussing the idea of structured tasks with communities, we decided to build the first structured task: "add a link".

Even as we build that first task, we have been thinking about what a next structured task could be, and we think that adding images could be a good fit for newcomers. The idea is that a simple algorithm would recommend images from Commons to be placed on articles that have no images. To start with, it would use only existing connections that can be found in Wikidata, and newcomers would use their judgment to place the image on the article or not.

We know that there are many open questions around how this would work, and many potential ways it could go wrong. That's why we hope to hear from many community members and to have an ongoing discussion as we decide how to proceed.

Why images?

Looking for substantial contributions

When we first discussed structured tasks with community members, many pointed out that adding wikilinks is not a particularly high-value type of edit. Community members brought up ideas for how newcomers could make more substantial contributions, and one of those ideas is adding images. Wikimedia Commons contains 65 million images, yet in many Wikipedias over 50% of articles have no images. We believe that many images from Commons can make Wikipedia substantially better illustrated.

Interest from newcomers

We know that many newcomers are interested in adding images to Wikipedia. "To add an image" is a common response newcomers give on the welcome survey when asked why they created their account. We also see that one of the most frequent help panel questions is how to add images, a pattern that holds across all the wikis we work with. Though most of these newcomers probably want to add an image of their own, this hints at how engaging and exciting images can be. That makes sense, given the image-heavy platforms that newcomers already use, such as Instagram and Facebook.

Difficulty of working with images

The many help panel questions about images reflect that the process of adding them to articles is too difficult. Newcomers have to understand the difference between Wikipedia and Commons, rules around copyright, and the technical parts of inserting and captioning the image in the right place. Finding an image in Commons for an unillustrated article requires even more skills, such as knowledge of Wikidata and categories.

Success of "Wikipedia Pages Wanting Photos" campaign

 
In the Wikipedia Pages Wanting Photos campaign, 600 users added images to 85,000 pages.

The Wikipedia Pages Wanting Photos campaign (WPWP) was a surprising success: 600 users added images to 85,000 pages. They did this with the assistance of a couple of community tools that identify pages without images and suggest possible images through Wikidata. While there are important lessons to be learned about how to help newcomers succeed with adding images, this gives us confidence that users can be enthusiastic about adding images and that tools can assist them.

Taking this all together

Taking all this information together, we think it could be possible to build an "add an image" structured task that is both fun for newcomers and productive for Wikipedias.

Algorithm

Our ability to make a structured task for adding images depends on whether we can create an algorithm that generates sufficiently good recommendations. We definitely do not want to urge newcomers to add the wrong images to articles, which would cause work for patrollers to clean up after them. Therefore, trying to see if we could make a good algorithm is one of the first things we've worked on.

Logic

We have been working with the Wikimedia Research team, and so far we have been testing an algorithm that prioritizes accuracy and human judgment. Rather than using any computer vision, which can generate unexpected results, it simply aggregates existing information in Wikidata, drawing on connections made by experienced contributors. These are the three main ways that it suggests matches to unillustrated articles:

  • Look at the Wikidata item for the article. If it has an image (P18), choose that image.
  • Look at the Wikidata item for the article. If it has a Commons category associated (P373), choose an image from the category.
  • Look at the articles about the same topic in other language Wikipedias. Choose a lead image from those articles.

The algorithm also includes logic to do things like exclude images that are likely icons or that are present on an article as part of a navbox.
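To make the logic above concrete, here is a minimal sketch in Python of how the first two sources could be gathered for a single article. It uses the real Wikidata wbgetentities API; the function name and the simplified candidate format are our own illustration, and the production algorithm (including its icon and navbox filtering) is considerably more involved.

```python
import requests

WIKIDATA_API = "https://www.wikidata.org/w/api.php"

def suggest_image_candidates(qid):
    """Illustrative sketch: gather image candidates for one Wikidata item.

    Covers the first two sources described above (P18 and P373); the
    third source (lead images from sitelinked articles) would require
    an extra query per sitelink and is omitted here.
    """
    resp = requests.get(WIKIDATA_API, params={
        "action": "wbgetentities",
        "ids": qid,
        "props": "claims|sitelinks",
        "format": "json",
    }).json()
    entity = resp["entities"][qid]
    claims = entity.get("claims", {})
    candidates = []

    # Source 1: a direct image statement (P18) on the item.
    for claim in claims.get("P18", []):
        snak = claim["mainsnak"]
        if snak.get("datavalue"):
            candidates.append(("P18", snak["datavalue"]["value"]))

    # Source 2: an associated Commons category (P373); any image in
    # that category becomes a (weaker) candidate.
    for claim in claims.get("P373", []):
        snak = claim["mainsnak"]
        if snak.get("datavalue"):
            candidates.append(("P373", "Category:" + snak["datavalue"]["value"]))

    # Source 3 (omitted): for each entry in entity["sitelinks"], fetch
    # that article's lead image via its wiki's pageimages API.
    return candidates

# Example: Q42 (Douglas Adams) has a P18 image on Wikidata.
print(suggest_image_candidates("Q42"))
```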

Accuracy

As of December 2020, we've gone through two rounds of testing the algorithm, each time looking at matches to articles in six languages: English, French, Arabic, Vietnamese, Czech, and Korean. The evaluations were done by our team's ambassadors, who are native speakers of each language. Looking at 50 matches in each language, we classified them into these groups:

Classification | Explanation | Example
2 | Great match for the article, illustrating the thing that is the title of the article. | The article is "Butterfly" and the image is of a butterfly.
1 | Good match, but difficult to confirm for the article unless the user has some context, and would need a good caption. | The article is "Butterfly" and the image is of an important scientist who studies butterflies.
0 | Not a fit for the article at all. | The article is "Butterfly" and the image is of an automobile.
-1 | Image is correct for the subject, but does not match the local culture. | The article is "Butterfly" and the image is of a specific butterfly from a part of the world that has different butterflies than the local kind.
-2 | Misleading image that a newcomer could accidentally think is correct. | The article is "Butterfly" and the image is of a moth.
-3 | Page should not have an image. | Disambiguation pages, lists, or "given name" articles.

A question that runs throughout the work on an algorithm like this is: how accurate does it need to be? If 75% of matches are good, is that enough? Does it need to be 90% accurate? Or could it be as low as 50%? This depends on how good the judgment of the newcomers using it is, and how much patience they have for weak matches. We'll learn more about this when we user test the algorithm with real newcomers.

The most important outcome of the first evaluation was that we found many easy improvements to make to the algorithm, including types of articles and images to exclude. Even without those improvements, about 20-40% of matches (depending on the wiki) were "2s", meaning great matches for the article. You can see the full results and notes from the first evaluation here.

For the second evaluation, many improvements were incorporated, and the accuracy increased: between 50% and 70% of matches were "2s" (depending on the wiki). But increasing the accuracy can decrease the coverage, i.e. the number of articles for which we can make matches. Using conservative criteria, the algorithm may only be able to suggest tens of thousands of matches in a given wiki, even if that wiki has hundreds of thousands or millions of articles. We believe that that kind of volume would be sufficient to build an initial version of this feature. You can see the full results and notes from the second evaluation here.

We are continuing to make improvements to the algorithm, and in December 2020, we are trying a third evaluation, which you can follow along with here.

Coverage

The accuracy of the algorithm is clearly a very important component. Equally important is its "coverage": how many image matches it can make. Accuracy and coverage tend to be inversely related: the more accurate an algorithm, the fewer suggestions it will make (because it only makes suggestions when it is confident). We need to answer these questions: is the algorithm able to provide enough matches that it is worthwhile to build a feature with it? Would it be able to make a substantial impact on wikis? We looked at 22 Wikipedias to get a sense of the answers. The table is below these summary points:

  • The coverage numbers reflected in the table seem to be sufficient for a first version of an "add an image" feature. There are enough candidate matches in each wiki such that (a) users won't run out, and (b) a feature could make a substantial impact on how illustrated a wiki is.
  • Wikis range from 20% unillustrated (Serbian) to 69% unillustrated (Vietnamese).
  • We can find between 7,000 (Bengali) and 155,000 (English) unillustrated articles with match candidates. In general, this is a sufficient volume for a first version of the task, so that users have plenty of matches to do. In some of the sparser wikis, like Bengali, it might get into small numbers once users narrow to topics of interest. That said, Bengali only has about 100,000 total articles, so we would be proposing matches for 7% of them, which is substantial.
  • In terms of how big of an improvement in illustrations we could make to the wikis with this algorithm, the ceiling ranges from 1% (cebwiki) to 9% (trwiki). That is the overall percentage of additional articles that would wind up with illustrations if every match is good and is added to the wiki.
  • The wikis with the lowest percentage of unillustrated articles for which we can find matches are arzwiki and cebwiki, which both have a high volume of bot-created articles. This makes sense because many of those articles are of specific towns or species that wouldn't have images in Commons. But because those wikis have so many articles, there are still tens of thousands for which the algorithm has matches.
  • In the future, we hope that improvements to the image matching algorithm, to MediaSearch, or to workflows for uploading, captioning, and tagging images will yield more candidate matches.
Wiki | Total articles | Unillustrated articles | Unillustrated percent | Unillustrated with image match | Percent of unillustrated with match
enwiki | 6,199,587 | 2,932,613 | 47% | 154,508 | 5%
trwiki | 382,825 | 151,620 | 40% | 35,561 | 23%
bnwiki | 99,172 | 33,642 | 34% | 6,921 | 21%
frwiki | 2,273,610 | 952,994 | 42% | 94,594 | 10%
ruwiki | 1,680,385 | 584,290 | 35% | 60,415 | 10%
fawiki | 755,709 | 304,253 | 40% | 55,382 | 18%
arwiki | 1,080,564 | 581,710 | 54% | 59,551 | 10%
dewiki | 2,506,229 | 1,190,517 | 48% | 110,771 | 9%
ptwiki | 1,048,255 | 388,605 | 37% | 79,483 | 20%
hewiki | 282,232 | 73,261 | 26% | 14,453 | 20%
cswiki | 467,573 | 182,177 | 39% | 37,300 | 20%
kowiki | 526,990 | 274,338 | 52% | 48,417 | 18%
plwiki | 1,441,429 | 560,334 | 39% | 71,456 | 13%
ukwiki | 1,058,563 | 365,209 | 35% | 51,154 | 14%
svwiki | 3,514,965 | 1,686,664 | 48% | 91,337 | 5%
huwiki | 479,215 | 170,936 | 36% | 26,559 | 16%
euwiki | 364,458 | 105,412 | 29% | 21,481 | 20%
hywiki | 278,487 | 96,729 | 35% | 13,531 | 14%
arzwiki | 1,171,440 | 759,418 | 65% | 32,956 | 4%
srwiki | 640,678 | 126,102 | 20% | 27,326 | 22%
viwiki | 1,259,538 | 867,672 | 69% | 83,785 | 10%
cebwiki | 5,377,763 | 1,357,405 | 25% | 61,839 | 5%
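As a worked example of how the columns relate, here is the trwiki row recomputed; the numbers are taken directly from the table above.

```python
# trwiki row from the table above:
total, unillustrated, with_match = 382_825, 151_620, 35_561

print(f"{unillustrated / total:.0%}")       # 40% of articles are unillustrated
print(f"{with_match / unillustrated:.0%}")  # 23% of those have a match candidate
print(f"{with_match / total:.1%}")          # ~9% ceiling on newly illustrated articles
```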

MediaSearch

As mentioned above, the Structured Data team is exploring using the MediaSearch algorithm to increase coverage and yield more candidate matches.

MediaSearch works by combining traditional text-based search and structured data to provide relevant results for searches in a language-agnostic way. By using the Wikidata statements added to images as part of Structured Data on Commons as a search ranking input, MediaSearch is able to take advantage of aliases, related concepts, and labels in multiple languages to increase the relevance of image matches. You can find more information about how MediaSearch works here.

As of February 2021, the team is experimenting with how to provide a confidence score for MediaSearch matches, which the image recommendations algorithm can consume to determine whether a match from MediaSearch is of sufficient quality to use in image matching tasks. We want to be sure that users are confident in the recommendations that MediaSearch provides before incorporating them into the feature.

The Structured Data team is also exploring and prototyping a way for user-run bots to use the results generated by both the image recommendations algorithm and MediaSearch to automatically add images to articles. This will be an experiment in bot-heavy wikis, in partnership with community bot writers. You can learn more about that effort or express interest in participating in the Phabricator task.

Questions and discussion

Open questions

Images are such an important and visible part of the Wikipedia experience. It is critical that we think hard about how a feature enabling the easy adding of images would work, what the potential pitfalls might be, and what the implications would be for community members. To that end, we have many open questions, and we want community members to raise more.

  • Will our algorithm be sufficiently accurate such that plenty of good matches are provided?
  • What metadata from Commons and the unillustrated article do newcomers need in order to make a decision about whether to add the image?
  • Will newcomers have sufficiently good judgment when looking at recommendations?
  • Will newcomers who don't read English be equally able to make good decisions, given that much of Commons metadata is in English?
  • Will newcomers be able to write good captions to go along with images that they place in the articles?
  • How much should newcomers judge images based on their "quality" as opposed to their "relevance"?
  • Will newcomers think this task is interesting? Fun? Difficult? Easy? Boring?
  • How exactly should we determine which articles have no images?
  • Where in the unillustrated article should the image be placed? Is it sufficient to put it at the top of the article?
  • How can we be mindful of potential bias in the recommendations? For example, the algorithm may make many more matches for topics in Europe and North America.
  • Will such a workflow be a vector for vandalism? How can this be prevented?

Notes from community discussions 2021-02-04

Starting in December 2020, we invited community members to talk about the "add an image" idea in five languages (English, Bengali, Arabic, Vietnamese, Czech). The English discussion mostly took place on the discussion page here, with local language conversations on the other four Wikipedias. We heard from 28 community members, and this section summarizes some of the most common and interesting thoughts. These discussions are heavily influencing our next set of designs.

  • Overall: community members are generally cautiously optimistic about this idea. In other words, people seem to agree that it would be valuable to use algorithms to add images to Wikipedia, but that there are many potential pitfalls and ways this can go wrong, especially with newcomers.
  • Algorithm
    • Community members seemed to have confidence in the algorithm because it is only drawing on associations coded into Wikidata by experienced users, rather than some sort of unpredictable artificial intelligence.
    • Of the three sources for the algorithm (Wikidata P18, interwiki links, and Commons categories), people agreed that Commons categories are the weakest (and that Wikidata is the strongest). This has borne out in our testing, and we may exclude Commons categories from future iterations.
    • We got good advice on excluding certain kinds of pages from the feature: disambiguation pages, lists, years, and good and featured articles. We may also want to exclude biographies of living persons.
    • We should also exclude images that have a deletion template on Commons, or that have previously been removed from the Wikipedia page.
  • Newcomer judgment
    • Community members were generally concerned that newcomers would apply poor judgment and give the algorithm the benefit of the doubt. We know from our user tests that newcomers are capable of using good judgment, and we believe that the right design will encourage it.
    • In discussing the Wikipedia Pages Wanting Photos campaign (WPWP), we learned that while many newcomers were able to exhibit good judgment, some overzealous users can make many bad matches quickly, causing lots of work for patrollers. We may want to add some sort of validation to prevent users from adding images too fast, or from continuing to add images after being repeatedly reverted.
    • Most community members affirmed that "relevance" is more important than "quality" when it comes to whether an image belongs. In other words, if the only photo of a person is blurry, that is usually still better than having no image at all. Newcomers need to be taught this norm as they do the task.
    • Our interface should convey that users should move slowly and take care, as opposed to trying to get as many matches done as they can.
    • We should teach users that images should be educational, not merely decorative.
  • User interface
    • Several people proposed that we show users several image candidates to choose from, instead of just one. This would make it more likely that good images are attached to articles.
    • Many community members recommended that we allow newcomers to choose topic areas of interest (especially geographies) for articles to work with. If newcomers choose areas where they have some knowledge, they may be able to make stronger choices. Fortunately, this would automatically be part of any feature the Growth team builds, as we already allow users to choose between 64 topic areas when choosing suggested edit tasks.
    • Community members recommend that newcomers should see as much of the article context as possible, instead of just a preview. This will help them understand the gravity of the task and have plenty of information to use in making their judgments.
  • Placement in the article
    • We learned about Wikidata infoboxes. We learned that for wikis that use them, the preference is for images to be added to Wikidata, instead of to the article, so that they can show up via the Wikidata infobox. In this vein, we will be researching how common these infoboxes are on various wikis.
    • In general, it sounds like a rule of "place the image under the templates and above the content" will work most of the time (see the sketch after this list).
    • Some community members advised us that even if placement in an article isn't perfect, other users will happily correct the placement, since the hard work of finding the right image will already be done.
  • Non-English users
    • Community members reminded us that some Commons metadata elements can be language agnostic, like captions and depicts statements. We looked at exactly how common that was in this section.
    • We heard the suggestion that even if users aren't fluent with English, they may still be able to use the metadata if they can read Latin characters. This is because to make many of the matches, the user is essentially just looking for the title of the article somewhere in the image metadata.
    • Someone also proposed the idea of using machine translation (e.g. Google Translate) to translate metadata to the local language for the purposes of this feature.
  • Captions
    • Community members (and Growth team members) are skeptical about the ability of newcomers to write appropriate captions.
    • We received advice to show users example captions, and guidelines tailored to the type of article being captioned.
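On the placement rule mentioned above, here is a rough sketch of how "under the templates and above the content" could be located in an article's wikitext, using the mwparserfromhell library. The function name and the skipping rules are our own simplification; real articles (hatnotes, comments, multiple leading templates) would need more careful handling.

```python
import mwparserfromhell

def image_insertion_offset(wikitext):
    """Return the character offset just below any leading templates
    (e.g. hatnotes and infoboxes) and above the first prose content."""
    parsed = mwparserfromhell.parse(wikitext)
    offset = 0
    for node in parsed.nodes:
        text = str(node)
        # Skip leading templates and the whitespace between them.
        if isinstance(node, mwparserfromhell.nodes.Template) or not text.strip():
            offset += len(text)
        else:
            break
    return offset

# Hypothetical article text for illustration:
article = "{{Infobox butterfly|name=Example}}\nThe butterfly is an insect..."
pos = image_insertion_offset(article)
new_text = article[:pos] + "[[File:Example.jpg|thumb|An example]]\n" + article[pos:]
print(new_text)
```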

Plan for user testing

Screenshot from the prototype of a potential image matching workflow, used in user testing. The user can scroll down to see more metadata about the image from Commons.

Thinking about the open questions above, in addition to community input, we want to generate some quantitative and qualitative information to help us evaluate the feasibility of building an "add an image" feature. Though we have been evaluating the algorithm amongst staff and Wikimedians, it is important to see how newcomers react to it, and to see how they use their judgment when deciding on whether an image belongs in an article.

To that end, we are going to run tests with usertesting.com, in which people new to Wikipedia editing can go through potential image matches in a prototype and respond "Yes", "No", or "Unsure". We built a quick prototype for the test, backed with real matches from the current algorithm. The prototype just shows one match after another, all in a feed. The images are shown along with all the relevant metadata from Commons:

  • Filename
  • Size
  • Date
  • User
  • Description
  • Caption
  • Categories
  • Tags
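As an illustration of where this metadata comes from, the sketch below uses the real Commons imageinfo API to pull most of the fields listed above. The helper name is our own, field availability varies per file, and captions live in the file's structured data rather than in extmetadata (see the sketch in the Metadata section below).

```python
import requests

COMMONS_API = "https://commons.wikimedia.org/w/api.php"

def commons_metadata(filename):
    """Fetch the Commons fields the prototype displays for one file."""
    resp = requests.get(COMMONS_API, params={
        "action": "query",
        "titles": f"File:{filename}",
        "prop": "imageinfo",
        "iiprop": "size|timestamp|user|extmetadata",
        "format": "json",
    }).json()
    page = next(iter(resp["query"]["pages"].values()))
    info = page["imageinfo"][0]
    ext = info.get("extmetadata", {})
    return {
        "filename": filename,
        "size": (info["width"], info["height"]),
        "date": info["timestamp"],
        "user": info["user"],
        "description": ext.get("ImageDescription", {}).get("value"),
        "categories": ext.get("Categories", {}).get("value"),
        # Captions and depicts come from the file's structured data
        # (wbgetentities on its MediaInfo entity), fetched separately.
    }

print(commons_metadata("Example.jpg"))
```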

Though this may not be what the workflow would be like for real users in the future, the prototype was made so that testers could go through lots of potential matches quickly, generating lots of information.

To try out the interactive prototype, use this link. Note that this prototype is primarily for viewing the matches from the algorithm -- we have not yet thought hard about the actual user experience. It does not actually create any edits. It contains 60 real matches proposed by the algorithm.

Here's what we'll be looking for in the test:

  1. Are participants able to confidently confirm matches based on the suggestions and data provided?
  2. How accurate are participants at evaluating suggestions? Do they think they are doing a better or worse job than they are actually doing?
  3. How do participants feel about the task of adding images to articles this way? Do they find it easy/hard, interesting/boring, rewarding/irrelevant?
  4. What information do participants find most valuable in helping them evaluate image and article matches?
  5. Are participants able to write good captions for images they deem a match using the data provided?

Design

Concept A vs. B

In thinking about design for this task, we have a similar question as we faced for "add a link" with respect to Concept A and Concept B. In Concept A, users would complete the edit at the article, while in Concept B, they would do many edits in a row all from a feed. Concept A gives the user more context for the article and editing, while Concept B prioritizes efficiency.

In the interactive prototype above, we used Concept B, in which the users proceed through a feed of suggestions. We did that because in our user tests we wanted to see many examples of users interacting with suggestions. That's the sort of design that might work best for a platform like the Wikipedia Android app. For the Growth team's context, we're thinking more along the lines of Concept A, in which the user does the edit at the article. That's the direction we chose for "add a link", and we think that it could be appropriate for "add an image" for the same reasons.

Single vs. Multiple

Another important design question is whether to show the user a single proposed image match, or to give them multiple image matches to choose from. With multiple matches, there is a greater chance that one of them is good, but users may also feel that they should choose one even if none of them are good. It will also be a more complicated experience to design and build, especially for mobile devices. We have mocked up three potential workflows:

  • Single: in this design, the user is given only one proposed image match for the article, and they only have to accept or reject it. It is simple for the user.
  • Multiple: this design shows the user multiple potential matches, and they could compare them and choose the best one, or reject all of them. A concern would be if the user feels like they should add the best one to the article, even if it doesn't really belong.
  • Serial: this design offers multiple image matches, but the user looks at them one at a time, records a judgment, and then chooses a best one at the end if they indicated that more than one might match. This might help the user focus on one image at a time, but adds an extra step at the end.

User tests December 2020

Background

During December 2020, we used usertesting.com to conduct 15 tests of the mobile interactive prototype. The prototype contained only a rudimentary design, little context or onboarding, and was tested only in English with users who had little or no previous Wikipedia editing experience. We deliberately tested a rudimentary design early in the process so that we could learn as much as possible. The primary questions we wanted to address with this test were about the feasibility of the feature as a whole, not the finer points of design:

  1. Are participants able to confidently confirm matches based on the suggestions and data provided?
  2. How accurate are participants at evaluating suggestions? And how does their actual accuracy compare to their perceived ability?
  3. How do participants feel about the task of adding images to articles this way? Do they find it easy/hard, interesting/boring, rewarding/irrelevant?
  4. What metadata do participants find most valuable in helping them evaluate image and article matches?
  5. Are participants able to write good captions for images they deem a match using the data provided?

In the test, we asked participants to annotate at least 20 article-image matches while talking out loud. When they tapped yes, the prototype asked them to write a caption to go along with the image in the article. Overall, we gathered 399 annotations.

Summary

We think that these user tests confirm that we could successfully build an "add an image" feature, but only if we design it right. Many of the testers understood the task well, took it seriously, and made good decisions, which gives us confidence that this idea is worth pursuing. On the other hand, many other users were confused about the point of the task, did not evaluate as critically, and made weak decisions. For those confused users, it was easy for us to see ways to improve the design to give them the appropriate context and convey the seriousness of the task.

Observations

To see the full set of findings, feel free to browse the slides ("Slides showing the full user test findings"). The most important points are written below.
  • General understanding of the task matching images to Wikipedia articles was reasonably good, given the minimal context provided for the tool and limited knowledge of Commons and Wikipedia editing. There are opportunities to boost understanding once the tool is redesigned in a Wikipedia UX.
  • The general pattern we noticed was: a user would look at an article's title and first couple sentences, then look at the image to see if it could plausibly match (e.g. this is an article about a church and this is an image of a church). Then they would look for the article's title somewhere in the image metadata, either in the filename, description, caption, or categories. If they found it, they would confirm the match.
  • Each image matching task could be done quickly by someone unfamiliar with editing. On average, it took 34 seconds to review an image.
  • All participants said they would be interested in doing such a task, with a majority rating it as easy or very easy.
  • Perceived quality of the images and suggestions was mixed. Many participants focused on the image composition and other aesthetic factors, which affected their perception of the suggestion accuracy.
  • Only a few pieces of image metadata from Commons were critical for image matching: filename, description, caption, categories.
  • Many participants would, at times, incorrectly try to match an image to its own metadata, rather than to the article (e.g. "Does this filename seem right for the image?"). We should explore layout and visual hierarchy changes that better focus attention on the article context for the suggested image.
  • “Streaks” of good matches made some participants more complacent about accepting images: if many suggestions in a row were good, they stopped evaluating as critically.
  • Users did a poor job of adding captions. They frequently wrote their explanation for why they matched the image, e.g. "This is a high quality photo of the guy in the article." This is something we believe can be improved with design and explanation for the user.

Metrics

  • Members of our team annotated all the image matches that were shown to users in the test, and we recorded the answers the users gave. In this way, we developed some statistics on how good of a job the users did.
  • Of the 399 suggestions users encountered, they tapped "Yes" 192 times (48%).
  • Of those, 33 were not good matches, and might be reverted if they were actually added to articles. This is 17% of the accepted matches, a figure we call the "likely revert rate".
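These percentages follow directly from the counts above:

```python
suggestions, accepted, bad_accepts = 399, 192, 33

print(f"accept rate: {accepted / suggestions:.0%}")         # 48%
print(f"likely revert rate: {bad_accepts / accepted:.0%}")  # 17%
```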

Takeaways

  • The "likely revert rate" of 17% is a really important number, and we want this to be as low as possible. On the one hand, this number is close to or lower than the average revert rate for newcomer edits in Wikipedia (English is 36%, Arabic is 26%, French is 22%, Vietnamese is 11%). On the other hand, images are higher impact and higher visibility than small changes or words in an article. Taking into account the kinds of changes we would make to the workflow we tested (which was optimized for volume, not quality), we think that this revert rate would come down significantly.
  • We think that this task would work much better in a workflow that takes the user to the full article, as opposed to quickly showing them one suggestion after another in a feed. By taking them to the full article, the user would see much more context to decide whether the image matches and where it would go in the article. We think they would absorb the importance of the task: that they will actually be adding an image to a Wikipedia article. Rather than going for speed, we think the user would be more careful when adding images. This is the same decision we came to for "add a link" when we decided to build the "Concept A" workflow.
  • We also think outcomes will be improved with onboarding, explanation, and examples. This is especially true for captions. We think if we show users some examples of good captions, they'll realize how to write them appropriately. We could also prompt them to use the Commons description or caption as a starting point.
  • Our team has lately been discussing whether it would be better to adopt a "collaborative decision" framework, in which an image would not be added to an article until two users confirm it, rather than just one. This would increase the accuracy, but raises questions around whether such a workflow aligns with Wikipedia values, and which user gets credit for the edit.

Metadata

The user tests showed us that image metadata from Commons (e.g. filename, description, caption, etc.) is critical for a user to confidently make a match. For instance, though the user can see that the article is about a church and that the photo is of a church, the metadata allows them to tell whether it is the specific church discussed in the article. In the user tests, we saw that these items of metadata were most important: filename, description, caption, and categories. Items that were not useful included size, upload date, and uploading username.

Given that metadata is a critical part of making a strong decision, we have been thinking about whether users will need to have metadata in their own language in order to do this task, especially in light of the fact that the majority of Commons metadata is in English. For 22 wikis, we looked at the percentage of the image matches from the algorithm that have metadata elements in the local language. In other words, for the images that can be matched to unillustrated articles in Arabic Wikipedia, how many of them have Arabic descriptions, captions, and depicts? The table is below these summary points:

  • In general, local language metadata coverage is very low. English is the exception.
  • For all wikis except English, fewer than 7% of image matches have local language descriptions (English is at 52%).
  • For all wikis except English, fewer than 0.5% of image matches have local language captions (English is at 3.6%).
  • For depicts statements, the wikis range between 3% (Serbian) and 10% (Swedish) coverage for their image matches.
  • The low coverage of local language descriptions and captions means that in most wikis, there are very few images we could suggest to users with local language metadata. Some of the larger wikis have a few thousand candidates with local language descriptions. But no non-English wikis have over 1,000 candidates with local language captions.
  • Though depicts coverage is higher, we expect that depicts statements don’t usually contain sufficient detail to positively make a match. For instance, a depicts statement applied to a photo of St. Paul’s Church in Chicago is much more likely to be “church”, than “St. Paul’s Church in Chicago”.
  • We may want to prioritize image suggestions with local language metadata in our user interfaces, but until other features are built to increase the coverage, relying on local languages is not a viable option for these features in non-English wikis.
Wiki | Local language description | Local language caption | Depicts
enwiki | 51.71% | 3.65% | 6.20%
trwiki | 1.91% | 1.32% | 4.33%
bnwiki | 0.51% | 1.08% | 5.74%
frwiki | 5.95% | 0.66% | 8.52%
ruwiki | 4.05% | 0.61% | 6.73%
fawiki | 0.58% | 0.59% | 4.06%
arwiki | 0.97% | 0.59% | 7.00%
dewiki | 6.11% | 0.49% | 5.16%
ptwiki | 1.38% | 0.34% | 4.27%
hewiki | 1.20% | 0.30% | 6.18%
cswiki | 1.82% | 0.23% | 5.71%
kowiki | 0.97% | 0.19% | 4.80%
plwiki | 1.82% | 0.17% | 5.93%
ukwiki | 1.04% | 0.12% | 5.95%
svwiki | 0.90% | 0.07% | 10.10%
huwiki | 2.28% | 0.03% | 5.96%
euwiki | 0.27% | 0.03% | 6.20%
hywiki | 0.69% | 0.03% | 5.39%
arzwiki | 0.02% | 0.01% | 6.84%
srwiki | 0.36% | 0.01% | 3.46%
viwiki | 0.08% | 0.00% | 6.63%
cebwiki | 0.00% | 0.00% | 9.93%
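As a sketch of how these per-image properties can be checked: captions on Commons are stored as labels on a file's MediaInfo entity, and depicts statements use property P180. The helper below uses the real Commons query and wbgetentities APIs; the function name and return format are our own illustration, and files with no structured data may return sparse entities.

```python
import requests

COMMONS_API = "https://commons.wikimedia.org/w/api.php"

def local_metadata_flags(filename, lang):
    """Check whether a Commons file has a caption (MediaInfo label) in
    the given language, and whether it has any depicts (P180) statement."""
    pages = requests.get(COMMONS_API, params={
        "action": "query",
        "titles": f"File:{filename}",
        "format": "json",
    }).json()["query"]["pages"]
    pageid = next(iter(pages))  # "-1" would mean the file was not found
    entity_id = f"M{pageid}"    # MediaInfo ids are "M" + page id
    entity = requests.get(COMMONS_API, params={
        "action": "wbgetentities",
        "ids": entity_id,
        "format": "json",
    }).json()["entities"][entity_id]
    has_caption = lang in entity.get("labels", {})
    has_depicts = "P180" in entity.get("statements", {})
    return has_caption, has_depicts

print(local_metadata_flags("Example.jpg", "ar"))
```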

Given that local-language metadata has low coverage, our current idea is to offer the image matching task only to users who can read English, which we could determine by asking the user a quick question before the task begins. This unfortunately limits how many users could participate. It is a similar situation to the Content Translation tool, where users need to know the languages of both the source and destination wikis in order to move content between them. We believe there will be a sufficient number of such users based on results from the Growth team's welcome survey, which asks newcomers which languages they know: depending on the wiki, between 20% and 50% of newcomers select English.

Android MVP

See this page for the details on the Android MVP.

After lots of community discussion, many internal discussions, and the user test results above, we believe that this "add an image" idea has enough potential to keep pursuing. Community members have been generally positive, but also cautionary; we know that there are still many concerns and reasons the idea might not work as expected. The next step we want to take in order to learn more is to build a "minimum viable product" (MVP) for the Wikipedia Android app. The most important thing about this MVP is that it will not save any edits to Wikipedia. Rather, it will only be used to gather data, improve our algorithm, and improve our design.

The Android app is where "suggested edits" originated, and that team has a framework to build new task types easily. These are the main pieces:

  • The app will have a new task type that users know is only for helping us improve our algorithms and designs.
  • It will show users image matches, and they will select "Yes", "No", or "Skip".
  • We'll record the data on their selections to improve the algorithm, determine how to improve the interface, and think about what might be appropriate for the Growth team to build for the web platform later on.
  • No edits will happen to Wikipedia, making this a very low-risk project.

The Android team will be working on this in February and March 2021, hopefully allowing the Growth team to begin learning quickly.

Engineering

This section contains links on how to follow along with technical aspects of this project: