Growth/Personalized first day/Structured tasks/Copyedit
This page describes work on a "copyedit" structured task, which is a type of structured task that the Growth team may offer through the newcomer homepage. This page contains major assets, designs, open questions, and decisions. Most incremental updates on progress will be posted on the general Growth team updates page, with some large or detailed updates posted here.
Copyedit structured task
A task for newcomers to complete copyedits based on machine suggestions
- 2021-07-19: create project page and begin background research.
- Next: continue background research
Structured tasks are meant to break down editing tasks into step-by-step workflows that make sense for newcomers and make sense on mobile devices. The Growth team believes that introducing these new kinds of editing workflows will allow more new people to begin participating on Wikipedia, some of whom will learn to do more substantial edits and get involved with their communities. After discussing the idea of structured tasks with communities, we decided to build the first structured task: "add a link".
Even as we built that first task, we have been thinking about what subsequent structured tasks could be; we want newcomers to have multiple task types to choose from so that they can find the ones that they like to do, and can increase in difficulty as they learn more. The second task we started working on was "add an image". But in our initial community discussions of the idea of structured tasks, the task type that communities desired most was a task around copyediting -- something related to spelling, grammar, punctuation, tone, etc. Here are our initial notes from looking into this and discussing with community members.
We know that there are many open questions around how this would work, and many ways it might not go right: what kind of copyediting are we talking about? Just spelling, or something more? Is there any sort of algorithm that will work well across all languages? These questions are why we hope to hear from many community members and have an ongoing discussion as we decide how to proceed.
- We want to understand the types of copyediting tasks that algorithms might be able to assist with.
- We want to use an algorithm that can suggest tasks for a type of copyediting in articles across different languages.
- We want to know how well the algorithm works (e.g. identify which model works best from a set of existing models).
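Comparing models against a shared set of gold corrections is one straightforward way to decide which one works best. The sketch below is purely illustrative: the function names, toy "models", and gold pairs are assumptions for the sake of example, not the Growth team's evaluation code.

```python
# Hypothetical sketch: score candidate copyedit models against a small
# gold-standard set of (source, corrected) sentence pairs.

def exact_match_accuracy(model, gold_pairs):
    """Fraction of gold sentences the model corrects exactly."""
    hits = sum(1 for src, ref in gold_pairs if model(src) == ref)
    return hits / len(gold_pairs)

# Two toy "models": one fixes a common typo, one changes nothing.
fix_teh = lambda s: s.replace("teh", "the")
identity = lambda s: s

gold = [
    ("teh cat sat", "the cat sat"),
    ("on teh mat", "on the mat"),
    ("all fine here", "all fine here"),
]

scores = {
    "fix_teh": exact_match_accuracy(fix_teh, gold),
    "identity": exact_match_accuracy(identity, gold),
}
# The model with the higher score would be preferred.
```

In practice, evaluation for Grammatical Error Correction usually uses span-level precision/recall rather than exact sentence match, but the overall shape (fixed gold set, same metric for every candidate model) is the same.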
- What different subtasks are considered copyediting?
- Identify different aspects of copyediting across the spectrum: typo/spelling to grammar to style/tone
- What are existing approaches to copyediting in Wikipedia?
- What are existing, commonly used public tools for spell-checking, grammar, etc., such as hunspell, LanguageTool, or Grammarly?
- We know that our communities prefer transparent algorithms, so it is easy for everyone to understand where suggestions come from.
- What models are available from research in NLP and ML, for example for the task of Grammatical Error Correction?
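At the simplest end of this spectrum, tools like hunspell check each word against a dictionary and propose nearby known words. A heavily simplified sketch of that idea, using Python's standard library (the word list here is an illustrative assumption, not a real dictionary):

```python
# Minimal dictionary-based spell-check sketch: flag unknown words and
# suggest close dictionary matches. Real tools (hunspell, LanguageTool)
# use affix rules and language models; this only shows the basic shape.
import difflib

DICTIONARY = {"the", "cat", "sat", "grammar", "article", "spelling"}

def suggest(word, dictionary=DICTIONARY):
    """Return [] if the word is known, else up to 3 close matches."""
    if word.lower() in dictionary:
        return []
    return difflib.get_close_matches(word.lower(), dictionary,
                                     n=3, cutoff=0.6)
```

A transparent rule like this is easy for communities to audit, which matters given the stated preference for understandable algorithms; the trade-off is that it cannot handle grammar or tone at all.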
Defining the task
- Which aspect of copyediting will we model for the structured task?
- Type of task: spelling, grammar, tone/style
- For example: What can browser-spellcheckers do?
- Granularity -- at what level should tasks be highlighted: article, section, paragraph, sentence, word, or sub-word?
- Depends on the task
- Surface known items (e.g. from templates) or predict new ones?
- Only suggest that improvement is needed, or suggest how to improve?
- Suggesting improvement is easier for simpler tasks.
- Simply highlighting that work is needed is easier for more complex tasks (e.g. style or tone)
- Language support: how many languages do we aim to support?
- Include Spanish and Portuguese as target languages alongside Arabic, Vietnamese, Bengali, and Czech.
- We ideally want to cover all languages, but will realistically need to evaluate solutions based on the depth of their language coverage.
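The distinction drawn above between "only suggest that improvement is needed" and "suggest how to improve" can be made concrete as two task shapes. The data shapes below are assumptions for illustration, not the team's actual design:

```python
# Illustrative sketch of the two task modes discussed above:
# detection-only tasks merely flag a span, while correction tasks
# also carry a proposed replacement.

def make_detection_task(sentence, start, end, reason):
    """A task that only highlights that work is needed."""
    return {"type": "detect", "sentence": sentence,
            "span": (start, end), "reason": reason}

def make_correction_task(sentence, start, end, replacement):
    """A task that also proposes how to improve the text."""
    return {"type": "correct", "sentence": sentence,
            "span": (start, end), "replacement": replacement}

s = "Teh results was surprising."
# Simple problem (spelling): feasible to suggest the fix itself.
typo = make_correction_task(s, 0, 3, "The")
# Complex problem (tone/style): easier to just flag the whole span.
tone = make_detection_task(s, 0, len(s), "informal tone")
```

A detection-only task asks more of the newcomer (they must devise the fix), while a correction task reduces their role to reviewing a suggestion, which is closer to the "add a link" workflow.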
Building a dataset for evaluation
- Generate a test dataset (ideally in multiple languages) on which we can compare different algorithms. This can be achieved in different ways:
- An existing benchmark dataset, such as CoNLL-2014 Shared Task on Grammatical Error Correction, or approaches for corpora generation (from Wikipedia)
- Generate our own dataset from revision history using templates (copyedit) or edit summaries (typo)
- Manual evaluation of the output of models run on a set of sentences from Wikipedia.
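The second option above, mining revision history via edit summaries, could be approached with a simple keyword filter. This is a hedged sketch: the revision records and keyword list are illustrative assumptions, not a real dump or the team's pipeline.

```python
# Sketch: select revisions whose edit summaries suggest a copyedit,
# so their before/after text can form (source, corrected) pairs.

COPYEDIT_KEYWORDS = ("typo", "spelling", "copyedit", "grammar")

def looks_like_copyedit(summary):
    """Heuristic: does the edit summary mention a copyedit keyword?"""
    s = summary.lower()
    return any(k in s for k in COPYEDIT_KEYWORDS)

revisions = [
    {"id": 1, "summary": "Fixed typo in lead section"},
    {"id": 2, "summary": "Added new reference"},
    {"id": 3, "summary": "copyedit: smoothed tone"},
]

candidates = [r["id"] for r in revisions if looks_like_copyedit(r["summary"])]
```

Such keyword heuristics are noisy (summaries can be wrong or multilingual), which is one reason manual evaluation of a sample, the third option above, would still be needed.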