Growth/Personalized first day/Newcomer tasks/Comparative review

While designing the "newcomer tasks" project, the Growth team's designer reviewed how other platforms (e.g. TripAdvisor, Foursquare, Amazon Mechanical Turk, Google Crowdsource, Reddit) offer task recommendations to newcomers. We also reviewed Wikimedia projects that incorporate task recommendations, such as the Wikipedia Android app and SuggestBot. We think there are best practices we can learn from other software, especially when we see the same patterns across many different types of software. Even as we incorporate ideas from other software, we will still make sure to preserve Wikipedia's unique values of openness, clarity, and transparency. The full set of takeaways is below.

  • Positioning and context
    • Some external products offered task recommendations within the main product, but larger platforms with many different task types tended to have an entirely separate product for task “feeds” (e.g. Google Crowdsource, Translate Facebook)
    • Products where task recommendations were only part of the product’s purpose tended to have more ‘in-context’ or opportunistic task suggestions. For example, Google Maps may ask users to rate or verify information about a location when they look up that location.
    • Task types in external products tended to be “structured editing” of parts of the content rather than the whole, akin to editing single structured data items in an article. For example, Foursquare’s Superuser tools have different task queues for reviewing different sections of single listings (is this the right location, are the opening hours correct, is the website up to date)
    • “Homepages” of many products with task recommendations build other content around the task recommendations component. For example, the Translate Facebook app surrounds its task feed with impact stats, help and resources, and sections on other contributors’ activity
    • Products with short, simple tasks that could be performed in one or two interactions (rating, selecting from multiple-choice responses) tended to have mobile versions; conversely, mobile versions tended to offer only simple tasks
  • Task and content types
    • All external products offered task recommendations that would generate real content when completed, in the form of:
      • Creating content published directly to the product
      • Verifying or moderating content created by other humans before it is ‘published’
      • Verifying or moderating content produced by AI, to help improve automated services (for example, Translate)
    • Most tasks in the external products reviewed were about (in order of prevalence):
      • Rating content
      • Creating content
      • Moderating/Verifying content
      • Translating content
  • Incentivizing features
    • “Intangible” incentives to contribute are offered in almost all of the products reviewed, in the form of:
      • Awards and ranking (points, badges, leaderboard scores)
      • Personal pride and gratification via contribution statistics
      • Unlocked features (either access to more tasks and “responsibility” or, in rare cases, admin capabilities)
    • “Unlocking” behavior tied to task recommendations was less frequently seen in the external products reviewed
    • Intangible rewards mainly appear to leverage improved social standing as a core motivating factor, i.e. sharing one’s high rank or experience level with other members of the platform
    • Tangible rewards for contributions, in the form of money or administrative powers, were not common in the products reviewed. Amazon Mechanical Turk is the most obvious example: users are not only paid to perform tasks, but by performing certain tasks they unlock “qualifications” to perform more tasks (this gating pattern is sketched in code at the end of this review).
  • Personalization and customization
    • Most task recommendations are personalized/customized, based on:
      • Previous user input – surveys upon account creation or before a task
      • System-based derivation – e.g. geolocation
      • On-platform user activity (less frequently or less overtly)
    • Most task recommendations have at least one facet of personalization/customization, usually the main parameter of the task (location, language, etc.); a sketch of how these facets might combine appears at the end of this review
  • Visual design and layout
    • Onboarding and welcoming material is often rich with illustrations and animation to draw users’ attention to tasks
    • Incentivizing features (ranking, user stats, badges, etc.) are similarly visually rich
    • In contrast, UIs during task completion are pared back, focusing users on a single short task to complete
    • Previewing content (especially article previews) was a distinguishing visual feature of the Wikimedia products; likely because edits on Wikipedia are more complex, users require more context before deciding whether to pursue a recommended edit
    • Task recommendations are usually shown on the mobile versions of the platforms, with some layout and UI differences:
      • Sometimes only a subset of the tasks proposed on desktop is offered on mobile
      • In a few cases, no tasks are proposed on mobile at all
  • Guidance and help
    • Almost all products reviewed offered at least basic guidance prior to task completion, mostly in the form of:
      • Introductory/Onboarding screens
      • Links to resources/documentation
      • Links to FAQs
      • Examples of other users’ activity
    • In-context help (when performing a task) was generally provided through:
      • Instructional copy in the task
      • Tooltips
      • Simple forms that structure task completion
      • Step-by-step flows
      • Feedback within the tool (Submit feedback, ask a question)
    • A few products also offered more intensive step-by-step guided help
    • Only a couple of Wikimedia products offered mock tutorials or “practice” exercises for educational purposes
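
The Mechanical Turk-style unlocking noted under “Incentivizing features” reduces to a simple gating pattern: completing certain tasks grants a qualification, and qualifications control which task queues a contributor can see. The sketch below is a minimal illustration of that pattern under our own assumptions; all type and function names are hypothetical and are not drawn from any reviewed product’s implementation.

```python
# Hypothetical sketch of "qualification" gating (Mechanical Turk-style):
# completing certain tasks grants a qualification, which in turn unlocks
# access to further task types. All names are illustrative.
from dataclasses import dataclass, field


@dataclass
class Contributor:
    name: str
    qualifications: set[str] = field(default_factory=set)


@dataclass
class Task:
    title: str
    required_qualification: str | None = None  # None: open to everyone
    grants_qualification: str | None = None    # earned on completion


def visible_tasks(user: Contributor, tasks: list[Task]) -> list[Task]:
    """Return only the tasks the contributor is qualified to see."""
    return [
        t for t in tasks
        if t.required_qualification is None
        or t.required_qualification in user.qualifications
    ]


def complete(user: Contributor, task: Task) -> None:
    """Completing a task may grant a qualification that unlocks more tasks."""
    if task.grants_qualification:
        user.qualifications.add(task.grants_qualification)


# Example: completing a rating task unlocks a moderation queue.
tasks = [
    Task("Rate this listing", grants_qualification="trusted-rater"),
    Task("Review flagged listings", required_qualification="trusted-rater"),
]
alice = Contributor("alice")
assert len(visible_tasks(alice, tasks)) == 1  # moderation queue hidden
complete(alice, tasks[0])
assert len(visible_tasks(alice, tasks)) == 2  # now unlocked
```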
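
Likewise, the facets listed under “Personalization and customization” can be pictured as filters and ranking signals over a pool of candidate tasks: explicit survey answers, system-derived signals such as geolocation, and prior on-platform activity. The sketch below shows one way such a recommender might combine them; the weights, field names, and difficulty cap are invented for the example and are not drawn from any product reviewed.

```python
# Hypothetical sketch combining personalization facets: a survey-chosen
# topic (explicit user input), a system-derived region (geolocation),
# and on-platform activity (tasks completed so far). Illustrative only.
from dataclasses import dataclass


@dataclass
class CandidateTask:
    title: str
    topic: str       # matched against onboarding-survey answers
    region: str      # matched against a system-derived location
    difficulty: int  # 1 = one-tap rating, 3 = complex edit


def recommend(candidates, survey_topics, user_region, tasks_completed, limit=5):
    """Rank tasks by topic and region match; keep tasks simple for
    newcomers until they have completed a few of them."""
    max_difficulty = 1 if tasks_completed < 3 else 3
    eligible = [t for t in candidates if t.difficulty <= max_difficulty]

    def score(task):
        return (
            (2 if task.topic in survey_topics else 0)   # explicit user input
            + (1 if task.region == user_region else 0)  # system-derived signal
        )

    return sorted(eligible, key=score, reverse=True)[:limit]


# Example: a brand-new user who chose "food" in an onboarding survey.
pool = [
    CandidateTask("Rate a restaurant photo", "food", "Berlin", 1),
    CandidateTask("Verify opening hours", "retail", "Berlin", 1),
    CandidateTask("Write a full review", "food", "Berlin", 3),
]
print([t.title for t in recommend(pool, {"food"}, "Berlin", tasks_completed=0)])
# -> ['Rate a restaurant photo', 'Verify opening hours']
```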