Reading/Strategy/2016-2017 plan

Overview

Strategic Problem

The mission of the Wikimedia Foundation is to empower and engage people around the world to collect and develop educational content under a free license or in the public domain, and to disseminate it effectively and globally. Currently, as measured by page views to our own sites, Wikimedia’s dissemination of this content is plateauing, and we need to take decisive steps to increase our global impact.

[Chart of page view trends. Taken from: February 2016 WMF Metrics Meeting]

Objectives

The WMF Reading Team’s strategic activities have two distinct objectives to support our mission:

  1. Increase engagement and retention in our current core markets of North America and Western Europe.
  2. Increase our reach with New Readers in developing countries.
[Figure: our two primary objectives in support of our mission to disseminate educational content]

Solutions

Specifically, this translates into three possible strategies, which rely on two critical foundations.

Strategies

  1. Improve the Encyclopedia Experience: Boost engagement and retention in current web and app experiences
  2. New Readers: Reach new users in the Global South
  3. New Experiences
    1. Community of Readers: Experiences based on interactions (engagement)
    2. Guided Educational Experiences: Experiences based on Learning (retention & engagement)

The details of these strategies are below.

Foundations

The first foundation is “Understand our Users”. This initiative focuses on qualitative and quantitative research about our readers, and corresponds to the first strategic priority.

The second foundation is investment in our existing site and apps, and in a new architecture that can scale across platforms and devices. We call this Services.

Services

We presented our services vision at the 2016 Wikimedia Developer Summit. The feedback from the community and the Foundation was that an incremental strategy toward a service platform was desirable. We will therefore approach Services deployment in an opportunistic, incremental fashion, based on pre-set performance and functionality targets and SLAs.

Operationalization

These three strategic pillars overlap considerably, but each has distinct goals and priorities.

We want to focus on the initiatives that will be most effective in achieving the stated objectives. Only one of them, New Experiences, is optional, but we need to experiment to discover the best balance of effort among them and to develop the specifics of each initiative. We are therefore using our multiple platforms to experiment and learn at an accelerated rate.

Here are the initial strategy assignments, though there is overlap:

[Figure: initial strategy assignments by team. The reading development teams are mostly organized along media platforms; each team has a strategic focus but will also work on other initiatives.]

Giving each team (iOS, Android, Web) a single strategy gives it clarity: the team can use that strategy to guide its decisions. These staffing allocations will likely shift over time as we learn more about our users, find one strategy outperforming the others, or simply migrate successful features from one platform to another.

Programs: upcoming year's annual plan

Reading Metrics

Reach - how many people read our content on a daily or monthly basis. We use the metrics Daily Active Users (DAUs) and Monthly Active Users (MAUs) to measure this. Because we cannot count unique users, we will likely use devices as a proxy (a sketch of this counting appears after the Specific Metrics list below). We can count this across our platforms.

Engagement - how long people spend on our site. Several metrics can be used to measure engagement; we will use pageviews for the web (as well as other session-based metrics as they become available). We have this metric across devices, plus more descriptive metrics for the apps. New engagement metrics are outlined in the analytics roadmap: https://office.wikimedia.org/wiki/Engagement_metrics/Metrics

Retention - how often people come back to our site. This is available for the apps, but will not be available for the web until we collaborate with Analytics on a technical solution.

Requests - how many requests per day a service receives. We use this for the API because we have no way to track how many different external clients are using it, and we cannot track downstream use, or per-user engagement, of the requested data.

Specific Metrics:

(italics = not yet available)

  • Pageviews
  • Users
  • Sessions
  • Session length
  • Sessions per user
  • Apps:
    • Installs?
    • Retention?
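
To make the device-as-proxy counting concrete, here is a minimal sketch of how DAU and MAU might be derived from a request log. The record fields device_hash and day are hypothetical stand-ins for whatever identifiers the analytics pipeline actually provides; this illustrates the counting logic only, not the production pipeline.

  # Minimal sketch: daily/monthly active devices as a proxy for users,
  # since we cannot count unique people. Record fields are hypothetical.
  from datetime import date

  def active_device_counts(records):
      daily = {}    # day -> set of device hashes seen that day
      monthly = {}  # (year, month) -> set of device hashes seen that month
      for r in records:
          d = r['day']
          daily.setdefault(d, set()).add(r['device_hash'])
          monthly.setdefault((d.year, d.month), set()).add(r['device_hash'])
      # DAU/MAU are the sizes of those per-period device sets
      return ({d: len(s) for d, s in daily.items()},
              {m: len(s) for m, s in monthly.items()})

  dau, mau = active_device_counts([
      {'device_hash': 'a1', 'day': date(2016, 7, 1)},
      {'device_hash': 'b2', 'day': date(2016, 7, 1)},
      {'device_hash': 'a1', 'day': date(2016, 7, 2)},
  ])
  print(dau)  # 2 active devices on 2016-07-01, 1 on 2016-07-02
  print(mau)  # {(2016, 7): 2}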


Program 1: Maintain mobile and desktop website, extensions, APIs and Apps (Web and Apps)

The web team currently maintains both the mobile and desktop interfaces, which are responsible for 15 billion page views a month and host the online fundraising campaigns responsible for the lion's share of WMF fundraising. The team also maintains the APIs, which handle 7 billion requests per month. These interfaces require regular updates to fix bugs, respond to changes in the underlying infrastructure, and incorporate changes in browsing technology and web standards. The team also maintains 29 community and Foundation extensions used across the wikis. This work is largely reactive and generally difficult to predict; recent examples from the last two quarters include fixing an issue that prevented timelines from rendering on mobile phones, updating the automated testing infrastructure, and aligning release cycles to WMF standards.

The apps team currently maintains the apps that run on the iOS and Android platforms, as well as the Mobile Content Service that increasingly provides content for those apps. These platforms are constantly updated, with new versions of both hardware and software released by major vendors. The apps must be modified to support changes to software and APIs, along with design changes dictated by platform providers, and the modifications must be tested against multiple devices and OS versions. The Mobile Content Service continues to evolve along with the underlying RESTBase infrastructure it leverages, and must support changing usage patterns in the apps themselves. Like the web work, these tasks are largely reactive.
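
To give a flavor of the content pipeline described above, here is an illustrative sketch of a client consuming a RESTBase-backed content endpoint. It uses the public REST API's page summary route as a stand-in; the Mobile Content Service's own endpoints and response shapes may differ, so treat the path and fields here as assumptions.

  # Illustrative client for a RESTBase-backed content endpoint.
  # The path and response fields are assumptions based on the public
  # REST API, not the exact Mobile Content Service contract.
  import json
  from urllib.request import Request, urlopen

  def fetch_summary(title, lang='en'):
      url = 'https://{}.wikipedia.org/api/rest_v1/page/summary/{}'.format(lang, title)
      req = Request(url, headers={'User-Agent': 'example-reader/0.1'})
      with urlopen(req) as resp:
          return json.loads(resp.read().decode('utf-8'))

  summary = fetch_summary('Earth')
  print(summary['title'], '-', summary['extract'][:80])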

Program 2: Understand our users

The Reading team has started investing in this area by creating the quick survey tool, conducting ethnographic research in Mexico, building new analytics infrastructure for our iOS app, and holding community consultations to hear from both editors and readers on feature-level decisions and strategic direction. Our growing maturity in developing new features across platforms is an investment in this area as well. More details on this initiative for 2016-17 are below.

Description:

  • This is a cross-functional initiative that addresses the most-requested reach initiative from the community strategic consultation.
  • This initiative is very much part of how the Reading team performs its day-to-day activities; we are a young team and seek to learn more about our readers and communities as we design experiences for them.
  • We use a wide range of methods to understand our users, ranging from purely quantitative metrics to more qualitative techniques, and in general we combine both to build actionable models and datasets.
  • These are used across product development, from using demographic information to select high-potential new geographies to building personas that product designers can use to develop empathy for their users.

Outcomes:

  • The direct outcome of this initiative will be a series of reports and learnings from our varied research activities listed below:
    • More granular, privacy-centric metrics (engagement, retention)
    • Reader Intention studies
    • Ethnographic Research (particularly in developing countries)
    • Community engagement and co-creation
    • Evaluative Research
  • The ultimate outcome of this initiative is success in our product work, resulting from a better understanding of the incentives and obstacles that affect learning worldwide
  • Example from the WMF-wide strategy: summarize, process, and apply the output of the New Readers research project. In FY15-16, we are performing ethnographic research in four countries under-served by Wikimedia projects: South Africa, Mexico, Nigeria, and India. In FY16-17, we will synthesize this research and share it with product teams and the community as a core component of the project.

Impact:

  • More informed decision-making
  • Better products and more alignment

Existing work

We plan to build on this work over the coming year as we continue to develop capacity.

Program 3: New Readers

Description

People in developing countries represent 85% of the world’s population and more than 60% of mobile subscriptions, but account for only 30% of our page views. There is a clear opportunity for the WMF to fulfill our mission by finding new Readers in these geographies.

[Chart: traffic trends for "Global North" countries for Wikimedia projects as of February 2016. Taken from: February 2016 WMF Metrics Meeting]

[Chart: traffic trends for "Global South" countries for Wikimedia projects as of February 2016. Taken from: February 2016 WMF Metrics Meeting]

This initiative will invest in understanding and building experiences for readers in developing countries, whose fundamental approaches to and needs for knowledge may differ from those currently served by Wikipedia and its sister projects. It will involve partnering with local community members and developers to co-create culturally relevant experiences, and collaborating with Communications, Community, and Partnerships on holistic product development, awareness campaigns, and launches.

This initiative will consist of three phases:

  • Research: ethnographic research in six priority countries (either self-directed or partnered). This work overlaps significantly with Understand our Users.
  • Product development: investing in understanding and building experiences for readers in developing countries, whose fundamental approaches to and needs for knowledge may differ from those currently served by Wikipedia and its sister projects.
  • Launch: partnering with local community members and developers to co-create culturally relevant experiences, and collaborating with Communications, Community, and Partnerships on holistic product development, awareness campaigns, and launches.

Outcomes

  • We will increase the number of readers (reach) in new geographies via:
    • Web or app solutions to problems faced by the Global South population, as identified by research
    • Continued web performance improvements for low-connectivity, low-speed, and high-data-cost environments
    • Improved language support

Impact:

  • More reach, more mission fulfillment, more awareness, and more Global South editors and content
  • We will increase our reach by 5% in targeted geographies

Program 4: Improve the Encyclopedia experience

Definition of the problem:

Readership and retention are declining.

  • For our core audiences in North America and Europe, we want to increase retention by improving our current web and app experiences, without expanding the fundamental encyclopedia model.
  • Our hypothesis is that the encyclopedia is as viable as ever; we just need to update and polish it for modern expectations.
  • This strategy builds trust and collaboration with our editor communities, which we see as critical partners in increasing all aspects of readership.

Outcomes:

  • We will increase retention in our core geographies via:
    • iOS: content notifications that lead to a sustained increase in return visits
    • iOS: incorporate the article scoring tool into the reading experience
    • iOS: article layouts optimized for content type (e.g., formatting articles about locations to surface common location information)
    • Web: improved mobile article rendering
    • Web: community-driven desktop improvements
    • Web: integration of Wikidata descriptions (see the sketch after this list)
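
To make the Wikidata descriptions item concrete: each Wikidata item carries a short per-language description that can be retrieved through the public wbgetentities API. The sketch below is illustrative; the item ID and language are arbitrary examples, and how the web experience would actually surface these descriptions is a product decision this plan leaves open.

  # Illustrative fetch of a Wikidata description via the public
  # wbgetentities API. Q42 (Douglas Adams) is an arbitrary example.
  import json
  from urllib.request import urlopen

  url = ('https://www.wikidata.org/w/api.php'
         '?action=wbgetentities&ids=Q42&props=descriptions'
         '&languages=en&format=json')
  with urlopen(url) as resp:
      data = json.loads(resp.read().decode('utf-8'))

  print(data['entities']['Q42']['descriptions']['en']['value'])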

Impact:

Reach and engagement, plus a testing ground for other platforms.

Program 5: Community of Readers (new experiences)

Definition of the problem: With the reference use case being disintermediated and readership plateauing, we need to explore new kinds of experiences that build on and extend the pure reference model. We identified two leading contenders:

  • Guided educational experiences - a focus on learning and comprehension over static reference
  • A community of readers - a focus on creating means for users to interact with content and with each other.

Of these two experimental veins, we felt that a community of readers was the pragmatic first choice: it does not require new content creation and fits well with building a better understanding of our reader-to-editor funnel.

Specific to the community of readers is the hypothesis that there is a large audience of visitors, somewhere between casual browsers and encyclopedia article writers, who would like to interact with knowledge. We plan to address curation tasks within this initiative and to remain sensitive to moderation issues at all times.

Outcomes:

  • Android: prototype and research an interactive, reader-oriented Wikipedia feature
  • Android: micro-contribution/interaction experimentation
  • Android: micro-contribution notifications

Impact:

  • Grow readership via engagement
  • We will increase our engagement by 5% in targeted geographies

Overall Tentative Roadmap

  • Highly dependent on ongoing research, community input and early results.  Specifically:
    • All “New readers” work is subject to results of research in developing countries
    • All “A community of readers” work is subject to community consultation.
  • * denotes specific user/community request

Community Tech

Program 1: Improve and maintain community-requested projects (Community Tech)

The Community Tech team helps to build and maintain tools used by the most active contributors to our projects. The team’s primary backlog is determined by an annual cross-project Community Wishlist survey, which invites contributors to propose and vote on the features and fixes that they want to see.

In 2016, the team is responsible for investigating and addressing the top 10 wishes from the 2015 survey.

At the end of 2016, the team will run a new survey, gathering a new set of proposals from the communities. One problem with the 2015 survey was that the support-voting process privileged larger projects: Wikisource communities were a strong presence in the 2015 survey, especially considering their relative size, but they couldn’t muster enough votes for any of their proposals to break into the top 10. The team plans to modify the 2016 survey process to include some carve-out space for smaller projects.

Working with volunteer developers is an important part of Community Tech’s work, and the team is beginning a separate project to improve Wikimedia Tool Labs as a center for volunteer development support; that work is described under Program 2: Tool Labs Support below.

Starting in the new fiscal year, Community Tech will also work with WMF’s Program Capacity & Learning team to build and maintain tools for community-run programs and events.

Metrics

The Community Tech team will use performance against quarterly goals as its metric, with a target of completing 75% of those goals.

Program 2: Tool Labs Support

Bots and tools are a vital resource for many on-wiki content creation and curation activities. A typical bot or tool project begins life as a way for a motivated Wikimedia community member to make some on-wiki task easier (or possible). Many of these projects have a short life cycle due to factors such as loss of interest by the maintainer, insurmountable technical hurdles, or the discovery of a better way to manage the original problem. Others, however, become popular and tightly integrated into the workflows of one or more on-wiki communities.

Popular tools and bots become de facto production software needed to keep the wikis healthy and happy. Their roots as weekend projects by motivated volunteers brought them success, but those same roots ultimately pose a risk to their end users. Facilitating the best practices and communication that allow projects to grow beyond what a single motivated individual can accomplish will be a primary focus of Tool Labs support.

Potential Activities

  • Curate and promote documentation of best practices for tool and bot development.
  • Connect bot and tool developers via mailing lists, IRC channels, Phabricator, and other public communication channels to solve shared problems and advocate for shared concerns.
  • Help interested developers transition from one-person weekend projects to small collaborative communities.
  • Help organize groups to work on shared libraries and infrastructure needed for bots and tools. Pywikibot is a good example of a healthy library community for other projects to emulate (see the sketch after this list).
  • Help curate bootstrap projects for common tasks, including support for best practices and the evolving Labs infrastructure.
  • Consult on performance and security for interested projects.
  • Advocate for shared Tool Labs user concerns in Wikimedia Foundation planning.
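
For a flavor of the kind of project this support targets, below is a minimal Pywikibot sketch of a typical bot-style task: reading a page and listing a few pages it links to. It assumes an installed and configured Pywikibot (a user-config.py pointing at a wiki); the page title is an arbitrary example.

  # Minimal Pywikibot sketch; assumes Pywikibot is installed and
  # configured via user-config.py. Page title is an arbitrary example.
  import pywikibot

  site = pywikibot.Site('en', 'wikipedia')
  page = pywikibot.Page(site, 'Wikipedia:Sandbox')

  # Read the page and report a few pages it links to.
  print(page.title(), 'is', len(page.text), 'characters')
  for linked in page.linkedPages(total=5):
      print('links to:', linked.title())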

Metrics

As Tool Labs support is a new initiative, performance against quarterly goals will be the initial metric, with a target of completing 75% of these goals each quarter.