How do we test interfaces?

The only sources of data we currently have to test UX design are:

  1. Usability testing via usertesting sessions
  2. Quantitative data from A/B tests
  3. Externally hosted surveys (SurveyMonkey)
  4. Self-reported bugs/issues from community members

We have no information on the demographics of users (readers or editors) interacting with specific UI elements. The only demographic information we can collect comes from annual surveys or from self-reported information on user pages.

Problems with these solutions

  • (1) provides useful data, but at a very small scale and often under ecologically invalid conditions (we ask users to complete a task with step-by-step instructions and with strong completion incentives)
  • (2) is very effective for measuring conversion across different UX designs, but (still) very costly to set up, as it typically involves software engineering effort and access to data analysts. It also doesn't give us qualitative data to complement conversion rate measurements.
  • (3) and (4) are excellent sources of highly biased qualitative data.

Problem with off-site surveys

We don't have the ability to collect large-scale qualitative data in the context of a given workflow. We typically use CTAs inviting users to take surveys hosted off-site, asking them multiple questions after the fact, which produces biased data of little use for product/UX design decisions.

What if we could ask one-off questions to users of a specific feature/UI element right after they complete a transaction or reach a specific target, without needing to send them off-site or disrupt their workflow?

Guiders as microsurvey tools

Guided tours could be an effective way of collecting opt-in UX feedback or demographic information in a minimally intrusive way via microsurveys that would be triggered by specific on-site behaviors.

Here's how a microsurvey would work:

  1. A single-guider GuidedTour is created in the MW namespace. The guider includes (1) a single question and (2) multiple responses represented as styled links. No form element or input field, just plain links + the regular dismiss button.
  2. EventLogging collects impressions and clicks on this guider using a generic Schema:MicroSurvey model that stores (a) the tour ID, (b) the user ID, and (c) an action field encoding the response as a click on the corresponding link. The schema could also be designed to entirely discard user IDs to collect anonymized data.
  3. The microsurvey is then triggered by a specific hook in the UI (the trigger could apply to all users for a limited amount of time, or to a small random sample of users).
  4. Responses are collected in real time; the microsurvey can be easily disabled by editing the page in the MW namespace where the GuidedTour is defined.
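The event model and triggering logic above can be sketched in a few lines. This is an illustrative assumption, not the actual Schema:MicroSurvey definition or EventLogging code: the field names (tour_id, user_id, action) follow the description above, and the hash-based bucketing is one common way to draw a stable random sample of users.

```python
import hashlib

def make_event(tour_id, user_id, action, anonymize=False):
    """Build a MicroSurvey-style event record; optionally discard the user ID.

    `action` encodes what happened: an impression, a dismissal, or a click
    on one of the response links (e.g. "response-yes").
    """
    return {
        "tour_id": tour_id,
        # Drop the user ID entirely when anonymized data is wanted.
        "user_id": None if anonymize else user_id,
        "action": action,
    }

def in_sample(user_id, sampling_rate):
    """Deterministically bucket a user into a random sample.

    Hashing the user ID keeps the decision stable across page views,
    so the same user either always sees the microsurvey or never does.
    """
    digest = hashlib.sha256(str(user_id).encode()).hexdigest()
    return int(digest, 16) % 10000 < sampling_rate * 10000

# Example: log an impression only for users in a 5% sample.
if in_sample(user_id=42, sampling_rate=0.05):
    event = make_event("semi-protected-signup", 42, "impression")
```

Because the sampling decision is a pure function of the user ID, no per-user state needs to be stored to keep the sample consistent.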

Benefits

  • reduce to a minimum the selection/sampling bias of off-site, context-free surveys
  • collect opt-in demographic data that would be very hard to obtain otherwise
  • incur minimal instrumentation costs, as long as a generic schema can be used for all these microsurveys

Caveats

Depending on how guiders are triggered and for how many users, these surveys could conflict with the goal of driving up conversions.

Examples

These are examples of questions that could be asked:

Users landing on a semi-protected page after account registration:

Did you register an account because you wanted to edit this page? 
    Y [ ]
    N [ ]
    Don't know [ ]

Users leaving the edit screen:

Please tell us why you stopped editing.
    A [ ]
    B [ ]
    C [ ]

Users rolling back a revision:

Do you know if the author of this revision is a newbie? 
    Y [ ]
    N [ ]
    Don't know [ ]
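Since each guider only renders a question plus plain response links, the examples above could be described as plain data mapping every response link to the action code it logs. The structure and names below are illustrative assumptions, not the GuidedTour extension's actual tour format; the "edit-abandon" survey's placeholder responses (A/B/C) are kept as in the example.

```python
# Hypothetical data model for the example microsurveys above: one entry per
# tour, mapping each response link's label to the action code sent to
# EventLogging when it is clicked.
SURVEYS = {
    "semi-protected-signup": {
        "question": "Did you register an account because you wanted to edit this page?",
        "responses": {
            "Yes": "response-yes",
            "No": "response-no",
            "Don't know": "response-unknown",
        },
    },
    "edit-abandon": {
        "question": "Please tell us why you stopped editing.",
        "responses": {"A": "response-a", "B": "response-b", "C": "response-c"},
    },
    "rollback-newbie": {
        "question": "Do you know if the author of this revision is a newbie?",
        "responses": {
            "Yes": "response-yes",
            "No": "response-no",
            "Don't know": "response-unknown",
        },
    },
}

def action_for(survey_id, label):
    """Look up the action code logged when a given response link is clicked."""
    return SURVEYS[survey_id]["responses"][label]
```

Keeping the surveys as data rather than code would make it easy to disable or swap a question by editing a single definition, matching the on-wiki configuration model described above.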