
Discovery Analysis Retrospective

2016-04-20

Covering what has happened on and around the team since the last retro (2016-03-16)

  • Life without Oliver
  • Quarter ended; new quarter started
  • Posted the job description (JD)
  • Met with recruiters, went through some candidates
  • Analyzed an A/B test for the search team (needs to be re-analyzed)
  • First draft of Portal A/B test report done

Review action items from before


What went well?

  • The new backlog Phab board works pretty well; we're really liking the "Up Next" column
  • More detailed zero results rate (now broken down by query type rather than two broad categories that were hard to explain) +1 (see the sketch after this list)
    • People frequently asked about the difference between the fulltext and prefix query types; it came up in the quarterly review, for example.
  • Externally referred traffic is now broken down in greater detail
  • Being able to deploy changes without waiting for code review (CR) has actually been surprisingly nice
    • Pros and cons of self-merging without review
    • Kevin: Code review can a) detect bugs, b) encourage cleaner code, c) spread knowledge in both directions
  • Dan thinks we're handling the velocity hit from Oliver's departure pretty well (see related point in "What could have gone better?")
  • Assembling quarterly review decks continues to be a total breeze because of our investment in analysis
  • Mikhail's involvement in hiring has been valuable, not only for his technical chops but also for his attention to diversity and reasonableness
    • He also ran the JD past other people at the foundation to get more input
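
To make the per-query-type breakdown concrete, below is a minimal sketch of how a zero results rate split by query type might be computed from search event logs. It is illustrative only, not the actual pipeline: the event structure, the field names ("query_type", "hits"), and the function name are assumptions.

  from collections import defaultdict

  def zero_results_rate_by_type(events):
      """events: iterable of dicts like {"query_type": "fulltext", "hits": 0}."""
      totals = defaultdict(int)  # searches seen per query type
      zeroes = defaultdict(int)  # zero-result searches per query type
      for event in events:
          qtype = event.get("query_type", "unknown")
          totals[qtype] += 1
          if event.get("hits", 0) == 0:
              zeroes[qtype] += 1
      # Rate per type: share of that type's searches returning no results.
      return {qtype: zeroes[qtype] / totals[qtype] for qtype in totals}

  sample = [
      {"query_type": "fulltext", "hits": 12},
      {"query_type": "fulltext", "hits": 0},
      {"query_type": "prefix", "hits": 3},
      {"query_type": "prefix", "hits": 0},
      {"query_type": "prefix", "hits": 0},
  ]
  print(zero_results_rate_by_type(sample))  # -> fulltext: 0.5, prefix: ~0.67

Reporting the rate this way lets fulltext and prefix searches be tracked separately instead of being folded into one aggregate number that is hard to explain.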

What could have gone better?

  • We're definitely feeling a velocity hit from Oliver's departure (see related point in "What went well?")
  • It's bad that we were incorrectly calculating the zero results rate for so long (how long? a very long time)
    • Detected when we noticed some odd trends in the graphs and unexpected effects from search code changes
    • Erik and David helped solve it and improve things
    • Root cause not entirely clear: maybe poor communication between teams, or maybe dirty data (complexity)?

What else should be noted?

  • The Editing team is now using our task for their analyst candidates; Mikhail will be grading their candidates' submissions
    • As expected, many candidates are applying for both positions

Action items

  • Dan: Publicize the data guidelines more widely