What: Captures the narrative as presented by the user (with little-to-no interpretation or cleanup)
Goal: Understand the user's needs, desires, and expectations.
The raw narrative is useful even if it doesn't entirely make sense! -- Knowing where the process started helps us grok their expectations.
Honestly, it's great to have a literal description of the deliverable in the user's own words.
Key Questions:
What is their goal? Why do they want to achieve it? What are the gains/pains? What are possible scale-backs?
Who surfaced this desire (actual person) and in what context? Who do we ultimately deliver to?
How clear was the user's vision of the feature, the deliverable, and the requirements?
What expectations did they have *before* coming to us about the level of effort (LoE) of their request and/or the timeline?
Are there any external deadlines? Does this block anything?
User Feature Requirements & Acceptance Criteria
What: Cleaned and clarified description of the deliverable, broken out into features that make sense to everyone.
Ideally, we want both the high-level, easy-to-grok description and the low-level, precise breakout.
e.g. "Web dashboard w/ 2 tabs -- 'Mobile Site' and 'Mobile Apps'. The 'Mobile Site' tab contains 6 graphs: 1. Pageviews to Mobile Site (line, logscale, step=1mo), with metrics (all pageviews / month): total across all sites, alpha site, beta site. 2. ..."
Goal: Consensus around the concrete deliverable and especially its scope, including:
Clear and precise enumeration of capabilities/limitations
Expectations around behavior and appearance
Rough expectations around a timeline, and some boxing of the level-of-effort/time commitment
Components:
Data Requirements, specified as precisely as possible (one way to record these is sketched after this list).
We don't necessarily need to make the user sit through us hashing this out, but (imo) we do need to get them to explicitly sign off on the result.
I think we should consider it a hard requirement that any feature which requires new metric work have descriptions for:
Definitions for all metric dimensions (with datatypes) -- this means "counts" should explicitly say what dimensions are grouped and counted
Data output shape -- define rows/columns with types (including the step between rows); what defines a "key" and whether the layout is row-major or col-major; a cardinality/storage estimate; drill-downs/aggregations/rollups
Data source(s) -- both the data necessary for the computation and the origin of said data. For example, if the expectation is that we use a stream into Kraken, it is a different feature to fall back on the sampled Squid logs, as the whole implementation toolchain is different.
Data output format(s) -- we can't just assume TSV if we aim to deliver certain visualizations (esp. geo) -- is JSON, XML, or Avro OK? How will the data be published? Does it need to be public, or can we cron files into a shared space on stat1001?
Minimally required frequency of recomputation -- note this is different from the timestep for rows and/or rollups; this is how stale the data can be (aka, how often the job runs)
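To make the sign-off concrete, here is a minimal sketch (again in Python, purely as notation) of one way to record all five points for a hypothetical mobile-pageviews metric. Every field name and value below is illustrative, not an existing schema:

```python
from dataclasses import dataclass

@dataclass
class DataRequirements:
    """Hypothetical record covering the five points above."""
    metric: str                   # what is grouped and counted
    dimensions: dict[str, str]    # dimension name -> datatype
    key: list[str]                # columns that uniquely identify a row
    row_step: str                 # timestep between rows
    layout: str                   # "row-major" or "col-major"
    cardinality_estimate: int     # rough row count, for storage planning
    rollups: list[str]            # supported drill-downs/aggregations
    sources: list[str]            # origin of the input data
    output_formats: list[str]     # e.g. ["tsv", "json"]
    publication: str              # how/where the output is published
    recompute_frequency: str      # how stale the data may get

# Illustrative instance for a hypothetical mobile-pageviews metric:
mobile_pageviews = DataRequirements(
    metric="pageviews, grouped by (site, month) and counted",
    dimensions={"site": "string", "month": "date (YYYY-MM)"},
    key=["site", "month"],
    row_step="1 month",
    layout="row-major",
    cardinality_estimate=3 * 12 * 10,  # 3 sites x 12 months x ~10 years
    rollups=["total across all sites"],
    sources=["stream into Kraken (assumed; sampled Squid logs would be a different feature)"],
    output_formats=["tsv", "json"],
    publication="cron files into a shared space on stat1001",
    recompute_frequency="daily",
)
```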
Presentation/Visualization Requirements, including any graphs, dashboards, domains, or other presentation features (pointing to existing examples if possible); see the sketch after this list.
Dashboards: names, contents for each, their tabs, and the visualizations in each tab.
Questions for each visualization: graph type (line, bar, geo, etc.); any specific formatting/appearance requests (custom labels? scale? timespan?)
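As above, a minimal sketch (hypothetical Python notation, not a real config format) showing how the answers to these per-visualization questions could be recorded:

```python
from dataclasses import dataclass, field

@dataclass
class VisualizationSpec:
    """Hypothetical per-visualization requirements record."""
    title: str
    graph_type: str                        # "line", "bar", "geo", ...
    scale: str = "linear"                  # or "log"
    timespan: str = "all available data"   # e.g. "last 12 months"
    custom_labels: dict[str, str] = field(default_factory=dict)
    existing_example: str | None = None    # URL of a graph to mimic, if any

# Illustrative instance for the first graph in the example dashboard:
pageviews = VisualizationSpec(
    title="Pageviews to Mobile Site",
    graph_type="line",
    scale="log",
)
```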
Technical Solution Options & Assessment
What: List several technical solutions as options for satisfying the requirements, assessing the tasks required and the level-of-effort involved.
Goal: Find the best way to make the user happy while conserving dev effort and advancing our strategy of providing insight via a robust, automated, self-service platform.
If necessary, the Engineering Lead coordinates team expertise and presents the results.
Everyone works together to assess, revise, and prioritize (planning poker, etc.).