Phlogiston/Data Loading Model
This page is obsolete. It is being retained for archival purposes. It may document extensions or features that are obsolete and/or no longer supported. Do not rely on the information here being up-to-date.
A lot of the Phlogiston data model and logic is dedicated to recreating the state of the Phabricator data at a fixed point in time. If this data could be accessed directly from Phabricator, this would be unnecessary. In the model below, the steps that are specific to Phlogiston reporting, and could not be pulled from Phabricator, are marked with a ✱.
Data Loading
- Download the latest http://dumps.wikimedia.org/other/misc/phabricator_public.dump (updated around 0400 UTC).
- Discard all previously loaded Phabricator data (the "Load" tables)
- Load the dump file into the database.
- All project, column, task, and unparsed transaction information is loaded.
- As an optimization, all edge transactions are parsed from the transaction log and a list of edge transactions is generated.
- Everything keyed by a PHID is re-keyed to ID.
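
The loader itself is not documented in more detail on this page. As a rough illustration of the edge pre-parsing and PHID-to-ID re-keying steps, here is a minimal Python sketch; the record layout and function names are assumptions for illustration, not Phlogiston's actual loader code, which works against the database tables.

```python
# Minimal sketch of re-keying PHID-keyed edge records to integer IDs.
# The record shapes ("phid", "id", "task_phid", ...) are illustrative
# assumptions, not the actual dump or database schema.

def build_phid_to_id(records):
    """Map each object's PHID to its integer ID."""
    return {r["phid"]: r["id"] for r in records}

def rekey_edges(edge_transactions, phid_to_id):
    """Replace task and project PHIDs in edge transactions with IDs,
    dropping references to objects that are not in the dump."""
    rekeyed = []
    for tx in edge_transactions:
        task_id = phid_to_id.get(tx["task_phid"])
        if task_id is None:
            continue
        project_ids = [phid_to_id[p] for p in tx["project_phids"]
                       if p in phid_to_id]
        rekeyed.append({"date": tx["date"],
                        "task_id": task_id,
                        "project_ids": project_ids})
    return rekeyed

# Example: a task (PHID-TASK-xyz -> 142) tagged with a project
# (PHID-PROJ-abc -> 300) on 2018-04-01.
phid_to_id = build_phid_to_id([
    {"phid": "PHID-TASK-xyz", "id": 142},
    {"phid": "PHID-PROJ-abc", "id": 300},
])
edges = [{"date": "2018-04-01", "task_phid": "PHID-TASK-xyz",
          "project_phids": ["PHID-PROJ-abc"]}]
print(rekey_edges(edges, phid_to_id))
# [{'date': '2018-04-01', 'task_id': 142, 'project_ids': [300]}]
```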
Data Reconstruction
Once for each scope:
- Generate a list of project IDs relevant to the scope. ✱
  - All projects listed in the recategorization file are relevant. ✱
  - The Status Report project in <prefix>_scope.py is relevant. ✱
  - The hard-coded IDs for certain keyword tags, e.g., 'category', are relevant. ✱
- Determine the range of dates to be processed:
  - If an incremental run, start the day after the last day in the data.
  - If a complete run,
    - wipe reconstruction tables of any data for this scope, and
    - set the start date to the date in <prefix>_scope.py.
- For each day since the start date,
  - For each task in the complete list of tasks in Phabricator,
    - Get the list of edges from the most recent edge transaction (not later than the working day) associated with the task. For each project in the list of edges,
      - If the project is also a project relevant to this scope,
        - Make a record in maniphest_edge for this combination of date, task, and project. (This per-day reconstruction is sketched after this list.)
          - Example: In the edge transaction data, there is a single record: "on 2018-04-01, Project 300 was added to Task 142." After reconstruction, there is one record linking Project 300 and Task 142 for each day from 2018-04-01 to today.
- For each day since the start date,
  - For each task associated with any of the relevant project IDs on the working day,
    - Reconstruct the state of the task for that day:
      - Get title from the current (most recent load, not working day) values in maniphest_task.
      - Get points from the most recent points transaction before or on the working day.
        - If none is available, get points from the current values in maniphest_task.
        - If none is available, use default_points from <prefix>_scope.py. ✱
      - Get status from the most recent status transaction before or on the working day.
      - Get priority from the most recent priority transaction before or on the working day.
      - Determine the project. ✱ (This rule and the points fallback are sketched after this list.)
        - Get all of the edges associated with the task on the working day (from maniphest_edge). ✱
        - Compare the edges with the list of relevant project IDs, looking for the highest-priority project ID that matches. ✱
          - Priority is determined by row order in <prefix>_categorization.csv.
          - This step is necessary to ensure that the categorization rules can be applied correctly. (If a task kept all of its project edges, it might get counted in a lower-priority category.)
        - With the project determined, get the column on that board, if any, from the most recent core:columns transaction before or on the working day.
      - Using the reconstructed fields, add a record to task_on_day representing the state of that task on that day within that scope.
  - For each task that is tagged "Category", and in scope,
    - For each descendant of that task,
      - Make a row in phab_parent_category_edge linking the task to the original ancestor for that day.
      - The algorithm includes only tasks present in task_on_day, so if a child is not present in the data (because it doesn't belong to any projects in the source project list), but the grandchild is, the grandchild will not be included. See T115936#1847188 for a more precise algorithm, not implemented in Phlogiston. ✱
  - Repeat the previous step but for "goal".
- For all tasks in scope, use the ancestor relationship, if any, to update the category_title field in the task in task_on_day.
  - As implemented, this doesn't update the category_title of the ancestor tasks themselves, so another pass updates their category_titles as well.
- fix_status(). Handle special cases where task status is not properly accessible through the transaction log, and set the task status to whatever the current status is.
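
The per-day edge reconstruction described above (take the most recent edge transaction not later than the working day, then emit one record per relevant project per day) can be sketched in Python as follows. Phlogiston does this against database tables such as maniphest_edge; the in-memory structures and function names here are assumptions for illustration only.

```python
# Minimal sketch of per-day edge reconstruction, assuming edge
# transactions are available as dated records with project ID lists.

import datetime

def edges_on_day(edge_transactions, day):
    """Project list from the most recent edge transaction on or before
    `day`, or an empty list if there is none."""
    applicable = [tx for tx in edge_transactions if tx["date"] <= day]
    if not applicable:
        return []
    return max(applicable, key=lambda tx: tx["date"])["project_ids"]

def reconstruct_edges(edge_transactions, relevant_projects, start, end):
    """One (day, project_id) record per relevant project the task
    belonged to, for each day from start to end inclusive."""
    records = []
    day = start
    while day <= end:
        for project_id in edges_on_day(edge_transactions, day):
            if project_id in relevant_projects:
                records.append((day, project_id))
        day += datetime.timedelta(days=1)
    return records

# The example from the text: Project 300 added to Task 142 on 2018-04-01
# yields one record per day from 2018-04-01 onward.
txs = [{"date": datetime.date(2018, 4, 1), "project_ids": [300]}]
print(reconstruct_edges(txs, {300},
                        datetime.date(2018, 3, 30),
                        datetime.date(2018, 4, 3)))
# three records: one each for 2018-04-01, 2018-04-02, and 2018-04-03
```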
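The project-selection and points-fallback rules used when reconstructing a task's state can be sketched the same way. Per the text, row order in the categorization file defines project priority; everything else below (names, record shapes) is an illustrative assumption.

```python
# Minimal sketch of the "highest-priority project" rule and the points
# fallback chain (transaction -> current value -> scope default).

def pick_project(task_project_ids, categorization_order):
    """Return the task's project whose ID appears earliest in the
    categorization file's row order, or None if none match."""
    for project_id in categorization_order:   # row order = priority
        if project_id in task_project_ids:
            return project_id
    return None

def resolve_points(points_transactions, current_points, default_points, day):
    """Points from the most recent points transaction on or before `day`,
    falling back to the task's current value, then to the scope default."""
    applicable = [tx for tx in points_transactions if tx["date"] <= day]
    if applicable:
        return max(applicable, key=lambda tx: tx["date"])["points"]
    if current_points is not None:
        return current_points
    return default_points

# A task tagged with projects 512 and 300: 300 wins because it appears
# earlier in the categorization row order.
print(pick_project({512, 300}, [300, 512]))        # 300
# No points transactions and no current value: use the scope default.
print(resolve_points([], None, 2, "2018-04-01"))   # 2
```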
Data Reporting
Once for each scope:
- Precalculate a lot of key dates.
- Wipe Reporting tables of any data for this scope.
- Copy all records in scope from task_on_date to task_on_date_recategorized.
  - Convert all status=stalled tasks to status=open. ✱
  - Delete all duplicate, invalid, and declined tasks, which will be completely absent from the reports. ✱
  - For the rest of Reporting, changes apply to the temporary data, not the original reconstructed data.
- Reload <scope>_recategorization.csv. This is to allow development of report parameters without having to repeat reconstruction. ✱
- Recategorize all tasks in task_on_date_recategorized. ✱ (This rule application is sketched after this list.)
  - For each category rule, in priority order: ✱
    - Apply the rule to set the category of all tasks in scope that have not already been categorized. ✱
    - Each task∙day record is recategorized separately, so a task may have one category for some dates and a different category for later dates.
- If retroactive categories is specified in the configuration, update the category for each task∙day to be equal to the most recent category for that task. ✱ (This retroactive update is sketched after this list.)
- If retroactive points is specified in the configuration, update the points for each task∙day to be equal to the most recent points for that task. ✱
- Prepare data for the recently_closed report ✱
- Generate aggregate data ✱
  - Determine the "backlog_resolved_cutoff". If specified in the scope configuration, this is either the date specified or the start of the current quarter. Otherwise it is not used. ✱
    - The purpose of this is to reset the burnup of resolved tasks, so that the burnup for the current period can be clearly seen rising from 0, rather than being an incremental change on top of all previous completed work.
  - Create three different datasets, all stuffed into task_on_date_agg and aggregated by status, category, maint_type (obsolete), and date. The datasets aggregate the daily data in three ways: with no cutoff, with the specified cutoff, and with a cutoff three months before the specified one. ✱
- generate_reporting_files(). Generate all of the CSV files necessary for reporting this scope: ✱
  - Set up a temp dir for this scope, /tmp/<scope>.
  - Execute make_report_csvs.sql to generate CSV files in /tmp/phlog. ✱
    - In addition to making many CSV files, this also calls calculate_velocities(), which calculates historical data and forecasts for all categories in this scope. ✱
  - Rename the files and move them to /tmp/<scope>.
- Make Tranche Reports ✱
  - Prepare a color palette based on how many categories there are in scope. ✱
  - For each category (Tranche), ✱
    - call make_tranche_chart.R with scope-specific parameters, including the color, to generate a set of graphs for that category. ✱
- Generate a number of charts by querying the database and using Python HTML templates to make HTML files. ✱
  - Update the dates of the report (date run and date of most recent data). ✱
  - Generate the forecast charts. ✱
  - Generate the open task report. ✱
  - Generate the unpointed tasks report. ✱
  - Generate the recently_closed_tasks report. ✱
  - Generate status reports. ✱
- Call make_charts.R with data files and parameters to generate the forecast, burnup, velocity, and points histogram graphs. ✱
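
The recategorization step described above can be sketched in Python as follows: rules are applied in priority order, and each rule only categorizes task∙day records that no higher-priority rule has already categorized. The rule and record shapes here are assumptions for illustration; the actual rules come from the scope's recategorization file.

```python
# Minimal sketch of applying category rules in priority order to
# task-day records. Each record is categorized independently, so a
# task's category can change over time.

def recategorize(task_days, rules):
    """rules: list of (matches, category) pairs in priority order,
    where `matches` is a predicate over a task-day record."""
    for record in task_days:
        record["category"] = None
    for matches, category in rules:            # priority order
        for record in task_days:
            if record["category"] is None and matches(record):
                record["category"] = category
    return task_days

task_days = [
    {"task": 142, "date": "2018-04-01", "project": 300},
    {"task": 142, "date": "2018-05-01", "project": 512},
]
rules = [
    (lambda r: r["project"] == 512, "Goal work"),
    (lambda r: r["project"] == 300, "Maintenance"),
]
print(recategorize(task_days, rules))
# task 142 is "Maintenance" on 2018-04-01 and "Goal work" on 2018-05-01
```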
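The retroactive category and points updates can be sketched the same way: every task∙day record takes the value from that task's most recent record. Field and record names are again illustrative assumptions.

```python
# Minimal sketch of the retroactive update: overwrite a field on every
# record of a task with the value from that task's latest-dated record.

def apply_retroactively(task_days, field):
    latest = {}
    for record in task_days:
        task = record["task"]
        if task not in latest or record["date"] > latest[task]["date"]:
            latest[task] = record
    for record in task_days:
        record[field] = latest[record["task"]][field]
    return task_days

# With retroactive categories enabled, task 142 is reported as
# "Goal work" for its entire history, not just from 2018-05-01 onward.
task_days = [
    {"task": 142, "date": "2018-04-01", "category": "Maintenance"},
    {"task": 142, "date": "2018-05-01", "category": "Goal work"},
]
print(apply_retroactively(task_days, "category"))
```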