Wikimedia Release Engineering Team/Deployment pipeline/2020-08-19
TODOs from last time
- Meeting with Fandom happened yesterday...
- TODO Tyler to ask Fandom if they have a recording of the meeting
- Different environments: dev, staging, prod -- would be nice
- Interesting that they just use kubectl apply -- maybe they don't have very much coupling between services and mediawiki
- Q: how many changes are they deploying per day?
- Seems they have a daily deployment rather than continuous deployments
- Monorepo
- Don't quite understand how they integrate upstream changes: cherry-pick, rebase, etc.
- Jeena has a few automation patchsets in progress:
- Dan has been thinking about container composition and workflows.
- See https://phabricator.wikimedia.org/T259817#6395133 for current thinking.
- Met with Jeena about this and we came up with questions and follow-up.
- Working on a short presentation but feel free to dig in and ask questions now.
- COPY --from fetches in parallel (BuildKit LLB: low-level build definition)
- thcipriani: this answers how Debian packages get upgraded when copying in base images
- Assumes a weekly iteration
- Creates distribution images, tagged with the wmf branch, pushed to the private registry
- Periodic job (see the sketch below)
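- A minimal sketch of what such a periodic job could look like, assuming a plain `docker build`/`docker push` flow; the registry host, branch name, and component split are placeholders, not the real pipeline configuration:

```python
"""Hypothetical periodic job: build the weekly "distribution" images,
tag them with the wmf branch, and push them to the private registry.
All names and paths here are placeholders."""
import subprocess

REGISTRY = "docker-registry.example.wikimedia.org"      # placeholder registry host
WMF_BRANCH = "wmf/1.36.0-wmf.4"                         # e.g. the current train branch
COMPONENTS = ["mediawiki-core", "extensions", "skins"]  # hypothetical component split


def build_and_push(component):
    # Docker tags cannot contain "/", so encode the branch name in the tag.
    tag = f"{REGISTRY}/{component}:{WMF_BRANCH.replace('/', '-')}"
    subprocess.run(["docker", "build", "-t", tag, f"./{component}"], check=True)
    subprocess.run(["docker", "push", tag], check=True)


if __name__ == "__main__":
    for component in COMPONENTS:
        build_and_push(component)
```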
- Does this .pipeline file live in mediawiki/core? Either way, it defines the deployment variant
- Fandom are running support images for php-fpm
- Copies in all the distribution images
- COPY --from <image name> -- implemented in blubber
- Question: does this use the Docker cache? i.e., would anything be rebuilt if the tagged images already exist? (see the sketch below)
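- One way that question could be answered, sketched here as an assumption rather than the pipeline's actual behavior: check the registry for the expected tag before building and skip the build if it is already published. Assumes `docker manifest inspect` is available (older Docker CLIs need experimental mode); image names are placeholders.

```python
"""Sketch: skip a build when the tagged image is already in the registry."""
import subprocess


def image_exists(image_ref):
    # `docker manifest inspect` queries the registry without pulling layers
    # and exits non-zero when the manifest is missing.
    result = subprocess.run(
        ["docker", "manifest", "inspect", image_ref],
        capture_output=True,
    )
    return result.returncode == 0


if __name__ == "__main__":
    ref = "docker-registry.example.wikimedia.org/mediawiki-core:wmf-1.36.0-wmf.4"
    if image_exists(ref):
        print(f"{ref} already published; nothing to rebuild")
    else:
        print(f"{ref} missing; a build would run")
```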
- l10n compilation step
- Can we compile l10n in a per-component build step?
- Separate each build by language? (see the sketch after this list)
- One regression in this process
- l10n is recompiled for every backport
- May need to solve l10n update
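- A rough sketch of the "separate each build by language" idea, assuming MediaWiki's maintenance/rebuildLocalisationCache.php with its --lang and --threads options; the install path, language list, and chunk size are placeholders:

```python
"""Sketch: rebuild the localisation cache in per-language chunks."""
import subprocess

MEDIAWIKI_DIR = "/srv/mediawiki"            # hypothetical install path
LANGUAGES = ["en", "de", "fr", "es", "ja"]  # would really come from languages/data
CHUNK_SIZE = 2


def rebuild_chunk(langs):
    # Rebuild only the given languages; chunks could map to separate build steps.
    subprocess.run(
        [
            "php", "maintenance/rebuildLocalisationCache.php",
            "--lang", ",".join(langs),
            "--threads", "2",
        ],
        cwd=MEDIAWIKI_DIR,
        check=True,
    )


if __name__ == "__main__":
    for i in range(0, len(LANGUAGES), CHUNK_SIZE):
        rebuild_chunk(LANGUAGES[i:i + CHUNK_SIZE])
```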
- Blubber patch to support scratch images
- (See above use case for MW image build pipelines.)
- Another use case is for compiled programs; however, the feature as currently implemented does not drop to an unprivileged user.