Wikimedia Release Engineering Team/Deployment pipeline/2017-02-21

2017-02-21


Antoine, Jean-René, Giuseppe, Marko, Tyler

Background Notes

Notes from Giuseppe:     

  • Need a single CLI tool that generates a standard, WMF-specialized Dockerfile for at least: PHP/HHVM, Python, Node, Java
  • The tool should prefer existing Debian packages of libraries we already use whenever possible
  • Only use images coming from https://registry.wikimedia.org/ (official docker registry, public already)
    • Images built on and published from copper (the machine Ops uses to build packages, images, etc.)
  • NO private data in container images, ever. They are all going to be public by default
  • Integration layers need a *lot* of thought
  • Dockerfile generation can be tricky
  • Images will not be uploaded by users; they upload only the repo, and the official artifact will be built by CI or "releases"
    • How to mark releases?
    • gerrit-specific?
  • CI pipeline from Pearson: https://github.com/pearsontechnology/deployment-pipeline-jenkins-plugin
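The single CLI tool described above could work roughly like the sketch below, which renders a Dockerfile from a small build manifest. This is a hypothetical illustration: the manifest schema, base-image names, and generated Dockerfile layout are all assumptions; only the registry host (registry.wikimedia.org) comes from these notes.

```python
# Hypothetical sketch of a manifest-driven Dockerfile generator, so that
# service repos never hand-write Dockerfiles themselves.

# Only images from the official registry are allowed (per the notes);
# the image names/tags here are invented for illustration.
BASE_IMAGES = {
    "nodejs": "registry.wikimedia.org/wikimedia/nodejs:latest",
    "python": "registry.wikimedia.org/wikimedia/python3:latest",
}


def generate_dockerfile(manifest: dict) -> str:
    """Render a Dockerfile from a build manifest (assumed schema:
    'runtime', optional 'debian_packages', 'entrypoint')."""
    runtime = manifest["runtime"]
    if runtime not in BASE_IMAGES:
        raise ValueError(f"unsupported runtime: {runtime}")
    lines = [f"FROM {BASE_IMAGES[runtime]}"]
    # Prefer existing Debian packages for libraries, as the notes suggest.
    debs = manifest.get("debian_packages", [])
    if debs:
        lines.append(
            "RUN apt-get update && apt-get install -y " + " ".join(sorted(debs))
        )
    lines.append("COPY . /srv/app")
    lines.append("WORKDIR /srv/app")
    lines.append(f"ENTRYPOINT {manifest['entrypoint']}")
    return "\n".join(lines) + "\n"
```

Keeping the manifest declarative means CI, not users, controls the final image contents, which matches the "users just upload the repo" constraint above.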


Base images to be maintained by ops

  • includes limited APT sources, etc.
  • what happens when a new base image is built/registered?
    • rebuild all dependent images and upload to registry (triggered/automated in CI)
  • do we need a private registry? seems there's some facility for this already
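The rebuild-on-new-base-image step above amounts to a walk of the image inheritance graph. A minimal sketch, assuming the graph is available as a simple image-to-base mapping (the representation and image names are invented):

```python
# Hypothetical sketch: when a base image is rebuilt/registered, find every
# dependent image (direct or transitive) that CI must rebuild and re-upload,
# in an order where parents come before children.

def images_to_rebuild(parents: dict, updated_base: str) -> list:
    """parents maps image name -> its base image name.
    Returns all transitive dependents of updated_base, parents first."""
    result = []
    frontier = [updated_base]
    while frontier:
        current = frontier.pop(0)
        for image, base in sorted(parents.items()):
            if base == current:
                result.append(image)
                frontier.append(image)  # its children must rebuild too
    return result
```

CI would trigger this automatically on registration of a new base image, then rebuild and publish each listed image in order.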

Some kind of unified/standardized "build manifest" that abstracts away the problematic Dockerfile

  • Tests with private data (e.g., MediaWiki security patches)? Part of the Docker registry can be made private.


Last Time

  • Questions for operations
    • What do the base images look like? How are they created? Are they signed?
    • Re: Some Build Manifest Abstraction, do we limit package installation from certain apt sources?
    • What do they need from this pipeline?
    • Base image updates trigger new builds/tests/everything
      • Example: firejail updates
      • Currently manual testing in beta and rollout

Next


As Always
