Extension:WikiLambda/Development environment
This page is currently a draft.
This page contains a detailed guide to setting up a fully functional local installation of Wikifunctions, either for regular use or for development purposes.
Environment overview
The Wikifunctions architecture includes the WikiLambda extension and the back-end services function-orchestrator and function-evaluator, as well as shared tooling and definitions in wikilambda-cli and function-schemata.
There are different levels of environment complexity that we can work with:
- A local MediaWiki installation with the WikiLambda extension, with the back-end services running remotely. You can follow the MediaWiki-Docker instructions for WikiLambda.
- A local MediaWiki+WikiLambda installation (same as the previous point), with the back-end services running locally in Docker containers.
- A local MediaWiki+WikiLambda installation, with the back-end services running on a local Kubernetes cluster. This is the most complex environment setup and is only advised if you need to replicate a production-like environment.
Running the back-end services on docker
Using registry images
You can locally run containers built from images of our back-end services that have already been merged and pushed to the Wikimedia docker-registry. This allows you to run functions locally without having to clone our back-end repositories and build the images locally.
Copy the contents of the services block in WikiLambda's docker-compose.sample.yml file to the analogous services block in your mediawiki/docker-compose.override.yml file. Replace the <TAG> entries in the stanza you just copied with the latest builds from the Docker registry for the orchestrator and the evaluators: wasm-python3 and wasm-javascript. (Note: we will be deprecating omnibus and the other variants; the wasm* images are all we need for now.)
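If you are not sure which tags exist, you can list them directly from the registry. This is a minimal sketch: it assumes the registry exposes the standard Docker Registry v2 HTTP API and that the images use the wikimedia/mediawiki-services-* paths mentioned later on this page; jq is only used for pretty-printing.
# List the available tags for the orchestrator and evaluator images (adjust the image paths if they differ)
curl -s https://docker-registry.wikimedia.org/v2/wikimedia/mediawiki-services-function-orchestrator/tags/list | jq .
curl -s https://docker-registry.wikimedia.org/v2/wikimedia/mediawiki-services-function-evaluator/tags/list | jq .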
Then, run the set of containers with docker compose up -d. Once everything is done, you should be able to run docker compose ps and see all your containers with a running status.
Finally, get the full URL of your function orchestrator by inspecting this output again and gathering the <container name>:<external port> details:
docker compose ps function-orchestrator
NAME COMMAND SERVICE STATUS PORTS
core-function-orchestrator-1 "node server.js" function-orchestrator running 0.0.0.0:6254->6254/tcp, :::6254->6254/tcp
If your MediaWiki checkout is called "core", this URL will most likely be core-function-orchestrator-1:6254, but make sure that this is the correct name generated by docker compose.
Edit the LocalSettings.php file in your MediaWiki installation folder and add:
$wgWikiLambdaOrchestratorLocation = 'core-function-orchestrator-1:6254';
Test your installation
You can automatically test your installation by editing your local copy of ApiFunctionCallTest.php to remove @group Broken, and then running the PHPUnit test suite as described in the MediaWiki install instructions, or by using the following command:
docker compose exec mediawiki composer phpunit:entrypoint -- extensions/WikiLambda/tests/phpunit/integration/API/ApiFunctionCallTest.php
You can manually evaluate a function call by navigating to http://localhost:8080/wiki/Special:EvaluateFunctionCall, selecting a function from the wiki, and choosing your inputs. If successful, the function response will be presented, having traversed the orchestrator and the evaluator to be run in one of the code executors.
You can also visit http://localhost:8080/wiki/Special:ApiSandbox and try out one or more tests as follows:
- In the action drop-down menu, select wikilambda_function_call
- Switch from the main section to the action=wikilambda_function_call section in the left sidebar
- Click on {} Examples, and select any of the listed examples
- Click the blue Make request button
- In the Results box, look for the word "success".
🎉 Congratulations! 🎉
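If you prefer the command line to the ApiSandbox UI, you can also inspect the module's parameters with a standard Action API paraminfo query. This is a sketch that assumes the default MediaWiki-Docker URL used above (http://localhost:8080):
# Show the documented parameters of the wikilambda_function_call API module
curl -s 'http://localhost:8080/w/api.php?action=paraminfo&modules=wikilambda_function_call&format=json'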
Using local images
If you want to modify and run our back-end services, you will have to clone our repositories and build the images locally.
1. Requirements
- Ensure that you are using docker compose version 2.x or above. If you have 1.x, you will need to upgrade, or, if using Docker Dashboard, you may be able to select the "Use Docker Compose V2" preference.
- Install Blubber
2. Clone the repositories
Clone the back-end services locally. You can clone both of them, or only the one that you wish to alter.
For the function-orchestrator service:
# Clone the repo with --recurse-submodules flag
git clone --recurse-submodules git@gitlab.wikimedia.org:repos/abstract-wiki/wikifunctions/function-orchestrator.git
# Or adjust your local check-out later if you have already cloned the repo
git clone git@gitlab.wikimedia.org:repos/abstract-wiki/wikifunctions/function-orchestrator.git
git submodule update --init
For the function-evaluator service:
# Clone the repo with --recurse-submodules flag
git clone --recurse-submodules git@gitlab.wikimedia.org:repos/abstract-wiki/wikifunctions/function-evaluator.git
# Or adjust your local check-out later if you have already cloned the repo
git clone git@gitlab.wikimedia.org:repos/abstract-wiki/wikifunctions/function-evaluator.git
git submodule update --init
3. Build the local images using blubber
Blubber is an abstraction for container build configurations that outputs Dockerfiles. To build local images for our services, we need to run Blubber and then build the images using the output Dockerfile. We will then use the newly created image name to tell our MediaWiki docker-compose where to build the services from.
To build the function-orchestrator docker image, go into the repo root directory and run:
blubber .pipeline/blubber.yaml development | docker build -t local-orchestrator -f - .
# You can also save the Dockerfile locally:
blubber .pipeline/blubber.yaml development > Dockerfile
# And then just run the following command when you need to update the image:
docker build -t local-orchestrator .
To build the function-evaluator docker image, you can do the same as with the orchestrator. Just remember to alter the image name:
blubber .pipeline/blubber.yaml development | docker build -t local-evaluator -f - .
Now if you run docker images you will see your newly built local-orchestrator and local-evaluator images, both of them tagged as latest.
4. Build the containers
Alter the docker-compose.override.yml file in your mediawiki installation directory to change the image from which your back-end service(s) are built. The image field will need to be set to <image_name>:latest, where the image name is the one given in step 3. Do this for both services, or only for the one you wish to alter.
For example, if you want to use both locally built images, your docker-compose.override.yml file should be:
services:
  function-orchestrator:
    image: local-orchestrator:latest
    ports:
      - 6254:6254
  function-evaluator:
    image: local-evaluator:latest
    ports:
      - 6927:6927
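After editing the override file, recreate the containers so that they pick up the locally built images; this is the same docker compose workflow used earlier:
# From your mediawiki core directory
docker compose up -d
# Both services should show a running status
docker compose ps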
You are now ready to test your installation by following the steps above.
Logging from the back-end services
While developing or modifying any of the back-end services, you might want to log messages and use them for debugging. You can do that simply by using console.log or console.error anywhere in the code, but to see these outputs you must rebuild the project, reinitialize the docker containers, and view the docker logs.
For example, after adding console.log statements in function-orchestrator or in its submodule function-schemata, run Blubber in the function-orchestrator root directory. Once the image is rebuilt, restart your MediaWiki docker containers with docker compose up -d.
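A typical edit-and-debug loop therefore looks like this (a sketch; the image name matches the one built in the Using local images section):
# In the function-orchestrator checkout: rebuild the local image
blubber .pipeline/blubber.yaml development | docker build -t local-orchestrator -f - .
# In your mediawiki core directory: recreate the containers so they use the rebuilt image
docker compose up -d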
Use docker compose logs or docker logs to view the logs:
# Show all container logs and follow
docker compose logs -f
# Show only function-orchestrator logs and follow
docker compose logs function-orchestrator -f
# Alternate logs command
# Runs from any directory with cleaner output, but is less comprehensible
docker logs mediawiki-function-orchestrator-1
To log exceptions from the python executor (function-evaluator/executors/python3/executor.py):
import logging
logging.exception("this is some error")
And similarly, rebuild the function-evaluator with blubber and reinitialize the MediaWiki docker containers.
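For example (a sketch using the same commands as for the orchestrator):
# In the function-evaluator checkout: rebuild the local image
blubber .pipeline/blubber.yaml development | docker build -t local-evaluator -f - .
# In your mediawiki core directory: recreate the containers
docker compose up -d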
Testing the back-end services
To test function-orchestrator you can use npm. First, make sure that the address and port 0.0.0.0:6254 are available. If you have your docker environment up and the service is occupying that port, you can simply run docker compose down while you run the tests:
# Install npm dependencies
npm install
# Run test script: this runs lint and mocha
npm run test
# Or skip linting by doing
npm run test:nolint
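If you are not sure whether the port is already taken, a quick check before running the tests (a sketch assuming the ss utility from iproute2 is available):
# Check whether anything is listening on port 6254
ss -ltn | grep 6254
# Check whether the docker-compose orchestrator container is the one holding it
docker compose ps function-orchestrator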
To test function-evaluator you will need to use the docker variants that run in CI:
# To run the full suite for the Node service
blubber .pipeline/blubber.yaml test | \
docker build -t test -f - . && \
docker run test
# To run the tests for the Python executor
blubber .pipeline/blubber.yaml test-python3-executor | \
docker build -t testpy -f - . && \
docker run testpy
# To run the tests for the JavaScript executor
blubber .pipeline/blubber.yaml test-javascript-executor | \
docker build -t testjs -f - . && \
docker run testjs
Deploying to Kubernetes
There are two ways to deploy Wikifunctions to Kubernetes locally:
- deploy the CI helm chart (easier and supported by the team)
- deploy a production-like Kubernetes deployment (harder, using alpha tooling)
Deploying the CI helm chart
To deploy the same helm chart that the end-to-end tests deploy on CI on your own computer, follow the instructions in the aw-ci-chart README. This is more straightforward, is a close approximation to a production deployment, and allows you to specify which WikiLambda patch you want to deploy.
Service deployment to production-like Kubernetes
For expert development mode, you might need to replicate a production-like environment on your local machine using Kubernetes. For that you will need to install:
- Docker: https://docs.docker.com/get-docker/
- Minikube: https://minikube.sigs.k8s.io/docs/start/
- Helm: https://helm.sh/docs/intro/install/
You will also need to clone the repositories used in the steps below: releng/local-charts and operations/deployment-charts.
As with the docker setup, this environment allows us to create the service containers either from the official images pushed to the registry or from images built locally. Let's review both options.
Using registry images
Local-charts is a tool to run a MediaWiki ecosystem in Minikube using the Helm charts from the deployment-charts repo. First, run the installation script with make install from the repo's root directory, and start minikube with make start.
Edit the file helm/requirements.yaml and add the following at the end of the list of dependencies:
- name: function-orchestrator
  version: 0.0.1
  repository: "https://helm-charts.wikimedia.org/stable/"
  condition: global.enabled.function-orchestrator
- name: function-evaluator
  version: 0.0.1
  repository: "https://helm-charts.wikimedia.org/stable/"
  condition: global.enabled.function-evaluator
If the back-end service charts are not available in the ChartMuseum, you can also point to local charts by replacing the repository with the relative path of the function-orchestrator and function-evaluator charts from the deployment-charts repo.
- name: function-orchestrator
  version: 0.0.1
  repository: "file://../../../operations/deployment-charts/charts/function-orchestrator"
  condition: global.enabled.function-orchestrator
- name: function-evaluator
  version: 0.0.1
  repository: "file://../../../operations/deployment-charts/charts/function-evaluator"
  condition: global.enabled.function-evaluator
Now create a values.yaml file from the example file values.example.yaml and edit it. Add the newly added services to the global.enabled section, and set all the other services to false, as we won't be needing them:
enabled:
  mariadb: false
  mediawiki: false
  parsoid: false
  restrouter: false
  function-orchestrator: true
  function-evaluator: true
You are ready to start the Kubernetes cluster in minikube now. Do:
# Start minikube with
make start
# Deploy and name your release as wikifunctions
make deploy release=wikifunctions
Once your services are deployed, you should test that everything went well:
# Get your minikube IP by doing:
minikube ip
> 192.168.58.2
# You can run the minikube dashboard too...
# You should see both services in green!
minikube dashboard
# Let's inspect the deployed services using kubectl
kubectl get services
> NAME TYPE ... PORT(S) AGE
> function-orchestrator-wikifunctions NodePort ... 6254:30001/TCP 18s
> function-evaluator-wikifunctions NodePort ... 6927:30002/TCP 18s
# We should be able to make a cURL request to any of the
# services using the minikube IP and the services external ports:
curl 192.168.58.2:30001/_info
> {"name":"function-orchestrator","version":"0.0.1","description":"A Wikifunctions service to orchestrate WikiLambda function executors","home":"http://meta.wikimedia.org/wiki/Abstract%20Wikipedia"}
🎉 Success! 🎉
Using local images
Create new service chart
See operations/deployment-charts/README.md.
If you want to create a new chart, use the create_new_service.sh script, test it, and upload a change to Gerrit. Then wait for a review.
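For reference, the invocation looks like this; it is a sketch, since the script prompts interactively for the port, service name and image label recorded below:
# From the root of the operations/deployment-charts checkout
./create_new_service.sh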
Running create_new_service.sh for function-orchestrator:
port > 6254
name > function-orchestrator
image label > wikimedia/mediawiki-services-function-orchestrator
Running create_new_service.sh for function-evaluator:
port > 6927
name > function-evaluator
image label > wikimedia/mediawiki-services-function-evaluator
Changes made in the charts:
function-orchestrator:
- main_app.version: ff7fb9f7ccdd9d9f9e635ccbc0269ae76cd828b9
- main_app.readiness_probe.httpGet.path: /_info
- tls.public_port: 4970
function-evaluator:
- main_app.version: fffdeacd512acc72dc7f73b1feaf988dcfed198a
- main_app.readiness_probe.httpGet.path: /_info
- tls.public_port: 4971
Now you are ready to run them using releng/local-charts.
Test charts using local-charts
Local-charts tutorial: https://wikitech.wikimedia.org/wiki/Local-charts/Tutorial
We followed this tutorial to test the deployment chart for function-orchestrator: https://wikitech.wikimedia.org/wiki/Deployment_pipeline/Migration/Tutorial
make deploy values=values.example.yaml
# Or with values.yaml
cp values.example.yaml values.yaml
make deploy
minikube ip
kubectl get svc
# Let's name our release
make deploy release=wikifunctions
Delete all:
# Without naming release:
helm del default
# But with the name
helm del wikifunctions
To make changes and update the deployment do:
# For unnamed release
make update
# For named release
make update release=wikifunctions
Test deployment:
minikube ip
> 192.168.58.2
kubectl get svc
> NAME                            TYPE      CLUSTER-IP    EXTERNAL-IP  PORT(S)         AGE
> function-orchestrator-default   NodePort  10.111.200.0  <none>       6254:30642/TCP  18s
curl 192.168.58.2:30642/_info
> {"name":"function-orchestrator","version":"0.0.1","description":"A Wikifunctions service to orchestrate WikiLambda function executors","home":"http://meta.wikimedia.org/wiki/Abstract%20Wikipedia"}
This configuration allows us to deploy the function-orchestrator and function-evaluator images pushed to the wikimedia registry, identified as wikimedia/mediawiki-services-function-orchestrator and wikimedia/mediawiki-services-function-evaluator, but what if we want to test local changes without having to merge and push to the remote registry?
This way we could:
- Use local images of function-orchestrator and function-evaluator, which we can alter, deploy and test inside the pods
- Use deployment-charts to edit config parameters, and add new environment variables to the services
- Configure our locally running MediaWiki installation and alter the config variables so that we can point at the services running in Kubernetes instead of the ones running in docker containers
- [QUESTION] Can the function-orchestrator service deployed inside of Kubernetes make GET requests to the MediaWiki installation running over docker on the host machine?
To use local images for the services instead of the ones in the registry, modify the deployment-charts values.yaml for each service:
For development purposes:
docker:
  # registry: docker-registry.wikimedia.org
  registry: localhost:5000
  pull_policy: IfNotPresent
and:
main_app:
  # image: wikimedia/mediawiki-services-function-orchestrator
  # version: ff7fb9f7ccdd9d9f9e635ccbc0269ae76cd828b9
  # we use:
  image: local-orchestrator
  version: latest
Make the same changes for function-evaluator, using the local-evaluator image.
Next, we use local docker images with Minikube (see https://stackoverflow.com/questions/42564058/how-to-use-local-docker-images-with-minikube):
As the README describes, you can reuse the Docker daemon from Minikube with eval $(minikube docker-env).
So to use an image without uploading it, you can follow these steps:
- Set the environment variables with eval $(minikube docker-env)
- Build the image with the Docker daemon of Minikube (e.g. docker build -t my-image .)
- Set the image in the pod spec like the build tag (e.g. my-image)
- Set the imagePullPolicy to Never, otherwise Kubernetes will try to download the image.
Important note: you have to run eval $(minikube docker-env) in each terminal you want to use, since it only sets the environment variables for the current shell session.
# From the function-orchestrator directory:
cd function-orchestrator
# Set environment variables so that minikube and host share the same docker daemon
eval $(minikube docker-env)
# Build the image with the Docker daemon of minikube
blubber .pipeline/blubber.yaml development | docker build -t local-evaluator -f - .
# Or if the Dockerfile has been created, just do:
docker build -t local-evaluator .
# Now, can we deploy local images using development-charts?
# NOPE, it requires we specify the registry url
# Okay let's create a local registry
docker run -d -p 5000:5000 --name registry registry:2
# And tag our local images
docker image tag local-orchestrator localhost:5000/local-orchestrator
docker image tag local-evaluator localhost:5000/local-evaluator
# YES!!
# They are responding
# Evaluator:
curl 192.168.58.2:31318/_info
# Orchestrator:
curl 192.168.58.2:30741/_info
# Now let's see if we can see the container logs
# Find the name of the containers that we want to log
docker ps -a | grep node | grep function-*
# And log
docker logs <container_name> -f
So that we don't have to re-tag images every time we generate them, and change the image tag in deployment-charts, we are going to edit the deployment-charts template so that helm always creates a new container for function-orchestrator from local-orchestrator:latest whenever we do make update. For this, we have added the following to the function-orchestrator deployment.yaml template:
spec:
  template:
    metadata:
      annotations:
        {{- if .Values.config.development }}
        # FIXME: Remove 'rollme', development only: force roll every time we do helm update
        rollme: {{ randAlphaNum 5 | quote }}
        {{- end }}
And we can set this variable in releng/local-charts/values.yaml:
function-orchestrator:
  config:
    development: true
Setting the development value to true will force helm update to always roll the function-orchestrator image, while setting it to false will only roll if the chart or values have changed. This is useful for development, where we don't want to be changing the tags and all the related parameters in deployment-charts, but we do want to be able to make local changes to function-orchestrator, create the local image, tag it, and redeploy it inside minikube.
# For every function-orchestrator change,
# Do in function-orchestrator directory (with minikube-env):
blubber .pipeline/blubber.yaml development | docker build -t local-orchestrator -f - .
docker image tag local-orchestrator localhost:5000/local-orchestrator
# Do in local-charts directory (with minikube-env):
make update release=wikifunctions
# Find the container and print the log again, because the container ID has changed
docker ps -a | grep node | grep function-*
docker logs <container_name> -f
# Do curl with the same IP because the service port has not changed:
curl 192.168.58.2:30316/1/v1/evaluate -X POST
Example of a CURL request for testing:
curl 192.168.58.2:30316/1/v1/evaluate -X POST -d '{ "zobject": { "Z1K1": "Z7", "Z7K1": "Z885", "Z885K1": "Z502" }, "doValidate": true}' -H 'Content-Type: application/json'
Can we connect from minikube to localhost? (See https://stackoverflow.com/questions/55164223/access-mysql-running-on-localhost-from-minikube.) Yes: minikube directly creates two host names, minikube and host.minikube.internal, which we can use from inside the Kubernetes cluster. So, in the function-orchestrator variables in deployment-charts:
config:
  public:
    FUNCTION_EVALUATOR_URL: http://minikube:31318/1/v1/evaluate/
    WIKI_URL: http://host.minikube.internal:8080/w/api.php
And WikiLambda will need the URL of the orchestrator set like this:
$wgWikiLambdaOrchestratorLocation = '192.168.58.2:30316';
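The IP and NodePort above are machine-specific; you can look up your own values with the commands used earlier (the service name below assumes the wikifunctions release name):
# Get the minikube IP
minikube ip
# Get the NodePort assigned to the orchestrator service
kubectl get service function-orchestrator-wikifunctions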
Finally, we need to be able to make requests to that IP from the mediawiki docker compose containers. If we have minikube running with docker (we should), there will already be a network called minikube:
docker network ls
NETWORK ID     NAME           DRIVER    SCOPE
f9c9960881a6   bridge         bridge    local
b7e0ca8c4fcd   core_default   bridge    local
00911e6f166e   host           host      local
c9521850bca9   minikube       bridge    local
2c7f44adeb7d   none           null      local
We can inspect the network data with:
docker network inspect minikube
where we can see which containers are attached to this network. We need our mediawiki docker containers to connect to that network directly, for which we edit the docker-compose.override.yml file in our mediawiki/core directory and add the following:
# We can also comment the previously used services here,
# because we are going to use the kubernetes ones from now on
# services:
#   function-orchestrator:
#     image: local-orchestrator:latest
#     ports:
#       - 6254:6254
#   function-evaluator:
#     image: local-evaluator:latest
#     ports:
#       - 6927:6927
# Make the containers connect to the minikube network by default
networks:
  default:
    name: minikube
Once the containers are run again, we can do docker network inspect minikube and we should see our mediawiki containers as part of the Containers map:
[ { "Name": "minikube", "Id": "c9521850bca9ec76afc26dd27eaf4df1d8a7a24a91fa79aecf58721c9fb11250", "Created": "2022-01-26T13:30:06.035450274+01:00", "Scope": "local", "Driver": "bridge", "EnableIPv6": false, "IPAM": { "Driver": "default", "Options": {}, "Config": [ { "Subnet": "192.168.58.0/24", "Gateway": "192.168.58.1" } ] }, "Internal": false, "Attachable": false, "Ingress": false, "ConfigFrom": { "Network": "" }, "ConfigOnly": false, "Containers": { "1db44ceb36e8500fe47e907196f23392219e46c4d4b4246a7a6431607212ef33": { "Name": "core-mediawiki-1", "EndpointID": "fa429f8daf17ca903a88c51213f828e261588f024e6da2d6c3bc1cd0184d2ab4", "MacAddress": "02:42:c0:a8:3a:05", "IPv4Address": "192.168.58.5/24", "IPv6Address": "" }, "71e83dc2f6f2ee887de0995f52a050fe9f5ce77bc622ed9bfb58aaa385d5776c": { "Name": "minikube", "EndpointID": "640126a25a270fb71807308cf1af8377af4e2f6c3f54246e17bb297bb297379a", "MacAddress": "02:42:c0:a8:3a:02", "IPv4Address": "192.168.58.2/24", "IPv6Address": "" }, "cae02884e78ec7c4e6090dcc452cdde1c5e840fbb7c2929d42cd6bad7ec9d8c9": { "Name": "core-mediawiki-jobrunner-1", "EndpointID": "82792dd0267ff1d1b134dfcf4498f3d4080b456f9a94607bc620816eefb911c6", "MacAddress": "02:42:c0:a8:3a:04", "IPv4Address": "192.168.58.4/24", "IPv6Address": "" }, "ff33fc0cc0297eb80b99771378132e6f927e9ced8a8782b80cf8a25c2bfdc205": { "Name": "core-mediawiki-web-1", "EndpointID": "fab671c0eadb0584cd0117d1e0282d04e38179f5fca969670cbfd138f3c9c887", "MacAddress": "02:42:c0:a8:3a:03", "IPv4Address": "192.168.58.3/24", "IPv6Address": "" } }, ... } ]
From inside any of these containers, we should be able to successfully ping the IPs of the others.
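For example, to check connectivity from the MediaWiki container to the minikube node (a sketch; it assumes ping, or alternatively curl, is available inside the image, and reuses the minikube IP and orchestrator NodePort from above):
# Ping the minikube node from inside the MediaWiki container
docker compose exec mediawiki ping -c 3 192.168.58.2
# Or hit the orchestrator's info endpoint directly, if curl is available
docker compose exec mediawiki curl -s http://192.168.58.2:30316/_info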
How to test?
- Go to mediawiki/core and run the mediawiki web containers with
docker compose up -d
- Go to releng/local-charts and run the services on kubernetes:
minikube start
make deploy release=wikifunctions
- See function-orchestrator logs
eval $(minikube -p minikube docker-env)
docker ps -a | grep node | grep function-*
docker logs <container name>
- Get IP and port of function orchestrator
minikube ip
kubectl get services
- Go to mediawiki/core/LocalSettings.php and make sure that the orchestrator path is correct:
$wgWikiLambdaOrchestratorLocation = '<MINIKUBE URL>:<PORT>';
- Go to API sandbox and make a call to wikilambda_function_call
Expected outcomes:
- You should see logs being printed with the API call
- The API Sandbox should receive a successful response
Things to solve:
- [x] Function-orchestrator to communicate with function-evaluator
- [x] Function-evaluator to return stuff to function-orchestrator
- [x] Function-orchestrator to know how to access mediawiki installation
- [x] Mediawiki to be able to communicate with function-orchestrator
- [ ] Service requests?
- https://phabricator.wikimedia.org/project/profile/1305/
- https://phabricator.wikimedia.org/T297314
- [ ] Function-evaluator to be read-only
- [ ] Function-evaluator to not have network access
- [ ] Label function-evaluator and function-orchestrator (latest? stable?)
- function-evaluator/orchestrator:./pipelines/config.yaml define tag latest
- https://wikitech.wikimedia.org/wiki/Deployment_pipeline/Migration/Tutorial#Publishing_Docker_Images
- Need to add publish pipeline to integration/config:jjb/project-pipelines.yaml
- Any other configuration needed for integration/config:zuul/layout.yaml ???
- Currently ${setup.tag} https://wikitech.wikimedia.org/wiki/PipelineLib/Reference#Setup
- [x] Change info output from function-evaluator
Production TODO
- [x] Create production services helm charts
- [ ] Ask SRE to set up a service proxy for it
- Rationale: https://phabricator.wikimedia.org/T244843
- Proxies setup: operations/puppet.git:/hieradata/common/profile/services_proxy/envoy.yaml
- [ ] Set wikifunctions-evaluator and wikifunctions-orchestrator hostnames/ips
- mediawiki-config/wmf-config/ProductionServices.php:109
- these are the common services for all clusters
- Kubernetes: Add a new service:
- https://wikitech.wikimedia.org/wiki/Kubernetes#Add_a_new_service
- [ ] Service ports
- Ensure the service has its ports registered at: Service ports
- https://wikitech.wikimedia.org/wiki/Kubernetes/Service_ports
- [ ] Create deployment user/tokens in the puppet private and public repos
- In hieradata/common/profile/kubernetes/deployment_server.yaml, edit profile::kubernetes::deployment_server::services
Overall steps for production deployment
- [ ] Read https://wikitech.wikimedia.org/wiki/Kubernetes#Add_a_new_service
- [ ] Register the public ports that your services will use in https://wikitech.wikimedia.org/wiki/Service_ports
- [ ] Use operations/deployment-charts/create_new_service.sh script to generate the chart(s) for your new service(s)
- Follow instructions from the section above Create new service chart
- Test your charts using minikube, kubectl and helm
- Can use releng/local-charts to test
- Add function orchestrator and function evaluator to values.yaml
- Change mariadb repository URL (bitnami is a possibility)
- Do
make deploy release=wikifunctions
- Follow instructions from the section above Test charts using local-charts
Useful resources
TODO: Filter and order reference material
TODO: Move reference material to a more general help document
Wikimedia clusters:
- https://wikitech.wikimedia.org/wiki/Clusters
- Core services: eqiad and codfw
- Edge caching: esams, ulsfo, eqsin, drmrs
Beta Cluster:
Wikifunctions URL: https://wikifunctions.beta.wmflabs.org/wiki/MediaWiki:Main_Pag
- In Cloud VPS, Beta Cluster is deployment-prep
- Project page: https://wikitech.wikimedia.org/wiki/Nova_Resource:Deployment-prep
- Deployment prep: https://wikitech.wikimedia.org/wiki/Nova_Resource:Deployment-prep/Overview
- URL https://meta.wikimedia.beta.wmflabs.org/wiki/Main_Page
- <language>.<project>.beta.wmflabs.org
- Logs: Various server logs are written to the remote syslog server deployment-mwlog01 in /srv/mw-log
- Logs in production: https://wikitech.wikimedia.org/wiki/Logs
Cloud Services:
- Glossary: https://wikitech.wikimedia.org/wiki/Help:Glossary
- General landing page https://wikitech.wikimedia.org/wiki/Portal:Cloud_VPS
- How to access our instances: https://wikitech.wikimedia.org/wiki/Help:Accessing_Cloud_VPS_instances (Beta Cluster is deployment-prep)
- FAQ for the Web control system, Horizon: https://wikitech.wikimedia.org/wiki/Help:Horizon_FAQ
Accessing Cloud Services:
- Created another ssh key with Key<> pass
- Saved this key into .ssh/wikitech, Wikitech settings and Gerrit settings
Production services:
- How to add a new service: https://wikitech.wikimedia.org/wiki/Kubernetes#Add_a_new_service
- Example helm chart: https://gerrit.wikimedia.org/r/plugins/gitiles/operations/deployment-charts/+/refs/heads/master/charts/chromium-render/values.yaml
About Horizon
- Official tool for managing OpenStack deploys
- The node definitions for VPS instances are configured via the OpenStack Horizon user interface
- https://wikitech.wikimedia.org/wiki/Help:Horizon_FAQ
- https://horizon.wikimedia.org/project/
- Access credentials (same as wikitech)
- Genoveva Galarza
- Tech<>
- 2FA for wikitech
About Puppet
- https://wikitech.wikimedia.org/wiki/Puppet
- Puppet is our configuration management system.
- Puppet is not being used as a deployment system at Wikimedia
- Public puppet repo https://gerrit.wikimedia.org/r/p/operations/puppet
- Puppet hiera: https://wikitech.wikimedia.org/wiki/Puppet_Hiera
- Configuration variables for puppet to be stored outside of manifests
- Hiera is a powerful tool to decouple data from code in puppet.
- Rules:
- The code should be organized in modules, profiles and roles, where
- Modules should be basic units of functionality (e.g. "set up, configure and run HHVM")
- Profiles are collections of resources from modules that represent a high-level functionality ("a webserver able to serve mediawiki"),
- Roles represent a collective function of one class of servers (e.g. "A mediawiki appserver for the API cluster")
- Any node declaration must only include one role, invoked with the role function. No exceptions to this rule. If you need to include two roles in a node, that means you need another role that includes the two.
- Puppet manifests
- operations/puppet/manifests/site.pp
ChartMuseum
- Docs: https://wikitech.wikimedia.org/wiki/ChartMuseum
- Repository URL: https://helm-charts.wikimedia.org/stable/
- All stable charts: https://helm-charts.wikimedia.org/api/stable/charts
Blubber and PipeLine
- About Blubber https://wikitech.wikimedia.org/wiki/Blubber/Tutorial
- About PipeLine https://wikitech.wikimedia.org/wiki/PipelineLib/Tutorial
- How to configure CI for your project: https://wikitech.wikimedia.org/wiki/PipelineLib/Guides/How_to_configure_CI_for_your_project
Additional links
Starting point: https://wikitech.wikimedia.org/wiki/Kubernetes
General Kubernetes deployment documentation: https://wikitech.wikimedia.org/wiki/Kubernetes/Deployments
Documentation on deploying a new service: https://phabricator.wikimedia.org/project/profile/1305/
- Deployment pipeline:
- Uses PipelineLib to quickly build images with Blubber, integrate those images with Helm, and deploy to Kubernetes with Helmfile
- https://wikitech.wikimedia.org/wiki/Deployment_pipeline
- https://wikitech.wikimedia.org/wiki/Deployment_pipeline#/media/File:Containerized_continuous_delivery_2017_concept.png
- Tech talk https://www.youtube.com/watch?v=i0FTcG7PxzI
- Migration tutorial: https://wikitech.wikimedia.org/wiki/Deployment_pipeline/Migration/Tutorial