Wikimedia Maps/2015-2017/Tile server implementation

The Maps project is designed to provide all the tools required for large-scale map hosting.

Architecture

Subsystems

Database
OpenStreetMap data imported into a PostgreSQL database with the PostGIS 2.1+ spatial extension.
Tile server "Kartotherian"
A Node.js service that serves tiles. Kartotherian (GitHub/Diffusion/Gerrit) is implemented by WMF using open-source components from Mapbox and the Mapnik engine. Tiles are stored as Protocol Buffers vector data blobs. Kartotherian can send tiles to the browser as (see the sketch after this list):
  • vector data (PBFs, served verbatim), letting the browser render the map from the data (sometimes called WebGL maps), or
  • raster images (rendered to PNG or JPEG on the fly).
HTTP cache
Even though vector to raster conversion is fast, we rely on Varnish for caching.
Tile generator "tilerator"
A Node.js service that generates vector tiles. It creates all the tiles initially, then processes updates from a queue. This is internal-only.
Tile storage
Currently we store all the data in Cassandra. Every tile server should hold all the vector tiles, since there is sufficient space for that. See the considerations and ramblings below.
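
A minimal sketch in TypeScript of Kartotherian's two delivery modes mentioned above, written against assumed interfaces (TileStore, Rasterizer, and serveTile are illustrative names, not the actual Kartotherian APIs):

// Assumed interfaces - not the real Kartotherian source/renderer APIs.
interface TileStore {
  getVectorTile(z: number, x: number, y: number): Promise<Buffer>; // stored PBF blob
}
interface Rasterizer {
  renderPng(pbf: Buffer, z: number, x: number, y: number): Promise<Buffer>;
}

type TileFormat = "pbf" | "png";

// Serve one tile request: vector blobs go out verbatim for client-side
// rendering; raster tiles are rendered from the same blob on the fly.
async function serveTile(
  store: TileStore,
  rasterizer: Rasterizer,
  z: number, x: number, y: number,
  format: TileFormat
): Promise<{ contentType: string; body: Buffer }> {
  const pbf = await store.getVectorTile(z, x, y);
  if (format === "pbf") {
    // Vector mode: the browser decodes the PBF and draws the map itself.
    return { contentType: "application/x-protobuf", body: pbf };
  }
  // Raster mode: vector-to-raster conversion happens per request;
  // Varnish in front of the service absorbs repeated requests.
  return { contentType: "image/png", body: await rasterizer.renderPng(pbf, z, x, y) };
}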

Server roles

Database server
Postgres and Tilerator. For performance, it is essential that tiles are generated on the same machine as Postgres. TBD: whether this server should keep its own copy of the tiles or just write them to the tile servers.
Tile server
Tile storage and Kartotherian.
Caches
TBD whether caching should run on the tile servers or use the overall caching infrastructure.

Workflow

We generate all the vector tiles for zoom levels 0 to 15+ and regenerate affected tiles when the underlying OSM data changes. For higher zoom levels (we currently plan to stop at z18), we serve pieces of lower-zoom tiles, as sketched below.
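
The arithmetic behind extracting a piece of a lower-zoom tile is straightforward; here is a hedged sketch in TypeScript (the function name overzoomSource and the maxStoredZoom parameter are illustrative, not taken from the actual code):

// For a requested tile above the maximum stored zoom, find the stored
// ancestor tile and the sub-tile of it that must be scaled up.
function overzoomSource(
  z: number, x: number, y: number,
  maxStoredZoom: number // e.g. 15 in the first iteration
): { z: number; x: number; y: number; scale: number; offsetX: number; offsetY: number } {
  if (z <= maxStoredZoom) {
    return { z, x, y, scale: 1, offsetX: 0, offsetY: 0 }; // served directly
  }
  const dz = z - maxStoredZoom;  // how many levels we over-zoom
  const scale = 1 << dz;         // each stored tile covers scale x scale requested tiles
  return {
    z: maxStoredZoom,
    x: x >> dz,                  // ancestor tile coordinates
    y: y >> dz,
    scale,
    offsetX: x & (scale - 1),    // which sub-tile of the ancestor to extract
    offsetY: y & (scale - 1),
  };
}

// Example: a z18 request for (x = 76543, y = 98765) is cut from the z15
// tile (9567, 12345), sub-tile (7, 5) of its 8 x 8 grid.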

Storage

The table below summarizes our tile storage needs for one style. For the first iteration, we plan to store tiles up to zoom level 15. To greatly reduce the storage requirements, we will use "solid detection" - a check to see whether a tile consists entirely of one type of polygon (e.g. water or forest), without any other features. If so, we simply do not store the tile, and during rendering we use over-zoom. Over-zooming means that when a tile contains no more information than the corresponding piece of the tile one zoom level up, we do not store it; instead, we extract the relevant part of the tile above when needed. One possible solid check is sketched below.
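
A minimal sketch of one possible solid check in TypeScript, written against an assumed shape for an already-decoded vector tile (the real check in Kartotherian/Tilerator may differ):

// Assumed shape of a decoded vector tile - illustrative only.
interface DecodedFeature {
  type: "point" | "line" | "polygon";
  kind: string; // e.g. "water", "forest"
}
interface DecodedLayer {
  name: string;
  features: DecodedFeature[];
}
interface DecodedTile {
  layers: DecodedLayer[];
}

// A tile is "solid" if every feature in it is a polygon of one and the
// same kind (e.g. all water), i.e. it carries no information that the
// tile one zoom level up does not already have. Such tiles are not
// stored; the renderer over-zooms from the parent instead.
function isSolid(tile: DecodedTile): boolean {
  let kind: string | null = null;
  for (const layer of tile.layers) {
    for (const feature of layer.features) {
      if (feature.type !== "polygon") return false; // any point or line makes it non-solid
      if (kind === null) kind = feature.kind;
      else if (feature.kind !== kind) return false; // mixed polygon types
    }
  }
  return kind !== null; // an entirely empty tile is handled separately
}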

Dirty tiles

Some tiles could get marked as dirty during the OSM import, or as part of the data layer SQL adjustments. Tilerator has a job queue to process such tiles. Additionally, the server could regenerate dirty tiles on the fly when a user requests them (see the sketch below). This is especially relevant to OSM itself, where the result of an edit should be immediately visible, but it could also benefit WMF as we increase the OSM pull frequency.
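
A minimal sketch of the on-the-fly path in TypeScript, assuming hypothetical store and generator interfaces (none of these names come from the actual Kartotherian/Tilerator code):

// Assumed interfaces - illustrative only.
interface DirtyAwareStore {
  getTile(z: number, x: number, y: number): Promise<Buffer | null>;
  isDirty(z: number, x: number, y: number): Promise<boolean>;
  putTile(z: number, x: number, y: number, tile: Buffer): Promise<void>; // assumed to also clear the dirty flag
}
interface TileGenerator {
  generateTile(z: number, x: number, y: number): Promise<Buffer>; // re-renders from Postgres
}

// Serve a tile, regenerating it first if it has been marked dirty by an
// OSM import or a data-layer SQL adjustment.
async function getFreshTile(
  store: DirtyAwareStore, gen: TileGenerator,
  z: number, x: number, y: number
): Promise<Buffer> {
  const stored = await store.getTile(z, x, y);
  if (stored !== null && !(await store.isDirty(z, x, y))) {
    return stored; // common case: a clean tile straight from storage
  }
  // Dirty or missing: regenerate now so the edit is visible immediately,
  // instead of waiting for the Tilerator queue to reach this tile.
  const fresh = await gen.generateTile(z, x, y);
  await store.putTile(z, x, y, fresh);
  return fresh;
}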

Tile ID

As a side result of the tile iteration problem, we came up with a simple way to index tiles at any zoom level. We use a single 64-bit integer to represent both the X and Y coordinates, and iterate it from 0 to 4^zoom − 1. For each value, the X coordinate is extracted by combining every odd bit and the Y coordinate from the even bits (counting bits from the least significant). So if we look at this iteration sequence, the value 35 would be:

decimal 35  =>  binary 0010 0011
X is every odd bit:   0 0 0 1  =>  1 (decimal)
Y is every even bit:  0 1 0 1  =>  5 (decimal)
Result: tile coordinates (1, 5)

In general, indexing goes according to this scheme:

00 01 04 05 16 17 20 21
02 03 06 07 18 19 22 23
08 09 12 13 24 25 28 29
10 11 14 15 26 27 30 31
32 33 36 37 48 49 52 53
34 35 38 39 50 51 54 55
40 41 44 45 56 57 60 61
42 43 46 47 58 59 62 63
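
This interleaving is the well-known Z-order (Morton) curve. Below is a minimal sketch of the conversion in TypeScript; the function names are illustrative and may not match the actual Kartotherian/Tilerator utilities.

// Convert a tile index to (x, y): the odd bits (1-based, counting from
// the least significant bit) form X, the even bits form Y.
function indexToXY(index: bigint): { x: number; y: number } {
  let x = 0n, y = 0n;
  for (let bit = 0n; index >> (2n * bit) !== 0n; bit++) {
    x |= ((index >> (2n * bit)) & 1n) << bit;       // bits 1, 3, 5, ... of the index
    y |= ((index >> (2n * bit + 1n)) & 1n) << bit;  // bits 2, 4, 6, ... of the index
  }
  return { x: Number(x), y: Number(y) };
}

// Convert (x, y) back to the index by interleaving the bits.
function xyToIndex(x: number, y: number): bigint {
  let index = 0n;
  const bx = BigInt(x), by = BigInt(y);
  for (let bit = 0n; bx >> bit !== 0n || by >> bit !== 0n; bit++) {
    index |= ((bx >> bit) & 1n) << (2n * bit);
    index |= ((by >> bit) & 1n) << (2n * bit + 1n);
  }
  return index;
}

// indexToXY(35n) returns { x: 1, y: 5 }, matching the worked example above.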

Storage approaches

Store tiles in NoSQL

We have settled on Cassandra, and implemented a storage source for it.
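
A minimal sketch of what such a storage source might look like, using the cassandra-driver npm package; the keyspace, table, and column names are placeholders, not the actual schema:

import { Client } from "cassandra-driver";

// Hypothetical table - not the real schema:
//   CREATE TABLE tiles (zoom int, idx bigint, tile blob, PRIMARY KEY (zoom, idx));
const client = new Client({
  contactPoints: ["cassandra1.example.org"], // placeholder host
  localDataCenter: "datacenter1",
  keyspace: "maps",
});

// Store one vector tile blob under its zoom level and tile index.
// (For very large indexes, production code would use a 64-bit-safe type
// such as the driver's Long instead of a plain number.)
async function putTile(zoom: number, idx: number, pbf: Buffer): Promise<void> {
  await client.execute(
    "INSERT INTO tiles (zoom, idx, tile) VALUES (?, ?, ?)",
    [zoom, idx, pbf],
    { prepare: true }
  );
}

// Fetch one tile; null means the tile was not stored (e.g. dropped by solid detection).
async function getTile(zoom: number, idx: number): Promise<Buffer | null> {
  const rs = await client.execute(
    "SELECT tile FROM tiles WHERE zoom = ? AND idx = ?",
    [zoom, idx],
    { prepare: true }
  );
  return rs.rowLength > 0 ? rs.rows[0]["tile"] : null;
}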

Store one tile per file

This is the simplest approach - each tile is stored as an individual file. We have been told of an existing implementation that stores tiles as files up to level 14 (at most ~358 million files) by increasing the inode count. By default, an 800 GB partition has 51 million inodes. We are worried about the overhead of this approach, especially since we will almost certainly need to store tiles for more than one data source later.

Store multiple tiles per file

There is an existing implementation called mbtiles that stores multiple vector tiles together in one SQLite file. From what we have heard, even Mapbox itself sees this as a dead-end approach for server-side tile storage, even though it is used heavily by Mapbox Studio for storage and uploading. We might, however, borrow some ideas from it, or even fork and rewrite it to fit our needs.

Tile storage statistics

This snippet generates a two-column list, where the second column is the "group size" - how many identical tiles were found. E.g. 1 means the tile is unique, 2 means there is exactly one other tile with the same content, and so on. The first column is how many groups of that size were found: for group size 1, it is the number of unique tiles; for group size 2, the number of pairs. The total number of stored tiles is the sum, over all rows, of the two columns multiplied together.

$ cd /srv/kartotherian/vectors
$ find 5 -type f -name '*' -exec md5sum {} \; | cut -f 1 -d ' ' | sort | uniq -c | cut -c1-8 | sort | uniq -c
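
For example, if the snippet printed the rows "100 1", "20 2", and "5 3", that would mean 100 unique tiles, 20 pairs, and 5 triples, i.e. 100*1 + 20*2 + 5*3 = 155 stored tiles. A small TypeScript sketch of that arithmetic, reading the snippet's output from stdin (purely illustrative):

import * as readline from "readline";

// Read "groups groupSize" rows (the snippet's output) from stdin and
// total up how many tiles were counted and how many have no duplicate.
async function main(): Promise<void> {
  const rl = readline.createInterface({ input: process.stdin });
  let totalTiles = 0;
  let tilesWithoutDuplicates = 0;
  for await (const line of rl) {
    const fields = line.trim().split(/\s+/);
    if (fields.length < 2) continue;
    const groups = Number(fields[0]);    // how many groups of this size
    const groupSize = Number(fields[1]); // identical tiles per group
    totalTiles += groups * groupSize;
    if (groupSize === 1) tilesWithoutDuplicates = groups;
  }
  console.log(`tiles: ${totalTiles}, without duplicates: ${tilesWithoutDuplicates}`);
}

main();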
Zoom | Tile count (this level) | Tile count (running sum) | Size on disk (this level) | Size on disk (running sum) | Average per tile (KB) | Duplicates | Tiles stored | % removed by overzoom
0 | 1 | 1 | 0.9 MB | 0.9 MB | 924.1 | 0% | 1 | 0%
1 | 4 | 5 | 1.4 MB | 2.3 MB | 367.2 | 0% | 4 | 0%
2 | 16 | 21 | 1.8 MB | 4.1 MB | 115.5 | 0% | 11 | 31%
3 | 64 | 85 | 2.5 MB | 6.6 MB | 39.6 | 2% | 41 | 36%
4 | 256 | 341 | 3.5 MB | 10.1 MB | 14.0 | 5% | 157 | 39%
5 | 1,024 | 1,365 | 169.1 MB | 179.2 MB | 169.1 | 10% | 592 | 42%
6 | 4,096 | 5,461 | 192.6 MB | 371.8 MB | 48.2 | 26% | 2,289 | 44%
7 | 16,384 | 21,845 | 284.0 MB | 655.8 MB | 17.8 | 41% | 12,596 | 23%
8 | 65,536 | 87,381 | 403.1 MB | 1,059.0 MB | 6.3 | 36% | 36,525 | 44%
9 | 262,144 | 349,525 | 777.2 MB | 1,836.2 MB | 3.0 | 42% | 135,726 | 48%
10 | 1,048,576 | 1,398,101
11 | 4,194,304 | 5,592,405
12 | 16,777,216 | 22,369,621
13 | 67,108,864 | 89,478,485
14 | 268,435,456 | 357,913,941
15 | 1,073,741,824 | 1,431,655,765
16 | 4,294,967,296 | 5,726,623,061

Tile (re)generation

The task of tile generation might take considerable time, depending on the zoom level (number of tiles), SQL server speed, and the number of servers doing the rendering. To manage this process, we plan to use a priority queue with persistent storage to mitigate crashes.
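
A hedged sketch of what a persisted generation job and its worker loop might look like, in TypeScript; the field names and the JobQueue interface are assumptions, not the actual Tilerator job format:

// Assumed job shape: one job covers a contiguous range of tile indexes
// (see the Tile ID scheme above) at a single zoom level.
interface TileJob {
  zoom: number;
  fromIndex: bigint;   // first tile index, inclusive
  beforeIndex: bigint; // last tile index, exclusive (up to 4^zoom)
  priority: number;    // higher runs first, e.g. user-visible dirty tiles
}

// Assumed persistent queue: jobs survive a crash because they stay in
// backing storage until explicitly completed.
interface JobQueue {
  enqueue(job: TileJob): Promise<void>;
  dequeueHighestPriority(): Promise<TileJob | null>;
  complete(job: TileJob): Promise<void>;
}

// Worker loop: take the most urgent job, regenerate its tiles, then acknowledge it.
async function runWorker(
  queue: JobQueue,
  generate: (zoom: number, index: bigint) => Promise<void>
): Promise<void> {
  for (;;) {
    const job = await queue.dequeueHighestPriority();
    if (!job) break; // queue drained
    for (let idx = job.fromIndex; idx < job.beforeIndex; idx++) {
      await generate(job.zoom, idx);
    }
    await queue.complete(job); // only now is the job removed from backing storage
  }
}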