PoolCounter

PoolCounter is a network daemon which provides mutex-like functionality, with a limited wait queue length. If too many servers try to do the same thing at the same time, the wait queue overflows and some configurable action might be taken by subsequent clients, such as displaying an error message or using a stale cache entry.

It was created to avoid massive wastage of CPU due to parallel parsing when the cache of a popular article is invalidated (the "Michael Jackson problem"), but has since been put to other uses as well, such as limiting thumbnail scaling requests.

MediaWiki uses PoolCounter via an abstract interface (see $wgPoolCounterConf) which allows alternative implementations.

Source

The implementation is spread across several codebases: besides the daemon itself, there is a Redis-based default implementation in MediaWiki core, and an experimental Python client for the daemon in Thumbor.

As of Debian Buster (10) and Ubuntu Disco (19.04), the poolcounter server can be installed with sudo apt install poolcounter.

Architecture

The server is a single-threaded C program based on libevent. It does not use autoconf; it just has a makefile suitable for a normal Linux environment. It currently has no daemonize code, so it is backgrounded by systemd.

In MediaWiki, the client must be a subclass of PoolCounter and the class holding the application-specific logic must be a subclass of PoolCounterWork. See Manual:$wgPoolCounterConf#Usage for details.

Protocol

The network protocol is line-based, with parameters separated by spaces (spaces in parameters are percent-encoded). The client opens a connection, sends a lock acquire command, does the work, sends a lock release command, then closes the connection. The following commands are defined:

ACQ4ANY <key> <active worker limit> <total worker limit> <timeout>
This is used to acquire a lock when the client is capable of using the cache entry generated by another process. If the active pool worker limit is exceeded, the server will give a delayed response to this command. When a client completes its work, all processes which are waiting with ACQ4ANY will immediately be woken so that they can read the new cache entry.
ACQ4ME <key> <active worker limit> <total worker limit> <timeout>
This is used to acquire a lock when cache sharing is not possible or not applicable, for example when an article rendering request involves a non-default stub threshold. When a lock of this kind is released, only one waiting process will be woken, so as to keep the worker population the same.
RELEASE
releases the lock that the client most recently acquired
STATS [FULL|UPTIME]
show statistics

The possible responses for ACQ4ANY/ACQ4ME:

LOCKED
successfully acquired a lock. Client is expected to do the work, then send RELEASE.
DONE
sent to wake up a waiting client
QUEUE_FULL
there are more workers than <total worker limit>
TIMEOUT
there are more workers than <active worker limit>; no slot was freed up after waiting for <timeout> seconds
LOCK_HELD
trying to get a lock when one is already held

For RELEASE:

NOT_LOCKED
client does not currently hold any locks
RELEASED
lock successfully released

For any command:

ERROR <message>
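As an illustration of the wire format, the following sketch builds command lines and drives one acquire/work/release exchange over TCP. The helper names and the pool limits here are assumptions made up for the example; they are not part of any official client.

```python
import socket

def encode_param(param):
    # Spaces within a parameter are percent-encoded, so every parameter
    # stays a single space-separated token on the wire.
    return str(param).replace(" ", "%20")

def build_command(*parts):
    # Commands are single lines with space-separated parameters.
    return (" ".join(encode_param(p) for p in parts) + "\n").encode()

def with_lock(host, port, key, active_limit=8, total_limit=100, timeout=15):
    """Acquire a lock via ACQ4ME, do the work, then RELEASE it."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(build_command("ACQ4ME", key,
                                   active_limit, total_limit, timeout))
        response = sock.makefile().readline().strip()
        if response == "LOCKED":
            # ... do the expensive work here ...
            sock.sendall(build_command("RELEASE"))
        return response  # LOCKED, QUEUE_FULL, TIMEOUT, ...
```

A caller would dispatch on the returned status: LOCKED means the work was done and released, while QUEUE_FULL or TIMEOUT means it should fall back to an error message or a stale cache entry, as described above.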

Configuration

The server does not require configuration. Configuration of pool sizes, wait timeouts, etc. is done dynamically by the client.

For MediaWiki-side configuration, see Manual:$wgPoolCounterConf.

Testing

$ echo 'STATS FULL' | nc -w1 localhost 7531 
uptime: 633 days, 15209h 42m 26s
total processing time: 85809 days 2059430h 0m 24.000000s
average processing time: 0.957994s
gained time: 1867 days 44820h 50m 24.000000s
waiting time: 390 days 9365h 18m 24.000000s
waiting time for me: 389 days 9343h 3m 28.000000s
waiting time for anyone: 22h 14m 53.898438s
waiting time for good: 520 days 12503h 48m 24.000000s
wasted timeout time: 473 days 11375h 2m 44.000000s
total_acquired: 7739031655
total_releases: 7736374042
hashtable_entries: 119
processing_workers: 119
waiting_workers: 216
connect_errors: 0
failed_sends: 1
full_queues: 10294544
lock_mismatch: 227
release_mismatch: 0
processed_count: 7739031536
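Since the STATS FULL output is a series of "name: value" lines, it is straightforward to post-process. For instance (a hypothetical snippet, not something shipped with the daemon):

```python
def parse_stats(text):
    """Parse 'name: value' lines from STATS FULL output into a dict."""
    stats = {}
    for line in text.strip().splitlines():
        name, _, value = line.partition(": ")
        stats[name] = value
    return stats

sample = "total_acquired: 7739031655\nwaiting_workers: 216\nconnect_errors: 0"
print(parse_stats(sample)["waiting_workers"])  # prints 216
```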

Request tracing in production

Trivial Wireshark support for the protocol

The following Lua script is a trivial 'dissector' for Wireshark that simply stringifies the payloads of PoolCounter network packets, which you can then add as a displayed column in Wireshark's UI:

--[[
Trivial Poolcounter wire protocol dissector.
Simply renders payload as a string field, which can be then
enabled as a column.

Copyright © 2020 Chris Danis & the Wikimedia Foundation

This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License
as published by the Free Software Foundation; either version 2
of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.
--]]

poolcounter_protocol = Proto("PoolCounter", "PoolCounter wire protocol")

pc_command_str = ProtoField.string("poolcounter.cmd")

poolcounter_protocol.fields = {pc_command_str}

function poolcounter_protocol.dissector(buffer, pinfo, tree)
    local length = buffer:len()
    if length == 0 then return end
    pinfo.cols.protocol = poolcounter_protocol.name
    local subtree = tree:add(poolcounter_protocol, buffer(), "PoolCounter protocol data")
    subtree:add(pc_command_str, buffer(0,length-1))
end

local tcp_port = DissectorTable.get("tcp.port")
tcp_port:add(7531, poolcounter_protocol)

On modern Linux systems you should be able to save this as ~/.local/lib/wireshark/plugins/poolcounter.lua and then it will work automatically in either wireshark or tshark.

Tracing the execution of certain flavors of requests

Imagine that you cared about seeing the full conversational 'flow' between PoolCounter and its clients for a certain part of the keyspace -- for our example, we'll use enwiki:SpecialContributions:a:127.0.0.1.

Since the PoolCounter server's responses (e.g. LOCKED) don't include the key they're talking about, this isn't trivial to do.

Begin with a packet capture from the timespan you're interested in. You might generate this on a poolcounter host (or on an appserver host you're using for testing) with e.g.

sudo tcpdump tcp port 7531 -c 500000 -w poolcounter.pcap

Then, we'll ask Wireshark to extract the list of its internal TCP stream ID numbers for all requests that match that keyspace:

tshark -r poolcounter.pcap -Y 'poolcounter.cmd contains "enwiki:SpecialContributions:a:127.0.0.1"' -T fields -e tcp.stream | sort | uniq > ids.txt

Once we have that list of IDs, we'll transform it into a Wireshark display filter:

FILTER=$(sed -e 's/^/tcp.stream eq /' -e :a -e 'N;s/\n/ or tcp.stream eq /;ta' ids.txt)

and then use that filter to select all PoolCounter protocol traffic from just those streams in the original packet capture:

tshark -r poolcounter.pcap -Y "poolcounter and ($FILTER)" -T fields -e frame.time_relative -e frame.time -e ip.src -e tcp.stream -e poolcounter.cmd
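If the sed branching syntax above is opaque, an equivalent display filter can be built with a few lines of Python (a hypothetical alternative that produces the same filter string):

```python
def build_filter(stream_ids):
    """Join TCP stream IDs into a Wireshark display filter."""
    return " or ".join("tcp.stream eq %s" % i for i in stream_ids)

# With the IDs produced by the earlier tshark step (ids.txt, one per line):
print(build_filter([3, 17, 42]))
# → tcp.stream eq 3 or tcp.stream eq 17 or tcp.stream eq 42
```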