User:Spetrea/Archive1

2014-March-30

An e-mail setup

I wrote before about switching to a completely different setup. In the past week I've read more mail than in the previous 6 months (Thunderbird really doesn't cut it for high-traffic mailing lists), and I've actually wanted to read e-mail faster for a long time now. Initially I was able to import mailman e-mail archives through a (now discontinued) extension, but it got quite slow when loaded with large amounts of archives.

I've tried Mutt in the past but wasn't aware of OfflineIMAP until very recently. On a different note, notmuch is extremely fast for searching through mail. Maybe the only downside is that Maildir uses up some extra disk space, but not that much.

Now I can do offline reading/searching through years and years (hundreds of thousands, maybe millions) of e-mails.
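
Day to day it boils down to a few commands (a minimal sketch, assuming ~/.offlineimaprc and ~/.notmuch-config are already set up for the account):

offlineimap                                    # sync the IMAP account into a local Maildir
notmuch new                                    # index whatever just arrived
notmuch search --sort=newest-first tag:unread  # searching works offline and is very fast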

The diagram that follows could have been more complicated, but I didn't include everything (I'll revisit it later). It seems to conform to some well-known rules.

So here's what it looks like (courtesy of graphviz):

 

Proprietary software (2 fully commercial issue trackers)

Apparently two major proprietary software companies, one called Thoughtworks and one called FogCreek, have produced two issue trackers. The fruits of their hard and intense mental/intellectual labour have materialized in two great pieces of ingenuity named Mingle and Trello. How miraculously blessed am I, a human being living in this time and age, to have access to such well designed pieces of software. Of course, the same book-keeping of tasks could very well be kept in text files (or open source issue trackers), but Mingle/Trello want to really push the envelope; they want to innovate in the field of... well... book-keeping. Is there really so much left to be said/done in the field of issue trackers? Why does Bugzilla not suffice in this particular case? While Mingle has a semi-broken REST API and way too many options, some of which do not work (for me), Trello strikes me as too simplistic, in the sense that it doesn't seem to allow very custom ways of storing stuff. Mind you, I've used Mingle, but sadly I will never experience Trello; still, I am glad to bikeshed about it with people who have, so I can find out more.

I ♥ Thoughtworks/FogCreek/Mingle/Trello, they are the best!

Note: I also like taskw and zim for keeping notes. And I also use text files for keeping notes, can you believe that? So arcane...

2014-March-24

Security-related

After some big problems with debsums, it was time to harden security on my local network, starting from the router and going down to each of the machines in detail and depth.

First of all, Ubuntu is a slimy mess, filled with spyware/malware/keyloggers (see fixubuntu.com), so I switched to Debian. Maybe you can secure Ubuntu if you know all the hidden spots and all the things that can go wrong; I'd rather skip that and switch to Debian.

I had to do a lot of reading. Some of the things I found were useful and I set those up on all my machines:

  • auditd (set up to cover suspicious filesystem events)
  • chkrootkit, rkhunter, aide (checking for rootkits)
  • iptables and sysctl configuration (strict firewall on each of the machines, logging of all dropped packets and martian packets; see the sketch after this list)
  • debsums (comparing checksums of binaries/files on disk with the ones that came with the packages)
  • raising password lengths for all accounts I have everywhere, encrypting them on disk with gpg, and using gpg-agent
  • isolating any suspicious application into VMs or LXC containers (Chrome and the Google Hangouts plugin are high on my list)
  • dropping usage of the gmail.com web interface, and switching to Mutt+OfflineIMAP+msmtp+notmuch for reading mail
  • dropping usage of Chrome (and related) natively on my machines, in favor of Firefox and/or dwb
  • dropping usage of NetworkManager
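
A minimal sketch of the iptables/sysctl idea mentioned above (not my exact ruleset, just the shape of it):

# default-deny inbound, allow loopback and already-established traffic,
# and log everything that ends up being dropped
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -j LOG --log-prefix "iptables-dropped: " --log-level 4

# have the kernel log martian packets as well
sysctl -w net.ipv4.conf.all.log_martians=1
sysctl -w net.ipv4.conf.default.log_martians=1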

I tried a bunch of other solutions (some commercial, which I hate very much), but I limited myself to things that are available in the Debian package repositories.

Setting up all these IDSs properly is difficult and very time-consuming. I don't think the setup is perfect (as I'm not a security expert), but it was a best effort.

Some future plans emerged, but I'll put them on hold for now (if you'd like to share some thoughts on this, drop me an e-mail here).

Future plans:

  • cronjobs that send daily reports in a readable format to /var/mail (see the sketch after this list)
  • improving my knowledge of iptables logging, in order to understand what kinds of strange situations I should watch out for
  • security updates when required
  • incorporating everything into a puppet manifest so I can decrease the time it takes me to set this up
  • getting a better understanding of martian packets
  • getting a DD-WRT compatible router and preparing it for my home network
  • continuing to look out for unneeded services/processes/users running on the machines, especially ones I don't thoroughly understand
  • disabling any sort of autorun behaviour for USB/CD/DVD: convincing udisks/udev to stop automounting, and checking media preventively before mounting
  • adding auditd network event logging
  • reading more about Gentoo
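
For the cronjob item above, a sketch of what I have in mind (the cron.d file name is made up; cron mails the output, which ends up in /var/mail/root):

# as root
cat > /etc/cron.d/daily-integrity-report <<'EOF'
MAILTO=root
30 6 * * * root debsums -s; rkhunter --check --sk --report-warnings-only
EOF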

Misc:

I found out that iptables is going to be replaced by nftables soon. Also, iproute2 will replace net-tools.
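
For reference, the rough command equivalences (net-tools commands on the left, their iproute2/ss counterparts on the right):

ifconfig        ->  ip addr show
route -n        ->  ip route
arp -a          ->  ip neigh
netstat -tulpn  ->  ss -tulpn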

2014-February-20

UDFs

New patchset for gerrit change for Hive UDFs. Here's a sample of the data produced.

2014-February-15

UDFs

Added accuracy and specificity comparison to the comparative analysis of the UDFs.

2014-February-03

UDFs

Added a new patchset (PS10) to the UDF gerrit change. The change is up for review; if you want to do a review, feel free to do so. I'll be submitting new patchsets to it.

If you want to try out the UDFs, you can get a pack with all the things you need from this archive. If you follow all the steps in the README.txt you should be able to use the UDFs.

OpenDDR

Did some work on analyzing the evolution of OpenDDR from version to version. This is hosted here at the moment. The current report for it is here. If you're interested, feel free to download it. Feedback is welcome and encouraged.

This might soon be part of a bigger comparative analysis between the two UDFs (UAParser and Dclass).

2014-January-20

UDFs

Last Friday I uploaded patchset #9 for the UDFs. They are working. There's still work to be done on comparing them to see which is best and for what, and also work left on the new dclass packages.

Reports for vendor, major and minor version were computed for one day of data and were released on the internal analytics mailing list last Friday.

Other

Watched this presentation on open source. I liked it a lot.

2014-January-14

UDFs

Now there are 3 Hive scripts that produce the reports mentioned here. The UDF using UA-Parser is ready. The one using dClass apparently has a problem (the author omitted "device_os_version" from the dtree files; I'm trying to get that fixed). Got the dclass repo back into git.wikimedia.org with Andrew's help.

Wikimetrics unattended install repo and screencast

This is an unofficial way of installing Wikimetrics.

I made a screencast accompanied by a github repo aimed at providing a way to quickly install Wikimetrics for development purposes.

The official way of installing Wikimetrics remains this one.

(youtube mirror of the screencast)

I've received mixed feedback: some people say it's useful, others say it can hurt wm users and that it should be part of mediawiki-vagrant.

The lack of a transcript was a problem (I was planning to add one, but after lots of editing and messing around with FFmpeg and various automated editing scripts I ran out of time).

2013-December-18

hardware problems solved

As my old desktop machine just would not behave well with any of the 3 sets of memory I tried in it... I decided I had to get a new system.

Asked a friend, and he told me about one of his machines; because it was really close to what I needed, I placed an order with a computer hardware store. I got the parts delivered instead of an assembled machine (weird, because I mentioned I'd like it pre-assembled), tried to assemble the parts, messed up, nearly fried the mobo, took it to a service shop, left it there, got it back today, and it works OK. These are the specs:

  • Mobo: ASRock Z87 PRO4, Socket 1150
  • CPU: i5-4570, 3.20GHz, Haswell, Socket 1150
  • Memory: Dual Channel Kingston 16GB (2 x 8192MB), DDR3, 2133MHz
  • SSD: 64GB ADATA (SATA-3)
  • HDD: 3TB

Returned most of the memory sets I got previously.

So, the hardware problems are over. Just have to return 1 more set of memory and I'm done.

OS installed on machine.

2013-December-16

labs instances problems

I tried to use instances on labs. I spun up one machine of type m2.xlarge. I was only able to connect to it after it was created. The connection was quite slow: I tried to install a deb package and dpkg stalled; I tried to connect from a different terminal and ssh stalled.

I will contact the labs mailing list about this.

Update: I talked with Coren. He looked into the problems with labs. It turned out to be glusterfs that caused this. He fixed it and now I'm able to create and use labs instances properly again.

Thank you, Coren!

2013-December-15

alternatives to google hangouts

There are alternatives to Google Hangouts. For the VoIP (audio) feature, one solution is Mumble (wiki article), an open source, low-latency, high-quality voice chat application primarily intended for use while gaming. The difference between Google Hangouts and Mumble is that the server is also provided, so it's a complete solution from that point of view (I've been using it for quite some time now and it's awesome: it requires zero server-side configuration, it just works, and it works great). Mumble also offers text chat.

For live screen sharing, VNC is one solution; one implementation is TightVNC, which in read-only mode offers the same screen-sharing feature that Google Hangouts does.

For video, Mumble has plans to implement it in the near future, but in the meantime ffmpeg can stream your webcam perfectly fine (see the sketch below).
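
A rough sketch of the ffmpeg webcam idea (the device, address and port below are just placeholders):

# grab the webcam and push a low-latency H.264 stream over UDP to the other party
ffmpeg -f v4l2 -i /dev/video0 -c:v libx264 -preset ultrafast -tune zerolatency \
       -f mpegts udp://192.168.1.50:1234
# ...which the receiving machine (192.168.1.50 here) can watch with:
ffplay udp://192.168.1.50:1234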

The problem with these solutions is that each is only a fragment of what Google Hangouts offers; integrating them would require some work. But if you're interested in just audio conferences and text chat, Mumble is a good solution for that.

TightVNC also works OK, although if you want to run vncserver on your machine and you're behind a router without a public IP, you'll probably have to do some trickery with a free dynamic DNS service of your choice, or maybe tunnel it through some machine on labs (see the sketch below). In the easy situation where the router's IP is public, you can just forward port 5900 to the machine that hosts vncserver. The same goes for murmur (the server associated with Mumble) and port 64738.
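
For the "tunnel it through some machine on labs" case, something like this should do (the hostname is just an example; the remote sshd needs GatewayPorts enabled for outside viewers to reach the forwarded port):

ssh -N -R 0.0.0.0:5900:localhost:5900 user@bastion.wmflabs.org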

Another option is Jitsi, which was quite bloated and slow the last time I tried it, but it worked.

There are also some solutions built on the SIP protocol; I find those annoying to configure, so I'll leave them out (for now; maybe I'll have a look at them later, although I doubt it).

So, why not just use Google Hangouts? It was running very well about 6 months ago, but since then they keep adding stuff to it. Personally I'm not sure what they're adding; I was really satisfied with what they had back then, and I don't understand why they keep changing something that worked perfectly fine. It's also closed source, which makes it a bit strange (although libjingle seems to have nearly all of the Google Talk plugin's functionality, that leaves the question "where's the rest of it?" unanswered, since it's closed source).

sharing terminal sessions or screencasts

I sometimes find it useful to record a demo, but only when I have something to show. I've made some screencasts in the past with ffmpeg; a ~20-minute screencast recorded with ffmpeg takes around 20-30MB. Recently I found out about showterm.io and asciinema, which also let you make casts of your terminal. The main difference is that recordings made with showterm/asciinema are purely text-based as opposed to video. I haven't tried them, and I'm not sure if they support audio.
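
A typical ffmpeg screen-capture invocation looks roughly like this (geometry and quality settings are just examples, not necessarily what I used):

ffmpeg -f x11grab -s 1280x720 -r 25 -i :0.0 \
       -c:v libx264 -preset veryfast -crf 28 screencast.mkv
# audio can be captured too by adding an ALSA/PulseAudio input (e.g. -f alsa -i default)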

alternative to Vagrant's default provider

While VirtualBox is the default provider for Vagrant, there are others as well. One notable alternative is LXC (Linux containers), which is considerably faster than VirtualBox. You can try it out like this:

   vagrant plugin install vagrant-lxc

Afterwards, you can get an LXC-packaged box from vagrantbox.es (at the time of writing, there were LXC boxes for CentOS, Debian, Ubuntu and OpenMandriva).
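
After that it's the usual workflow, just with the lxc provider (the box name and URL below are placeholders; pick a real one from vagrantbox.es):

vagrant box add debian-wheezy64-lxc http://example.com/debian-wheezy64-lxc.box
vagrant init debian-wheezy64-lxc
vagrant up --provider=lxc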

You can read more about vagrant-lxc here.

One drawback I can see is that some Vagrantfile property names differ when using vagrant-lxc compared to the default provider (VirtualBox).

There are also some network-related limitations (at this time) for vagrant-lxc.

For a bigger list of available vagrant provider plugins have a look here.

wandering around

Talked to Ori on IRC. Found out about a project called operations/software/mwprof, which is (from my limited understanding) a way of profiling live instances of MediaWiki, in particular counting method calls.

It's similar to webstatscollector, but written for a totally different purpose. I talked to Ori and found out he wanted JSON output instead of XML. So I made a gerrit change with a JSON schema, validation, and some code to write the stats for it in JSON format using json-glib.

2013-December-13

Got the 3rd set of memory. This one doesn't work either. Tomorrow I'll have to go at it again, but this time at the computer hardware store, and have other people assemble it and test it for me on the spot. Turns out I'm not a hardware person.

2013-December-12

memory compatibility

As mentioned before, the problem was with the memory and not with the disks.

I tried two different sets of RAM, neither of which worked; they are incompatible with the motherboard (I was using 8GB RAM modules and was unaware until today of the frequencies of the RAM modules I had to choose).

I read the tomshardware forums and called Kingston's offices to ask about the problem, none of which helped. In the end, I found the answer through conversations with a friend and by reading the motherboard manual. I also found a QVL (qualified vendor list) inside the manual, indicating that ASUS has already tested the motherboard for compatibility with a lot of different memory modules from different vendors.

Apparently the motherboard only accepts 1, 2 and 4 GB memory DIMMs.

Ordered another set of 4 x 4GB DIMM cards.

It's quite clear I'm not a hardware person by any stretch of the imagination.

ramdisks, SSDs/SSHDs for performance

Let's say you have a VM running something that makes extensive use of the disks, and you have large amounts of RAM available.

You could make a ramdisk, move your VM to the ramdisk, and then link the VirtualBox directory back to $HOME so vagrant/VirtualBox can find it.

# create and mount a 4GB ramdisk (as root / with sudo)
mkdir -p /mnt/ramdisk1
mount -t tmpfs -o size=4096M tmpfs /mnt/ramdisk1
# move the VMs there (note the quotes: the directory name contains a space)
mv ~/"VirtualBox VMs" /mnt/ramdisk1/
# symlink it back into $HOME so vagrant/VirtualBox can still find it
cd $HOME
ln -s "/mnt/ramdisk1/VirtualBox VMs"

Then you could run and use your VM, which would now be stored completely in memory. Of course, that also means you lose the contents of the ramdisk once you reboot the physical machine, but you can back it up right before you do and put it back in the ramdisk when you boot up again (see the sketch below).
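
The backup/restore step could be as simple as this (the ~/vm-backup location is arbitrary):

# before rebooting: copy the ramdisk contents back to persistent storage
mkdir -p ~/vm-backup
rsync -a --delete "/mnt/ramdisk1/VirtualBox VMs/" ~/vm-backup/"VirtualBox VMs"/
# after rebooting and re-creating the ramdisk: put them back
rsync -a ~/vm-backup/"VirtualBox VMs"/ "/mnt/ramdisk1/VirtualBox VMs/"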

Another solution is putting the same ~/VirtualBox VMs directory on an SSD or SSHD.

2013-December-11

HDD death & recovery

Got 16GB of memory for the machine that runs the cluster, and bought a 64GB SSD as well. As soon as I plugged the SSD in, the HDD failed. So, hard-disk failure on that machine. New hardware is on the way and should arrive in the next 3h (I have to do some data recovery as well). Unexpected and weird, as that HDD was from 2011 (is 2 years a long HDD lifespan? I read some forum posts about some of these Seagates failing after 1-2 months, so maybe it's more about luck). My USB stick is nowhere to be found, so I had to get one of those as well to do an Ubuntu reinstall on the desktop machine. Any disk-intensive operation causes all processes on the machine to segfault, then some memory corruption occurs and I'm forced to reboot.

Found a way to do an integrity check for the b0rked HDD.

sudo smartctl -a /dev/sda | less

This doesn't output much (nothing).
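
In case it helps anyone with a similarly b0rked disk, smartctl can also run a self-test and show the results/error log explicitly:

sudo smartctl -H /dev/sda          # overall health verdict
sudo smartctl -t short /dev/sda    # kick off a short self-test (a few minutes)
sudo smartctl -l selftest /dev/sda # read back the self-test results
sudo smartctl -l error /dev/sda    # SMART error log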

Moreover, the failures started happening exactly after I tried to move the VirtualBox .vbox files for the VMs from the HDD to the SSD.

Update: Did some trial & error: took out disks, put them back in, took out the new RAM, put the old RAM back in, tried and tried. Turns out the new memory is the problem: it is not compatible with the chipset of the motherboard. Nobody told me that. Have to solve it tomorrow.

A small dream

If I had infinite time in each day, one of those days I'd write a crawler to get the specs of all computer parts and throw them into tables in a DB. Then I'd be able to run queries like this one:

SELECT motherboard.model, cpu.model, ram.model, gfxcard.model, hdd.model, ssd.model
FROM motherboard
JOIN cpu ON (motherboard.cpusocket = cpu.socket AND motherboard.chipset = cpu.chipset)
JOIN gfxcard ON motherboard.gfxport = gfxcard.port
JOIN ram_mobo_compatibility ON ram_mobo_compatibility.mobo = motherboard.model
JOIN ram ON ram_mobo_compatibility.ram = ram.model [...];

Then I'd have no problem, everything would be awesome, I could just make a query, find what I need to get, get it, and it would just work, and I could just work on what I have to work on, and everyone would be happy and the sun would shine in the sky even in wintertime and on foggy days. And I could also get what I need at the lowest price too. Except I don't have that time, and I have to do it by hand.

2013-December-10

Made some progress on the UDF. Still getting some errors, have to figure them out.

2013-December-09

Some background on ways of extending Hive

There are 3 ways of extending Hive's query language:

  • User-Defined Functions (UDF)
  • User-Defined Aggregation Functions (UDAF)
  • User-Defined Table Functions (UDTF)

For the purpose of card 1227 we're interested in UDFs.

UDFs should not be confused with SerDes (Serializer/Deserializer). They are both ways of extending the query language, but while SerDes are concerned with serializing/deserializing data (for example the JSON SerDe), UDFs are geared more towards processing/transformation/extraction logic on one or more columns of a table.

There are at least two types of UDFs:

  • plain UDFs, which extend org.apache.hadoop.hive.ql.exec.UDF
  • generic UDFs, which extend org.apache.hadoop.hive.ql.udf.generic.GenericUDF

When developing a UDF one extends one of those two classes and implements various methods on them. Plain UDFs are much easier to write than GenericUDFs: there are a lot more specific datatypes that need to be used for GenericUDFs, as this blog post shows, and it's also more involved because you have to implement more details of the UDF yourself. For GenericUDFs (hive.ql.udf.generic.GenericUDF) the official documentation is scarcer than for plain UDFs. There are of course other forms of documentation, such as... reading generic UDFs that other people have implemented (or blog posts) and trying to deduce how they work or what is necessary in order to implement one.

For simple UDFs the documentation is here [1]. Apparently GenericUDFs are used when the input/output is more complex; that's the motivation behind reading about GenericUDFs.

More to come soon.

2013-December-04

Read documentation on UDFs

2013-December-03

My Standup update

Extend the UDF to get multiple values out of it. The following things will be accessible:

  • minor browser version
  • major browser version
  • browser vendor

Then I can write queries for all the 3 required groupings.

I'm currently reading up on UDFs and GenericUDFs for this. Right now the UDF just outputs one text value (the vendor), so I have to extend it to return a Map instead.

Maybe I'll get some time to write some tests as well for this.

Sample output from the Mobile team

This particular section contains sample output for card 1227, provided to us by the mobile team, so it's roughly what they would expect.

Vendor table

  Mobile Safari      35.87% (4064512)
  Android            25.14% (2848976)
  Other               9.45% (1070491)
  Chrome Mobile       6.35% (719102)
  Opera Mini          4.55% (515398)
  total              100.0% (11331934)

Report major version

  Mobile Safari 6    29.8%  (3377452)
  Android 4          13.84% (1568531)
  Android 2          11.2%  (1268731)
  Other unknown       9.45% (1070491)

Report minor version

  Mobile Safari 6.0  27.3%  (3093981)
  Android 2.3         9.79% (1108994)
  Other unknown       9.45% (1070491)
  Android 4.1         7.27% (823372)
  Android 4.0         6.2%  (702782)
  Googlebot 2.1       4.25% (481245)
  Chrome Mobile 18.0

Updated Onboarding page

Updated parts of the onboarding page for the analytics team with details on hardware, useful software and mailing lists.

2013-December-02

Hive query

Here's some sample output for the query I'm working on (before I added the percentages column):

Windows 7,39079
iOS,13776
Windows XP,12781
Android,12006
Mac OS X,6659
Windows 8,4546
Other,3950
Windows Vista,2713
Windows,1209
Linux,839

It's yet to be decided whether we'll use dClass or ua-parser in the final query for this. The output above and the current implementation use the Java implementation of ua-parser.

Personally I'd like to implement UDFs and queries for both dClass and ua-parser, to be able to compare them and have enough information to make an informed decision on which one to use.

Some observations about Hive and the cluster I'm playing with

Almost forgot about this page for a while.

So a simple SELECT stuff FROM table LIMIT 10; would take around 40 seconds because of overhead (loading up Hive, adding jars for the SerDe/UDFs, switching database, telling Hive how to find the UDFs you'll use in queries); a sketch of that boilerplate is below.
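
To give an idea of where that time goes, every invocation pays roughly the same startup boilerplate (the jar path, function name and database below are made-up placeholders):

hive -e "
  ADD JAR /path/to/ua-parser-udf.jar;
  CREATE TEMPORARY FUNCTION ua_vendor AS 'org.example.UAVendorUDF';
  USE some_db;
  SELECT stuff FROM table LIMIT 10;
"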

I'm writing a Hive UDF to count percentages of browser vendors and versions; it's working out well so far. Hive syntax differs from MySQL's. Apparently for each additional thing in a query, Hive creates a new job: you want a sub-query, a new job; you add a LIMIT clause, a new job; you want an INNER JOIN, a new job. Each of these jobs has a 20-30 second overhead, so it all adds up.

I suspected this would turn out to be a problem. Now I have a query with a couple of joins, and I managed to hit ~2m30s running time with a rather simple query. It's still OK, I guess. Actually, it's not OK... it depends how you view it: compared to MySQL, yeah, it has a lousy overhead. I'm still trying to cope with the overhead that Hive forces upon you while developing a query or a UDF for it; apparently there's no way around it. If you have any thoughts on this, I'd be very happy to hear from you.

I have my own hand-rolled Hadoop/Hive cluster set up on one of my machines over here. Setting up Hadoop/Hive is a pain, but at least I get full control over them and learn some puppet and vagrant in the process.

I unsuccessfully tried to manually install Hue. I got it running, logged in with a user I inserted manually through its SQLite user DB (I had to forward the Hue ports from the master to the dev laptop), then tried to run some Hive queries in Hue and it just hangs. So much for my dream of using Hue. It seems it had some problems with the hiveserver2 configuration it talks to. Hue being what it is, namely a Python web app that talks to hiveserver2 through the Thrift protocol, there's lots of XML and configuration, and I probably messed up some of it. I certainly spent lots of time reading forums and mailing lists for it, but... to no avail.

I am thinking about converting all this setup to puppet-cdh4, which has Hue and everything else already packaged.

This is a simple ASCII diagram of how I run this home-grown Hadoop setup. Of course, it's not very detailed or useful (maybe later). I made the diagram using App::Asciio (you can also get it from the asciio package).

                           .----------------------------------.
                           | desktop                          |
                           | machine                          |
                           |                                  |
                           |                                  |
                           |    master   backup               |
                           |     ____     ____                |
                           |    |    |   |    |               |
                           |    |____|   |____|               |
   dev                     |    /::::/   /::::/               |
  laptop                   |       ^                          |
  ____                     |       |                          |
 |    |  --------------------------'                          |
 |____|     ssh & sshfs    |                                  |
 /::::/                    |    hadoop1  hadoop2  hadoop3     |
                           |     ____     ____     ____       |
                           |    |    |   |    |   |    |      |
                           |    |____|   |____|   |____|      |
                           |    /::::/   /::::/   /::::/      |
                           |                                  |
                           '----------------------------------'


2013-April-10

It's been a while since I've updated this page.

The 500M bump was solved.

This is because we agreed to use Mingle instead. So right now I'm working on card-60 and card-551.

Card 551 is focused on documentation, while card 60 is focused on mobile pageviews reports.

The documentation is also available in MediaWiki format.

It's currently being generated automatically in several formats:

  • HTML
  • LaTeX (so it's available in pdf format)
  • POD
  • MediaWiki

2013-March-07

Attempted to fix the 500M bump.

2013-March-06

Fixed a segfault in the webstatscollector time_travel branch.


2013-February-22

Today work will be done on the 500M bump.

An issue was filed to address backporting in debianize. Not sure what priority it has; I hope to find out more about this in the standup.

2013-February-21

Polished some changes to the webstatscollector time_travel branch. Gerrit got in the way, not letting me add a new patchset to the already existing review, but I managed to get the changes in through another changeset.

This change replaces spaces with underscores in page titles in the pagecounts output of webstatscollector's filter.


Evan Rosen published his research on the way different versions of Maxmind's database affect the pageview statistics over time.

2013-February-20

Provided support for building dClass on OS X; added detailed instructions in the README here.

2013-February-17

Erik provided some very useful details about the logic of wikistats. We'll use them in the new mobile pageviews report. Maybe we can find a way to overcome the 500M bump.

Evan provided this diagram showing the way he counts pageviews. This will be implemented and compared with the logic in the new mobile pageviews reports.

New features in git2deblogs.pl in the debianize repo:

All tests pass, debianize is green.

All Maxmind databases for city/country/etc. for all months and years are now packed into a big 11G zip file on stat1:/home/spetrea/maxmind_archive.zip.

2013-February-15

More work on Limn debianization. A new user is now created when installing the limn package.

The package installs Limn in /usr/lib/limn.

It also creates a user called limn with the home directory /var/lib/limn.

2013-February-14

Evan Rosen asked that we collaborate on finding a unified definition of a mobile pageview. This page was created and will be updated to reflect that definition of a "mobile pageview".

The graph inside is created using graphviz (more info on the dot syntax can be found here). If anyone reading this would like to contribute, please contact someone from the Analytics team.

2013-February-13

Taking some days off next week (not now).

Started working on debian package for Limn.

The 500M bump

This is the biggest problem we currently have with the new mobile pageviews reports.

We tried multiple approaches so far:

  • we tried eliminating all /w/api.php requests with action=opensearch
  • we tried disabling bot discarding
  • we tried removing /wiki/Special: urls
  • we tried disabling our 20x/30x check
  • we tried checking the mimetype density of processed requests in December 2012 before and after 14 Dec.

What we found so far is that:

  • the problem is most likely limited to the /wiki/ urls
  • the problem is limited to the urls with mimetype "-"

What we want to do next:

Classify the mimetype "-" requests with /wiki/ urls by

  • ip
  • url

This is part of a drill-down process, so we can find out which features the requests in the 500M bump share.

2013-February-12

New UI changes for Mobile reports.

  • the legend now has all the details about all the numbers in the table cell
  • the breakdown and discarded piecharts are now optional (selected through a checkbox)
  • the wiki and api counts are now optional (selected through a checkbox)

A sample was created with processed entries with mimetype "-" for the days 1-14 Dec and 15-31 Dec.


The top 200 referer domains in December 2012 were generated.

Top 200 referer domains in December 2012 after 14 Dec.

Top 200 referer domains in December 2012 before 14 Dec.

2013-February-11

New mobile pageviews reports

We have to drill down into the data, so we'll make histograms of the mimetypes for 1-14 December and 15-31 December 2012 to find out which mimetypes have a bigger share of the total.

A mimetype density chart was made for 1-14 Dec and 15-31 Dec 2012.

2013-February-10

Maxmind's database and the IP block re-allocation

The Maxmind database changes as blocks of IPs are regularly re-assigned. We use Maxmind's database indirectly through udp-filter. There is an archive of all Maxmind databases to which we have access. udp-filter, and any program that does geolocation, should take into account the date of the log entry when the geolocation is done.

A solution to this would be to load all the Maxmind databases into memory when doing geolocation and, depending on the time of the log file, use the appropriate database.

This also applies to bot detection. We currently have code in wikistats that relies on various IP ranges, and these IP ranges change over time. I'm not aware of a list of Google, Bing and Yahoo bot IP ranges across time (but it would be very helpful if we could find one).

The problem with Maxmind's GeoIP database is directly related to the country reports: because the blocks get re-allocated, the counts are affected from one month to the next.

Ideally we should use different maxmind dbs for different time intervals.

What I'm currently working on

The main areas of focus are:

  • Country report (requested by Amit Kapoor)
  • New Mobile pageviews report (requested by the Mobile Team, in particular Tomasz Finc)
  • Solving bugs in wikistats (the bugs present in Asana, requested by Erik Zachte)
  • Limn debianization
  • Device detection through the dClass library (requested by the Mobile Team)

New mobile pageviews reports

It now takes ~2h to generate reports for an entire year of data, and we can write tests for them as well because the functionality is split into classes. In this particular case, we have added map-reduce logic that can crunch the data in parallel. We also use templating to separate HTML/JS and rendering-specific code from the rest of the code.

Currently we're experiencing some difficulties with November and December 2012 and onwards because the mobile API has changed and there are multiple requests per pageview. The vast majority of these are /wiki/ requests, as can be seen in revision 25 of the report here.