Something that would benefit both Commons and editors of every wiki is some form of similarity analysis (perceptual hashing) for media:
- Images - http://en.cnki.com.cn/Article_en/CJFDTOTAL-DZXU200807028.htm
- Video - https://github.com/rednoah/VASH , http://iosrjournals.org/iosr-jce/papers/Vol18-issue6/Version-5/P1806058486.pdf
- Audio - http://link.springer.com/article/10.1155/ASP.2005.1780
While most of these would require some research to evaluate and integrate into an app or even Commons, perceptual hashing for images is already supported by the backend software that Wikimedia currently uses.
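To make the idea concrete, here is a toy sketch of the simplest perceptual-hash variant (average hash), written in plain Python. This is not the algorithm Wikimedia's backend uses; real deployments would rely on a library such as pHash, and the 8x8 matrix below stands in for an image that has already been downscaled to grayscale.

```python
def average_hash(pixels):
    """Toy perceptual hash: compare each pixel of a downscaled
    grayscale image against the mean brightness and pack the
    resulting bits into a single integer (64 bits for 8x8)."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

# An 8x8 stand-in "image" (grayscale values 0-255); real code
# would first downscale an actual photo to this size.
img = [[(x * y) % 256 for x in range(8)] for y in range(8)]
h = average_hash(img)
```

Because the hash is derived from coarse brightness structure rather than exact bytes, re-encoded or lightly edited copies of the same picture produce nearly identical bit patterns.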
If exposed via an API, or even a special page that enhances or supersedes Special:FileDuplicateSearch, this would be a huge gain for curation and for readers.
This could also be exposed in the app by asking editors whether two images are similar, or even implemented as an image CAPTCHA. Perceptual hashing can detect manipulations such as "rotation, skew, contrast adjustment and different compression/format". In some cases it will even identify modified but still useful images. A similar algorithm is Blockhash (http://blockhash.io/), which claims to be more efficient in some instances. A non-academic comparison of the two is here:
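The robustness against those manipulations comes from comparing hashes by Hamming distance rather than exact equality: an edit flips only a few bits, so similar images stay within a small distance of each other. A minimal sketch, assuming 64-bit hashes and a distance threshold of 10 (a commonly used ballpark, not a Wikimedia-specified value):

```python
def hamming(h1, h2):
    """Number of differing bits between two 64-bit perceptual
    hashes; small distances indicate visually similar images."""
    return bin(h1 ^ h2).count("1")

# Threshold is an illustrative choice; tuning it trades false
# positives against false negatives.
THRESHOLD = 10

def looks_similar(h1, h2):
    return hamming(h1, h2) <= THRESHOLD
```

A recompressed or contrast-adjusted copy might differ in one or two bits and still match, while an unrelated image typically differs in around half of the 64 bits.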
Other use cases include surfacing similar images in search results (as Google does), in the media search tool within VisualEditor / the new wikitext editor, on the image's file page, or in MediaViewer. The applications are endless.
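A "find similar files" feature like the ones above reduces, at its core, to scanning an index of precomputed hashes. A hypothetical sketch (the dictionary index and file titles are made up for illustration; a production system would use a BK-tree or multi-index hashing instead of a linear scan):

```python
def find_similar(query_hash, index, max_dist=10):
    """Return titles from an {title: hash} index whose hash is
    within max_dist bits of the query. Linear scan for clarity;
    real deployments would use a smarter index structure."""
    def hamming(a, b):
        return bin(a ^ b).count("1")
    return sorted(
        title for title, h in index.items()
        if hamming(query_hash, h) <= max_dist)

# Hypothetical index of precomputed 64-bit hashes.
index = {
    "Example.jpg": 0xF0F0F0F0F0F0F0F0,
    "Example_recompressed.jpg": 0xF0F0F0F0F0F0F0F1,  # 1 bit off
    "Unrelated.png": 0x0F0F0F0F0F0F0F0F,  # all 64 bits differ
}
matches = find_similar(0xF0F0F0F0F0F0F0F0, index)
# matches: ["Example.jpg", "Example_recompressed.jpg"]
```

Since the expensive part (hashing) happens once at upload time, lookups like this are cheap enough to run in search, the file page, or MediaViewer.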
Even if these tools for the app fail and are eventually removed, this technology can continue being used in other products. Win-win!