Security for developers/Architecture
These security guidelines help lead developers, architects, and product managers make decisions that protect MediaWiki's users when developing new features or refactoring old code.
All MediaWiki developers should follow these principles and processes when developing new core features or extensions. If a developer or team plans to have their code deployed on the Wikimedia cluster, following these guidelines will help make the security review process quick and minimize the changes required before deployment.
This guide complements the Architecture guidelines, Performance guidelines, and design style guidelines.
Developing securely in MediaWiki
When writing a new feature or refactoring old code in MediaWiki, it's important that you consider security throughout the process. You should:
- Know what types of information we are trying to protect in MediaWiki
- Before starting, assess whether this feature is a good idea
- While designing (architecting), prefer decisions that promote secure design principles
- For security-critical features, model the threats, and get input from security reviewers on how you will mitigate them
- When writing code, ensure you avoid common security mistakes
- Ensure you have browser tests for all attack surfaces, so we can automate security testing
- Identify someone who will be responsible for addressing security vulnerabilities in the future
What are we trying to protect?
Although MediaWiki is primarily designed to allow anyone to edit, and to provide users with access to other users' information, there are a few things that the system attempts to keep private. Additionally, MediaWiki supports the idea of "private wikis", where all content is private, and only a limited set of users can read and edit the content.
MediaWiki is also designed for distributed administration by a large number of trusted users. Typically, wikis will have different groups of registered users for different roles, such as sysop/administrator (perform basic administrative tasks), bureaucrat (modify other users' rights), checkuser (investigate abuse), and oversight (ability to view/remove extremely sensitive content from administrators). Users should not be able to act in a role that they have not been granted.
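For example, code that performs a privileged action should check an explicit user right rather than hard-coding group names. Below is a minimal sketch, assuming a hypothetical "example-manage" right registered by the feature; isAllowed() and PermissionsError are existing core APIs:

    class SpecialExampleManager extends SpecialPage {
        public function __construct() {
            parent::__construct( 'ExampleManager' );
        }

        public function execute( $subPage ) {
            // Check the explicit right, not a group name, so each wiki can
            // decide which groups hold it via $wgGroupPermissions.
            if ( !$this->getUser()->isAllowed( 'example-manage' ) ) {
                throw new PermissionsError( 'example-manage' );
            }
            // ... perform the privileged action ...
        }
    }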
Additionally, MediaWiki is designed to prevent these common threats:
- An attacker correlating what articles a user reads, whether or not the user has registered
- An attacker discovering the physical location or real identity of editors who wish to remain pseudonymous
- Spambots adding unproductive content
- Administrative users accidentally/unknowingly taking an action on behalf of an attacker
- An attacker attempting to take over the underlying server
To support these goals, MediaWiki will protect:
- The confidentiality, integrity, and availability of the underlying operating system and components.
- The confidentiality of deleted & suppressed content.
- E.g. content in an article, edit summary, username of editor, or specific log entries that have been deleted by an admin, or suppressed by a user with suppression rights.
- The confidentiality of data protected by the WMF privacy policy
- For example, the UserAgent of editors, and the IP addresses of logged-in editors
- Email address or password associated with an account
- Other private data defined in Wikimedia's Data retention guidelines
- The ability to delete and suppress any contributed content.
- The ability to remove ("delete" for admins, "suppress" for users with suppression rights) any content that comes from a user (the actual content, or metadata such as a username or edit summary).
- Integrity of content, attribution and logs.
- Your feature should attribute content to its author, and should allow neither the attribution nor the content of the contribution to be changed without further attributing the change to the user who made it. Logging information shouldn't be deleted or changed unless there is an approval and oversight process. In general, a user should not be able to deny that they made an edit attributed to their user, nor should an admin be able to deny taking an administrative action that the logs report they took. For any actions that change the state of the wiki (or add content), ensure anti-CSRF mechanisms are used (see the sketch after this list). Any contributed content must integrate with the CheckUser extension so vandalism and threats to personal safety can be appropriately investigated.
- Anonymity.
- Prevention of site Denial of Service (DoS).
- Normal/anonymous users shouldn't be able to asymmetrically cause MediaWiki to do a lot of work. If a feature is easy for the user to request but requires significant resources for the server to fulfill, the feature should be limited per user or by the total number of instances running on the server (see the rate-limiting portion of the sketch after this list).
- Prevention of content Denial of Service, e.g. vandalism and spam.
- Effectiveness of spam prevention tools.
- All contributed content should integrate with spam prevention tools (SpamBlacklist, AbuseFilter), and spam-bot prevention tools (ConfirmEdit). New features should not allow contributions by registered users or IP address users who have been previously blocked.[1]
- Preventing accounts from elevating their privileges without authorization.
- Your feature should protect the existing access rights to content, and your feature should not allow changing user rights outside of the existing process.
- Although MediaWiki doesn't attempt to allow enforcing fine-grained access controls, it is especially important to ensure that your feature doesn't allow read access if the wiki is private.
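To make the anti-CSRF and rate-limiting points above concrete, here is a minimal sketch of a state-changing request handler. matchEditToken(), pingLimiter(), Status, and ThrottledError are existing core APIs; the "example-submit" limiter key is hypothetical and would need a matching $wgRateLimits entry:

    // In a SpecialPage or other ContextSource subclass:
    public function onSubmit() {
        $request = $this->getRequest();
        $user = $this->getUser();

        // Anti-CSRF: reject the request if the submitted token does not
        // match the token tied to this user's session.
        if ( !$user->matchEditToken( $request->getVal( 'wpEditToken' ) ) ) {
            return Status::newFatal( 'sessionfailure' );
        }

        // Anti-DoS: enforce the per-user/per-IP throttle configured in
        // $wgRateLimits for this action.
        if ( $user->pingLimiter( 'example-submit' ) ) {
            throw new ThrottledError();
        }

        // ... perform the state-changing action ...
        return Status::newGood();
    }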
Assess your feature's security
A feature may not have any of the implementation flaws listed on Security for developers, but if it fails to protect the items listed above, then we don't want it running on Wikimedia sites. Always ensure that the process your feature enables isn't itself flawed.
When starting a new application or refactoring an existing one, examine each functional feature and ask:
- Is the process surrounding this feature as safe as possible? In other words, is this a flawed process?
- If I were evil, how would I abuse this feature?
- Is the feature required to be on by default? If so, are there limits or options that could help reduce the risk from this feature? [2]
If your feature has specific security features (authenticates users, implements its own XSS sanitization, or similar tasks), get someone on the Security Team to review the design of those specific features. Additionally, high risk applications (e.g., those dealing with sensitive data, or handling new, complex file formats) should have their design reviewed also. See "Does my application need a security review" to determine if you need to review the architecture before you start implementing your feature.
Teams should consider and plan for security non-functional requirements during the assessment phase.[3] In other words, teams should find a way to plan for security across their features and track any requirements needed to satisfy our security goals.
Secure design principles
Many design principles for security features have been discussed in academia and the information security community over the years. Both the list from Saltzer and Schroeder's 1975 paper and the one in OWASP's 2005 Developer's Guide are often cited. Although a case could be made that all code in MediaWiki should follow all of the principles from both lists, the items that we especially value and look for during review are:
- Minimize the attack surface. For any feature (and especially security-critical features), limit the avenues of attack.
- Bad example: In the early days of the MediaWiki API, each API method generated and distributed its own anti-CSRF token for state-changing operations. However, when we expanded the API to handle JSONP calls, many of the individual token-handling functions weren't updated. This resulted in bug 49090, in which several API methods made anti-CSRF tokens available via JSONP. Since JSONP allows reading JavaScript responses from other domains, this allowed attackers to read a user's token from another website, and perform CSRF attacks against them.
- Good example: Maintenance scripts can only be run from the command-line interface.
- Simplicity
- Bad example: User identification and session management is complicated in MediaWiki. (Code: core and any extensions that implement UserLoadFromSession, such as CentralAuth.) Extensions can manage the session for some users, but not others. Users can be identified by either the PHP session, or by token and user_id cookies. Administrators can configure the storage backend for user sessions. Accounts can also be autocreated when a user (anonymously) visits a wiki; this autocreation caused a severe issue (bug 53032) when some pages emitted headers causing their contents to be cached while users were simultaneously autocreated, so their cookies were cached as well.
- Secure (fail-safe) defaults. Prefer code that defaults to being most restrictive, and give administrators or developers the ability to only enable features that they need. Prefer to filter via whitelist instead of trying to blacklist insecure values.
- Good example: MediaWiki mitigates many clickjacking attacks by disallowing the iframing of pages by default. Developers must specifically call allowClickjacking() to allow iframing a particular page.
to allow iframing a particular page. - Good example: Extension:RSS requires a MediaWiki administrator to whitelist known good feeds for inclusion in a wiki, instead of blacklisting sites that have caused problems. This limits the server's exposure to client-side vulnerabilities in curl and related libraries.
- Good example: Uploading is disabled by default, since it raises potential issues, ranging from occasional security issues in file-handling code to spam and copyright concerns.
- Least Privilege. Users of MediaWiki should be able to work with the least set of privileges necessary to complete their task. Additionally, MediaWiki should strive to function well with minimal rights on the hosting server.
- Bad example: The editing interface gives every editor the same abilities, including template functionality.
- Good example: MediaWiki's web installer allows the user to provide two database user accounts. The first account has privileges to create the tables during the initial installation, and the second will be used during operations that only need basic CRUD rights on the existing tables.
- Good example: Many useful tools run on the WMF's Toolforge system, instead of being built as extensions or core functionality in MediaWiki. Further, many of these tools now use OAuth, so users only grant these tools limited access rights on their behalf. One example: the image rotate tool on Commons.
- Psychological acceptability. "It is essential that the human interface be designed for ease of use, so that users routinely and automatically apply the protection mechanisms correctly."
- Good example: When enabling HTTPS for all logged-in sessions, we gave users the option to still opt out of using HTTPS for their particular session. This allowed us to protect all users' clear-text passwords during login, and by default the users' authentication tokens while logged in, while still letting users who felt the experience was too poor for their use continue editing once they opted out.
- Bad example: MediaWiki's HTML generation classes (Xml and Html) are often shunned by developers as being too complex and cumbersome to use. Instead, developers generate the HTML themselves, and sometimes generate XSS vulnerabilities. (A sketch illustrating two of these principles follows this list.)
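Here is a minimal sketch tying two of the principles above together: whitelist filtering (secure defaults) and core's escaping helpers. $wgExampleFeedWhitelist, the message key, and the $url/$feedTitle inputs are hypothetical; Html::element() is a real core method that escapes attribute values and text content:

    // $url and $feedTitle are assumed to come from untrusted input.
    // Fail safe: reject anything not explicitly whitelisted.
    global $wgExampleFeedWhitelist;
    if ( !in_array( $url, $wgExampleFeedWhitelist, true ) ) {
        return Status::newFatal( 'example-feed-not-whitelisted' );
    }

    // Html::element() escapes $url and $feedTitle, so a malicious feed
    // title cannot inject markup (an XSS vulnerability).
    $link = Html::element( 'a', array( 'href' => $url ), $feedTitle );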
When security and other requirements conflict
In many cases, a particular feature will go against one or more of these principles. As in any art form, rules sometimes should be broken, but breaking them should be done deliberately and in consultation with an experienced security developer, not accidentally.
Threat modeling
When looking at specific security considerations and controls in your feature, you also need to consider how your feature will be attacked. For each possible attack, decide whether the risk is worth mitigating with a technical control (and how that should be done), or make the conscious decision to accept the risk posed by the threat. We call this threat modeling. Accepted risks should be communicated to your stakeholders.
To think through the different threats that your feature will face, in Threat Modeling: Designing for Security (ISBN 1118809998), Adam Shostack recommends first drawing a data flow diagram representing the external actors, processes, and datastores for your feature, and how the data flows between them, with trust boundaries drawn around the actors and processes that trust each other. Once this diagram is drawn, for each place where data flows across a trust boundary, consider how your feature will prevent Spoofing, Tampering, Repudiation, Information disclosure, Denial of Service, Elevation of privileges (STRIDE).
Alternatively (or in addition to STRIDE modeling), you can also use MITRE's list of common attack patterns (CAPEC) to think through common attacks on your feature, and how you can mitigate each if applicable. STRIDE is often a useful mnemonic when you are initially thinking through the major ways your code can be abused, while CAPEC is a long but fairly comprehensive list of attacks that may be useful as a checklist to review your design.
- Example data flow diagram: VisualEditor and Parsoid.
Implementation
As you implement the feature and controls, you need to make sure that:
- The controls you identified while doing the design and threat models were correctly implemented
- The code does not allow attacks on the site's users (XSS, CSRF) or the server (SQL or Command Injection). Review security for developers to be familiar with the most common mistakes.
- Verify that:
- The MediaWiki authorization structure is enforced
- You integrate with MediaWiki's anti-spam and anti-vandalism controls (at minimum ConfirmEdit, CheckUser, AbuseFilter, and SpamBlacklist)
- If your code processes XML, you have disabled XML external entity loading to prevent XXE attacks (see the sketch after this list)
- If your code redirects or proxies requests, the location has been sanitized and approved via whitelist
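For the XML item, a minimal sketch of XXE-safe parsing in PHP (as of the PHP versions current when this page was written): never pass LIBXML_NOENT when parsing untrusted input, and disable the external entity loader around the parse. $untrustedXml is a hypothetical input variable:

    // Disable external entity loading; remember the old setting so we
    // can restore it afterwards.
    $oldValue = libxml_disable_entity_loader( true );

    $dom = new DOMDocument();
    // LIBXML_NONET additionally blocks any network access while parsing.
    $loaded = $dom->loadXML( $untrustedXml, LIBXML_NONET );

    libxml_disable_entity_loader( $oldValue );

    if ( !$loaded ) {
        // Handle the parse failure; do not fall back to a laxer parser.
    }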
Security testing
Security-critical units of code should have a comprehensive set of unit tests. Tests should show both that the feature works, and that basic adversarial inputs are accounted for. For instance, ensure that a user can be authenticated with the correct password, and that the user is not authenticated with the wrong password or a blank password. Gerrit change 140938 shows how the Flow team implemented this to test how their file opening code prevents directory traversal.
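A minimal PHPUnit sketch of such adversarial tests; the Authenticator class and its checkPassword() method are hypothetical stand-ins for whatever unit your feature actually exposes:

    class AuthenticatorTest extends PHPUnit_Framework_TestCase {
        public function testCorrectPasswordAuthenticates() {
            $auth = new Authenticator( 'alice', 'correct horse battery staple' );
            $this->assertTrue( $auth->checkPassword( 'correct horse battery staple' ) );
        }

        public function testWrongPasswordIsRejected() {
            $auth = new Authenticator( 'alice', 'correct horse battery staple' );
            $this->assertFalse( $auth->checkPassword( 'wrong password' ) );
        }

        public function testBlankPasswordIsRejected() {
            $auth = new Authenticator( 'alice', 'correct horse battery staple' );
            $this->assertFalse( $auth->checkPassword( '' ) );
        }
    }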
Most (ideally all) ways that users can interact with your feature should have browser tests defined. This will help us run security scanning software to test all of the ways a user can interact with your feature for security vulnerabilities.
When defining tests, be careful about your test data and the systems that your tests interact with:
- Never copy private data from a production system to use as test input
- If your feature relies on external services (payment processing gateways, etc.), ensure your tests don't contact production systems when they are run
Ongoing response to security bugs
Before your project or feature is put into production, the people or teams who will be responsible for addressing security vulnerabilities in the future must be identified.
Security bugs are all made public in Bugzilla once they have been fixed (and the fixes released/deployed as appropriate). To see the types of bugs we have fixed in MediaWiki, go here. Each bug should include all the information specified in Reporting security bugs, so you can see what the problems were and how we fixed them.
Resources
- OWASP top 10 - https://www.owasp.org/index.php/Top_10_2013-Top_10
- CWE top 25 - https://cwe.mitre.org/top25/
- CAPEC - http://capec.mitre.org/data/definitions/1000.html
- STRIDE - http://msdn.microsoft.com/en-us/magazine/cc163519.aspx
- BSIMM - http://bsimm.com/
Notes
- ↑ Currently, MediaWiki captures the IP addresses of non-registered users, stores those addresses, and permanently and publicly associates them with the users' contributions. We allow ourselves to do this because it would be too hard to change, but it's something a lot of people would like to see go away.
- ↑ OWASP, A Guide to Building Secure Web Applications and Web Services, 2005, http://prdownloads.sourceforge.net/owasp/OWASPGuide2.0.1.pdf?download
- ↑ http://www.methodsandtools.com/archive/archive.php?id=113