Wikimedia Technical Conference/2018/Session notes/Architecting Core: extension interfaces
Theme: Architecting our code for change and sustainability
Type: Technical Challenges
Leader(s): Timo Tijhof
Description: Extensions are the key way that we add and modify the functionality of MediaWiki. This session looks into the interfaces for extensions and how they impact the architecture of MediaWiki. The primary goal for this session is identifying (potentially breaking) changes we can make to the extension interfaces, so that the underlying architecture can be changed without breaking compatibility in the future. -- T206081
It would be useful to look at how some other software handles extensions. Two projects that have recently done overhauls: Firefox and WordPress. Both made major breaking changes and invested a lot of effort in the overhaul process. We need to look at what they did, and why.
Question 1: What’s bad about the extension interface that we have?
- We expose so many internals that we’re not able to make changes any longer
- Restricting the model will allow for changes without destroying existing extensions
Question 2: Is there only one extension interface, or are there multiple? Can we classify the existing extension ecosystem into a limited number of interfaces that, collectively, cover most of our use cases?
- One is a listener, that just receives information
- One is a filter, which has the potential to modify something before sending it along.
- One is a registry, which registers additional implementations of an existing abstraction.
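The three interface kinds listed above can be sketched as follows. This is a minimal, hypothetical illustration in Python (MediaWiki itself is PHP); all names here are invented and not part of any MediaWiki API.

```python
# Hypothetical sketch of the three extension-interface kinds discussed
# above: listener, filter, and registry. Invented names, not MediaWiki code.

class ListenerHook:
    """A listener only receives information; it cannot change the outcome."""
    def __init__(self):
        self.listeners = []

    def subscribe(self, fn):
        self.listeners.append(fn)

    def fire(self, event):
        for fn in self.listeners:
            fn(event)  # return values are deliberately ignored

class FilterHook:
    """A filter may modify a value before it is passed along."""
    def __init__(self):
        self.filters = []

    def subscribe(self, fn):
        self.filters.append(fn)

    def apply(self, value):
        for fn in self.filters:
            value = fn(value)  # each filter returns the (possibly new) value
        return value

class Registry:
    """A registry collects additional implementations of an abstraction."""
    def __init__(self):
        self.impls = {}

    def register(self, name, impl):
        self.impls[name] = impl

    def get(self, name):
        return self.impls[name]
```

The key difference: a listener's return value is ignored, a filter's return value replaces the input, and a registry adds implementations without touching existing ones.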
- Cindy, Daren, Alexia, TheDJ, Leszak, Subbu, Florian, Raz, Vogel, Tim S, Daniel K, Kate, Irene, Adam Basso, Gergo, Brion Vibber, Timo, Antoine L.
Questions to answer during this session
Why is this question important? What is blocked by it remaining unanswered?
- Q: Which functionality should be provided by extensions, rather than core? Which functionality should have a default implementation that can be replaced?
- Why: Architecting core to support both Wikimedia and third-party use cases is difficult. The extension interface is the primary way that this is accomplished. Before we can discuss improvements to the extension mechanisms, we have to know what we want to use these mechanisms for. Generally speaking, core should cover functionality that is common to most if not all uses of MediaWiki, while extensions should be used for functionality that is only needed or wanted on relatively few instances.
- Q: Which properties of the existing extension interfaces make it difficult to make architectural changes to MediaWiki? What changes can be made to extension interfaces to ease making architecture changes while supporting the functionality needed by extensions?
- Why: Many architecture changes that we want to make aren’t easy, because the current extension interfaces expose a lot of internals. Listing out specific issues will help us define what a better interface may look like.
- Q: How can the needs of different extensions be classified, and how can these classes of needs be addressed using different kinds of extension interfaces? What functionality of core should be exposed through these extension interfaces?
- Why: Identifying the different kinds of interfaces, and their properties and constraints, will guide the design of concrete interfaces, should improve consistency, and avoid exposing internals.
- Q: If breaking changes are needed, what processes can be implemented to ease such a change?
- Why: If we need to break compatibility, we should be clear about what is being broken and why we are breaking it, and develop a good migration plan. We should also consider providing compatibility shims where possible.
- Timo’s Presentation:
- Extension interfaces - should we have them?
- Hoping to focus on the two specific solutions we are currently thinking of
- What are the current problems?
- Hooks expose a lot
- Internal services, state and methods are implicitly public; which is both good and bad
- Any change is a breaking change to the stable API; there are some historically stable APIs, but they might change tomorrow, because we don’t mark them as stable
- Desired outcomes:
- Stable hooks - extensions can be supported by core for a long time
- Small hooks - extensions can still change defaults without duplication
- Need to decide what to do about big hooks - extensions can still easily replace an entire service
- Improvable core - ideally without breakage
- Hooks are slow and fragile
- Hooks run synchronously during the operation, which makes them easy to use for modifying behaviour: for an extension feature you don’t have to identify all of the hooks; you can abort the action in one of various ways, replace the action in its entirety, or extend it, such as with a notification after the action is performed
- Can do “anything” but only implicitly.
- Everything is stuck together, so it is hard to pull things apart
- This raises issues of scalability: database transactions, such as when saving an edit, without workarounds mean that no one else can modify the page while someone is editing, which scales very poorly and leads to cascading failure
- Availability: cascading failure
- Performance: async is difficult. It would be better to separate actions into chunks
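The "separate actions into chunks" idea above can be sketched as follows: run filters synchronously (they affect the outcome), but queue listener notifications to run after the main action completes. This is a hypothetical Python illustration (MediaWiki is PHP); `save_edit`, `run_deferred`, and the `deferred` queue are invented stand-ins, loosely analogous to MediaWiki's deferred updates.

```python
# Hypothetical sketch: splitting an action into a synchronous core and
# deferred follow-up chunks, so listener hooks run outside the critical
# section (e.g. outside the database transaction). Invented names.

deferred = []  # stand-in for a post-action deferred-update queue

def save_edit(text, filters, listeners):
    # Filters run synchronously: they may modify the text before saving.
    for f in filters:
        text = f(text)
    # ... the actual save (inside a DB transaction) would happen here ...
    # Listeners are only queued; they run after the action completes.
    for listener in listeners:
        deferred.append(lambda listener=listener: listener(text))
    return text

def run_deferred():
    # Drain the queue after the main action (and its transaction) is done.
    while deferred:
        deferred.pop(0)()
```

Because listeners cannot change the outcome, deferring them does not alter the action's semantics; it only shortens the critical section.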
- What are some questions we should answer together?
- Q1: What types of modifications can an extension perform right now?
- Q2: Which ones work well?
- Q3: Which ones currently suffer from these problems?
- What are our solutions?
- Small-groups section where folks discussed the questions above
- Extensions can do anything, so the groups came up with many answers
- MediaWiki core covers a lot of things: auth mechanisms, new actions (e.g. exporting),
- Changing what the parser operates on
- Changing the skin
- Preventing actions
- Two question marks about where these go
- Applying data schema changes
- Is the API something that has [?] or has filters; apparently there are APIs that allow modification of what the module actually does
- Q2: Which hooks work well? (photo)
- Things that are atomic and don’t store data (e.g. Parser extensions that only filter or expand a piece of text); no side effects.
- Scribunto's extension interface, such as Wikibase adding additional built-in Lua modules.
- Filter hooks work well, such as those that modify primitives or replace value objects,
- but they can be improved (especially because they see too many internals, which often forces extensions to hardcode assumptions or interact with interfaces that do not remain stable between MediaWiki releases, i.e. compatibility often breaks)
- Q3: Which hooks suffer from problems? (photo)
- Media handlers: a media handler interprets what kind of media type a file is, makes a registration for it, and dictates what happens with the media output. Other sessions have identified that there’s work to be done here; not well defined. There is a registry, but it has too much to do.
- Parser extensions - have storage and async effects
- Lack of clarity on whether a hook can be invoked multiple times during an action. E.g. if a hook is given 1 value, does that mean the user operated on one value, or could it be that the user made 5 changes at once and the hook is called once for each item separately. For filter hooks this should not matter in theory, but for event hooks, a consumer generally needs to know the intent.
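The per-item versus batch ambiguity above can be made concrete with a small sketch. This is hypothetical Python (MediaWiki is PHP); both function names are invented. A consumer that assumes "one call = one user action" misbehaves under the other convention, which is why an event hook should document which one it uses.

```python
# Hypothetical sketch: the same set of changed items can reach an event
# hook in two ways, and consumers need to know which. Invented names.

def fire_per_item(listeners, items):
    # Convention A: the hook is invoked once per item. A listener sees a
    # single value and cannot tell whether it was part of a larger batch.
    for item in items:
        for listener in listeners:
            listener([item])

def fire_batched(listeners, items):
    # Convention B: the hook is invoked once with the whole batch, so a
    # listener can see the full intent of the user action.
    for listener in listeners:
        listener(list(items))
```

For filter hooks the distinction should not matter (each value is filtered independently); for event hooks it changes what the consumer can infer about intent.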
- When they expose implementation detail as opposed to concept detail (such as Parser extensions)
- Passing arbitrary data structures around
- Subclassing WikiPage: there is a hook for a WikiPage factory, but no WikiPage factory actually exists. Too broad; we should kill it
- Skinning is broken
- EditFilter has to fetch ParserOutput from elsewhere
- How do we solve all of these problems?
- Without making things worse ideally
- Still in flux on how we do this. What works well:
- We categorize hooks into two broad areas
- Events - something happens and you, as another extension, want to respond to it (e.g. mirroring edits elsewhere), without changing the semantics of the default action (e.g. a user rename)
- Filters - modifying a value in some way based on other parameters; these should not have side effects
- Services (not currently a hook, but should be an extension point), e.g. the kitchen-sink cases; are these hooks or are these services? Timo says services
- Overall, this would allow these mechanisms to be better bundled
- If you miss a hook, you get errors; using an abstract class would help with that
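The abstract-class idea above can be sketched like this: if handlers must implement an abstract interface, a missing hook method fails loudly at registration time rather than producing errors later at call time. Hypothetical Python (MediaWiki is PHP); the interface and method names are invented.

```python
# Hypothetical sketch: hook handlers as implementations of an abstract
# interface, so a missing method is caught when the handler is created,
# not when the hook fires. Invented names, not MediaWiki code.

from abc import ABC, abstractmethod

class PageHooks(ABC):
    """Abstract interface a page-hook handler must implement in full."""

    @abstractmethod
    def on_page_saved(self, title, text): ...

    @abstractmethod
    def on_page_deleted(self, title): ...

class MyExtension(PageHooks):
    """A complete handler: implements every hook method."""

    def on_page_saved(self, title, text):
        return f"saved {title}"

    def on_page_deleted(self, title):
        return f"deleted {title}"

class Incomplete(PageHooks):
    """Missing on_page_deleted: instantiating this raises TypeError."""

    def on_page_saved(self, title, text):
        return "saved"
```

Registering handlers as objects instead of loose callbacks turns "you forgot a hook" from a runtime surprise into an immediate, explicit failure.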
- What are the caveats to this solution?
- Concerns with the proposal / Suggestions for improving the proposal (photo)
- Replacing services does not work for multi-tenant addition of functionality, such as adding parser functions. → addressed by designing the interface with a registry instead of an overridable service.
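The registry-versus-overridable-service point above can be illustrated briefly: with a single replaceable service, a second extension's override clobbers the first, whereas a registry lets several extensions add parser functions side by side. Hypothetical Python sketch (MediaWiki is PHP); the class and the `uc`/`lc` functions are invented examples.

```python
# Hypothetical sketch: a registry supports multi-tenant additions (e.g.
# parser functions from several extensions), which a single overridable
# service cannot. Invented names, not MediaWiki code.

class ParserFunctionRegistry:
    def __init__(self):
        self.functions = {}

    def register(self, name, fn):
        # Fail loudly on a conflict instead of silently clobbering.
        if name in self.functions:
            raise ValueError(f"parser function {name!r} already registered")
        self.functions[name] = fn

    def call(self, name, *args):
        return self.functions[name](*args)

registry = ParserFunctionRegistry()
registry.register("uc", str.upper)  # registered by one extension
registry.register("lc", str.lower)  # another extension; no conflict
```

With an overridable service, only the last extension to replace the service would "win"; the registry makes coexistence the default and turns genuine conflicts into explicit errors.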
- How long will we maintain the existing hook system? Can we turn it into this with certain caveats? How long can we support the old system? Cindy is in charge of answering this question. Trade-off of requiring one line change and migration, or do we not require one line change with caveats (Timo leaning towards the former)
- We should be more restrictive about parameters - these principles don’t prevent exposing internals. Need principles on what kinds of parameters we allow on a filter. (Timo: Maybe restrict to primitives/scalars and some generic way of detecting value objects, maybe an empty interface; but also a registry to allow extensions to whitelist additional classes that can't implement it for some reason)
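Timo's suggestion above could look roughly like this: a check that admits scalars plus any class explicitly whitelisted in a registry. Hypothetical Python sketch (MediaWiki is PHP, and would likely use a marker interface rather than a type check); all names here are invented.

```python
# Hypothetical sketch: restricting what may cross a filter-hook boundary
# to scalars plus explicitly whitelisted value-object classes. Python
# stands in for PHP's marker-interface idea; names are invented.

ALLOWED_SCALARS = (str, int, float, bool, type(None))
whitelist = set()  # registry for extra value-object classes

def register_value_class(cls):
    """Let an extension whitelist a class that can't use the marker."""
    whitelist.add(cls)

def check_filter_param(value):
    """Accept scalars and whitelisted classes; reject everything else."""
    if isinstance(value, ALLOWED_SCALARS):
        return True
    if type(value) in whitelist:
        return True
    raise TypeError(
        f"{type(value).__name__} may not be passed through a filter hook"
    )
```

The point of the restriction is that internals (services, live objects with state) can never leak through a filter signature by accident; they have to be deliberately whitelisted.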
- Async actions may be "too" async. For example, renaming the user-page eventually, after an account rename, may not be sufficient. May need to distinguish between "in-request" async (post-send deferred update) and "later" async (job queue). To keep in mind: What about in-process caching of state? (Timo: Should be fine, we already have a strong model of treating cache the same as slave DB; must not be used to inform writes; except when immutable/verifiable)
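The two kinds of async distinguished above can be sketched as two queues with different timing guarantees. Hypothetical Python (MediaWiki is PHP; the real mechanisms are post-send DeferredUpdates and the job queue); the function names here are invented.

```python
# Hypothetical sketch: "in-request" async (runs before the current
# request finishes) versus "later" async (runs whenever a job runner
# gets to it). Invented names, loosely modeled on MediaWiki's
# deferred updates and job queue.

post_send_queue = []  # drained at the end of the current request
job_queue = []        # drained later, possibly much later

def defer_in_request(fn):
    """Schedule work that must complete within this request."""
    post_send_queue.append(fn)

def push_job(fn):
    """Schedule work that may happen at some arbitrary later time."""
    job_queue.append(fn)

def end_of_request():
    # In-request deferred work still runs in the same process,
    # after the response has been sent.
    while post_send_queue:
        post_send_queue.pop(0)()

def job_runner():
    # A separate process drains the job queue on its own schedule.
    while job_queue:
        job_queue.pop(0)()
```

The user-page-rename example above is the motivating case: if the rename lands in the "later" queue, the user may observe a stale page for an unbounded time, so the hook needs to say which guarantee it provides.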
- Things that we would solve with this proposal (photo)
- Increase maintainability of core. Current model makes it nearly impossible to make trivial changes without implicitly breaking compatibility. (Everything is public, everything is exposed, everything can be depended on) (Timo: Making it possible to depend on everything is quite powerful and might still be useful, but we should still distinguish between stable interfaces that we encourage and provide long-term compatibility for; and internals you may use at your own risk in a way that a developer would understand and be aware of when they do so)
- Industry-standard pattern (Laravel, WordPress, Symfony, ...); provides confidence and streamlines work for devs, and also limits the start-up cost of on-boarding new folks due to increased familiarity.
- Increased predictability
- We have inconsistency in hooks and whether or not they defer/can be aborted, so we’d like to have this more consistent
- We need some guarantees here. Some of our current hooks are triggered by multiple different classes, sometimes in a job but not always. A given hook should make clear what environmental aspects it guarantees and what may vary (e.g. always run from a job, or specify that it is a lower-level hook that may happen from any context)
- The hook would naturally have a well-defined purpose and scope for what it allows extensions to do; instead of providing an arbitrary injection point for anything to happen and for the entire class' internal state to potentially change.
- Who gets to decide what type of a hook will be (action/event hook, or filter hook, or overridable service)? The implementer, or whoever defines the interface?
- Generally avoid patterns where you call a hook from a deferred update
- How much isolation do we want to guarantee by default? → depends on the use case, depends on the shared cache in the drop-order
Timo will turn this into an RFC. Some initial feedback from Daniel Kinzler noted below:
- The "event" category of hooks (those that don't modify the behaviour but react to it, e.g. publishing information about edits elsewhere).
- Q: Needs (per-hook) guarantees about latency and reliability. How bad is it if an event is dropped? Is the order guaranteed? What if it takes a day to arrive?