Talk:WMF product development process/Archive 3

About this board

See /Archive 1 and /Archive 2 for previous page contents.

Technical Collaboration Guideline

Qgil-WMF (talkcontribs)

In order to avoid confusion and excessive dependencies, the recommendations on collaboration between Product teams and communities are being developed at Technical Collaboration Guideline.

Reply to "Technical Collaboration Guideline"

Status?

GDubuc (WMF) (talkcontribs)

What's the status of this initiative? Is it already being applied?

At any rate, we would like to add Performance to the Plan, Develop and Release stages. Fixing Performance after the fact in the Maintenance phase isn't a sustainable model.

AKlapper (WMF) (talkcontribs)
Reply to "Status?"

Connecting with other initiatives

Rogol Domedonfors (talkcontribs)

There does not seem to have been much activity on this topic in recent months. Could somebody with authority to do so please make sure the page is still up to date? It would be especially helpful to give more detail on the status of this process (mandatory, preferred, advisory, optional, draft, ...), its scope (universal, default, specific, limited, ...), and its relationship to various other proposals such as Design and development principles and Technical Collaboration Guideline/Community decisions.

Malyacko (talkcontribs)
Rogol Domedonfors (talkcontribs)

Thank you, I could see that. My question is, what is it supposed to be a draft of? Is it a draft of a mandatory process, a draft of a preferred process, a draft of an advisory process, a draft of an optional process, a draft of a draft framework, or what? Is it a working draft of a process that will come into force (whatever force it has) as soon as it is finished, or is it a draft of a draft framework to be adapted ad hoc by multiple developers in any given project scenario, or perhaps the intended end product is a draft submission to some higher authority who will further refine it or enact it as mandatory, preferred, etc.? Or is it a draft of something not yet decided, such that when it is finished there will be a further discussion to decide what sort of thing – mandatory, preferred, advisory, optional – enactment, framework, submission – it is?

I can see the topic currently immediately below this one too, thanks. My request was for "more" details, not to be told to read that single sentence again – more details of the relationship between this and other proposals such as Design and development principles and Technical Collaboration Guideline/Community decisions, the first of which you did not address at all.

You say the focus is currently on another piece of work. Will the focus return to this process at some stage for further work, or has it served its purpose, whatever that is, or has it failed and been quietly abandoned? Please tell us, so we know where and how to collaborate with you.

It seems strange that this piece of work was kicked off without anyone able to immediately answer these questions. Presumably everyone was clear about what they were trying to do before they started the work? Anyway, I ask not out of idle curiosity, nor to test your understanding, nor to point the finger at yet another piece of WMF work that seems to be going nowhere, but because if you publish more information about what you are trying to do with this, you are likely to get more and better collaboration from the other stakeholders, both in the drafting and in the operation of this process. And that is what you want, isn't it?

Reply to "Connecting with other initiatives"

What's a "product"

Whatamidoing (WMF) (talkcontribs)

This is a product development process, and that means that we need a good, shared understanding of what a product is.

Below is an alphabetized list of things that might be "products" for the purpose of this process; I believe that most of them are. Are there any items in this list that you think should not be covered by this process? Are there any items in this list that you suspect would receive little or no engagement from typical contributors? Are there any products (on or off this list) that you would recommend a different process for?

Whatamidoing (WMF) (talkcontribs)

I'm not sure that SUL finalisation counts as a "product" for the purpose of this proposal. There's no possibility of a "maintenance phase". It was a one-time event, rather than a product (although some software and scripts were created to automate some of the tasks).

What do you think?

CKoerner (WMF) (talkcontribs)

I know it's not this simple, but maybe products are the things people touch in our interface, while the other things are infrastructure?

  • HHVM - infrastructure
  • Echo - product
  • SUL implementation - product
  • SUL as a thing moving forward - infrastructure
Elitre (WMF) (talkcontribs)

"Any tech change which is bound to impact wikis users' experience"?

Whatamidoing (WMF) (talkcontribs)

"Any tech change that affects a user's experience" means, well, "any tech change".

@CKoerner (WMF), I think that major infrastructure projects are meant to be covered. Wikipedia editors might not care about infrastructure, but it should presumably follow this process anyway (just with a reasonable expectation that the core volunteer communities won't care very much about most infrastructure changes).

Elitre (WMF) (talkcontribs)

I'm pretty sure there may be changes which people would have no way to detect unless we told them. That a certain feature is now written in one programming language rather than another may be the easiest example.

Qgil-WMF (talkcontribs)

I think it is useful to offer a common process for all Product and Technology projects, where each project takes the checklist points that affect it and skips the rest. If a project doesn't have any impact on end users, only on developers, then it could skip the part related to involving end users and editors, but it would still need to go through other checks, like architecture, performance, deployment plan...

Whatamidoing (WMF) (talkcontribs)

A process that allows teams to completely skip some points may be sensible, but it tends to make the process unpredictable. Also, what if I decide that (for example) performance is not relevant for my project, so I skip that step, but someone who works in that area later decides that I was wrong? How can I be certain that it's safe/appropriate to skip a potential stakeholder without actually contacting that stakeholder?

Qgil-WMF (talkcontribs)

Every stakeholder should define the conditions of their requirements in their checklists. For instance, Community Liaisons' requirements (the ones we put forward on behalf of the communities) should be reflected at WMF product development process/Communities. If a condition is ambiguous (e.g. one could argue that "Relevant stakeholders have been invited" leaves too much room for interpretation) then we can edit the checklists and make them more precise.

Also, reviews are expected along the process. If a project didn't have a performance check but later in the process a stakeholder has a problem with this (or the test results show performance issues), they will need to get that performance check done, or at least have a discussion about whether they should do that check or not.

Whatamidoing (WMF) (talkcontribs)

If we create a flexible process, then we create a process in which you can take reasonable steps and I can claim later that you acted badly.

If we create a "precise checklist", then we create a process that overburdens most products in the hope of being barely adequate for the biggest.

I am not certain that having a single process is necessarily appropriate. What's needed for, say, PageTriage, is really quite different in both quality and quantity from what's needed for, say, MobileApp, which in turn is different from ULS. At an abstract level, of the sort that you'd find in the first chapter of a beginner's textbook on software development, there are certainly some commonalities, but the application is quite different. PageTriage needed to make a dozen highly communicative English Wikipedians happy. MobileApp needs to make thousands of non-editors happy. ULS needs to avoid making the wikis unreadable for readers and editors alike.

Reply to "What's a "product""

Calendar request

Bluerasberry (talkcontribs)

I use English Wikipedia. Is there any calendar logging when features are added to or removed from the "beta features" user menu?

I would like to have access to a record of when features are added, when they are removed, and the circumstances under which they are removed. I was wondering how many beta features become standard offerings, and how many are ended without being integrated.

Thanks.

Jdforrester (WMF) (talkcontribs)

No, sorry, I don't think such a calendar exists.

Tgr (WMF) (talkcontribs)

git log -L is your friend: P2684
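
For example, here is a minimal sketch of how that might be used, run from a checkout of the operations/mediawiki-config repository (the file path, search string, and line range are only illustrative assumptions, not what P2684 contains):

  # list commits that added or removed a given string in the file's history
  git log --oneline -S 'BetaFeatures' -- wmf-config/InitialiseSettings.php

  # follow the history of a specific line range within one file
  git log -L 120,140:wmf-config/InitialiseSettings.php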

Bluerasberry (talkcontribs)

Thanks - this is at the edge of my ability to understand but nice to know that this is what is available. I appreciate it.

Reply to "Calendar request"

Waterfall or agile/scrum

Ad Huikeshoven (talkcontribs)

As a documentation starter:

  - The phab:T124022 ticket has been announced by Quim Gil on wikimedia-l

  - That post links to WMF product development process

The WMF product development process very much looks like a waterfall process or method. Is that intentional or inevitable? Isn't that at odds with agile / scrum software development?

Elitre (WMF) (talkcontribs)
Ad Huikeshoven (talkcontribs)
Whatamidoing (WMF) (talkcontribs)

I wonder whether agile is too – I'm not sure how to explain this. With a large project, in agile, you're really doing everything at once. When you are releasing part 1, you're also building part 2, designing part 3, planning part 4 – and also re-building part 1, re-designing part 1, re-planning how to integrate the changes from part 1 into your current plans for parts 2, 3, and 4, etc.

And it's kind of hard to explain this "do everything at once" as a "process" to someone who is thinking in terms of, say, the process used by a factory line or the process used in a court of law.

Would it make more sense to present it as several interconnected, simultaneous focus areas, rather than anything in a line? (For example, six blobs with lines between them rather than six lists in a row.)

Qgil-WMF (talkcontribs)

At least when it comes to the conversations with our communities, Understand and Plan come before Release and Maintain, and not vice versa. There is a sequence. The purpose of the checkpoints for every stage is to check whether a project is fit enough for the next stage.

Another example. Once a project is in Develop, the team can go back to Understand, Plan, etc. as often as they wish, but they still cannot jump into Release without going through the related checkpoint. Again, there is a sequence for new products and new big features requiring community checks.

Whatamidoing (WMF) (talkcontribs)

We probably agree, but your brief explanation here sounds rather "waterfall-y". Perhaps VisualEditor is an example: "Release" and "Maintain" citations came before "Understand" and "Plan" table editing. Therefore, "Release and Maintain" (something) actually did come before "Understand" and "Plan" (something else). From the communities' perspectives, all of these things happened at the same time.

Qgil-WMF (talkcontribs)

We probably agree indeed. From the point of view of the communities, they want to know about features like "VE citations" or "VE table editing", and some people might be interested in A but not in B. Each feature follows more or less that sequence (at least ideally).

I think this topic will be less of a problem when we move from one draft in a wiki page to some real action.

ARipstra (WMF) (talkcontribs)

I think it would be helpful for this discussion to have clear definitions of what is waterfall and what is agile from a person on TPG.

Also, we have drawn this process as a line, when really it is more of a circle. There are places in the process that do not neatly check off boxes and then move to the next stage without going back and forth for a while. For example, when building citations in VE, several concepts were built, tested, and iterated (several circles, back and forths between parts of the process) until people were able to use it: it was usability tested until people could create a citation without difficulty, considering all the details and edge cases, and there were no bugs (a combination of QA and usability testing – which are different). Then it was released to production.

It might help to explain this process in a more visually representative way, showing that it really is a circle (we may some day decide to add functionality to, or remove functionality from, the product called citations) that sometimes has circles (iterative testing until ready for release) within each part of the process. Drawn as a straight line with no back and forth between parts of the process (iteration), it communicates a less iterative process that does not reflect the reality of product development, inclusive of the activities necessary to create usable, bug-free, quality software. I think it is important to call out the back and forth between stages with arrows pointing backwards in the, at this point, unidirectional flow. Perhaps a visual designer could help us with creating a more representative visual to describe this concept.

Reply to "Waterfall or agile/scrum"

End of life

Whatamidoing (WMF) (talkcontribs)

This process should include end-of-life requirements: If you stop maintaining an extension, you should post a plan for a graceful removal/information about when support stopped (e.g., "last tested on this version of MediaWiki").

Qgil-WMF (talkcontribs)

Good point. I guess this would belong to the Maintain stage.

Whatamidoing (WMF) (talkcontribs)

It should probably be its own stage. Everything, even something as durable as Microsoft Windows, is likely to have an End of Life stage some day.

WMoran (WMF) (talkcontribs)

It is a good point. A Retire stage could be a nice addition. What types of tasks and stakeholders would be involved?

Whatamidoing (WMF) (talkcontribs)

For tasks, I suppose you'd need to figure out any dependencies (both directions: Did you add something else to support this product, that could be removed with it? Is anything else relying upon your code?) and make suitable announcements and updates to documentation.

For stakeholders, I would guess this short list:

Product Manager – to make the decision

Community Liaisons – to find out whether anyone at the WMF wikis is using it

Operations – to deal with practical stuff about removal

Qgil-WMF (talkcontribs)

Just FYI, I have added this discussion as a blocker of Phab:T125806 (Consolidate the WMF product development process overview).

Reply to "End of life"

Beta features

Quiddity (WMF) (talkcontribs)
(Copying a post by @Wittylama that was misplaced due to a pagemove problem)

Could you perhaps discuss in this document the role of the "Beta features" system?

This was created as a place for people to opt in to new tools, but it has been used somewhat randomly. Sometimes new features bypass the beta system altogether, going straight into production (a recent example being the new notification icons), and sometimes things sit in the beta system indefinitely. Personally I'd really like it if there were consistency in how the beta system was used, and also some objective, public, and measurable goals applied to each feature as the criteria for 'graduation' to being an opt-out/default feature. Currently the beta system tells you how many people have opted in to a given tool, but not the retention rate or how many people that is as a % of active users, etc. No one likes surprises, and you've got a perfect platform to test and receive feedback, but it's just not seemingly part of the standard procedures. Wittylama (talk) 18:11, 5 November 2015 (UTC)

Qgil-WMF (talkcontribs)

@Wittylama, I agree that the use of Beta features should be systematic and documented. In my opinion, the use of Beta features should be part of the Release plan (as a requirement to enter the Release stage) and the intended results should be integrated in the quality criteria for release (as a requirement to move from individual opt-in to regular deployment).

See also the related discussion about Scope of "release" stage.

Qgil-WMF (talkcontribs)

@Jdforrester (WMF), @Quiddity (WMF), could you help clarify whether there is a documented process defining when and how Beta Features should be used? If not, is there a task in Phabricator to fix this?

Jdforrester (WMF) (talkcontribs)

See Beta Features/Package. The guidance is that Beta Features should only be used for bigger changes about which you're not sure; trivial things like changing icons are not in scope. The system was built back when desktop vs. mobile was still a thing, and there's no good plan for resolving that. In general, I would rather we killed Beta Features than added yet more bureaucracy to it.

Wittylama (talkcontribs)

On the one hand what you say sounds quite reasonable, but then again the WMF did just recently roll out a change that was considered a cosmetic adjustment to buttons (splitting the notification icon into two) and that had to be rolled back for an important fix. It's a quite recent example of where a 2-week 'beta' period would have been perfectly sensible:

*the change wasn't urgent

*the audience for the change was active editors

*the success criteria for default rollout are easy to define (no slowdown in load time, no broken extensions, WCAG accessibility compliance, and whatever it was that the change was designed to improve in the first place).

I completely agree that minor changes should not have unnecessary bureaucracy, but that should speak more to the way that a 'beta testing' period should be handled as a smooth and natural transition. If the change isn't urgent, what's the harm in having people who opt-in testing it for two weeks? If there's no problem, no problem. If there's an issue raised then it can be quickly fixed without the embarrassment of having to undo a rollout.

P.S. Jdforrester (WMF), can you explain what you mean by "back when desktop vs. mobile was still a thing"? Wittylama (talk) 09:30, 4 December 2015 (UTC)

Jdforrester (WMF) (talkcontribs)

By "back when desktop vs. mobile was still a thing", I was making a slightly snippy off-hand reference to the out-dated idea that we wanted to build two different websites, one for desktop and one for mobile, instead of working towards a single site for users of any and all device sizes. What we would have considered acceptable then isn't something we'd be OK with now. Sorry for the confusion.

Wittylama (talkcontribs)

It remains my understanding that the mobile development and the desktop development are undertaken separately, and that the features, user-experience and workflows are intentionally different. See, for example, this recent comment by @Jdlrobson on Phabricator https://phabricator.wikimedia.org/T118338#1840066. I'm not saying that either approach is wrong, but I am getting very mixed messages about how the mobile development and the desktop development are undertaken.

Jdforrester (WMF) (talkcontribs)

Sorry, I don't run the Reading department, I was just relaying what I'd understood from them.

Wittylama (talkcontribs)

Stop pretending, @Jdforrester (WMF) - we all know that you secretly run everything ;-)

TheDJ (talkcontribs)

I actually highly doubt that such a problem would have made itself obvious in beta mode. If it wasn't obvious in mediawiki.org and test.wikipedia.org, then it wouldn't have been obvious in beta either.

It's actually a good example of what James meant I think: Beta will not protect you from bugs or implementation errors. Regression testing and performance metrics will.

Beta, at most, allows you to gauge interest and gather feedback on something you are 'considering'. But unfortunately, it's not instrumented well enough to actually do so in practice.

And there is another problem: in its current form, Beta is not much more than 'gadgets on steroids', which also means it comes with implementation limitations that require you to reimplement whatever you tested. If we ever want to use it more, its entire architectural underpinnings will need to be reworked to change that....

Wittylama (talkcontribs)

Thanks for the reply user:TheDJ, even if your response is rather depressing. If that's the case, it basically tells me that the Beta features system is pretty much a wasted project in the first place.

P.S. There's a related thread on Phabricator where I've just mentioned this discussion here: https://phabricator.wikimedia.org/T76573

TheDJ (talkcontribs)

Part of this has to do with the fact that we are 'hyper optimized'. Unfortunately, that also makes it very difficult to 'vary' inside the software stack, since the stack was never designed to allow for much variance. MediaWiki, and Wikipedia specifically, was always designed on the premise of "Everyone gets the same, where possible", and it's very difficult to change all those thousands of assumptions throughout the system into "Everyone should be able to get something totally different" without tumbling the entire website.

I think we will have to change that going into the future, but it will require massive changes, not only in our software, but definitely also in our operations and hosting. What ops is doing with new deployment tools might help with that, but it's a long road. It's what is needed to make 'Beta' actually work and to make that a 'Core' feature.

TheDJ (talkcontribs)

And I definitely wouldn't say it is useless/wasted. Every system has limitations; you just need to be aware of them and understand how they impact your usage of the system. But currently there are so many limitations that the system is basically not worth the extra effort required to make use of it.

It can be improved; the linked ticket describes a few things that could be worthwhile improvements, to at least get better/more understandable/interpretable feedback out of a deployed beta feature.

But using it to 'catch bugs' when only about 5% of a couple thousand active editors will ever use the system is unrealistic. At some point you need the test group of 120,000 active English Wikipedians to really reach those 50 editors who, more than once a week, use the thing you changed (on Chrome 35). This is part of our challenge: we have areas of the software that have a minute number of users, yet those few people can be greatly impacted by even the smallest change.

Pginer-WMF (talkcontribs)

I think that being able to expose features gradually to our users has many benefits.

Some features may need more steps than others. For example, a feature can be available as opt-in for those users that look for it, then it can be announced in a relevant context inviting more users to try it, later it can become the default with an option to opt out, and finally become just the default experience. Other features may need fewer steps, or none at all for the most trivial cases (although the devil is in the details, and most uncontroversial changes may have more ramifications than initially expected).

Being able to try a new feature in a real context (not an example on a testing server that may or may not fit with your workflows), and always having a way out to go back to normal in case things go wrong, seems valuable and helps to set expectations about the feature.

Communication is a big part of this: making users aware of how the feature is evolving, and letting them indicate whether they consider the feature to be working for them and describe which issues they experience.

I think many of the aspects mentioned above connect with several of the challenges teams face when launching products, and result in much time spent afterwards. Beta Features does not completely support all of them, but I think that what we need is to improve the platform. Problems such as few people joining, or the need to make the process more agile, require less effort to solve than the problem of not having a way to get our users on board gradually, early and often.

Jdforrester (WMF) (talkcontribs)

Yup, this is right. Beta Features isn't for "beta testing" of whether the code works overall, or whether it has performance issues, as with the notification badge split (that's the job of the developers to get right before it is ever seen by a user). It's more like a tool for User acceptance testing of bigger changes.

Deskana (WMF) (talkcontribs)

Beta Features is essentially intended for performing user acceptance testing, i.e. it is intended to answer the question "Does this feature meet the user's needs?". It's not intended to be used as a way of finding regressions like those in the recent notifications rollout, although of course any time anyone is using software, bugs and regressions can be found.

test.wikipedia.org and en.wikipedia.beta.wmflabs.org are intended to find regressions and test integration, i.e. "Does this feature simply not work, or break other features?". In theory, the regression with notifications should've been caught here. However, it is known that these wikis are not really enough like production to catch these regressions. That's something that @Greg (WMF) et al. have been interested in improving for a while, but the problem is very complex and would take significant work.

Qgil-WMF (talkcontribs)

Let's try to summarize this long discussion. All that this product development process would define is checkpoints about public testing in the Develop and Release stages. These checkpoints could define some expectations:

  • Software quality: prototype, alpha, beta, stable opt-in...
  • Target groups for each announcement: tech ambassadors, Beta Feature users...
  • Minimum duration of the testing
  • Feedback channels used

This process should not require the use of Beta Features or any other specific technology to get early feedback from users.

Is this a good summary?

Wittylama (talkcontribs)

I'd say that one thing missing from that bullet-point list is that it needs specific and measurable criteria for acceptance. Depending on the type of thing, that might be something needing community consultation to define, or it might be a statistical measurement (like 'no slowdown in load time'). My primary criticism of the Beta Features system (other than confusion about when it is used or not used) is that there's no way to tell whether something is "successful" and should be promoted, or "unsuccessful" and needs to be demoted. At present, things just live there in limbo-land. I refer again to my favourite ever rollout 'go/no go' criterion in Wikimedia history – the "Usability Initiative" (for building the Vector skin). Its criterion, published in advance, was "80% opt-in user retention". This was a measurable, achievable, and objective criterion. It also baked in the concept of valuing the feedback from the early adopters who chose to drop out, which resulted in those people trying it again and eventually becoming advocates for it within the community (ping @Trevor Parscal (WMF), who was part of that team).

Trevor Parscal (WMF) (talkcontribs)

@Wittylama I also think that was a good approach. One thing to note, however, is that we actively surfaced the feature's availability by adding links to the personal tools and running banners. This dramatically increases awareness of features, but it also requires that we do this one at a time and once in a while, or there will be advertisement fatigue.

Reply to "Beta features"

An example of the need for quality assurance: Defective WMF harassment survey

Guy Macon (talkcontribs)

(Copied from Jimbo's talk page. My comments are below).

The mystery of the obviously incorrect "revenge porn" results of the WMF Harassment Survey has been solved on Wikipediocracy by Belgian poster Drijfzand... Basically, this survey of 3,845 Wikipedians across a range of WMF projects (45% of whom were from En-WP) generated 2,495 responses to a question asking whether they personally experienced harassment. Of these, 38% (about 948 people) said yes. (pg. 15). However, on page 17, in what is purported to be a breakdown of the forms of harassment experienced by these editors, an astounding 61% (about 578 people) are said to have claimed to be victims of "revenge porn." This, to anyone who ponders the number for more than 6 seconds, appears patently absurd — bearing in mind that the survey respondents were about 88% male and that the great majority of Wikipedians maintain some degree of anonymity.

Drijfzand observed that the number of responses for doxxing, revenge porn, hacking, impersonation, and threats of violence all fell within a range of 5% — which she or he said "simply can't happen." I theorized that the problem was a software glitch and Drijfzand identified the problem as a set of defective sliders in the survey form which refused to accept a value of 0, a bug identified by Burninthruthesky on November 3 and which was apparently remedied on November 4. LINK. Unfortunately, the survey was not launched on En-WP until Day 5 (to allow more responses from smaller Wikis so as to reduce the weight of the large projects, see pg. 2), meaning that bad data was generated on some projects for nearly a week. Whereas the survey should have been aborted and restarted, it apparently was not, and so the data presented on page 17 (and any conclusions derived therefrom) is a case of Garbage-In-Garbage-Out.

Once again: a failure to adequately beta-test software is evident. There is one saving grace, and that is we have a very good snapshot of the magnitude of the gender gap based on survey respondents (a ratio 88:12 for those who indicated a gender, with some 7 % of survey participants declining to respond). Assuming a heavier-than-average percentage of women than men in the "decline to respond group," this means we are probably in the ballpark of 86:14 or 85:15. There is also, for the first time ever as far as I am aware, a decent survey of age of Wikipedians. Your takeaway numbers: 35% of respondents (and presumably Wikipedians in general) are age 45 or over; only 24% are under the age of 25. All the fresh faces, many on travel grants, at Wikimania are deceiving — it appears that the median age of Wikipedians is right around 31 years old, give or take. So the expenditure on the harassment survey wasn't a total loss even if it failed at its intended mission (at least in part) due to bad software (leaving aside the very real question of sketchy survey design).

--Originally posted by User:Carrite at User talk:Jimbo Wales on the English Wikipedia at 19:50, 3 February 2016 (UTC), reposted here by Guy Macon

-------------------------------------------------------------------------------

I would like to discuss the comment "Once again: a failure to adequately beta-test software is evident" in the above.

The WMF has many roles, and one of those roles is "software developer". One of our ongoing problems is the low quality of the software we develop. I would note that this is almost certainly not the fault of the individual developers or the managers one or two levels above them, but rather an institutional problem that flows down from top management. I would also note that top management almost certainly do not realize that they are the root cause of our low quality software.

The Wikipedia community has some extremely skilled project managers and software developers, but we have no way of helping the WMF to address this problem. I have personally tried every avenue that anyone suggested to get a technical proposal considered (details available on request, but right now I am addressing the larger problem), but have been stonewalled. There should have been a way for me to get an answer, even if the answer was "no".

I would very much like to be able to report in a few months time that this has been solved and that the lines of communication are opening. Let's talk about how to make that happen. --~~~~

Qgil-WMF (talkcontribs)

@Guy Macon, testing software is a crucial activity, but I wonder which specific recommendations from this message we can extract for the product development process being discussed here. The survey mentioned was handled with Qualtrics, which is a product that we have not developed.

Guy Macon (talkcontribs)

Are you implying that using an external product relieves the WMF of the responsibility of doing some basic functionality testing of the software that handles a survey before releasing said software upon the Wikipedia community?

I would also note that the actual question I asked ("The Wikipedia community has some extremely skilled project managers and software developers, but we have no way of helping the WMF to address this problem. .. I would very much like to be able to report in a few months time that this has been solved and that the lines of communication are opening. Let's talk about how to make that happen.") has, as is our tradition, gone completely unanswered.

Whatamidoing (WMF) (talkcontribs)

I think that everyone agrees that QA testing is good. However:

  1. The software that was being used was not written by or supported by the WMF,
  2. the software itself was functioning correctly,
  3. the survey isn't a "product" and isn't being "developed", and
  4. the survey was not produced by or associated with any Product team.

Consequently, it's unclear to me why you think the Product department should address this, or how this error should affect the development of software products by the WMF Product department (=the subject of this page).

Or, to put it in terms that may be more familiar to Wikipedia editors: you have posted your comments at the {{wrong venue}}. (If you need help finding the right venue, then leave a note on my talk page.)

Reply to "An example of the need for quality assurance: Defective WMF harassment survey"

Subpages for each stage?

Qgil-WMF (talkcontribs)

Currently only specialists can jump comfortably from the necessary simplicity of the WMF product development process to the very detailed and specialized documentation at WMF product development process/Proposal. What about creating subpages for every stage? These pages would provide enough information for every stakeholder involved, and especially for our communities and volunteer developers, who don't need to dig deep into WMF's agile methodologies to contribute efficiently to the process.

Qgil-WMF (talkcontribs)
Qgil-WMF (talkcontribs)

Following on the feedback from @Salix alba and others, it would be useful to have an additional page focusing on the participation of our communities, explaining the points where they are expected to participate and how, linking to the appropriate pages or sections of the product development process with more details.

For the average editor, the expression "product development process" is confusing enough, let alone the whole description of the process. We can present a community-centric view of this process highlighting when and how they are expected to get involved, promoting Wikimedia community terminology and trying to minimize software development terminology.

WMoran (WMF) (talkcontribs)

I think it is a great idea. What about a simple FAQ per stage? How do I submit a feature? How do I provide feedback? How can I participate? Would that be better?

Qgil-WMF (talkcontribs)

OK, one page for every stage.

A FAQ in the stage pages focusing on community members would give the impression that this documentation is only for them. The docs are the same for everybody and need to speak to everybody. I still think that a subpage "WMF product development process/For Communities" might be a better approach.

Rdicerb (WMF) (talkcontribs)

I have a page that is being retooled, and there was a Design Research workshop this morning that took staff through a spreadsheet that describes each team's actions during the stages of development. That could be another way of doing it.

I do think that a goal at the moment is to update the table, but my table/markup skills are lacking. I did upload the chart that was discussed to Commons the other day – not sure if that's helpful (I feel like that clarification may be needed prior to this).

Qgil-WMF (talkcontribs)

@Rdicerb (WMF), I think you are right. Let's keep the overview simple, focusing only on the key aspects of the process. Then different audiences may have their own related pages with all the details affecting them directly. Modular Milestone-driven Development is a good example of a page geared toward Engineering teams more than our communities. We should have a page targeted at our communities. Should this page be created, or is there an existing page that could be reused as a starting point?

Qgil-WMF (talkcontribs)

We discussed this problem in a meeting with Wes, Rachel, Keegan, Abbey and Jonathan, and we agreed on the following:

  • Each stage will have a subpage (coming soon) with a general intro and sections specific to each audience (i.e. Communities, Performance, Security...)
  • Each audience will have a subpage as well, so readers caring mainly about Communities, Performance, Security... have a clear landing page.
  • In order to avoid duplicated work and mismatches, we will transclude the sections from the stage subpages into audience subpages, using labelled section transclusion (see the sketch after this list).
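
For illustration only, here is a minimal sketch of how that transclusion could look; the stage subpage name and the section label are assumptions, not the final page structure. On a stage subpage, the audience-specific section would be marked like this:

  <!-- e.g. on a hypothetical page "WMF product development process/Develop" -->
  <section begin="communities" />
  == Communities ==
  What communities are asked to review or confirm during this stage...
  <section end="communities" />

and the audience subpage (e.g. WMF product development process/Communities) would pull that section in with the Labeled Section Transclusion parser function:

  {{#lst:WMF product development process/Develop|communities}}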

Let's see how it goes.

Reply to "Subpages for each stage?"