A kitten for you!
Thanks for enabling WikiLove!
Jorm (WMF) 19:56, 8 July 2011 (UTC)
Can you explain how you know for certain that Extension:FCKeditor has been abandoned? I think you should add your rationale to the discussion page for that extension's article. --Lance E Sloan 14:05, 28 July 2011 (UTC)
Per the upstream bug tracker [http://dev.ckeditor.com/ticket/5602 http://dev.ckeditor.com/ticket/5602] and [http://dev.ckeditor.com/ticket/6273 http://dev.ckeditor.com/ticket/6273] "MediaWiki and FCKeditor are no longer supported. Closing the ticked."
16:07, 28 July 2011 (UTC)
- I see. But the newer CKEditor is supported, right? We should add a template that directs users to the extension for that editor instead. I'll look for such a template. --Lance E Sloan 17:06, 28 July 2011 (UTC)
- I don't know. Which is that extension/where does it live? :/ Reedy 18:11, 28 July 2011 (UTC)
- I think I'm wrong about that. FCKeditor has been renamed CKEditor, but on this wiki, the page for CKEditor redirects to FCKeditor. The article about WYSIWYG is no help, either. So... I guess there's nothing else that can be done for now. Hopefully your marking the extension as abandoned will attract some attention from developers that have some time to work on an alternative. --Lance E Sloan 18:22, 28 July 2011 (UTC)
Friendly reminder: 1/13 @ 19:00 UTC - MediaWiki Workshop: Preparing extensions for MW 1.19 in #wikimedia-dev
This workshop will be an opportunity to share information about changes in MediaWiki 1.19 that may require revisions to extensions or skins. Also an opportunity for developers to ask questions regarding extension development.
Look forward to seeing you in IRC. :) --Varnent 06:20, 13 January 2012 (UTC)
A cuppa tea for you!
Thank you for making WikiLove fully translatable. [[kgh]] (talk) 22:51, 21 February 2012 (UTC)
Username change request
Hi Reedy. I wanted to ask you if you can glance at this request for bugzilla (since you work there as well, don't you?). I'm asking because the last request we made (for our project) was fulfilled only after months. So if you could do us a favour, we would be glad. Regards, --Frigotoni ...i'm here; 15:07, 25 April 2012 (UTC)
Yes there was: bugzilla:33123, w:en:Wikipedia:Village pump (proposals)/Archive 83#Enable_.22Show_changes_since_last_visit.22_on_watchlist. --Krenair (talk • contribs) 17:05, 10 May 2012 (UTC)
- Well, now the community is angry, with some former supporters now opposing. Notice that we asked for a preference option, and we weren't given one.--Jasper Deng (talk) 18:56, 10 May 2012 (UTC)
- That was never voiced on bugzilla:33123 Reedy (talk) 19:24, 10 May 2012 (UTC)
- False. Only a few users mentioned opt-in/preference. --Krenair (talk • contribs) 19:31, 10 May 2012 (UTC)
- A few out of a few... in short, we didn't have enough !votes. I'll admit that no-one actually opposed because of a lack of a preference but me, but it caused some issues on enwiki when it actually happened. In any case, though, I actually like the stars far better than boldening, though optimally there's a full opt-out. Thanks.--Jasper Deng (talk) 21:14, 10 May 2012 (UTC)
Hi Reedy, hope you're well. When you have time, and if it's not too much trouble, do you think you could handle a few more upload requests for me? (bugzilla:36815, bugzilla:36816, bugzilla:36817, bugzilla:36818, bugzilla:36830) Thanks in advance, FASTILY (TALK) 07:12, 14 May 2012 (UTC)
According to the Review queue, the next step for the RandomRootPage extension is that you would move it to the Deployment queue. Tim Starling has approved the extension (see bugzilla:16655) and the extension is in Git at git:extensions/RandomRootPage.git. And, apparently, Tim Starling wants to deploy it.--Snaevar (talk) 11:00, 14 May 2012 (UTC)
Yet another Upload Request
Common commit errors
It seems like the Common commit errors page you created could really be merged with Manual:Pre-commit checklist. A lot of things on that list could have automated checking. -- ☠MarkAHershberger☢(talk)☣ 20:49, 2 June 2012 (UTC)
New project on Labs
Hi Reedy, thanks for setting me up with developer access. I'm working on a new MediaWiki extension, and from reading over https://labsconsole.wikimedia.org/wiki/Help:Getting_Started and the other help content I get the impression that I would need to have a new 'specific' project created. I didn't see any instructions on how to do that, so I figured I'd ask you how to get set up with a new project.
My basic goal is to get this new media handling extension working on a staging environment that's close to Wikimedia's. I'm not sure how relevant this is, but the extension involves file uploading, which includes some processing upon completing the upload to transform an effectively plain-text file into a static image. I've got the extension working on Ubuntu 11.10 Oneiric with MediaWiki 1.18 and (on another machine) 1.19. I've tentatively named the extension "PDBHandler". Thanks, Emw (talk) 03:48, 27 June 2012 (UTC)
Adminship for Uncle G
(Alternately, you could make me a bureaucrat. ;-)
Another Upload Request
There is a set of diffs here: Wikia_code, which are out-of-date diffs between the Wikia code base and MediaWiki core. I'm a developer at Wikia and I'd be interested in keeping those up to date if you could tell me how they were created. Do you have a script to do it? Owyn (talk) 10:02, 4 February 2013 (UTC)
- I did actually write a wrapper script to do this. It's probably not needed; using diff -R would have a similar result, though it would be one combined file... which still then needs uploading. Those were 1.16. You seem to be running 1.19 now. Comparing that against master (considering we're now 1.21) will likely include a lot of changes that were just changes in core. Do you want this running again for 1.19 vs Wikia 1.19? Or your 1.19 against master? Reedy (talk) 14:33, 4 February 2013 (UTC)
- Argh, talk pages! Sorry, I didn't notice you had replied. I am interested in 1.19 vs Wikia 1.19, and we have merged in the latest security updates already. I was mostly interested in the process of keeping those pages up to date. Eventually I would like to try and reconcile most of these changes (determine if they are still necessary and either merge them into mediawiki or find another way of implementing them at Wikia). I can get the diffs myself and do all the work locally, but as there seemed to be an attempt to put a workflow in here by marking pages as DONE I was wondering if that was useful at all and whether it would be valuable to keep doing it. Owyn (talk) 23:44, 25 March 2013 (UTC)
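Regenerating such pages periodically needs nothing beyond plain GNU diff; a minimal sketch, with hypothetical directory and output names standing in for the real checkouts:

```shell
# Set up two tiny example trees (stand-ins for the real 1.19 checkouts).
mkdir -p mediawiki-1.19 wikia-1.19
echo "core version" > mediawiki-1.19/File.php
echo "wikia version" > wikia-1.19/File.php

# Compare the trees recursively: -r recurses into subdirectories,
# -u produces unified diffs, -N shows files present on only one side
# as wholly added/removed instead of skipping them.
# diff exits with status 1 when differences are found, so mask that.
diff -ruN mediawiki-1.19/ wikia-1.19/ > wikia-1.19.diff || true

# Sanity check: the Wikia-side change should appear as an added line.
grep -c '^+wikia version' wikia-1.19.diff
```

Splitting the combined file per-directory (as the wiki pages do) would then just be a matter of post-processing the `diff --git`/`---`/`+++` headers.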
it seems that with wmf20, an existing JS variable, namely "wgVectorEnabledModules", has evaporated. IMO, this should appear in "breaking changes", as the disappearance of a previously available variable breaks some scripts (this is how i found out it disappeared). i'm not asking to resurrect this variable - it's probably too late for that anyway, and people will remove it from their scripts rather than wait a couple of weeks with broken scripts for it to come back to life in wmf21 - but i think it's reasonable to ask that this appear in the changelog.
- It was purposely removed as part of cleaning up Extension:Vector. It's a wiki, feel free to edit it. Reedy (talk) 23:51, 10 October 2013 (UTC)
- i wasn't bitching about the removal - i just commented that i think it should be added to the changelog as "breaking change".
- is this the only JS variable that was removed? ("cleaning up" hints that maybe some other variables were removed, no?)
- i don't mind adding it to the changelog myself, but it will be better, i think, if someone who actually knows what it's about will do it (and at the same time, will give a full list of all variables that were removed). peace - קיפודנחש (talk) 18:22, 11 October 2013 (UTC)
- changelog is prolly the best place i can think of. i'll drop a message about it on enwiki wp:vpt - if you search on enwiki in user+mediawiki namespaces for "wgVectorEnabledModules", you'll notice several dozen hits. since this used to be some kind of object, some scripts tried to test one of its members, e.g. like so:
mw.config.get('skin') == 'vector' && mw.config.get('wgVectorEnabledModules').collapsiblenav
which throws an exception for a null value. it's worth telling the members about it, methinks. peace - קיפודנחש (talk) 18:22, 11 October 2013 (UTC)
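For scripts hit by this kind of removal, guarding the lookup avoids the null-value exception. A minimal sketch; the `mw` stub below is only a stand-in so the snippet runs outside a wiki page (on-wiki, `mw` is the real global and `mw.config.get()` returns null for unset keys):

```javascript
// Minimal stand-in for MediaWiki's global `mw` object so this snippet
// runs outside a wiki page; on-wiki, `mw` already exists and
// mw.config.get() returns null for keys that are not defined.
var mw = mw || {
    config: {
        get: function ( key ) {
            return null; // simulate the removed wgVectorEnabledModules
        }
    }
};

// Defensive access: test the returned object before dereferencing a
// member, so the script degrades gracefully instead of throwing.
var vectorModules = mw.config.get( 'wgVectorEnabledModules' );
var collapsibleNavEnabled = !!( vectorModules && vectorModules.collapsiblenav );
console.log( collapsibleNavEnabled ); // false once the variable is gone
```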
Deployment of bugzilla:53132
An important message about renaming users
I am cross-posting this message to many places to make sure everyone who is a Wikimedia Foundation project bureaucrat receives a copy. If you are a bureaucrat on more than one wiki, you will receive this message on each wiki where you are a bureaucrat.
As you may have seen, work to perform the Wikimedia cluster-wide single-user login finalisation (SUL finalisation) is taking place. This may potentially affect your work as a local bureaucrat, so please read this message carefully.
Why is this happening? As currently stated at the global rename policy, a global account is a name linked to a single user across all Wikimedia wikis, with local accounts unified into a global collection. Previously, the only way to rename a unified user was to individually rename every local account. This was an extremely difficult and time-consuming task, both for stewards and for the users who had to initiate discussions with local bureaucrats (who perform local renames to date) on every wiki with available bureaucrats. The process took a very long time, since it's difficult to coordinate crosswiki renames among the projects and bureaucrats involved in individual projects.
The SUL finalisation will be taking place in stages, and one of the first stages will be to turn off Special:RenameUser locally. This needs to be done as soon as possible, on advice and input from Stewards and engineers for the project, so that no more accounts that are unified globally are broken by a local rename to usurp the global account name. Once this is done, the process of global name unification can begin. The date that has been chosen to turn off local renaming and shift over to entirely global renaming is 15 September 2014, three weeks from now. In place of local renames, a new tool, hosted on Meta, will be deployed that allows for global renames on all wikis where the name is not registered.
Your help is greatly needed during this process and going forward if, as a bureaucrat, renaming users is something that you do or have an interest in participating in. The Wikimedia Stewards have set up, and are in charge of, a new community usergroup on Meta, called Global renamers, in order to share knowledge and work together on renaming accounts globally. Stewards are in the process of creating documentation to help global renamers learn about global accounts, the tools, and Meta in general, as well as the application format. As transparency is a valuable thing in our movement, the Stewards would like to have at least a brief public application period. If you are an experienced renamer as a local bureaucrat, the process of becoming a part of this group could take as little as 24 hours to complete. You, as a bureaucrat, should be able to apply for the global renamer right on Meta via the requests for global permissions page on 1 September, a week from now.
In the meantime please update your local page where users request renames to reflect this move to global renaming, and if there is a rename request and the user has edited more than one wiki with the name, please send them to the request page for a global rename.
Stewards greatly appreciate the trust local communities have in you and want to make this transition as easy as possible so that the two groups can start working together to ensure everyone has a unique login identity across Wikimedia projects. Completing this project will allow for long-desired universal tools like a global watchlist, global notifications and many, many more features to make work easier.
If you have any questions, comments or concerns about the SUL finalisation, read over the Help:Unified login page on Meta and leave a note on the talk page there, or on the talk page for global renamers. You can also contact me on my talk page on meta if you would like. I'm working as a bridge between Wikimedia Foundation Engineering and Product Development, Wikimedia Stewards, and you to assure that SUL finalisation goes as smoothly as possible; this is a community-driven process and I encourage you to work with the Stewards for our communities.
Random acts of CR barnstar
Heiya Reedy, thank you for solving the "TNT issue". This diff shows another use case for your bot I just discovered. I think it would be cool if you could do a replacement here too. Cheers --[[kgh]] (talk) 00:38, 8 March 2015 (UTC) PS Seems to be a bigger story. Another popular template in need of help. --[[kgh]] (talk) 10:17, 24 March 2015 (UTC)
A barnstar for you!
The Tireless Contributor Barnstar
"Thanks for the great night", ok, that sounds misleading :P So, simply "thanks" for all the work! :) Florianschmidtwelzow (talk) 11:58, 3 January 2016 (UTC)
For the work you did, you earn more than just one beer, but I don't want to spam your talk page :P Thanks for the work you invested to review changes for all these extensions! :) Florianschmidtwelzow (talk) 17:55, 4 January 2016 (UTC)
Just wanna say...
Hello, just wanna say that all members of this group are inactive :)
Hi. I noticed that you are listed as an OAuth administrator here, but that group was migrated to meta wiki, and no longer has any user rights associated with it here. Any objections to removal? Thanks, --DannyS712 (talk) 05:47, 19 December 2019 (UTC)
Hi. I was looking at https://phabricator.wikimedia.org/source/mediawiki/browse/master/maintenance/deleteBatch.php and noticed that, apparently, this script isn't batching any DB transactions (i.e. ordering it to delete everything at once?). I see there's an interval parameter and a wfWaitForSlaves() call, but should it instead use getBatchSize() and set a reasonable limit there? Best regards, —MarcoAurelio (talk) 14:44, 1 February 2020 (UTC)
- Batching only really works when we're doing the SQL queries directly in a maintenance script or similar. I'm guessing this is because of the complexity of deleting a page, its revisions, and anything else in attached database tables, so it makes more sense for the script to be a wrapper around WikiPage. The batching code is only of any use if you're doing a loop where you can fill something (an array) with X (or fewer) items and then process it as such. Reedy (talk) 16:16, 1 February 2020 (UTC)
I've abandoned the patch chain, so it should not cause overload anymore.
Can I ask you to re-enable Jenkins so I can continue work? Thank you!
- Reedy Hello again! It is awkward to ask people for "recheck" comments. May I ask you what the reason for the holdup is? I'm starting to get the feeling this is related to my opinion that Gerrit is outdated, which might have offended you.
- I'm sorry for that. The lack of pull requests and the resulting inability to handle long chains of commits - demonstrated by yesterday's CI delay - should satisfyingly back up my opinion. I hope there are no more bad feelings between us and I can continue work without further interruption, for which please re-enable Jenkins.
- If you have something else on your heart, please make it known. Thank you.
- Yours sincerely, —Aron Man.🍂 edits🌾 08:31, 12 March 2020 (UTC)
Most of this is nothing to do with Gerrit (and nothing to do with the previous discussion - specifically about this incident); it doesn't do the CI scheduling etc. It's nothing to do with being "outdated"; it's a different workflow to GitHub, and that doesn't make it outdated. It's always had this workflow, and GitHub has had its workflow for longer. It was a design decision as far as I'm concerned.
Gerrit has safeguards in place to prevent people sending so many commits in one go (last time I had problems with this, it was set at 10. Yes it's trivially work-aroundable, but it's still a deliberate action to do so). But you still pushed two lots of 20 commits in two short periods.
While I understand that having to get other people to "recheck" your patches is painful, so is not having CI run on other patches because one person is using more than their fair share of resources (I concur that maybe we shouldn't let this happen to that extent, and there should be some fair-use limit). While I'm going to AGF and presume you didn't do it intentionally, you did cause a DoS that lasted over 12 hours. Never mind that your patches yesterday weren't being tested in a timely fashion anyway, through your own doing; getting someone else to "recheck" them is almost certainly quicker. I don't think it unfair to be asked to sit out for a little while as a punishment. Removing people from CI whitelisting as a "trusted user" isn't something we do regularly, so hopefully that hints at the perceived seriousness of the issues caused.
So because of your actions, people then had to remedy it (killing CI checks, restarting Zuul, which then lost the rest of the queue), never mind your cluttering of people's review queues with 40 patches; again, a form of DoS.
Noting you should be able to run tests locally too. You shouldn't have to rely on Jenkins (though, I know personally that locally can suck too).
I'll re-whitelist you now. Please don't let it happen again.
- Thank you for the detailed explanation, I appreciate it.
I don't think it unfair to be asked to sit out for a little while as a punishment.
- You're right. I was thinking "What goes around comes around. That's fair." But it came at the wrong time: I was rushing the patches to get this demo done... one day earlier, preferably :-) Usually, I'm not in that much of a rush.
but you did cause a DoS that was basically lasting for over 12 hours.
- Thank you for giving the exact numbers. I did not realize until maybe an hour before JForrester aborted the jobs. I had no info about the delay of each check; I just saw that I received a report email every now and then, maybe every 10 minutes. Usually, tests take 5-20 min, and when - I assume - the main office is busy it can take more, half an hour maybe; I've never measured. With 30+ changes I did not follow which test finished. When the silence grew longer, I started to suspect something was wrong; until then I did not suspect 30+ commits would overload the infrastructure. At that point, I wanted to disable or abort the jobs, but I don't know of such a possibility. I did not even actually need the tests; I just wanted the patches in Gerrit, with tests only for the 3 checkpoints.
Gerrit has safeguards in place to prevent people sending so many commits in one go
- Yes, 10. The only issue I expected was cluttering the review logs. I have WIP set as the default for new patches, and I hoped that would keep them out of the queues. I reckon that's not the case with the gerritbot on Phabricator, so I did not write the related Bug: line into the commit messages. Overloading CI was a big surprise to me. I'm sorry for that; I obviously won't submit a long chain again... and I'll keep an eye on the status of queued checks.
- This use would only be possible with pull requests, where checking happens only on the last patch. Patch chains are almost pull-requests, but I assume Gerrit does not keep track of the batches (boundaries between patch lists), just the links. I'm not familiar with the history of Gerrit, but I think the concept of PRs could be fitted over chains, so for me, this is a painfully missing feature.
I concur that maybe we should not let this happen to that extent, and there should be some fair use
- I like working on tooling, maybe I'll find some time later to see what could be done.
- I suspect jobs that did not finish before a new patch set came in for the same change are not removed from the queue: as I recall, things got quiet after I had a fix and a resulting rebase close to the bottom of the chain and submitted the whole chain again. Removing invalidated jobs would be a step to safeguard against this.
- Next could be to ignore chains above 5 or 10 patches and only test the head or every 10th change. If the developer needs an intermediate patch checked, they should type "recheck".
- Maybe prioritizing testing the last patch in a chain, though I'm not sure that would give a significant benefit.
- A setting to disable automatic Jenkins testing would be helpful too. Now it's all or nothing, depending on the whitelist. "recheck" would be enough in the disabled state.
- I'll revisit this. I think these measures would prevent this issue.
- —Aron Man.🍂 edits🌾 01:48, 13 March 2020 (UTC)
[WMF Board of Trustees - Call for feedback: Community Board seats] Meetings with MediaWiki and Wikitech communities
The Wikimedia Foundation Board of Trustees is organizing a call for feedback about community selection processes between February 1 and March 14. While the Wikimedia Foundation and the movement have grown about five times in the past ten years, the Board’s structure and processes have remained basically the same. As the Board is designed today, we have a problem of capacity, performance, and lack of representation of the movement’s diversity. Our current processes to select individual volunteer and affiliate seats have some limitations. Direct elections tend to favor candidates from the leading language communities, regardless of how relevant their skills and experience might be in serving as a Board member, or contributing to the ability of the Board to perform its specific responsibilities. It is also a fact that the current processes have favored volunteers from North America and Western Europe. In the upcoming months, we need to renew three community seats and appoint three more community members in the new seats. This call for feedback is to see what processes we can all collaboratively design to promote and choose candidates who represent our movement and are prepared with the experience, skills, and insight to perform as trustees.
In this regard, two rounds of feedback meetings are being hosted to collect feedback from the technical communities in Wikimedia. Both rounds have the same agenda, to accommodate people from various time zones across the globe. We will be discussing ideas proposed by the Board and the community to address the above-mentioned problems. Please sign up for whichever round is most convenient for you. You are welcome to participate in both as well!
- Round 1 - Feb 25, 4:00 pm UTC
- Round 2 - Mar 4, 4:00 am UTC
- Sign-up and meeting details: Wikimedia Foundation Board of Trustees/Call for feedback: Community Board seats/Conversations/MediaWiki and Wikitech
Also, please share this with other volunteers who might be interested in this. Let me know if you have any questions. KCVelaga (WMF), 14:38, 21 February 2021 (UTC)
Hi Reedy, May I request a review on a cherry-picked patch for core REL1_35? https://gerrit.wikimedia.org/r/c/mediawiki/core/+/668813/ Alistair3149 (talk) 02:30, 11 March 2021 (UTC)
Release schedule of minor releases
Hi Reedy, I am a bit fuzzy about the release schedule of minor releases. In the past couple of years it was some time during the last two weeks of a quarter, which I thought was great. The last one was postponed by a week to the second week of the quarter, with the comment that it was just a week's delay from the regular schedule. This was already a divergence from past years' experience to me. So to cut it short: what is actually the regular schedule you seek to meet? Note that I do not want to pin you or others down on something here. This is just for better predictability. --[[kgh]] (talk) 08:22, 14 April 2021 (UTC)
- We definitely don't commit to any sort of release schedule for the minor releases. It is a best effort, currently quarterly, mostly dependent on my workload to get things shepherded and out the door. I was pretty busy in March, and as per the pre-release email, I didn't think putting the release out on April 1st (April Fools' Day) was the best idea. It meant that a fix actually landed for task T235554, which I know has been annoying users for a while on 1.35, and meant it didn't have to wait for 1.35.3. It also meant the Parsoid security fix could land (task T279451 - though, granted, it was reported and fixed after the usual schedule), which makes it easier for WMF production (as carrying a vendor patch would be difficult and basically require rebasing every week), and also meant potentially not doing a follow-up security release a week later for that issue. It would be my intention to do the next one in late June, but it's hard to know ~2 months out what my plate will look like. Reedy (talk) 08:54, 14 April 2021 (UTC)
- Thanks for your detailed reply, which in the end confirms what I was thinking on my end. My question was triggered by the last release, but it was more a general one. We all have workload to handle and items to consider. No problem at all. And indeed, nobody wants to commit this task to a definitive, inflexible schedule; at least I do not. No worries. I hope this was clear from my initial post here. I think going a quarterly path here is cool, and if for some good reason it touches the next quarter, that is very well ok, too. Basically, this is what I wanted to know. At this point I would also like to thank you for your effort here, which I personally appreciate a lot. Cheers --[[kgh]] (talk) 09:12, 14 April 2021 (UTC)
Wikitech policy change
- Happy to help! If you're likely needing to be making more changes to protected pages, we can trivially get you admin access on Wikitech if you want it. Reedy (talk) 23:27, 14 April 2021 (UTC)
- I would rather not paint a big "hack me!" sign on any of my accounts, thanks.
- This should just be the one page. I'm hoping to have it on Monday, 26 April or shortly thereafter. And maybe I should see about posting a message somewhere, to let people know? Maybe e-mail to the cloud-announce mailing list? Whatamidoing (WMF) (talk) 15:42, 16 April 2021 (UTC)