The Code of Conduct was changed yesterday (you can no longer threaten someone with legal action). Leaving aside the wisdom of this particular change, what are the rules for changing the Code of Conduct? Who is allowed to change it, and when? I don't see this explained anywhere.
Talk:Code of Conduct
About this board
Protocol for changing the Code of Conduct?
It's written on Code of Conduct/Amendments and linked from the page.
Although saying that the community has achieved consensus seems like a bit of a stretch this time.
@Jdforrester - ah, I missed the link before; thanks.
@Tgr - yes, it doesn't seem like the process was followed for this change.
Hello and thank you for raising this issue. Opinions in the discussion seemed to go back and forth, mostly revolving around how such an amendment could be misconstrued to include any legal commentary such as informing others about licensing. At least to some, our change was more of a clarification than a true amendment. The CoC already included the phrasing "deliberate intimidation" which a legal threat could fall under. For instance, a comment such as "I'm going to sue you if you don't merge my patch" is clearly inappropriate. However, a comment like "there could be legal problems if you don't license this properly" is not really a personal threat and wouldn't violate the CoC. In the same vein a comment could be seen as "offensive, derogatory, or discriminatory" but only when taken out of context, or say if there was an innocent language barrier. I think much of the CoC can't be taken literally and reports are always handled on a case-by-case basis. As for this amendment, we felt the wording "threats of legal action" was a balance between being comprehensive and not too explicit.
The proposed amendment was advertised on wikitech and the discussion was open for over a month. We recognize the consensus here is debatable. If you feel this amendment needs to be revisited, say so and we are happy to engage in a new discussion. We did not mean to bypass community input or ignore consensus, and in reality we do not believe the scope of the CoC has notably changed as a result of the amendment.
Thanks for clarifying. I just went through the talk page discussion and counted - I see three people supporting this change (Doc James, Huji and Pbsouthwood - sorry for singling anyone out), and seven people either disagreeing with the change or at least wanting it amended in some way. So maybe consensus went the other way?
I'm also now very curious about which parts of the CoC you think shouldn't be taken literally, but maybe that's a topic for another discussion...
The CoCC has concluded that this proposal is accepted, while making sure the wording is explicit enough not to be confused with other things, like informing others about license laws:
> "Personal attacks, violence, threats of violence, threats of legal action, or deliberate intimidation."
Wondering if I can suggest the adding of legal threats to the list of unacceptable behavior. We have policies prohibiting it across many languages and projects. Doc James (talk) 01:31, 23 December 2019 (UTC)
I think it is obviously covered, and I agree that making it explicit is a good idea.
Indeed, you find such a rule in many Wikimedia wikis. I sometimes wonder about the relationship to the "normal" legal world. Usually, off-wiki, you are allowed to mention that you are considering taking legal advice, but you are not allowed to do that in a threatening manner (this is a simplification of the German situation, and I am sure that there are notable differences from country to country).
In general, I am not against such a rule in Wikimedia policies, but I am interested about possible complications.
@Ziko I am sure that when the discussion happens in public, the LGBT+ community will have complications to raise which other communities are not expecting.
I am interested to see that.
Can we add a proviso that arguing for following a software license is not a legal threat? Unless the person is, like, super threatening about it against an individual.
It's comments like "I am going to sue you" or "you should be sued for what you did" that are harmful to conversations and in my opinion should not be permitted.
Saying "we do not allow fair use content" or "we do not permit NC licensed images" is obviously perfectly appropriate. As is saying "here is the license that this is under and you are required to follow it, here are the steps required to be compliant".
I do not see that mentioning that you will be taking legal advice is necessarily threatening. It is quite reasonable to take legal advice if you are in doubt of the legality of a situation, particularly as we have a lot of cross-cultural communication. Taking legal advice is a thing one does for oneself, and does not imply action against another party will ensue.
Pointing out that something may or does contravene a law in a specific jurisdiction is also not a threat, though some evidence should be provided to support the claim.
My main point is that I want that to be clear in whatever documentation we adopt, to avoid any chilling effect on ensuring license compliance.
Enforcing license compliance is necessary, but can and should be done politely, until if necessary we politely indefinitely block for persistent non-compliance.
Agree with Pbsouthwood. We block a lot of people for repeated copyright violations. We never threaten to sue them, or at least we shouldn't.
I don't see why threats of legal action need to be covered. Rules that make sense on a website don't necessarily make sense in other circumstances. Would this apply to, for example, an event organizer warning attendees not to go beyond a certain point or they might get arrested for trespassing?
If the legal threat is done in order to harass or intimidate, those are already covered by the code of conduct.
An event organizer warning attendees not to go beyond a certain point or they might get arrested for trespassing is not a legal threat, so no.
A person who is not welcome at an event threatening to sue the organisers for refusing them entry would be a legal threat in that context.
It really says something about our legal system that people can threaten to misuse it, and people will fear that because the legal system doesn't contain any safeguards or deterrents against such misuse; so that we have to create a rule here to regulate that behavior.
There are some safeguards, but they don't always work the way intended.
My biggest problem with adding "legal threats" is that while there is literally a legal definition for what "legal" means, the definition of what individuals might consider a "threat" is very different. This creates a huge gap and potential for misunderstanding that can easily lead to people being banned for asking questions in an unfortunate way. For example, I'm easily offended if somebody just tells me there is a law that says the opposite of what I believe should be common sense. And heck, there is an insane amount of "legal" stuff out there that is far from being common sense, and even further away from being morally right. LGBT rights are probably the most obvious example. So tell me, what does it mean to ban legal topics that are a threat to certain people? Will it be forbidden to say, I don't know, "you might go to jail for this in your country", even if this sentence speaks the absolute truth?
I don't get it.
Some comments above say such an example doesn't count. But how do we know? What is the definition of "threat" in this context?
Some comments above even create the impression that actions (e.g. blocking) are fine, but announcing actions is not. Is it just me, or does this sound fucked up?
I think it's pretty clear in context that "legal threat" here means an ultimatum that one party will engage in legal proceedings (i.e. sue someone) if they don't get their way.
Edit: Re-reading, I guess it's less clear. I suppose the context is that it's mostly people from the English Wikipedia proposing that the CoC embed a well-known cultural norm from that project.
You may well be right. I think we are having a failure to communicate because of different interpretations of the meaning of a "legal threat".
I don't think anything is "clear" here. Which "legal proceedings" are covered by this, and which are not? What is the definition of "threat" in this context?
I am taking the meaning of "threat" as it is commonly defined in dictionaries, or in the English Wikipedia article on the topic: "a communicated intent to inflict harm or loss on another person".
The legal proceedings referenced are generally civil actions. In criminal actions it is the state which intends to inflict harm or loss on the person. Does your country's law work differently?
You prefer we should allow threats of legal action against members of the communities? This is a common form of harassment and bullying made against people who are judged by the perpetrator as likely to back down in the face of what could be a very expensive inconvenience, in an arena where money and influence often win out against fairness and reason.
The example "Will it be forbidden to say, I don't know, 'you might go to jail for this in your country', even if this sentence speaks the absolute truth?" does not illustrate a legal threat. No-one is threatening someone with legal action. "No legal threats" refers to threats of civil action by the threatener against the threatened person. Criminal prosecution is done by the public prosecutor or local equivalent.
"You prefer we should allow threats of legal action against members of the communities"? Are you fucking serious? Dear CoC committee, I do have something to report.
That is what I understood you to be saying. It seemed unlikely, so I posed it as a question in the hope of clarification. Your response does not clarify your intended meaning at all.
You are implying the opposite of what was said. That's not acceptable. Just because I don't want us to enforce one extreme doesn't mean I want us to enforce the opposite.
The profanity is unnecessary here.
I think the CoC already covers this, making the proposed amendment unnecessary. The CoC lists as unacceptable behaviour making "[p]ersonal attacks, violence, threats of violence, or deliberate intimidation", and also "[h]arming the discussion or community with methods such as sustained disruption, interruption, or blocking of community collaboration (i.e. trolling)." Most legal threats will already fall under those two criteria, and I doubt adding "legal threats" to the wording of the CoC will be a positive, as what can be perceived as a threat is very subjective. I oppose the proposed amendment.
I agree with this comment.
Would you consider that informing an organiser that you intend to sue them for having you evicted from a meeting because you are blacklisted is covered by the current CoC? If not, is it behaviour we wish to accept?
Would you consider that informing another participant that you intend to sue them for defamation is acceptable? Is it covered by the current CoC?
One might argue that these are deliberate intimidation, disruption or trolling. If so, is this sufficiently clear?
If someone is already blacklisted, what else are we going to do to them? Double-blacklist them?
For the defamation case, it would depend on the context and specifics, as all things should.
This seems like a terrible idea to me. A user who is vulnerable to a threat of legal action may well be someone who has actually broken a law. And many laws are, after all, necessary for the functioning of society. Strongly reminding someone about what is against the law should not ''per se'' be a violation of the CoC. Most users here already enjoy the protection of anonymity, which they regularly exploit, sometimes in completely unfair ways. The CoC already seems to have a great deal of opaqueness due to the privacy protections granted to accusers/victims and the presumptive acceptance of the accuser's/victim's claims to have been harmed. Extending the realm of this kind of protection seems likely to further damage the loyalty of many to the movement.
It is not clear to me exactly what you think is a terrible idea. Could you clarify?
See top of the thread: "adding of legal threats to the list of unacceptable behavior". Or is that already off-topic? ~~~~
Thanks, this format is not always clear about what someone is replying to.
Let me put my concern this way: our CoC already creates a kind of legal system. It defines conditions that might lead to specified actions, and the actors within this system. However, it does not specify exactly how this will be executed in a specific situation. It's not code. You cannot run it and expect it to always produce the same result. It's something trusted human beings are asked to perform. And even if we might argue about certain decisions, we as a community are expected to accept the committee's decisions. And that's neither good nor bad; that's just something that needs to be done.
Here is the problem: what do we gain when we add terminology and definitions from another legal system? I think the two work better independently of each other.
Threatening somebody should already be covered by the CoC, as far as I understand it. Isn't this enough?
Some do not see it as being covered... So even though it should be enough, it does not appear that it is.
When was the last time that legal threats were an issue in (online) tech spaces? I understand it's a common problem on Wikipedia, but we aren't Wikipedia, and if it is not an issue that affects us, let's avoid rule creep.
I excluded offline stuff because I can't really speak to those, having only attended but never organized. Personally I think it was a mistake to make one policy for those two very different contexts, but I digress.
Suggested amendment: Public logging of bans made by CoC
Given the discussion that happened in the past couple of days, here's my proposal to amend the cases part of the CoC:
Any ban made by the Code of Conduct Committee will be logged publicly in Code of Conduct/Cases/Log/2018 with user name, space of the ban (Phabricator, mediawiki.org, etc.), duration, and, if not private, the reasoning, unless either of these two conditions applies:
- The reporter asks that the ban not be public.
- Or the Code of Conduct Committee decides not to disclose anything, as it might reveal private information. For example, logging harassment cases at Wikimedia events can lead to the real-world identity of users being disclosed.
It goes without saying that this is not applicable retrospectively.
Just to clarify, this would apply to bans specifically, not cases themselves?
I'm against this. Even with the option suggested on wikitech-l of only making it public while the ban is in effect, if the log is kept on a wiki, the (history of the) ban list can (and will) be used for naming and shaming, long after punishment has been dealt and served.
If any transparency is required, I would counter-propose that an aggregate report be published periodically (quarterly, bi-annually, or yearly, depending on the number of expected cases; a max of 10 or so per period would be nice), which reports on the number of actions per platform and the duration of the actions, where applicable.
Also, the ban might be worn as a badge of honor, which would encourage further abuse. :(
Is having a list of bans performed by CoC any different than a wiki having Special:BlockList?
No, but perhaps that shouldn't exist either. Most websites don't list all of the users who have been banned, and I don't think that is necessarily a bad thing.
Most websites are not collaborative projects supported by a huge community.
That's true, but as an example: GitHub, Stack Overflow, or any and all open source projects (including projects much, much larger than ours). I can't think of a website/project (other than Wikimedia's) that publicly lists bans. I think the public log is an assumed virtue, when the existence of the list may, in fact, encourage more harassment. I have no evidence to suggest this; my point is only that the existence of such a list may have unintended consequences.
The non-existence of such a list is already having consequences. And bans on the wiki have been public since forever, as far as I know, without any evidence that this encouraged anyone towards more harassment.
As a somewhat trivial example, being blocked by the President of the United States is seen by some as a badge of honor, and therefore a prize to be sought. https://www.washingtonpost.com/news/the-intersect/wp/2017/06/07/how-getting-blocked-by-trump-on-twitter-became-a-badge-of-honor/
At least on enwiki, I don't think many people use Special:BlockList unless they're looking up a block ID to investigate an autoblock. It chiefly consists of drive-by vandals, so it doesn't really serve as a wall of shame or badges of honour, as you'll quickly be buried in there and forgotten. The block log itself obviously is necessary to see prior abuse. At any rate, admins can redact the log entries, but on the English Wikipedia this practice is prohibited.
Yes. Neither warnings nor any mention of the reporter will be logged. Hope that helps.
I'd propose one additional clause that requires:
- In the first month of each year, last year's log is amended one final time to disclose the number of non-public bans by space of ban. This number cannot be opted out of by the reporter, the committee, or anyone else. And this statistic would not contain any start date, duration, user, or reasoning. Just the numbers by space of ban. (Bans affecting multiple spaces could be counted multiple times, or we could decide that "Multiple spaces" be one of the counted spaces.)
- In the first month of each year, last year's log should be linked to from a public announcement. I have no preference for which platform this post would be on, but it should be decided ahead of time and included in the policy. Perhaps Wikitech-l, or another public wikimedia.org list, or a mediawiki.org newsletter, or a Phabricator Phame post, etc.
Lastly, I would recommend that this clause does apply retroactively, treating all prior cases as non-public. Thus disclosing this year's complete counts by January 2019.
EDIT: I wrote this comment at the same time as @Siebrand wrote his. I did not see his until after I submitted mine. I would support doing only aggregated reports.
Anonymous statistics are in theory already part of the CoC (see the last paragraph of https://www.mediawiki.org/wiki/Code_of_Conduct/Cases#Responses_and_resolutions ).
I would prefer that this apply not just to bans but to all sanctions by the committee (e.g. asking someone to apologize doesn't need to be logged, but doing something to someone against their will does; deleting a Phab comment, for example, does require logging).
I even wonder if maybe the enforcement role of the committee should be separated from its judgment role. Right now I feel like the CoC committee is judge, jury and executioner, and lacks any effective oversight due to secrecy. I consider this state of affairs dangerous.
I would also like to suggest that for any sanction, the person being sanctioned must be notified of the action, length, rationale and how to appeal. I've heard complaints that in some cases when comments were deleted the commenter was never notified, and only found out by accident (which is supposed to deter them how?). In the recent MZMcBride case, the notification was rather lacking (assuming the email posted by MZMcBride is accurate): it failed to disclose the length and how to appeal, and the rationale, while not totally missing, was lacking IMO (that's more debatable though).
I agree with your thoughts.
I agree with the publishing of the bans, but I am not sure Wiki (and specifically Mediawiki) is the best place because, as correctly noted, the wiki infrastructure ensures the information is preserved forever. While in theory everything public on the internet is forever, in practice there might be benefits to not carrying old grudges around.
The problem to solve here is not statistics, though. Those are nice, but a different case. What is really needed is that when an action is in force, I (as a participant of the collaborative platform) can see that it is, and adjust workflows accordingly. It is a collaborative environment, so the ban would affect collaboration. An aggregated report is useless for this case - if I worked with X and their account suddenly shows up as "disabled", it won't help me any that in 2 months' time I'd see "number of bans: 7" in an aggregated report. What I need to know is: a) was X banned? and b) for how long? Most community members, I suspect, would also want c) by whom and d) for what.
I am not sure how it can be possible that the ban won't be public - an account being disabled on Phabricator is pretty public, and we'd be kidding ourselves if we ignored the fact that this is what people would assume (especially now). If we do it in a non-transparent way, people will just assume more and create a narrative in their heads which might not necessarily even be true, but that is inevitable. I do not think the fact of a ban on a public collaborative platform can be hidden. Or should be. We might not disclose any details about it beyond the duration (I think this is a must), though in the general case not involving sensitive matters I would advise opting for more disclosure (again, secrecy breeds mistrust, and mistrust makes people construct narratives that we'd want to avoid). But I do not see how hiding bans is practical, or desirable.
I would discourage using wikitech-l to publish ban reports. This mailing list is read by newbies and other developers who just want to stay informed about technical matters. It is not very welcoming to see such drama and it might incite more lengthy and toxic discussion. I think an on-wiki page is fine, or somewhere that's not as "in your face". I should not be forced into seeing this side of the technical community.
Edit -- I meant to post under my volunteer account, for the record!
I think using wikitech-l is a consequence of the lack of a pre-designated forum for such discussions. When there's a need for discussion and the community does not have a Schelling point for it, the next best thing is taken to be it. To avoid that, we need to arrange such a space in advance and specify it when a CoC decision is communicated, and maybe also on the CoC pages where we establish the rules.
I join Bawolff on the lack of separation of powers. I would add that it would be good to have well-defined processes and publicly documented rules stating in which cases what kind of sanction can or cannot be applied. That doesn't mean absolute transparency on every detail of the execution of these processes, which is certainly not a required or desirable level of transparency for permitting trust to grow, and can itself be a source of legitimate concerns for people's safety, as was pointed out. Although I am not sure that this is the most appropriate place to talk about this aspect of the issue.
As there is a collaboration aspect, as has been indicated, I think Meta would be a more appropriate place to publish a report, if any.
Regarding feedback on the status of collaborators when they are banned, I don't have a perfect idea so far. My first thought would be something like integrating a feature into Phabricator itself, showing on the user page that "this account has been (temporarily) disabled (for 3 eons); for more information consult this page/contact that referent".
I support the original amendment. I don't care much about the venue for publishing the bans, but it is tremendously important that they are visible at least for the duration of the ban. Not everyone who needs to be aware is in the CoC committee (phabricator.wikimedia.org and wiki admins, event organizers etc)
This seems like a solution in need of a problem. Would you mind clearly stating what the problem is so we can determine if this is the appropriate solution?
The problem is very simple. The way it works now, there's no indication to anyone (even Phabricator admins!) of what happened when an account is disabled. Not who, not why, not for what duration, nothing. It is unacceptable that on a collaborative platform people just disappear without any notice and any possible explanation (and since the account is disabled, they can't communicate what happened either). This is not how open collaboration should be done, and not how a collaborative environment should be administered. I understand the privacy concerns, but my firm belief is that it is completely possible to produce a transparent and accountable system without revealing any private details. Just hiding everything and not providing any information at all is not a good solution.
Yes. CoC sanctions must all be publicly logged for accountability, transparency and awareness. This doesn't mean that we must require full disclosure in the most sensitive cases, but at least the username, type of sanction and duration must be made publicly available. With regards to the case that caused the opening of this thread, we already have two Phabricator administrators who didn't know where the block came from, one of them overturning the ban without knowing that it was a CoC sanction. This is not optimal, as it can lead to involuntary reversals and misunderstandings. It is to the benefit of both the CoC and the users to know all of this. As some have pointed out above, wiki blocks are kept logged for the said reasons in a per-user log, and active blocks and bans are listed on a special page. Why the secrecy here? Of course reports should be managed with confidentiality, but actions by the CoC based on said reports should not fully be. I must note that you can find at List of globally banned users a list of users subject to community and WMF-Legal global bans. Those lists aren't aimed at being walls of shame, but help users know why something has happened. I insist, the log doesn't have to have the full details, but at least the username, space, duration (and reasoning, if that's appropriate) should be published. Thank you.
I don't believe there is value in transparency for the sake of transparency. Especially when that transparency comes at the cost of someone's privacy. Also, it may have unintended consequences that have not been explored.
Perhaps an acceptable compromise would be to permission the ban/block log to the administrators of the domain? (i.e. Phabricator/Gerrit Administrators). This way, it isn't a public log (that can be used for shame/honor), but it would resolve the problem of administrators not being aware of bans/blocks that they should be aware of.
How about bans for real-life events? How can other event organizers be aware of a ban if no public log exists?
I'd like to insist that I am not asking for full disclosure of the case details. That'd be crazy. The log can be as brief as "User X banned from (platform/WMF technical spaces) for (a week/indefinitely) by decision of the Code of Conduct Committee [optional: because ...]." This does not IMHO disclose any personal information other than what the user might have disclosed themselves voluntarily (i.e. if you use a username which is also your real name [Bad Idea (tm)]), helps people understand why an account has suddenly become deactivated, inactive or is not replying, and improves the transparency and accountability of the CoC. Of course cases and investigations should continue to happen in camera, and there might be cases where revealing even that a certain user is subject to a CoC sanction might be a bad idea. If that situation ever comes, I'd certainly support not making it fully public, but at least the administrators of the platform where the user got banned should be informed (e.g.: "Please note that the COCCom has banned X from Y; please do not overturn") so as to avoid involuntary reversals or misunderstandings. Best regards.
I understand what you are asking for, but I do not understand why a ban/block log needs to be public, when it seems that the problem is that administrators are the ones who need this log. I understand that it would not include personally identifiable information. My argument is that the block itself ought to remain private. Or are we saying that people who are blocked/banned no longer have the right to keep that information private?
Is there a problem with making this log permissioned to admins?
> Or are we saying that people who are blocked/banned no longer have that right to keep that information private?
How can you keep it private if your account is disabled and this fact is publicly visible? If an account becomes disabled and then enabled again a week later, it doesn't take Sherlock Holmes to see what happened.
Perhaps instead of having a log, admins should be trained to contact CoCC before unbanning/unblocking a user?
On every disabled account, even one having nothing to do with CoCC, just on the chance it might be another of the ineffable ways of CoCC? That sounds completely backwards - instead of CoCC plainly announcing the action, we have to ask it on every action that happened - is this your action? What about this? What about this? And this? And this? I don't think CoCC members would want this, and IMHO this is not the right way to manage a community. CoCC should not be a black box oracle.
If that's the case, then what's the current problem we are trying to solve?
As I described above, the current setup only shows that the account is disabled. It does not show the fact of the ban, the duration, the contact person, the discussion venue, etc. Everything that is needed for proper community management is hidden. The only fact that you consider to be private is not. One could guess what happened post factum, or after a long discussion on the mailing list and sometimes a comedy of errors with different admins undoing each other's actions - but that should not be the modus operandi of a healthy community. We shouldn't waste a lot of time and effort of multiple people to figure out what happened. Transparency and user-friendliness should be part of the system's functionality, not a possible emergent result of hard work by multiple people. That's the problem we are trying to solve.
Those are valid reasons, and I understand the rationale.
As I asked above: is there a problem with making this log permissioned to admins? That should resolve everyone's concerns, I think?
It'd be good if you could specify a reason in the "disable account" function. The log of account enables and disables is already only visible to admins. The problem is that I am not sure if upstream could be convinced that this is a priority to introduce in their software, and I am not sure if we can make a local "hack" in our install to do so (or even if that's advisable). Manual logging is the only solution for now. Where to log seems to be the problematic part here.
As for other venues, Gerrit for example requires running a CLI command to disable an account (and of course there's no public log either). IRC does have their +b lists. Mailing lists have the moderation function. Wikitech and MediaWiki do have Special:Log/block for blocks and as for other technical spaces I don't know.
There's no single "administration" for the whole lot of WMF technical space managers. Either all of them are informed (thus making the log visible to all of those who can issue restrictions on WMF technical spaces) or we still run the risk of your left hand not knowing what your right hand did.
It would solve half of the problem. It would still not be transparent to 99.99% of the users of the system (who still wouldn't know what happened to the person they were collaborating with), but it would allow at least admins to perform their functions efficiently. As a partial solution, it would be better than nothing, but not entirely satisfactory.
Some things to consider:
- CoC bans could involve at least Phabricator, GitHub, Gerrit, mediawiki.org, Mailman, and IRC. All of those have different admin groups who need to be aware (if nothing else, then to avoid being tricked by the blocked person into removing the block). Some of those places have public ban lists (IRC, mediawiki.org unless suppressed), some don't have a list but you can see the block in the account status (Phabricator, mediawiki.org when suppressed), and some don't have any public block info (GitHub, Mailman, maybe Gerrit?).
- Suppose the banned user does not want the ban to be public: they promise to mend their ways and do not want a stain on their record (public logs can be removed, but people's memories of them can't). What level of dissemination would be appropriate then? Admins of the affected spaces? Admins of any space? A temporary public record? A permanent public record?
- If there are public records, is it important that people find it easily? If "collaborators should be aware of what happened" is a real problem to solve then putting a block log on some wiki page no one ever heard of is not really a solution.
It makes me wonder how publication of this kind of log is affected by the w:GDPR. Who would be the relevant people/team to contact for advice on that matter?
I think the idea that logs on sensitive topics (such as any kind of abuse at Wikimedia events) should be private, for both victim and abuser, is laudable. You can probably meet both goals in different ways: a public task with a log, giving a somewhat concrete but unnamed description of CoCC actions, and a private log, visible to moderators on Phabricator and to the CoCC, with more details (not enough to harm anyone, of course).
The reputational damage caused by public logging of blocks in Wikimedia projects is real, and the consequences of applying it to a real-world abuse case would probably be more damaging to both parties, so finding a more sophisticated approach should not be scoffed at.
Unfortunately this thread seems to focus solely on bans for online activities. For offline activities there is no "admin" group that can be notified. There are all kinds of independent events at universities and other venues that are in no way affiliated with the WMF and often do not interact with WMF community liaisons, but are still bound by the CoC. None of the alternative solutions proposed above considers these events.
I would ask people opposing a public ban list to consider how their proposals fit with offline events. Thank you.
If it's not being done already, maybe the organizers of offline events should be privately provided, by someone at the WMF or the CoC committee, with a list of banned people so they can manage attendance at the event safely. Best regards.
In theory this works, sure, but there are two issues I can think of right now:
- Is anyone tracking tech events throughout the world? Would it scale to do that?
- Would that require event organizers to sign some kind of NDA? If so, that would be a huge turn-off...
Or make IRL bans public but other bans not. I imagine most bans will be about online spaces and not events (as most people are more likely to misbehave online than face to face).
Sorry to repeat myself (and maybe this is slightly off-topic), but if you noticed, we recently welcomed one or two new contributors on wikitech-l. The long nasty "disabled phab account" thread seems to have died down, and I hope it stays that way. Am I the only one concerned about how bad it looks to newbies? "Welcome to Wikimedia! Hope you like drama!"
It's a good thing the thread has died down, since the case itself was not worth it. However, this proposal is worth adopting.
This comment is in regard to the original amendment. Each venue should have a place where bans are listed, for the benefit of other administrators or anyone who might be asked to lift the ban. So I would suggest that Phabricator have such a place where the action, the target, the actor, the duration, and (potentially) the reasoning are made available to Phab admins. This does not necessarily need to be public.
There are two things being conflated in this discussion: 1) the need for other venue admins to know what's happening when someone is banned, 2) the desire of the public (contributors to Wikimedia projects including MediaWiki) to know what's going on. I'd rather deal with each separately.
I think you nailed the issue; personally I think the CoCC can't really be effective until problem 1 has been solved.
"Problem" 2 is not really a problem as far as I'm concerned - or at least it's something we are not in a hurry to solve.
I'm kind of irritated that most people here seem to believe there is no need to disclose ban information to the general public. How are you supposed to work like that in a community-driven project?
When a user is blocked on Phabricator, this is visible to the general public and prominently marked wherever the account appears (with a dot or greyed out, depending on the form). If I see this has happened to a user I worked with, how am I supposed to know what happened? Did this user just quit his MediaWiki work and leave? Did he stop working for the WMF? Or is he just temporarily gone? If I don't know what has happened, how shall I decide which actions to take when I need to contact this user?
I do believe we should disclose bans. A block summary and a log on Phabricator would be ideal, something more than just "disabled". I understand this probably isn't feasible to implement right now, but I assume Phab admins can at least edit the "blurb" or "title" on the user's profile? You could link to the diff of the log entry that lives somewhere on mediawiki.org.

This need extends well beyond CoC transparency. For instance, for a long time I collaborated with a particular user on Phabricator, and one day he disappeared. I saw his account was disabled and assumed he had done something very wrong. It turns out, sadly, that he had passed away, which I didn't discover until months later. So we should log all blocks, regardless of whether they are CoC-related. The block summary should be as descriptive as possible, barring privacy concerns; if we need to be vague, "code of conduct violation" (or what have you) would suffice.

I'll note that blocks, logs, internal drama, etc., are well hidden on the wiki, and I only hope the same will be true for Phabricator and other technical spaces. I'm all for transparency, but please, pretty please, don't use the highly visible mailing list to announce or debate these actions, scaring off uninvolved contributors or distracting them from their work. No one should be forced into this drama.
- Any ban made by the Code of Conduct Committee will be logged publicly at Code of Conduct/Cases/Log/2018 with the user name, the space of the ban (Phabricator, mediawiki.org, etc.), the duration, and, if not private, the reasoning, unless either of these two conditions applies:
- The reporter asks for the ban not to be public.
- Or the Code of Conduct Committee decides not to disclose anything because it might reveal private information. For example, logging harassment cases at Wikimedia events could lead to the real-world identities of users being disclosed.
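To make the proposal concrete, here is a sketch of what an entry on that log page might look like, in the wiki's own table markup. The columns, the user name, and the duration are illustrative only, not an agreed format:

```
{| class="wikitable"
! User !! Space of ban !! Duration !! Reasoning
|-
| ExampleUser || Phabricator || 30 days || (withheld as private)
|}
```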
Weak support, but I'd prefer to see a third condition there: the banned user asks for the ban not to be public.
Also I think there should be a guideline of how to handle bans in various spaces, as the log is not really useful if it's not exposed when someone tries to interact with a banned user. E.g. it should be linked from the MediaWiki block summary, the Phabricator user profile, the gerrit status message etc. Of course that's not part of the CoC amendment.
Send to translation
I've added a missing anchor in the EN page; please send it for translation to propagate the correction. Thank you.
Sorry, no progress since my last update of 5 May: the anchor needs to be declared in the English page, not the French one!
Text from the Contributor Covenant is licensed under a CC BY 4.0 license, rather than the MIT license.
At the time the Code of Conduct was created, the Contributor Covenant was licensed under MIT.
That is true. Can we change it now to CC BY 4.0?
StackOverflow Code of Conduct
Stack Overflow (or, to be more precise, the Stack Exchange network) just created its Code of Conduct. It's quite well done, IMO (of course very specific to what they are doing). The expectations part (i.e. starting with something positive) is something I miss from ours.
I like that Expectations section very much too, and I also like the general presentation of their CoC. It looks easier to consume, and less like a legal document.
I agree, it is friendly, positive and easy to understand.
👍, we should take inspiration and make our page better.
I see a profound problem when the aspirational (to cite a common example: "A Scout is: Trustworthy, Loyal, Helpful, Friendly, ...") is conflated with quasi-judicial concepts along the lines of "misconduct". This can lead to broad and vague speech codes, which may be selectively enforced in capricious ways. A serious question for those who find the prohibitions clear: would you say any of the controversial tweets of a recently hired New York Times editorial board member are in violation of this CoC's provision of "No bigotry. We don't tolerate any language likely to offend or alienate people based on race, gender, sexual orientation, ..."? Would she be expelled for "display[ing] a pattern of harmful destructive behavior toward our community"?
Seth, respectfully, I don't think your comment has to do with this topic and should be discussed in a separate topic.
@Huji, what it has to do with this topic is that my assessment differs from the views above, which say "quite well done", "easy to understand", and so on. I don't think it's well done or easy to understand at all, from the crucial perspective of what is a violation as opposed to an aspiration. Hence I ask: if it is well done, or easy to understand, please tell me whether the case I give above is, overall, a violation.
Oh, I see.
So let me clarify my opinion then: I don't know whether their CoC is something I would approve of or not. But for what it is, the method of delivery is appealing to me. I think we could and should try to make our own CoC more appealing like that as well. My opinion has nothing to do with the content of their CoC. Yours, apparently, does. And that's okay, but I wanted to clarify that we might be talking about different things.
Anyone willing to take the lead on turning the good parts of Stack Overflow's CoC into amendments to our CoC?
Recommendation to modify the appeals process
Hi, the Technical Collaboration team's Community Health group wants to share some thoughts about the appeals process that we are currently handling. The text describing the appeals process itself is fine. The problematic part is having an appeals team other than the CoC Committee itself without defining the relationship and governance between the two teams.
The core of the problem is that it is not defined who has the ultimate decision on the resolution of an appeal. This is fine when both teams agree on a resolution, but what if they don't? The options are:
- They have to keep discussing until there is consensus. This would put both teams on an equal footing, which is fine but needs to be documented.
- The Committee has the last word. This means that the Appeals team has an advisory function, which is fine but needs to be documented.
- The Appeals team has the last word. This might even be the default expectation (?), but it is actually the most problematic one because it means that the Appeals team has more power than the Committee itself.
If we want to go for the third option anyway, then the Appeals body cannot be a team like we have now, formed by design of Wikimedia Foundation staff. There were good reasons for this choice (leaving tough situations to paid professionals, saving volunteers some trouble), but a team of WMF employees having more power than the Committee is a setup that we don't want to have.
The current text states: "These [appeals] will be considered by the Committee, which may alter the outcome." This suggests to me that the Committee has the last word. I believe this makes perfect sense, since the foundation should only override community-elected structures for legal reasons (in which case the Community Health group doesn't sound like the right group to make a decision anyway).
Can you link to the pages for each of these two committees or teams? I want to see a page for each, listing who the members are, and stating how anyone comes to be on these teams.
Based on what you say here and my browsing around, I cannot quickly come to understand the difference in nature between these two teams.
Can the auxiliary members of the CoC be the Appeals team? In which case I think option 1 above makes the most sense.
I'm not excited by having the auxiliary members be the Appeals team. Speaking as a former auxiliary member, I'd prefer to keep that function strictly as a fallback in case of a conflict of interest among active CoC committee members.
An additional factor to consider: the Technical Collaboration team no longer exists as such. The people who formed the Community Health group are all still active, so if we receive a new appeal we can still handle it. However, we would welcome a decision on our proposal.
Tracked: task T199086
I think the Appeals team should have the final word on cases submitted to their consideration. Thank you.
Giving a non-elected (WMF-appointed) team the power to veto and repeal decisions made by a community committee seems contrary to being a community-driven organization. WMF staff should not have "benevolent dictator" powers in social processes.
In order to help the discussion, I think two aspects should be considered:
- Should the Appeals team be nominated by the Wikimedia Foundation or not? (and if not, how is this team nominated)
- Who should have the last word, the Committee or the Appeals team?
The combination of these points offers four scenarios. A fifth would be to have no Appeals team at all.
So far, two things can be seen here:
- A slight majority believes a WMF-based team should not have the power to overrule CoCC remedies. If there are no strong objections within the next 7 days, I will make this clear in the CoC.
- We no longer have a "Community Health group" at the WMF. This function needs to be given to another body. I don't know the WMF's internal structure well enough to suggest an alternative body for these cases, so I will reach out to people for suggestions.
I still think an appeals body should exist and be able to overturn decisions submitted to its consideration. Otherwise, what would be the point of having one? It'd be bureaucracy for the sake of bureaucracy and a false appearance that an appeals process exists. If the problem is that we don't want to grant such power to a WMF team for whatever reason, then I suggest the appeals body be formed by community members instead, elected in the same way as the CoCC. Thank you.
I don't have any better proposal, but creating a committee just to review appeals seems like too much overhead to me. There are several existing committees/groups/teams we could delegate this responsibility to.
The WMF operates most technical spaces, sponsors most development, etc., so ultimately it is the WMF's responsibility to ensure technical spaces have a healthy culture. Having it as the decision-maker of last resort makes sense.
On the other hand, if most decisions get appealed and some WMF team has to second-guess the CoC committee all the time, that seems like a bad situation. Rather than setting up another committee, I think it might be better to restrict appeals to situations where the committee made some objectively identifiable mistake (the WMF team's involvement would then be limited to verifying that the mistake indeed happened).
Dead link: 'Open Code of Conduct' at todogroup.org
In the paragraph "Attribution and re-use", the item "the Open Code of Conduct" has a dead link to https://todogroup.org/opencodeofconduct/.
Thanks, I've made it point to their Github repository for their CoC instead.
Despite my work, I still don't have the rights: can you push the page for translation to propagate the correction? Thanks.
Proposed amendment: Committee should serve for one year
At the hackathon we (Committee members) have been talking about extending the term to one year, because it takes rather a long time for the committee to learn how to work together, so it makes sense to work together for a longer period.
Even a year seems short to me. If the term is one year, then I hope people will stay longer when their committee service is working out.
+1 from me for terms of 1–2 years. My only fear with longer terms is the extra stress placed on committee members.