We are updating our Code of Conduct and we would like your feedback


Can we get "assume good intent" back in the Code of Conduct?

Or, better still, "presume good intent": this places the emphasis on initial assumptions, and can't be construed by bad-faith actors as a fully-general "never object to anything" directive.


The landing page is an excellent introduction to the Stack Exchange system, but will new users see it? Currently, the text on sign-up says:

By clicking “Sign up”, you agree to our terms of service, privacy policy and cookie policy

where the Terms of Service says:

you affirm that you have read, understand, and agree to be bound by these Public Network Terms, as well as the Acceptable Use Policy and Privacy Policy.

None of this links to the Code of Conduct; in fact, the only place that really seems to link to it is the Help Center. I was under the impression that the Code of Conduct was binding on users, but it looks like it's only binding on moderators.

Perhaps you could add a link to the Code of Conduct to the footer, or the tour, or something? (I can't really think of anywhere it'd fit.)


The wording in the Abusive behaviour, Sensitive content and imagery, and Political content policies looks like it's adapted from US federal law, but (to a European ear) the "actual or perceived race" bit reads like the race realism common to US politics. That, or it's confusing ethnicity and race.

Perhaps you could take inspiration from French law instead?

non-public defamation of a person or groups of persons based on their origin or on their – actual or assumed – membership or non-membership of a specific ethnic group, nation, race or religion, shall be punishable by the fine laid down in respect category 4 offences. Non-public defamation of a person or group of persons based on gender, sexual orientation or disability shall be subject to the same penalty

The list mentions gender twice, too; I assume that's a copy-paste error.


The Bullying and Harassment section includes:

Content that contributes to a hostile or threatening environment, denies a person's expressed gender identity, or invalidates a person's individual experiences in a manner that causes harm.

Broadening this to "denies a person's expressed identity" (omitting "gender") would cover a few extra circumstances that, historically, we haven't had policies prohibiting.

It'd be nice to have written backing for when I smite offenders with the hammer of fury – though not necessary, of course.


This paragraph doesn't make sense (another copy-paste error, I assume):

To ensure that all users feel safe and welcome, we do not allow behaviors or content that causes or contributes to an atmosphere that excludes or marginalizes, promotes, encourages, glorifies, threatens acts of violence against, or dehumanizes another individual or community on the basis of their actual or perceived race, gender, age, sexual orientation, gender identity and expression, disability, or held religious beliefs.

There are a few grammar errors (e.g. "behaviors […] that causes", apparently); and read literally, it doesn't really make sense. This bit:

promotes, encourages, glorifies, threatens acts of violence against,

looks like a mistake introduced in copyediting; it should be:

promotes, encourages, glorifies, or threatens acts of violence against

but I'm not sure how to integrate this into the main list; all my attempts have been unreadable. Perhaps someone from English Language and Usage might be able to manage it?


M--'s comment has saved me writing my own paragraph:

I really liked the examples under Unacceptable Behavior: "No subtle put downs or unfriendly language... Cont'd": meta.stackexchange.com/conduct. This new version is a wall of text that not everyone would read. I am not against the additions per se, but taking away those bullet points in the current version that were really helpful is a questionable decision. – M--

This is an important point, but (as I mentioned earlier): the old Code of Conduct was barely linked from anywhere. The main benefit of the clear, pretty, concise Code of Conduct was being able to link it in comments and have it understood. We can't do that with the new version, true…

… but we can use meta. It's where most of our other policies are, after all. As a bonus, that'll let us tailor our recommendations to the kind of inadvertent bad behaviour we see on individual sites, or even write individual posts for different (classes of) situations.

The new Code of Conduct is tailored more towards bad actors than any previous policy we've had; but, as it says in the introduction:

This Code of Conduct is meant to work alongside individual site policies.

The only downside is that we won't be able to lay it out as well as the current CoC, being restricted to little more than CommonMark. But, we could just link to the Internet Archive: that's not going anywhere, despite the IA's recent legal issues.


As a final note: I'd like to say how impressed I've been with the whole feedback process so far; especially with the responses of Bella_Blue, Cesar M and others. It is abundantly clear that they know what they're doing – at least as far as their main job is concerned (human exception handling, and wrangling moderators). I haven't felt ignored, despite overreaction and unprofessional conduct on my part. It really feels like we've got our CM team back.

I still don't quite understand why the new CoC needs to be long and exhaustive, but they've tried to explain it to me, and what I've understood, I've agreed with. (Something something "expectations" something "having unwritten rules" something "legislature" something "justifying moderation decisions".) I am willing to defer to their expertise on this matter.

answered May 3, 2023 at 19:55 by wizzwizz4

There are several points in the proposed policy which I find concerning with regard to curators, potentially casting legitimate curation activity in a negative light.

To be clear, I must preface this with: I do not think the policy intends this to be the case. The issue is that we have had repeated complaints that are similar to points raised in the new code of conduct.

My concern is that this can "give ammo" to complainers to not only complain about otherwise legitimate curation but also try to escalate. It's also worth noting that these complaints ignore the "assume good intent" directive that the previous Code of Conduct had.

Here are the points that I find problematic, and I will illustrate why:

From "Abusive behavior policy"

  • Bullying and Harassment – severe, repeated, or persistent unsolicited conduct, misuse of power or tools, cruel criticism, or attacks that target specific users or groups of people in a manner that causes harm. Content that contributes to a hostile or threatening environment, denies a person's expressed gender identity, or invalidates a person's individual experiences in a manner that causes harm.

Very regularly, users equate downvotes with bullying instead of the content rating system they are supposed to be. Many curation activities have also been labelled "harassment" in the past: voting to close, downvoting, commenting to ask for clarification, editing a post to fix issues. In addition, active curators on a tag are often accused of "repeatedly" "targeting" a user, when they simply review most incoming questions.

Overall, a complaint like this can very broadly be used to attack users on the site. And it literally has been. Often.

  • Hostile comments – malicious, unkind, or mocking comments that provoke or insult another person, including (but not limited to) the usage of gendered cursing terms in a derogatory way.

Users have taken issue with many a comment that points out some issue with a post or with its content (e.g., an issue with the code itself), or with comments that are simply terse. Very often a complaint is posted on Meta explaining how "many users attacked" the post in some fashion, when in fact it is just multiple people expressing that they have trouble understanding the post, or even offering pointers on how to improve it. Yet such comments are misinterpreted as hostile on a very regular basis.

From "Disruptive use of tooling policy"

  • Misuse of flags – using flags to harass, target, or abuse other users, or misappropriate moderator attention

At this point, it is the norm for users to complain that any moderation activity done on their content is in some form a "misuse" of the systems built into the site. Closures, downvotes, reviews, etc. have all been accused of being "misused".

  • Vandalism of content – deliberate editing to destroy or sabotage content

Some users are so protective of their content that they perceive pretty much any edit as vandalism.

Here is a more concrete example: There was one case where a user added a signature to each of their posts. When it was edited out for being superfluous, the user got rather agitated and even threatened to sue the site for breaching their freedom of expression...for removing the signature.

In general, we often get complaints about edits rooted in "freedom of speech/expression" rhetoric. Thus, many users could turn to this clause and call any edit they do not like "vandalism".

  • Targeted votes – votes cast in succession that are non-organic in nature or not based on the quality of the content
  • Revenge downvoting – votes cast as a way to harass, target or abuse other users, so as to lower their reputation, and that are not based on the quality of the content
  • Mass downvoting – votes cast against a person or topic that are non-organic in nature or not based on the quality of the content

The same applies to all three: users have a habit of trying to identify a "bad actor" when they get downvotes on their posts, instead of realising that their posts share similar faults. On Stack Overflow, the FAQ even includes Why shouldn't I assume I know who downvoted my post? because of how often users try to accuse others of "targeting" or "mass downvoting".

  • Misuse of close votes – voting to close or delete a question with repeated disregard for community consensus, or as a way to harass, target or abuse other users, or misappropriate moderator attention

There is a constant stream of complaints about any sort of closure being "inappropriate".

Even some long-standing users express the belief that closure should not be used even if a question has problems and cannot be answered; that doing so is wrong, and that we should instead wait for the author to address the issues without closing the question.


Again, I want to point out that I do not believe the proposed Code of Conduct is intended to be used against curators for using close votes or downvotes or any of the other systems built into the site. However, I have seen users criticise and denounce all these activities, very often due to a drastic misunderstanding of what the sites are about and how to interact with them, and in basically all cases by ignoring the "assume good intent" directive. Thus, I can already foresee that in the future they will look towards the new Code of Conduct and cherry-pick the things that sound the most like the accusation they are about to make.

I do not have a solution to this. Yet, this is the feedback I have about this proposal. At the very least, we could have "assume good intent" back in the Code of Conduct, and maybe dedicate a section to it, so that when a user says "I am being bullied by close votes" there is something to point to as a response.


Also, I would really appreciate it if the company made it clear what is acceptable, in addition to what is not. Very often it feels like curators are put on the firing line against new users loaded up with arguments from off-site sources: that any moderation is "toxic", that rights are being ignored, that curation is aimed at harming the user themselves, etc. Yet there is no single place that I know of that explains why all of this is a deeply rooted misconception.

And no, various Meta discussions, scattered FAQ entries, and the Help Centre web of articles do not count as "one single place".

answered May 4, 2023 at 11:33 by VLAZ

Essentially turning @Kevin B's comment into an answer:

Is there a list anywhere of what the substantial changes are, relative to the previous CoC? at first glance it appears to more or less be the previous CoC but using more specific terminology rather than the more... open one we had before.

In other words, please tell us what you've changed.

You've given us the main reasons:

  • There are certain things that the current Code of Conduct does not address. The world is ever-changing and it is our responsibility to ensure the safety of users of this network.

  • Upcoming regulatory pressures from Brazil, the EU, and elsewhere demand that our content moderation practices are able to stand up to scrutiny. We do not believe that our current code delivers on those requirements.

You've sorta told us why you've changed, but you haven't explicitly mentioned what's been changed. If you want us to be able to give you detailed, thoughtful feedback, it's much easier for us to do so if we have that information, particularly since many people won't be familiar with the exact copy, intent, and effect of the old one. (I certainly am not.)

Even just bullet points illustrating what you've added, removed, or changed would be fine.

@Mark Olson's comment in response to the 'too much has been changed' comments makes another good case for this:

Even if it's a complete re-write, it's hard to believe that the intended effect of the code will be completely changed. Since you made those changes for a purpose, please share with us what those purposes were (beyond the generic). And it would be very helpful to understand what specific behaviors that are currently permitted (or only vaguely prohibited) will soon be prohibited.

answered May 3, 2023 at 17:52 by CDR

Misleading information - We do not allow any content that promotes false, harmful, or misleading information that carries the risk of harm to a person or group of people

Broadly speaking, we do not allow and may remove misleading information that: Is likely to significantly contribute to the risk of physical harm to a person or a group of people

And what about incorrect (or otherwise harmful) answers? I don't know about non-technological sites, but for technology sites, incorrect answers, or answers that don't give sufficient warnings for the consequences of certain actions, or answers with subtle bugs or unguarded edge-cases can carry the risk of harm to people.

Who knows what a copy-pasted bad answer with a memory leak or resource-not-properly-closed could do in the wild? Such information could certainly meet the criteria of being harmful information, and carrying the risk of harm to people.
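To make that concrete, here's a deliberately contrived sketch (hypothetical code, not from any real answer) of the kind of snippet that looks copy-paste-ready but hides exactly these problems:

```python
# Hypothetical example: the kind of answer code that "works" in a
# quick test but hides the problems described above.

def average_chunk(values, start, end):
    # Unguarded edge case: if start == end, this raises
    # ZeroDivisionError; if the range runs past the end of the
    # list, it silently averages fewer values than requested.
    return sum(values[start:end]) / (end - start)

def read_config(path):
    # Resource not properly closed: the file handle leaks.
    # Harmless in a one-off script, a real problem once this is
    # pasted into a long-running service.
    f = open(path)
    return f.read()
```

Neither bug shows up in a happy-path test, which is exactly how such answers can gather upvotes before anyone notices.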

And what about overflow bugs? You've got your classic https://en.wikipedia.org/wiki/Therac-25 (not that that was caused by Stack Overflow content, but it was caused by an overflow mistake, and I'm sure Stack Overflow has its fair share of code with unhandled overflow cases).

And what if someone were to create and propagate purposely, subtly buggy/unsafe code to mess up future LLMs? (Ex. See this Live Overflow video (probably an April fools joke)). Does this CoC have implications for such activity within the Stack Exchange network?

Here are some discussions about copy-paste cases that haven't necessarily led to harm to people, but I find worth mentioning anyway: https://twitter.com/Foone/status/1229641258370355200, https://stackoverflow.blog/2019/11/26/copying-code-from-stack-overflow-you-might-be-spreading-security-vulnerabilities/

And yet mods aren't expected to judge or moderate the correctness or safety of answers; we leave such things to the voting/comment/edit system (at least on technology sites; I don't know about non-technology sites). Assuming that won't change, it just seems to me that the wording of this new CoC could use some adjustment to deal with the dissonance on this point.


I see an edit has been made:

Content that falls under this policy can be engaged with in several ways: it may be that editing is enough, it may be that providing a factual answer (using the platform!) is enough, or it may be that it needs to be deleted. We encourage users to exercise their best judgment in how to curate and respond to this type of content and, when in doubt, to flag it or contact us.

I just wonder if it'll be clear to readers that editing should generally not conflict with the original author's intent or change the meaning of the content. Ex. we don't edit out spam from posts that try to hide the spam in other content. We just flag it as spam. I'd also have listed (down)voting.

answered May 3, 2023 at 22:45 by starball

Change the "Political content" header to make it clear that only some political content is not allowed.

Currently all bold headers under "Unacceptable behavior" provide good summaries of unacceptable behavior. You don't really need to read beyond the headings to understand the point. ("Abusive behavior", "Sensitive content and imagery", etc.)

However, this is not true for "Political content". The heading makes it look like it's not allowed in any form, but the description clarifies that only some forms of political content are not allowed.

I suggest changing the heading to make that immediately clear. Perhaps "Harmful political content", or something similar. (I'm not a native speaker, so I'm not sure what adjective would fit there.)

answered May 3, 2023 at 17:47 by HolyBlackCat

Let's suppose for a second your code of conduct wasn't a unilaterally imposed totalitarian bulwark, but a "handshake agreement between users and the company".

How, then, would we the users be able to hold the company (and members of its management) accountable for breaking this agreement? Specifically,

  • Hostile comments - Remember when SE Inc.'s Director of Public Q&A declared company critics are part of the problem and need to leave the network? - those were definitely hostile and derogatory comments. What mechanism did we have to take her to task? None. And what mechanism will we have with this brand new shiny CoC? Again, none.

  • Bullying and Harassment - The company, via its appointed moderators, has, on at least a few occasions, engaged in bullying of users critical of its policies. A prominent case was that of Monica Cellio. To this day, the company holds on to its claim that its actions were somehow justified and nobody has answered for that affair. Of course, such actions are usually hidden from the eyes of most users unless others somehow start up a conversation about it; otherwise - it's secret punishments; penal actions against users are not made public (let alone with access to relevant evidence or adjudicative decisions).

Also, I don't know about you, but where I come from, an agreement requires both parties to, well, agree. And that document is the opposite of agreeable.

... but of course, this is all just a rhetorical exercise. Your ideological preening is tiring. You're just going to continue to do what you want, and we'll just have to hope not to become the focus of attention for some weird US-subcultural sensibility of yours. Actually, I'm worried about what exactly your "pain points" this time are, and whether we're going to have more mistreatment of people with these new excuses like last time.


As a service to my fellow users, here is some music to set the mood for reading the CoC.

answered May 6, 2023 at 22:41 by einpoklum

Are AI-generated answers banned under "Disruptive use of tooling" and/or "Inauthentic usage"?

In the section titled "Unacceptable behavior", it states the following (links removed, text otherwise as-is):

  • Disruptive use of tooling - We do not allow any use of privileges in a targeted and disruptive manner that causes harm to the community or compromises the integrity of the content. Read more about our Disruptive Use policy.
  • Inauthentic Usage - We do not allow any use of the system that violates our Acceptable Use policy or directly causes unnecessary and unwanted disruption and/or harm to users and/or the network. Read more about our Inauthentic Usage policy.

It goes on to define that under the section "Inauthentic usage policy" where it bans

  • Artificially boosting the popularity/score of content and/or users.
  • [...]
  • Plagiarizing or copying content from websites, books, or other online and offline tools without proper attribution in a manner that violates our referencing standards.

As ChatGPT-generated answers generally lack citations [i.e., they are presented as if the user wrote them themself, without any mention that they were AI-generated], would that be considered disruptive use of tooling? AI-generated answers are widely considered to be something that "compromises the integrity of the content", and they sometimes attract upvotes from people not realizing that they are AI-generated, which is arguably "Artificially boosting the [...] score of content and [...] users".

answered May 31, 2023 at 4:45 by cocomac

My Miscellaneous Thoughts, Questions, and Suggestions

I'm very pleasantly surprised to see links to the Help Center pages on How to Ask and How to Answer in the "Our expectations for users" section. I'm glad you're shining more light on the Help Center pages, and explicitly wording those guidelines as expectations. Now the problem is just that a lot of people won't ever read the CoC page :P

Suggestion: Put the kindness point at the top of the "expectations" list

For the "Our expectations for users", I'd like to see the point on "Engaging with users" at the top of the bullet list. "No matter where you engage on the network with your peers, we expect all users to treat one another with kindness and respect." should be underlying every other point.

Also, nit: The current CoC page has in big words, "kindness, collaboration, and mutual respect.", but in the new draft, the starting section says "rooted in cooperation and mutual respect" (where's "kindness"?).

Suggestion: Bring back the point on avoiding sarcasm

The current CoC page says "Avoid sarcasm and be careful with jokes", but I don't see any such similar statement in the new draft. Why was that removed? I for one am very glad that this community is currently one where sarcasm is at least stated as something to be avoided. This might fit under the "Bullying and Harassment", or "Hostile comments" sections (I think probably the former), or go under a dedicated bullet point in the "Abusive behavior policy" section.

Suggestion: Bring back the bad/good conduct comparison examples

I agree with what others have stated about the loss of the examples in the new draft about unacceptable comments. I think those examples are very helpful because they're concrete and down-to-earth. I'd like those to stay or carry over in some form.

I also liked having the point that said "No name-calling or personal attacks", which (forgive me if I'm wrong) I don't see explicitly covered in the new draft.

Suggestion: Bring back the "Enforcement" section

Why is there no section on Enforcement (mentioning steps like "Warning", "Account Suspension", and "Account Expulsion") like there is on the current page?

Suggestion: Concretely explain the meaning of "non-organic" voting

I think "non-organic" might need some more explanation of what it means in the section on bad voting behaviours. My general understanding is that it means "voting on things you wouldn't come across when using the site like an average user", but that's an incredibly vague (and probably poor) definition. It would be nice to pin it down to something or narrow it down to something more concrete.

Misc Suggestions on Links and Wording

In the bullet point on "Sexually Explicit Material", I'd suggest linking to the page in the Terms of Service's section on the Acceptable Use Policy for its related statement on suspensions.

In the section on "Disruptive use of tooling policy", it could be useful to link to the Terms of Service's section on the Acceptable Use Policy for its statement that such violations will result in terminated accounts and blocked addresses.

Can "Content glorifying harm" be changed to "Content that glorifies harm"? The first time I read it, my brain accidentally misread read it as "Content-glorifying harm" (harm that glorifies content) :P

In the "Inauthentic usage policy"'s bullet on multiple accounts, I think it could be nice to link to What are the rules governing multiple accounts (i.e. sockpuppets)?.

Question: Why the non-direct link to tips on engaging with users contemplating self-harm?

Is it intentional that

If you would like to engage with a user in crisis, you may want to read this answer for some helpful tips.

links to a page that then links to https://meta.stackexchange.com/a/340597/997587 ? Or did you actually mean to link directly to that? It's just a bit confusing to get linked to something that doesn't seem to be what the link text seemed to indicate, and have to look for another link in the linked post.

Question: How strict is this CoC on spam and sexually explicit content in user profiles?

Usernames and profiles
While we encourage users to express themselves in their profiles, all user profiles in their entirety are subject to the Code of Conduct and all policies outlined or incorporated therein.

And yet,

Side comment on our optics

In my time on reddit, I've seen a lot of posts that dump on the Stack Overflow community and paint it as enjoying behaviours that break a lot of these rules (Ex. characterizing users as making statements like "that's a stupid question and you're stupid for asking it"). I find it really sad that we've come to have such a reputation / left such an impression. (These threads are pretty easy to find, and get re-hashed often; just google "site:reddit.com stackoverflow toxic" and use the tools section to limit to results within the past week or month.) I continue to make comments in such threads that clarify that we have our Code of Conduct, and that such behaviours are not tolerated by the community as a whole.

See also my other answer posts

answered May 3, 2023 at 23:40 by starball

First, I wanted to say I'm glad you guys are taking the time to give the CoC a face-lift. The revised CoC appears to be quite detailed and has a lot of expanded information in the "Policies hyperlinked in the CoC" section that is much more exhaustive than the current CoC, and frequently links to your /legal page on Stack Overflow, which is nothing but helpful. I'm also happy to see that you incorporated the feedback of not just close to everybody at the company, but also the entire community (moderators first, then the rest of us as of this post).

With those thoughts out of the way, I wanted to ask you to elaborate a bit on what the largest pain points of the current CoC you intended to fix with these changes are. You mentioned:

There are certain things that the current Code of Conduct does not address. The world is ever-changing and it is our responsibility to ensure the safety of users of this network.

And while I agree, I wanted to know what topics the company identified as specifically needing added or expanded upon. My goal in asking is to subject those particular portions to more scrutiny to ensure that the changes are hitting the nail on the head and can stand the test of time.

answered May 3, 2023 at 16:50 by Spevacus

First, I want to thank you for taking this on. I've watched the discussion surrounding every version of these sites' "code" since it was called "The FAQ", and... It's been a shit-show every single time. At some point I realized that it has to be; if discussing such a code wasn't chaotic, it would mean we didn't care; it would all be for nothing.

With that said...

Broad observations

I like that it's short. In particular, it shares something in common with that first "FAQ": the most important bits can all fit on one side of a sheet of 8.5"x11" paper. I... Don't think we've really had that in a lot of years. Not sure if anyone will ever be moved to print this out and hang it on their wall while typing here, but... If anyone did, it might actually not be a waste of wall-space.

I like that there are links to meta posts in tricky situations. For this to be a code, it must be adopted - and daily executed - by all of us. A code isn't static, etched in stone - it's living, etched in our hearts and shaped by our hands. We have callouses from where it rubs on us, and it deserves the same. There is no king here; we have no use for a code that is written in stone.

I'm somewhat annoyed by the frequency of the word "user". This is pedantry for sure, but... "User" is generally either shorthand for "user account" (a set of information used by the software to manage access to the software) or "person using the site" - distinguishing between these two uses is usually possible based on the context (I'll note below where it wasn't), but... It feels a little bit lazy when used too often. I counted 33 instances of "user" or "users" in the draft, which was enough that by the time I reached the end of the document I'd started counting. For comparison, that's exactly as many occurrences of "user" as of "people", "person" and "post" put together, and precisely 33 more uses of "user" than of "pimpmobile".
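(For anyone who wants to check my tally, something like this quick sketch would do it; draft.txt is a hypothetical filename for a plain-text copy of the draft.)

```python
import re

# Tally word frequencies in a plain-text copy of the CoC draft.
# "draft.txt" is a hypothetical filename - point it at your own copy.
with open("draft.txt") as f:
    text = f.read().lower()

for word in ["user", "people", "person", "post", "pimpmobile"]:
    # The optional "s?" catches simple plurals ("users", "posts").
    count = len(re.findall(rf"\b{word}s?\b", text))
    print(f"{word}: {count}")
```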

Observations on specific sections

Abusive behavior policy

cruel criticism, or attacks that target specific users or groups of people in a manner that causes harm.

Pretty sure once someone clears the "causes harm" threshold, we aren't gonna be too fussed about whether or not the target is actually using the site or has an account - IOW, harassing someone into leaving the site isn't a loophole here. Just say "people or groups of people".

Dangerous speech – any form of expression (e.g. text, images, or speech) that represents rhetoric that demonizes or denigrates a group of people in a way that depicts them as threats so serious that violence against them becomes acceptable or necessary; rhetoric that increases the risk of violence being condoned or committed against a particular group.

This is a long and complicated sentence, but I think you're aiming for a prohibition on what is sometimes called "incitement". While I generally appreciate the brevity of this document, in this paragraph I felt like it got in the way. I recommend either spending a bit more room breaking it down (or at least breaking the paragraph into multiple sentences...) or linking out to something here or in the help center that can lend clarity.

Dehumanization – depriving individuals or groups of people of their perceived humanity and dignity, for example, by comparing humans, groups, or their stated or perceived behaviors in a derogatory manner with non-human entities such as animals perceived as inferior, bacteria, viruses, microbes, diseases, infections, filth, and other qualifiers.

This long sentence, OTOH, I thoroughly enjoyed. No accounting for taste!

Self-harm and suicide

I'm... Not happy to see this here. But I am glad that you included it. There are people I still think about on a regular basis who took their own lives after sharing some of themselves with us on these sites, who never exactly reached out for help but maybe... Tried and weren't understood. Then again, out of all the emails I sent or saw sent to folks who were overtly suicidal in posts here, I'm not sure I could point to one that seemed like a clear win.

IOW... I'm not sure any of the responses or practices discussed in this section do a bit of good, but I don't have any better ideas and if seeing that the section even exists maybe helps someone... Then it's worth having it.

Political content policy

as long as they do not otherwise violate the Code of Conduct and do not contain insulting language directed at individuals.

Ok, yeah, "individual" is another reasonably unambiguous word for "person" - could also use that in the "Abusive behavior policy" instead of "user"...

Hint. Hint.

Misleading information policy

This is all well and good, but also a huge missed opportunity to note that misleading information may also be edited. Like, is very, very likely to be edited. Will almost certainly be edited. Unless, like, it's abundantly clear that the author is trolling and cares nothing for the truth.

I mention this because... Well, editing is probably my favorite feature here. I like editing, and also I like that folks can edit my posts when they find them misleading. Which they frequently do. And yet, it remains a feature and a behavior that seems to trip up folks casually interacting with these sites.

Actually... I'm gonna end on that note. The comments I had on the next two sections - "Disruptive use of tooling policy" and "Inauthentic usage policy" - were pretty nitpicky. But emphasizing the positive ways in which editing is used here to actively combat misinformation is important. Let's please do more of that!

answered May 5, 2023 at 1:33 by Shog9

Whatever else you do please keep this line somewhere:

No name-calling or personal attacks.
Focus on the content, not the person.

It's the one line I find really useful -- which I quote in a moderator message, when I tell someone their comment has been contrary to the CoC.


As well as that one line, consider deleting the longer lists.

That one line lets me forbid someone's saying anything at all about someone else -- because it is so short. By not trying to spell things out, it empowers me to act as a "human exception handler" -- and IMO it doesn't imply that because some category of personal remark or of invective isn't listed, it's permitted.

Perhaps you think that, "Unfortunately, things need to be spelled out" -- and so you write this blurb:

To ensure that all users feel safe and welcome, we do not allow behaviors or content that cause or contribute to an atmosphere that excludes, marginalizes, or dehumanizes individuals or communities on the basis of their actual or perceived ethnicity, age, sexual orientation, gender identity and expression, disability, or held religious beliefs.

Perhaps you think, "Yes that just about covers it".

But I'm told you've had some people argue that if an expression isn't listed in the CoC then it's permitted.

If you did believe that the longer list in the new CoC is now exhaustive and that everything else is permitted, I think perhaps the new list still lets someone call you (for example) a "newb" or "noob" -- that form of insult isn't listed, is it? Or, if not an "idiot" (because that's a disability), what if someone accused you of being an "SJW"?

Whereas under the old CoC I can ban and sweep away all these forms of expression -- because they're "name-calling" or a "personal attack", and "focus" on "the person" not "the content".

Other "personal" "attributes", gender and ethnicity and all that, are already out-of-bounds and off-topic by the same token.

answered May 20, 2023 at 10:43 by ChrisW

We plan on going live with this update later in May, but until then, this is a very real chance for you to provide actionable feedback on the Code.

It's all a bit rushed, isn't it? May just started and I have this sneaking suspicion that y'all are implying that this will be wrapped up within the next two weeks. That doesn't leave a lot of time for public feedback or discourse, and if you're expecting that I shake your hand on this, I'm going to want to go through this with a fine tooth comb and give you guys a chance to iterate and improve on it.

Don't just bowl us over on this one. Again. Please. I'm begging you.

Could we get at least a month guaranteed to have that debate and discourse?

Particularly, we'd like to make sure we've captured the correct expectations in the "Our expectations for users" and we're very open to improving it further.

Well... if I don't get enough time to comb over everything, at least I'll try to contribute here.


I originally had a comment on the "definition of tools" section as it associated with Bullying and Harassment, but I feel like I'm happy/comfortable with the provided definition.


In the Bullying and Harassment section, I think there's a weasel word in this expression. It's not that I disagree, but people interpret "cruelty" in different ways.

Bullying and Harassment – severe, repeated, or persistent unsolicited conduct, misuse of power or tools, cruel criticism, or attacks that target specific users or groups of people in a manner that causes harm. Content that contributes to a hostile or threatening environment, denies a person's expressed gender identity, or invalidates a person's individual experiences in a manner that causes harm.

Someone who's curt with their comments may come across as cruel to another person. How do you plan to adjudicate those situations? Does the CoC give users a blanket ability to just... claim that they are victim to this clause and demand retribution when the person making the comment isn't being cruel, they're just being blunt?


This caught my eye:

This Code of Conduct is meant to work alongside individual site policies. Sites and Chatrooms may choose more restrictive policies for their content than what is allowed here, particularly around what is on-topic or off-topic.

Does this imply that chat cannot be less restrictive (within reason)? For instance, permitting profanity/swearing in chat when the main site is aimed more towards semi-casual office speech? Or is chat going to be held to a higher universal standard?


Overall though on a first pass, I feel pretty OK with what's here. This might change with some of the revisions made or other suggestions incorporated, but I do think that this establishes a lot of the norms that we've held already and does away with the wild and self-defeating polarization that the most recent CoC revision brought in.

answered May 3, 2023 at 19:32 by Makoto

I have concerns about the "Misleading information policy" and how to adjudicate the borderline between "wrong" answers and "misleading" answers. My concern is both substantive (what is the definition of the difference?) and procedural (who exactly is empowered to adjudicate misinformation vs. crap, and what guidelines do they use?).

Historically, we have had a very powerful tool on the network to combat misinformation, the downvote. Users who post too many downvote-gathering answers face being banned from answering any more questions. Diamond moderators have historically stayed away from the determination of "Truth", instead relying on the community to differentiate high-quality answers from the detritus of naive, misinformed, poorly-sourced, disorganized, and/or just plain bad answers. The overwhelming guidance given to moderators thus far has been to not take preemptive action on answers felt to be "wrong", but only take action against non-answers such as spam, hate speech, patent nonsense (e.g. "apoaspogpergaeprg hi hi hi"), new questions, and commentary (e.g. "Did you ever find a solution to this problem?"). These removal reasons are covered by our existing Spam, Rude, and Not An Answer flags and would not be covered by a hypothetical Misinformation flag.

We even have a standard flag decline reason that moderators use to remind flaggers that moderators do not take action against answers on the basis of them being wrong:

Declined - Flags should not be used to indicate technical inaccuracies, or an altogether wrong answer.

Can I assume that, with the new CoC, this flag decline reason will be going away and/or being replaced with something that acknowledges that moderators will now be handling some wrong answer flags?

Are we going to have a "Misinformation", "Misleading", or "Conspiracy" flag that users can raise on answers and have them evaluated for Truth by moderators?

For example, such a flag might look like one of these:

Misleading: This post answers the question, but it contains content that is unsupported or widely disproved. It is harmful to public health or democratic institutions, and might need to be removed.

Misinformation: While this post answers the question, it relies on lies or falsehoods and threatens public health or democratic institutions. It is not helpful or useful, but dangerous and deceptive.

Conspiracy: This post answers the question, but relies on widely discredited conspiracy theory content such as QAnon, Chemtrails, Flat Earth, Satanic Ritual Abuse, or 9/11 False Flag Operations or is otherwise harmful to public health or democratic institutions. It violates our Misleading Information policy and should be removed.

How exactly should moderators be determining if an Answer brought to their attention is Misinformation that they should take action on right away or simply a Wrong answer to be left to the community to downvote into oblivion? Does it depend on the poster's intent (e.g. posting vaccine Autism nonsense because they don't know any better vs posting vaccine Autism nonsense as part of a calculated campaign of fraud)? Does it depend on how "obvious" the false or unsupported statements are? Can "wrong" answers that require specialized knowledge to recognize as false (e.g. that dereferencing a null pointer in C is defined behavior) ever be considered misinformation or should they always be considered "just wrong" and downvoted?

What constitutes Misinformation vs Incorrect information has varied and continues to vary. For example, at various times in the past few years and according to various authorities, the idea that SARS-CoV-2 originated in a lab has been treated anywhere as a likely and supported idea, a doubtful but reasonable hypothesis, to absolute misinformation. Do moderators have the skills and discernment to adjudicate all of this?

Stepping back for a moment, do we even want moderators to become arbiters of Truth?

I do want to say that I "get" that the misinformation rule is designed to combat things like QAnon, Satanic Child Abuse, and Freemasons-Conquering-The-World conspiracy postings and not posts from ignorant high schoolers who are shaky on C sequence points and exactly what constitutes undefined behavior, but I worry greatly about how this is going to play out in practice where the boundaries between conspiracy and wrong is unclear or opinion-based.

In response to a comment by Starball, is there a difference between someone posting an answer on Stack Overflow that is vulnerable to a known exploit (harmful if a reader uses the code in a production system) and posting an answer on Politics.SE claiming that Joe Biden rigged the 2020 US POTUS election (harmful to democracy)? Do we give the first one a pass because moderators aren't expected to be experts in every known exploit or haxoring technique, but come down hard on the second one because clear and convincing evidence of the legitimacy of Biden's election is easy to find and widely accepted among non-experts? Do we give the first one a pass because it is non-political?

I'm especially concerned how we are going to proceed with Truth adjudication when moderators are not required to be subject matter experts. Are we going to have new policies requiring moderators to prove subject matter expertise in their site's scope (e.g. by sitting some sort of content exam or submitting academic transcripts or professional licenses or certifications), or are we going to introduce a new subject matter expert role? For example, will Medical Sciences.SE need to hire a panel of physicians and public health experts to adjudicate which answers are harmful to public health and subject to immediate deletion under the Misleading Information Policy and which are just crappy answers that can be handled with downvotes? Will we have a rule that only licensed pilots, aircraft mechanics, and air traffic controllers may become or remain diamond moderators on Aviation.SE in order to ensure that moderators will be able to differentiate dangerous misinformation from just plain crap? Will answers on Parenting.SE need to be screened by pediatricians, child psychologists, or Child Protection Service (CPS) officers who will be empowered to preemptively delete content they think could be harmful to children if followed?

In response to a comment by Fattie, I do see something similar. Viewpoints do not become Misinformation because they are wrong, unsupported, or even potentially dangerous in the hands of the foolish or ignorant, they are Misinformation because they are dangerous to those in power. For example, QAnon directly challenges the authority of Joe Biden and he therefore has an interest in finding ways to suppress it in order to bolster his position. Similarly, vaccine denialism is dangerous to Big Pharma and the security of their revenue streams. Now, I don't personally believe in QAnon or vaccine denialism, but I do recognize that they are being slammed as misinformation precisely because they threaten those in power and not because they are wrong or unsupported. So, I would advise that were consider whom we are protecting when we identify, flag, and remove "misinformation" from our sites, and whether those parties deserve our protection.

Also keep in mind that even true information can be "misinformation" when it challenges those in power. It wasn't too many decades ago that Big Tobacco pooh-poohed and suppressed scientific research showing that smoking was harmful, vigorously asserting that it was unsupported and misleading. It wasn't whether smoking was harmful to smokers, but whether publishing allegations of harm was harmful to profits. And it was!

answered May 9, 2023 at 11:26 by Robert Columbia

Some of the new additions, like the political content and misleading information policies, are much more likely to be violated in chat than on the main sites. There are some sites that deal with content like that, such as Skeptics and Politics, but on most sites political content would simply be off-topic. So what remains is chat, where people might talk about topics like this.

Chat moderation is a bit of a mess: all mods have power there, but nobody is actually responsible. And if we encounter a complex case that potentially violates e.g. the misleading information policy, we might have to escalate to the CMs when we cannot judge the case ourselves. The complexity of these new rules makes it much more likely that we have to escalate than before. But that escalation mechanism relies on our own sites, while in chat we might be moderating users who don't have accounts on the site where we are a mod. And it's also not visible to other chat mods that something was escalated.

Is there any guidance specifically on how to moderate chat given the new Code of Conduct?

answered May 3, 2023 at 18:07 by Mad Scientist

Who is meant to be enforcing this? The "Our expectations for users" section says that "if you encounter something that you believe is harmful, please flag it for moderator attention", which implies the moderators are the first line of handling. However, the "misleading information" section says that "we do not allow any content that promotes false, harmful, or misleading information". Are moderators expected to enforce this? If so, is it now expected that moderators are subject matter experts in the sites they moderate? That hasn't previously been a requirement, but without subject matter expertise, I'm not sure a moderator can enforce this policy. In fact, we have a decline reason for flags about inaccuracies and wrong answers.

In the "Unacceptable behavior" section, there could be some redundancy between "misleading information" and "political content". It's not fully clear to me if the "for the purpose of promoting the interests of a political party, government, or ideology" is regarding all content the promotes those interests (if so, it should be in the political content section) or specifically refers to content that "promotes false, harmful, or misleading information" (if so, it's redundant, since all content that promotes such information is prohibited).

In the "Unacceptable behavior" section, the description of "Sensitive content and imagery" is unnecessarily verbose. Suicidal and self-injurious behaviors are harmful behaviors, so the first two sentences are repetitive as they are about promoting or encouraging or providing instruction for harming oneself or others.

In the "Political content" section, the hyperlink says to "Read more in our Political Speech policy". The section is then later called the "Political content policy". Please review to ensure that all references to other sections use the correct names.

Why is the Political content policy formatted differently than the other sections? I find the brief introductory paragraph followed by a small number of bullet points to be easy to read and consume. However, this is a small wall of text. I would recommend reformatting for consistency and readability.

The "Misleading information policy" has some content that is too specific. For example, why specifically call out widely disproven claims regarding health? Any promotion of disproven claims should be prohibited by a misleading information policy. A single bullet point can be used to prohibit this content and give examples of health, historical events, and election fraud, if specific examples are deemed to be necessary.

In the "Abusive behavior policy", "Hostile comments" should be expanded to include "another person or group". To reduce verbosity, it's likely that "hostile comments" is unnecessary and is well covered by the other categories or could very easily be rolled into the other categories by moving a few words.

The "Disruptive use of tooling" policy is extremely verbose. The whole paragraph under "Targeted Voting" is extraneous information. You can remove the concept of "non-organic" and end up with a much smaller, cleaner list that makes it clear that voting is about the content and not people or topics.

In the "Sensitive content and imagery policy", why is "non-consensual imagery" limited to nude and sexually suggestive imagery? With that qualifier, I don't see how it becomes different than sexually explicit material.

In the "Sensitive content and imagery policy", moving "self-harm" to its own bullet adds unnecessary verbosity. It could very easily be combined with "content glorifying harm".

In the "Self-harm and suicide" section, why do you link to the 988 Suicide & Crisis Lifeline? Is this available outside the United States? I believe the link to Suicide.org is sufficient and the reduced verbosity makes it easier to consume.

The style of hyperlinking is quite verbose, and even annoying. Why did you choose to use the "Read more on XYZ" style? For example, instead of saying something like "Read more on how to ask a good question", make "Asking" a hyperlink to more details on asking. When you do use keywords to make hyperlinks, sometimes they aren't quite right, such as making "minimum quality" a hyperlink, when it should probably be "minimum quality standard" or "quality standard", since the link brings you to information about the standard.

I'd recommend running this through a tool that calculates readability scores. I did this for the current version, and it has a Flesch-Kincaid Grade Level of 13 and a Gunning Fog Index of 15.6. These high scores indicate that it may not be the most accessible for people who are not native English speakers. It does get a little bit better if you analyze it section by section, but the first section has a Flesch-Kincaid Grade Level of 11.3 and a Gunning Fog Index of 13 - still a bit on the high side for non-native English speakers, and potentially fatiguing for English speakers to read. Write this for the users, and not the lawyers - the second paragraph in the Abusive behavior policy is a good example of this.
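If you want to replicate those numbers, the two formulas are easy to sketch. Note that the vowel-group syllable counter below is a crude heuristic, so expect the output to deviate a little from dedicated tools; "coc_draft.txt" is a hypothetical filename for a plain-text copy of the draft:

```python
import re

def count_syllables(word):
    # Crude heuristic: count vowel groups. Dedicated tools use
    # dictionaries and better rules, so expect some error.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    complex_words = sum(1 for w in words if count_syllables(w) >= 3)

    # Flesch-Kincaid Grade Level and Gunning Fog Index.
    fk = 0.39 * (n / sentences) + 11.8 * (syllables / n) - 15.59
    fog = 0.4 * ((n / sentences) + 100 * complex_words / n)
    return fk, fog

fk, fog = readability(open("coc_draft.txt").read())  # hypothetical file
print(f"Flesch-Kincaid grade: {fk:.1f}, Gunning Fog: {fog:.1f}")
```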

answered May 5, 2023 at 11:01 by Thomas Owens

  • Under "bullying and harassment", what does it mean to "invalidate" "a person's individual experiences in a manner that causes harm"? What is "harm"?

  • Under "dangerous speech", what is "rhetoric that increases the risk of violence being condoned or committed against a particular group"? A discussion of crime can be construed to increase vigilantism (violence against "criminals"), even if the contributor explicitly calls for acting within the law. Never mind—legal police work may count as "violence against criminals".

  • Under "bigotry and discrimination", the list of characteristics is phrased in a way that implies exhaustiveness. Why doesn't it include sex (the most glaring omission by far), national origin, or economic background?

  • Under "extremism", what are "hateful organizations"? (It's ORed with other clauses, so there isn't any definition of "hateful".)

  • Under "hateful imagery", sex, national origin, and economic background are not listed.

  • Under "mocking content", what is "in a manner that could be reasonably interpreted as causing harm"? What harm?

  • Under "Political content", para. 2, sex, national origin, and economic background are once again not listed.

Under "self-harm", it is defined as "suicidal and self-injurious behaviors". It stands to reason that "harm" (sans "self-") is "murderous and injurious behaviors", which is rather hard to effect over TCP/IP.

The other, implied, definition of harm is "everything we prohibit, because things we prohibit are by definition harmful, else we wouldn't prohibit them". It's circular, exploitative, and unhelpful.

answered May 10, 2023 at 17:14 by dsz

Your sentence structure is completely overboard. Look at this:

To ensure that all users feel safe and welcome, we do not allow behaviors or content that cause or contribute to an atmosphere that excludes, marginalizes, or dehumanizes individuals or communities on the basis of their actual or perceived ethnicity, age, sexual orientation, gender identity and expression, disability, or held religious beliefs.

Parse that:

  • To ensure that all users feel
  • (safe and welcome),
  • we do not allow
  • (behaviors or content)
  • that
  • (cause or contribute)
  • to an atmosphere that
  • (excludes, marginalizes, or dehumanizes)
  • (individuals or communities)
  • on the basis of their
  • (actual or perceived)
  • (ethnicity, age, sexual orientation,
    • (gender identity and expression),
  • disability, or held religious beliefs).

I mean, dang!!

50 words in one sentence, from an archetypal "wall of text". It's horrible to read.

Compare with the current CoC, which is 28 words, more readable, more human.

No bigotry. We don’t tolerate any language likely to offend or alienate people based on race, gender, sexual orientation, or religion — and those are just a few examples.


I write too much myself, in the workplace -- and people complain, especially people for whom reading English is a chore, because it's a second language or because they have other things to do -- so I try to edit to make my text concise.

Editing to make your text concise, readable, friendly, isn't on your agenda, is it?


And for the record, a teacher maybe encourages good behaviour instead of criticizing bad. This CoC is all about negatives, lists and lists of bad stuff. Even the opening is harsh—the imperative mood:

No matter where you engage on the network with your peers, we expect all users to treat one another with kindness and respect.

How about request, encourage, ask, or even just "need"? This CoC is rude! And bossy.

answered May 13, 2023 at 9:16 by ChrisW

I actually like this a little better than the original version, though there are a few bits of feedback I'd still have.

Upcoming regulatory pressures from Brazil, the EU, and elsewhere demand that our content moderation practices are able to stand up to scrutiny. We do not believe that our current code delivers on those requirements.

In a sense - this might be somewhat problematic. Certain countries, or even states, seem to be hurling themselves headlong into policies that are entirely orthogonal to what SE intends to do. In addition, we mostly moderate communities, not content. I'd love to be wrong, but this aspect still feels potentially troublesome to me.

We have outlined below some expectations that are generally true across the network; some sites may have stricter requirements or use different policies for questions/answers/comments. Please adhere to individual site policies where they differ from these expectations.

I like this, but in a document that aims to spell out expectations explicitly, I feel that "individual site policies as documented on each site's meta" would be more precise, and would give a hat tip to meta as the place to look for those policies.

On that note, might I suggest that the addenda/links be hosted on meta (under an announcement lock), along with a copy of the CoC? If there are changes, that provides a very organic way to keep track of them.

Also, on the third read-through, and after wizzwizz4's answer, the underlying tension finally dawned on me. We often joke about meta being case law. There are two Western traditions of law: the English one is closer to what we do, while the Napoleonic/Roman system on the continent relies more on laws being specific and explicit. While one of the goals is to have more detailed rules for people to refer to, it's worth remembering that part of a moderator's role is to handle exceptions; where something isn't covered by the rules explicitly, the trust and support for moderators to handle it in the way they deem best for the community shouldn't be weakened in any way.

answered May 3, 2023 at 23:54 by Journeyman Geek

You objectify people -- saying that users have an ("actual") ethnicity, etc.:

To ensure that all users feel safe and welcome, we do not allow behaviors or content that cause or contribute to an atmosphere that excludes, marginalizes, or dehumanizes individuals or communities on the basis of their actual or perceived ethnicity, age, sexual orientation, gender identity and expression, disability, or held religious beliefs.

Instead, it would be better to list these as types of abuse or discrimination, rather than as types or categories of people or users.

The current CoC mentions types of offensive language; that's better than the new text:

No bigotry. We don’t tolerate any language likely to offend or alienate people based on race, gender, sexual orientation, or religion — and those are just a few examples.

The most useful line in the current CoC was this -- it let me moderate any or all personal comments:

No name-calling or personal attacks. Focus on the content, not the person.

answered May 12, 2023 at 19:34 by ChrisW

My feedback amounts to:

  • 1 concern;
  • 1 disappointment;
  • 1 note of appreciation;
  • 1 note of acquiescence to the circumstances that be;
  • and 1 warning.

My main concern with these changes would be people trying to frame the code of conduct as their personal shield against curation. As the pages were extended to include phrases such as "repeated, or persistent unsolicited conduct, misuse of power or tools, or attacks that target specific users or groups of people in a manner that causes harm", it will only be a matter of time before they are inappropriately directed at curators. This answer already expands on this concern rather well. Still, I will stay optimistic and say that fake shields can only last for so long. As we experienced back in 2018, a surge of references to the code of conduct may be bound to appear, but that will likely die down over time, just like before.

I am also disappointed that "assume good intentions" (or "presume good intent", which is also a great rephrasing) was not brought back. I feel that this attitude alone could prevent many escalations from happening on the network.

In any case, I appreciate that the code of conduct continues to be updated over time to reflect the fact that societal norms are far from static, and that the proposed changes are made transparent¹, even if just to appease some international entities and/or stakeholders of the company. Looking at the new corpus from the perspective of a Stack Overflow user in particular, I do not really envision this as something that will significantly affect how this site is moderated. If I am wrong about this, then I am either oblivious to certain moderation edge cases, or the changes will turn out to be regrettable.


¹ It is also no surprise that the ghost of Monica Cellio appears whenever a change of code of conduct is involved. That incident was poorly resolved, and Stack Overflow will need to keep making efforts to regain trust in this process. It would help to assess very carefully before moderating answers to this question, as any moderation actions applied to answers (such as deleting them) are going to be perceived more strongly than in other contexts.

answered May 19, 2023 at 11:14 by E_net4

I recognize that the Code of Conduct aims to be the law of the land here on the network, but I feel that it's a bit too long to be seriously read by users. I'm not against having a long and specific Code of Conduct to leave out any ambiguity, but can we have a short version too? Perhaps something like a tl;dr box at the top or a separate page like each site's /tour.

I see two benefits to doing this:

  • A summary with fewer fancy words and less legalese would make the CoC easier for non-native English speakers to understand.
  • More users would be willing to read a shortened version.

Edit: Apparently the proposed CoC already has a summary, but it isn't clearly distinguished as a summary. I propose a border or colored background to set it apart.

answered May 20, 2023 at 19:36 by Michael M.

In reading through the proposed CoC, I encountered the following issues which I haven't seen mentioned elsewhere:

  • Unacceptable behavior

    • Abusive behavior
      • Reword (awkward; reduce complexity)

        Curation activities such as voting (upvotes, downvotes, voting to close, etc) don’t typically qualify as abusive behavior.

        to

        Curation activities such as voting (upvotes, downvotes, voting to close, etc) are typically not abusive behaviors.

  • Misleading information policy

    • Remove "around the world" from:

      Is likely to significantly harm democratic institutions or voting processes around the world or to cause voter or census suppression.

      Including "around the world" could be read as requiring that the "significant harm" must be world-wide, rather than, potentially, just local.
    • Remove "machines" from:

      Promotes disproven claims of election fraud or manipulation as factual or severely misrepresents the safety or validity of results from voting machines

      Including "machines" implies that it doesn't apply to forms of voting which don't use machines.
  • Disruptive use of tooling policy

    • Targeted Voting
      • Include both delete and close votes; include both questions and answers:
        • Change:
          • Misuse of close votes – voting to close or delete a question with repeated disregard for community consensus, or as a way to harass, target or abuse other users, or misappropriate moderator attention
          to
          • Misuse of close or delete votes – voting to close or delete a post with repeated disregard for community consensus, or as a way to harass, target or abuse other users, or misappropriate moderator attention
        • Make it clearer who is responsible for bots. Include that the creator of the bot is jointly responsible if the bot was created to violate the rules. Don't reference "above rules", because doing so implies excluding other rules; just say that the bot's actions are the responsibility of the users involved. And don't limit users' responsibility for bots to only "flagging or other moderation processes"; responsibility should cover all bot actions (e.g. rapidly posting 1,000 nonsense answers wouldn't be covered by "flagging or other moderation processes").
          • Change:

            When bots or other tools are used to automate flagging or other moderation processes, they fall under the above rules, and the operator is responsible for its actions.

            to

            When bots or other tools are used to automate actions, the operator and/or user who authorized the use of their account is fully responsible for the bot's actions as if performed by themselves. For bots or tools created with the intent to violate the rules which apply to a user or account, the creator may also be held responsible.

  • Inauthentic usage policy

    • Change: As written, you're specifically authorizing multiple accounts so long as "none are used to avoid system, or moderator, imposed restrictions or disciplinary limitations", which is, at best, a subset of the behaviors which shouldn't be permitted (e.g. pretending your alternate account is someone else and posting comments with it to make a post appear helpful to someone else):
      • Using multiple accounts and/or profiles to circumvent or evade system, or moderator, imposed restrictions or limitations that one account/profile would have. Each account/profile may represent and be used by only one individual without express written permission from Stack Exchange. One individual may use multiple accounts/profiles, provided that none are used to avoid system, or moderator, imposed restrictions or disciplinary limitations.
      • Sharing accounts/profiles between different individuals without express written permission from Stack Exchange, even if these individuals work for or represent the same corporation or organization.
      to
      • Using multiple accounts and/or profiles to circumvent or evade system, or moderator, imposed restrictions or limitations that one account/profile would have. One individual may use multiple accounts/profiles, provided that none are used to avoid system, or moderator, imposed restrictions or disciplinary limitations, or otherwise violate the rules for using multiple accounts.
      • Sharing accounts/profiles between different individuals, even if these individuals work for or represent the same corporation or organization. Each account/profile may represent and be used by only one individual throughout the lifetime of the profile/account. Stack Exchange may expressly grant written permission that an account/profile is not subject to this restriction.
    • Remove "created for a specific purpose."
      • Change:

        Coordinating inauthentic behavior through the usage of fake accounts created for a specific purpose.

        to

        Coordinating inauthentic behavior, including, but not limited to, through the use of fake accounts.

        Don't require the use of fake accounts, nor that the fake accounts were created for a specific purpose; we shouldn't have to determine the specific purpose for which an account was created, or even that it was created for a specific purpose.

answered May 24, 2023 at 16:40 by Makyen

This current version reads a lot nicer and seems more succinct and neutral than former iterations. Thank you for that.

Some questions and notes, in no particular order, but numbered for the sake of readability and ease of commenting.

  1. [..] have spent hundreds of hours crafting this document to alleviate pain points we have found with our current Code.

    Can we get some insight into that process? What did the team responsible for this CoC consider pain points?

  2. Why, under Self-harm and suicide, does this CoC only mention American helplines? Aren't there international organizations with similar functions?

  3. Under Sensitive content and imagery, why does imagery that induces or glorifies harm get all the attention? Has this been a particularly common and disruptive usage?

  4. We do not allow any content that promotes false, harmful, or misleading information [..]

    Unless, I take it, it was part of an answer written with the best intentions? If that assumption is correct, could you add the word "intentional" somewhere in there? Will this be monitored, or is it exclusively up to other users to flag such content? Does this otherwise mean answers will get censored (instead of getting edited after misinformation has been pointed out)?

  5. The Misleading information policy is also referred to as Misinformation policy (similarly to what Thomas Owens points out in their answer about Political speech policy and Political content policy).

  6. As has been pointed out in other answers, if this is to be considered "a handshake agreement between users and the company", the responsibilities of the company are blatantly absent. If users on this platform agree to the CoC, can they e.g. expect fair and transparent reciprocal behaviour? Where is the "Our promises to you" to mirror the "Our expectations for users"? (I'd like to direct your attention to einpoklum's answer for a clearer case.)

answered May 8, 2023 at 17:40 by Joachim

The proposed CoC is so huge and unwieldy and convoluted that it can hardly be plainly understood.

One of the basic principles of justice is that the rules must be made known. A necessary ingredient of that is having them in a form that is accessible to the people subject to them, without their having to employ a lawyer.

I don't know what new criterion the management thinks would not be fulfilled by the existing CoC, but would be satisfied by the proposal. And as that's apparently the criterion against which the proposal is being drafted, that should be made readily known as well.

'Codes' like this are too much like playing a nasty game of gladiators with the people supposed to be subject to them. The rules' authors are trying to entangle us in the net so they can then stick us with the trident.

It would be better by far to scrap it all and replace it with something plain and simple that makes good sense and justice.

answered May 17, 2023 at 19:14 by terry-s

Typo:

  • Inauthentic usage policy

    Stack Exchange may expressly grant writen permission

    should be ("written" should have two "t"s):

    Stack Exchange may expressly grant written permission

Run the whole thing through a spellchecker

I don't know if there are other issues like the above, but it's something that should show up with a basic spelling check, which is available in most editors, even the <textarea> elements in browsers. Given that the above typo exists, it would be a good idea to just put each page into an editor that includes a spellchecker in order to double-check there aren't more similar issues. Obviously, that doesn't guarantee that you get everything, but it would be a baseline from which to start.
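
As an illustration, a pass like the following would catch it. This is only a sketch: it assumes the third-party pyspellchecker package (pip install pyspellchecker) and a hypothetical plain-text export of the page saved as policy.txt.

# Minimal spell-check pass over a plain-text export of a policy page.
import re
from spellchecker import SpellChecker  # third-party: pyspellchecker

spell = SpellChecker()  # default English word list

with open("policy.txt", encoding="utf-8") as f:
    words = re.findall(r"[A-Za-z]+", f.read().lower())

# Words the dictionary doesn't recognize; "writen" would land here.
# Expect some false positives for site jargon, so treat the output
# as a review list, not an error list.
for word in sorted(spell.unknown(words)):
    print(word)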

answered Jun 4, 2023 at 4:32 by Makyen

For the "Our expectations for users" section:

Voting - Our voting system is central to how Stack Exchange works. Votes are how the Community signals great content and rewards its members for their contributions. Improperly cast votes undermine the integrity of the platform. Read more on how users are expected to use the voting system.

The linked Help Center page says very little about how users are expected to vote. It just says at the bottom:

Voting up a question or answer signals to the rest of the community that a post is interesting, well-researched, and useful, while voting down a post signals the opposite: that the post contains wrong information, is poorly researched, or fails to communicate information.

And it doesn't say anything about fraudulent voting (e.g. sock-puppet voting on your own posts) or serial voting, which, if you're going to talk about how we're expected to vote, really should be part of the document or a linked resource. Please fix that. I suppose updating the Help Center page would work.

Actually, why not just add a link in that bullet point to the "Disruptive use of tooling policy" and "Inauthentic usage policy" sections, which do cover various bad voting things?

Also, as far as my understanding goes, people are free to vote in whatever way they want as long as it's not fraudulent or serial, and it's just recommended (in the vote tooltips and Help Center) that votes be used to indicate usefulness. So what's up with saying that we expect people to vote in particular ways? That seems to go against my general understanding (aside from the fraudulent and serial voting part).

answered May 3, 2023 at 22:29 by starball

Just to toss this out for consideration: decades ago, IBM used to flag text altered in recent releases of a document with "change bars" in the left margin. Some modern word processors can do the same thing, though I don't know of one which allows exporting the change bars to the published document. (The usual problem with WYSIWYG is that WYSIAYCG...)

I highly recommend that SE adopt this practice. It makes finding and reviewing alterations TREMENDOUSLY easier, while not requiring that everyone work with a list of where changes were made.

A less print-centric version would be to hypertext the new version with tooltips or a sidebar, so that each change is rendered differently and pointing at it or clicking on it brings up the old text and a discussion of why the change was made. See the Annotated XML Specification for an old but elegant demonstration of this approach, though there it's explaining the original document rather than explaining changes. There's a related page explaining how that was built; better tools exist now which could simplify the process.
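
Either variant could be prototyped cheaply. As a rough sketch of the rendered-diff idea, using only Python's standard library (the file names here are hypothetical), difflib.HtmlDiff emits a side-by-side HTML view in which changed passages are highlighted:

# Render a side-by-side HTML diff of the current and proposed CoC.
import difflib

with open("coc_current.txt", encoding="utf-8") as f:
    old_lines = f.readlines()
with open("coc_proposed.txt", encoding="utf-8") as f:
    new_lines = f.readlines()

html = difflib.HtmlDiff(wrapcolumn=80).make_file(
    old_lines, new_lines, fromdesc="Current CoC", todesc="Proposed CoC"
)

with open("coc_diff.html", "w", encoding="utf-8") as f:
    f.write(html)  # open in a browser to review the highlighted changes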

This can be made much easier to review. It should be. And the tools needed to do it will be helpful in the future.

answered May 22, 2023 at 9:53 by keshlam

There are currently moderator actions being taken against ChatGPT usage and other auto-generated content. If there are moderator actions, shouldn't the CoC back the moderators up a bit better? I have a feeling that this is in there someplace, but I'm not sure where.

Along the same lines, I don't believe copyright issues get the prominence they deserve in the "inauthentic usage" section. In fact, to me they feel sort of hidden: when I read the first few items in the section, nothing suggests that reading on would turn up anything to do with copyright, but there it is. (Anecdotally, my original intent was to say "copyright issues are missing", but I decided on a more thorough read and a Ctrl-F for "copyright" before clicking "Post".)

I would go so far as to recommend renaming the section to "Copyright violation and other inauthentic usage" and perhaps placing the copyright issues first. It should probably also discuss AI-generated content in this context.

answered May 6, 2023 at 20:00 by Scott Seidman

Would the new Code of Conduct be enforced retroactively?

answered May 31, 2023 at 17:07 by Random Person
