Most community codes of conduct are decoration. They live in a footer link, get copied from a GitHub template in 2019, and collect digital dust until someone screenshots a slur in the forum and the community manager is suddenly writing an apology thread at 11pm.
The problem is not that communities lack rules. The problem is that their rules are not backed by a system. No escalation ladder. No clear reporting process. No record of past decisions. Without those three things, a code of conduct is a wish list, not a governance document.
This guide covers what separates a community code of conduct template that actually works from one that just looks good at launch. You will get a full structural breakdown, a concrete enforcement framework with real ratios, an analysis of five widely-cited CoCs in the wild, a list of the most common failure modes, and a copy-paste template you can adapt in an afternoon.
Platform does not matter here. These principles apply whether you are running a BuddyPress community, a Discord server, a Circle space, a Slack workspace, or a Discourse forum.
Why Most Codes of Conduct Fail
Before building anything, it helps to understand the ways communities ruin their own CoCs. These are the most common failure modes, and they show up across platforms constantly.
Failure Mode 1: Lip Service Documents
A lip service CoC exists to signal good intentions but was never designed to guide decisions. You can spot one by three signs: vague language like “be respectful” with no definition of what disrespect looks like, no listed consequences, and no staff member who could tell you what happens after a report is filed.
Communities often create these documents because someone asked “do we have a code of conduct?” and the answer needed to be yes. They check the box. They do not build the system.
Failure Mode 2: No Enforcement
A code of conduct without enforcement is just a suggestion. When members see that violations go unaddressed, two things happen: bad actors learn the rules are optional, and good-faith members lose trust and disengage. The people who leave quietly are often your best contributors.
Enforcement failure usually comes from one of three root causes: the moderation team is too small, the escalation path is unclear so moderators freeze, or leadership overrides moderation decisions without explanation, which demoralizes the team. If you are looking at tools to help automate part of this, auto-moderation can handle routine violations so your team focuses on judgment calls.
Failure Mode 3: Inconsistent Application
Inconsistency is the trust killer. If your CoC is enforced differently depending on who the violator is, what day it is, or how much clout the person has in the community, members notice. And they talk about it. The perception of favoritism spreads faster than any actual rule violation.
Consistent enforcement does not mean identical outcomes for every situation. Context matters. But your process should be consistent even when your outcomes vary. Document your decisions. When you can point to a paper trail, you can defend your judgment calls.
Failure Mode 4: Reporting Black Holes
If someone reports a violation and hears nothing back, they will not report again. Worse, they will tell others not to bother. A working reporting system needs three things at minimum: acknowledgment within 24 hours, a timeline for resolution, and a closing message when the case is decided (even if the outcome is confidential).
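One way to keep reports out of the black hole is to track each one through an explicit lifecycle that only moves forward and always ends with a closing message. This is a minimal sketch; the state names and the `advance` helper are hypothetical, not part of any platform's API.

```python
from enum import Enum

class ReportState(Enum):
    RECEIVED = "received"            # report filed, the 24-hour clock starts
    ACKNOWLEDGED = "acknowledged"    # reporter told the report was seen
    UNDER_REVIEW = "under_review"    # timeline for resolution communicated
    CLOSED = "closed"                # closing message sent, even if outcome is confidential

# Each report moves forward one step at a time and is never silently dropped.
TRANSITIONS = {
    ReportState.RECEIVED: {ReportState.ACKNOWLEDGED},
    ReportState.ACKNOWLEDGED: {ReportState.UNDER_REVIEW},
    ReportState.UNDER_REVIEW: {ReportState.CLOSED},
    ReportState.CLOSED: set(),
}

def advance(state: ReportState, new: ReportState) -> ReportState:
    """Move a report to its next state, refusing any skipped or backward step."""
    if new not in TRANSITIONS[state]:
        raise ValueError(f"invalid transition: {state.value} -> {new.value}")
    return new
```

Forcing every report through the `CLOSED` state is what guarantees the closing message: a report cannot be archived while it is still `UNDER_REVIEW`.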
Failure Mode 5: Scope Mismatch
A CoC written for a small, synchronous Slack team does not translate to an async forum with thousands of members. A CoC written for adults does not work for a community with minors. When the scope of the document does not match the reality of the community, gaps appear, and violations fall through them.
Anatomy of an Effective Community Code of Conduct
A working community code of conduct has four layers: values, behaviors, consequences, and reporting. Each layer serves a different function, and they have to connect to each other. Here is what each one does.
Layer 1: Values
Values are the “why.” They tell members what kind of community this is and what it stands for. Keep this section short and genuine. Two to four values is enough. More than that and they start to feel like a mission statement that no one reads.
Bad values example: “We value inclusion, respect, kindness, collaboration, growth, transparency, and excellence.”
Better values example: “This community exists for people who build community software. We value direct feedback, honest disagreement, and helping each other get better at the craft. We do not tolerate behavior that makes those conversations unsafe.”
The better version is specific. It tells you what the community is for and what it is against. It gives moderators a reference point when they face judgment calls.
Layer 2: Behaviors
This is where most CoCs either succeed or fall apart. The behaviors section needs to be concrete enough to act on. “Be respectful” is not actionable. “Do not post personal contact information for other members without their permission” is actionable.
Split behaviors into two lists: behaviors that are expected (what good participation looks like) and behaviors that are not allowed (what will result in action). Both lists matter. The expected behaviors section sets a positive standard. The prohibited behaviors section defines the line.
For the prohibited list, cover these categories at minimum:
- Harassment: targeted, repeated behavior intended to distress a specific person
- Discrimination: treatment based on protected characteristics (race, gender, religion, disability, sexual orientation, nationality)
- Threats: explicit or implied threats of harm
- Doxxing: sharing private information without consent
- Spam: repeated unsolicited self-promotion or off-topic posts
- Impersonation: misrepresenting your identity
Add category-specific rules based on your community’s actual risk profile. A developer community might add rules about sharing exploits or malware. A community for freelancers might add rules about poaching clients. Know your context.
Layer 3: Consequences
If your CoC does not describe what happens when rules are broken, it is not a governance document. It is a list of suggestions.
The consequences section should describe your enforcement ladder and note that severity determines where on the ladder a case starts. Minor violations start at the warning level. Severe violations can skip straight to permanent removal.
Be explicit that you reserve the right to act without warning for serious violations. This protects your moderation team from being paralyzed by precedent when a bad actor needs to be removed immediately.
Layer 4: Reporting
The reporting section answers three questions for members who want to report a violation: how do they do it, what happens after they do it, and how are they protected from retaliation.
Include your reporting channel clearly (an email address, a form link, a DM handle). State your acknowledgment timeline. State that reports are treated confidentially. State that you do not tolerate retaliation against reporters.
If you can, describe how conflicts of interest are handled. If a report involves a team member, who reviews it? This level of transparency builds outsized trust with your members.

Enforcement Framework: Warnings, Timeouts, and Bans
Having an enforcement ladder written down is not the same as having a working enforcement process. This section covers how to build the ladder, how to calibrate it, and what ratios look like in practice.
The Four-Rung Ladder
Rung 1: Warning. A private message to the member explaining what rule was violated, what they need to change, and that future violations will escalate. Document the warning with a timestamp and a description of the violation. Every moderation action should leave a paper trail.
Rung 2: Temporary Restriction. Limit the member’s ability to participate. This could be a posting cooldown, removal from specific channels, or requiring moderator approval for posts. Duration should match severity: 24-48 hours for a first escalation, one to two weeks for a second.
Rung 3: Temporary Ban (Timeout). Remove the member’s access for a defined period. This is a serious signal. Most members who reach this rung either reform after returning or violate again quickly. A member who violates within 30 days of returning almost always hits a permanent ban.
Rung 4: Permanent Ban. Removal with no return path. Use this for: severe first-time violations (threats, doxxing, targeted harassment campaigns), repeat offenders who have escalated through the lower rungs, and members who create alt accounts to evade prior moderation.
Calibration: When to Skip Rungs
Not every violation starts at rung 1. Here is a practical framework for deciding where to start:
Start at rung 1 (warning) for: first-time rule violations that did not target a specific person, spam from someone who may not have read the rules, minor misconduct in tone or presentation.
Start at rung 2 (restriction) for: second violation within 90 days of a warning, first-time violations that affected multiple members, pattern of borderline behavior that individually stays under the threshold but cumulatively is disruptive.
Start at rung 3 (timeout) for: deliberate rule violations where intent was clear, harassment of a specific member, violation after a recent warning in the same category.
Start at rung 4 (permanent ban) for: threats, doxxing, illegal content, sustained harassment campaigns, evading a previous ban.
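The calibration rules above can be condensed into a small decision function your team sanity-checks cases against. This is a sketch under stated assumptions: the `Case` fields and category names are hypothetical labels, and real cases will need human judgment on the flags themselves.

```python
from dataclasses import dataclass

# Categories that always start at rung 4, per the framework above (assumed labels).
SEVERE = {"threat", "doxxing", "illegal_content", "harassment_campaign", "ban_evasion"}

@dataclass
class Case:
    category: str                       # e.g. "spam", "tone", "harassment"
    deliberate: bool = False            # clear intent to break a known rule
    targeted: bool = False              # aimed at a specific member
    affected_multiple: bool = False     # first-time violation with wide impact
    warnings_90d: int = 0               # prior warnings in the last 90 days
    warned_same_category: bool = False  # recent warning in this same category

def starting_rung(c: Case) -> int:
    """Return the rung (1-4) where enforcement starts for this case."""
    if c.category in SEVERE:
        return 4  # severe violations skip the ladder entirely
    if c.deliberate or (c.targeted and c.category == "harassment") or c.warned_same_category:
        return 3  # timeout for deliberate abuse, targeted harassment, or ignored warnings
    if c.warnings_90d >= 1 or c.affected_multiple:
        return 2  # restriction for repeats or multi-member impact
    return 1      # warning for first-time, untargeted violations
```

The point of writing it down this way is consistency: two moderators looking at the same flags should land on the same starting rung.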
Enforcement Ratios in Practice
Based on patterns across active online communities, here is what realistic enforcement ratios look like for communities with 1,000 to 50,000 members. These are reference points for calibrating whether your enforcement is too heavy or too light.
- Per 1,000 active members per month: 8-15 reports received
- Of those reports: 40-50% result in no action (duplicates, reports that describe no actual violation, or context that exonerates the accused)
- Of actionable cases: 60-70% result in warnings, 20-25% in restrictions or timeouts, 5-15% in permanent bans
- Permanent bans as share of total membership: under 0.5% per year in a healthy community
If your permanent ban rate is above 1% per year, your community may have a culture problem that enforcement alone cannot fix. If your warning rate is above 5% per month, your rules may be too strict or too poorly communicated.
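The reference ranges above lend themselves to a quick health check over your own numbers. This is a sketch; the function name and input shape are assumptions, and the thresholds are the ones stated in this section, not universal constants.

```python
def enforcement_health(active_members: int, reports_this_month: int,
                       warnings_this_month: int, bans_this_year: int) -> list[str]:
    """Flag enforcement rates that fall outside the reference ranges above."""
    flags = []
    reports_per_1k = reports_this_month / (active_members / 1000)
    if not 8 <= reports_per_1k <= 15:
        flags.append(f"reports per 1k members/month: {reports_per_1k:.1f} (reference: 8-15)")
    if warnings_this_month / active_members > 0.05:
        flags.append("warning rate above 5% per month: rules may be too strict or poorly communicated")
    if bans_this_year / active_members > 0.01:
        flags.append("permanent bans above 1% per year: possible culture problem")
    return flags
```

An empty list means your rates sit inside the reference ranges; each flag points at one of the two failure directions (too heavy or too light) discussed above.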
Documentation Requirements
Every enforcement action needs a record with five fields: date, member ID, violation description, evidence (link to post, screenshot), and action taken. Keep this in a private channel or moderation log that your team can reference. When a member appeals or when patterns emerge, this log is your only reliable record. As your community scales, you may also want to set up space moderators with scoped permissions so enforcement stays distributed without handing out admin access.
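The five required fields map directly onto a simple record type, and the log itself can start as an append-only list. A minimal sketch, assuming nothing beyond the Python standard library; the class and field names are illustrative.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModerationRecord:
    member_id: str   # who the action was taken against
    violation: str   # short description of the rule broken
    evidence: str    # link to the post or screenshot
    action: str      # "warning" | "restriction" | "timeout" | "permanent_ban"
    date: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# An append-only list is enough to start; never edit or delete past entries.
log: list[ModerationRecord] = []

def history(member_id: str) -> list[ModerationRecord]:
    """All past actions against one member, oldest first -- the appeal paper trail."""
    return sorted((r for r in log if r.member_id == member_id), key=lambda r: r.date)
```

When an appeal lands or a repeat pattern emerges, `history()` is the query your team actually runs: everything ever decided about one member, in order.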
Five Real CoCs Analyzed: Django, Rust, Vercel, Supabase, and Figma
Here is a close look at five widely-referenced codes of conduct and what they do well and where they fall short.
1. Django Community Code of Conduct
The Django Code of Conduct is a foundational document in the open source world. It is built on the “be excellent to each other” framing but makes that concrete with specific examples of unacceptable behavior. It explicitly covers in-person events as well as online spaces, which matters for a project with a large conference presence.
What works: The reporting process names specific people with email addresses rather than a generic address. Named contacts increase report rates because members feel they are reaching a human, not a system. The document also states explicitly that harassment outside Django spaces can affect participation in Django spaces, which closes a common loophole.
What could be stronger: The consequences section describes a generic “response” without specifying the enforcement ladder. Members reading it cannot anticipate what will happen after they file a report, which may suppress reports of borderline cases.
2. Rust Community Code of Conduct
The Rust Code of Conduct is written in plain language and has been iterated on publicly over the years. It covers both behavioral expectations and the reasoning behind them, which helps members understand not just what the rules are but why they exist.
What works: Rust’s document explicitly states that the moderation team’s role is to protect the community, not to adjudicate debates. This framing reduces scope creep in moderation and keeps the team focused on conduct rather than opinion. The document also notes that the team will prioritize community safety over individual comfort, giving moderators clear license to act decisively.
What could be stronger: Like Django, the enforcement ladder is not described in detail. The document references “consequences” without specifying what those look like at each severity level.
3. Vercel Community Guidelines
Vercel’s community guidelines are structured differently from a traditional CoC. They are framed as guidelines for the Vercel community forum and lean heavily on positive framing: what good participation looks like.
What works: The document is honest about what the forum is for (troubleshooting and sharing knowledge) and what it is not for (job postings, unrelated self-promotion). This scope clarity reduces the volume of moderation work because members have a clear reference for what belongs.
What could be stronger: The consequences are extremely vague. The document states that violations “may result in post removal or account suspension” without describing when either applies. For a developer community where account suspension could affect access to tools, this ambiguity is worth addressing.
4. Supabase Community Rules
Supabase runs an active Discord community and has written rules for that context specifically. The rules are short, direct, and scoped to Discord behavior patterns (no spamming DMs, no pinging mods unnecessarily, no sharing Discord invites in DMs).
What works: The rules are clearly written for the actual context. Many communities paste a generic GitHub-style CoC into their Discord without adapting it. Supabase wrote channel-specific behavior expectations, which are much more useful for moderators deciding whether a specific post violates a rule.
What could be stronger: The document is rules-focused without a values statement. Members joining the community do not get a clear picture of what the community is for and what kind of culture it aspires to. Values provide context that makes edge-case enforcement decisions easier.
5. Figma Community Forum Guidelines
Figma’s community forum has one of the more detailed enforcement descriptions of any major platform community. The document specifies warning timeframes, describes what types of content go where, and explains the escalation path from post removal to account action.
What works: Figma explicitly describes the difference between post-level actions (remove a post) and account-level actions (restrict or ban a user). This distinction matters because it helps moderators choose the least disruptive intervention. Removing a post is often sufficient. Escalating to account action should be intentional, not reflexive.
What could be stronger: The reporting process is not well-documented. The document tells members to “flag” content but does not explain what happens after they do, which reduces reporting confidence.
Summary: What Each CoC Does Well
| Community | Values Section | Enforcement Ladder | Reporting Process | Scope Clarity |
|---|---|---|---|---|
| Django | Yes | Partial | Strong (named contacts) | Good |
| Rust | Yes | Partial | Good | Good |
| Vercel | Partial | Weak | Minimal | Strong |
| Supabase | No | Partial | Minimal | Strong |
| Figma | Partial | Strong | Weak | Good |
None of these documents is perfect. All of them have strengths you can borrow and gaps you can close in your own document.
Community Code of Conduct Template for 2026
Use this as your starting point. Replace the placeholders in brackets with your community’s specifics. Sections marked [OPTIONAL] are worth including if they apply to your context.
[Community Name] Code of Conduct
Last updated: [Month Year] | Version: [1.0]
Our Purpose
[Community Name] exists for [describe who this is for and what the community does]. We built this space for [specific audience], and we want it to remain a place where [describe the value members get].
Who This Applies To
This code applies to all members in [list spaces: the forum, Discord, Slack, GitHub discussions, events, etc.]. It applies to guests and new members as well as long-term participants. [OPTIONAL: Behavior in external spaces that targets members of this community may also be considered.]
Expected Behavior
- Engage with good faith. Assume others are trying to be helpful until there is evidence otherwise.
- Give feedback on ideas, not on people. Critique the work, not the person who made it.
- Acknowledge when you are wrong. It models the behavior the community needs.
- Credit others for their work and ideas.
- [OPTIONAL] Use content warnings or spoiler tags where appropriate.
Prohibited Behavior
The following behaviors will result in moderation action:
- Harassment: Repeated, targeted behavior intended to distress or intimidate a specific person, including in DMs.
- Discrimination: Content that demeans people based on race, ethnicity, gender, gender identity, sexual orientation, disability, nationality, religion, or age.
- Threats: Explicit or implied threats of harm to any person.
- Doxxing: Sharing another person’s private information without their consent.
- Spam: Repeated unsolicited promotion, off-topic posts, or disruptive volume posting.
- Impersonation: Misrepresenting your identity, credentials, or affiliation.
- [Add community-specific prohibitions here]
Enforcement
Reports are handled by [the moderation team]. The team will review every report and take one of the following actions based on severity:
- Warning: Private message explaining the violation and what changes are required. Documented internally.
- Temporary Restriction: Posting limited or disabled for a defined period. Used for second violations or first-time violations with meaningful impact on other members.
- Temporary Ban: Access removed for a defined period. Reserved for deliberate violations and repeated escalation.
- Permanent Ban: Access permanently removed. Applied for severe violations (threats, doxxing, illegal content), sustained harassment campaigns, or repeated violations after return from a temporary ban.
We reserve the right to act immediately and without warning for severe violations.
Reporting
To report a violation, contact us at [[email protected] / use the report form at URL / DM @moderator-handle].
What to include: a description of what happened, links or screenshots if available, and the approximate date and time.
What to expect: We will acknowledge your report within 24 hours and let you know when it has been resolved. Reports are treated confidentially. We will not reveal who filed a report without the reporter’s permission. We do not tolerate retaliation against members who file good-faith reports.
Appeals
If you believe a moderation decision was made in error, you may appeal by contacting [[email protected]]. Appeals are reviewed by a different team member than the one who handled the original case. We will respond within five business days.
Updates to This Document
This code will be updated as the community grows. We will announce significant changes in [#announcements / the newsletter / the forum]. The version number and last-updated date are shown at the top of this document.
Putting Your Community Guidelines Into Practice
Writing the document is the start, not the finish. Here is what you need to do before you publish it.
Train Your Moderation Team
Every moderator needs to read the CoC and be able to answer three questions: what does a warning look like, what is the reporting workflow, and what should they do if they are unsure about a case? Run a 30-minute sync before launch and create a private channel where moderators can consult each other before acting.
Set Up Your Moderation Log
Create a private document or spreadsheet with columns for date, member, violation, evidence link, and action taken. Every action gets logged. This log protects your team from disputes and helps you identify patterns: serial reporters, recurring violators, and categories of violations that need clearer rules.
Publish and Pin It Prominently
A CoC that members cannot find does not work. Pin it in your welcome channel. Link it from your onboarding flow. Reference it in your sign-up confirmation email. Communities that surface their CoC during onboarding report higher voluntary compliance than those that bury it in a footer. If your community also handles support questions, read how to let your community answer support questions without turning it into chaos.
Review It Every Six Months
Communities change. New platforms add new behavior patterns. Your member base grows and shifts. Set a calendar reminder for every six months to review the document with your moderation team. Ask: are there violation types we are seeing that are not covered? Are there rules we are not enforcing? Has the community’s scope changed?
Be Transparent About Enforcement
You do not need to share case details, but periodic transparency builds trust. A quarterly post that says “we handled 23 moderation cases this quarter, including 14 warnings, 6 restrictions, and 3 bans” tells members that the rules are real. It deters bad actors and reassures good-faith members.
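A transparency line like the one above falls straight out of the moderation log. A minimal sketch: it assumes you can pull the quarter's actions as a flat list of labels (singular nouns, so a naive "+s" plural works); the function name is hypothetical.

```python
from collections import Counter

def quarterly_summary(actions: list[str]) -> str:
    """Render a transparency line from one quarter's action labels."""
    counts = Counter(actions)
    total = sum(counts.values())
    # Most frequent action first, naive pluralization (works for warning/restriction/ban).
    parts = ", ".join(f"{n} {label}s" for label, n in counts.most_common())
    return f"We handled {total} moderation cases this quarter, including {parts}."
```

Publishing counts rather than case details keeps confidentiality intact while still proving the rules are enforced.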
What to Do When Enforcement Gets Hard
Some situations make enforcement genuinely difficult. Here are three that come up often and how to handle them.
High-Status Violators
When someone with high community standing violates the CoC, the pressure to look the other way is real. Resist it. Apply the same process you would for anyone else. If you cannot do that, your CoC is not a governance document, it is a document that protects the powerful.
Document the decision carefully. If you make an exception for any reason, write down exactly why. That note protects you if the situation comes up later.
He-Said-She-Said Reports
When a report has no independent evidence and the parties give conflicting accounts, you cannot always determine what happened. In these cases, your options are: take no action (and document why), issue a general reminder to both parties without assigning blame, or escalate to a senior moderator or independent reviewer.
Never dismiss these reports without documentation. Even if you cannot act, a second complaint against the same person from a different reporter six months later will be much more actionable if you have a record of the first one.
Moderator Burnout
Moderation work is emotionally taxing. Moderators who review harassment reports regularly, manage angry appeals, and make judgment calls under pressure burn out. Build rotation schedules, set limits on how many cases any one moderator handles per week, and create a space for your team to decompress. The highest risk to your enforcement system is not a bad actor. It is your moderation team quitting.
Key Takeaways
A community code of conduct that members follow is not a legal document or a virtue signal. It is a system. It has a values layer that tells members what kind of community they joined, a behaviors layer that defines the rules concretely, a consequences layer that describes what happens when rules break, and a reporting layer that gives members a path to take action.
Build the system before you need it. Communities that write their CoC after a crisis are always writing it under pressure, with the community watching, which produces exactly the lip-service document they were trying to avoid.
Use the template above as a starting point. Study the examples from Django, Rust, Vercel, Supabase, and Figma for what to borrow and what to improve. Train your team before launch. Log every decision. Review the document every six months.
The communities that handle conflict well are not the ones that avoid conflict. They are the ones that built a system for handling it before they needed one.
If you are building or migrating a community platform and need the underlying infrastructure to support enforcement workflows, reporting systems, and member management at scale, talk to our team about what that build looks like.