Every day, people use Facebook to share their experiences, connect with friends and family, and build communities. We are a service for more than two billion people to freely express themselves across countries and cultures and in dozens of languages.

We recognize how important it is for Facebook to be a place where people feel empowered to communicate, and we take seriously our role in keeping abuse off our service. That’s why we’ve developed a set of Community Standards that outline what is and is not allowed on Facebook.

Our policies are based on feedback from our community and the advice of experts in fields such as technology, public safety and human rights. To ensure that everyone’s voice is valued, we take great care to craft policies that are inclusive of different views and beliefs, in particular those of people and communities that might otherwise be overlooked or marginalized.

01 HOW WE DEVELOP OUR COMMUNITY STANDARDS

Our Community Standards are a living document, meaning that we are constantly evolving our policies to keep pace with changing online behaviors.

The Content Policy team is based in over 11 offices around the world, spanning every time zone, and is made up of subject matter experts on diverse topics such as terrorism, hate speech, and child safety.

Every two weeks, the Content Policy team runs a meeting called the Policy Forum to discuss potential refinements to our Community Standards and ads policies. We bring in experts from around the company to participate in this meeting, including members of our safety and cybersecurity policy teams, counterterrorism specialists, Global Operations employees, Product Managers, researchers, public policy leads and representatives from our legal, communications and diversity teams. On a number of occasions, we have brought journalists and academics into this meeting so they can observe our work.

02 THE VALUES BEHIND OUR COMMUNITY STANDARDS

There are five core values that sit behind all our policies, and we have these values in mind when developing our Community Standards.

  1. Voice: We believe freedom of expression is a fundamental human right and the goal of our Community Standards is to create a place for expression and give people voice. A commitment to expression is paramount, but we recognize the internet creates new and increased opportunities for abuse. For these reasons, when we limit expression we do it in service of one or more of the following values:
    • Authenticity: We want to make sure the content people are seeing on Facebook is authentic. We believe that authenticity creates a better environment for sharing, and that’s why we don’t want people using Facebook to misrepresent who they are or what they’re doing.
    • Safety: We are committed to making Facebook a safe place. Expression that threatens people has the potential to intimidate, exclude or silence others and isn’t allowed on Facebook.
    • Privacy: We are committed to protecting personal privacy and information. Privacy gives people the freedom to be themselves, to choose how and when to share on Facebook, and to connect more easily.
    • Dignity: We believe that all people are equal in dignity and rights. We expect that people will respect the dignity of others and not harass or degrade others.

03 OUR POLICY AREAS

Our Community Standards cover a wide range of policy areas to catch all kinds of harmful content: from bullying, harassment and hate speech, to graphic violence and credible threats, all the way to fake accounts, fraud and impersonation.

04 ENFORCEMENT

Our policies are only as good as our enforcement, and we use a combination of reports from our community, review by our teams, and technology to identify and review content against our standards.

User reports

Every single thing on Facebook can be reported – page, profile, post, photo, comment – and anyone can report content to us if they think it violates our standards. One report is enough for us to take down content if it violates our policies – we do not remove content just because it has been reported a certain number of times. Not everything people find offensive or upsetting will violate our policies, so we also offer ways for people to customize and control what they see by unfollowing, blocking and snoozing people, and by hiding posts, people and Pages.

Human review

Today, we primarily rely on artificial intelligence to detect violating content on Facebook and Instagram, and our technology is often confident enough that a piece of content violates our standards to remove it automatically. But there are still many cases in which a trained human reviewer is critical to enforcing our standards fairly and accurately, particularly when the context surrounding a piece of content is important:

  • For example, hate speech: our systems can recognize specific words that are commonly used as hate speech, but not the intent of the people who use them, so our teams review this content.
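To illustrate how this division of labor between technology and people can work, here is a minimal sketch in Python of confidence-based routing. It is not Facebook's actual system: the threshold values, class names and classifier interface are assumptions made purely for illustration.

    from dataclasses import dataclass
    from enum import Enum


    class ReviewDecision(Enum):
        AUTO_REMOVE = "auto_remove"    # model is confident the content violates policy
        HUMAN_REVIEW = "human_review"  # context matters; route to a trained reviewer
        NO_ACTION = "no_action"        # no likely violation detected


    @dataclass
    class Classification:
        policy_area: str   # e.g. "hate_speech", "graphic_violence"
        confidence: float  # model's estimated probability of a violation, 0.0 to 1.0


    def route(c: Classification,
              auto_remove_threshold: float = 0.97,
              review_threshold: float = 0.50) -> ReviewDecision:
        # In this sketch, hate speech is always escalated to a person, because a
        # model can recognize words commonly used as slurs but not the intent
        # of the person using them.
        if c.policy_area == "hate_speech":
            return (ReviewDecision.HUMAN_REVIEW
                    if c.confidence >= review_threshold
                    else ReviewDecision.NO_ACTION)
        if c.confidence >= auto_remove_threshold:
            return ReviewDecision.AUTO_REMOVE
        if c.confidence >= review_threshold:
            return ReviewDecision.HUMAN_REVIEW
        return ReviewDecision.NO_ACTION


    print(route(Classification("hate_speech", 0.99)))       # ReviewDecision.HUMAN_REVIEW
    print(route(Classification("graphic_violence", 0.99)))  # ReviewDecision.AUTO_REMOVE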

We have over 35,000 people working in safety and security at Facebook, of whom about 15,000 are content reviewers. This team reviews content 24/7 and includes native language speakers. Our reviewers come from all different backgrounds and include experts in areas like child safety, hate speech and counter-terrorism. We also make a point to hire people with the necessary language and cultural context for the markets in which we operate.

Our human reviewers undergo extensive training when they join and throughout their time working for Facebook. We also do our own proactive audits, where we conduct re-reviews that help us figure out if we are getting it right.

We also make sure they have all the support they need, including constant access to wellness and psychological support teams.

Technology/Artificial intelligence

Because of the volume of content we review on our platforms every day, we now primarily rely on AI to detect violating content on Facebook and Instagram.

Our algorithms are getting better all the time at identifying content that obviously violates our Community Standards and automatically taking it down before anyone sees it.

We also use artificial intelligence to rank and prioritize content that is flagged for review, so that our human reviewers are focused on the most important cases – or cases where additional context is required.
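As a rough illustration of what ranking and prioritizing flagged content can mean in practice, the sketch below keeps items in a priority queue so that the most severe, most visible and most ambiguous cases reach human reviewers first. The scoring weights and field names are assumptions made for this example only, not a description of Facebook's actual ranking signals.

    import heapq
    import itertools
    from dataclasses import dataclass


    @dataclass
    class FlaggedItem:
        content_id: str
        severity: float           # estimated harm if the content stays up, 0.0 to 1.0
        predicted_views: int      # how widely the content is likely to spread
        model_uncertainty: float  # closer to 1.0 means human judgment is needed most


    def priority_score(item: FlaggedItem) -> float:
        # Higher score = reviewed sooner: severe, viral and ambiguous content
        # moves to the front of the queue.
        reach = min(item.predicted_views / 100_000, 1.0)
        return 0.5 * item.severity + 0.3 * reach + 0.2 * item.model_uncertainty


    class ReviewQueue:
        # heapq is a min-heap, so scores are negated; a counter breaks ties by
        # insertion order and avoids comparing FlaggedItem objects directly.
        def __init__(self) -> None:
            self._heap = []
            self._counter = itertools.count()

        def add(self, item: FlaggedItem) -> None:
            heapq.heappush(self._heap, (-priority_score(item), next(self._counter), item))

        def next_for_review(self) -> FlaggedItem:
            return heapq.heappop(self._heap)[2]


    queue = ReviewQueue()
    queue.add(FlaggedItem("post-1", severity=0.2, predicted_views=500, model_uncertainty=0.9))
    queue.add(FlaggedItem("post-2", severity=0.9, predicted_views=80_000, model_uncertainty=0.6))
    print(queue.next_for_review().content_id)  # post-2 is reviewed first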

95% of the organized hate, firearms, adult nudity and sexual activity, drug sales, suicide and self-injury, fake accounts, terrorist propaganda, child nudity and exploitation, and violent and graphic content we remove from Facebook is found proactively using technology.

Community Standards Enforcement Report
For a long time, we heard from our partners and our community that they wanted a better sense of how well we were doing at enforcing our policies and catching harmful content.

In 2018, we published our first Community Standards Enforcement Report, which shared numbers on the content we were seeing and catching that violated our Community Standards.

It’s a work in progress as we continue to refine our methodology, but since that first report three years ago, we’ve more than doubled the number of policies we report on, and we’ve increased the frequency of reports from half-yearly to quarterly.

We believe that increased transparency tends to lead to increased accountability and responsibility, so publishing this data should help us improve faster.

FOR MORE INFORMATION:
https://www.facebook.com/communitystandards/
https://transparency.facebook.com/community-standards-enforcement
© Facebook