Why are content warnings important?
Content warnings are notes that warn users about upcoming content that is potentially upsetting. They flag topics that are sensitive or may be disturbing or offensive. They empower users to decide whether they want to consume that content. They acknowledge that users come to our content with all manner of life experiences, challenges and struggles that we are not privy to.
Content warnings are for everyone. Anyone might feel distress when they come across content that, for example, uses offensive language or describes violence or discrimination. These are topics that we can’t avoid in our work at Amnesty. Helping our users decide whether they want to consume certain content promotes mental health awareness and psychological safety. A significant proportion of our audience are young people, and we should take steps to ensure that what they are exposed to in our work is appropriate for them.

We take seriously our responsibility to work in a trauma-informed way and to avoid the risks of vicarious or secondary trauma. A trauma-informed approach recognizes the widespread impact of trauma and prioritizes practices that promote safety, empowerment and harm reduction. This includes using clear content warnings, avoiding shocking imagery, and ensuring that audiences are prepared for potentially upsetting material. Predictability is a key aspect of this approach: when individuals are mentally and physically prepared, they are better able to process and cope with the content. By implementing these measures, we aim to minimize harm and create a safe environment for all.
Who benefits from content warnings?
Everyone benefits from clear and consistent content warnings. It’s a misconception that these warnings only have value for people with trauma-related disorders.
When we add content warnings to our materials, we give our audiences the power to choose whether and how they engage with our content, and we show respect for their life experiences. The ability to make that informed decision is something every user can benefit from. Content warnings also remind users to be mindful of the impact that distressing content can have on them.
Content warnings might be useful in a wide range of user contexts, including:
- Casual readers – People passively scrolling on social media, who are likely not prepared to see potentially traumatizing material.
- Human rights workers – People, including activists, campaigners and human rights educators, who probably have a lot of exposure to upsetting content and might not recognize the long-term impact it is having on them. These people may also need to take into account the wellbeing of a secondary audience, such as their learners or community, and may benefit from guidance on communicating responsibly with them.
- People experiencing stress or anxiety – Many things, personal, work-related or otherwise, might be on someone’s mind while they’re consuming Amnesty content. This stress puts them at heightened risk of vicarious trauma if they come across distressing content.
- People learning about an issue for the first time – Many people consuming our content will be learning about a human rights topic they are less familiar with. These people will generally be less prepared for some of the most traumatic aspects of the issue. This may include children or young people, who would benefit from additional safeguarding measures.
- People who have experienced trauma – Their trauma could be directly or indirectly related to the content they’re reading. These people likely want to stay informed about our campaigns, but they might be cautious about exposing themselves to material that could retraumatize them.
Principles for implementing content warnings
Informed content consumption
Audiences should be able to make an informed decision on whether they want to consume content that is potentially upsetting or distressing.
We can build trust with our users by being transparent and communicative about the content they are consuming, so that they are not shocked by upsetting material. Equally, we should avoid user journeys that present upsetting content without a warning, and we should never use deceptive patterns.
Not tokenistic
We shouldn’t use content warnings as a virtue signal or a performative practice. A content warning should be genuinely informative and should help users decide whether to consume the content that follows.
Consistency
We should apply content warnings consistently across content types and channels, using consistent wording.
Describe the content, not the response
There should be no assumptions or implications about how a person may feel if they decide to read or view the content. We won’t include statements like ‘this may be distressing’ or ‘this may be triggering’. Instead, we will simply outline what kind of content we are flagging.
Harm reduction
Content warnings aren’t about censoring ourselves or not being able to say what we need to say. They are there to reduce the harm or distress that audiences might experience in consuming the more upsetting content that we publish.
Accessibility
By using content warnings consistently across our content, people who are aware of their psychological triggers or personal preferences can consume our content more confidently and safely, knowing that if something potentially upsetting does come up, they will be able to avoid it if they need to.
When is a content warning necessary?
Amnesty’s writing does not gratuitously seek to distress or offend the audience, even though its subject matter may be inherently distressing. Our writing is guided by a commitment to avoid racist, sexist, homophobic, or otherwise abusive language, and we take care not to unnecessarily trigger distress in our audiences. However, we do sometimes need to portray the abuse experienced by those whose rights we are seeking to uphold.
When distressing descriptions or depictions are included in our content, we should prepare our audiences for them by flagging with a content warning.
Some examples of the issues that could warrant a warning (this is not an exhaustive list):
- Violence
- Discrimination
- Medical content or content including human bodies and functions (e.g. scars/blood)
- Offensive language (e.g. swearing or derogatory terms)
- Risky behaviours (e.g. drug or alcohol misuse)
- Mental health (e.g. trauma, self-harm and suicide, depression)
- Death
- Crime
- Abuse
It is better to be over-cautious than under-cautious in our efforts to reduce harm.
Our writing principles state that our writing respects the intelligence of the reader without assuming knowledge they may understandably not have. If the title of the piece gives a clear indication of the subject matter, and the materials reference the topic only in a broad, generic way, without distressing descriptions or depictions, a content warning may not be needed.
Any content made specifically for children should have a much lower threshold for adding a content warning.
What makes a good content warning?
Content warnings should be specific, short, descriptive and, crucially, should appear before the content in question.
Here is a useful checklist:
- The content warning should be the first thing the user sees.
- It should explain specifically what the content includes. Language like ‘upsetting experiences’ is too vague. Focus the message by saying, for example, that the content includes ‘descriptions of violence’.
- Be concise. Use short sentences and plain language.
- Be clear about the format of the distressing content: are you warning about photographs, video footage or a written description?
- Include the why. Briefly give a clear rationale for why this kind of content needed to be included in the materials.
For example:
- This blog includes examples of offensive language and homophobic slurs. We are publishing these details to bear witness to this survivor’s experiences.
- This blog contains descriptions of sexual abuse and offensive language. We included this to stay true to survivors’ testimonies and their words.
- This interview contains a reference to an incident of sexual assault and the use of a homophobic slur.
- This chapter contains testimonies detailing violence, racial discrimination and includes offensive language. We published these details to evidence the abuses that were committed.
- This report contains detailed testimonies of violence, including sexual violence. We included these details as essential evidence demonstrating the crimes committed.
- This web report contains video footage showing violent attacks on protesters. We included these videos to demonstrate the evidence for this report’s findings.