Rejigger triggers?

From tumblr:

The reason why people on tumblr over-use the concept of “trigger” rather than just “thing I don’t like” or “thing that makes me angry” or “thing that makes me sad” is that, literally, in the political/fandom part of tumblr culture you are required to establish your right not to read a thing, and you only have rights if you can establish that you’re on the bad end of an axis of oppression. Hence, co-opting the language of mental illness: trigger.

i.e. trigger warning culture is a rational response to an environment in which media consumption is mandatory. It’s not hypersensitivity so much as the only way to function.

There is a secondary thing, which is the sense that “here, we are all oppressed”; it ties into the feeling that you only have rights if you can establish that you’re at the bad end of an axis of oppression, but I’m not sure I can totally articulate that thing.

The idea that oppression confers legitimacy does seem to be ascendant, and not just on tumblr. Hostile political debates these days often turn into arguments about which side is the injured party, with both claiming to be unfairly caricatured or oppressed. This is pretty bad if it displaces a substantive exchange of ideas, though it may be hard to fix in a society that’s correcting for bias against oppressed groups. The cure isn’t necessarily worse than the disease, though that’s a question worth looking into, as is the question of whether people can learn to see through false claims of grievance.

On the other hand, I don’t think ‘I will (mostly) disregard your non-triggering aversions’ implies ‘you only have rights to the extent you’re oppressed’. I think the deeper problem is that social interaction between strangers and acquaintances is increasingly taking place in massive common spaces, on public websites.

If we’re trapped in the same common space (e.g., because we have a lot of overlapping interests or friends), an increase in your right to say what you want inevitably means a decrease in my right to avoid hearing things I don’t want to hear. Increasing my right to hear only what I want will likewise decrease your right to speak freely; at the very least, you’ll need to add content warnings to the things you write, and that workload grows as the list of reader aversions writers need to keep track of grows longer. (Blogging and social media platforms make things harder still by forcing trigger warnings and content to compete for space at the start of posts.)

I don’t know of any easy, principled way to solve this problem. Readers can download software, such as Tumblr Savior or FB Purity, that blocks or highlights posts containing specific words. Writers can adopt content warnings for the most common and most harmful triggers and aversions out there, or the ones that are too vague to be caught by word/phrase blockers.
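For concreteness, here’s a minimal sketch of the core mechanism behind tools like Tumblr Savior: match each post against a user-maintained blocklist and hide anything that matches. The blocklist terms and sample posts below are made-up placeholders, not a recommended list; the real tools do this against the rendered page rather than a list of strings, but the logic is the same.

```python
import re

# User-maintained blocklist, as in Tumblr Savior or FB Purity.
# These terms are placeholders, not a recommended list.
BLOCKED_TERMS = ["spiders", "election"]

def is_blocked(post_text, blocked_terms=BLOCKED_TERMS):
    """True if the post contains any blocked term (whole-word, case-insensitive)."""
    return any(
        re.search(r"\b" + re.escape(term) + r"\b", post_text, re.IGNORECASE)
        for term in blocked_terms
    )

# Hypothetical feed: hide matching posts instead of showing them.
posts = [
    "Here is a cute dog photo.",
    "Thinking a lot about the election today.",
]
for post in posts:
    print("[post hidden by your blocklist]" if is_blocked(post) else post)
```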

But vague rules are hard to follow. So it’s understandable that people would gravitate toward a black-and-white ‘trigger’ v. ‘non-trigger’ dichotomy in the hope that the scientific authority and naturalness of a medical category would simplify the problem of deciding when the reader’s right-to-not-hear outweighs the writer’s right-to-speak-freely. And it’s equally understandable that people who don’t have ‘triggers’ in the strictest sense, but are still being harmed in a big way by certain things people say (or ways people say them), will want to piggyback off that heuristic once it exists.

‘Only include content warnings for triggers’ doesn’t work, because ‘trigger’ isn’t a natural kind and people mean different things by it. Give some groups an incentive to broaden the term and others an incentive to narrow it, and language will diverge even more. ‘I’ll only factor medical information into my decisions about how to be nice to people’ is rarely the right approach.

‘Always include content warnings for triggers’ doesn’t work either. There are simply too many things people are triggered by.

If we want rules that are easy to follow in extreme cases while remaining context-sensitive in mild cases, we’ll probably need some combination of

‘Here are the canonical content warnings that everyone should use in public spaces: [A], [B], [C]…’

and

‘If you have specific reason to think other information will harm part of your audience, the nice thing to do is to have a private conversation with some of those audience members and consider adding more content warnings. If it’s causing a lot of harm to a lot of your audience, adding content warnings transitions from “morally praiseworthy” to “morally obligatory”.’

The ambiguity and context-sensitivity of the second rule are made up for by the very clear and easy-to-follow first rule. Of course, I’ve only provided a schema. The whole point of the first rule is to actually give concrete advice (especially for cases where you don’t know much about your audience). Doing that project right requires collecting base rate information on different aversions and triggers, finding a not-terrible way of ranking them by ‘suffering caused’, and finding a consensus threshold for ‘how much suffering it’s OK for a random content generator to cause in public spaces’.
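To make the shape of that calculation concrete, here’s a toy sketch: treat each aversion’s expected harm as (base rate in the audience) × (average severity), rank by that product, and put everything above the agreed threshold on the canonical warning list. Every number below is an invented placeholder; the hard part would be the survey work that produces real values.

```python
# Toy sketch of the ranking described above. All figures are invented
# placeholders; real values would come from survey data.
# expected_harm = base_rate (fraction of readers averse) * severity (avg harm, 0-10)

aversions = {
    # topic: (base_rate, severity) -- hypothetical numbers
    "graphic violence": (0.20, 7.0),
    "sexual assault":   (0.15, 9.0),
    "spiders":          (0.05, 4.0),
    "clowns":           (0.02, 3.0),
}

THRESHOLD = 0.5  # consensus cutoff for expected harm; also hypothetical

ranked = sorted(aversions.items(),
                key=lambda kv: kv[1][0] * kv[1][1],
                reverse=True)

print("Canonical content warnings for public spaces:")
for topic, (rate, severity) in ranked:
    expected_harm = rate * severity
    if expected_harm >= THRESHOLD:
        print(f"  {topic}: expected harm {expected_harm:.2f}")
```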

That wouldn’t obviate the need for safe spaces where the content is more carefully controlled, but it would hopefully make movies, books, social media, etc. safe and enjoyable for nearly everyone, without requiring people to just stop talking about painful topics.
