Back when I was an active user, I learned that a lot of people only take in one piece of media and nothing else. I’m sure the site culture has shifted since then, but I remember a ton of entries that exaggerated how extreme or unique basic tropes in My Little Pony: Friendship Is Magic were.
A lot of tropes were also constantly referred to as “anime tropes,” and people suggested Japanese names for them even though there was nothing about them unique to anime or Japanese culture.
I’m just curious how this would differ from the automatic moderation tools we already have. I know moderation can genuinely be a traumatic job because of stuff like gore and CSEM, but we already have automatic filters for that material, and things still slip through the cracks. Can we train an AI to recognize that material when it hasn’t already been added to a filter? And if so, wouldn’t it hit false positives and require an appeal system, where the people reviewing appeals could still end up traumatized by the same content?
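To make the false-positive point concrete, here’s a minimal sketch of how a threshold-based filter plus an appeal queue might fit together. Everything in it is a hypothetical stand-in (`score_content`, the 0.8 threshold, the queue names); no real site’s pipeline is this simple:

```python
# Minimal sketch of a threshold-based moderation pipeline.
# All names and values here are hypothetical, for illustration only.
from dataclasses import dataclass, field

@dataclass
class ModerationQueue:
    removed: list = field(default_factory=list)
    appeals: list = field(default_factory=list)

def score_content(item: str) -> float:
    """Stub standing in for a trained classifier; returns P(violating)."""
    return 0.9 if "banned-term" in item else 0.1

def moderate(item: str, queue: ModerationQueue, threshold: float = 0.8) -> None:
    if score_content(item) >= threshold:
        # Auto-removal: false positives land here too, which is why
        # some kind of appeal path seems unavoidable.
        queue.removed.append(item)
    # else: the item stays up, possibly as a false negative that
    # a human moderator still has to catch later.

def appeal(item: str, queue: ModerationQueue) -> None:
    # An appeal routes the flagged content back to a human reviewer,
    # so a person still ends up looking at whatever was flagged.
    if item in queue.removed:
        queue.appeals.append(item)

queue = ModerationQueue()
moderate("post containing banned-term", queue)
appeal("post containing banned-term", queue)
print(queue.removed, queue.appeals)
```

The point of the sketch: no matter where you set the threshold, the classifier splits mistakes into false positives (appeals, seen by humans) and false negatives (missed content, also eventually seen by humans), so the AI shifts the traumatic exposure around rather than eliminating it.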