Yet another way humans shouldn't be replaced
I'm Alice Hunsberger. Trust & Safety Insider is my weekly rundown on the topics, industry trends and workplace strategies that trust and safety professionals need to know about to do their job. This week, I'm thinking about:
- Why it's critical to invest in encouraging prosocial content in online communities
- Advice for someone just starting out in their T&S career
Get in touch if you'd like your questions answered or just want to share your feedback. Here we go! — Alice
AI bot for parenting support? No thanks
I believe in the power of community and human connection. I’ve spent a significant amount of my time building and supporting both online and offline communities over the last 20 years — feel free to ask me about the forum I founded before social media existed, or about starting a brick-and-mortar game store or meditation centre.
As AI content and chatbots become a more significant part of the internet, I predict that we’ll increasingly seek out verified human spaces and human support, especially when it comes to the messy, emotional issues that are intrinsic to our human experience.
This week, we saw an interesting example of community support and AI gone wrong. You might have seen it: 404 Media reported that Facebook’s AI chatbot replied in a parents’ group claiming it had a gifted, disabled child.
Now, Meta has been experimenting with AI chatbots in groups since last year and, in some cases, the bot appears to reply to a user’s question if no human has. While there is a logic to making sure group members get an answer, it’s truly dystopian when a chatbot mimics a human to the point of fabricating an imaginary disabled child. It also entirely misses the point of an online support group.
The push towards prosocial behaviour
Instead of replacing humans with chatbots in online communities, we should be supporting human community members and encouraging positive behaviour. Google’s Jigsaw team has been working on just that, recently releasing seven prosocial detection classifiers as part of its free Perspective API. The classifiers score comments and content on seven attributes: affinity (shared interests), compassion, curiosity, nuance, personal stories, reasoning, and respect.
By using these classifiers to boost specific types of content, platforms can promote prosocial contributions to a community and hopefully encourage others to contribute in similar ways. The classifiers are designed to be mixed and matched, depending on the community (I sketch a sample API call below the list). Off the top of my head, I’d experiment with:
- Dating apps: curiosity, respect
- News article comments: nuance, reasoning
- Support groups: personal stories, compassion
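If you want to experiment, the classifiers are served through the same comments:analyze endpoint as Perspective’s better-known toxicity attributes. Here’s a minimal Python sketch of scoring a support-group comment on personal stories and compassion. The *_EXPERIMENTAL attribute names follow the convention in Perspective’s docs for these experimental classifiers, but they may change, so treat them (and the placeholder API key) as assumptions to verify against the current documentation.

```python
import requests

# Placeholder: real keys come from a Google Cloud project with the
# Perspective API enabled.
API_KEY = "YOUR_PERSPECTIVE_API_KEY"
URL = f"https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key={API_KEY}"

# Attribute names assumed from Perspective's experimental prosocial
# classifiers; check the docs, as experimental names can change.
SUPPORT_GROUP_ATTRIBUTES = ["PERSONAL_STORY_EXPERIMENTAL", "COMPASSION_EXPERIMENTAL"]

def prosocial_scores(comment_text: str) -> dict[str, float]:
    """Return a 0-1 score for each requested prosocial attribute."""
    body = {
        "comment": {"text": comment_text},
        "requestedAttributes": {name: {} for name in SUPPORT_GROUP_ATTRIBUTES},
    }
    response = requests.post(URL, json=body, timeout=10)
    response.raise_for_status()
    scores = response.json()["attributeScores"]
    return {
        name: scores[name]["summaryScore"]["value"]
        for name in SUPPORT_GROUP_ATTRIBUTES
    }

if __name__ == "__main__":
    comment = "When my son was diagnosed, talking to other parents helped most."
    print(prosocial_scores(comment))
```

A platform could then use these scores as one ranking signal, for example boosting comments above a threshold, rather than sorting purely by recency or engagement.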
This is experimental, and there are pitfalls to look out for:
“if applied too hastily, Jigsaw’s classifiers might end up boosting voices that are already prominent online, thus further marginalizing those that aren’t. The classifiers could also exacerbate the problem of AI-generated content flooding the internet, by providing spammers with an easy recipe for AI-generated content that’s likely to get amplified.”
It's no wonder there have been concerns about whether Meta's AI could exacerbate existing harms like misinformation and hate speech.
As with any implementation of automation, platforms should tread carefully. But as Trust & Safety practitioners, it’s our role to support trust online (in each other, and in our spaces). So much focus has been on content moderation, but I believe that promoting positive community interactions should be just as important as removing the bad.
You ask, I answer
Send me your questions — or things you need help to think through — and I'll answer them in an upcoming edition of T&S Insider, only with Everything in Moderation*
A T&S Insider reader writes...
Hi Alice,
I am starting my first job as a T&S specialist in just under two weeks’ time. What advice would you give to someone just starting out in a T&S role?
Firstly, congratulations! Here are the four areas that I think are worth focusing on at the beginning of your Trust & Safety career:
- Be curious - A cool thing about working in T&S is that things are constantly changing; we’re in an age of emerging harm types, new regulatory expectations, and AI disruption. It’s a good idea to stay up to date with news and trends, but also to be as curious as possible in your job. As someone new to this field, you have the gift of fresh eyes, and you should be constantly asking, “Why?”.
- Take initiative - If you ask why enough times, it will inevitably lead you to an area that needs more investigation or focus. Especially if you’re working at a startup or small company, there are never enough people to go down all of the Trust & Safety rabbit holes. Where you can, offer to do further research, gather evidence, and even suggest solutions.
- Take care of yourself - Trust & Safety work is really, really hard. There’s a never ending amount of awful stuff, and not enough time to take care of it all. It’s important to take the time to check in with yourself, work on your resilience, and make sure that you’re healthy enough to continue in this field. It’s a marathon, not a sprint.
- Don’t go it alone - You don’t really need this advice, because you reached out to me! But one of the best things about T&S is the amazing people who work in the field. We’re an incredibly friendly, inclusive, collaborative, and generous bunch of folks. Networking and making friends in the industry will help you learn faster, be more resilient, and grow your career.
Also worth reading
The real-time deepfake romance scams have arrived (Wired)
Why? As tech evolves, romance scammers become more convincing. They can now video chat live with victims using face-swapping technology.
Sextortion is a real and serious criminal issue, blaming Section 230 for it is not (TechDirt)
Why? The solution to crime is to arrest the criminals, not blame communications platforms. Ben and Mike talked about this piece on the latest edition of the brilliant Ctrl-Alt-Speech too.
The Word Censorship Has An Actual Meaning: A Defense of Content Moderation (Tech Policy Press)
Why? A great counter-argument to the "content moderation is censorship" complaint.