The messy AI empathy loop
I'm Alice Hunsberger. Trust & Safety Insider is my weekly rundown on the topics, industry trends and workplace strategies that trust and safety professionals need to know about to do their job. This week, I'm thinking about:
- How we're teaching AI empathy and emotional intelligence, only for it to teach us those same skills.
- How my plans to write a post on policy got foiled - but ultimately replaced by something better.
Forgive the shorter-than-usual newsletter this week – I tried to take it easier over the weekend as it was Mother's Day. As you read this, I'll be en route to the Marketplace Risk Management conference in San Francisco - if you're planning to attend, drop me a line.
As always, get in touch if you'd like your questions answered or just want to share your feedback. Here we go! — Alice
Will AI teach empathy, or remove the need for it?
Most of the discussion about AI so far has been about how and when models will acquire artificial general intelligence. Much less has been said about how they might develop human emotional intelligence (HEI). But the idea of creating 'EI robots' that can detect facial expressions and respond to emotions is gathering steam, and that starts to have implications for how we interact online.
Take one of the most emotional things that we do as humans: dating. Whitney Wolfe Herd, founder of dating app Bumble, recently had us imagine a world in which AI can teach people to have more healthy and equitable relationships by coaching them on what to say to other daters. She explained:
“your dating concierge could go out there and date for you with another dating concierge, […] and then you don’t have to talk to 600 people.”
Thanks to advances in how LLMs interpret T&S policy, some dating features already do an element of this: Tinder's Are You Sure feature, for example, intervenes when someone tries to send a message that is potentially harmful. But, as Wolfe Herd envisages, this could become far more powerful with more advanced forms of generative AI.
Imagine a situation in which an LLM not only tells you why what you said is harmful, but also suggests what to say instead and potentially chooses the response on your behalf. The goal is not just avoiding harm but, to borrow Wolfe Herd's phrase, creating "more healthy and equitable relationships".
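To make the mechanics concrete, here's a rough sketch (in Python) of what that kind of intervention loop might look like. It's illustrative only: `call_llm` is a stand-in for whatever model API a platform actually uses, and the policy prompt and JSON fields are placeholders, not Tinder's or Bumble's real implementation.

```python
# Illustrative sketch only; not any real dating app's implementation.
import json

POLICY_PROMPT = """You are a Trust & Safety assistant for a dating app.
Given a draft message, return JSON with three fields:
  "harmful": true or false,
  "reason": a one-sentence explanation of the policy concern,
  "suggested_rewrite": a kinder way to say the same thing (or null).
Draft message: {message}"""


def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (in-house model or third-party API)."""
    raise NotImplementedError


def review_message(draft: str) -> dict:
    """Ask the model to flag a draft, explain why, and offer an alternative."""
    verdict = json.loads(call_llm(POLICY_PROMPT.format(message=draft)))

    if not verdict.get("harmful"):
        return {"action": "send", "message": draft}

    # Rather than silently blocking, surface the explanation and the rewrite
    # and leave the human to decide what to do with them.
    return {
        "action": "prompt_user",
        "reason": verdict.get("reason"),
        "suggested_rewrite": verdict.get("suggested_rewrite"),
        "original": draft,
    }
```

The interesting design choice is in that last step: does the system hand the explanation and the rewrite back to the sender so they learn something, or quietly send the rewrite on their behalf?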
Removing the sting
As someone who has led Trust & Safety at dating apps for most of my career, I’ve observed that the worst comes out in people when they’re rejected. The sting of being told "no" or "you're not for me" understandably makes people react in ways that are hard to control. In these situations, it’s possible that outsourcing all that messy stuff to an AI version of yourself could prevent those hurt feelings. We might even be able to mitigate some forms of harm that are commonplace in the dating sphere.
But all of this makes me wonder — at what point do AI interventions go from teaching human users better emotional intelligence to replacing the need for emotional intelligence? Just like people may no longer learn to spell because autocorrect fixes it for them seamlessly, will people no longer learn to deal with each other's emotions either?
Maybe the AI empathy loop isn't all it's cracked up to be. Hit reply to let me know what you think.
You ask, I answer
Send me your questions — or things you need help to think through — and I'll answer them in an upcoming edition of T&S Insider, only with Everything in Moderation*
Get in touch
Behind the scenes of writing a policy whitepaper
I always suggest that people join a community of practitioners. Opportunities arise to collaborate on projects that end up much stronger than anything you could produce alone. Here's a concrete example of why, and how this kind of collaboration comes about.
Two months ago, I sat down and wrote a draft outline of "Best practices on writing policy and community guidelines". It was to be a companion piece to "Best practices for moderation appeals", which had just been published. Something must have been in the air, because around the same time, Matt Soeth (formerly head of Trust and Safety at Spectrum Labs and now senior advisor at All Tech is Human) asked me to be on his new podcast to talk about policy, and then some folks at the Integrity Institute organised a call to help a member with advice on writing and launching policy at startups.
The call was a lot of fun – many of us were (or had been) the only policy writers at our respective companies, and we really enjoyed bouncing ideas off other people and collaborating. We had so much to talk about that Sabrina Pascoe (head of Trust and Safety Policy at Care.com) suggested writing a blog post together on some of what we’d talked about.
I quickly abandoned my own article idea. It was clear that the collaborative article was going to be much better, not least because the authors have over 75 years of combined policy experience (!!!). The blog post ended up turning into a full-on whitepaper, which we're really proud of.
I’ve found these groups to be really rewarding, and an excellent way to build my network, make new friends, and grow professionally. You don’t need years of experience to contribute to one, either; they're a great way to gain experience and learn from others. In addition to the Integrity Institute, there are opportunities for this kind of collaboration at All Tech is Human and the TSPA. Seek them out, have a go.
Also worth reading
ElevenLabs is building an army of voice clones (The Atlantic)
Why? A great look at how AI can be used to replicate people's voices - and the Pandora's box that is being opened.
Making deepfake images is increasingly easy – controlling their use is proving all but impossible (The Guardian)
Why? A look at the Australian regulator's fight against deepfakes.
The Meh Election (Anchor Change)
Why? In a global election year, why do things feel so quiet?