Personalisation is everywhere. Netflix recommends shows. Spotify curates playlists. Amazon predicts purchases. But when it comes to workplace technology, personalisation can feel different—more intrusive, more surveillance-like.
At redthrd, we've thought deeply about how to use AI for personalisation in a way that helps people without crossing ethical lines. Here's our approach.
The Personalisation Paradox
There's a fundamental tension: more data enables better personalisation, but more data collection feels more invasive. And unlike Netflix viewers, employees often can't opt out.
This asymmetry of power makes ethical considerations even more important than in consumer applications.
Our Principles
We work with anonymised patterns. We don't need to know that Sarah specifically hasn't used a feature, just that people in her role typically benefit from it.
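To make that concrete, here's a minimal sketch of what role-level aggregation can look like. The names (role_adoption_rates, MIN_COHORT_SIZE) and the threshold of five are illustrative assumptions, not our production code; the point is that individual identities never enter the pipeline, and cohorts too small to be anonymous are suppressed entirely.

```python
from collections import defaultdict

# Assumed minimum cohort size: below this we report nothing, so a
# "role-level" pattern can never point back to one person.
MIN_COHORT_SIZE = 5

def role_adoption_rates(records, min_cohort=MIN_COHORT_SIZE):
    """Aggregate feature adoption by role, suppressing small cohorts.

    records: iterable of (role, feature, used) tuples, one per user,
    with no user identifier anywhere in the pipeline.
    """
    counts = defaultdict(lambda: [0, 0])  # (role, feature) -> [total, adopters]
    for role, feature, used in records:
        counts[(role, feature)][0] += 1
        counts[(role, feature)][1] += int(used)
    return {
        key: adopters / total
        for key, (total, adopters) in counts.items()
        if total >= min_cohort  # small cohorts could identify individuals
    }
```

Feed it per-user tuples and you get back adoption rates per role and feature; a role with only a handful of users simply returns nothing at all.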
We analyse usage patterns, not content. We know you use Teams but rarely Channels—we don't know what you're discussing.
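One way to picture that distinction is as a hard limit built into the event schema itself. This is a hypothetical shape, not our actual telemetry format, but it captures the principle: the fields that would carry content simply don't exist.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical event shape (not our actual telemetry format). We record
# that a feature was used and when; nothing about what was said or shared.
@dataclass(frozen=True)
class UsageEvent:
    app: str            # e.g. "Teams"
    feature: str        # e.g. "Channels"
    occurred_at: datetime
    # Deliberately absent: message text, file contents, recipient names.

event = UsageEvent(app="Teams", feature="Channels",
                   occurred_at=datetime.now())
```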
No leaderboards. No "you're behind your peers" messages. We focus on enabling, not shaming.
Users can always see what data we use and why we make recommendations. No black boxes.
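A simple way to honour that is to make the explanation part of the recommendation itself. Again, a sketch with hypothetical names rather than our real data model: the tip, the signals behind it, and the plain-language rationale travel together, so there's nothing to hide.

```python
from dataclasses import dataclass

# Hypothetical data model: every recommendation carries the signals that
# produced it and a plain-language rationale the user can inspect.
@dataclass
class Recommendation:
    tip: str
    data_used: list[str]   # which anonymised signals informed this tip
    rationale: str         # the "why", shown alongside the tip itself

rec = Recommendation(
    tip="Try pinning your most-used Channels for quicker access.",
    data_used=[
        "your own feature-usage frequency (last 30 days)",
        "anonymised adoption rates for your role",
    ],
    rationale="People in your role who use Channels daily typically pin them.",
)
```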
What This Looks Like in Practice
- "Based on your role, most people benefit from..."
- "We noticed you use Excel frequently—here's a tip..."
- "Teams in your department have found value in..."
- "Your productivity score is below average"
- "Your manager can see that you're not using..."
- "Compared to colleagues, you're falling behind..."
The Outcome
AI that empowers rather than surveils. That helps rather than judges. That respects human autonomy while enabling human potential.
When done right, AI personalisation feels like having a helpful colleague who notices you're struggling with something and shares a useful tip. It doesn't feel like being watched, evaluated, or controlled. That's the experience we're building at redthrd.