'It bothers me that this could be deployed by employers': your boss could
soon know you're struggling before you do. Inside the rise of AI mental health prediction tools
Date:
Mon, 04 May 2026 09:00:00 +0000
Description:
AI wants to predict your mental health at work, but I asked the experts and they have concerns.
FULL STORY
Ever since tools like ChatGPT and Claude went mainstream, there's been a big debate about whether AI should be used for mental health support. Can a chatbot really replace a therapist? That's a question I've asked many times before, and one that still doesn't have a simple answer.
But AI tools may be able to do more than respond to distress; some may be able to anticipate it. A new wave of tools, many aimed at workplaces, might be able to spot the early signs of depression, anxiety, or even suicide risk before someone is even aware of it. They're able to analyze patterns in behavior, language, voice, and daily activity, looking for subtle signals that something may be wrong.
On paper, it's a really appealing idea. But the reality is much more complicated, and the questions go well beyond whether the technology actually works.
How can AI tools detect a mental health crisis?
It's worth being clear upfront that these tools aren't all the same. But many of them do rely on a similar set of ideas.
Most AI mental health tools collect data in two ways. The first is information that you actively provide: think mood check-ins, sleep logs, journal entries, or even conversations with a chatbot.
The second is everything else. Often referred to as passive sensing, this includes data gathered in the background, like how much you move, how often you message people, how you speak, and how quickly you type. The data that's collected will depend on what these tools can access, whether that's information from your wearable, your computer, or apps you use.
The premise is really simple: changes in behavior often appear before someone consciously recognizes that they're struggling. An AI system, continuously scanning enough of these signals, may be able to detect those shifts early, flag an issue, and get you help more quickly.
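To make the premise concrete, here is a minimal sketch of the kind of baseline-deviation check such a system might run. This is a hypothetical illustration, not the method of any product named in this article; the signal names, windows, and z-score threshold are all invented for the example.

```python
from statistics import mean, stdev

def flag_shifts(history, recent, threshold=2.0):
    """Flag signals whose recent average deviates sharply from baseline.

    history: dict mapping signal name -> list of past daily values (baseline)
    recent:  dict mapping signal name -> list of the latest few days' values
    Returns the names of signals whose recent mean sits more than `threshold`
    standard deviations from the baseline mean (a simple z-score test).
    """
    flagged = []
    for signal, baseline in history.items():
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue  # a flat baseline gives no variation to compare against
        z = abs(mean(recent[signal]) - mu) / sigma
        if z > threshold:
            flagged.append(signal)
    return flagged

# Hypothetical data: typing speed collapses relative to a stable baseline,
# while messaging frequency stays normal.
history = {
    "messages_per_day": [20, 22, 19, 21, 20, 23, 18],
    "typing_speed_wpm": [55, 54, 56, 53, 55, 54, 56],
}
recent = {
    "messages_per_day": [19, 21],
    "typing_speed_wpm": [35, 33],
}
print(flag_shifts(history, recent))  # → ['typing_speed_wpm']
```

Real systems are far more elaborate (per-person models, many more signals, machine-learned thresholds), but the core idea is the same: compare recent behavior against an individual baseline and flag unusual drift.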
On top of this data layer, many tools use AI chatbots trained on therapeutic approaches such as Cognitive Behavioural Therapy (CBT) to offer support in the moment. They might suggest coping strategies, help you reframe thoughts, or prompt reflection.
Some elements of this technology are already in use. For example, Meta has long used text and behavioral signals to identify users who may be at risk, while companies like Kintsugi focus on analyzing voice for signs of mental health conditions. Workplace platforms like Unmind have also explored similar approaches.
However, it's difficult to map the full picture. Many of these capabilities are built into wider AI systems and aren't always visible to users, so their use may be broader than what we publicly know.
When it comes to whether these tools actually work, the answer is: it
depends.
There is some evidence that AI can detect patterns linked to mental health risks, particularly in areas like symptom monitoring and suicide risk screening. But the results are mixed, and performance varies widely depending on the population, the data being used, and how the system is deployed.
In practice, most research suggests these tools work best as a supplement to clinicians, rather than a replacement for professional judgement. Reliable, real-world prediction remains much harder.
So, what I'm saying is: much more research is needed before AI-driven mental health prediction can be considered robust or widely dependable.
"There are so many nuanced issues that this technology brings up," says psychologist and AI risk advisor Genevieve Bartuski of Unicorn Intelligence Tech Partners . "My fear is that it's hitting the market before they are
fully addressed."
What are the concerns?
"When people know they are being watched, they tend to perform. It is an automatic response and often, people don't even realize they are doing it," explains therapist Amy Sutton from Freedom Counselling.
This is known as the Hawthorne effect: the tendency to change behavior when you know you're being observed. In the context of AI monitoring your mental health, that could mean people masking signs of distress, consciously or not.
On the flip side, if these tools are rolled out as part of workplace wellbeing programmes and people don't know they're being monitored, that raises serious questions about consent.
It also raises a more fundamental question: whose interests are these systems really serving, the individual's wellbeing or the organization's risk management?
"It bothers me that this could be deployed by employers," Bartuski tells me. "This is information that employers do not need to have or to know. They do not need information about a person's mental health, especially when it can be used against the employee."
Even when participation is presented as optional, consent can quickly become murky. "Does it put the employee at risk of being negatively impacted if they do not want to participate? If so, that isn't really consent. It's coercive consent," she says.
Sutton adds that workplace monitoring could actually worsen the problem it's trying to solve. "With mental health stigmas still rife, AI observation would likely lead to greater efforts to hide evidence of struggles. This could create a dangerous spiral, where the greater our efforts to hide low mood or anxiety, the worse it becomes."
There's also the risk of false positives, where someone is flagged as being at risk when they're not, and the consequences of that can be serious, particularly in systems that trigger intervention.
Where does this leave us?
The pressure to develop these tools is real. The WHO estimates depression and anxiety cost the global economy $1 trillion a year in lost productivity. That's a number that makes early warning systems look attractive to a lot of employers.
But there's a risk that prediction tools become a shortcut: an alternative to the slower, more expensive work of building environments where people feel able to say they're struggling, investing in human support, and creating the conditions where someone notices when a colleague isn't okay.
"We are being encouraged to give up a basic need of real human connection to be productive, and in turn productivity decreases due to the impact of loneliness and disconnection," Sutton says.
It echoes a broader pattern I've noticed during my AI reporting over the past year. People often turn to AI for support when real-world networks fall short, sometimes with benefits, but often as a substitute rather than a solution.
AI systems that could genuinely flag a mental health crisis early, with meaningful consent and proper safeguards, might have a place. But without that, they risk doing the opposite of what they promise: making problems harder to see, and giving organizations a reason not to look.
Link to news story:
https://www.techradar.com/ai-platforms-assistants/it-bothers-me-that-this-could-be-deployed-by-employers-your-boss-could-soon-know-youre-struggling-before-you-do-inside-the-rise-of-ai-mental-health-prediction-tools
--- MultiMail/DOS
* Origin: Capitol City Hub (1:2320/105)