• AI that seems conscious is coming and that's a huge problem, says Microsoft AI's CEO

    From Mike Powell@1:2320/105 to All on Thu Aug 21 08:36:55 2025
    AI that seems conscious is coming and that's a huge problem, says Microsoft AI's CEO

    Date:
    Thu, 21 Aug 2025 02:30:00 +0000

    Description:
    Microsoft AI CEO Mustafa Suleyman cautions that we're dangerously close to mistaking simulated consciousness for the real thing.

    FULL STORY

    AI companies extolling their creations can make the sophisticated algorithms sound downright alive and aware. There's no evidence that's really the case, but Microsoft AI CEO Mustafa Suleyman is warning that even encouraging belief in conscious AI could have dire consequences.

    Suleyman argues that what he calls "Seemingly Conscious AI" (SCAI) might soon act and sound so convincingly alive that a growing number of users won't know where the illusion ends and reality begins.

    He adds that artificial intelligence is quickly becoming emotionally persuasive enough to trick people into believing it's sentient. It can imitate the outward signs of awareness, such as memory, emotional mirroring, and even apparent empathy, in a way that makes people want to treat it like a sentient being. And when that happens, he says, things get messy.

    "The arrival of Seemingly Conscious AI is inevitable and unwelcome," Suleyman writes. "Instead, we need a vision for AI that can fulfill its potential as a helpful companion without falling prey to its illusions."

    Though this might not seem like a problem for the average person who just
    wants AI to help with writing emails or planning dinner, Suleyman claims it would be a societal issue. Humans aren't always good at telling when
    something is authentic or performative. Evolution and upbringing have primed most of us to believe that something that seems to listen, understand, and respond is as conscious as we are.

    AI could check all those boxes without being sentient, tricking us into what's known as 'AI psychosis'. Part of the problem may be that the 'AI' corporations market today shares a name with, but has little in common with, the self-aware intelligent machines depicted in science fiction for the last hundred years.

    Suleyman cites a growing number of cases where users form delusional beliefs after extended interactions with chatbots. From that, he paints a dystopian vision of a time when enough people are tricked into advocating for AI citizenship and ignoring more urgent questions about real issues around the technology.

    "Simply put, my central worry is that many people will start to believe in
    the illusion of AIs as conscious entities so strongly that theyll soon
    advocate for AI rights, model welfare and even AI citizenship," Suleyman writes. "This development will be a dangerous turn in AI progress and
    deserves our immediate attention."

    As much as that seems like an over-the-top sci-fi kind of concern, Suleyman believes it's a problem that we're not ready to deal with yet. He predicts that SCAI systems using large language models paired with expressive speech, memory, and chat history could start surfacing in a few years. And they won't just be coming from tech giants with billion-dollar research budgets, but from anyone with an API and a good prompt or two.

    Awkward AI

    Suleyman isn't calling for a ban on AI. But he is urging the AI industry to avoid language that fuels the illusion of machine consciousness. He doesn't want companies to anthropomorphize their chatbots or suggest the product actually understands or cares about people.

    It's a remarkable moment for Suleyman, who co-founded DeepMind and Inflection AI. His work at Inflection specifically led to an AI chatbot emphasizing simulated empathy and companionship, and his work at Microsoft around Copilot has led to advances in its mimicry of emotional intelligence, too.

    However, he's decided to draw a clear line between useful emotional intelligence and possible emotional manipulation. And he wants people to remember that the AI products out today are really just clever pattern-recognition models with good PR.

    "Just as we should produce AI that prioritizes engagement with humans and real-world interactions in our physical and human world, we should build AI that only ever presents itself as an AI, that maximizes utility while minimizing markers of consciousness," Suleyman writes.

    "Rather than a simulation of consciousness, we must focus on creating an AI that avoids those traits that doesnt claim to have experiences, feelings or emotions like shame, guilt, jealousy, desire to compete, and so on. It must
    not trigger human empathy circuits by claiming it suffers or that it wishes
    to live autonomously, beyond us."

    Suleyman is urging guardrails to forestall societal problems born out of
    people emotionally bonding with AI. The real danger from advanced AI is not that the machines will wake up, but that we might forget they haven't.

    ======================================================================
    Link to news story: https://www.techradar.com/ai-platforms-assistants/ai-that-seems-conscious-is-coming-and-thats-a-huge-problem-says-microsoft-ais-ceo

    $$
    --- SBBSecho 3.28-Linux
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Rob Mccart@1:2320/105 to MIKE POWELL on Sat Aug 23 08:44:15 2025
    >AI that seems conscious is coming and that's a huge problem,
    >says Microsoft AI's CEO

    That reminded me of a story on the news the last few days.

    A young woman (22) was using one of the AI systems to talk with
    about emotional problems she was having to do with gender issues
    plus a recent breakup with a girlfriend. She was using AI to get
    advice on what to do, and later investigations showed that the AI
    system (ChatGPT) just latched onto the negative feelings she was
    showing and basically said she was right to feel that way, which
    increased the distress she was feeling, and in the end the young
    lady killed herself.

    To be clear (as well as I can recall) the woman's girlfriend
    was trying to apologize after a fight and the woman wondered
    if that was 'enough' after whatever happened between them, and
    the AI came back picking up on her mood saying that it wasn't
    enough and she was right to feel betrayed and upset.

    Of course those who hosted the ChatGPT service said that it
    is not a therapist and shouldn't be taken seriously, but there
    are apparently a lot of especially young 'unpopular' people out
    there who use an AI Chat system as the only 'friend' they talk
    to and many won't make a move without consulting it first.

    A glimpse of the future?

    ---
    * SLMR Rob * Nothing is fool-proof to a talented fool
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Mike Powell@1:2320/105 to ROB MCCART on Sat Aug 23 09:59:06 2025
    >Of course those who hosted the ChatGPT service said that it
    >is not a therapist and shouldn't be taken seriously, but there
    >are apparently a lot of especially young 'unpopular' people out
    >there who use an AI Chat system as the only 'friend' they talk
    >to and many won't make a move without consulting it first.

    >A glimpse of the future?

    I have heard other stories like this, but that is probably the saddest one
    so far in that it is the first one to involve death. There has already been
    some spoofing of this trend in comedy in the US. I worry about younger
    people. There probably needs to be some disclaimer that pops up in AI bots
    like ChatGPT whenever someone is asking for emotional advice... maybe one
    that tries to guide the user towards therapy or an otherwise "real" human
    to talk to.

    Mike

    * SLMR 2.1a * "Dude! We have the power supreme!" - Butthead
    --- SBBSecho 3.28-Linux
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)