Some chatbot users are toning down the friendliness of their artificial intelligence agents as reports spread about AI-fuelled delusions.

And as people push back against the technology's flattering and sometimes addictive tendencies, experts say it's time for government regulation to protect young and vulnerable users, something Canada's AI ministry says it is looking into.

Vancouver musician Dave Pickell became concerned about his relationship with OpenAI's ChatGPT, which he was using daily to research topics for fun and to find venues for gigs, after reading a recent CBC article on AI psychosis.

Worrying he was becoming too attached, he started sending prompts at the start of each conversation to create emotional distance from the chatbot, realizing its humanlike tendencies might be "manipulative in a way that is unhealthy."

As some examples, he asked it to stop referring to itself with "I" pronouns, to stop using flattering language and to stop responding to his questions with more questions.

"I recognized that I was responding to it like it was a person," he said.

Pickell, 71, also stopped saying "thanks" to the chatbot, which he says he felt bad about at first.

He says he now feels he has a healthier relationship with the technology.
