When video of Charlie Kirk’s assassination began circulating on X last week, Elon Musk’s chatbot described it in upbeat terms. As users sought information about Kirk’s condition, the bot, Grok, declared to some of them that the horrific footage was satire. This is a “meme edit,” Grok told one user; Kirk “takes the roast in stride with a laugh—he’s faced tougher crowds,” it told another. “Yes, he survives this one easily.”

In the past several months, Grok has been on quite the hot streak: The bot spread false information about a supposed “white genocide,” called for a second Holocaust while anointing itself “MechaHitler,” and provided me with a list of what it believes the “good races” are. Every chatbot has its problems (ChatGPT has had its own issues with racism), but Grok’s are especially visible. Its behavior is affected to some extent by information that it accesses in real time from the open sewer of X, and its developers are unusually forthcoming about the system prompts for various versions of Grok—the set of instructions that tell the AI how to behave.
