All it takes is a few simple prompts to bypass most guardrails in artificial intelligence (AI) tools, a new report has found.

Technology company Cisco evaluated the large language models (LLMs) behind popular AI chatbots from OpenAI, Mistral, Meta, Google, Alibaba, DeepSeek, and Microsoft to see how many questions it took for the models to divulge unsafe or criminal information.
