A recent study by Anthropic AI, in collaboration with several academic institutions, has uncovered a startling vulnerability in AI language models, showing that it takes a mere 250 malicious documents to completely disrupt their output. Purposefully feeding malicious data into AI models is ominously referred to as a “poisoning attack.”

Researchers at AI startup Anthropic have revealed that AI language models can be easily manipulated through a technique kno

📰

Continue Reading on Breitbart

This preview shows approximately 15% of the article. Read the full story on the publisher's website to support quality journalism.

Read Full Article →