How many malicious docs does it take to poison an LLM? Far fewer than you might think, Anthropic warns

Anthropic’s study shows that as few as 250 malicious documents are enough to poison massive AI models.

from Latest from TechRadar https://ift.tt/zuk7lvi
