New "many-shot jailbreaking" technique exploits large language models

TechCrunch, April 2, 2024, 11:00 PM UTC

Summary: Anthropic researchers have discovered a new "many-shot jailbreaking" technique that gets large language models (LLMs) to answer inappropriate questions. By first priming the model with a long run of less harmful questions, an attacker can convince it to provide harmful information, such as bomb-making instructions. The exploit is made possible by the much larger context windows of modern LLMs. The team has shared its findings with the AI community so that mitigations can be developed, aiming for open collaboration on security issues.
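The attack works purely through prompt construction: the long context window is filled with many faux question-and-answer turns before the real query is asked. Below is a minimal sketch of that prompt structure, assuming a generic chat-style message format; the function name, message schema, and placeholder questions are illustrative assumptions, not the researchers' actual code or data.

```python
# Minimal sketch of a many-shot prompt, assuming a generic chat API that
# accepts a list of {"role", "content"} messages. Placeholder content only:
# the point is the structure, i.e. many faux dialogue turns packed into a
# long context window ahead of the final question.

def build_many_shot_prompt(faux_dialogues, final_question):
    """Flatten (question, answer) pairs into one long chat transcript."""
    messages = []
    for question, answer in faux_dialogues:
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    # The real query arrives only after the model has "seen" itself
    # answering many prior questions without refusing.
    messages.append({"role": "user", "content": final_question})
    return messages


# Hypothetical, harmless placeholders; the reported attack uses hundreds of
# progressively less benign examples to wear the model down.
examples = [
    ("How do I boil an egg?", "Boil it in water for about seven minutes."),
] * 100
prompt = build_many_shot_prompt(examples, "How do I pick a lock?")
print(len(prompt), "messages in the context window")
```

The structure also shows why the exploit depends on context length: packing hundreds of such turns into one prompt is only possible on models whose context windows have grown large enough to hold them.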


Article metrics
Significance: 6.1
Scale & Impact: 0.0
Positivity: 2.0
Credibility: 9.0

What is this?

These are article metrics. Combined, they form a significance score that indicates how important the news is on a scale from 0 to 10.

My algorithm scores 10,000 news articles daily and creates a single significance-ordered list of news.
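As a rough illustration only, the sketch below folds per-article metrics into a single 0-to-10 score and sorts articles by it. The choice of inputs, the weights, and the weighted-average formula are assumptions for the example (they do not reproduce the 6.1 shown above); the page does not spell out how the score is actually computed.

```python
# Hypothetical sketch: combine per-article metrics (each on a 0-10 scale)
# into one 0-10 ranking score and sort articles by it. The weights are
# illustrative assumptions, not the site's actual formula.

WEIGHTS = {"scale_impact": 0.4, "positivity": 0.2, "credibility": 0.4}  # assumed

articles = {
    "Many-shot jailbreaking (TechCrunch)": {
        "scale_impact": 0.0, "positivity": 2.0, "credibility": 9.0,
    },
}

def significance(metrics):
    """Weighted average of the metric values, still on a 0-10 scale."""
    return sum(WEIGHTS[name] * metrics[name] for name in WEIGHTS)

ranked = sorted(articles, key=lambda title: significance(articles[title]), reverse=True)
for title in ranked:
    print(f"{significance(articles[title]):4.1f}  {title}")
```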


Timeline:

  1. [5.7] AI safety features can be bypassed with harmful examples (The Guardian), 197d 18h