AI industry workers warn of risks, call for protections

The Guardian June 4, 2024, 07:00 PM UTC

Summary: Current and former AI industry workers, including OpenAI and Google DeepMind employees, issued an open letter highlighting the lack of safety oversight in the industry and the need for whistleblower protections. The letter emphasizes the importance of transparency and accountability in AI development. Concerns about AI risks have intensified with technological advancements, prompting calls for increased regulation and employee safeguards.

Article metrics

The article metrics shown here are deprecated. The original 8-factor scoring system has been replaced with a new one that does not use the original factors and produces more meaningful significance scores.

Timeline:

  1. [4.6]
    Former OpenAI employees push for better whistleblower protection in AI (The Verge)
    106d 3h
    Source
  2. [4.5]
    AI workers from OpenAI, Anthropic, and Google DeepMind warn of dangers, call for transparency and whistleblower protections (The Washington Post)
    106d 6h
    Source
  3. [4.8]
    OpenAI insiders warn of reckless pursuit of AI dominance (The New York Times)
    106d 8h
    Source
  4. [5.3]
    Internal divisions at OpenAI persist post-coup attempt, impacting leadership (Financial Times)
    111d 17h

  5. [0.9]
    Safety employees leaving OpenAI due to lack of trust (Giant Freakin Robot)
    116d 2h
    Source
  6. [4.6]
    AI safety summit attendees leave, OpenAI faces dissent and criticism (The Guardian)
    120d 11h
    Source
  7. [5.7]
    OpenAI's safety execs quit, founders face scrutiny (Business Insider)
    122d 20h

  8. [5.9]
    OpenAI's Superalignment team disbanded due to resource issues (TechCrunch)
    123d 2h

  9. [4.7]
    OpenAI criticized for prioritizing products over safety (The Guardian)
    123d 10h
    Source
  10. [5.7]
    Top OpenAI researchers depart over safety vs. product prioritization (Financial Times)
    124d 1h

  11. [5.4]
    OpenAI dissolves Superalignment AI safety team (CNBC)
    124d 4h

  12. [4.9]
    OpenAI safety team faces mass exodus over leadership concerns (Vox.com)
    124d 5h
    Source
  13. [4.4]
    OpenAI co-founder Ilya Sutskever departs amid internal turmoil (The Guardian)
    126d 6h
    Source
  14. [3.3]
    Ilya Sutskever leaves OpenAI after nearly a decade (Gizmodo)
    126d 20h
    Source
  15. [4.4]
    Ilya Sutskever leaving OpenAI, Jakub to take over (The Verge)
    126d 22h
    Source
  16. [4.3]
    Ilya Sutskever leaves OpenAI amid leadership crisis (CNBC)
    126d 22h
    Source