Study finds AI political content often non-deceptive

reason.com

New research suggests that fears of artificial intelligence (AI) destabilizing elections through misinformation may be exaggerated. Computer scientists Arvind Narayanan and Sayash Kapoor, who are writing a book about AI and its capabilities, examined 78 examples of AI-generated political content from recent elections around the world. They conclude that while AI can create false content, it has not significantly changed the nature of political misinformation.

Surprisingly, much of the AI-generated content carried no dishonest intent: 39 of the 78 cases did not aim to deceive. Some campaigns used AI simply to enhance their messaging. In Venezuela, journalists employed AI avatars to report the news safely, and in California, a candidate with laryngitis used AI to communicate during campaign events.

The study also found that producing misleading content does not require AI. The researchers note that similar material could be made affordably with traditional methods, such as hiring editors or actors.

Overall, Narayanan and Kapoor argue that misinformation's success in elections depends more on how well a message aligns with the audience's existing beliefs than on the technology used to create it. They recommend focusing on why people believe misinformation rather than on the tools that produce it.


With a significance score of 4, this news ranks in the top 9% of today's 18,453 analyzed articles.

