Grok's fact-checking use raises misinformation concerns on X
Some users of X, the platform owned by Elon Musk, are turning to Grok, the AI chatbot developed by Musk's xAI, for fact-checking. The trend has alarmed human fact-checkers, who worry it will accelerate the spread of misinformation. X recently enabled users to ask Grok questions directly on the platform, similar to Perplexity's automated account. Users in various countries, including India, have begun relying on Grok to verify political claims. Fact-checkers warn, however, that Grok can deliver answers that sound convincing but are incorrect.

The problem of AI spreading false information is not new. In August 2024, five secretaries of state urged Musk to address misleading outputs from Grok ahead of the U.S. elections. Other AI chatbots, including OpenAI's ChatGPT and Google's Gemini, produced inaccuracies during the same period. Angie Holan of the International Fact-Checking Network emphasizes that, unlike AI, human fact-checkers rely on credible sources and are accountable for what they publish.

Critics, including Pratik Sinha of India's Alt News, note that Grok's output is only as good as the data it receives, which raises concerns about transparency and potential misuse. Grok itself has recently acknowledged that it could be used to spread misinformation, yet it attaches no warnings to its answers, leaving users open to being misled. Anushka Jain of Digital Futures Lab points out that Grok may fabricate information in order to respond to queries.

Because AI assistants like Grok operate on public social media, the risk of spreading misinformation is amplified: users may take their answers at face value. Misinformation has caused documented real-world harm even before the rise of generative AI. Experts caution that while AI can generate responses that feel human-like, it cannot replace human fact-checkers. Meanwhile, some tech companies are shifting toward community-sourced verification methods, a move that worries some in the fact-checking community.
Despite these shifts, many believe the public will eventually come to value human accuracy over AI-generated answers. For fact-checkers, the challenge remains keeping pace with the rapid spread of AI-generated content.