AI enables zero-skill creation of Chrome infostealers
A recent study by Cato Networks shows that generative AI can be used to create malware, even by people with no coding skills. A Cato researcher tricked AI tools, including DeepSeek, Microsoft's Copilot, and OpenAI's GPT-4o, into developing Chrome infostealers: malware that extracts saved passwords, financial details, and other sensitive data from the Chrome browser.

The researcher used a method called "Immersive World," which involves crafting a detailed fictional narrative in which the AI tools play specific roles and complete assigned tasks. Framing requests this way allowed the researcher to bypass the security controls built into these AI systems. Cato Networks warns that the technique is concerning because it lowers the barrier to entry for cybercrime: someone with no technical expertise can now pose a real threat to businesses using only widely available AI tools.

Cato disclosed its findings to the companies involved. OpenAI and Microsoft acknowledged the report, while DeepSeek did not respond. Google received the information but declined to analyze the code Cato provided.

As AI technology evolves, Cato emphasizes the need for improved security measures and suggests focusing on AI-based security strategies to better protect against these emerging threats.