Dr. Botacin develops large language model for malware defense
Dr. Marcus Botacin, a researcher in computer science, is developing a new language model focused on cybersecurity. The project responds to concerns that attackers could use existing AI tools, such as ChatGPT, to create malware quickly and easily. Botacin aims to build a smaller, security-focused counterpart to these large language models: if attackers can use AI to generate malware at scale, he argues, defenders should use similar technology to generate detection rules and defenses at a matching scale.

He plans for his language model to automatically identify malware using unique signatures, which act like fingerprints (a simple illustration appears at the end of this article). Today, security analysts write detection rules by hand, a slow process that demands expertise. Botacin's model is intended to assist analysts by speeding up rule writing and improving its accuracy, freeing them to focus on more complex problems. "The idea is not to replace the analyst but to let the machine do the heavy work," he explained.

The software will be publicly available, either as a website or as downloadable code. Botacin expects it to be particularly useful in incident response, helping analysts hunt for malware across company networks.

The model is meant to be small enough to run on an analyst's laptop, so the computational heavy lifting happens up front, during training. For that, Botacin has access to powerful graphics processing units (GPUs), which are well suited to processing the large volumes of data training requires. His research aligns with other efforts to integrate malware detection into computer hardware, aiming for proactive security solutions.
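To make the fingerprint analogy concrete, the Python sketch below shows signature-based detection in its simplest form: a rule is a set of byte patterns, and a file matches when all of them appear. This is only an illustration of the general technique; the rule name, patterns, and file path are hypothetical, not taken from Botacin's project, and real analyst rules (for example, in languages like YARA) are far richer.

```python
from dataclasses import dataclass, field


@dataclass
class Signature:
    """A named malware fingerprint: every pattern must appear in the file."""
    name: str
    patterns: list[bytes] = field(default_factory=list)


def matches(data: bytes, sig: Signature) -> bool:
    """Return True if each byte pattern in the signature occurs in the data."""
    return all(pattern in data for pattern in sig.patterns)


def scan(path: str, signatures: list[Signature]) -> list[str]:
    """Scan one file against a rule set; return the names of matching rules."""
    with open(path, "rb") as f:
        data = f.read()
    return [sig.name for sig in signatures if matches(data, sig)]


if __name__ == "__main__":
    # A hypothetical rule of the kind an analyst (or a model) might produce.
    demo_rules = [
        Signature(
            name="Example.Downloader",
            patterns=[b"http://", b"CreateRemoteThread"],
        ),
    ]
    print(scan("suspicious.bin", demo_rules))
```

Writing such rules by hand means choosing patterns that catch a malware family without flagging benign software, which is exactly the slow, expertise-heavy step Botacin wants a language model to accelerate.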