AI models may learn continuously, improving adaptability
Recent advances in artificial intelligence (AI) have sparked excitement about the potential for human-level systems, but significant limitations remain. Today, AI learning happens in two distinct phases: training, where a system learns from data, and inference, where it applies that knowledge. Once trained, a model cannot learn from new experiences or adapt to change without extensive retraining.

This gap separates AI systems from human intelligence. Humans learn fluidly, folding new information into what they already know; AI systems cannot incorporate new information on the fly. Worse, naively updating a trained model on new data tends to cause "catastrophic forgetting": the model loses previously acquired knowledge as it absorbs new information.

Researchers call the goal of letting systems keep learning over their lifetime "continual learning." Workarounds such as model fine-tuning, retrieval-augmented generation, and in-context learning each address part of the problem, but all have limitations and often lack the scalability or efficiency that real-world applications demand.

AI startups are developing new approaches. Writer's self-evolving models can learn and adapt during use, while Sakana's Transformer² lets a model adjust its skills to a specific task in real time. Both firms point to a shift toward AI that is not static but develops capabilities over time.

Continual learning could radically change how AI systems interact with users and perform tasks, making them more personalized and effective. As these developments unfold, they promise significant competitive advantages and broader utility for AI across many fields. The pursuit of continual learning marks a notable frontier in AI research and development. The sketches below make the core failure mode, one common workaround, and one of the newer adaptation ideas concrete.
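Catastrophic forgetting is easy to reproduce in miniature. The following sketch (a toy PyTorch example with synthetic data and a tiny network, unrelated to any system mentioned above) trains a model on one task, then naively fine-tunes it on a second task, and shows how accuracy on the first task collapses:

```python
# Minimal demonstration of catastrophic forgetting: train on task A,
# fine-tune on task B, and watch task A performance degrade.
# The tasks, data, and model here are illustrative toys.
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(rotation: float):
    """Toy binary classification: the label depends on which side of a
    decision boundary (rotated by `rotation` radians) a point falls on."""
    x = torch.randn(2000, 2)
    w = torch.tensor([math.cos(rotation), math.sin(rotation)])
    y = (x @ w > 0).long()
    return x, y

def accuracy(model, x, y) -> float:
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

def train(model, x, y, epochs: int = 200):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))

xa, ya = make_task(0.0)   # task A: boundary along one axis
xb, yb = make_task(1.5)   # task B: boundary rotated ~86 degrees

train(model, xa, ya)
print(f"task A accuracy after training on A: {accuracy(model, xa, ya):.2f}")

train(model, xb, yb)      # naive sequential fine-tuning on task B
print(f"task A accuracy after training on B: {accuracy(model, xa, ya):.2f}")
print(f"task B accuracy after training on B: {accuracy(model, xb, yb):.2f}")
```

Because the two decision boundaries are nearly orthogonal, fine-tuning on task B overwrites the weights that encoded task A, and task A accuracy typically falls toward chance. Continual-learning research is largely about updating a model without this overwriting.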
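Retrieval-augmented generation sidesteps retraining entirely: new knowledge is stored in a document collection and looked up at inference time, so the model's weights never change. The sketch below shows the shape of the idea with a bag-of-words retriever and a placeholder generate() function; production systems use learned embeddings and a real language model, and every name and document here is invented for illustration:

```python
# Toy retrieval-augmented generation (RAG) loop. The retriever is a
# bag-of-words cosine similarity; `generate` stands in for an LLM call.
from collections import Counter
import math

documents = [
    "The 2024 handbook says remote employees must file form RW-7 quarterly.",
    "Catastrophic forgetting is the loss of old knowledge during new training.",
    "Transformer models separate a training phase from an inference phase.",
]

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def generate(prompt: str) -> str:
    return f"[an LLM would answer here, given]\n{prompt}"  # placeholder

query = "What form do remote employees file?"
context = "\n".join(retrieve(query))
print(generate(f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"))
```

New facts are "learned" simply by appending to `documents`, which is the appeal of RAG; the limitation, as the article notes, is that the model itself never improves, and retrieval quality and scale become the bottleneck.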
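Sakana's published Transformer² work describes adapting a frozen model at inference time by rescaling the singular values of its weight matrices with small per-task "expert" vectors. The numpy sketch below shows only that core linear-algebra move on a single weight matrix; how the task vectors are actually trained and selected is omitted, and all variable names are ours, not Sakana's:

```python
# Core idea of singular-value adaptation (per Sakana's Transformer² paper):
# decompose a frozen weight matrix once, then adapt it per task by rescaling
# its singular values with a small vector. Training and selection of the
# task vectors are omitted here; the vectors below are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))                  # frozen pretrained weights

U, s, Vt = np.linalg.svd(W, full_matrices=False)   # W == U @ diag(s) @ Vt

# Hypothetical per-task expert vectors: one multiplicative scale per
# singular value (in the real system these are learned, e.g. with RL).
z_math = 1.0 + 0.1 * rng.standard_normal(s.shape)
z_code = 1.0 + 0.1 * rng.standard_normal(s.shape)

def adapt(z: np.ndarray) -> np.ndarray:
    """Return a task-adapted weight matrix; U, s, and Vt stay frozen."""
    return U @ np.diag(s * z) @ Vt

x = rng.standard_normal(64)
print("base output norm:", np.linalg.norm(W @ x))
print("math-adapted norm:", np.linalg.norm(adapt(z_math) @ x))
print("code-adapted norm:", np.linalg.norm(adapt(z_code) @ x))
```

The attraction of this family of methods is cheapness: each task vector holds one number per singular value rather than a full copy of the weight matrix, which is what makes swapping skills in real time plausible.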