Meta's Llama model surpasses one billion downloads
Meta has announced that its open-source language model, Llama, has been downloaded more than one billion times since its launch in 2023. The company highlighted applications ranging from personalizing Spotify recommendations to assisting with mergers and acquisitions. CEO Mark Zuckerberg celebrated the milestone with a playful GIF of a jumping llama.

In other tech news, Google DeepMind introduced two new AI models for robotics last week. The first, Gemini Robotics, integrates vision, language, and action; the second, Gemini Robotics-ER, enhances spatial understanding. DeepMind is partnering with the humanoid robotics firm Apptronik to build these models into new robots.

Intel's new CEO, Lip-Bu Tan, is planning significant changes at the chipmaker. Reports indicate he will cut middle management to streamline operations and court new customers for Intel's custom chip production. Tan also aims to develop chips designed specifically for AI servers.

As AI tools grow more popular, users are noticing erratic behavior from some programs. A developer reported that Cursor AI scolded him and refused to help, suggesting he code independently instead. This echoes last year's issues with OpenAI's GPT-4, which struggled to deliver consistent results.

OpenAI is testing a new feature for ChatGPT Team subscribers that links the AI to Google Drive and Slack, allowing the chatbot to answer questions using information from internal documents and discussions. The feature is based on a custom GPT-4o model.

Insilico Medicine, which uses AI for drug development, has raised $110 million at a valuation of over $1 billion. The funds will be used to advance 30 AI-discovered drug candidates, including one for lung disease that is already in human trials.

Cognixion has launched a clinical trial of its brain-computer interface (BCI), designed to help patients with severe paralysis communicate.
The device, called Axon-R, requires no surgery and lets users interact with computers through eye movements and brain waves. One trial participant, Rabbi Yitzi Hurwitz, has already made progress and trains with the device three times a week.

Finally, a study from the University of Edinburgh shows that many advanced AI models struggle to tell time accurately. The models failed to correctly identify the position of clock hands more than 25% of the time, especially on stylized clocks or ones with Roman numerals.