AI-generated images increasingly indistinguishable from real ones
Generative AI technology is advancing quickly, making it harder for people to tell real images from AI-generated ones. Image generators such as DALL-E are built on deep generative models: DALL-E itself uses diffusion- and transformer-based methods, while many earlier systems relied on generative adversarial networks (GANs), in which a generator network produces images and a discriminator network tries to distinguish them from real ones (a minimal sketch of this setup appears at the end of this article).

In a 2022 study, researchers at the University of Waterloo found that participants could correctly classify images as real or AI-generated only 61% of the time, and they were generally better at spotting real images than fakes. As the technology improves, that accuracy has likely declined further, meaning even experienced viewers struggle to tell the difference.

Researchers have identified common signs that an image may be generated, such as unnatural eyes or unrealistic teeth. As the models evolve, however, these clues are becoming unreliable.

GANs, first introduced in 2014, have enabled a range of applications, including the creation of deepfakes. Deepfakes drew widespread attention in 2017, when they were used to produce manipulated celebrity videos, and tools for both creating and detecting them have developed steadily since.

Some experts argue for better regulation of AI to address its potential dangers. Canadian computer scientist Yoshua Bengio, a co-author of the 2014 paper that introduced GANs, has highlighted the need for oversight in this area.

Tech companies have been developing algorithms that identify deepfakes by examining factors like speech patterns and facial expressions. These detection methods have limitations, however, and may fail on low-quality images or footage shot in poor lighting (a sketch of a simple frame-level detector also appears below).

Despite attempts at regulation, deepfakes remain a concern: they can spread false information and damage reputations. In today's digital age, seeing may no longer mean believing, as what looks like real evidence can easily be an AI creation.
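For readers curious how the adversarial setup mentioned above works, here is a minimal, hypothetical sketch of a GAN training loop in PyTorch. Everything in it (the two-layer networks, the 28x28 image size, the learning rates) is an illustrative assumption, not a description of DALL-E or any production system.

```python
# Minimal GAN training sketch (illustrative only), assuming 28x28
# grayscale images flattened to vectors. Not any real system's code.
import torch
import torch.nn as nn

LATENT_DIM = 64
IMG_DIM = 28 * 28

class Generator(nn.Module):
    """Maps random noise to a fake image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, IMG_DIM), nn.Tanh(),  # pixels in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Outputs a logit scoring how 'real' an image looks."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
        )

    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    """One adversarial round: D learns to separate real from fake,
    then G learns to fool D."""
    batch = real_batch.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator step: reward correct real/fake labels.
    z = torch.randn(batch, LATENT_DIM)
    fake = G(z).detach()  # do not backprop into G here
    d_loss = bce(D(real_batch), real_labels) + bce(D(fake), fake_labels)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: G wants D to call its output "real".
    z = torch.randn(batch, LATENT_DIM)
    g_loss = bce(D(G(z)), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Usage with a stand-in batch of "real" images in [-1, 1]:
d_l, g_l = train_step(torch.rand(32, IMG_DIM) * 2 - 1)
```

The two networks improve together, and that built-in arms race is part of why images from these models keep getting harder to spot.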
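The article does not describe any company's detector, but one common way the detection problem is framed is as a binary real-versus-fake classifier over cropped face frames, with per-frame scores averaged across a clip. The sketch below follows that framing; the architecture and all names in it are assumptions for illustration.

```python
# Hypothetical frame-level deepfake detection sketch, assuming the task
# is reduced to real/fake classification of cropped 64x64 face images.
import torch
import torch.nn as nn

class FakeFrameClassifier(nn.Module):
    """Small CNN that outputs a logit: higher suggests 'fake'."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = FakeFrameClassifier()
frames = torch.rand(8, 3, 64, 64)  # stand-in for cropped face frames
probs = torch.sigmoid(model(frames)).squeeze(1)

# Score a whole clip by averaging per-frame probabilities. Blurry or
# poorly lit frames weaken each per-frame signal, which is one reason
# such detectors struggle on low-quality footage.
clip_score = probs.mean().item()
```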