AI boosts research but risks an increase in scientific misconduct
In February 2025, Google announced a new AI system designed to act as a collaborative tool for scientists, helping them generate hypotheses and research plans. While the benefits of AI in science are becoming clear, concerns about research misconduct are also rising.

AI has already changed the landscape of scientific research. Last year, computer scientists won the Nobel Prize in Chemistry for an AI model that predicts protein structures, solving a long-standing scientific problem. At the same time, studies show that scientific misconduct, such as data fabrication and plagiarism, is increasing: in 2023, more than 10,000 academic papers were retracted over issues like these.

Generative AI tools such as ChatGPT may make misconduct even easier. Researchers demonstrated this by generating 288 fake finance papers. Although the exercise was intended as an experiment, it highlights the potential for misuse, such as creating fake clinical trial data. There have already been reports of AI-generated papers passing peer review, only to be retracted later for containing fake references or fabricated data. In addition, many peer reviews of scientific papers are now assisted or even written by AI, which undermines the integrity of the review process and can lead to more flawed or misleading studies being published.

Generative AI is also known to invent data rather than admit uncertainty. This problem, called "hallucination," raises questions about the accuracy of AI-assisted research. One study of computer programming found that more than half of AI-generated answers contained errors, and many of these went unchecked by humans.

Despite these risks, the scientific community should not discourage the use of AI. It has been beneficial for many researchers and can automate repetitive tasks, and initiatives such as AI lab robots already assist scientists in their work. The challenge lies in creating effective policies that ensure AI is used responsibly in science. The scientific community must choose between fostering ethical AI practices and allowing a few irresponsible individuals to tarnish the field.