Research summaries written by AI fool scientists

  • Science
  • January 13, 2023

According to a preprint published on the bioRxiv server in late December, an artificial intelligence (AI) chatbot can write fake abstracts of research papers so compelling that scientists often fail to recognize them [1]. Researchers are divided on the implications for science.

“I’m very worried,” says Sandra Wachter, who studies technology and regulation at the University of Oxford, UK, and was not involved in the research. “If we’re now in a situation where the experts can’t determine what’s true or not, we lose the middleman we desperately need to guide us through complicated issues,” she adds.

The chatbot ChatGPT creates realistic and intelligent-sounding text in response to user prompts. It is a “large language model,” a system based on neural networks that learns to perform a task by digesting huge amounts of existing human-written text. The software company OpenAI, based in San Francisco, California, released the tool on November 30th, and it is free to use.

Since its release, researchers have grappled with the ethical issues surrounding its use, because much of its output can be difficult to distinguish from human-written text. Scientists have already published a preprint [2] and an editorial [3] written using ChatGPT. Now, a group led by Catherine Gao of Northwestern University in Chicago, Illinois, has used ChatGPT to generate artificial research-paper abstracts to test whether scientists can spot them.

The researchers asked the chatbot to write 50 medical-research abstracts based on a selection published in JAMA, The New England Journal of Medicine, The BMJ, The Lancet and Nature Medicine. They then compared these with the original abstracts by running them through a plagiarism detector and an AI-output detector, and asked a group of medical researchers to pick out the fabricated abstracts.

Under the radar

The abstracts generated by ChatGPT sailed through the plagiarism check: the median originality score was 100%, indicating that no plagiarism was detected. The AI-output detector detected 66% of the generated abstracts. But the human reviewers didn’t fare much better: they correctly identified only 68% of the generated abstracts and 86% of the genuine abstracts. They incorrectly identified 32% of the generated abstracts as genuine and 14% of the genuine abstracts as generated.

“ChatGPT writes credible scientific abstracts,” say Gao and colleagues in the preprint. “The limits of the ethical and acceptable use of large language models to support scholarly writing have yet to be determined.”

Wachter says that if scientists can’t determine whether research is true, there could be “dire consequences.” Not only is it problematic for researchers, who might be drawn down erroneous lines of inquiry because the research they read was fabricated; there are also “impacts on society at large because scientific research plays such a large role in our society.” For example, it could mean that research-based policy decisions are wrong, she adds.

But Arvind Narayanan, a computer scientist at Princeton University in New Jersey, says: “It is unlikely that any serious scientist will use ChatGPT to generate abstracts.” He adds that whether generated abstracts can be detected is “irrelevant.” “The question is whether the tool can generate an accurate and compelling summary. It can’t, so the benefit of using ChatGPT is tiny and the downside is significant,” he says.

Irene Solaiman, who researches the social impact of AI at Hugging Face, an AI company headquartered in New York and Paris, is concerned about the dependency on large language models for scientific thinking. “These models are trained on information from the past, and social and scientific progress can often come from thinking differently than in the past, or being open to thinking differently,” she adds.

The authors suggest that those evaluating scholarly communication, such as research papers and conference proceedings, should adopt policies to stamp out the use of AI-generated text. If institutions decide to allow the use of the technology in certain cases, they should establish clear rules for disclosure. Earlier this month, the Fortieth International Conference on Machine Learning, a major AI conference due to be held in Honolulu, Hawaii, in July, announced that it had banned papers authored by ChatGPT and other AI language tools.

Solaiman adds that in fields where fake information can put people’s safety at risk, such as medicine, journals may need to take a more rigorous approach to verifying that information is accurate.

Narayanan says the solutions to these problems shouldn’t focus on the chatbot itself, “but rather on the perverse incentives that lead to this behavior, like universities conducting hiring and promotion reviews by counting papers without regard to their quality or impact.”

This article is reproduced with permission and was first published on January 12, 2023.
