ChatGPT can’t be credited as an author, says world’s largest academic publisher

Springer Nature, the world’s largest academic publisher, has clarified its policies on the use of AI writing tools in scientific papers. The company announced this week that software like ChatGPT can’t be credited as an author in papers published in its thousands of journals. However, Springer says it has no problem with scientists using AI to help write or generate ideas for research, as long as this contribution is properly disclosed by the authors.


“We felt compelled to clarify our position: for our authors, for our editors, and for ourselves,” Magdalena Skipper, editor-in-chief of Springer Nature’s flagship publication, Nature, tells The Verge. “This new generation of LLM tools — including ChatGPT — has really exploded into the community, which is rightly excited and playing with them, but [also] using them in ways that go beyond how they can genuinely be used at present.”
ChatGPT and earlier large language models (LLMs) have already been named as authors in a small number of published papers, pre-prints, and scientific articles. However, the nature and degree of these tools’ contribution varies case by case.
In one opinion article published in the journal Oncoscience, ChatGPT is used to argue for taking a certain drug in the context of Pascal’s wager, with the AI-generated text clearly labeled. But in a preprint paper examining the bot’s ability to pass the United States Medical Licensing Exam (USMLE), the only acknowledgement of the bot’s contribution is a sentence stating the program “contributed to the writing of several sections of this manuscript.”
Crediting ChatGPT as an author is “absurd” and “deeply stupid,” say some researchers
In the latter preprint paper, no further details are offered on how or where ChatGPT was used to generate text. (The Verge contacted the authors but didn’t hear back in time for publication.) However, the CEO of the company that funded the research, healthcare startup Ansible Health, argued the bot made significant contributions. “The reason why we listed [ChatGPT] as an author was because we believe it actually contributed intellectually to the content of the paper and not just as a subject for its evaluation,” Ansible Health CEO Jack Po told Futurism.
Reaction in the scientific community to papers crediting ChatGPT as an author has been predominantly negative, with social media users calling the decision in the USMLE case “absurd,” “silly,” and “deeply stupid.”
The core argument against giving AI authorship is that software simply can’t fulfill the required duties, as Skipper and Springer Nature explain. “When we think of authorship of scientific papers, of research papers, we don’t just think about writing them,” says Skipper. “There are responsibilities that extend beyond publication, and certainly at the moment these AI tools are not capable of assuming those responsibilities.”
Software cannot be meaningfully accountable for a publication, it cannot claim intellectual property rights for its work, and cannot correspond with other scientists and with the press to explain and answer questions on its work.
While there is broad consensus against crediting AI as an author, though, there is less clarity on the use of AI tools to write a paper, even with proper acknowledgement. This is in part due to well-documented problems with the output of these tools. AI writing software can amplify social biases like sexism and racism and has a tendency to produce “plausible bullshit” — incorrect information presented as fact. (See, for example, CNET’s recent use of AI tools to write articles. The publication later found errors in more than half of those published.)
It’s because of issues like these that some organizations have banned ChatGPT, including schools, colleges, and sites that depend on sharing reliable information, like programming Q&A repository Stack Overflow. Earlier this month, a top academic conference on machine learning banned the use of all AI tools to write papers, though it did say authors could use such software to “polish” and “edit” their work. Exactly where one draws the line between writing and editing is tricky, but for Springer Nature, this use case is also acceptable.
“Our policy is quite clear on this: we don’t prohibit their use as a tool in writing a paper,” Skipper tells The Verge. “What’s fundamental is that there is clarity about how a paper is put together and what [software] is used. We need transparency, as that lies at the very heart of how science should be done and communicated.”
This is particularly important given the wide range of applications AI can be used for. AI tools can not only generate and paraphrase text but also help iterate on experiment designs or serve as a sounding board for ideas, like a machine lab partner. AI-powered software like Semantic Scholar can be used to search for research papers and summarize their contents, and Skipper notes that another opportunity is using AI writing tools to help researchers for whom English is not their first language. “It may be a leveling tool from that perspective,” she says.
Skipper says that banning AI tools in scientific work would be ineffective. “I think we can safely say that outright bans of anything don’t work,” she says. Instead, she says, the scientific community — including researchers, publishers, and conference organizers — needs to come together to work out new norms for disclosure and guardrails for safety.
“It’s incumbent on us as a community to focus on the positive uses and the potential, and then to regulate and curb the potential misuses,” says Skipper. “I’m optimistic that we can do it.”