Note
This is a draft of the text that will appear on the journal website. To propose and discuss changes, please join the online forum.
Use of artificial intelligence (AI)
Artificial intelligence (which includes, but is not limited to, large language models) can be used responsibly and ethically throughout the course of a scientific study. Possible use cases include the development of data processing tools, the scientific analysis of data, and the improvement of the presentation of the final written manuscript. When not properly supervised, however, AI tools can produce erroneous or flawed material that the authors might later present in an authoritative manner. The use of AI tools can also be the source of serious ethical breaches, with the greatest risks being plagiarism, the violation of confidentiality agreements, and copyright infringement.
The journal’s guiding principle for the use of AI is that the authors of a manuscript are entirely responsible for the integrity of their work, regardless of whether AI was used. An AI tool cannot be listed as an author because it cannot take personal responsibility for its contribution to the work. The journal will not attempt to detect whether AI was used in any part of a manuscript, but will instead rely on responsible disclosure by the authors wherever AI use made a significant contribution to the manuscript.
The following sections provide guidelines for common AI uses, which include language improvement, generative imagery, scientific analysis, and peer review.
Language improvement
Artificial intelligence tools can help improve the presentation of author-written text in a manuscript. Such tools are ubiquitous in word processing platforms and software, and range from simple spelling and grammar checkers, to translation of text between languages, to more advanced tools that make significant changes to the tone and form of the original text. The authors need not disclose the use of AI language improvement tools, but they remain ultimately responsible for every word that appears in the manuscript. Given the ethical risks, however, the journal strongly advises against using these tools for anything other than language improvement.
Generative imagery
Artificial intelligence models that generate images based on a user prompt are by necessity trained on large quantities of data. The images used for training these models may be copyrighted, and care must be taken to ensure that the intellectual property rights of the training materials are not violated. For example, when images with a CC BY license are adapted or transformed, one is required to credit the image and to provide a link to the original license. Most popular commercial AI models fall short of this legal responsibility. For this reason, as a general rule, the journal does not allow the use of generative AI imagery in its publications. Exceptions to this rule are possible when the AI model is documented to have been trained exclusively on images that either are in the public domain or carry a license that does not require attribution (such as the CC0 dedication). For these exceptions, the authors must acknowledge the software that was used to generate the image.
Scientific analysis tools
Artificial intelligence tools may be used as part of a scientific analysis. Examples include the use of AI-generated computer code, AI-assisted debugging of code, pattern recognition, and other data analysis techniques suited to an AI approach. The use of AI tools for these purposes should be acknowledged in the manuscript just as one would acknowledge any other non-AI scientific analysis tool. In particular, whenever the authors use AI for a non-trivial task, its use must be acknowledged.
When acknowledging the use of AI as part of a scientific task, the authors should provide sufficient detail such that the AI-generated results can be reproduced. The amount of detail provided should be commensurate with the importance and novelty of the result. Whereas trivial results may require little or no explanation, major results should include the full scripts and prompts along with detailed information about the version of the AI model and its dependencies.
Peer review
As part of the peer review process, the reviewer is asked to assess several aspects of a submitted manuscript. These include the originality of the work, the soundness of the methodology and analysis approach, and the appropriateness of the interpretations. In addition, the reviewer is asked to provide constructive criticism on how the manuscript could be improved. The reviewer's assessments are the primary source an editor relies upon when deciding whether a manuscript should be accepted for publication. Artificial intelligence tools, and large language models in particular, often have difficulty distinguishing fact from fiction and are thus incapable of replacing the role of a reviewer. The journal does not allow AI to be used to assess a submitted manuscript during peer review.
If a reviewer uses AI to improve the language of their review, they must take care not to violate the confidentiality of the review process. Both the manuscript and the reviews are considered confidential until the manuscript is published online. Under no circumstances should the reviewer provide a submitted manuscript that is under review to an AI tool. Furthermore, the reviewer should only use AI for language improvement when they have a clear legal assurance that all content will remain confidential and will not be used for further AI training. Most commercial AIs cannot provide this assurance.