AI Usage Guidelines
Tektonika guidelines about AI usage, version 2, January 2026
All results produced with the help of AI tools are ultimately the responsibility of those submitting articles for review. In other words, if it was produced by AI, it is the authors’ obligation to carefully review the output to ensure its accuracy and validity. Authors must also pay attention to possible copyright or authorship issues, for example when analyzing datasets from external sources with AI tools, or when using such tools to synthesize scientific literature. These general principles are broken down below according to the different types of usage.
Use of AI in the research process
Machine learning, deep learning and other forms of AI are increasingly used in the research process, such as to analyze large datasets, identify patterns, facilitate comparative studies, or for image analysis. Any use of AI in the research process itself, with tools developed by the authors or publicly available, must be acknowledged.
In general, the correct place to do that is in the methods section of the manuscript. The methods section must include a clear explanation of how AI was integrated into the research process (e.g., data analysis, modeling, code generation, image manipulation or other applications). Authors must justify their use of AI, explaining why it was necessary or beneficial. They must verify and confirm that AI tools do not introduce biases or errors into the research.
In addition to the explanation in the methods section, the Data Availability section at the end of the manuscript should provide links or references to the AI tools used. This should aim to ensure reproducibility of the study, validation of the scientific method, and transparency.
Use of generative AI in manuscript preparation
Following recent developments in generative AI, particularly large language models (LLMs) such as ChatGPT, Claude, and Gemini, and specialized tools such as Elicit, Paperpal, and NotebookLM, there have been numerous proposals arguing that “Artificial Intelligence may help scientific writing”. Although AI may indeed be efficient in assisting authors in writing and editing their research manuscripts, authors risk losing control over the content and scientific reasoning presented in their article.
The most important rule is transparency. Authors must disclose and justify any use of AI tools in the cover letter to the editor and in the AI Use Statement section at the end of the manuscript.
The only exception is when authors write their own text and then use AI tools to improve spelling, grammar, or general language clarity. This limited use supports inclusivity, particularly for non-native English writers, and in this case disclosure is not mandatory. If they wish, authors may nevertheless acknowledge the use of AI tools in their cover letter and in the AI Use Statement section.
Authors may also turn to AI tools to draft entire paragraphs from key sentences, bullet-point lists, or loosely organized notes. We believe that such practices significantly increase the risk of losing control over the scientific content. If AI is used in this way, authors must carefully review and edit all generated content, treating it as guidance to be rewritten in their own words. In this case, authors must fully disclose the AI assistance in their cover letter, specifying the process, prompts, and tools employed, and acknowledge it in the AI Use Statement section to ensure transparency.
In the two cases above, AI use is only safe and beneficial if the author has sufficient proficiency in written English to evaluate, validate, and rewrite the suggestions generated by AI tools. If this is not the case, review by a proficient third-party English speaker is recommended as a safeguard. Authors should be aware, however, that such review by an English-language editor can improve the grammar, but risks may persist in the scientific content.
Authors might consider using general-purpose LLMs or more specialized tools to curate, summarize, or synthesize scientific literature, particularly in review or state-of-the-art sections. This practice carries significant risks, as it may compromise the authors’ ability to ensure the relevance, completeness, and nuanced interpretation of the cited literature. Authors must therefore carefully review all AI outputs, and critically read and evaluate any literature suggested by AI tools. They should be aware that AI may fabricate citations that are entirely fictitious while appearing plausible and closely resembling legitimate references. They should also be vigilant about plagiarism, given that these tools often reproduce exact passages from the original sources. Any such use must be thoroughly explained and acknowledged in the cover letter and the AI Use Statement section.
AI may be used only in a limited capacity for creating or editing images and diagrams, strictly for illustrative or aesthetic purposes. Authors must clearly disclose and justify this use. A statement about AI usage must be added in the figure caption. Any other use of AI to process, analyze, and then visually represent data is considered part of the research process. As such, it must be rigorously justified and clearly explained in the methods section and follow the rules detailed above in "Use of AI in the research process".
In general, all other applications of AI are prohibited. If uncertain, authors must approach the editors and fully disclose and justify their use.
Importantly, if our editors suspect undisclosed or prohibited AI use, authors will be asked to justify their process, or the manuscript may be rejected. The editors reserve the right to reject submissions, and if necessary withdraw published manuscripts, if the AI use violates ethical or professional standards or does not follow the rules and procedures above.
Use of generative AI in manuscript review
Reviewers are strictly prohibited from using any online AI tools to summarize or evaluate submitted manuscripts. For reviewers, the most critical obligation is to uphold the manuscript's confidentiality and respect the authors' rights. Uploading a manuscript, or any part of it, to third-party systems (LLMs or more specialized tools) risks exposing the content to unauthorized reuse for AI training purposes. This poses a serious threat of future plagiarism and theft of ideas.
Reviewers, in particular non-native English writers, may be tempted to use AI to improve their written review reports. Only minimal AI assistance, such as basic spelling and grammar checking, is ethically acceptable. Any broader use of AI by reviewers, such as refining arguments or enhancing reasoning, risks distorting the reviewer's opinion and substituting the AI's output for it. It may also compromise the confidentiality of the results and ideas presented in the submitted manuscript. Such use is therefore not allowed.