Generative AI and AI-Assisted Technologies Policy
At BioNatura Journal, we follow the recommendations of Elsevier, the ICMJE, and WAME to ensure transparency and integrity. This policy applies to authors, reviewers, and editors.
1. Policies for Authors
Generative AI tools (such as ChatGPT) may be used to improve the readability and language of a manuscript. However:
- Authorship: AI tools cannot be listed as authors or co-authors. Authorship implies legal responsibilities that only humans can fulfill.
- Human Oversight: Authors remain fully accountable for the accuracy, integrity, and originality of the work. AI-generated output must be carefully reviewed for biases and "hallucinations" (e.g., fabricated citations).
- Images and Artwork: We do not permit the use of generative AI to create or alter scientific images (e.g., micrographs, gels, clinical photographs). The only exception is when AI is part of the research method itself (e.g., biomedical image analysis), which must be described in the Methods section.
Mandatory Disclosure: Authors must include an "AI Declaration Statement" before the References section, specifying the tool used and the purpose of its use. Basic grammar and spell checking does not require disclosure.
2. Policies for Reviewers and Editors
Confidentiality Mandate: Reviewers and editors are strictly prohibited from uploading submitted manuscripts (or any part of them) to generative AI tools. Doing so violates the authors' confidentiality and proprietary rights and may breach data privacy. Peer review requires human critical judgment that AI cannot substitute, and reviewers remain responsible for the content of their review reports.
3. Ethical Compliance
AI must not be used to fabricate, falsify, or manipulate data. BioNatura Journal uses image forensics and plagiarism detection tools to identify such irregularities.