- 1. Purpose and Scope
The purpose of this AI and Generative Tools Policy is to establish clear principles, responsibilities, and boundaries governing the acceptable use of artificial intelligence (AI)-based and generative tools in the research, writing, peer-review, and editorial processes of the journal.
The rapid development and increasing availability of AI and generative technologies present both opportunities and challenges for scholarly communication. While such tools may support certain aspects of academic work—such as language editing or organizational assistance—their use also raises important concerns related to authorship, originality, transparency, data integrity, confidentiality, and accountability.
This policy aims to ensure that the use of AI and generative tools:
- supports, rather than undermines, the integrity and reliability of the scholarly record;
- remains consistent with internationally recognized standards of research ethics and publication ethics;
- preserves human responsibility, intellectual ownership, and accountability at all stages of the publication process; and
- is applied transparently and responsibly, without compromising scientific rigor or ethical standards.
This policy applies to all participants in the journal’s publication ecosystem, including:
- authors submitting manuscripts to the journal;
- peer reviewers involved in the evaluation of submissions;
- editors and members of the editorial team;
- editorial staff and any other individuals involved in manuscript handling or decision-making.
The scope of this policy covers the use of AI and generative tools at all stages of the scholarly publishing process, including but not limited to:
- manuscript preparation and writing;
- language editing and stylistic refinement;
- data analysis, visualization, and image processing;
- peer review and editorial assessment;
- post-publication activities, where applicable.
This policy should be read in conjunction with the journal’s other editorial and ethical policies, including those related to authorship, data integrity and reproducibility, plagiarism and similarity, image manipulation, peer review, and publication ethics. In cases where overlap exists, the most restrictive applicable policy shall prevail.
The journal recognizes that AI technologies are evolving rapidly. Accordingly, this policy is intended to provide a principled and adaptable framework rather than an exhaustive list of tools or use cases. The journal reserves the right to update or revise this policy as technologies, ethical standards, and regulatory requirements develop.
- 2. Definitions and Terminology
For the purposes of this policy, the following terms are defined to ensure clarity, consistency, and a shared understanding of the concepts related to the use of artificial intelligence and generative tools in the scholarly publishing process. These definitions apply throughout this policy and in all related editorial and ethical contexts.
Artificial Intelligence (AI)
Artificial Intelligence (AI) refers to computational systems or software designed to perform tasks that typically require human intelligence. These tasks may include, but are not limited to, natural language processing, pattern recognition, data classification, image analysis, predictive modeling, and automated decision support.
Generative Tools / Generative AI
Generative tools or generative AI refer to AI-based systems capable of producing new content based on patterns learned from existing data. Such content may include text, images, audio, video, code, data representations, or other outputs that appear original but are generated algorithmically.
AI-Based Tools
AI-based tools encompass both generative and non-generative systems that rely on artificial intelligence techniques to assist or automate specific tasks. This includes tools used for language editing, grammar checking, translation, summarization, data analysis, image processing, reference management, or organizational support.
AI-Generated Content
AI-generated content refers to any text, image, figure, data output, analysis, or other material that is produced wholly or in substantial part by an AI or generative system, rather than directly created by a human author.
Human Authorship
Human authorship denotes the intellectual contribution, responsibility, and accountability of one or more human individuals who conceive, design, conduct, interpret, and report research. Human authorship requires meaningful intellectual involvement and cannot be attributed to artificial intelligence systems or automated tools.
AI-Assisted Content
AI-assisted content refers to material that has been created by human authors with limited support from AI-based tools, such as language editing, grammar correction, stylistic refinement, or organizational assistance, without the AI system generating substantive scientific content, interpretations, or conclusions.
Editorial Decision-Making
Editorial decision-making refers to the process by which editors evaluate manuscripts and determine outcomes such as acceptance, revision, or rejection. This process requires independent human judgment and accountability and must not be delegated to or automated by AI systems.
Peer Review
Peer review is the critical evaluation of a manuscript by independent experts in the relevant field, conducted to assess scientific quality, originality, methodological rigor, ethical compliance, and relevance. Peer review relies on human expertise and judgment and cannot be replaced by AI-generated evaluations.
Disclosure
Disclosure refers to the transparent declaration by authors, reviewers, or editors of any use of AI or generative tools that materially contributed to the preparation, evaluation, or handling of a manuscript, in accordance with the journal’s policies.
Confidential Information
Confidential information includes unpublished manuscripts, data, reviewer reports, editorial correspondence, and any materials obtained through the submission or peer-review process. Such information must not be shared, uploaded, or processed using AI-based tools in ways that compromise confidentiality or data protection.
Misuse of AI and Generative Tools
Misuse refers to any use of AI or generative tools that violates this policy, undermines research integrity, compromises confidentiality, misrepresents authorship or originality, fabricates or alters data or images, or obscures human responsibility and accountability.
- 3. Principles Governing the Use of AI and Generative Tools
The journal recognizes that artificial intelligence and generative tools may offer limited and legitimate support in certain aspects of scholarly work. However, their use must be governed by clear principles that preserve the integrity of the scientific record, uphold ethical standards, and ensure full human accountability.
The following principles apply to all uses of AI and generative tools in connection with manuscript preparation, peer review, editorial assessment, and post-publication activities.
3.1 Primacy of Human Responsibility and Accountability
Human authors, reviewers, and editors retain full responsibility for all scholarly content, editorial decisions, and ethical obligations associated with their roles. AI and generative tools cannot assume responsibility, accountability, or authorship and must not be presented as independent contributors to scholarly work.
All intellectual decisions—including study design, data interpretation, scientific reasoning, editorial judgment, and peer-review evaluation—must remain under the direct control of qualified human participants.
3.2 Preservation of Research Integrity and Originality
The use of AI and generative tools must not compromise the originality, authenticity, or integrity of scholarly work. Content generated, altered, or assisted by AI must not misrepresent the novelty, provenance, or intellectual contribution of the authors.
AI tools must not be used to fabricate, falsify, manipulate, or selectively generate data, results, images, analyses, or conclusions, nor to obscure the distinction between original research and reused or generated material.
3.3 Transparency and Traceability
Any use of AI or generative tools that materially contributes to the preparation, evaluation, or handling of a manuscript must be transparent and, where required by journal policy, explicitly disclosed.
Authors, reviewers, and editors must be able to explain how AI tools were used, for what purpose, and to what extent. The use of AI must be traceable in a manner that allows editorial assessment and accountability without revealing confidential or proprietary information.
3.4 Proportionality and Purpose Limitation
The use of AI and generative tools must be proportionate to their intended purpose and limited to clearly defined, appropriate functions. AI tools should be used only where they provide practical assistance without replacing essential human intellectual input or judgment.
Uses that extend beyond supportive or auxiliary functions—particularly those that influence scientific interpretation, editorial decisions, or peer-review judgments—are not compatible with responsible scholarly practice.
3.5 Protection of Confidentiality and Data Security
AI and generative tools must not be used in ways that compromise the confidentiality of unpublished manuscripts, peer-review materials, personal data, or proprietary information.
Manuscripts, reviewer reports, editorial correspondence, and associated data must not be uploaded to external AI systems or processed through tools that retain, reuse, or expose such content beyond the control of the journal and its participants, unless explicitly permitted by journal policy and applicable data protection regulations.
3.6 Respect for Intellectual Property and Legal Obligations
The use of AI and generative tools must comply with applicable intellectual property laws, licensing requirements, and contractual obligations. Users are responsible for ensuring that AI-assisted content does not infringe third-party rights and that any reuse of copyrighted material is properly authorized and attributed.
The journal does not accept claims of ignorance regarding the training data, outputs, or limitations of AI tools as justification for violations of copyright, licensing, or ethical standards.
3.7 Ethical and Contextual Evaluation
The appropriateness of AI use is assessed contextually, taking into account the nature of the research, the role of the individual involved (author, reviewer, editor), and the potential impact on research integrity, fairness, and trust.
The journal applies a proportional and case-by-case approach when evaluating AI-related issues, distinguishing between responsible, transparent use and practices that undermine ethical or scholarly standards.
3.8 Alignment with Editorial Independence
AI and generative tools must not influence or determine editorial or peer-review outcomes. Editorial independence, professional judgment, and scholarly evaluation must remain fully human-driven.
AI tools may support administrative or organizational tasks but must not be used to automate acceptance, rejection, or evaluative decisions, nor to rank, score, or classify manuscripts in ways that replace editorial judgment.
- 4. Permitted Uses of AI and Generative Tools
The journal recognizes that artificial intelligence and generative tools may be used in a limited and responsible manner to support certain technical and auxiliary aspects of scholarly work. Permitted uses are restricted to functions that do not compromise human authorship, scientific integrity, confidentiality, or editorial independence.
Any permitted use of AI or generative tools must remain supportive in nature, must not replace essential human intellectual contribution, and must comply with the principles outlined in this policy.
4.1 Language Editing and Stylistic Refinement
AI-based tools may be used to assist with:
- grammar correction, spelling, and punctuation;
- improvement of linguistic clarity, readability, and academic style;
- minor rephrasing for coherence or consistency of language.
Such use is permitted provided that:
- the scientific content, meaning, interpretation, and conclusions are not altered;
- the manuscript remains the original intellectual work of the authors;
- authors carefully review and take full responsibility for all AI-assisted edits.
4.2 Formatting and Structural Assistance
AI tools may be used to support:
- organization of text into standard manuscript sections;
- formatting of references, tables, and headings in accordance with journal guidelines;
- consistency checks related to style, terminology, or citation format.
These tools must not be used to generate substantive content or to modify the scientific logic or structure of the work beyond organizational assistance.
4.3 Summarization for Internal Use
Authors may use AI tools to generate internal summaries of their own work for drafting purposes, revision planning, or internal review, provided that such summaries:
- are not submitted as original scholarly content;
- do not replace the authors’ own abstract, conclusions, or interpretations;
- are fully reviewed and verified by the authors prior to any use in the manuscript.
4.4 Reference and Citation Support
AI-based tools may be used to assist with:
- identifying potentially relevant literature for further human review;
- managing references and citations;
- checking citation completeness or consistency.
Authors remain fully responsible for:
- verifying the accuracy, relevance, and originality of cited sources;
- ensuring that references are correctly attributed and not fabricated or hallucinated by AI tools.
4.5 Translation and Language Support
AI tools may be used for translation or language support, particularly for authors whose first language is not English, provided that:
- the translated text is carefully reviewed and corrected by the authors;
- the translation does not introduce inaccuracies or distort scientific meaning;
- responsibility for the final text remains entirely with the authors.
4.6 Use of AI in Non-Scientific Administrative Tasks
AI tools may be used for administrative or organizational tasks related to manuscript preparation, such as:
- preparing cover letters or internal checklists;
- organizing supplementary materials;
- managing submission-related documentation.
Such uses must not involve the generation or alteration of scientific data, analyses, or conclusions.
4.7 Editorial and Workflow Support
Editors and editorial staff may use AI-based tools to support:
- administrative workflow management;
- tracking of submissions and review timelines;
- consistency checks for compliance with journal policies.
AI tools must not be used to:
- make or automate editorial decisions;
- evaluate scientific merit;
- replace human editorial judgment.
4.8 Reviewer Support (Limited Use)
Reviewers may use AI tools as supportive aids for:
- improving the clarity and organization of their review reports;
- checking grammar or readability of their comments.
Reviewers must not:
- upload manuscripts or confidential materials to external AI systems;
- use AI tools to generate substantive scientific evaluations or recommendations;
- rely on AI-generated assessments in place of their own expert judgment.
4.9 Disclosure of Permitted Use
Where required by journal policy, permitted uses of AI and generative tools must be transparently disclosed, particularly when such tools contribute materially to manuscript preparation.
Disclosure must be accurate, specific, and limited to the nature and purpose of the permitted use, without overstating or understating the role of AI in the work.
- 5. Prohibited Uses of AI and Generative Tools
The journal strictly prohibits the use of artificial intelligence and generative tools in ways that undermine research integrity, misrepresent authorship, compromise confidentiality, or replace essential human intellectual responsibility. These prohibitions apply whether the use is intentional or unintentional.
Any use of AI or generative tools that falls outside the permitted purposes defined in this policy is considered unacceptable.
5.1 AI as an Author or Contributor
AI systems, generative models, or automated tools must not be listed as authors, co-authors, or contributors to a manuscript. AI tools cannot meet authorship criteria and cannot assume responsibility for the content, originality, or integrity of scholarly work.
Attributing authorship, contributions, or accountability to AI systems is strictly prohibited.
5.2 Generation of Scientific Content and Conclusions
AI and generative tools must not be used to:
- generate original scientific content, including hypotheses, research questions, interpretations, discussions, or conclusions;
- produce or substantially draft abstracts, results sections, or discussion sections presenting scientific reasoning;
- create or modify theoretical frameworks or analytical narratives presented as the authors’ own scholarly work.
Any content generated in violation of these prohibitions constitutes misrepresentation of authorship and originality.
5.3 Fabrication, Falsification, or Manipulation of Data and Images
The use of AI tools to fabricate, falsify, manipulate, or selectively generate data, images, figures, graphs, or analytical outputs is strictly prohibited.
This includes, but is not limited to:
- generating synthetic data without explicit disclosure and editorial approval;
- altering image-based data in ways that misrepresent original observations;
- enhancing, removing, or modifying features in images to influence interpretation.
Such practices are considered serious breaches of research integrity.
5.4 Replacement of Human Judgment in Peer Review or Editorial Decisions
AI tools must not be used to:
- evaluate the scientific merit, originality, or validity of manuscripts;
- generate peer-review reports or recommendations;
- score, rank, or classify manuscripts for editorial decision-making;
- automate acceptance, rejection, or revision decisions.
Peer review and editorial decisions must remain fully human-driven.
5.5 Use of AI to Bypass Ethical, Legal, or Editorial Requirements
AI and generative tools must not be used to:
- evade plagiarism detection, similarity checks, or ethical review;
- obscure or disguise reused, generated, or third-party content;
- fabricate citations, references, data availability statements, or ethical approvals;
- generate false or misleading disclosures.
Any attempt to use AI to circumvent journal policies constitutes misconduct.
5.6 Use of AI on Confidential or Unpublished Materials
Uploading, sharing, or processing unpublished manuscripts, reviewer reports, editorial correspondence, or confidential data through external AI systems is strictly prohibited where such use compromises confidentiality, data protection, or intellectual property rights.
This prohibition applies to authors, reviewers, editors, and editorial staff alike.
5.7 Undisclosed or Misleading Use of AI Tools
Failure to disclose the use of AI or generative tools where disclosure is required by journal policy is prohibited.
Misleading, incomplete, or inaccurate disclosures regarding the role of AI in manuscript preparation, review, or editorial handling are treated as violations of transparency and integrity standards.
5.8 Delegation of Accountability to AI Systems
Authors, reviewers, and editors must not attribute errors, inaccuracies, ethical breaches, or misconduct to AI systems. Responsibility for all content, decisions, and actions remains with the human participants involved.
Claims that AI tools were responsible for errors or policy violations are not accepted as mitigating factors.
- 6. Authorship, Responsibility, and Accountability
The journal affirms that authorship, responsibility, and accountability in scholarly publishing rest exclusively with human participants. The use of artificial intelligence or generative tools does not alter, reduce, or transfer these responsibilities under any circumstances.
This section clarifies the roles and obligations of authors, reviewers, editors, and the journal with respect to the use of AI and generative tools.
6.1 Human Authorship as a Fundamental Requirement
Authorship of scholarly work requires meaningful intellectual contribution, critical judgment, and accountability, all of which must be exercised by human individuals. AI systems and generative tools cannot fulfill authorship criteria and must not be credited as authors or contributors.
All listed authors must:
- have made a substantive intellectual contribution to the work;
- take public responsibility for the content of the manuscript;
- be able to explain, defend, and verify the work in its entirety.
The use of AI tools does not diminish or replace these requirements.
6.2 Author Responsibility for AI-Assisted Content
Authors remain fully responsible for all content included in their manuscripts, regardless of whether AI or generative tools were used at any stage of preparation.
This responsibility includes, but is not limited to:
- ensuring the accuracy, originality, and integrity of the content;
- verifying that AI-assisted text, images, or outputs do not contain errors, fabricated information, or misrepresentations;
- confirming that AI use complies with journal policies and applicable ethical and legal standards.
Authors must not attribute inaccuracies, ethical breaches, or policy violations to the use of AI tools.
6.3 Collective Responsibility of Co-Authors
All co-authors share collective responsibility for the content of the manuscript. The corresponding author is responsible for coordinating disclosures related to AI use and for ensuring that all co-authors are informed of, and agree with, any AI-assisted contributions.
Disagreements among authors regarding AI use do not absolve individual authors of responsibility.
6.4 Reviewer Responsibility and Accountability
Reviewers are responsible for conducting independent, objective, and confidential evaluations of manuscripts. The use of AI tools by reviewers, where permitted, does not relieve them of responsibility for the content, quality, or integrity of their review reports.
Reviewers must ensure that:
- their evaluations reflect their own expert judgment;
- confidential manuscript content is not disclosed or exposed through AI tools;
- any AI-assisted support remains auxiliary and transparent where required.
6.5 Editorial Responsibility and Decision-Making
Editors retain full responsibility for all editorial decisions and actions. AI tools must not influence or replace editorial judgment, independence, or accountability.
Editors are responsible for:
- ensuring that AI use within editorial workflows complies with journal policies;
- evaluating disclosures related to AI use by authors and reviewers;
- addressing potential misuse of AI tools in accordance with editorial and ethical guidelines.
6.6 Accountability for Ethical and Policy Violations
All participants in the publication process are accountable for compliance with this policy. Violations related to the use of AI or generative tools are assessed in accordance with the journal’s ethical policies and may result in editorial action.
Accountability applies irrespective of:
- the level of technical expertise of the individual;
- the specific AI tool used;
- whether the violation resulted from misunderstanding, negligence, or deliberate misuse.
6.7 No Transfer of Liability to AI Systems
The journal does not recognize AI systems as agents capable of bearing ethical, legal, or scholarly responsibility. Liability for all content, decisions, and actions remains with the human individuals involved.
Claims that AI tools acted autonomously or unpredictably do not constitute a defense against policy violations.
- 7. Use of AI and Generative Tools in Manuscript Preparation
The journal permits the limited and responsible use of artificial intelligence and generative tools in manuscript preparation only where such use supports technical or linguistic aspects of writing and does not compromise human authorship, scientific integrity, or transparency.
This section provides specific guidance on acceptable and unacceptable uses of AI and generative tools during the preparation of manuscripts for submission.
7.1 General Conditions
Any use of AI or generative tools in manuscript preparation must:
- remain auxiliary and supportive in nature;
- not replace human intellectual contribution or scientific judgment;
- be consistent with the journal’s ethical, editorial, and authorship policies;
- be subject to full human review, verification, and accountability.
Authors are responsible for ensuring that AI-assisted contributions do not introduce inaccuracies, misrepresentations, or ethical concerns.
7.2 Writing and Text Development
AI tools may be used for:
- language editing, grammar correction, and stylistic refinement;
- improving clarity, coherence, and readability of text drafted by the authors;
- limited rephrasing for linguistic purposes.
AI tools must not be used to:
- generate substantive scientific content, interpretations, or conclusions;
- draft sections that present original scientific reasoning, including results or discussion;
- create or substantially rewrite abstracts, hypotheses, or analytical narratives.
7.3 Abstracts and Summaries
Authors must ensure that abstracts and summaries reflect their own scientific understanding and interpretation of the work.
The use of AI tools to generate or substantially draft abstracts, graphical abstracts, or lay summaries is not permitted. Any limited AI assistance used for language refinement must be fully reviewed and controlled by the authors.
7.4 Figures, Tables, and Visual Content
AI and generative tools must not be used to generate, alter, or enhance figures, images, graphs, or visual data in ways that affect scientific interpretation.
Any use of AI tools related to figures or images is subject to the journal’s Image Manipulation Policy and must not compromise data integrity or transparency.
7.5 Data Analysis and Interpretation
AI tools must not be used to:
- analyze research data in ways that are not explicitly described and justified;
- generate results, statistical outputs, or interpretations without full methodological transparency;
- replace established analytical methods with opaque or unverifiable AI-generated outputs.
Where AI-based analytical tools are legitimately used as part of the research methodology, their use must be clearly described, scientifically justified, and distinguished from AI-assisted manuscript preparation.
7.6 Citations, References, and Factual Accuracy
AI tools may be used to assist with organizing references or identifying potentially relevant literature; however:
- authors must verify the accuracy and existence of all cited sources;
- fabricated or hallucinated references are strictly prohibited;
- responsibility for citation accuracy rests entirely with the authors.
7.7 Ethical Statements and Declarations
AI tools must not be used to generate ethical approval statements, informed consent declarations, funding disclosures, conflict of interest statements, or other mandatory declarations.
All such statements must be prepared by the authors based on accurate, verifiable information and institutional approvals.
7.8 Disclosure of AI Use in Manuscript Preparation
Authors must disclose any use of AI or generative tools that materially contributed to manuscript preparation, in accordance with the journal’s disclosure requirements.
Disclosures must specify:
- the name of the tool used;
- the purpose of its use (e.g., language editing);
- confirmation that the authors retain full responsibility for the content.
Failure to provide accurate disclosure may result in editorial action.
- 8. Use of AI and Generative Tools in Data Analysis and Research Activities
The journal recognizes that artificial intelligence–based tools may be legitimately used as part of research methodologies in certain disciplines, including data-intensive, computational, or image-based research. However, such use must be scientifically justified, methodologically transparent, and fully accountable to human researchers.
This section distinguishes between the use of AI as a research method and the use of AI as a manuscript preparation aid, and sets clear requirements for acceptable practice.
8.1 AI as Part of the Research Methodology
The use of AI-based tools, algorithms, or models as an integral component of the research methodology is permitted where such use:
- is scientifically appropriate to the research objectives;
- is consistent with accepted standards in the relevant field;
- contributes meaningfully to data analysis, modeling, classification, or interpretation.
In such cases, AI is treated as a research instrument or analytical method, not as an author or decision-maker.
8.2 Methodological Transparency and Documentation
Authors must provide clear and detailed descriptions of any AI-based methods used in the research, including:
- the type and purpose of the AI model or algorithm;
- the role of AI in data processing, analysis, or interpretation;
- relevant software, tools, platforms, or frameworks used;
- key parameters, training procedures, and validation methods, where applicable.
Descriptions must be sufficient to allow critical evaluation of the methodology and assessment of reproducibility, in accordance with the journal’s Data Integrity and Reproducibility Policy.
8.3 Human Oversight and Interpretive Control
All AI-assisted analyses must be subject to active human oversight. Authors remain responsible for:
- designing the analytical framework;
- validating AI outputs against scientific expectations;
- interpreting results in their proper scientific context.
AI tools must not be used to autonomously generate interpretations, conclusions, or claims without human evaluation and verification.
8.4 Data Integrity and Quality Assurance
The use of AI tools must not compromise data integrity. Authors must ensure that:
- input data are accurate, appropriate, and ethically obtained;
- data preprocessing, augmentation, or transformation steps are transparently reported;
- AI outputs are critically assessed for bias, error, or artifacts.
The generation of synthetic data using AI is permitted only where it is scientifically justified, clearly labeled as such, and fully disclosed, and where it does not mislead readers regarding the nature of the underlying data.
8.5 Reproducibility and Verification
Authors must take reasonable steps to support the reproducibility and verification of AI-assisted research, including:
- providing sufficient methodological detail to allow independent evaluation;
- describing limitations related to proprietary models, restricted data, or computational constraints;
- retaining underlying data, models, or outputs for verification where ethically and legally permissible.
Where code, models, or data cannot be shared, authors must clearly justify such limitations.
8.6 Ethical and Legal Considerations
AI-based research must comply with all applicable ethical, legal, and regulatory requirements, including those related to:
- human participants and animal research;
- data protection, privacy, and consent;
- intellectual property and licensing.
Authors are responsible for ensuring that training data, model use, and analytical outputs do not infringe third-party rights or violate ethical standards.
8.7 Distinction Between Research Use and Manuscript Assistance
The use of AI as part of the research methodology must be clearly distinguished from the use of AI tools for manuscript preparation or language editing.
Failure to clearly differentiate these uses may result in requests for clarification or further editorial assessment.
8.8 Editorial Assessment of AI-Based Research
Manuscripts involving AI-based analytical methods may be subject to additional editorial or peer-review scrutiny to assess:
- methodological rigor and transparency;
- validity of AI-assisted findings;
- compliance with ethical and data integrity standards.
Editors may request additional information, documentation, or clarification regarding AI-based methods during the review process.
9. Use of AI and Generative Tools by Reviewers
Peer review is a confidential, independent, and human-centered process that relies on the expertise and professional judgment of reviewers. The use of artificial intelligence and generative tools by reviewers is permitted only in a strictly limited and responsible manner that does not compromise confidentiality, impartiality, or the integrity of the review process.
9.1 General Principles
Reviewers remain fully responsible for the content, quality, and recommendations expressed in their review reports. AI and generative tools must not replace the reviewer’s independent scientific judgment or be used to generate substantive evaluations of a manuscript.
Any permitted use of AI tools by reviewers must comply with this policy, the journal’s Peer Review Policy, and applicable ethical and data protection standards.
9.2 Permitted Uses
Reviewers may use AI-based tools solely as supportive aids for:
- improving the clarity, grammar, and organization of their own review reports;
- checking spelling, language consistency, or readability of comments drafted by the reviewer.
Such use must be limited to text authored by the reviewer and must not involve the processing or analysis of the manuscript's content.
9.3 Prohibited Uses
Reviewers must not use AI or generative tools to:
- upload, share, or process submitted manuscripts, figures, data, or supplementary materials through external AI systems;
- generate peer-review reports, evaluations, or recommendations;
- summarize, analyze, critique, or assess manuscript content using AI tools;
- assess flaws, novelty, or significance through automated or AI-generated evaluations;
- delegate ethical, methodological, or scientific judgment to AI systems.
9.4 Confidentiality and Data Protection
All manuscripts, reviewer reports, editorial correspondence, and associated materials are confidential. Reviewers must not expose such materials to AI systems that retain, reuse, or train on uploaded content or that otherwise compromise confidentiality.
Reviewers are responsible for ensuring that any AI tools used do not store, transmit, or reuse text or information related to the manuscript under review.
9.5 Disclosure and Transparency
Where the use of AI tools by reviewers goes beyond minor language refinement, reviewers must inform the editorial team and seek guidance.
Failure to comply with confidentiality or disclosure requirements may result in removal from the journal’s reviewer pool and further editorial action where appropriate.
9.6 Accountability and Ethical Oversight
Reviewers remain accountable for compliance with this policy. Any concerns regarding the inappropriate use of AI tools by reviewers may be investigated by the editorial team in accordance with the journal’s ethical policies.
The journal reserves the right to take appropriate action where violations of this policy are identified.
10. Use of AI and Generative Tools by Editors and Editorial Staff
Editorial decision-making is a core scholarly responsibility that requires independent human judgment, professional expertise, and ethical accountability. The use of artificial intelligence and generative tools by editors and editorial staff is permitted only where such use supports administrative efficiency and does not influence editorial independence or scholarly evaluation.
10.1 Editorial Independence and Human Judgment
All editorial decisions—including manuscript acceptance, rejection, revision, and post-publication actions—must be made by human editors. AI and generative tools must not be used to evaluate scientific merit, originality, ethical compliance, or to determine editorial outcomes.
Editors remain fully accountable for all editorial actions and decisions, regardless of any AI-assisted support used in administrative workflows.
10.2 Permitted Uses
Editors and editorial staff may use AI-based tools to support:
- administrative and workflow management tasks;
- tracking submission status, deadlines, and review timelines;
- identifying incomplete submissions or missing documentation;
- formatting checks or consistency reviews against journal guidelines;
- language refinement of editorial correspondence drafted by the editor.
Such uses must not involve the automated assessment of manuscript content or replacement of editorial judgment.
10.3 Prohibited Uses
Editors and editorial staff must not use AI or generative tools to:
- assess or score the scientific quality, novelty, or relevance of submissions;
- generate editorial decisions, recommendations, or communications on substantive matters;
- rank, prioritize, or filter manuscripts based on AI-generated evaluations;
- replace human oversight in ethical assessment or policy enforcement.
AI tools must not be used to create the appearance of objectivity or neutrality in decisions that require expert human judgment.
10.4 Use of AI in Screening and Quality Control
AI-based tools may be used to assist with technical screening tasks, such as identifying missing sections or flagging potential policy non-compliance, provided that:
- all flagged issues are reviewed and interpreted by a human editor;
- AI-generated flags do not constitute final determinations;
- editors retain full discretion in evaluating and acting on such information.
10.5 Confidentiality and Data Protection
Editors and editorial staff must ensure that manuscripts, reviewer reports, and confidential editorial communications are not uploaded to or processed by AI systems in ways that compromise confidentiality, data protection, or intellectual property rights.
Only tools that meet applicable data protection standards and that do not retain or reuse confidential content may be used.
10.6 Transparency and Oversight
The journal maintains oversight of AI use within editorial workflows. Editors are responsible for ensuring that any AI-assisted processes remain consistent with this policy and are subject to internal review where necessary.
The journal reserves the right to restrict or discontinue the use of specific AI tools within editorial operations if concerns arise regarding transparency, confidentiality, or ethical compliance.
10.7 Accountability for Policy Compliance
Editors and editorial staff are accountable for compliance with this policy. Any misuse of AI or generative tools may result in internal review and appropriate editorial or administrative action.
11. Transparency and Disclosure Requirements
Transparency regarding the use of artificial intelligence and generative tools is essential to maintaining trust in the scholarly record and ensuring accountability in the publication process. The journal requires clear, accurate, and proportionate disclosure of AI use where such use materially contributes to manuscript preparation, research activities, peer review, or editorial handling.
11.1 General Disclosure Principle
Any use of AI or generative tools that has a material impact on the preparation, evaluation, or handling of a manuscript must be transparently disclosed in accordance with this policy.
Disclosure is required even where AI use is permitted under this policy; disclosure of compliant, limited, and responsible use does not imply misconduct and will not, in itself, lead to negative evaluation.
11.2 Disclosure by Authors
Authors must disclose the use of AI or generative tools when such tools have been used in any of the following ways:
- language editing or stylistic refinement beyond trivial corrections;
- assistance in organizing or structuring the manuscript;
- preparation of figures, tables, or visual materials, where applicable;
- data analysis, modeling, or image processing as part of the research methodology.
Author disclosures must:
- identify the name of the tool or system used;
- describe the purpose and scope of its use;
- confirm that authors retain full responsibility for the content.
Disclosures must be included in a dedicated statement within the manuscript, as specified in the Author Guidelines.
11.3 Disclosure by Reviewers
Reviewers are not required to disclose minor AI use limited to language refinement of their own review reports. However, reviewers must inform the editorial team if AI tools are used in any way that goes beyond such limited support or that may raise confidentiality or ethical concerns.
Failure to disclose inappropriate AI use may result in editorial action.
11.4 Disclosure by Editors and Editorial Staff
Editors and editorial staff must ensure that any AI-assisted processes used within editorial workflows remain transparent, documented, and compliant with this policy.
Where AI tools are used in editorial screening or administrative processes, such use must not affect editorial independence or substitute for human judgment. Internal documentation of AI-assisted workflows may be maintained by the journal for oversight purposes.
11.5 Accuracy and Completeness of Disclosures
Disclosures must be accurate, specific, and complete. Vague, misleading, or incomplete disclosures are not acceptable.
Failure to disclose required AI use, or the provision of inaccurate or misleading information regarding AI involvement, may be treated as a breach of transparency and research integrity standards.
11.6 Editorial Review of Disclosures
All disclosures related to AI and generative tool use are subject to editorial assessment. Editors may request clarification, additional information, or revisions to disclosures where necessary to ensure transparency and policy compliance.
Where undisclosed or inappropriate AI use is identified, the journal may take action in accordance with its ethical and editorial policies.
12. AI-Generated Content, Images, and Data
The journal maintains a strict distinction between AI-assisted support and AI-generated scholarly content. While limited AI assistance may be permitted under this policy, the generation of scientific content, images, or data by AI raises significant concerns regarding originality, authorship, transparency, and research integrity.
This section sets clear conditions under which AI-generated content, images, or data may be evaluated and, where applicable, restricted or prohibited.
12.1 AI-Generated Textual Content
AI-generated textual content must not be presented as original scholarly work authored by humans. The generation of substantive scientific text—including hypotheses, analyses, interpretations, or conclusions—by AI tools is not permitted.
Where AI tools have been used to assist with language refinement or limited rephrasing, the final text must reflect the authors’ own intellectual contribution and be fully reviewed and verified by the authors.
12.2 AI-Generated Images and Visual Materials
The use of AI tools to generate, synthesize, or substantially alter images, figures, or visual materials is generally not permitted, particularly where such images represent experimental results, observations, or data.
AI-generated or AI-enhanced images must not:
- misrepresent original observations or experimental outcomes;
- introduce features not present in the original data;
- obscure, remove, or exaggerate relevant elements.
Any permitted use of AI tools in image processing must comply strictly with the journal’s Image Manipulation Policy and be transparently disclosed.
12.3 AI-Generated Data and Synthetic Data
The generation of data using AI or generative models, including synthetic datasets, is permitted only where:
- such data generation is scientifically justified and methodologically sound;
- the synthetic nature of the data is explicitly stated and clearly distinguished from empirical data;
- the use of synthetic data does not mislead readers regarding the origin, reliability, or limitations of the findings.
Synthetic data must not be used to replace empirical data without clear justification and editorial transparency.
12.4 Validation and Verification of AI Outputs
All AI-generated or AI-assisted outputs must be subject to rigorous human validation. Authors are responsible for:
- verifying the accuracy and scientific plausibility of AI outputs;
- identifying and correcting errors, hallucinations, or artifacts;
- ensuring consistency between AI-assisted outputs and the underlying data or methodology.
Failure to adequately validate AI outputs constitutes a breach of research integrity.
12.5 Disclosure and Labeling Requirements
Any AI-generated content, images, or data that are permitted under this policy must be clearly disclosed and appropriately labeled within the manuscript.
Disclosures must:
- specify the nature and purpose of AI generation;
- describe how AI outputs were validated;
- distinguish AI-generated material from human-generated content.
12.6 Editorial Assessment of AI-Generated Materials
Manuscripts containing AI-generated or AI-assisted content may be subject to additional editorial or peer-review scrutiny.
Editors may request:
- original, unprocessed data or images;
- documentation of AI tools, parameters, or workflows;
- clarification of the role of AI in content or data generation.
Failure to provide satisfactory clarification may result in editorial action, including rejection of the manuscript.
13. Data Protection, Confidentiality, and Intellectual Property
The use of artificial intelligence and generative tools in scholarly publishing raises significant considerations related to data protection, confidentiality, and intellectual property. The journal requires that all participants in the publication process ensure that the use of such tools complies with applicable legal, ethical, and contractual obligations.
13.1 Data Protection and Privacy
All uses of AI and generative tools must comply with applicable data protection and privacy regulations, including those governing the processing of personal data, sensitive information, and research data.
Authors, reviewers, editors, and editorial staff are responsible for ensuring that:
- personal data are processed lawfully, fairly, and transparently;
- only data necessary for the intended purpose are used;
- appropriate safeguards are applied to prevent unauthorized access, disclosure, or misuse.
AI tools must not be used in ways that violate data protection obligations or compromise the privacy of individuals whose data are included in research or editorial materials.
13.2 Confidentiality of Unpublished Materials
Unpublished manuscripts, reviewer reports, editorial correspondence, and associated materials are confidential and must be protected throughout the editorial and peer-review process.
Such materials must not be:
- uploaded to external AI systems that store, reuse, or train on submitted content;
- processed through AI tools that lack adequate confidentiality or security guarantees;
- shared with third parties through AI-assisted platforms without explicit authorization.
This obligation applies to all participants in the publication process.
13.3 Institutional and Contractual Obligations
Authors and editors must ensure that the use of AI and generative tools does not conflict with institutional policies, funding agreements, or contractual obligations related to data use, confidentiality, or intellectual property.
Where restrictions apply, such limitations must be respected and, where relevant, disclosed.
13.4 Intellectual Property and Copyright
The use of AI tools must comply with applicable intellectual property and copyright laws. Authors are responsible for ensuring that:
- AI-assisted content does not infringe third-party rights;
- any reuse of copyrighted material is properly authorized and attributed;
- licensing requirements associated with AI tools or generated outputs are respected.
The journal does not accept unfamiliarity with the training data or licensing terms of AI tools as justification for copyright or licensing violations.
13.5 Ownership of AI-Assisted Content
Authors retain responsibility for the content of their manuscripts, including AI-assisted components. The use of AI tools does not alter authorship, ownership, or accountability for the work.
Authors must ensure that the terms of use of AI tools employed do not conflict with their ability to grant the journal the rights necessary for publication.
13.6 Protection of Proprietary and Sensitive Information
AI tools must not be used to process proprietary, confidential, or sensitive information in ways that expose such information to unauthorized parties or external systems.
Where research involves sensitive datasets, proprietary methods, or restricted-access materials, authors and editors must exercise heightened caution in the use of AI tools.
13.7 Editorial Oversight
The journal reserves the right to request clarification regarding data protection, confidentiality, or intellectual property issues related to AI use. Editors may require additional documentation or assurances to ensure compliance with legal and ethical standards.
Failure to comply with data protection, confidentiality, or intellectual property requirements may result in editorial action in accordance with the journal’s ethical policies.
14. Compliance with Ethical and Legal Standards
The journal requires that all uses of artificial intelligence and generative tools comply with applicable ethical principles, legal requirements, and accepted standards of scholarly publishing. Compliance is essential to safeguarding research integrity, protecting participants and stakeholders, and maintaining trust in the scholarly record.
14.1 Alignment with Research and Publication Ethics
The use of AI and generative tools must be consistent with established principles of research integrity and publication ethics, including honesty, transparency, accountability, and fairness.
AI tools must not be used in ways that:
- misrepresent authorship, originality, or intellectual contribution;
- obscure the provenance of data, images, or analyses;
- compromise the reliability or interpretability of research findings.
All AI-related practices are subject to the same ethical expectations as other research and publication activities.
14.2 Compliance with Applicable Laws and Regulations
Authors, reviewers, editors, and editorial staff are responsible for ensuring that their use of AI and generative tools complies with applicable laws and regulations, including but not limited to:
- data protection and privacy legislation;
- intellectual property and copyright law;
- regulations governing human participants and animal research;
- contractual and funding-related obligations.
The journal does not provide legal advice and expects participants to ensure their own compliance with relevant legal frameworks.
14.3 Accountability for Ethical and Legal Breaches
Non-compliance with ethical or legal standards related to AI use may result in editorial action, regardless of whether the breach arose from deliberate misuse, negligence, or misunderstanding.
Claims that ethical or legal violations resulted from the autonomous behavior of AI systems are not accepted as mitigating factors.
14.4 Ethical Review and Editorial Oversight
The journal reserves the right to conduct editorial review of AI-related practices where ethical or legal concerns arise. Editors may request additional information, documentation, or clarification to assess compliance.
Where necessary, the journal may consult institutional ethics committees, legal advisors, or external experts to evaluate complex or sensitive cases.
14.5 Consistency with Other Journal Policies
This policy operates in conjunction with the journal’s other editorial and ethical policies, including those addressing authorship, research ethics, data integrity, image manipulation, plagiarism, peer review, and post-publication actions.
In cases of inconsistency or overlap, the most restrictive applicable policy shall prevail.
14.6 Ongoing Responsibility and Awareness
Given the evolving nature of AI technologies and regulatory frameworks, authors, reviewers, and editors are expected to remain informed about relevant ethical and legal developments related to AI use.
The journal may update this policy to reflect changes in ethical standards, legal requirements, or best practices.
15. Editorial Assessment and Oversight
The journal applies active editorial oversight to ensure that the use of artificial intelligence and generative tools complies with this policy and with the journal’s broader ethical and editorial standards. Editorial assessment is conducted in a fair, proportionate, and context-sensitive manner.
15.1 Scope of Editorial Assessment
Editorial assessment of AI and generative tool use may occur at any stage of the publication process, including:
- initial submission screening;
- peer review;
- editorial decision-making;
- post-publication review.
Editors may evaluate disclosures, manuscript content, methodologies, and supporting materials to assess compliance with this policy.
15.2 Assessment of Disclosures and Transparency
Editors review AI-related disclosures to determine whether:
- the use of AI tools is consistent with permitted practices;
- disclosures are accurate, complete, and sufficiently detailed;
- AI use has been appropriately distinguished from human intellectual contributions.
Where disclosures are unclear or incomplete, editors may request clarification, revision, or additional information from the authors.
15.3 Evaluation of AI-Assisted Research Methods
For manuscripts involving AI-based analytical methods, editors may assess:
- the scientific justification for the use of AI tools;
- the transparency and reproducibility of the methodology;
- compliance with data integrity, ethical, and legal standards.
Additional peer review or expert consultation may be requested where AI-based methods raise complex methodological or ethical questions.
15.4 Requests for Documentation and Verification
Editors may request supporting documentation related to AI use, including:
- descriptions of AI tools, models, or workflows;
- information on training data, parameters, or validation procedures, where relevant;
- original or unprocessed data or images for verification.
Failure to provide requested information may affect editorial decisions.
15.5 Handling of Suspected Policy Violations
Where potential misuse of AI or generative tools is identified, editors will assess the issue in accordance with the journal’s ethical policies and principles of fairness and proportionality.
Possible actions may include:
- requests for clarification or correction;
- revision of disclosures or manuscript content;
- rejection of the manuscript;
- post-publication actions, where applicable.
15.6 Editorial Independence and Discretion
Editorial assessments related to AI use are conducted independently and are not influenced by external parties, commercial considerations, or the availability of AI tools.
The journal retains full editorial discretion in determining appropriate actions in response to AI-related concerns.
15.7 Documentation and Record-Keeping
The journal may maintain internal records of AI-related assessments and decisions for oversight, consistency, and accountability purposes, in accordance with data protection and confidentiality requirements.
16. Misuse of AI and Generative Tools
Misuse of artificial intelligence and generative tools refers to any application of such tools that violates this policy, undermines research integrity, compromises transparency or confidentiality, or misrepresents human authorship and accountability.
Misuse may occur intentionally or unintentionally and is evaluated based on its nature, severity, and potential impact on the scholarly record.
16.1 Forms of Misuse
Misuse of AI and generative tools includes, but is not limited to:
- presenting AI-generated content, images, or data as original human-authored work;
- using AI tools to fabricate, falsify, manipulate, or selectively generate data, images, or analyses;
- delegating scientific reasoning, interpretation, or conclusions to AI systems;
- using AI to generate peer-review reports or editorial evaluations;
- uploading confidential or unpublished materials to AI systems that compromise data protection or confidentiality;
- failing to disclose required AI use or providing misleading disclosures;
- using AI tools to evade plagiarism detection, ethical review, or editorial scrutiny;
- attributing errors, inaccuracies, or ethical breaches to AI systems in an attempt to shift responsibility.
16.2 Intent and Responsibility
The assessment of misuse does not depend solely on intent. Lack of familiarity with AI tools, misunderstanding of their capabilities, and unintentional misuse do not exempt individuals from responsibility.
All participants in the publication process are expected to exercise due diligence and informed judgment when using AI or generative tools.
16.3 Severity and Contextual Evaluation
Cases of suspected misuse are evaluated contextually, taking into account:
- the role of the individual involved (author, reviewer, editor);
- the nature and extent of AI use;
- the impact on research integrity, transparency, or trust;
- whether corrective action can adequately address the issue.
The journal applies a proportional approach, distinguishing between minor breaches and serious violations.
16.4 Reporting and Identification of Misuse
Potential misuse of AI or generative tools may be identified during:
- editorial screening;
- peer review;
- post-publication review;
- third-party notifications or concerns.
Editors may request clarification or additional information when misuse is suspected.
16.5 Relation to Research Misconduct
Serious misuse of AI and generative tools—particularly where it involves data fabrication, falsification, plagiarism, or deliberate misrepresentation—may constitute research misconduct and will be handled in accordance with the journal’s Publication Ethics and Malpractice Policy.
17. Consequences of Non-Compliance
Failure to comply with this policy may result in editorial action intended to protect the integrity of the scholarly record, ensure fairness, and uphold ethical and professional standards. Consequences are determined based on the nature, severity, and impact of the non-compliance, and are applied in a proportionate and context-sensitive manner.
17.1 General Principles
Consequences of non-compliance are guided by the principles of:
- fairness and proportionality;
- transparency and due process;
- consistency with the journal’s ethical and editorial policies.
Non-compliance may be addressed at any stage of the publication process, including before or after publication.
17.2 Actions Prior to Publication
Where non-compliance is identified before publication, editorial actions may include:
- requests for clarification, correction, or additional disclosure;
- revision of manuscript content or disclosures;
- rejection of the manuscript;
- suspension of editorial processing pending further assessment.
Failure to respond adequately to editorial requests may result in rejection.
17.3 Actions After Publication
Where non-compliance is identified after publication, the journal may take appropriate corrective action, including:
- publication of corrections or clarifications;
- issuance of an expression of concern;
- retraction of the article, where warranted.
Post-publication actions are conducted in accordance with the journal’s policies on corrections, retractions, and expressions of concern.
17.4 Consequences for Reviewers and Editors
Non-compliance by reviewers or editors may result in:
- removal from the reviewer pool or editorial role;
- restrictions on future participation in the journal’s processes;
- internal review or corrective measures, where appropriate.
Such actions are taken to preserve the integrity and credibility of the peer-review and editorial processes.
17.5 Notification and Documentation
Individuals affected by editorial actions related to non-compliance will be informed of the nature of the concern and the action taken, where appropriate.
The journal may maintain internal records of non-compliance cases for oversight, consistency, and accountability, in accordance with data protection requirements.
17.6 No Mitigation Through AI Attribution
Attributing non-compliance, errors, or ethical breaches to the behavior of AI or generative tools does not mitigate responsibility or consequences.
Responsibility for compliance remains with the human participants involved.
18. Post-Publication Issues and Corrections
The journal is committed to maintaining the accuracy, transparency, and integrity of the scholarly record. Where concerns related to the use of artificial intelligence or generative tools arise after publication, the journal will address such issues in a fair, transparent, and proportionate manner.
18.1 Identification of Post-Publication Issues
Post-publication concerns related to AI or generative tool use may be identified through:
- author notification;
- editorial review;
- reader or third-party inquiries;
- institutional or external reports.
All concerns are subject to editorial assessment in accordance with the journal’s ethical and editorial policies.
18.2 Editorial Assessment and Investigation
Editors will assess the nature and potential impact of the identified issue, including:
- whether AI use was appropriately disclosed;
- whether AI use complied with this policy at the time of publication;
- whether the issue affects the validity, reliability, or interpretation of the work.
Where necessary, editors may request clarification, documentation, or additional information from the authors.
18.3 Corrective Actions
Depending on the findings of the editorial assessment, the journal may take one or more of the following actions:
- publication of a correction or clarification;
- issuance of an expression of concern;
- retraction of the article, where the integrity of the work is substantially compromised.
Corrective actions are taken in accordance with the journal’s policies on corrections, retractions, and expressions of concern.
18.4 Transparency of Post-Publication Actions
Any post-publication action related to AI or generative tool use will be clearly identified and permanently linked to the original article.
Notices will describe the nature of the issue and the reason for the action, while respecting confidentiality and due process.
18.5 Author Cooperation and Responsibility
Authors are expected to cooperate fully with post-publication assessments and investigations related to AI use.
Failure to cooperate or to provide satisfactory clarification may influence the nature and outcome of post-publication actions.
18.6 Relation to Other Post-Publication Mechanisms
This section operates in conjunction with the journal’s broader policies on post-publication discussion, corrections, retractions, and appeals.
Issues related to AI use may also be addressed through established post-publication discussion mechanisms where appropriate.
19. Policy Review and Updates
The journal recognizes that artificial intelligence and generative technologies are evolving rapidly, as are the ethical, legal, and professional standards governing their use in scholarly publishing. Accordingly, this policy is subject to periodic review and update.
19.1 Periodic Review
This policy will be reviewed regularly by the editorial team to ensure continued relevance, clarity, and alignment with:
- developments in AI and generative technologies;
- emerging ethical standards and best practices in scholarly publishing;
- changes in legal and regulatory frameworks;
- guidance from recognized organizations concerned with research integrity and publication ethics.
19.2 Policy Updates and Revisions
The journal reserves the right to update, revise, or amend this policy as necessary. Updates may be made in response to:
- technological developments;
- identified gaps or ambiguities in the policy;
- practical experience gained through editorial application;
- external recommendations or requirements.
Revisions are intended to strengthen, not weaken, the journal’s commitment to research integrity and transparency.
19.3 Communication of Changes
Substantive changes to this policy will be communicated through appropriate channels, including updates on the journal’s website or editorial communications.
Authors, reviewers, and editors are responsible for ensuring that they are familiar with the most current version of this policy at the time of submission, review, or editorial handling.
19.4 Applicability of Updated Policies
Unless otherwise stated, updated versions of this policy apply to new submissions and ongoing editorial processes from the date of implementation. Previously published articles remain subject to the policy version in effect at the time of publication, unless post-publication issues warrant review under updated standards.
20. Contact and Further Information
Questions regarding this AI and Generative Tools Policy, its interpretation, or its application may be directed to the journal’s editorial office.
Authors, reviewers, and editors are encouraged to contact the editorial office in advance of submission or review if clarification is needed regarding the appropriate and compliant use of AI or generative tools.
Inquiries related to specific manuscripts or editorial decisions should be submitted through the journal’s official communication channels to ensure proper documentation and fair handling.
The journal welcomes constructive feedback on this policy and may consider such input when reviewing or updating the policy in accordance with its established procedures.