Generative AI and Automated Tools
Use and Disclosure Policy
To preserve scholarly integrity, The Journal of Social, Economic and Politics Research (SEPR) adopts the following principles regarding generative AI, large language models (LLMs), and automated tools (e.g., for text, images, code, data analysis):
For Authors:
- No authorship by AI: AI systems and tools cannot be listed as authors. Human authors bear full responsibility for all content, including text, data, figures, and references, regardless of tool use.
- Mandatory disclosure: Any substantive use of generative AI or automated tools (e.g., drafting, paraphrasing, translation beyond routine grammar checks; generating figures/tables/code; data analysis/modeling; image manipulation) must be clearly disclosed in the manuscript (e.g., in the Methods or Acknowledgments section), specifying the tool name, provider, version/model, and purpose of use.
- Accountability and verification: Authors must verify the accuracy, originality, permissions, and citation completeness of AI-assisted content. Hallucinated references, fabricated data or results, or undisclosed AI-generated text or graphics constitute misconduct.
- Data protection: Do not upload confidential, proprietary, or personal/identifiable data to public AI tools without explicit consent and legal/ethical clearance. Authors are responsible for compliance with data protection laws and institutional policies.
- Image and media integrity: AI-generated or AI-enhanced visuals must be labeled as such in the figure caption and described in the Methods. Manipulation that misleads or alters scientific meaning is prohibited.
- Permitted language editing: Use of tools for language editing, grammar, or formatting is allowed and need not be disclosed if limited to superficial edits that do not alter the scientific content. Substantive rewriting requires disclosure as above.
For Editors and Reviewers:
- Confidentiality first: Editors and reviewers must not upload any part of a submitted manuscript (including supplementary files) to public AI tools. If institutional, secure AI tools are used (e.g., to check clarity or support triage), they must comply with confidentiality and data-security requirements.
- Human judgment: Editorial and peer-review decisions are made by humans. AI may assist (e.g., language checks, workflow triage), but it does not replace editorial judgment.
- Screening: The journal may use AI-assisted utilities for similarity checks, reference validation, image integrity screening, or statistical anomaly detection. Any flags are screening indicators, not determinations; final assessments are made by editors.