
AI in ECU Research: Authorship & Publishing

How should AI be addressed in authorship?

Although AI typically does not qualify for authorship, researchers must establish clear and ethical practices for recognising AI contributions, through acknowledgments, disclaimers, or statements.

If AI was used in significant ways (e.g., for data analysis, literature review, or writing), acknowledge the use of AI tools or platforms in the acknowledgments section of the paper. If AI tools were used to assist with parts of the research, clarify the extent of the AI’s involvement.

Researchers must be transparent about the specific roles AI played (e.g., assisting with data analysis, running simulations, generating graphics).

For examples of how to acknowledge the assistance of AI in your research, please refer to the section on AI Acknowledgment.

 

AI's role in research

Under current guidance, AI:

  • Cannot be listed as an author; its contributions must be acknowledged in research outputs
  • Should support human judgement and oversight, not replace them

ARC & NHMRC perspective

The NHMRC and ARC have clear rules about authorship.

According to these guidelines, AI cannot be an author. An author must make a significant intellectual or scholarly contribution to the research and its outcomes.

  • Researchers are advised to be cautious when using AI in preparing grant applications due to potential issues with authorship and intellectual property.
  • Institutions must ensure all information is accurate, current, and compliant with the Australian Code for Responsible Conduct of Research.
  • Peer reviewers must maintain confidentiality as mandated by ARC policies; entering application materials into AI tools constitutes a confidentiality breach.
  • Reviewers should avoid using AI in assessments, as it can compromise the quality and integrity of peer review. Reviews must be high quality and constructive, not generic AI-generated comments.
  • Researchers must ensure the accuracy of all information in their applications and are held accountable for any misinformation, including errors from AI use.
  • Both applicants and their institutions must certify the factual accuracy of application content, taking responsibility for any inaccuracies introduced by generative AI.
  • Peer reviewers are prohibited from using generative AI tools to process or evaluate any part of a grant application, as this could compromise the confidentiality and integrity of the NHMRC's evaluation process.

 

Major publishers' perspective

 

Scholarly publishers are releasing policies on the use of generative AI by authors, or updating their author guidelines to specify how AI may be used and acknowledged appropriately. Below are key points from, and links to, the updated policies and guidance of major publishers. Across these policies:
  • AI cannot be credited as an author
  • Researchers must disclose AI’s role in their work
  • AI-generated content must be accurate and original

There are several potential applications of AI governance platforms in scholarly publishing:

  • Checking whether AI-generated content adheres to publisher guidelines and ethical principles.
  • Detecting and monitoring bias in recommendation systems, such as reviewer and journal suggestions.
  • Monitoring integrity detection tools for fairness and accuracy.

 
Springer Nature, for example:

  • Does not attribute authorship to AI.
  • Does not allow the inclusion of generative AI images in publications.
  • Requires authors to declare any AI tools used to generate text.
  • Requires human accountability for the final version of the text.
  • Requires authors to be the owners of the original work.

"Large Language Models (LLMs), such as ChatGPT, do not currently satisfy our authorship criteria (imprint editorial policy link). Notably an attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs. Use of an LLM should be properly documented in the Methods section (and if a Methods section is not available, in a suitable alternative part) of the manuscript. The use of an LLM (or other AI-tool) for “AI assisted copy editing” purposes does not need to be declared. In this context, we define the term "AI assisted copy editing" as AI-assisted improvements to human-generated texts for readability and style, and to ensure that the texts are free of errors in grammar, spelling, punctuation and tone. These AI-assisted improvements may include wording and formatting changes to the texts, but do not include generative editorial work and autonomous content creation. In all cases, there must be human accountability for the final version of the text and agreement from the authors that the edits reflect their original work."

https://www.springer.com/gp/editorial-policies/artificial-intelligence--ai-/25428500

Elsevier advises:

“Where authors use generative AI and AI-assisted technologies in the writing process, these technologies should only be used to improve readability and language of the work and not to replace key authoring tasks such as producing scientific, pedagogic, or medical insights, drawing scientific conclusions, or providing clinical recommendations.”

Wiley has released guidelines for the responsible use of AI in authorship. These guidelines and FAQs offer advice on using AI tools in manuscript preparation while preserving the author's voice and expertise, ensuring content reliability, safeguarding intellectual property and privacy, and adhering to ethical standards.

Wiley's position statement on AI content "scraping" emphasises the following key points:

  • Respect for Intellectual Property Rights: AI advancements should respect intellectual property rights, including fair compensation and proper attribution for content creators.
  • Authorisation: AI developers must obtain permission before using Wiley content for AI development, training, or implementation.
  • Transparency: Clear attribution and data provenance are essential for ethical AI development.
  • Licensing Frameworks: Wiley has developed flexible and fair licensing frameworks tailored to various use cases and development needs.
  • Ethical Practices: Wiley advocates for ethical and legal data sourcing practices and encourages industry-wide adoption of proper licensing practices.

 

Navigating publishers' guidelines for using generative AI tools in research or publishing involves understanding and adhering to specific policies related to the ethical and responsible use of AI. 

The following step-by-step approach can help you navigate these guidelines:

  1. Check the publisher’s website or author guidelines for specific mentions of AI usage, including any restrictions, allowances, or recommendations.

Tip: Look for terms like "AI-generated content," "ethical use of AI," or "data integrity."

  2. Ensure that the AI-generated content aligns with the publisher’s ethical standards, especially concerning authorship, transparency, and plagiarism.

Tip: AI should not be used to mislead, and you should always disclose the involvement of AI tools where applicable.

  3. Ensure AI tools don’t generate content that infringes on copyright. Some AI tools are trained on pre-existing datasets, so make sure your use of generated material doesn’t violate intellectual property rights.

Tip: Verify that AI-generated content doesn’t duplicate or mimic copyrighted works without proper attribution.

  4. If using AI on sensitive or identifiable data, ensure it complies with the publisher’s data privacy and security policies.

Tip: Adhere to applicable data privacy regulations (e.g., Australia's Privacy Act).

  5. Some publishers require clear acknowledgment of AI tools used in your research, including which tools were used and how they contributed.

Tip: Refer to AI tools explicitly in your methodology and include them in the acknowledgments or relevant sections of your research.

  6. Some publishers have specific rules about what types of AI-generated content are allowed (e.g., text, images, code). Make sure your use of generative AI aligns with these restrictions.

  7. If your work involves novel or experimental use of generative AI tools, contact the publisher to seek permission or clarification.

Tip: Publishers may allow new AI methods, but it is safer to obtain prior approval, especially for unconventional uses.

  8. AI guidelines may evolve as new tools and ethical concerns emerge. Regularly check for updates to the publisher's AI policies to stay compliant.

Tip: Subscribe to newsletters or alerts from publishers for the latest changes to AI usage rules.