Guidelines on the responsible use of generative AI in research developed by the European Research Area Forum
With the rapid spread of this technology across all domains, including science, these recommendations address key opportunities and challenges. Building on the principles of research integrity, they offer guidance to researchers, research organisations, and research funders to ensure a coherent approach across Europe. The principles framing the new guidelines are based on existing frameworks such as the European Code of Conduct for Research Integrity and the guidelines on trustworthy AI.
AI is transforming research, making scientific work more efficient and accelerating discovery. While generative AI tools offer speed and convenience in producing text, images and code, researchers must also be mindful of the technology’s limitations and risks, including plagiarism, the disclosure of sensitive information, and inherent biases in the models.
Key takeaways from the guidelines include:
- Researchers should refrain from using generative AI tools in sensitive activities such as peer review or evaluations, and should use generative AI in ways that respect privacy, confidentiality, and intellectual property rights.
- Research organisations should facilitate the responsible use of generative AI and actively monitor how these tools are developed and used within their organisations.
- Funding organisations should support applicants in using generative AI transparently.
As generative AI is constantly evolving, these guidelines will be updated with regular feedback from the scientific community and stakeholders.