Friend or foe? Exploring the implications of large language models on the science system

Authors: Fecher, B., Hebing, M., Laufer, M., Pohle, J., & Sofsky, F.
Published in: AI & Soc
Year: 2023
Type: Academic article
DOI: 10.1007/s00146-023-01791-1

The advent of ChatGPT by OpenAI has prompted extensive discourse on its potential implications for science and higher education. While the impact on education has been a primary focus, there is limited empirical research on the effects of large language models (LLMs) and LLM-based chatbots on science and scientific practice. To investigate this further, we conducted a Delphi study involving 72 researchers specializing in AI and digitization. The study focused on applications and limitations of LLMs, their effects on the science system, ethical and legal considerations, and the required competencies for their effective use. Our findings highlight the transformative potential of LLMs in science, particularly in administrative, creative, and analytical tasks. However, risks related to bias, misinformation, and quality assurance need to be addressed through proactive regulation and science education. This research contributes to informed discussions on the impact of generative AI in science and helps identify areas for future action.


Connected HIIG researchers

Benedikt Fecher, Dr.

Associate Researcher & Former Head of Research Programme: Knowledge & Society

Marcel Hebing, Prof. Dr.

Associated Researcher: Knowledge & Society

Melissa Laufer, Dr. (on parental leave)

Head of Research Programme: Knowledge & Society

Jörg Pohle, Dr.

Head of Research Programme: Actors, Data and Infrastructures

Fabian Sofsky

Former Associated Researcher: The Evolving Digital Society
