Friend or foe? Exploring the implications of large language models on the science system
Authors: Fecher, B., Hebing, M., Laufer, M., Pohle, J., Sofsky, F.
Published in: AI & Society, 38
Year: 2023
Type: Academic article
DOI: 10.1007/s00146-023-01791-1
The advent of ChatGPT by OpenAI has prompted extensive discourse on its potential implications for science and higher education. While the impact on education has been a primary focus, there is limited empirical research on the effects of large language models (LLMs) and LLM-based chatbots on science and scientific practice. To investigate this further, we conducted a Delphi study involving 72 researchers specializing in AI and digitization. The study focused on applications and limitations of LLMs, their effects on the science system, ethical and legal considerations, and the required competencies for their effective use. Our findings highlight the transformative potential of LLMs in science, particularly in administrative, creative, and analytical tasks. However, risks related to bias, misinformation, and quality assurance need to be addressed through proactive regulation and science education. This research contributes to informed discussions on the impact of generative AI in science and helps identify areas for future action.
Visit publication
Connected HIIG researchers
Benedikt Fecher, Dr.
Marcel Hebing, Prof. Dr.
Melissa Laufer, Dr. (on parental leave)
Jörg Pohle, Dr.
Fabian Sofsky
- Open Access
- Transdisciplinary
- Peer Reviewed