Please register if you plan to attend the AI symposium organized by the VVSOR-SWS on Nov 28, 2024, in Utrecht, and indicate whether you will join the drinks gathering afterwards.
Note: One of the original speakers is unable to join, but we’re pleased to announce that Frank van Rijnsoever will be joining us to complete the program.
You can find the abstracts below.
Abstracts
Balazs Aczel (Eötvös Loránd University)
Consensus-based principles for AI-usage in research
The future of scientific research is expected to be significantly shaped by Artificial Intelligence (AI), but without a well-defined conceptual framework, guidelines, or policy, the trajectory of this transformative development remains uncertain. While AI could drastically increase the quality and efficiency of research, there is also a considerable risk of compromising research safety, degrading scientific quality, and evading responsibility for potentially adverse outcomes if guiding principles are not rigorously upheld. Our interdisciplinary expert panel reached a consensus on such overarching principles and the underlying values for responsible AI usage in research, outlining essential preparatory steps and offering a comprehensive checklist to ensure adherence to these standards.
Frank van Rijnsoever (Utrecht University)
Unbiased assessment of adaptive expertise using design scenarios and artificial intelligence
Higher education has the responsibility to educate students to take on grand societal challenges (GSCs), such as climate change or threats to global health. To this end, students and professionals need to develop adaptive expertise: the ability to perform at a high level when facing new or changing problem situations. Scholars struggle to measure adaptive expertise without bias. Various questionnaires exist that measure adaptive expertise, but these tools rely on self-assessment, which can introduce bias. We currently lack an unbiased assessment tool for adaptive expertise. A promising direction lies in the development of design scenarios, which present real-world, open-ended problems for students to solve.
In this project, we developed a series of 72 design scenarios using artificial intelligence (AI) (GPT-4), which varied systematically in two aspects: complexity and scientific domain. Next, we asked a panel of 530 students and young professionals to propose solutions to the problems presented in four of these design scenarios. We used Llama3, a locally run large language model, to assess the extent to which the solutions demonstrated adaptive expertise. After several tests, we concluded that the AI assesses the solutions in a consistent and reliable manner. Moreover, we related the assessment of the solutions to three variables that should be associated with adaptive expertise: (1) the relatedness of the problem domain to the content of the educational program, (2) the relatedness of the complexity of the task to the level of the educational program, and (3) self-assessed adaptive expertise. Our regression analyses show that all variables are related to the assessed adaptive expertise in the theoretically expected manner, providing evidence for the validity of our approach. Based on this, we discuss the theoretical and practical implications of our findings.
Sebastiaan Mathôt (Rijksuniversiteit Groningen)
Developing custom teaching tools that use large language models
Large language models have taken the world by storm. As teachers, our focus has so far mainly been on dealing with students’ use of ChatGPT for written assignments. We have taken few steps to integrate LLMs into our teaching in ways that go beyond ChatGPT or similar web interfaces, in part because the underlying technology seems daunting and out of reach. In this talk, I will show how we can integrate LLMs into our teaching, using as an example Heymans, an AI tutor that we are developing in Groningen and have already piloted in a number of courses. Heymans mainly provides automated grading of open-ended exams and interactive formative quizzes based on textbook material, making it possible to use these valuable teaching methods with large numbers of students. I will argue that we should not let ourselves become dependent on commercial providers (OpenAI, Google, Microsoft, etc.), but rather that we can and should leverage the considerable resources and knowledge that we, as educational organizations, have at our disposal to take ownership of this new technology so that we can use it on our terms.