Context
The rise of Generative AI (GenAI) tools and their impact on teaching, learning, and assessment practices is currently a significant topic of discussion in higher education. Since November 2022, when OpenAI introduced ChatGPT, its conversational AI chatbot, educators and students have been grappling with the capabilities of these new tools, which now include similar systems from rival technology companies such as Google’s Gemini, GitHub’s Copilot, Microsoft’s Copilot, and Anthropic’s Claude.
These tools provide personalised, instant help with tasks such as summarising literature, brainstorming ideas, and writing code and text, although they have known limitations in transparency and accuracy.
As educators, we saw the potential of these tools and wanted to consider how best to incorporate them into our teaching and assessments to support our students. To do this effectively, we explored the practical applications of these tools to understand how they might enhance programming skills and critical thinking, aiming to fill a knowledge gap and gain insights that could guide how we approach education in light of these new technologies.
Project
The objective of the study GENIAL (Generative AI Tools as a Catalyst for Learning) was to explore how university students in full-time undergraduate and postgraduate courses use popular GenAI tools in their learning and assessment.
The project initially launched as a small focus group initiative in June 2023, evaluating the efficacy of code generation tools. Over the 2023–2024 academic year, as interest grew in the field, the initiative evolved into a multidisciplinary research project, investigating the learning behaviours of around 220 students in four undergraduate and three postgraduate courses, including quantitative and qualitative subjects. The courses evaluated ran in the autumn and winter terms of 2023–2024 in the LSE Departments of Statistics, Management, and Public Policy and in the Data Science Institute.
We used various data collection methods to gather reliable and high-quality data. During the first term, we ran a survey at the end of dedicated in-class activities in which students were asked to work independently and to use the chatbots as an aid. In the second term, our data collection was no longer restricted to the use of chatbots in class: we expanded our efforts to include surveys and focus groups, and each week we asked participants to share chat logs related to their learning and participation in the course, both in and out of the classroom. We also collected students’ assignment submissions and chat logs.
Findings
The project found that although GenAI tools can be very helpful learning aids for some students, the growing over-reliance of higher education students on these tools for learning and assessment risks circumventing rather than enhancing the learning process. The biggest pedagogical challenge is that students may use the tools to bypass the learning process and the development of critical skills.
We argue that students may rely on GenAI differently for learning and for assessments, and that they tend to focus more on the output or performance than on the learning journey itself. We also observed that some students use GenAI platforms as a substitute for learning rather than as a tool to enhance learning.
Our findings raise questions about how GenAI can be successfully integrated into the curriculum without jeopardising learning. They informed a set of policy recommendations on curriculum planning and assessment design, so that educators can adapt to these challenges and incorporate GenAI as an aid to learning.
Outputs
Dorottya Sallai, Jonathan Cardoso-Silva, Marcos E. Barreto, Francesca Panero, Ghita Berrada, and Sara Luxmoore. “Approach Generative AI Tools Proactively or Risk Bypassing the Learning Process in Higher Education”, November 2024.
Dorottya Sallai, Jon Cardoso-Silva, and Marcos Barreto. “To improve their courses, educators should respond to how students actually use AI”, LSE Impact Blog, 22 May 2024.
Contribution to DigiCo’s FAQ Document: Unpacking the basics of AI in support of adult educators
Project leads
Dr Marcos Barreto, Assistant Professor of Data Science (Education), Department of Statistics, LSE
Dr Jon Cardoso-Silva, Assistant Professor of Data Science (Education), Data Science Institute, LSE
Research team
Dr Ghita Berrada, Assistant Professor of Data Science (Education), Data Science Institute, LSE
Dr Francesca Panero, Assistant Professor, Department of Statistics, LSE
Dr Casey Kearney, Assistant Professor, School of Public Policy, LSE
Dr Dorottya Sallai, Associate Professor (Education) of Management, Department of Management, LSE
Leonard Hinckeldey, Research Assistant, MSc Applied Social Data Science (2022/23), LSE
Sara Luxmoore, Research Assistant, MSc Applied Social Data Science (2022), LSE
Ananya Reddi, Research Assistant, MSc Data Science (2022/23), LSE
Mentors and collaborators
Professor Wicher Bergsma, Department of Statistics, LSE
Mark Baltovic, Senior Academic Developer, Eden Centre, LSE
Dr Anica Kostic, LSE Fellow, Department of Statistics, LSE
Dr Jenni Carr, Senior Academic Developer, Eden Centre, LSE
Dr Marina Franchi, Senior Academic Developer, Eden Centre, LSE
Jeni Brown, Head, Digital Skills Lab, LSE
Michael Wiemers, Digital Skills Lab, LSE