Measuring Learning
In the introductory overview, we highlighted the significant problems and false positives that accompany current Large Language Model (LLM) detection tools. The detectors' inability to reliably identify LLM output, combined with LLMs' demonstrated ability to mimic an expected homework assignment, significantly complicates our teaching and learning task. This leads us to clarify what abilities students need to demonstrate, and how they might do so without relying on a short written summary or analysis of assigned reading.
In this module, we explore ways you might modify your measurement of learning and assessment of student work in response to LLMs.
We provide several approaches you might take, along with examples of faculty and instructors who have reconsidered their measurements of student learning. We begin, however, with a central question: How can your students make explicit the learning they have achieved in your course?
Innovative Learning offers individual consultations if you want to explore your course learning outcomes and assessments in greater detail and discuss what you are prioritizing in the learning process. Please contact us for an in-person or virtual meeting.
Learning Outcomes
Most courses at Purdue have learning outcomes: the skills, knowledge, or understanding that students are expected to have upon successful completion of the course. Many of our courses include foundational disciplinary knowledge and an expectation that students will demonstrate understanding of that knowledge. The ability to recall specific information, explain or define key disciplinary vocabulary, or solve basic problems is often part of our educational responsibility. However, faculty generally prioritize critical thinking skills and higher-order learning outcomes that involve analysis, application, and synthesis over recall and comprehension. Whatever your course outcomes, LLMs have shown a marked ability to mimic the written output expected of students on problems designed to measure their learning. Current, freely available LLMs remain deficient in quantitative and physics problem-solving, but these are the weakest versions of the tools we are likely to encounter, and additional training has been shown to substantially improve their ability to produce expected output on higher-order assessment questions.
Given recent LLMs' ability to provide expected answers on introductory coding, humanities, social science, and life science assessments, what are we to do?
First, critically explore the learning outcomes for your course. To what degree is students' ability to summarize key concepts essential? Are there specific areas in which application or analysis is of greater importance, and could that analysis be made explicit at various stages of an assignment?
Suggestions
Modifying outcomes is unlikely to stanch the influence of LLMs on written student work. Even the introduction of metacognitive reflection and personal narrative will not wholly obviate the use of these tools. Yet you can foster significant intrinsic motivation by helping students analyze their own learning. Encouraging students to express and refine their learning goals for the course can promote more self-directed learning [citation needed]. Additionally, you might have students explore each of the learning outcomes and indicate where they currently see themselves in their ability to attain them or demonstrate competency. Structuring reflective practice into assignments can improve student engagement and performance.
Relevance
Students also respond when they believe the course material and disciplinary knowledge are relevant to them. Relevance encompasses the academic content, the student's identity, and the connection between the two. Students want to know how the skills and knowledge they are learning matter on a personal level. Purdue instructors have found that including activities and assessments relevant to students' plans of study increases students' perception of the course's relevance and their motivation to participate.
One approach to conveying relevance is to prioritize process over product, especially in assessment and grading structures. When students can explore disciplinary processes with multiple opportunities for feedback, they perform better on assessment metrics.
Expertise
Engaging directly with your disciplinary lens and evaluating claims as an expert can be illustrative for your students. LLMs provide innumerable examples of seemingly passable explanations for phenomena. However, they often lack both explanatory depth and disciplinary interpretation (unless prompted repeatedly). You might engage directly with an LLM yourself, prompt it for basic explanations of knowledge relevant to your discipline, and then add complexity or local circumstances that the LLM would be hard-pressed to evaluate properly. For example, an LLM can provide a relatively accurate summary of different approaches farmers might take to maximize corn yields without becoming dependent on proprietary seed blends, but it may have difficulty advocating for a specific approach through a particular disciplinary lens, or applying the specific soil conditions and economic realities of Tippecanoe County to the problem.
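If you would rather script this exploration than work in a chat interface, the sketch below shows one way to pose the same question twice, first generically and then with added local constraints, and compare the responses. It is a minimal sketch assuming the OpenAI Python client and an OPENAI_API_KEY environment variable; the model name and prompts are purely illustrative, not a recommendation.

```python
# A minimal sketch of probing an LLM with escalating specificity.
# Assumes the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY environment variable; model and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

prompts = [
    # A generic question the model will likely answer plausibly.
    "Summarize approaches farmers might take to maximize corn yields "
    "without becoming dependent on proprietary seed blends.",
    # Local constraints that are harder to fake from general training data.
    "Recommend one of those approaches for a farm in Tippecanoe County, "
    "Indiana, given its typical soil conditions and current grain prices, "
    "and justify the tradeoffs.",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; substitute any model you can access
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("---")
```

Comparing the two responses side by side can make the drop-off in specificity concrete for students.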
You are best suited to devise specific questions that highlight the limitations of an LLM. But you can also encourage your students to engage with an LLM and apply the lens of your discipline to its output. Because these outputs vary from one generation to the next, each student can evaluate a unique version of an explanation from their own perspective.