AI in University Education Is Rapidly Evolving
The use of AI in university education is evolving both academically and practically. Scholarly research and educator commentary increasingly highlight AI's potential to support differentiated instruction, enhance feedback loops, foster metacognitive skills, and encourage ethical reflection on technology use.
Not long ago, generative AI tools like ChatGPT were viewed primarily as threats to university education. The concern was straightforward: If students could use AI to bypass learning by submitting assignments they had not genuinely worked on, it could undermine the very purpose of university education.
Additional concerns focused on AI’s limitations: Could AI hallucinate and lead to incorrect understanding? Might it mislead due to lack of true comprehension or contextual awareness? Would it reinforce biases embedded in training data? Could it ever replace human judgment, empathy, or ethical reasoning?
These concerns led to defensive measures: AI checkers, paper-based exams, and strict bans on AI-assisted work. But as artificial intelligence gained traction in the professional world, the academic response began to shift from policing misuse to exploring how AI might supercharge learning.
Current Thinking: AI in the Classroom Today
While skepticism remains, there is growing acceptance of AI as a powerful pedagogical ally. Universities are increasingly making AI tools available to faculty and students, investing in training resources to support adoption. At the same time, institutions of higher learning are proceeding cautiously. Many have issued policy documents and guidelines that define acceptable AI use, outline disclosure expectations, and clarify how instructors may set classroom-specific rules. These frameworks also address how inappropriate AI use will be treated under academic integrity policies.
My Journey: Adopting Artificial Intelligence in the Classroom
Like many educators, I too am in the process of evolving my thinking about the adoption of AI in the classroom. I have started to explore if and how I could shift my focus from tracking misuse to leveraging AI to become more effective in the classroom.
The answers, though, I have to admit, were not obvious. After much back and forth, and a few missteps, I realized that I would have to go back to basics to find them. To this end, I turned to a list I had assembled over time of what I understood to be good pedagogical practices: my best-practices reference guide. The themes in that reference list include:
- Deconstructing content: Using a building-blocks approach to developing and delivering content, illustrating through examples how concepts are applied, problems are solved, and decisions are made, and providing feedback through assignments and assessments.
- Encouraging student reflection: Facilitating exploration and creating safe spaces for students to reflect on their learning experiences, thereby avoiding passive consumption of information.
- Highlighting the learning arc: Explaining how the syllabus supports learning goals, how topics connect and build upon each other, and the rationale behind the instructional approach — what Cook-Sather (2011) refers to as pedagogical transparency.
- Discussing relevance: Connecting classroom topics to real-world applications, influential thinking, and policy developments.
With these best practices as my guide, I approached the adoption of artificial intelligence in two ways. The first was to provide tools that students could use to self-learn more effectively, giving them the opportunity outside the classroom to reflect on their learning experiences and engage with a subject-matter "expert" even when the instructor was not available. The second was to help me efficiently create content, such as explainer videos and mind maps, that I might not otherwise have had the time to develop.
My Experience Deploying AI as a Pedagogical Tool
To help me manage risk and to accommodate the limitations of my personal technical knowledge with respect to AI, I restricted the ways in which I introduced artificial intelligence into the classroom to three:
- An on-demand AI tutor: Designed to help students review content at their own pace, engage in simulated peer or instructor interactions, and receive real-time feedback. This tool aims to enhance feedback loops and foster metacognitive skills, encouraging learners to reflect on their own thinking.
- AI-generated mind maps: These help students visualize complex topics and understand the learning arc, how concepts interconnect and build upon one another.
- AI-generated video and audio content: Created from lecture notes and instructor materials, these resources support differentiated instruction and offer students choice in learning modalities.
With respect to the AI tutor, there were concerns from the start about hallucinations. The use of Retrieval-Augmented Generation (RAG), a technique that allowed me to anchor the large language model (LLM) with content from my lecture slides and notes, was helpful in managing some of that risk. Still, to ensure that the scope and capabilities of the tools were not misunderstood, it felt important to spend time in the classroom highlighting the limitations of AI tools, and to call out the "beta" nature of the AI tutor I had developed.
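The RAG idea can be illustrated with a minimal sketch: retrieve the lecture-note passages most similar to a student's question, then build a prompt that instructs the model to answer only from that material. This is a simplified illustration, not my actual tutor's implementation; the toy bag-of-words retrieval stands in for the embedding-based search a production system would use, and the example `lecture_chunks` are invented.

```python
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words term counts for a lowercased, whitespace-tokenized text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, chunks, k=1):
    """Return the k lecture-note chunks most similar to the query."""
    q = vectorize(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, vectorize(c)), reverse=True)
    return ranked[:k]

def build_prompt(query, chunks):
    """Anchor the model: instruct it to answer only from retrieved course notes."""
    context = "\n".join(retrieve(query, chunks))
    return ("Answer using ONLY the course notes below. "
            "If the notes do not cover the question, say so.\n\n"
            f"Notes:\n{context}\n\nQuestion: {query}")

# Hypothetical lecture-note chunks for illustration.
lecture_chunks = [
    "Duration measures the sensitivity of a bond's price to interest rate changes.",
    "The capital asset pricing model links expected return to systematic risk.",
]
prompt = build_prompt("How does duration relate to interest rates?", lecture_chunks)
```

Because the prompt grounds the model in instructor-authored material and asks it to admit gaps, answers tend to stay within the course's scope, which is the risk-management property described above.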
Thankfully, the early experience with the AI tutor was positive, with the tool providing good enough answers to student queries. Even when, in some instances, the responses were not fully clear or self-explanatory, students still found the interactive experience helpful in furthering their understanding of the concepts.
With respect to the audio and video content, the challenges were about completeness. The AI-generated content captured the core of the material effectively and without hallucination, but it missed certain nuances of the topics and struggled when the content turned math-heavy. As a consequence, the AI-generated audio and video files have been more useful as introductory content than as primary material.
In contrast, the AI tool created mind maps expertly and accurately. Not only was the quality of the output good; in many instances the maps were even better than ones I had previously created myself.
In terms of student outcomes, I have observed an increase in student engagement with greater participation and curiosity. First indications are that artificial intelligence has enabled students to personalize their learning journeys, accessing content in formats that suit their needs. Encouraged by these tools, students have begun using AI to explore topics beyond the syllabus and simulate peer-to-peer discussions, a sign of self-directed learning and deeper understanding.
Conclusion: AI Introduces a New Era of Learning
My AI adoption journey is in its initial stages. Still, my experience suggests that AI in university education is no longer just about preventing academic dishonesty. It is also about unlocking potential. As an educator, I am only beginning to grasp the possibilities and feel both nervous and excited about them.
Nervous, because the implications are vast and still unfolding. The long-term effects of AI-generated course content, automated feedback, and personalized learning remain unclear. While these tools can enhance engagement and efficiency, we do have to ensure critical thinking is not inadvertently diminished, or academic integrity compromised in the process.
Excited, because I see how artificial intelligence, when thoughtfully integrated, can reinforce core pedagogical goals such as student engagement, adaptive learning, and inclusive education. It brings us closer to the ultimate aim of education: making learning accessible, personalized, and abundant for anyone who seeks it. It may even challenge existing pedagogical goals, raise questions about the future of standardized curricula and fixed learning periods (pacing), and prompt the development of new frameworks for fostering higher-order thinking, ethical reasoning, and collaborative learning.