This post is part of our [work in] Progress series, an effort to share our preliminary findings on the impact of artificial intelligence in higher education to help the field move at the pace of technology.
In education, there is always a push-pull between the need for innovative advancement and proof of efficacy. Online education, the internet, and even calculators were all met with initial questions and resistance. Now, the dichotomy of progress and proof is on full display as higher education wrestles with the evaluation, adoption, and ethics of generative artificial intelligence tools.
At WGU Labs, we recognize that AI can offer significantly more affordable and accessible learning opportunities, enabling all learners, especially those facing systemic barriers, to achieve economic and personal fulfillment. We are currently exploring AI and its potential to transform the postsecondary experience across our work, including our Solutions Lab, which conducts small-scale, exploratory pilots that identify real learner challenges. The approach, which pairs cutting-edge research with rapid testing of solutions, is intended to allow administrators and faculty members to understand the efficacy of innovative approaches before scaling them.
While many of these pilots are still in their early stages, initial findings can help higher education administrators, policymakers, and EdTech founders fine-tune the adoption, regulation, and creation of generative AI tools to benefit learners.
The Concerns and Benefits of AI
To advance AI efforts in higher education, stakeholders must first understand the current landscape of concerns. According to the latest Faculty EdTech Survey from the College Innovation Network (CIN) at WGU Labs, 83% of faculty members see the value of EdTech, but 33% don’t trust that existing products are effective. Among administrators, there is a similar pattern. Our Administrator EdTech Survey revealed that half of administrators (52%) are positive about using AI tools, yet 67% report their institution lacks policies guiding students and faculty on their use. Concerns and uncertainties stem from decades of flash-in-the-pan innovations that over-promised and under-delivered. With the high stakes of student success, the need for evidence of successful product implementation is understandable.
It is crucial to understand that AI focuses less on fundamentally altering how students learn and more on scaling proven effective learning methods such as adaptive learning, project-based learning, formative assessments, and high-dosage tutoring. However, these programs have struggled to gain traction because of the significant resource investment they require, which makes them difficult to scale. AI can accelerate the growth of these practices, making their benefits accessible to a wider range of institutions and students.
WGU Labs’ AI Pilots
In two current pilots, our Solutions Lab team is combining its deep expertise in learning science and knowledge of how learning models scale to explore how higher education can use AI to benefit more institutions and learners.
Training AI Models as a Learning Resource
One pilot we are currently running is the integration of AI into an introductory programming course at Western Governors University (WGU).
Data showed that students were struggling to complete the course, an important milestone in the program. We always take a student-centered approach to designing interventions, so our pilot began by interviewing students. Through those interviews, we discovered that many learners already leveraged AI-based resources outside the institution's ecosystem. Based on that information, we hypothesized that an AI tool trained on the course's learning materials could enhance course completion and program progression, and we designed a pilot to examine its impact.
In our initial pilot, we provided faculty members access to a chat-based AI tool that we trained on specific course content. However, adoption and usage of the tool were low. As we moved to the student pilot, we sought to eliminate adoption barriers and concluded that students would be more inclined to use a familiar tool. Consequently, we provided students with unlimited access to ChatGPT Team, along with custom GPTs trained on open-source content related to the course.
Early results show the tool is helping. The three key findings of the pilot were:
- Students' use of AI is goal-directed and oriented toward learning. The students in the pilot were primarily adult learners with competing priorities. They used the tools to support their learning goals by saving time, validating their understanding, and overcoming specific challenges. Despite the convenience, students recognized the importance of truly learning the material and demonstrating their skills without over-relying on AI, emphasizing the need to balance efficiency with genuine understanding.
- Students use AI as a reference, teacher, and coach. As a reference, students used AI for quick answers and information. When students engaged AI as a teacher, they focused on improving learning outcomes and developing knowledge or skills. When using AI as a coach, students wrote prompts centered on improving their learning experience and process.
- Anecdotally, students reported improved learning and experience with AI. Students indicated they are gaining confidence in using the tool and that it is becoming more helpful. One student said that access to ChatGPT prompted more "light bulb moments" and new opportunities to think differently about the content and the tools available.
Opportunities to smooth adoption also emerged from these early learnings. Initial faculty concerns rightly centered on accuracy and on adopting new tools with intentionality. These hesitancies echo data from our CIN EdTech surveys showing faculty apprehension about AI tools and uncertainty around administrative support and regulation. In the future, several factors could reduce these hurdles, including more comprehensive and transparent AI policies, as well as use cases that demonstrate a positive effect on student learning (without shortcutting the learning experience), follow ethical practices, and are implemented efficiently. However, further research into the specific challenges driving resistance is needed.
Delivering Personalized Support with AI
The second active pilot at WGU applies a randomized controlled trial to empirically assess the impact of an AI-assisted learning platform on student academic outcomes and their perceptions of the course and the faculty compared to traditional learning methods.
In the pilot, the instructor created a set of videos before the start of the course. The control group of students watched the videos in the same order, while the treatment group watched the videos in a branching scenario in which students see specific videos depending on how they interact with the platform. This is made possible by an AI engine that listens to the conversation and seamlessly integrates pre-recorded video segments to create a natural, conversational flow. When done correctly, the lesson resembles a live one-on-one video conference with the instructor.
In traditional learning environments, students often do not receive personalized instruction or real-time feedback that can improve their learning due to high student-to-teacher ratios and limited time and budget. This AI-assisted learning platform seeks to address these issues.
Our pilot aims to evaluate this platform by measuring its impact on academic performance, on student perceptions of EdTech and faculty, and on students' ability to persist when faced with challenging topics. This is an active pilot, and we are still analyzing the data. We will share the results soon.
In addition to the pilots above, we are advancing work related to improving assessment using AI, including:
- Improving the discussion board experience and incorporating discussion engagement into a system of continuous assessment
- Helping candidate teachers experience realistic role plays of parent-teacher conferences in field-centered teacher preparation programs
- Joining a community of AI problem solvers to make the evaluation of open-response assessments more efficient and consistent
Such pilots can pave the way for further explorations. Given the many unknowns surrounding AI, these exploratory pilots and the insights drawn from them will help higher education stay abreast of technological advancements and evolving learner expectations. We plan to share more results from these efforts, and other pilots, in forthcoming blogs.