This post is part of our [work in] Progress series, an effort to share our preliminary findings on the impact of artificial intelligence in higher education to help the field move at the pace of technology.

Introduction

More than half (57%) of higher education institutions now consider AI to be a strategic priority, according to the recently released 2025 EDUCAUSE AI Landscape Study. But widespread usage remains elusive due primarily to funding and policy constraints. According to the EDUCAUSE study, only 2% of institutions are supporting AI initiatives through new sources of funding, and many executive leaders underestimate the financial burden associated with AI implementation.

While the report notes year-over-year growth in AI usage across higher education, it’s worth noting that nearly three years after ChatGPT reached the market, 43% of higher education institutions still don’t consider AI a strategic priority, and 46% aren’t using it in assessment or curriculum design.

Rapid change and innovation are notoriously difficult to achieve in higher education, intensifying the challenge of integrating a swiftly evolving technology. At WGU Labs, we believe the best way to understand how new technology can improve the student experience is to try it ourselves. Late last year, we hosted an internal hackathon to build three multi-agent systems aimed at improving different aspects of the learning experience. This low-stakes effort enabled everyone on our team to learn how to use AI, move beyond analysis paralysis, and gain the experience and higher comfort level needed to mature our AI efforts. We hope that by sharing our process (as well as the insights and logistical challenges we experience while building our AI portfolio via this series), other higher education institutions might develop their own internal hackathons to move their organizations forward.

Goals and process

Over five intensive weeks, cross-functional teams across WGU Labs embarked on a structured development journey, producing solutions and business cases using an AI platform to develop multi-agent systems. Our ultimate goal in using a multi-agent platform is to create a full-service, AI-enabled educational experience. A single agent will struggle to provide multiple services simultaneously (e.g., tutoring, career coaching, assessment, and instruction) or to prioritize information and actions, and may also suffer from hallucinations. A multi-agent approach instead leverages several agents with distinct areas of expertise across the learning journey that can interact with one another when appropriate (e.g., an assessment bot flagging to an instructor bot that a student needs more scaffolding in a particular focus area).
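To make the handoff idea concrete, here is a minimal sketch of an assessment agent flagging weak focus areas to an instructor agent. The agent classes, method names, and the 0.7 mastery threshold are all illustrative assumptions, not features of any specific platform.

```python
# Hypothetical sketch: an assessment agent flags weak focus areas,
# and an instructor agent plans extra scaffolding in response.

class AssessmentAgent:
    def evaluate(self, student_id: str, scores: dict) -> list:
        # Flag any focus area scoring below an assumed mastery threshold of 0.7.
        return [area for area, score in scores.items() if score < 0.7]

class InstructorAgent:
    def receive_flags(self, student_id: str, weak_areas: list) -> dict:
        # Plan additional scaffolding for each flagged focus area.
        return {area: "add worked examples and guided practice" for area in weak_areas}

assessor = AssessmentAgent()
instructor = InstructorAgent()

flags = assessor.evaluate("student-42", {"fractions": 0.55, "decimals": 0.9})
plan = instructor.receive_flags("student-42", flags)
```

In a production system, this exchange would happen through the platform's messaging layer rather than direct method calls, but the division of responsibility is the same.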

While the hackathon included a competitive element, its core focus was on fostering collective learning, building new skills, and enhancing collaborative practices. This initiative not only allowed us to test cutting-edge technology but also empowered teams to work together more effectively, amplifying the speed and impact of our work.

Each hackathon team was led by a research and development manager, with support from product managers, consultants, learning experience designers (LXDs), researchers, marketing and communications specialists, and operations staff. Together, they tackled projects aligned with key focus areas:

  • Student support and coaching
  • Assessment and learning support
  • Curriculum and instruction

Deliverables included:

  • Project prospectus
  • Functional prototype or minimum viable product (MVP)
  • User interface wireframe
  • Pilot plan & scaling plan
  • Market analysis & business case
  • Communications plan

Learnings

Providing technology access alone isn’t enough

Our hackathon was meticulously planned and organized by a cross-functional team. Our process included multiple organization-wide meetings, team meetings, training sessions, and working sessions, as well as the development of all the deliverables outlined above. Even so, many participants felt that the compressed timeline and competing priorities limited their creativity and opportunities for meaningful iteration. Participants noted the pace was stressful, leading to compromises in quality.

Compounding these challenges, the team faced difficulties with the external vendor. While the vendor initially demonstrated certain platform capabilities in multiple meetings, those capabilities often fell short in practice. This misalignment resulted in frequent adjustments and shifts in plans, adding further complexity to the team’s efforts.

While we did our best to provide the materials, resources, and time needed to complete this hackathon, we learned that investing in upskilling opportunities like these requires putting other projects on hold, if possible. For our next AI hackathon, we plan to bring the team together in person at our headquarters, where we’ll spend several days dedicated solely to the hackathon.

At the time of our hackathon, multi-agent AI systems had several limitations 

While each team was tasked with developing a functional prototype or MVP, it quickly became clear that this goal was impossible given the constraints of the multi-agent AI platform we used. Instead, we pivoted to developing concepts for future products.

A major constraint was that we couldn’t get any of the bots, or agents, within the system to communicate with one another. The assessment team, for example, developed a prompt bot to set up the assessment prompt and rubric, an evaluator bot to evaluate the learner and provide feedback, and a coach bot to support struggling students. Getting these bots to work together required manually copying the output of one bot into another, a tedious and time-consuming process.
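The pipeline the team wanted can be sketched as a simple chain in which each bot consumes the previous bot's output automatically. The function names mirror the bots described above, but the interfaces and stub scoring logic are hypothetical.

```python
# Illustrative pipeline: automated handoff between the three assessment bots.
# Bot names follow the post; all logic below is a stand-in for real LLM calls.

def prompt_bot() -> dict:
    # Produce the assessment prompt and rubric.
    return {"prompt": "Explain photosynthesis.", "rubric": ["accuracy", "clarity"]}

def evaluator_bot(task: dict, submission: str) -> dict:
    # Stub scoring: award a passing score only if the submission
    # mentions the rubric criterion by name (a real system would use a model).
    scores = {criterion: (0.8 if criterion in submission.lower() else 0.4)
              for criterion in task["rubric"]}
    return {"scores": scores, "needs_coaching": min(scores.values()) < 0.6}

def coach_bot(feedback: dict) -> str:
    # Offer coaching only when the evaluator flags a struggling learner.
    if feedback["needs_coaching"]:
        return "Let's review the rubric criteria you missed together."
    return "Great work, no coaching needed."

# Each bot's output flows directly into the next, with no manual copying.
task = prompt_bot()
feedback = evaluator_bot(task, "Plants convert light to energy.")
message = coach_bot(feedback)
```

The point of the sketch is the data flow: once outputs are structured (here, plain dicts), chaining bots is mechanical, which is exactly what the platform could not yet do for us.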

Additionally, the student support team ran into issues connecting the bot to live student data, which was foundational to their vision of a personalized support tool. Their goal was for the bot to pull real-time data so it could understand the student it was engaging with, including the context of their academic path and progress. Without this real-time data, the personalization their minimum viable product could provide was superficial. There was also no LMS integration, which meant the agent had to ask for the student’s name, look up the associated data, and identify their needs based on a set of rules defined by the team. The result was choppy, awkward interactions between students and the agent rather than the seamless, personalized experience the team had envisioned.

These limitations highlighted the need for additional diligence and testing when building future products or vetting vendor solutions to ensure they work as intended. Such vetting becomes even more important for fast-moving build events like hackathons, particularly for teams that are less familiar with technical tools.

Personalization must be strategic and specific

Personalization is one of the biggest buzzwords in tech these days — and has been the holy grail in education for decades. The problem is that when every new technology claims to provide personalization, the word starts to lose its meaning. 

We learned that to create a truly personalized system, we need to provide nuanced guidance on what that personalization entails. Assessments, for example, can be personalized to be more relevant to the learner’s intended career, but they can also be personalized to the student’s unique learning needs, e.g., offering a student with dyslexia an oral assignment. The same can be said for curriculum. Does personalization mean the course content is different depending on the learner’s career goals? Or does it mean that the course delivery format is customizable depending on the learner’s wants and needs? Is it both and more?

There are myriad possibilities for personalization. Therefore, getting the most out of an AI tool requires upfront strategic thinking about the specific components of the student experience you want to be personalized — and then providing the background materials and coaching to get the bot or multi-agent system to learn how to provide this service. Over time, AI systems can then adapt and improve by learning from past interactions, enabling more flexible and responsive personalization.

Conclusion

When it comes to selecting EdTech products, our research shows that both administrators and faculty prioritize products backed by evidence, such as successful implementation at other institutions. This is noteworthy given that fewer than 10% of EdTech products currently on the market are backed by rigorous evidence of their efficacy. That’s exactly why our team at WGU Labs is hosting hackathons like this one, conducting small-scale pilots through our Solutions Lab, providing a space for peers to explore and experiment with new technologies through our AI Playground, gathering feedback through our Student Insights Council, and publishing what we’re learning through our [work in] Progress series. If we don’t make space to explore, test, share our findings, and iterate, the field will be operating in the dark.

The WGU Labs Multi-Agent AI Hackathon proved to be a transformative initiative, advancing our organizational goals while providing vital insights into how we can enhance our tools, processes, and collaboration. It demonstrated the power of collective learning and underscored the importance of embracing experimentation and adaptability as we push the boundaries of innovation. While challenges emerged, they offered invaluable lessons that will guide us in future initiatives. By building on these learnings, we can continue to accelerate progress, foster a culture of innovation, and make meaningful strides in driving impactful change.

The EDUCAUSE study mentioned in the introduction of this piece also revealed a widening digital divide in AI adoption, with larger institutions that have more resources being more likely to have AI policies and strategies in place. The report's authors encourage these larger institutions to document their learnings and success with AI to share with their peers. That’s exactly what WGU Labs endeavors to do with our [work in] Progress series, which includes posts like this one. Stay up to date on our findings on the impact of AI on the learning experience by subscribing to our newsletter.