Following the crowds to agentic AI

The ASU+GSV Summit convenes leaders across EdTech, workforce development, and educational policy. I came into the 2026 summit as a learning scientist, paying attention both to what was being said about AI, workforce alignment, and learning systems and to how those claims were being supported, challenged, or left unresolved.

When I had an open window, I followed the crowds, which invariably led to sessions on AI. One that caught my attention as a researcher was Microsoft's session titled “Accelerating Academic Research with AI Agents and Modern Data Analytics” with presenter Michael de la Cruz. The session demonstrated how platforms like Microsoft Azure, Fabric, and Foundry can integrate data collection, analysis, and AI-driven workflows to support research at scale, from early design through validation and publication.

The tools didn't stand out to me as much as the problem they were designed to address: rapidly producing actionable research that is both credible and relevant. Multi-agent tools can help us collect, organize, categorize, and analyze the data we need to perform this research faster than we have ever been able to before.

This had me wondering: Are there any drawbacks to scaling solutions at universities through AI agents? Is speed ever neutral?

Holding equity at scale

I bring a narrative researcher’s lens to the application of agentic AI. My research is about engaging learners in making meaning of their experiences, listening to what was said and what was not said. The work moves slowly. This kind of qualitative data takes time to interpret. It requires sound judgment.

What would it mean to involve technology in my judgment and inference work? When the Microsoft session showed how AI can support large datasets while maintaining traceability to source data, I found myself asking harder questions. Can we handle the known biases in the data underlying these models? When AI systems surface patterns, whose ways of knowing are centered in that process?

I am still working through them.

Answering the equity question: speed is not neutral

Much of the data used in these systems—including AI-enabled research tools, data analytics platforms, and decision-support systems built on learner data—reflects existing patterns, including gaps, exclusions, and structural inequities.

When those patterns are surfaced and acted on without careful interpretation, they are likely to be reinforced rather than challenged. In this context, speed is not neutral. It can accelerate flawed decisions that carry forward the same assumptions that shaped the data in the first place.

That matters because these insights do not stay at the level of analysis. They inform decisions about what gets designed and funded, and who is supported. Without intentional attention to how conclusions are reached and applied, faster insight generation can lead to faster replication of the same problems, now at a greater scale.

What this means for my work

My fellow researchers and I will figure out the implications as we become more practiced with multi-agent tools. For me, that means approaching both the new technology and my organization’s commitment to equity by:

  • Maintaining visibility into how research conclusions are reached and creating opportunities for interpretation and challenge
  • Continuing to ask who defines the patterns that matter, and whose ways of knowing are centered when those patterns are defined
  • Ensuring that the people most directly affected by these systems—learners, educators, and communities—have a voice in shaping definitions and solutions

What the 2026 summit surfaced for me was a shift in how insight moves into action. As that movement accelerates, so do the consequences of how we interpret, prioritize, and decide. The question is no longer only what we can build, but how we ensure that what we build reflects the complexity of the people and systems it is meant to support.

Dr. Jenn Killham is an expert in human-machine interaction and emerging technologies that are impactful, equitable, and ethical. Her knack for seeing the big picture makes Jenn a go-to voice on early-stage strategic initiatives, particularly around AI adoption and implementation. All em dashes in this written work are her own. Read more.