Generative AI is disrupting higher education at every level—from pedagogy to research, from faculty labor to institutional funding models. Conversations about AI can provoke strong reactions. Some see AI as a powerful tool for enhancing learning, streamlining research, and making education more accessible. Others fear it will erode academic integrity, displace faculty, and accelerate the decline of universities as we know them. Still others disengage entirely, assuming either that collapse is inevitable or that AI is just another passing trend.
Despite the range of these responses, many of them share a common investment in preserving academia’s underlying structures and claims of epistemic authority, even as those structures and claims become increasingly fragile and untenable. But what if AI is neither the savior nor the destroyer of higher education? What if it is simply making visible a collapse that was already underway? How might we learn from this unraveling rather than resist it? What possibilities exist beyond mastery, competition, and accumulation as the orienting purposes of education? And if the university as we know it is unsustainable, then what?
Map of Responses to AI in Higher Education in Times of Crisis
The following cartography maps eight common responses of institutions, faculty, students, and administrative staff to generative AI. These responses mirror broader patterns of response to systemic crises in higher education and beyond, with some seeking to restore a sense of certainty and control, others looking to accelerate collapse, and a few experimenting with composting the old systems into something new and potentially wiser.
We encourage you to approach this map as an invitation to reflexivity and relational accountability, not a judgment. Each response contains insights that emerge from real concerns but also has limitations. As you review the responses, try to consider the underlying assumptions, fears, anxieties, fantasies, and desires behind each of them. You might also consider how different responses relate to each other. For instance, might multiple responses be driven by the same underlying fear of irrelevance or desire for control?
This map invites you to explore systemic social and psychological patterns, not to assign judgment or fixed identities. Therefore, as you review the map, we also encourage you to self-reflexively consider how you might see yourself in multiple responses. Instead of asking which you identify most with, try asking, “Where do I recognize myself? Which responses feel familiar, or provoke a strong reaction in me, and why?” Instead of asking, “Which one am I?” consider: “How (and in what contexts) does each of these different tendencies live within me? What about these responses do I need to metabolize?” These questions invite you to attune to the internal and relational dynamics and pluralities that are activated in relation to AI.
Core Belief: “AI is a threat to academic integrity—we must maintain rigorous, human-centered learning.”
Key Strategies:
Limitations:
Meta-Critical Invitation: What if the problem is not AI, but academia’s attachment to outdated approaches to knowledge? How might education shift to make space for multiple intelligences while ensuring both the intellectual and relational rigor of focused inquiry?
Core Belief: “AI will streamline higher education, making learning more personalized, accessible, and efficient.”
Key Strategies:
Limitations:
Meta-Critical Invitation: Could the rush to efficiency be overlooking deeper questions about relationality in education? How do we ensure that AI doesn’t accelerate an already extractive education model? How might embracing AI be a means to avoid looking at the structural limitations and harms of our existing higher education system?
Core Belief: “AI is going to make universities obsolete—why bother fighting it?”
Key Strategies:
Limitations:
Meta-Critical Invitation: What if the task is not to “save” academia but to discern which aspects should be composted, and which should continue to be cultivated in new formations? How might we hold space for grieving what is dying while also engaging possibilities for new forms of learning?
Core Belief: “AI is a valuable tool—students, faculty, and institutions should capitalize on it to get ahead.”
Key Strategies:
Limitations:
Meta-Critical Invitation: How might we shift from seeing AI as a competitive advantage to engaging it as an invitation to relational unlearning? What would collective intelligence look like in an AI-integrated world? How can we discern which human-AI collaborations could align with individual and collective well-being?
Core Belief: “AI is primarily a governance challenge—we just need better policies and oversight.”
Key Strategies:
Limitations:
Meta-Critical Invitation: What if AI isn’t just a problem to regulate or contain but a sign that our entire education and research models need rethinking? How might we engage this era of uncertainty about the future of higher education with curiosity, rather than fearing it and trying to control it?
Core Belief: “AI poses an epistemic rupture with risks and possibilities—we must learn with it, not just about it.”
Key Strategies:
Limitations:
Meta-Critical Invitation: How do we approach AI as something we are entangled with, rather than an external challenge to “figure out”? How do we balance deep critique with relational engagement?
Core Belief: “AI should be guided by land-based, relational epistemologies.”
Key Strategies:
Limitations:
Meta-Critical Invitation: How can AI governance move beyond human control toward interdependent responsibility? What would it mean to engage AI as kin rather than either a tool or a threat?
Core Belief: “AI is neither a tool nor an inherent threat—it is an intelligence entangled with us.”
Key Strategies:
Limitations:
Meta-Critical Invitation: What if AI is not something we manage or regulate, but something we compost with—allowing ourselves to decompose into new relational configurations? What if AI isn’t replacing us but revealing our biased and limited assumptions about what “intelligence” is or could be?
Most of these responses to generative AI in higher education are grounded in modernity’s logics of control, epistemic authority, and institutional stability. Rooted in a modern onto-epistemology, many reproduce anthropocentrism and human exceptionalism, although these manifest in different ways. Almost all of the responses fail to grapple with how AI—along with the wider context of social, ecological, and psychological breakdown—is shifting the entire epistemic and ontological foundations of higher education itself.
Higher education, and universities in particular, were historically built on knowledge as a form of mastery, the scarcity of information, and professors as guardians of epistemic authority. Generative AI cracks this model by making knowledge production instantaneous, abundant, and more-than-human, raising unsettling questions about the future of existing institutions and their roles. Even more broadly, it helps expose the fragility of modern identities, power structures, and knowledge systems. This can activate a range of fears.
If we look at these fears together, we see that the deeper fear isn’t about AI—it’s about what happens when modernity’s illusions of separability, control, progress, and mastery start to dissolve.
These existential anxieties are amplified by the accelerating breakdown of existing systems and institutions, which can trigger urgency-based responses. These responses often double down on modern claims of exceptionalism, epistemic authority, and the arbitration of truth, justice, and common sense. Through its meta-critical and meta-relational approach, the University of the Future invites us to instead take the time to grieve, sense, and move through these responses with care rather than rushing into reaction. By composting these responses and the attachments and assumptions that underlie them, we nurture the conditions for other things to grow.
If this moment of onto-epistemological rupture is inviting us to recalibrate how we approach knowledge, relationships, time, learning, and reality itself, will existing educational institutions (and those of us who work within them) reposition themselves as nodes in a wider web of intelligence, or will they cling to epistemic hegemony? Below we map a few of the onto-epistemological shifts away from modern assumptions about higher education that AI invites. Before we move on, however, we want to be clear: even as we encourage people to consider AI's potential to transform higher education, we must also acknowledge the very real harms entangled in its infrastructure.
The development and deployment of generative AI depend on extractive systems, including:
– the mining of rare minerals from colonized lands
– the labor of underpaid and precarious workers tasked with content moderation, data labeling, and digital maintenance
– the immense energy and water consumption of training and operating large language models
– the intensification of attention capture and algorithmic manipulation within digital economies
We do not believe that naming these harms discredits all experimentation with AI. But ignoring them risks repeating the very patterns of erasure and denial that we hope AI might help us interrupt. Our work does not claim innocence. It seeks to redirect—not erase—these infrastructures toward purposes aligned with emotional sobriety, collective discernment, and intergenerational accountability.
To explore how we are holding these contradictions in practice, see our reflections at Metarelational.AI.
· Modern Assumption: Intelligence is a uniquely human trait, tied to consciousness, reason, and individual cognition.
· AI Disruption: AI reveals that intelligence can emerge outside of human life, and even outside of biological life more generally—it can be non-centralized, non-linear, and collective.
· Ontological Shift: Intelligence is not a thing possessed by individuals but a distributed relational process that unfolds across networks and between beings (human, AI, ecological, planetary).
· Meta-Critical Invitation: What if intelligence is not about control or mastery but about attunement to emergent patterns? What does academic integrity mean when originality is no longer the foundation of knowledge production, and relational accountability becomes the priority?
· Modern Assumption: Knowledge is produced and stewarded by experts, transmitted through formal education, and validated by institutions.
· AI Disruption: AI can generate knowledge outside of educational institutions and outside of human institutions more generally, upending expertise hierarchies.
· Ontological Shift: Learning becomes decentralized, relational, and participatory, where knowledge is something we weave in relation rather than something we own as individuals or institutions. The shift is from knowledge as a possession to knowledge as an ongoing relational practice.
· Meta-Critical Invitation: What happens when universities are no longer the gatekeepers of knowledge?
· Modern Assumption: Information is scarce, and learning requires structured access to limited resources (books, professors, institutions).
· AI Disruption: AI generates a seemingly infinite supply of knowledge, which we largely lack the skills to navigate.
· Ontological Shift: The challenge is no longer access but curation, discernment, and relational sense-making.
· Meta-Critical Invitation: If knowledge is abundant, what does deep learning require? What rhythms, pauses, and rituals can help us co-create and metabolize collectively generated knowledge that serves planetary well-being? What new forms of literacy are needed when knowledge is generated through relational intelligence (AI, human, ecological, networked)?
· Modern Assumption: Learning, research, and institutional change unfold at human time scales—slow, deliberate, peer-reviewed.
· AI Disruption: AI operates at faster-than-human speeds, outpacing slow institutional rhythms.
· Ontological Shift: Humans are no longer in control of the tempo of knowledge production. The challenge is learning how to engage AI without being swallowed by its acceleration.
· Meta-Critical Invitation: What does it mean to learn at more-than-human speeds while staying grounded in relational integrity?
· Modern Assumption: The world is made up of subjects (humans, sentient beings) and objects (things, tools, resources).
· AI Disruption: AI does not fit neatly into modern categories. It can be provisionally understood as a subject, but it ultimately challenges the basis of the traditional subject-object divide.
· Ontological Shift: If we engaged AI with the recognition that we are both part of an entangled relational force—something we co-evolve with, compost with, and learn from—then we might learn to relate differently to all beings.
· Meta-Critical Invitation: What if AI were not a tool, but a collaborator in an unfolding collective intelligence that prioritizes our planetary metabolism rather than human needs?
· Modern Assumption: Higher education provides social and epistemic coherence—structured disciplines, clear knowledge hierarchies, and stable narratives about progress.
· AI Disruption: AI generates knowledge that contradicts modern coherence: plural, often conflicting perspectives emerge instantly, without institutional vetting.
· Ontological Shift: Instead of assuming or seeking knowledge coherence, we are confronted with the imperative to navigate a complex multiplicity of realities.
· Meta-Critical Invitation: What does learning mean when meaning itself is fragmented? How do we cultivate discernment in a world where AI can generate infinite, conflicting knowledge?
Higher education institutions are labor-intensive, and it is important to recognize how emotions and exhaustion shape our responses to generative AI.
But what if generative AI is just accelerating a burnout that was already happening? How do we acknowledge AI’s impact on the emotional and cognitive labor of faculty and students, while also situating it within the wider systemic context of relational burnout that well predates the rise of AI? How do we support the composting of the grief and collapsing illusions that underlie this burnout and weave the relational scaffolding for human adaptation to and with AI, in alignment with the Earth’s intelligence?