That was one of the clearest messages from the recent DIG25 roundtable hosted by Roc Technologies and the University of Reading, attended by senior IT leaders from universities across the UK – a conversation that surfaced not just what AI is doing in Higher Education, but what AI is demanding of it. The message was not about hype or inevitability, but about pace, pressure and the growing need for universities to regain strategic control over a technology that is already deeply embedded across their estates.
Much of this acceleration is coming not just from internal demand and experimentation but from vendor integration. Core platforms are adopting AI by default, adding capabilities rapidly and sometimes without reference to real institutional needs. Universities are finding themselves in a position where AI is being introduced whether they are ready or not. This does not reflect a lack of strategy; rather, it highlights the reality that institutions are being asked to govern, evaluate and operationalise technologies that are already in motion. The strategic challenge is no longer whether to use AI, but how to ensure its adoption serves institutional purpose rather than vendor momentum.
This is particularly complex within the structure of a university. Unlike traditional enterprises, universities are ecosystems: academic departments, research units, professional services, central operations and diverse user groups with distinct motivations and pressures. AI does not land cleanly in environments like these. It lands in fragments – through student behaviour, staff experimentation, research practices and enterprise software updates. Bringing coherence to this landscape requires not simply coordination, but clarity of intent. Many institutions are being encouraged to “use AI”, yet the strategic outcome is not always defined. The risk is that AI starts to shape the institution faster than the institution shapes its use of AI.
A fundamental capability shift is therefore underway. Universities have deep intellectual capital, but AI introduces behavioural, ethical and operational patterns that require new forms of literacy at every level. Students are enthusiastic adopters, but not always discerning users. Staff may feel compelled to engage with AI to remain current, yet are not always certain where it genuinely creates value. IT teams, typically optimised for stability and predictable systems, are now working with probabilistic technologies that behave in ways traditional models do not easily accommodate. This uneven literacy doesn’t just slow adoption; it creates inconsistent results from pilots and proofs of concept, making it harder to judge where meaningful value lies.
Experimentation is becoming essential, yet experimentation requires a form of institutional confidence that university IT teams are not always resourced or structured for. Creating environments where AI can be tested safely – without risking service reliability, data protection or academic integrity – is a significant shift from the operational models many teams are rightly proud of. Yet without these safe spaces, institutions are left to adopt AI features without the ability to test them meaningfully. The capability to trial, evaluate and refine is quickly becoming as important as the capability to implement.
Governance is central to this challenge, but it is also one of its most difficult components. AI governance cannot be fully defined in advance because the technologies themselves are still evolving. Policies must often be ratified by committees with varied levels of AI expertise, which can lead to guardrails that are imperfect but necessary. It means treating governance as a living system – starting with principles that can adapt, rather than attempting to finalise frameworks that will be outdated before they are implemented. To this end, ISO/IEC 42001, the international standard for AI management systems, can offer a useful starting point. Enforcement remains complex, particularly in federated academic structures, but without early governance, institutions risk allowing students and staff to set the rules by default.
The financial dimension adds further tension. AI costs are difficult to predict, not only because vendor pricing models are evolving, but because the real cost of AI is spread across compute, licensing, skills, training and ongoing governance. Concern about cost escalation is significant – not just the prospect of a sudden spike if compute requirements grow or pricing models change, but the risk of the AI bubble bursting, much as the dotcom bubble did in the early 2000s. At the same time, defining ROI is a challenge. The benefits of AI often manifest as avoided incidents, time saved or improved insight, none of which fit neatly into traditional ROI frameworks. Procurement becomes harder when outcomes cannot be easily modelled, and institutions risk delaying valuable initiatives simply because the financial narrative is unclear.
Across all of this lies the question of influence. Universities, like many sectors, want deeper dialogue with vendors – not because they resist innovation, but because they want AI to develop in ways that support their needs: academic purpose, operational resilience and student success. When vendors introduce AI features without understanding the context in which they will be used, institutions bear the cost of integration without always receiving the benefit. In a financially constrained sector, value must be explicit, not assumed. This is where collaboration becomes strategically significant. Higher Education has a strong tradition of sharing knowledge, but competitive pressures risk undermining this at the very moment when collective influence over vendors is most needed.
For all its complexity, AI presents a turning point for the sector – not a technological turning point, but an organisational one. The question facing universities is not whether AI will be adopted, but whether it will be adopted in ways that strengthen institutional resilience, capability and purpose. The institutions that succeed will be those that build literacy across their communities, create real spaces for testing and learning, define governance early and refine it often, and work together to shape vendor behaviour rather than simply react to it. AI will not wait for universities to be ready. But with strategic clarity, shared learning and purposeful governance, universities can take ownership of how AI evolves within their environments – turning pressure into momentum, and momentum into meaningful institutional value.