AI: Friend or Foe? Reflections from UCISA DIG 2025
Insights on how UK universities are adopting AI safely and ethically, where the opportunities for efficiency are, and risks to avoid.
This year’s UCISA DIG event was themed around the duality of our attitude toward AI: is it a friend or a foe? After two days of sessions, demos, and debates, we came away inspired by the robust discussions, the progress on show, and a clear move beyond the friend-vs-foe framing.
What was clear across the two days was that universities are facing a defining moment, one where IT teams sit right at the heart of how institutions embrace (or resist) the AI wave.
For some, AI represents progress and potential; for others, it’s uncertainty and risk. But if there was one shared sentiment in the room, it was this: AI is here to stay. The challenge for universities now is how to embrace it safely, strategically, and sustainably. From the first session, there was a distinct feeling of pragmatic curiosity. Everyone, from IT directors to service managers, was asking the same question: “How do we make AI work for us, not against us?”
Because AI is no longer a future concept. It’s already in the classroom, the help desk, the admin portal, and the research lab. And yet, across every talk, the tension was clear: balancing innovation with governance, efficiency with ethics, and experimentation with control.
As one delegate summed it up between sessions:
“AI isn’t the problem, how we manage it is.”

If there was one shared theme across the sessions, it was AI’s potential to do more with less.
One of the most talked-about sessions came from Christi Hopkinson and Stella Sage at the University of the West of England (UWE).
They shared the story of their “Copilot Discovery” project, an AI chatbot built in just seven weeks to support students after UWE made the difficult decision to close its on-campus advice points.
The goal was simple: improve access to information, reduce repetitive service desk calls, and maintain consistency across responses. The results were impressive.
But perhaps the biggest benefit was something less tangible but just as valuable: cultural change. Teams that rarely collaborated before began sharing knowledge and insight. The AI system became a bridge between departments, not just a support for students.
Of course, there were bumps along the way. Some staff questioned the speed of rollout, and others struggled with the contradiction of promoting AI tools after warning students not to use them. But this iterative process wasn’t about perfection, it was about progress.
“Less is more,” Christi noted when describing their lessons learned. “A smaller, well-curated knowledge base makes AI smarter and more accurate.”

At the University of Manchester, James Hampton and Krista Robbie showcased another kind of AI journey, rooted in governance and structure.
Their “Manchester Hive” initiative is building a university-wide framework for AI adoption: developing policy, offering training, and aligning departments under shared principles of security and inclusion.
They’re exploring AI’s use across everything from research to telephony, where automation has already saved seven minutes per call, and looking at how AI can enhance staff experience without overwhelming systems.
Manchester’s message was simple: AI should be treated as a service, not a shiny standalone technology. Their approach – “be curious but controlled” – resonated across the conference.
It’s a phrase that captures the spirit of higher ed right now: eager to explore, but mindful of what’s at stake.
While AI can transform learning, teaching and operations when it’s under control, it can also arm attackers and increase institutional vulnerability when it isn’t. Deepfakes, AI-generated phishing, data leaks, and model poisoning are now daily realities for universities.
But if Manchester showed us how to bring AI under control, Brandon Cooke, CTO for Public Sector & Secure at Check Point, reminded everyone what happens when we don’t.
His session — “AI for Cybersecurity, and Cybersecurity for AI” — explored the double-edged nature of the technology.
AI is transforming learning, teaching, and operations… but it’s also arming attackers.
Jack Adams, Cyber Security Consultant at the University of Wolverhampton, pointed out that 97% of UK universities experienced a cyberattack or breach in the past year, with AI making those attacks faster, smarter, and harder to detect.
In his session “Managing AI and Data Risks in HE with Microsoft Purview”, he also highlighted the ethical minefields:
Universities need AI, but they also need to defend AI, with zero-trust principles, cloud-native security, and strong governance baked in from the start.
“We need AI for cybersecurity, and cybersecurity for AI”, another speaker highlighted.

By the end of the conference, most attendees agreed that AI itself isn’t inherently good or bad; it’s all about how we manage it.
Sonia, Head of Higher Education Practice at CGI and a former CIO at several UK universities, spoke about finding the balance between cost, service, and sustainability: using AI to “do more with less” without losing sight of people or policy.
Start small, choose practical, outcomes-first projects, and measure impact early. Embed AI into existing tools (like Microsoft Teams) so it feels natural. And above all, build trust alongside capability, not capability alone. The sweet spot is secure, scalable, and simple AI: delivering real results while staying true to institutional values.
At AppsAnywhere, we believe that technology should empower IT teams. Just like with any other software, universities need a safe, scalable, and secure way to deliver and control AI tools. That’s why institutions across the world use AppsAnywhere to manage their entire software ecosystem – including AI applications – from one trusted platform.
With AppsAnywhere, institutions maintain the same governance and visibility over AI tools that they already have for everything else, helping them stay compliant, efficient and curious. Putting the right guardrails in place gives IT teams more confidence to innovate with AI on their own terms.
Want to explore how AppsAnywhere can support your university’s AI strategy?
Sign up to our newsletter.
AppsAnywhere is a global education technology solution provider that challenges the notion that application access, delivery, and management must be complex and costly. AppsAnywhere is the only platform to reduce the technical barriers associated with hybrid teaching and learning, BYOD, and complex software applications, and deliver a seamless digital end-user experience for students and staff. Used by over 3 million students across 300+ institutions in 22 countries, AppsAnywhere is uniquely designed for education and continues to innovate in partnership with the education community and the evolving needs and expectations of students and faculty.

Register your interest for a demo and see how AppsAnywhere can help your institution. Receive a free consultation on your existing education software strategy and technologies, an overview of AppsAnywhere's main features and how they benefit students, faculty and IT, and get insight into the AppsAnywhere journey and post-launch partnership support.