Digital Exams in Higher Education: How to Prevent Students from Using AI During Exams

Digital exams are now firmly established within higher education. What began as a response to disruption has settled into something far more permanent, shaping how assessment is delivered across the digital campus.
As teaching and learning have become more flexible and accessible, assessment has naturally followed, creating new opportunities for students while introducing a more complex set of challenges for institutions, particularly around consistency and academic integrity. For higher education IT teams, this shift is increasing the need for stronger online exam security and more effective AI cheating prevention strategies.
Maintaining academic integrity in this environment is no longer straightforward. The rise of AI tools, combined with unrestricted access to the internet, has fundamentally changed what is possible during an exam, making it harder to define and maintain consistent conditions across different settings. As a result, institutions are exploring new ways to prevent students from using AI during exams without compromising accessibility.
Guidance from the Quality Assurance Agency’s Quality Compass (2024) reflects this shift, emphasising that institutions need to move beyond relying on detection alone and instead adapt both assessment design and delivery to the realities of AI.
This is starting to shape the wider conversation around digital exams. The focus is no longer limited to access or scalability, but is increasingly centred on questions of control, visibility, and consistency, particularly for institutions looking to prevent students from using AI while still delivering a fair and accessible experience.
The growth of digital exams is not just the result of recent disruption. It reflects a broader shift in how students expect to learn and be assessed. As teaching has moved increasingly online, assessment has followed, creating a more consistent experience across the student journey.
This is particularly true when looking ahead to the next generation of students. Our Generation Alpha report highlights how familiar younger learners already are with digital environments. Many are already used to completing work online, accessing tools on demand, and moving seamlessly between devices. As a result, expectations around digital assessment are forming earlier, long before students arrive at university.
In practice, digital exams are now delivered across on-campus labs, remote settings, and hybrid environments, often incorporating bring-your-own-device (BYOD) approaches. While this flexibility supports different learning contexts, it also introduces greater variation in how exams are experienced.
That variation is where the challenge begins to emerge. As digital exams expand across different formats and environments, maintaining a consistent experience becomes more difficult, and more important. This also makes it harder to maintain a secure exam environment across different locations and devices.
It also raises questions around access and equity. As reliance on personal devices increases, not all students have access to the same level of hardware or connectivity. Differences in performance, availability, or familiarity with technology can all influence how an exam is experienced, and ultimately, how fairly it reflects a student’s ability.
The role of AI in education has evolved rapidly, and its impact on assessment is becoming increasingly difficult to ignore. Tools capable of generating written responses, solving complex problems, or assisting in real time are now widely available and easy for students to access.
This shift is already visible in student behaviour. Recent AI in Education statistics suggest that more than half of higher education students are using AI to generate content for assessed work, highlighting how quickly these tools have become embedded in day-to-day academic practice.
In the context of digital exams, this creates a new level of complexity. When students are working on connected devices, often outside of tightly controlled environments, managing access to external tools becomes much harder to define, let alone enforce.
For institutions, the question is no longer whether AI will play a role in assessment, but how to respond in a way that is both practical and fair. Some approaches have focused on identifying AI-generated content after an assessment has taken place, but this has proven difficult to rely on consistently. Outcomes can vary, and in many cases, it is not straightforward to demonstrate misuse with enough certainty.
This is where some of the limitations of AI detection tools start to become more visible. Reports have shown that these systems can produce inconsistent results, and in some cases incorrectly flag legitimate work.
Coverage from Ars Technica has highlighted widely circulated cases in which historical documents and well-known texts, including the US Constitution and the Bible, were flagged as AI-generated, raising broader concerns about how dependable these tools are in academic settings.
Even OpenAI withdrew its own detection tool after acknowledging issues with accuracy, which has added to the wider uncertainty around relying on detection as a primary approach.
At the same time, concern among educators is clearly increasing. Reports from The School House Blog suggest that 96% of instructors believe at least some of their students have engaged in academic misconduct over the past year, reflecting how quickly expectations and behaviours are shifting.
As a result, attention is moving away from detection and towards prevention. Rather than trying to determine whether AI has been used after the fact, institutions are increasingly looking at how exam environments themselves can reduce the opportunity for misuse. This shift is central to modern AI cheating prevention strategies.
At the same time, the conversation is not entirely about restriction. In many areas of higher education, there is a growing recognition that AI will continue to be part of how students learn and work. This is leading to a more mixed approach, where some forms of assessment are designed to incorporate AI in a controlled way, while others still rely on more tightly managed environments.
Creating a secure exam environment is central to delivering digital exams at scale. As assessment expands across different locations and devices, maintaining consistency becomes more difficult without the right controls in place.
A single solution is rarely enough on its own. Institutions are increasingly adopting exam security software as part of a layered approach to maintaining control during digital exams, supporting stronger online exam security.
Locked-down exam software remains a widely used approach, restricting access to browsers, external applications, and system functions. While effective, it is rarely sufficient on its own, particularly in more flexible settings where students may have access to multiple devices during an exam.
Access to applications is another key consideration. Within a digital campus, students typically have access to a wide range of software. During an exam, that access needs to be tightly controlled. Providing only the required tools through a centralised platform reduces risk and creates a more consistent experience across devices.
This becomes even more pronounced in BYOD exams. When institutions have less direct control over the hardware being used, the focus shifts towards how software is delivered and managed. Ensuring that every student can access what they need, without introducing differences in performance or configuration, is often where the complexity of digital exams becomes most visible.
Introducing tighter controls around digital exams can easily create friction for students, particularly when the process feels unfamiliar or overly restrictive. Even relatively small issues can become distractions at a critical moment.
For that reason, how a secure exam environment is implemented matters just as much as the controls themselves. Systems need to work reliably and should feel consistent with the tools students are already used to.
Clarity also plays an important role. When students understand what is expected of them, what they can access, and how the environment will behave, there is less room for confusion. This consistency is essential for maintaining academic integrity across digital exams.
Institutions need to be able to create environments that are controlled enough to support academic integrity, while still reflecting how students work across a digital campus.
The challenge is finding a balance between control and usability, ensuring that digital exams are both secure and accessible. It is this balance that is starting to shape how digital exam environments are designed in practice.
In practice, creating this kind of balanced exam environment often comes down to how access and visibility are managed behind the scenes.
For many institutions, one of the more consistent challenges is controlling access to software without introducing unnecessary complexity for students. In a typical digital campus, students are used to having a wide range of applications available to them, but during an exam that needs to be more deliberately defined. Making sure that only the required tools are accessible, while still allowing students to work in a way that feels familiar, is not always straightforward.
This is where platforms such as AppsAnywhere tend to come into the conversation. By providing a centralised way to deliver applications, institutions can shape what is available to students during an exam, regardless of whether they are working on university-managed devices or their own. Because applications can be delivered in different ways depending on the device, whether through streaming, virtualisation, or local delivery, it also helps reduce some of the variation that can otherwise affect the exam experience.
Alongside access, visibility plays a quieter but equally important role. While much of the focus in exam security is on what happens during the assessment itself, understanding how software is used more broadly can help inform how those environments are designed in the first place. This is where platforms such as LabStats add another layer, offering insight into application usage, demand, and access patterns over time.
Taken together, approaches like these allow institutions to move beyond simply restricting access and towards shaping exam environments more deliberately. Rather than trying to control every variable in isolation, the focus shifts towards creating conditions that are consistent, appropriate to the assessment, and aligned with how students are already working.
In that sense, the challenge is less about preventing every possible misuse of technology, and more about designing exam environments that make sense within a digital campus, where access, control, and experience all need to be considered together.
AppsAnywhere is a global education technology solution provider that challenges the notion that application access, delivery, and management must be complex and costly. AppsAnywhere is the only platform to reduce the technical barriers associated with hybrid teaching and learning, BYOD, and complex software applications, and deliver a seamless digital end-user experience for students and staff. Used by over 3 million students across 300+ institutions in 22 countries, AppsAnywhere is uniquely designed for education and continues to innovate in partnership with the education community and the evolving needs and expectations of students and faculty.

Register your interest for a demo and see how AppsAnywhere can help your institution. Receive a free consultation of your existing education software strategy and technologies, an overview of AppsAnywhere's main features and how they benefit students, faculty and IT, and get insight into the AppsAnywhere journey and post launch partnership support.