20 May 2025 · articles
AI in Network Security: 5 Critical Factors to Consider
The recent cyber attacks targeting major UK retailers like Marks & Spencer, Harrods, and The Co-operative Group underline the growing sophistication of cyber attackers and the urgency with which businesses and public institutions alike must adapt. Artificial intelligence (AI) is increasingly being integrated into cybersecurity environments, but while it brings new capabilities, it also introduces significant new risks. This blog explores the security implications of deploying AI systems themselves - including the vulnerabilities they create, how they expand the attack surface, and what must be done to secure them.
Author: Rebecca Shortland | Senior Marketing Coordinator
Artificial intelligence is rapidly reshaping the landscape of network security. While AI is often promoted as a powerful security enhancer, its integration into networks and systems is far from risk-free. AI systems introduce unique vulnerabilities: they can be manipulated, misused, or trained on compromised data. Rather than just thinking about how AI can defend networks, organisations must increasingly focus on how to defend their networks from insecure or poorly governed AI.
For most organisations, the real challenge is managing the risks introduced by AI itself - from uncontrolled access to sensitive data, to bias in decision-making, and the potential for adversarial attacks on models. These aren't just theoretical concerns; they are practical threats that demand immediate governance and technical mitigation.
Even national security organisations like the UK's NCSC use AI to bolster defences - but their example also highlights how centralised AI systems, if compromised, could become critical failure points. The scale and scope of AI make it essential not just to use it wisely, but to build in protections against misuse or malfunction from the outset.
The introduction of AI has fundamentally changed our digital world - and in many ways for the better. However, with over four in ten businesses reporting a cyber security breach or attack in the past year, it’s clear that many organisations are still struggling to implement effective, resilient security strategies.
So, here are 5 things to consider when it comes to AI and network security before you give your AI systems full access to your networks.
1. Ensure that data is transmitted over secure, encrypted channels, and that data at rest is encrypted using strong standards.
It’s important that all data transmitted across networks is protected through secure, encrypted channels to prevent interception or tampering during transit. Likewise, robust encryption standards should be implemented for data stored at rest (including databases, file systems, and backups) to safeguard sensitive information against unauthorised access even in the event of a breach or physical compromise.
In response to the escalating threat landscape (and 22% of reported data breaches being attributed to a lack of encryption), UK organisations have increasingly recognised the critical role of encryption in their cybersecurity strategies - with 35% of organisations now encrypting data at rest, and 39% securing data in transit. But these encryption measures should be part of a comprehensive data protection strategy, regularly audited and updated to reflect evolving security best practices and regulatory requirements.
As cyber threats become more sophisticated, it's essential for organisations to stay ahead by implementing robust encryption protocols and ensuring all data is adequately protected.
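As an illustration of the "data in transit" half of this factor, the sketch below shows how a client-side TLS configuration can be hardened using Python's standard `ssl` module - rejecting legacy protocol versions and requiring certificate verification. This is a minimal example, not a complete data protection strategy; the function name is our own.

```python
import ssl

def make_strict_tls_context() -> ssl.SSLContext:
    """Build a client-side TLS context that refuses weak protocol versions."""
    context = ssl.create_default_context()            # verifies certificates by default
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # reject TLS 1.0 / 1.1
    context.check_hostname = True                     # enforce hostname matching
    context.verify_mode = ssl.CERT_REQUIRED           # reject unverified peers
    return context

ctx = make_strict_tls_context()
```

A context like this would then be passed to whichever HTTP or socket client the application uses, so that every outbound connection inherits the same floor on protocol strength. Encrypting data at rest typically relies on a vetted cryptography library or platform feature (e.g. full-disk or database-level encryption) rather than hand-rolled code.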
2. Only give AI access to the data that's essential for the AI to function.
With AI expanding the attack surface, adopting a “better safe than sorry” mindset is no longer optional - it’s essential. A zero-trust policy, often referred to as Zero Trust Architecture (ZTA), is critical when AI is part of your security environment. Unlike traditional VPNs, which grant broad access to entire networks once a user is verified, ZTA offers more granular control (limiting access to only the specific systems, applications, or data required). Every user, device, and application must continuously verify their identity, regardless of how long they've been part of the network. Essentially, assume every connection is a potential threat, until proven otherwise.
Zero Trust is something the UK’s Home Office has announced plans to implement across all government departments, which is expected to set a benchmark for the private sector - particularly in industries handling sensitive data, such as finance and healthcare. Implementing ZTA significantly reduces the risk of unauthorised access and lateral movement within networks, providing a robust defence against sophisticated AI-powered cyber threats.
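The "deny by default, grant per resource" idea at the heart of ZTA can be sketched in a few lines. This is a simplified illustration only - real deployments use identity providers, device posture services, and policy engines - and the identity and resource names are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessRequest:
    identity: str          # verified user, device, or service identity
    resource: str          # the specific system or dataset being requested
    device_trusted: bool   # whether a device posture check passed

# Explicit allow-list: identity -> the only resources it may touch.
# Anything not listed is denied, including for known identities.
POLICY = {
    "ai-inference-service": {"threat-feed", "network-telemetry"},
}

def is_allowed(req: AccessRequest) -> bool:
    """Deny by default: access requires a verified identity, a trusted
    device, and an explicit grant for the specific resource requested."""
    allowed_resources = POLICY.get(req.identity, set())
    return req.device_trusted and req.resource in allowed_resources
```

Note the contrast with the VPN model described above: there is no notion of being "inside" the network - every request is evaluated against the policy, every time.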
3. Use anonymisation techniques that strip personal identifiers.
When leveraging AI in environments where network security and data privacy are paramount, it is essential to apply anonymisation techniques that effectively remove or obscure personal identifiers (both direct and indirect) from the data used in AI training, inference, or monitoring.
In AI-driven security systems, this will help ensure compliance with privacy regulations, still allow for effective pattern recognition/anomaly detection, and reduce the risk of exposing sensitive user information in the event of a breach. So apply your anonymisation rigorously, evaluate it regularly, and combine it with as many other privacy-preserving methods as you can to ensure data minimisation and ethical AI use within secure network infrastructures.
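As a concrete sketch of stripping direct identifiers while keeping records useful for anomaly detection, the example below drops direct identifiers, pseudonymises a user ID with a salted one-way hash, and masks the host portion of an IP address. The field names are illustrative, and salted hashing is pseudonymisation rather than full anonymisation - it should be combined with the other techniques discussed here:

```python
import hashlib

DIRECT_IDENTIFIERS = {"name", "email", "phone"}  # fields dropped entirely

def pseudonymise(value: str, salt: str) -> str:
    """Replace an identifier with a salted one-way hash, so records can
    still be correlated without exposing the original value."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

def mask_ip(ip: str) -> str:
    """Zero the final octet of an IPv4 address, keeping only the subnet."""
    octets = ip.split(".")
    return ".".join(octets[:3] + ["0"])

def anonymise_record(record: dict, salt: str) -> dict:
    out = {}
    for key, value in record.items():
        if key in DIRECT_IDENTIFIERS:
            continue                           # drop direct identifiers outright
        elif key == "user_id":
            out[key] = pseudonymise(value, salt)
        elif key == "src_ip":
            out[key] = mask_ip(value)
        else:
            out[key] = value                   # keep non-identifying fields
    return out
```

The salt would be held separately from the data (and rotated), so that pseudonyms cannot be trivially reversed by anyone who obtains the records alone.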
4. Ensure all AI-related data activity is logged and monitored in real-time.
You need to know not just what data was accessed, but by whom, why, and when.
Traditional networking approaches can’t provide the visibility required to monitor AI-based systems - modern AI environments demand networks that are dynamic, adaptive, and aware of context. A shift to intent-based networking and identity-aware infrastructure (where access depends on who or what is making the request, not just where it’s coming from) is key to securing modern AI environments. Real-time monitoring also enables faster detection of anomalies, supports forensic investigations, and helps enforce compliance with internal policies and external regulations.
Despite the significant increase in severe cyber attacks over the past year, only a third of UK businesses (33%) have deployed security monitoring tools, and even fewer (31%) have undertaken cyber security risk assessments in the last year. Implementing intent-based networking and identity-aware infrastructure can provide the necessary visibility and control to secure AI-driven systems. By continuously verifying every device, user, and application, organisations can better detect and respond to threats in real-time, ensuring compliance with both internal policies and external regulations.
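The "who, what, why, and when" requirement above maps naturally to structured audit logging. The sketch below emits one machine-readable JSON record per data access, using only the Python standard library; the actor, resource, and purpose values are hypothetical, and in practice these records would flow into a SIEM or monitoring pipeline rather than being returned as strings:

```python
import json
from datetime import datetime, timezone

def audit_event(actor: str, resource: str, purpose: str) -> str:
    """Emit one structured audit record answering who, what, why, and when."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when
        "actor": actor,        # who: the verified identity making the request
        "resource": resource,  # what: the dataset or model accessed
        "purpose": purpose,    # why: the declared reason for access
    }
    return json.dumps(event)
```

Because every record carries the same fields, downstream tools can alert in real time on anomalies such as an AI service reading a dataset outside its declared purpose.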
5. Establish strong internal governance over who can train models, what data they use, and how results are validated.
Be aware that without strict oversight, AI systems can be compromised or biased - and that can lead to very serious issues (like processing information in a way that may breach GDPR). If you're unsure, the UK's Centre for Data Ethics and Innovation published a review into bias in algorithmic decision-making, outlining key steps to support organisations in using algorithms responsibly while ensuring ethical innovation. By adhering to these guidelines and maintaining strict oversight, organisations can develop AI systems that are not only effective but also fair, transparent, and compliant with UK regulations.
Only authorised personnel should have access to model training environments, and training data must be carefully curated to ensure it’s relevant, clean, and free from sensitive or misleading information. It’s also beneficial to implement clear protocols defining how models are tested for accuracy, robustness, and potential security implications before deployment. This not only protects the integrity of the AI systems but also ensures accountability and transparency.
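The access restrictions described above are, in essence, role-based access control over the model lifecycle. A minimal sketch, with entirely hypothetical role and action names, might look like this:

```python
# Role-based gate on model-lifecycle actions: each action lists the only
# roles permitted to perform it; everything else is denied by default.
PERMISSIONS = {
    "train_model": {"ml-engineer"},
    "approve_training_data": {"data-steward"},
    "deploy_model": {"ml-engineer", "security-reviewer"},
}

def can_perform(role: str, action: str) -> bool:
    """Deny unless the role is explicitly granted the action."""
    return role in PERMISSIONS.get(action, set())
```

Separating roles this way (e.g. the person approving training data is not the person training the model) builds the accountability and transparency the section calls for directly into the workflow.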
Striking the balance
The integration of AI into public sector networks is no longer hypothetical - it’s already happening. But while AI offers opportunities for smarter, more responsive systems, it also creates new vulnerabilities that can’t be ignored. From data misuse to model manipulation, AI systems can become both targets and tools of attack.
The NCSC has stated that there will be a stark digital divide between systems that keep pace with AI-enabled threats and the large number that don’t, and are thus more vulnerable - making cyber security at scale increasingly important into 2027 and beyond.
The public sector, in particular, must navigate a uniquely high-stakes threat landscape, but it also stands to gain massive efficiencies from smart, well-governed AI integration. The government has been clear: public bodies need to spend less time on admin and more time delivering the services people rely on. With the right safeguards (strong encryption, limited data access, anonymisation, real-time monitoring, and robust internal governance) AI can become an enabler rather than a risk.
Ultimately, organisations must shift from a mindset of “AI will secure us” to “we must secure our AI.” That means building systems that are not just AI-enabled, but AI-resilient, designed to withstand both external threats and internal misuse. Innovation can’t come at the expense of security. With the right controls in place, AI can be part of the solution, but only if we fully account for the risks it brings.
Embrace the innovation, but don’t forget to lock the doors behind it!
While legacy solutions often layer on fragmented fixes, what’s needed now is a unified, future-ready platform that seamlessly combines cutting-edge connectivity, security, and observability in one place. That’s exactly what Cloud Gateway has built: a single, unified platform, built and managed by industry experts, designed to help organisations embrace innovation confidently without compromising on control.
Want to see how organisations can prioritise data protection, user verification and device security in this new digital landscape? Discover how Cloud Gateway helps organisations take control.