
Whenever a new technology appears, it’s usually two steps forward and one step back, and the step back is almost always security. Such is the story with AI, and more specifically the Model Context Protocol (MCP): innovation keeps running ahead of security.
On the one hand, MCP servers have been a boon to engineers. LLMs can now speak in a ‘common tongue’ to each other, to data sources, tools, and even people. They can connect to data they wouldn’t otherwise have access to, beyond training data or what’s public online.
Usually, that means data in private systems belonging to companies. That access is so useful for building better-behaved AI that MCP adoption may be far more widespread than most people realize: according to Backslash Security, there are over 15,000 MCP servers worldwide.
Co-Founder and CEO of Teleport.
But like any technology, MCP can be exploited. Hundreds of MCP servers were recently found to leak sensitive data and facilitate remote code execution attacks due to incomplete or inadequate access controls. Trend Micro even warns that threat actors could target hardcoded credentials in MCP servers. Any veteran engineer could have seen that coming from a mile away.
‘How to secure MCP’ is therefore a question many enterprises and security teams will ask. But hackers do not attack protocols directly, which makes the better question this: how do you make your underlying infrastructure, of which MCP is one part, more resilient against common attack vectors like phishing?
Hackers don’t attack protocols – they attack mistakes
Almost every attack, excepting the odd zero-day exploit, begins with a mistake, like exposing a password or giving a junior employee access to privileged data. It’s why phishing for credentials is such a common attack vector.
It’s also why the risk of protocols being exploited to breach IT infrastructure doesn’t come from the protocol itself, but the identities interacting with the protocol.
Any human or machine user reliant on static credentials or standing privileges is vulnerable to phishing. This makes any AI or protocol (MCP) interacting with that user vulnerable, too.
This is MCP’s biggest blind spot. While MCP allows AI systems to request only relevant context from data repositories or tools, it doesn’t stop AI from surrendering sensitive data to identities that have been impersonated via stolen credentials.
That’s a big loophole when it’s easier than ever to impersonate other users unnoticed by obtaining valid static credentials (e.g. passwords, API keys). MCP also lacks any inherent access control features.
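To see why a static credential is such a loophole, consider a minimal sketch of a server that authenticates requests by API key alone (all names here are hypothetical, not any real MCP implementation): whoever holds the key *is* the identity, so the server cannot tell the legitimate user from a phisher.

```python
# Hypothetical sketch: a server that trusts a static API key.
# Hardcoding the key is the classic mistake Trend Micro warns about.
VALID_KEYS = {"sk-live-cfo-2024": "cfo"}

def handle_request(api_key: str, resource: str) -> str:
    """Authenticate purely by static key -- no proof of who sent it."""
    identity = VALID_KEYS.get(api_key)
    if identity is None:
        return "403 Forbidden"
    # The server trusts the key, not the person behind it. A stolen
    # key makes an attacker indistinguishable from the real CFO.
    return f"200 OK: serving {resource} to '{identity}'"

# The legitimate CFO and a phisher with the stolen key look identical:
print(handle_request("sk-live-cfo-2024", "payroll.csv"))
print(handle_request("guessed-key", "payroll.csv"))
```

Nothing in the request proves possession of anything beyond the secret itself, which is exactly what phishing steals.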
So, securing MCP is really about making sure only authorized identities are interacting with AI. But knowing who or what is an authorized user is difficult in today’s landscape of fragmented identities.
Welcome to hell, aka identity fragmentation
Complex modern computing environments have made it harder than ever for engineers to manage and protect infrastructure. You can see one symptom of this complexity in how enterprises handle role-based access controls: many have more roles than employees.
Think of identity management today as a big, interconnected archipelago. Each island represents part of your computing infrastructure – cloud platforms, on-prem servers, SaaS, legacy systems, etc. Each has its own customs office and passport system, except your passport (identity) on one island doesn’t work on the next.
Sometimes you need a passport, other times a visa. Some islands have strict guards, others barely check your credentials, and others still, well, let’s just say they lost your records entirely.
If you’re the customs officer, it’s impossible to easily track who’s coming and going across islands. Outdated or fake passports float around some of them, and customs might take ages to notice.
This is hard enough if the ‘customs officer’ is a security team, but let’s say the officer’s an AI model. It won’t tell the CEO of a company apart from an impostor CEO. It only cares that ‘the CEO’ is asking for access to financial records.
Again, that’s a blind spot for MCP, and so is the fact that a hacker could pretend to be a database, microservice, or AI agent. They could do so trivially, since many machines rely on static, over-privileged credentials that can be stolen.
MCP won’t mitigate this unless paired with a security model that lets teams manage identities of humans, machines, and AI more cohesively.
Making identities unspoofable
If you’re deploying MCP and AI, you should combine them with a cybersecurity approach that isn’t based on secrets and siloed identities.
If you want to eliminate secrets, back all your identities, including AI, with cryptographic authentication (Trusted Platform Modules, biometrics). Even MCP deployments have to get on board with this, because if an API key leaks, any attacker can impersonate anyone or anything.
So, replace those standing secrets for agents with strong, ephemeral authentication combined with just-in-time access.
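The ephemeral, just-in-time idea can be sketched in a few lines (function names and the ten-minute TTL are illustrative assumptions, not any particular product’s API): a credential is minted only at the moment access is needed and dies on its own shortly after, so a stolen copy is worthless almost immediately.

```python
import secrets
import time

TOKEN_TTL = 600  # seconds; illustrative ten-minute lifetime

def issue_token(identity, now=None):
    """Mint a one-off credential at the moment access is requested."""
    now = time.time() if now is None else now
    return {
        "sub": identity,
        "nonce": secrets.token_hex(16),   # unique per grant, never reused
        "expires_at": now + TOKEN_TTL,
    }

def is_valid(token, now=None):
    """A token self-destructs: past its TTL, possession proves nothing."""
    now = time.time() if now is None else now
    return now < token["expires_at"]
```

A real deployment would also sign these tokens against a hardware-backed key (e.g. a TPM) rather than trust a bare dictionary, but the design point stands: nothing long-lived exists for a phisher to steal.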
Speaking of access, the access controls of your chosen LLM should be tied to the same identity system as the rest of your company. Otherwise, there’s not much stopping it from disclosing sensitive data to an intern asking for a list of the highest-paid employees.
You need a single source of truth for identity and access that applies to all identities. Without that, it becomes impossible to enforce meaningful guardrails.
Some startups will inevitably try to solve AI security with solutions that manage AI identities in a vacuum, but that would make identity fragmentation even worse. AI doesn’t belong on an island, but in a framework where it’s aware of the broader access policies governing other users in your infrastructure.
However you achieve that with tooling, you should be able to consistently apply policy across your identities from one place, whether for AI, cloud services, servers, remote desktops, databases, Kubernetes, etc. Those identities should only ever have privileges when actively needed, which means no standing access while idle.
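That “one place” can be pictured as a single policy table that every resource type consults, with AI agents treated as just another identity (the roles and resources below are made-up examples, not a real product’s schema):

```python
# Illustrative sketch: one policy source of truth for every identity,
# human or machine, across every resource type.
POLICY = {
    ("analyst", "database"): {"read"},
    ("sre", "kubernetes"): {"read", "deploy"},
    ("ai-agent", "database"): {"read"},  # agents answer to the same rules
}

def allowed(role: str, resource: str, action: str) -> bool:
    """Every access check, for every island, routes through here."""
    return action in POLICY.get((role, resource), set())
```

Because there is one table, revoking a role or tightening a rule takes effect everywhere at once, instead of being re-implemented per cloud platform, per database, per AI deployment.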
It would be irresponsible to say that unifying identities eradicates all cybersecurity complexity. That said, a lot of the complexity disappears when you tidy your space. The more complex a system is, the more likely it is that someone will make a mistake. And mistakes are, fundamentally, what we need to prevent.
This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro