
Whenever we talk about security management, the topic of best practices will normally crop up. Taking lessons from your peers is a natural step, and the security community is particularly keen to help its members improve. Yet how do those practices get established, and what makes them “good”? When are new practices needed, and when should older methods die out?
As the wider world of IT evolves, so does the list of practices and techniques needed to keep those IT environments secure. For example, cloud computing launched with AWS’s first service in November 2004, and the practices for securing cloud deployments have had to keep pace with the rapid iteration of the cloud ever since. Yet AWS, Microsoft, Google and others all have their own services, ways of handling data, and best practices for security.
At the same time, every career path has its own history, list of best practices, and received wisdom that may no longer be fit for purpose. Consider epidemiology, the study of how disease spreads and affects populations. It provides an apt metaphor for the evolution of IT security.
In the beginning
Bloodletting was state-of-the-art medicine for thousands of years, going all the way back to Ancient Egypt. From the 11th century AD, ‘barber surgeons’ had successful careers and were trusted professionals. Bloodletting ultimately ceased as a practice – and as a career – because of a high correlation between the treatment and patients dying. Rather than relying on anecdotes and received wisdom, the data showed that the practice was ineffective at best and actively harmful at worst. It was literally worse than doing nothing.
Disease theory developed as a way to understand threats and what caused illness. Epidemiology emerged when the prevailing theory was that “miasmas” or “swamp gases” were the main cause of disease. People recognized that illnesses were caused by something invisible that could be passed from person to person without contact.
Yet during a cholera outbreak in nineteenth-century London, “scientists” purposefully ignored overwhelming evidence that cholera was a waterborne disease. Those who supported the miasma theory were thinking of their careers and personal identities. Eventually, the evidence gathered by John Snow – coupled with the disabling of the Broad Street pump and the lower infection rates that followed – forced the issue. This data-led approach gave us the measurement-based practice of epidemiology we have today.
Evolution and risk measurement
For the IT security sector, the amount of data available around what is taking place in the cloud should make it easier to secure those environments. Yet this data has to be used effectively if it is to make a difference. Telemetry data on its own is not measurement.
In the world of epidemiology, people dying around you is data; knowing why is measurement, which leads to investigation and then to a solution. Conversely, a consistent lack of measurement signals a practice’s eventual demise. It is at this evolutionary crossroads that we find security.
Measurement involves using data to understand the wider picture, and then creating the processes to solve the problems that are discovered. For CISOs, developments like cloud and AI have changed the game. For example, the advent of generative AI is a catalyst for a measurement evolution in security.
Generative AI writes code that in turn creates more code, designs applications and implements workloads and AI agents. This approach should remove barriers to value generation for many more people. But how many will understand that process and be able to pinpoint the risks that exist?
Amplifying risk
AI will also amplify potential risk scenarios. Threat actors will use AI to improve their attacks, both in terms of quality and quantity. AI-enabled organizations will speed up their processes and make more money, while potentially leaving aside considerations like risk or security.
Understanding of how code is created and how applications are developed will shrink, leading to problems if and when things stop working. So how can CISOs be confident in their approach to risk?
Should we throw up our hands and claim that measuring risk is impossible? No – that is the same approach the supporters of miasma theory took, insisting there was not enough data and showing no wish to test their theories. Instead, we have to look at measurement in this new environment in order to deliver what is needed.
Where AI can improve results is in identifying what the real problems are. Automation can help you define the problem through measurement, model the risks, and then set out to solve them.
Making risk operations work
Once we have measurement in place, we then have to use this information as effectively and productively as possible. This is where an evolution in approach is needed: measuring risk, then using that information to effect change. Setting up a Risk Operations Centre, or ROC, makes it easier to collaborate on outcomes around remediation, mitigation or risk transfer.
Just as a Security Operations Centre covers all the data around IT threats and risks, a ROC’s role is to collect and normalize measurements around risk, then provide that information to those who need it to take some form of action.
The goal is to make those operational processes easier for everyone involved. Where the ROC differs from a SOC is that the ROC focuses on the wider business problems around value-at-risk, while the SOC looks at what the security team needs to know about the underlying IT issues.
CISOs will look to the ROC to quantify cyber risk for the business specifically. On the data side, this involves threat intelligence, time series data, network graphs, business priorities, and online analytical processing to create risk insight in a specific context, as well as measurement algorithms optimized for historical trend analysis.
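As a rough illustration of what that context-aware measurement could look like in practice, here is a minimal Python sketch that blends a technical severity score with a threat intelligence signal and business value to prioritize findings. All field names, weights and figures are hypothetical assumptions for the sake of the example, not a description of any particular product or algorithm.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    asset: str               # affected asset or workload
    cvss: float              # technical severity, 0-10
    exploited_in_wild: bool  # threat intelligence signal
    business_value: float    # relative value-at-risk for this asset, 0-1

def risk_score(f: Finding) -> float:
    """Blend technical severity, threat intel and business context (illustrative weights)."""
    threat_multiplier = 1.5 if f.exploited_in_wild else 1.0
    return round(min((f.cvss / 10) * threat_multiplier * f.business_value * 100, 100.0), 1)

findings = [
    Finding("payments-api", cvss=9.8, exploited_in_wild=True, business_value=0.9),
    Finding("internal-wiki", cvss=9.8, exploited_in_wild=False, business_value=0.2),
]

# The same CVSS score produces very different risk once business context is applied.
for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{f.asset}: {risk_score(f)}")
```

The point of the sketch is the ordering, not the numbers: two findings with identical technical severity end up far apart once business context and threat intelligence are part of the measurement.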
By focusing on risk, CISOs can measure their success and ensure their organizations stay secure. This means making explicit assumptions about value-at-risk and plausible future loss, and then optimizing decisions about risk over time.
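To make “plausible future loss” concrete, one simple measure a ROC could track over time is expected annual loss per scenario. The sketch below uses made-up event frequencies and per-event loss figures purely for illustration.

```python
# Illustrative only: a minimal expected annual loss estimate of the kind a ROC
# might track over time. Event rates and per-event losses are invented assumptions.

scenarios = {
    # scenario: (estimated events per year, estimated loss per event)
    "ransomware on production cloud workloads": (0.3, 2_000_000),
    "credential phishing leading to data exposure": (2.0, 150_000),
}

for name, (rate, loss) in scenarios.items():
    annual_expected_loss = rate * loss
    print(f"{name}: ~{annual_expected_loss:,.0f} expected loss per year")
```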
Measuring risk involves looking at business impact and potential threats. However, this measurement is not enough on its own. Turning measurement into operational practices will provide you with guidance on where to concentrate efforts to reduce risk automatically.
Just as new medical practices developed out of measurement, risk operations will develop based on how effectively you can use data, understand your position, and direct your efforts where they have the most impact.
This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro