What visibility is and why it’s important to cloud security
Visibility is often cited as a critical aspect of cloud security. In fact, most practitioners will argue that it’s the first step for any cloud security practice.
They’re right. But what exactly does everyone mean by visibility and why is it so important?
The need for visibility
Visibility is needed on a few levels. The first is simply knowing what’s running in your cloud. The second is understanding what’s on those systems, how those systems operate, and how they interact with each other.
The reason behind this push is simple. If you don’t understand what’s running in your cloud, your organization can’t evaluate the risks it faces.
At its heart, the cloud is an amplifier. What used to take multiple teams and a major capital investment is now just an API call away. If so inclined, you could create an entire data centre in the cloud with just one template.
That flexibility extends even further as cloud native designs scale up and down to meet customer demand. Long gone are the days when builders could keep the structure of their environment in their heads, or even in a document, because it changed so rarely.
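To make that concrete, here’s a minimal sketch of what “one template” can look like in practice, assuming AWS, boto3, and CloudFormation. The stack name and the single-bucket template are purely illustrative stand-ins for a full environment definition.

```python
# Minimal sketch: an environment launched from one template, assuming
# AWS and boto3. The stack name and template body are illustrative.
import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        # A single bucket stands in for what could be VPCs, subnets,
        # instances, databases, and more in a full environment template.
        "ExampleBucket": {"Type": "AWS::S3::Bucket"}
    },
}

cloudformation = boto3.client("cloudformation")
cloudformation.create_stack(
    StackName="example-environment",  # hypothetical stack name
    TemplateBody=json.dumps(template),
)
```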
Visibility is the first step of making sense of this new environment. By focusing on visibility, you can surface potential risks and evaluate them in their proper context.
Visibility in action
If the goal of visibility is to be aware of what’s running in your cloud, how exactly does that work?
At the simplest, you could log in to each cloud service you use and check what’s running in each and every region for that provider. How many virtual machines are running in these regions? Which databases are available over here? What user accounts have been created?
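As a rough sketch of what that manual check looks like, here’s how you might walk every region and count running instances, assuming AWS and boto3. Databases, user accounts, and every other resource type would each need their own calls.

```python
# Sketch of the manual approach: walk every region and count running
# EC2 instances. Assumes AWS credentials are already configured.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]

for region in regions:
    regional_ec2 = boto3.client("ec2", region_name=region)
    paginator = regional_ec2.get_paginator("describe_instances")
    running = 0
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            running += len(reservation["Instances"])
    print(f"{region}: {running} running instances")
```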
As you might imagine, that approach quickly becomes overwhelming. There needs to be a better way.
Each of the big three cloud service providers (AWS, Azure, and Google Cloud) offers some form of native metrics and monitoring service. These services provide dashboards and other tools that automate much of that manual process and help you understand what’s running in your cloud.
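As one example (assuming AWS, with AWS Config already enabled and recording), you can query the provider’s own inventory rather than clicking through consoles. Azure and Google Cloud have comparable services.

```python
# Sketch: query AWS Config's resource inventory instead of checking
# each console by hand. Assumes AWS Config is enabled and recording.
import boto3

config = boto3.client("config")

# First page only for brevity; follow nextToken for the full inventory.
response = config.list_discovered_resources(resourceType="AWS::EC2::Instance")
for resource in response["resourceIdentifiers"]:
    print(resource["resourceId"], resource["resourceType"])
```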
But that still only covers the top level of what resources are running in your account. While this approach makes it easier to see the number of datastores, virtual machines/instances, and other key resources, you still need to figure out what’s on them and how they are supposed to interact.
To gain this level of visibility, you’re going to need to use a variety of cloud service provider offerings, third-party tools, and your (hopefully up-to-date) architecture diagrams.
You’re going to need to find tools that look into these systems and their behaviours.
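As a sketch of what “looking into these systems” can mean, here’s one option assuming AWS Systems Manager Inventory is set up on the instance. The instance ID is hypothetical, and a third-party agent could supply similar data instead.

```python
# Sketch: list the applications installed on a single instance via
# AWS Systems Manager Inventory. Assumes the SSM agent and inventory
# collection are already configured; the instance ID is illustrative.
import boto3

ssm = boto3.client("ssm")
response = ssm.list_inventory_entries(
    InstanceId="i-0123456789abcdef0",  # hypothetical instance ID
    TypeName="AWS:Application",        # installed packages/applications
)

for entry in response["Entries"]:
    print(entry.get("Name"), entry.get("Version"))
```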
The ironic challenge
One might imagine that, over time, visibility would get easier for your teams. They should gain familiarity with the tooling, learn how other teams build, and move beyond the simple “this exists” to a more in-depth understanding of what’s happening in your cloud.
That’s true. But only in part.
The irony of trying to gain visibility is that it can actually get harder the more mature your organization is in its cloud journey.
As more and more teams start building in the cloud, the sheer volume of resources you need to keep track of becomes a problem. The tooling and processes you used in the beginning probably won’t scale with you, and you’ll need a dedicated solution.
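A dedicated solution can take many forms, but even a simple one changes how you query: instead of per-service checks, you pull one paginated, account-wide inventory. Here’s a sketch using the AWS Resource Groups Tagging API via boto3; note that it only reports resources that carry, or have carried, tags.

```python
# Sketch: a single paginated, account-wide inventory grouped by service,
# using the Resource Groups Tagging API. Only tagged (or previously
# tagged) resources appear in the results.
from collections import Counter

import boto3

tagging = boto3.client("resourcegroupstaggingapi")
paginator = tagging.get_paginator("get_resources")

by_service = Counter()
for page in paginator.paginate():
    for resource in page["ResourceTagMappingList"]:
        # ARNs look like arn:aws:SERVICE:region:account:resource
        service = resource["ResourceARN"].split(":")[2]
        by_service[service] += 1

for service, count in by_service.most_common():
    print(f"{service}: {count}")
```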
At the same time, the variety of resources being used will increase. Teams will use more and more of the offerings from your cloud service provider and start to adopt cutting-edge services sooner.
On the opposite end of the spectrum, you’ll also see more and more of your legacy estate brought into the cloud. What was once a Linux-only environment may now include a significant number of Windows workloads as you wind down your data centres.
With those legacy workloads, you’ll want to ensure that you have as much visibility as possible even though they may not have the same level of protection you’ve built in from the start with your newer, cloud native designs.
To make the challenge even more difficult, sometimes the team structure or pace of change won’t allow you to maintain visibility into those resources. Here, intermittent visibility might be the best bet.
Moving past visibility
Gaining and maintaining visibility is a strong first step. But it’s only the first step. Once you start down this path, you’ll quickly realize your efforts should expand into areas like traceability (being able to track workflows through a system) and observability (understanding a system from its outputs).
These two areas of practice will help your teams move past just knowing what’s in your cloud to having an understanding of what’s actually happening.
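As a small illustration of traceability in code, here’s a sketch using the OpenTelemetry Python SDK: each step of a workflow is wrapped in a span so a request can be followed across components. The span names and attribute are illustrative, and a real deployment would export to a tracing backend rather than the console.

```python
# Sketch: wrapping workflow steps in spans with OpenTelemetry so a
# request can be traced across components. Exports to the console here.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("handle-order") as span:
    span.set_attribute("order.id", "12345")  # illustrative attribute
    with tracer.start_as_current_span("store-order"):
        pass  # write to the datastore; the nested span traces the dependency
```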
Traceability and observability aren’t usually spoken about in a security context; they live almost entirely in the land of the builder. I would argue that the insights these techniques provide into system performance and operations are valuable to the security team as well.
Understanding how data is being processed and stored by your systems is always helpful to your security practice. It will help you better interpret the data you get from your security tools.
In fact, that type of data should be seen as the complement to the security information (vulnerability data, risky behaviours, and so on) that you share with the development teams.
Working together from a common dataset with tools that are customized for each perspective—security and builder—is what makes teams stronger…and both perspectives start with visibility.