Eliminating Security Risks in Kubernetes with Chris Porter

Episode Summary

Chris Porter is the director of solutions engineering at StackRox, makers of the industry’s first Kubernetes-native container security platform. Previously, Chris worked as the director of field sales engineers at Bracket Computing, a technical solutions architect and senior manager of systems engineering at Cisco, and a software engineer at VA Software, iBeam Broadcasting, and Silicon Graphics, among other positions. He is also an author and a certified AWS solutions architect and security specialist. Join Corey and Chris as they talk about bringing security to Kubernetes while touching upon how nobody really manages application security—they just pretend to; why security needs to think the same way as microservices; how a lot of people end up using the container model incorrectly by thinking they’re the same as VMs; what billing and security have in common; why security needs to be baked into the foundation vs. treated as an afterthought; why you should aim for incremental security improvements; what Chris thinks the business value of Kubernetes is; why Chris doesn’t think moving applications to containers automatically makes them more secure, and more.

Episode Show Notes & Transcript

About Chris Porter
Chris Porter is the Director of Solutions Engineering at StackRox, the leader in Kubernetes-native container security. Porter has more than 20 years of experience in pre-sales engineering roles, serving and advising customers on security for email, web, cloud, and now Kubernetes and containers. Porter is a certified AWS Solutions Architect and AWS Security Specialist, is the author of a Cisco Press book on Email Security, and holds a Master’s degree from Stevens Institute of Technology.


Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Cloud Economist Corey Quinn. This weekly show features conversations with people doing interesting work in the world of Cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.


Corey: This episode is sponsored in part by our friends at Linode. You might be familiar with Linode; they’ve been around for almost 20 years. They offer Cloud in a way that makes sense rather than a way that is actively ridiculous by trying to throw everything at a wall and see what sticks. Their pricing winds up being a lot more transparent—not to mention lower—their performance kicks the crap out of most other things in this space, and—my personal favorite—whenever you call them for support, you’ll get a human who’s empowered to fix whatever it is that’s giving you trouble. Visit linode.com/screaminginthecloud to learn more, and get $100 in credit to kick the tires. That’s linode.com/screaminginthecloud.


Corey: This episode is sponsored by ExtraHop. ExtraHop provides threat detection and response for the Enterprise (not the starship). On-prem security doesn’t translate well to cloud or multi-cloud environments, and that’s not even counting IoT. ExtraHop automatically discovers everything inside the perimeter, including your cloud workloads and IoT devices, detects these threats up to 35 percent faster, and helps you act immediately. Ask for a free trial of detection and response for AWS today at extrahop.com/trial.


Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. I’m joined on this promoted episode by Chris Porter, director of solutions engineering over at StackRox. Chris, welcome to the show.


Chris: Thank you.


Corey: So, Kubernetes security is the start and the stop of what StackRox does. Now, my impression of Kubernetes security is coming purely from Ian Coldwater’s Twitter feed, so when we talk about the real world of security that isn’t, you know, 280 characters or less per comment, it changes a little bit. Tell me a little bit about, I guess, first, what you folks do, and also why it’s a hard problem.


Chris: Yeah, so the challenge for Kubernetes is that, of course, it’s a different platform, but you still have the same security concerns. Every platform that comes along has its own nuances in how the applications are running on that platform. In this case, we’re handing a lot of that control over to developers. And what people forget is that, as a platform, it’s complicated.


And teams get started, and you think about productivity, and you think about how do I get my application running, and how do I have services that my clients can reach? And they’re forgetting that a lot of the power that’s given to the developers actually has security implications. So, we’re trying to raise the attention there, both by talking about it and letting people know, opening some eyes, and providing a software product that acts as a little bit of a nagging reminder about some of these security settings that they’ve either forgotten about, aren’t paying attention to, or are just flat-out unaware of.


Corey: So, I look at this from a somewhat naive perspective, where, okay, I know how to manage application security—well, I don’t; no one does; we pretend we do—and I know how to handle Linux environment security—again, same caveats apply—what needs to change when I start looking down the, I guess, interminable cliff that is the Kubernetes slide into microservices complexity? What makes it different?


Chris: Well, the tendency is to treat it like every other platform, and then the containers are just little VMs, and I have VMs in the Cloud, and I have VMs on-premise, or maybe I even have physical machines still in a data center. And I want to measure certain signals and understand what activities are associated with an attack. That typically involves things like hunting for vulnerabilities on running machines, and hunting for configuration errors on those machines, and changing them, or patching and keeping those up to date. They’re long-lived.


As they like to say, they’re pets. We care for them, we keep them alive. Containers, the phrase is, “Treat them like cattle, and not pets.” They’re supposed to be disposable, and containers in Kubernetes are no different. They’re quick to start up and quick to throw away. 


And one of the, sort of, characteristics is that you don’t try to fix a container, just like, you know, [laugh] you treat cattle as a number. And if they get sick, well, there’s always another one in line. So, it’s about disposing of and replacing these. And then their definition ultimately comes from some code that was used to construct them. So, when you can’t patch a running container—or at least you’re not supposed to, because you lose that patch the next time things get restarted—doing security in the traditional way becomes really hard. You have to look at changing it in the code.


And that’s kind of the mantra behind microservices, that if you’re fixing a bug, if you’re updating a package, if you’re going to incorporate a change into a service, you change the code, you rewrite, and then you roll out that change on a regular basis. And security needs to be thinking in the same way. Like, what happens if you can’t change the running application? How do you do security in that model? And our answer is typically just, you’ve got to go change this in the code somewhere, some setting that you don’t like, or some vulnerability was introduced early in that process, and you have to go to the beginning of that process, fix that, roll out the change.


Corey: So, one of the problems, I guess, at that point is that you’re trying to track down security, both from a proactive perspective as well as a forensics and diagnostic story where the thing that got exploited no longer exists by the time you wound up discovering that it had, in fact, been exploited. It seems like it does change the story significantly around how this winds up working. Is that accurate? Is that inaccurate?


Chris: Yeah, you’re absolutely right. In fact, we see the power of forensics, of tracking that event, not just for the purpose of reacting to it and sending a line to a very expensive security event monitoring tool to keep around forever, but to teach us a lesson. The event that occurred in our production application that was related to a security incident—somebody installed and ran a crypto miner, somebody put a backdoor in the environment, somebody exploited the application through a known vulnerability—those root causes are lessons that we can learn and apply to prevent it from happening again: to go back to the source code and say, “Well, this attack involves these three links in a chain, and we can interrupt these attacks by eliminating any one of those things.” So, of course, fixing the known vulnerability—if you’re running Apache, you’re responsible for keeping Apache up-to-date; if you’re running a full operating system in your container, there are a lot of utilities in there, and you’re either responsible for maintaining them or eliminating them. And so picking out each of those individual items is an opportunity; anything that goes a little awry in your environment is an opportunity, actually, to go back and correct the problem in the first place by changing the configuration and hardening against those attacks in the future.


We also have this notion that containers are VMs. They’re really not. And if you’re trying to go in and modify them and install user accounts, and modify packages, and do things on them that you would do in a virtual machine, then you’re making a mistake. You’re not using the model correctly, I suppose. But we could take advantage of that. 


Signals that would be normal day in and day out on a virtual machine, like a package being installed, actually end up being a tripwire here for what looks like an attack. So, somebody modifying the configuration of that container is either just making a really weird mistake, like trying to go in to add a user account or install some software on there, or, of course, it’s a malicious actor who’s just using the tools that are available to go pull down a compiler, compile a crypto miner, and start mining Ethereum on your dime.
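To make that tripwire idea concrete, here’s a minimal sketch of spotting runtime filesystem drift in running containers. It assumes the Docker SDK for Python and access to a local Docker socket; a real deployment would gather this per node through a runtime agent rather than a one-off script, and the output is purely illustrative.

```python
# Rough sketch: flag files added to running containers after they started.
# Assumes the Docker SDK for Python ("pip install docker") and that this
# script can reach the local Docker socket; illustrative only.
import docker

client = docker.from_env()

for container in client.containers.list():
    # diff() lists paths that differ from the container's image:
    # Kind 0 = modified, 1 = added, 2 = deleted.
    changes = container.diff() or []
    added = [c["Path"] for c in changes if c["Kind"] == 1]
    if added:
        print(f"[tripwire] {container.name}: files added at runtime")
        for path in added[:10]:
            print(f"    {path}")
```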


Corey: Yeah, this does align, I guess, our respective industries here where I talk about billing, you talk about security; functionally, one tends to lead to the other. You’ll often find that the way to discover a bunch of crypto mining stuff that’s spun up in your AWS account is via the bill. Now, that only works up to a certain point of scale because you’re spending, eh, when you’re spending a couple tens of millions a month, it’s hard to pick out that giant spike when compared to the giant normal that is the rest of your bill. But I feel like the two are sort of tied, in that billing and security are aligned spiritually in that people care a lot about both of them only after they have failed to adequately care about both of them. It always feels like a trailing action, not like it’s something that people wind up focusing on, front and center. Is that aligned with what you see in the market? Are you hoping to drive an educational story that changes that, or something else?


Chris: Yeah. So, I mean, you hit upon a bunch of hot topics for me personally. One is that I’ve been doing this for a while as a career, where I’ve been working for security vendors, companies who have a story to tell, and a product that addresses the problem, but we always see security as kind of an afterthought. It’s always something that gets brought in a little bit later. And you’re right, it’s not important until there’s a breach, or until your PCI auditor says you’re not even close to being prepared for a PCI audit.


So, it becomes a hot topic all of a sudden. And it’s hard to add security in later, especially in a world like this where your teams are prototyping, and developing, and they spend time and effort on that, and then all of a sudden, you bring in a security tool like ours that says, “Hey, these things are all wide open. This is all far too much privilege to grant these applications, and now you’ve got to go back and revise it so that your application works with those settings locked down.” And so the correct settings are better—it’s easier to deal with that at the start. The other aspect that you mentioned, like the billing thing here, is that people don’t see a way to tackle this. It’s a big nebulous problem, and unless they cut their bill by 50 percent, or they’re in perfect security, it’s hard to make the leap to how do I get there. But we’re making the point that you can make a few little changes here and there.


You won’t be PCI compliant in five days, but you can improve your security incrementally, just a little bit, just like you can reduce your costs a little bit. There’s some low-hanging fruit, there are some easy things that you can do, and then there are topics that I call ‘Advanced Security for Kubernetes,’ which is really nothing more than just, hey, some of the features in Kubernetes can eliminate entire classes of security risk. And so they’re something you have to think about and make sure your applications can work with them, but if you can get there, you’ll make those dramatic improvements. But it’s nice to have stepping stones to get there. It’s not something that you have to do all at once. And risk is not a thing that has to be zero in all cases. I mean, there’s no such thing as security risk of zero, but better security tomorrow than today is a goal in and of itself.


Corey: Yeah, and let’s not kid ourselves. Amazon frequently says that security is job zero. Every time I hear that, what I really interpret it as is, they came up with a whole list of jobs, realized, “Oh, crap, we forgot security,” and then put it on as job zero so they can pretend they baked it in from the beginning. In practice, I find that almost no one cares about security until they’ve really had to care about security. Does that match what you see? I mean, I’m not trying to make your customers or folks who suddenly find themselves caring a lot about security feel bad for it. I’m just trying to understand how you see the world.


Chris: Well, personally, I feel like the world would be a better place if people naturally thought about the advantages of better security. I mean, I for one would not be chasing down credit agencies and card companies because my identity was stolen a few years ago. And believe me, you really don’t want to deal with that process.


Corey: Oh, it offends me to have to deal with that process. It’s from a perspective of, “Let me get this straight. You didn’t exercise diligence over who you gave money to, and now you’re trying to make this my problem?” It bugs me on a visceral level.


Chris: That’s right. And a retailer can’t detect that an out-of-state license or other ID was used to open up a credit card at a store, and that it was then immediately used to buy gift cards. That’s what we would call an attack chain. These are patterns of attack that we know about, and anything that resembles that attack pattern should at least raise some red flags there. So, yes, I mean, I’d love for folks to treat this not just as a problem to be solved, but also as an opportunity.


But yes, you’re right that, generally, people don’t care about this until either there’s been a breach, or there’s some sort of incident or realization that occurs, or maybe I’ve got to meet some external auditor’s requirements. It could be an industry standard like PCI, it could be regulatory compliance like GDPR in Europe, but generally, there’s some driving force other than just the desire to have a more secure platform. Now, many organizations have mature security organizations, and they have requirements and goals, and they think about the kinds of data that they want to have, and how they want to respond to that. And I think Kubernetes security can actually fit into that pretty well. It still very much has a big surface area of stuff that you can measure, signals that are interesting to look at, a set of reactions that you can take when you go in there.


It’s just that the nature of it is a little bit different. It often takes some learning. I think that changing the way an organization does security to this code-driven model, to this preventive approach, is a great advantage, too, but nobody shifts quickly—and especially big organizations don’t turn on a dime to take advantage of those kinds of things.


Corey: Now, I’ve been trying for a long time to identify the business value of Kubernetes. And it’s pretty clear from what you’re saying, given the fact that you represent a company whose entire value proposition is security for Kubernetes, that improved security posture is not the slam-dunk, hit-it-out-of-the-park narrative of ‘why Kubernetes?’ that I naively had hoped it was. Is this making it worse? I mean, is it still worth going in the Kubernetes direction if… from your perspective, from a business value standpoint, what is the business value of Kubernetes?


Chris: That’s a great question, and there’s probably 100 different answers. We usually tie it into the business goals. When you look at a business, an organization, and if it’s not a tiny little social media startup with eight world-class engineers, most organizations struggle with how do I deliver more value to my customers? How do I get that banking app to support new payment systems? How do I move at the speed that consumers want with mobile devices and websites? 


And the model of software development that traditionally started in a dev team isolated somewhere, pushing to an operations team to manage it, typically ran into a series of challenges. And in my background as a software engineer, I never knew who was going to be running it, or what kind of hardware it was going to be on. There were a whole bunch of challenges about that. So, the promise of containers and Kubernetes is that I take my environment with me. And I think that putting that control in the hands of the developers to specify everything avoids a lot of the pain that comes from switching environments. I’ll probably get into trouble for mentioning this, but a lot of organizations see Kubernetes as a way to provide a multi-cloud strategy. I don’t know if I agree with that, but Kubernetes is a generic, open-source way to specify a platform for running these applications.


Corey: Oh, yeah. I mean, we saw with the recent re:Invent announcement that, now that EKS Anywhere is available, and they’re open-sourcing it, finally, we’re freed from the old-school problem of only being able to run Kubernetes on top of AWS. Wait a minute, no one ever claimed that was the case. What is the value here? It gets more and more confusing, the more you look at a lot of these things. And every deep discussion of ‘why Kubernetes’ seems to turn into something resembling circular reasoning.


Chris: Yep, absolutely. I think one aspect of it that kind of goes unnoticed is something that some much, much smarter people than me, who I was working with, came up with a few years ago. It’s the realization that applications are generally subject to the environment they’re running in. Now, with Cloud, you kind of get to this state where the application specifies what it needs. I can specify that I need a certain amount of redundancy, geo-redundancy, availability zone redundancy, a certain amount of storage performance, and network performance.


So, the application really defines what the infrastructure needs to do, and I think Kubernetes does that; we call it this declarative approach: you declare what you want the running environment to be. Now, it’s not quite as easy as you’d like it to be. I’d like to be able to specify things like, what is my mean time between failures, and my recovery time objectives, and other things, other software-defined service-level objectives a little bit more clearly. But we’re climbing up that ladder, and Kubernetes does that, where I can specify some of these things for my application. How many replicas do I want of this? 


So, you’re nearing the point where the software platform can actually do whatever it needs to do to configure whatever resources to meet those higher-level goals. So, that declarative nature, I think, is very powerful. And just because Kubernetes has one API for that doesn’t mean it’s the end-all, be-all, but that idea of the software defining what it needs to run is a really powerful one. And then, of course, that declarative nature is also really important for security. If we know that certain activity is not required for this application, why not declare it to be impossible? If you don’t need to write files—and you shouldn’t be writing files to your container filesystem—then make it impossible to write files. And that way, we exclude a whole class of security exploits that require writing some sort of payload to disk and then running it.
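To make that declaration concrete, here’s a minimal sketch using the Kubernetes Python client: a pod whose container cannot write to its own root filesystem. The pod name, image, and namespace are placeholders, and an emptyDir is mounted at /tmp for anything that legitimately needs scratch space.

```python
# Hedged sketch: declare a container with a read-only root filesystem.
# Assumes the Kubernetes Python client ("pip install kubernetes") and a
# working kubeconfig; the names and image below are placeholders.
from kubernetes import client, config

config.load_kube_config()

container = client.V1Container(
    name="web",
    image="example/web:1.0",  # placeholder image
    security_context=client.V1SecurityContext(
        read_only_root_filesystem=True,   # writing to the root filesystem is now impossible
        allow_privilege_escalation=False,
        run_as_non_root=True,
    ),
    # Ephemeral scratch space instead of a writable root filesystem.
    volume_mounts=[client.V1VolumeMount(name="tmp", mount_path="/tmp")],
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1PodSpec(
        containers=[container],
        volumes=[client.V1Volume(
            name="tmp", empty_dir=client.V1EmptyDirVolumeSource())],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```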


Corey: Oh, yeah. I agree wholeheartedly with that. One of the value propositions of AWS Lambda for me was, sure it was a bit of a learning curve, but what do you mean, I can’t write local files to the file system within this function? Oh, I can only write to /tmp, and only for this long, and it’s ephemeral. And it’s a platform that’s defined by its constraints. 


And on the one hand, while it’s great to be able to say, “Oh, okay, greenfield. I can build around those constraints, and it’s fine.” It feels like the Kubernetes story has always been focused much more around migrating existing applications that never conceived of a stateless model into a cloud-native style world.


Chris: Yep, you’re right. And I’m nodding my head vigorously at everything you’re saying. Like, using the platform to constrain the applications is really powerful. But you’re right, we see a lot of lift and shift, as they call it, or application modernization. Like, oh, I’ll just put this into a container and it’ll just run. 


And it’s interesting because Kubernetes originally really didn’t account for stateful applications. It had kind of assumed that you were going to have some nebulous data store—maybe an RDS, or a DynamoDB, or something—that was going to be outside of Kubernetes, and it didn’t really account for any other type of stateful application in here. It’s been a few years since they introduced it, but it still feels like it was thought of afterwards. So, the idea of lifting and shifting is a great one, but now, on the security side of things, a lot of teams think that just putting something into a container naturally makes it more secure.


And I’m, kind of, still trying to shuffle that in my head. I don’t really think so. Containers—just like virtualization—were never really designed as security barriers. This certainly wasn’t. And I think we’re just waiting for some of the possible exploits that might happen between containers.


You’ve already seen good examples of container escape attacks that have been shown to be practical. So, use it for what it’s worth, but lift and shift is going to be hard. Again, I think you can get better without being perfect. And so if I have an application that was built starting a few years ago with Java on Linux, running on virtual machines, there’s still probably a lot that would need to change to make it perfectly containerizable. But teams can move on that, I think, incrementally; make little changes here and there to improve the ability to resist an attack when someone exploits a known vulnerability or an unknown vulnerability.


This episode is sponsored by our friends at New Relic. If you’re like most environments, you probably have an incredibly complicated architecture, which means that monitoring it is going to take a dozen different tools. And then we get into the advanced stuff. We all have been there and know that pain, or will learn it shortly, and New Relic wants to change that. They’ve designed everything you need in one platform with pricing that’s simple and straightforward, and that means no more counting hosts. You also can get one user and a hundred gigabytes a month, totally free. To learn more, visit newrelic.com. Observability made simple.


Corey: That’s part of the problem: it feels on some level like you’re never going to win with security. It’s always something you can continue to improve at and lead to a better place. The problem is, it’s a journey, not a destination, and a clear lesson we’ve taken from all of this stuff is that you can’t buy security. Counterpoint: there sure are an awful lot of vendors willing to sell it to me. So—


Chris: [laugh]. That’s right.


Corey: —from that perspective, StackRox is not a services company. You’re a software company. 


Chris: That’s right.


Corey: You have a security-oriented platform around Kubernetes. What makes you folks different? What makes you not security-in-a-box, or the checklist compliance audit game that doesn’t materially change your security posture? Why you?


Chris: I think we’re pretty realistic about security. And you won’t get—at least for me—a line about StackRox will solve your security problems. We’re there to—


Corey: Sure. You work in solutions engineering, not that baseless marketing. Please, continue.


Chris: [laugh]. That’s right. So, when I’m talking to my potential customers, I have to deliver. So—we’re a small company—I will show you how the product works, and then the next day I’ll help you get it running in your environment. And so, you mentioned a journey and not a destination.


It’s never going to be done. You’ll never have fixed all of those vulnerabilities; there’s always more to do. So, our software really is about designing and enforcing a process, again, using Kubernetes. So, one of things I think that’s different about us is that we’re never going to show you a solution that says, “Run this StackRox library in your application,” or, “Change out this Kubernetes component for this StackRox component.” What we’re there to do is to show you that, hey, your application is running with a very high privilege level, and that means a Unix process privilege level or it could mean a privileged container, but there’s a setting change that you can make. 


And even if you can’t change it today, the security team is aware that it’s running that way, because it increases the likelihood of an exploit being serious. And so we’re there to nag you, you as the developer. You are the one who configured this, or you failed to change the default and the default is a bad one for security, so go change it in the source code. It means that after some time following our instructions, hopefully, our product has made your application better. And I’m not sure if I would say this, but you wouldn’t need our product anymore if that was all there was to it.


Changing those settings will help you, and of course, teams will change and they’ll bring on new applications and they’ll have a whole new set of the same problems again and again, so our solution is about nagging you and reminding you that, hey, this is something you haven’t maintained, this is an image you haven’t updated in a while, this is a network setting that is wide open to the entire internet. Now, there’s a call for that sometimes, but you should be aware of it, and not end up inadvertently publishing a service publicly without knowing about it, because that’s what bites people.
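As one illustration of that kind of nagging check, here’s a toy sketch of a pass over a cluster that flags privileged containers and writable root filesystems; it uses the Kubernetes Python client and assumes a kubeconfig with read access. This is a stand-in for the idea being described, not the StackRox product itself.

```python
# Toy "nag" pass over a cluster: flag containers that run privileged or
# without a read-only root filesystem. A sketch only; assumes the Kubernetes
# Python client and a kubeconfig with read access to the cluster.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    for c in pod.spec.containers:
        where = f"{pod.metadata.namespace}/{pod.metadata.name}/{c.name}"
        sc = c.security_context
        if sc is not None and sc.privileged:
            print(f"{where}: runs privileged; does it really need to?")
        if sc is None or not sc.read_only_root_filesystem:
            print(f"{where}: root filesystem is writable; "
                  "consider readOnlyRootFilesystem")
```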


Corey: How do you handle the noise problem, where you wind up with so many different stories about, “Oh, this problem is going to be massive,” and so on, and, “Holy crap, you haven’t rotated your IAM credentials in 60 days,” versus the, “Oh, by the way, you have no credential set for root and anyone who hits this endpoint can access stuff.” You wind up with the truly valuable, important things getting lost to noise. How do you tackle that?


Chris: It’s a big problem. And you’re typically multiplying that problem: as you go from an application that might have been on a few VMs to, now, dozens of containers, you’ve potentially got that over and over again. So, the way that we do it, again, is by trying to treat this realistically. You’re not going to fix every vulnerability, and the perfect is the enemy of the good, right? That’s the phrase out there.


So, we try to use a measure of the total risk to help you prioritize. Now, prioritizing things may not seem like the smartest thing, like, leaving something for later, but we know that organizations do this anyway. There’s always going to be things that are left until later, like you said—


Corey: “You need to fix these one to three things right now,” gets better results than, “Here’s the list of 500 things you need to do.” This is the problem with Nessus reports: historically, they did a terrible job of highlighting what’s the checkbox-compliance stuff versus the actual high level of risk.


Chris: That’s right. That’s right. And a real simple example would be that in a Kubernetes cluster, you’ve got all these pods running, which would have containers in them, and they’re all potentially on the network. But because you have to kind of declare in the configuration what level of exposure each one has, we know what’s sitting behind that ELB; we know the services that are exposed, we know the ones that have other types of ingress, and so there’s an example of prioritization: you’ve got the same vulnerability in 22 different pods, but it’s the front door that’s going to get that probe; that’s the place to fix it first. So, we can use some of the attributes of the environment to help us prioritize that. I mentioned things like privileges.


Sometimes, you’re just going to have to live with a high level of privilege, something needs to run as root all the time. Well, got to be careful with that one. And maybe we think about other defensive tactics around that. But we also want to make sure that it’s not exposed publicly. This is the SSH-open-to-the-world problem in a traditional security group. 


Sometimes it is going to be necessary, but you want to keep the awareness high, you want to keep the number of cases where you have to do that low, and if there’s a vulnerability in SSH, you better make sure that that thing is patched on all the EC2 instances that are exposed in that way. So, it’s about prioritization, it’s about being realistic that you can’t fix everything all at once, and a little bit of improvement is better than doing nothing.
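In that spirit, here’s a small sketch of prioritizing by exposure: list the services reachable from outside the cluster, since those are the front doors where the same vulnerability deserves attention first. It assumes the Kubernetes Python client and a readable kubeconfig, and it only looks at Service types, not every possible ingress path.

```python
# Sketch: surface externally reachable services first; a vulnerability behind
# one of these is the front door that will get probed. Assumes the Kubernetes
# Python client and a readable kubeconfig; LoadBalancer/NodePort only.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for svc in v1.list_service_for_all_namespaces().items:
    if svc.spec.type in ("LoadBalancer", "NodePort"):
        ports = ", ".join(str(p.port) for p in (svc.spec.ports or []))
        print(f"externally exposed: {svc.metadata.namespace}/{svc.metadata.name} "
              f"({svc.spec.type}, ports: {ports})")
```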


Corey: So, when does it make sense for companies to consider bringing you on board? I mean, the easy answer is, “Oh, at the very beginning, when you’re just sketching out ideas on a whiteboard,” yet, in practice, there are so many other competing priorities, that seems unlikely. When does it make actual business sense to bring you folks in?


Chris: Well, I like to come at this from the developers’ perspective. And let’s face it, developers hate security tools because all they do is nag you, they’re noisy, they constrain what you can do. Generally, they have some sort of dashboard or logging system somewhere else that I have to go and look at, and I don’t really want to deal with them at all. In general, though, if I’ve got a security problem that is hard to unwind, I almost always wish I knew about it earlier. Like, before I start using that Java library, or that version of Ubuntu Linux, or before I start using a configuration from an image that I got from Docker Hub, it’d be nice to know upfront that, hey, that thing hasn’t been maintained in seven months, or that setting is going to cause a privilege escalation problem.


The earlier the better. You’re right, teams aren’t going to go out and think about security before they even have a single cluster, but for the Kubernetes clusters themselves, there are some configuration options in Amazon EKS that you might want to avoid; again, those wide-open settings. So, as early as possible. From a pure software vendor perspective, we’d love to have everybody thinking about this problem from day one, but from the developer perspective, it is nice to have some insight to be able to assess what you’re using, not just for its usefulness, but for how much trouble am I going to get in with the security teams once this thing is running? We talk to a lot of organizations that are basically at the point where the app is ready to go to production, and that’s the first time that security has ever even heard about the effort. And [laugh] it’s a little bit hard to retrofit the security at that point. If it requires fundamentally re-architecting the services and finding alternate sources for base images, those are big fundamental changes that can throw off your production delivery date plans.
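One concrete example of the kind of wide-open EKS setting worth catching early is a cluster API endpoint that’s publicly reachable from anywhere. Here’s a hedged boto3 sketch that checks for it; the cluster name and region are placeholders.

```python
# Sketch: flag an EKS cluster whose API endpoint is open to the whole internet.
# Assumes boto3 with credentials allowed to call eks:DescribeCluster; the
# cluster name and region below are placeholders.
import boto3

eks = boto3.client("eks", region_name="us-east-1")
vpc_cfg = eks.describe_cluster(name="my-cluster")["cluster"]["resourcesVpcConfig"]

if vpc_cfg.get("endpointPublicAccess") and \
        "0.0.0.0/0" in vpc_cfg.get("publicAccessCidrs", []):
    print("EKS API endpoint is reachable from the entire internet; "
          "consider narrowing publicAccessCidrs or enabling the private endpoint.")
```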


Corey: So, if you had one takeaway that people could, I guess, carry forward with them from your approach to what you’ve seen, and how all of this works, what would it be? It’s easy to say, “Oh, just buy a product that solves the problem.” But that’s not enough in its own right; there needs to be something that aligns with a fundamental shift in strategy. What takeaway would you give people so that they can start with today?


Chris: So, we like to point out, again, that you might be choosing Kubernetes for any number of reasons: to make delivery of services more dynamic, to meet some internal goals, or just because it’s loads of fun. Whatever the reason, I’d like people to understand that this is a complicated platform, and there are both negatives and positives to that. The negatives are that it’s a complex platform, it has surface area that can be attacked itself, and you’re handing over a lot of infrastructure decisions to developers who don’t always make the best decisions, or aren’t always aware, sometimes, of the security implications of those things. But on the positive side, there’s so much you can do with it: you can actually get better security than you could in traditional environments by using, as we talked about earlier, the platform to constrain our applications. Actually, the folks at Google wrote a really nice document explaining some of this: how do you get to better security with something like Kubernetes? So, I want teams to be aware.


And it’s not just us saying this. The platform has a lot of features in it that interact in ways that are not always obvious, and that some of those decisions, some of the defaults, have those security implications. In fact, the folks behind and supporting Kubernetes, the Cloud Native Computing Foundation, actually paid to have a code audit and a security review, a pen test of Kubernetes itself. They did this last year and presented the results at KubeCon. And that’s awesome because there’s a group of people who are really interested in making sure this is a secure platform. 


But some of the lines that we like to use about how complex it is, and how hard it is for teams to figure out exactly what’s going on because of multiple layers of abstraction, really point to that security message that we talk about. So, as far as buying a tool, well, if you have enough diligence and you understand all of the topics in Kubernetes, you can do a lot of this yourself, but the tools make that easier. Nobody cares about a Kubernetes setting, except maybe the developer and the one DevOps guy who’s been saddled with the security stuff. The CSO doesn’t care if that Kubernetes setting is being used. But your CSO does care about whether you’re meeting PCI compliance or whether you’re impacted by that new vulnerability we saw on the news.


So, tying those goals down to the individual settings is the job of a product like StackRox. So, we’re there to help you achieve those higher-level security goals. But it comes down to what’s available in the platform. Use the platform for everything it’s worth. That’s the message. Use it for its security value as well as its productivity value.


Corey: Thank you so much for taking the time to speak with me about, well, a variety of things that I don’t tend to spend enough time talking about, according to everyone trying to sell Kubernetes. But that’s a separate problem. If people want to hear more, where can they find out about you and the company?


Chris: Well, we’re easily found on the web at stackrox.com. You can also reach out to me, I’m just [email protected]. We’re happy to answer any questions. The way that we sell our product is through customers evaluating it in their own environment, so come and kick the tires with us. 


If not, a lot of the information about how to use Kubernetes securely in your environment is available on our blog. We publish articles about these features and how to make the best use of them. And so, that knowledge and our experience are shared up on the website.


Corey: Thank you. And we’ll of course throw links to where you can be found into the [00:32:28 show notes]. Chris, thank you so much for taking the time to speak with me today. I appreciate it. 


Chris: Thank you, Corey, for having me.


Corey: Chris Porter, director of solutions engineering at StackRox. I’m Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you’ve enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you hated this podcast, please leave a five-star review anyway on your podcast platform of choice, and tell me why security is not a problem in serverless, and I should use that instead.


Announcer: This has been this week’s episode of Screaming in the Cloud. You can also find more Corey at screaminginthecloud.com, or wherever fine snark is sold.


This has been a HumblePod production. Stay humble.