Clinton Herget, Field CTO at Snyk, joins Corey to discuss Snyk Cloud, a cloud security solution that allows developers to get full visibility into the complexity of their cloud environment and mitigate deployment risks. Clinton dives into how Snyk’s vulnerability intelligence is designed not only to scan for code errors but also to look for vulnerabilities in how different platforms communicate with each other, giving a better picture of potential deployment and security risks. Clinton also reveals how he went from building software for a living to talking about building software, which is much easier, and how his 20 years of development experience give him tremendous empathy for the developer community Snyk aims to help.
Clinton Herget is Field CTO at Snyk, the leader in Developer Security. He focuses on helping Snyk's strategic customers on their journey to DevSecOps maturity. A seasoned technologist, Clinton spent his 20-year career prior to Snyk as a web software developer, DevOps consultant, cloud solutions architect, and engineering director. Clinton is passionate about empowering software engineers to do their best work in the chaotic cloud-native world, and is a frequent conference speaker, developer advocate, and technical thought leader.
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey: This episode is brought to us in part by our friends at Pinecone. They believe that all anyone really wants is to be understood, and that includes your users. AI models combined with the Pinecone vector database let your applications understand and act on what your users want… without making them spell it out.
Make your search application find results by meaning instead of just keywords, your personalization system make picks based on relevance instead of just tags, and your security applications match threats by resemblance instead of just regular expressions. Pinecone provides the cloud infrastructure that makes this easy, fast, and scalable. Thanks to my friends at Pinecone for sponsoring this episode. Visit Pinecone.io to understand more.
Corey: This episode is brought to you in part by our friends at Veeam. Do you care about backups? Of course you don’t. Nobody cares about backups. Stop lying to yourselves! You care about restores, usually right after you didn’t care enough about backups. If you’re tired of the vulnerabilities, costs, and slow recoveries when using snapshots to restore your data, assuming you even have them at all living in AWS-land, there is an alternative for you. Check out Veeam, that’s V-E-E-A-M, for secure, zero-fuss AWS backup that won’t leave you high and dry when it’s time to restore. Stop taking chances with your data. Talk to Veeam. My thanks to them for sponsoring this ridiculous podcast.
Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. One of the fun things about establishing traditions is that the first time you do it, you don’t really know that that’s what’s happening. Almost exactly a year ago, I sat down for a previous promoted guest episode much like this one, with Clinton Herget at Snyk—or Synic; however you want to pronounce that. He is apparently a scarecrow of some sort because when last we spoke, he was a principal solutions engineer, but like any good scarecrow, he was outstanding in his field, and now, as a result, is a Field CTO. Clinton, thanks for coming back, and let me start by congratulating you on the promotion. Or consoling you, depending upon how good or bad it is.
Clinton: You know, Corey, a little bit of column A, a little bit of column B. But very glad to be here again, and frankly, I think it’s because you insist on mispronouncing Snyk as Synic, and so you get me again.
Corey: Yeah, you could add a couple of new letters to it and just call the company [Synack 00:01:27]. Now, it’s a hard pivot to a networking company. So, there’s always options.
Clinton: I acknowledge what you did there, Corey.
Corey: I like that quite a bit. I wasn’t sure you’d get it.
Clinton: I’m a nerd going way, way back, so we’ll have to go pretty deep in the stack for you to stump me on some of this stuff.
Corey: As we did with the, “I wasn’t sure you’d get it.” See, that one sailed right past you. And I win. Chalk another one up for me in the networking pun wars. Great, we’ll loop back for that later.
Clinton: I don’t even know where I am right now.
Corey: [laugh]. So, let’s go back to a question that one would think I’d already established a year ago, but I have the attention span of basically a goldfish, let’s not kid ourselves. So, as I’m visiting the Snyk website, I find that it says different words than it did a year ago, which is generally a positive sign; when nothing’s been updated, including the copyright date, things are going either really well or really badly. One wonders. But no, now you’re talking about Snyk Cloud, you’re talking about several other offerings as well, and my understanding of what it is you folks do no longer appears to be completely accurate. So, let me be direct. What the hell do you folks do over there?
Clinton: It’s a really great question. Glad you asked me on a year later to answer it. I would say at a very high level, what we do hasn’t changed. However, I think the industry has certainly come a long way in the past couple years and our job is to adapt to that. Snyk—again, pronounced like a pair of sneakers sneaking around—is a developer security platform. So, we focus on enabling the people who build applications—which as of today, means modern applications built in the cloud—to have better visibility, and ultimately a better chance of mitigating the risk that goes into those applications when it matters most, which is actually in their workflow.
Now, you’re exactly right. Things have certainly expanded in that remit because the job of a software engineer is very different, I think this year than it even was last year, and that’s continually evolving over time. As a developer now, I’m doing a lot more than I was doing a few years ago. And one of the things I’m doing is building infrastructure in the cloud, I’m writing YAML files, I’m writing CloudFormation templates to deploy things out to AWS. And what happens in the cloud has a lot to do with the risk to my organization associated with those applications that I’m building.
So, I’d love to talk a little bit more about why we decided to make that move, but I don’t think that represents a watering down of what we’re trying to do at Snyk. I think it recognizes that developer security vision fundamentally can’t exist without some understanding of what’s happening in the cloud.
Corey: One of the things that always scares me—and sets the spidey sense tingling—is when I see a company who has a product, and I’m familiar—ish—with what they do. And then they take their product name and slap the word cloud at the end, which is almost always code for, “Okay, so we took the thing that we sold in boxes in data centers, and now we’re making a shitty hosted version available because it turns out you rubes will absolutely pay a subscription for it.” Yeah, I don’t get the sense that that’s at all what you’re doing. In fact, I don’t believe that you’re offering a hosted managed service at the moment, are you?
Clinton: No, the cloud part fundamentally refers to a new product, an offering that looks at the security of, or potentially the risks being introduced into, cloud infrastructure by the engineers who are now doing that work, who are writing infrastructure as code. We previously had an infrastructure-as-code security product, and that served alongside our static analysis tool, which is Snyk Code, our open-source tool, and our container scanner, recognizing that the kinds of vulnerabilities you can potentially introduce in writing cloud infrastructure are not only bad for the organization on their own—I mean, nobody wants to create an S3 bucket that’s wide open to the world—but also, those misconfigurations can increase the blast radius of other kinds of vulnerabilities in the stack. So, I think what it does is recognize that, as you and I think your listeners well know, Corey, there’s no such thing as the cloud, right? The cloud is just a bunch of fancy software designed to abstract away the fact that you’re running stuff on somebody else’s computer, right?
Corey: Unfortunately, in this case, the fact that you’re calling it Snyk Cloud does not mean that you’re doing what so many other companies in that same space do; that would have led to a really short interview because I have no faith that it’s the right path forward, especially for you folks, where it’s, “Oh, you want to be secure? You’ve got to host your stuff on our stuff instead. That’s why we called it cloud.” That’s the direction that I’ve seen a lot of folks try to pivot in, and I always find it disastrous. It’s, “Yeah, well, at Snyk, if we run your code or your shitty applications here in our environment, it’s going to be safer than if you run it yourself on something untested like AWS.” And yeah, those stories hold absolutely no water. And may I just say, I’m gratified that’s not what you’re doing?
Clinton: Absolutely not. No, I would say we have no interest in running anyone’s applications. We do want to scan them though, right? We do want to give the developers insight into the potential misconfigurations, the risks, the vulnerabilities that you’re introducing. What sets Snyk apart, I think, from others in that application security testing space is we focus on the experience of the developer, rather than just being another tool that runs and generates a bunch of PDFs and then throws them back to say, “Here’s everything you did wrong.”
We want to say to developers, “Here’s what you could do better. Here’s how that default in a CloudFormation template that leads to your bucket being, you know, wide open on the internet could be changed. Here’s the remediation that you could introduce.” And if we do that at the right moment, which is inside that developer workflow, inside the IDE, on their local machine, before that gets deployed, there’s a much greater chance that remediation is going to be implemented and it’s going to happen much more cheaply, right? Because you no longer have to do the round trip all the way out to the cloud and back.
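As a concrete sketch of the kind of template default Clinton is describing (the resource names here are hypothetical, and this is an illustration rather than Snyk’s actual remediation output), a CloudFormation bucket with no access controls versus one that blocks public access explicitly might look like:

```yaml
# Hypothetical CloudFormation fragment. The first bucket declares no
# access controls at all and inherits whatever the account defaults are;
# the second blocks public access explicitly, so the intent is visible
# in code review and survives changes to account-level settings.
Resources:
  DataBucket:
    Type: AWS::S3::Bucket

  HardenedDataBucket:
    Type: AWS::S3::Bucket
    Properties:
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
```

Catching the first form in the IDE, before anything is deployed, is exactly the cheap round trip he describes.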
So, the cloud part of it fundamentally means completing that story, recognizing that once things do get deployed, there’s a lot of valuable context that’s happening out there that a developer can really take advantage of. They can say, “Wait a minute. Not only do I have a Log4Shell vulnerability, right, in one of my open-source dependencies, but that artifact, that application is actually getting deployed to a VPC that has ingress from the internet,” right? So, not only do I have remote code execution in my application, but it’s being put in an enclave that actually allows it to be exploited. You can only know that if you’re actually looking at what’s really happening in the cloud, right?
So, not only does Snyk Cloud allow us to provide an additional layer of security by looking at what’s misconfigured in that cloud environment and help your developers make remediations by saying, “Here’s the actual IaC file that caused that infrastructure to come into existence,” but we can also say, here’s how that affects the risk of other kinds of vulnerabilities at different layers in the stack, right? Because it’s all software; it’s all connected. Very rarely does a vulnerability translate one-to-one into risk, right? They’re compound because modern software is compound. And I think what developers lack is tooling that fits into their workflow, understands what it means to be a software engineer, and actually helps them make better choices rather than punishing them after the fact for guessing and making bad ones.
Corey: That sounds awesome at a very high level. It is very aligned with how executives and decision-makers think about a lot of these things. Let’s get down to brass tacks for a second. Assume that I am the type of developer that I am in real life, by which I mean shitty. What am I going to wind up attempting to do that Snyk will flag and, in other words, protect me from myself and warn me that I’m about to commit a dumb?
Clinton: First of all, I would say, look, there’s no such thing as a non-shitty developer, right? I built software for 20 years and I decided that’s really hard. What’s a lot easier is talking about building software for a living. So, that’s what I do now. But fundamentally, the reason I’m at Snyk is I want to help people who are in the kinds of jobs that I had for a very long time, which is to say, you have a tremendous amount of anxiety because you recognize that the success of the organization rests on your shoulders, and you’re making hundreds, if not thousands, of decisions every day without the right context to understand fully how the results of those decisions are going to affect the organization that you work for.
So, I think every developer in the world has to deal with this constant cognitive dissonance of saying, “I don’t know that this is right, but I have to do it anyway because I need to clear that ticket because that release needs to get into production.” And it becomes really easy to short-sightedly do things like pull in an open-source dependency without checking whether it has any CVEs associated with it because that’s the version that’s easiest to implement with your code that already exists. So, that’s one piece. Snyk Open Source is designed to traverse that entire tree of open-source dependencies all the way down, all the hundreds and thousands of packages that you’re pulling in, to say not only, here’s a vulnerability that you should really know is going to end up in your application when it’s built, but also, here’s what you can do about it, right? Here’s the upgrade you can make, here’s the minimum viable change that actually gets you out of this problem, and to do so in the right context, which is, you know, as you’re making that decision for the first time, right, inside your developer environment.
That also applies to things like container vulnerabilities, right? I have even less visibility into what’s happening inside a container than I do inside my application. Because I know, say, I’m using an Ubuntu or a Red Hat base image, but I have no idea what all the Linux packages on it are, let alone what vulnerabilities are associated with them, right? So, being able to detect that I’ve got a version of OpenSSL 3.0 with a potentially serious vulnerability associated with it before I’ve actually deployed that container out into the cloud very much helps me as a developer.
Because I’m limiting the rework or the refactoring I would have to do by otherwise assuming I’m making a safe choice, or guessing at it, and then only finding out after I’ve written a bunch more code that relies on that decision that I have to go back and change it, and then rewrite all of the things that I wrote on top of it, right? So, it’s about identifying the layer in the stack where that risk could be introduced, and then also seeing how it’s affected by all of those other layers, because modern software is inherently complex. And that complexity is what drives both the risk associated with it and also things like efficiency, which I know your audience is, for good reason, very concerned about.
Corey: I’m going to challenge you on one aspect of this because, on the tin, the way you describe it, it sounds like, “Oh, I already have something that does that. It’s the GitHub Dependabot story where it winds up sending me a litany of complaints every week.” And we are talking, if I did nothing other than read this email that day, that would be a tremendously efficient processing of that entire thing because so much of it is stuff that is ancient and archived, and specific aspects of the vulnerabilities are just not relevant. And you talk about the OpenSSL 3.0 issues that just recently came out.
I have no doubt that somewhere in the most recent email I’ve gotten from that thing, it’s buried two-thirds of the way down, like all the complaints like the dishwasher isn’t loaded, you forgot to take the trash out, that baby needs a change, the kitchen is on fire, and the vacuuming, and the r—wait, wait. What was that thing about the kitchen? Seems like one of those things is not like the others. And it just gets lost in the noise. Now, I will admit to putting my thumb a little bit on the scale here because I’ve used Snyk before myself and I know that you don’t do that. How do you avoid that trap?
Clinton: Great question. And I think really, the key to the story here is, developers need to be able to prioritize, and in order to prioritize effectively, you need to understand the context of what happens to that application after it gets deployed. And so, this is a key part of why getting the data out of the cloud and bringing it back into the code is so important. So, for example, take an OpenSSL vulnerability. Do you have it on a container image you’re using, right? So, that’s question number one.
Question two is, is there actually a way that code can be accessed from the outside? Is it included or is it called? Is the method activated by some other package that you have running on that container? Is that container image actually used in a production deployment? Or does it just go sit in a registry and no one ever touches it?
What are the conditions required to make that vulnerability exploitable? You look at something like Spring Shell, for example, yes, you need a certain version of spring-beans in a JAR file somewhere, but you also need to be running a certain version of Tomcat, and you need to be packaging those JARs inside a WAR in a certain way.
Corey: Exactly. I have a whole bunch of Lambda functions that provide the pipeline system that I use to build my newsletter every week, and I get screaming concerns about issues in, for example, a version of the markdown parser that I’ve subverted. Yeah, sure. I get that, on some level, if I were just giving it random untrusted input from the internet and random ad hoc users, but I’m not. It’s just me when I write things for that particular Lambda function.
And I’m not going to be actively attempting to subvert the thing that I built myself and no one else should have access to. And looking through the details of some of these things, it doesn’t even apply to the way that I’m calling the libraries, so it’s just noise, for lack of a better term. It is not something that basically ever needs to be adjusted or fixed.
Clinton: Exactly. And I think cutting through that noise is so key to creating developer trust in any kind of tool that scans an asset and provides you with what, in theory, is a list of actionable steps, right? I need to be able to understand what the thing is, first of all. There’s a lot of tools that do that, right, and we tend to mock them by saying things like, “Oh, it’s just another PDF generator. It’s just another thousand pages that you’re never going to read.”
So, getting the information in the right place is a big part of it, but filtering out all of the noise by saying, we looked at not just one layer of the stack, but multiple layers, right? We know that you’re using this open-source dependency and we also know that the method that contains the vulnerability is actively called by your application in your first-party code because we ran our static analysis tool against that. Furthermore, we know because we looked at your cloud context, we connected to your AWS API—we’re big partners with AWS and very proud of that relationship—but we can tell that there’s inbound internet access available to that service, right? So, you start to build a compound case that maybe this is something that should be prioritized, right? Because there’s a way into the asset from the outside world, there’s a way into the vulnerable functions through the labyrinthine, you know, spaghetti of my code to get there, and the conditions required to exploit it actually exist in the wild.
But you can’t just run a single tool; you can’t just run Dependabot to get that prioritization. You actually have to look at the entire holistic application context, which includes not just your dependencies, but what’s happening in the container, what’s happening in your first-party, proprietary code, what’s happening in your IaC, and, I think most importantly for modern applications, what’s actually happening in the cloud once it gets deployed, right? And that’s sort of the holy grail of completing that loop: bringing the right context back from the cloud into code to understand what change needs to be made, and where, and most importantly, why. Because it takes a priority that actually translates into organizational risk to get a developer to pay attention, right? I mean, that is the key to, I think, any security concern: how do you get engineering mindshare and trust that this is actually what you should be paying attention to, and not a bunch of rework that doesn’t actually make your software more secure?
Corey: One of the challenges that I see across the board is that—well, let’s back up a bit here. I have in previous episodes talked in some depth about my position that when it comes to the security of various cloud providers, Google is number one and AWS is number two. Azure is a distant third because it’s busy figuring out which crayons taste the best; I don’t know. But the reason is not because of any inherent attribute of their security models, but rather that Google massively simplifies an awful lot of what happens. It automatically assumes that resources in the same project should be able to talk to one another, so I don’t have to painstakingly configure that.
In AWS-land, all of this must be done explicitly; no one has time for that, so we over-scope permissions massively and never go back and rein them in. It’s a configuration vulnerability more than an underlying inherent weakness of the platform, because complexity is the enemy of security in many respects. If you can’t fit it all in your head to reason about it, how can you understand the security ramifications of it? AWS offers a tremendous number of security services. Many of them, when you take their pricing in its totality, cost more than any breach they could be expected to prevent. Adding more stuff that adds more complexity in the form of Snyk sounds like the exact opposite of what I would want to do. Change my mind.
Clinton: I would love to. I would say, fundamentally, I think you and I—and by ‘I,’ I mean Snyk and, you know, Corey Quinn Enterprises Limited—fundamentally have the same enemy here, right, which is the cyclomatic complexity of software, right, which is how many different pathways the bits have to travel down to reach the same endpoint, the same goal. The more pathways there are, the more risk is introduced into your software, and the more inefficiency is introduced, right? And then I know you’d love to talk about how many different ways there are to run a container on AWS, right? It’s either 30 or 400 or eleventy-million.
I think you’re exactly right that that complexity, it is great for, first of all, selling cloud resources, but also, I think, for innovating, right, for building new kinds of technology on top of that platform. The cost that comes along with that is a lack of visibility. And I think we are just now, as we approach the end of 2022 here, coming to recognize that fundamentally, the complexity of modern software is beyond the ability of a single engineer to understand. And that is really important from a security perspective, from a cost control perspective, especially because software now creates its own infrastructure, right? You can’t just now secure the artifact and secure the perimeter that it gets deployed into and say, “I’ve done my job. Nobody can breach the perimeter and there’s no vulnerabilities in the thing because we scanned it and that thing is immutable forever because it’s pets, not cattle.”
Where I think the complexity story comes in is recognizing, like, “Hey, I’m deploying this based on a quickstart or a CloudFormation template that is making certain assumptions to make my job easier,” right, in a very similar way that choosing an open-source dependency makes my job easier as a developer because I don’t have to write all of that code myself. But what it does mean is that I lack visibility into, well, hold on. How many different pathways are there for getting things done inside this dependency? How many other dependencies are brought on board? In the same way, when I create an EKS cluster, for example, from a CloudFormation template, what is it creating in the background? How many VPCs are involved? What are the subnets, right? How are they connected to each other? Where are the potential ingress points?
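As an illustration of the kind of ingress point that can hide in a quickstart (the resource names are hypothetical, and this is a sketch rather than any specific AWS quickstart), a single security group rule in a CloudFormation template can quietly open a path in from the internet:

```yaml
# Hypothetical fragment of a quickstart-style template: SSH is open to
# the entire internet, an easy default to copy once and never revisit.
Resources:
  NodeSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Worker node access
      VpcId: !Ref ClusterVpc        # assumes a VPC declared elsewhere
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: 0.0.0.0/0         # ingress from anywhere on the internet
```

Scoping that CidrIp down to a known range, or removing the rule entirely, is the kind of one-line change that is trivial before deployment and painful after.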
So, I think fundamentally, getting visibility into that complexity is step number one, but understanding those pathways and how they could potentially translate into risk is critically important. But that prioritization has to involve looking at the software holistically and not just at individual layers, right? I think we lose when we say, “We ran a static analysis tool and an open-source dependency scanner and a container scanner and a cloud config checker, and they all came up green, therefore the software doesn’t have any risks,” right? That ignores the fundamental complexity in that all of these layers are connected together. And from an adversary’s perspective, if my job is to go in and exploit software that’s hosted in the cloud, I absolutely do not see the application model that way.
I see it as inherently complex, and that’s a good thing for me because it means I can rely on the fact that those engineers had tremendous anxiety, were making a lot of guesses, and were crossing their fingers and hoping something would work and not be exploitable by me, right? So, the only way I think we get around that is to recognize that our engineers are critical stakeholders in that security process, and you fundamentally lack that visibility if you don’t do your scanning until after the fact, if you take that traditional audit-based approach that assumes a very waterfall, legacy approach to building software. We have to recognize that, hey, we’re all on this infinite-loop race track now. We’re deploying every three-and-a-half seconds, everything’s automated, it’s all built at scale, but the ability to do that inherently implies all of this additional complexity that ultimately will, you know, end up haunting me, right, if I don’t do anything about it to make my engineers stakeholders in, you know, what actually gets deployed and what risks it brings on board.
Corey: This episode is sponsored in part by our friends at Uptycs. Attackers don’t think in silos, so why would you have siloed solutions protecting cloud, containers, and laptops distinctly? Meet Uptycs - the first unified solution that prioritizes risk across your modern attack surface—all from a single platform, UI, and data model. Stop by booth 3352 at AWS re:Invent in Las Vegas to see for yourself and visit uptycs.com. That’s U-P-T-Y-C-S.com. My thanks to them for sponsoring my ridiculous nonsense.
Corey: When I wind up hearing you talk about this—I’m going to divert us a little bit because you’re dancing around something that it took me a long time to learn. When I first started fixing AWS bills for a living, I thought that it would be mostly math, by which I mean arithmetic. That’s the great secret of cloud economics. It’s addition, subtraction, and occasionally multiplication and division. No, turns out it’s much more psychology than it is math. You’re talking in many aspects about, I guess, what I’d call the psychology of a modern cloud engineer and how they think about these things. It’s not a technology problem. It’s a people problem, isn’t it?
Clinton: Oh, absolutely. I think it’s the people that create the technology. And I think the longer you persist in what we would call the legacy viewpoint, right, not recognizing what the cloud is, which is fundamentally just software all the way down, right? It is abstraction layers that allow you to ignore the fact that you’re running stuff on somebody else’s computer. Once you recognize that, you realize, oh, if it’s all software, then the problems it introduces are software problems that need software solutions, which means that it must involve activity by the people who write software, right? So, now that you’re in that developer world, it unlocks, I think, a lot of potential to say, well, why don’t developers tend to trust the security tools they’ve been provided with, right?
I think a lot of it comes down to the question you asked earlier in terms of the noise, the lack of understanding of how those pieces are connected together, or the lack of context, or, frankly, not even caring about looking beyond the single-point solution to the problem that tool was designed to solve. But more importantly than that, not recognizing what it’s like to build modern software, right, all of the decisions that have to be made on a daily basis with very limited information, right? I might not even understand where that container image I’m building is going in the universe, let alone what’s being built on top of it and how much critical customer data is touched by the database that that container now has the credentials to access, right? So, I think in order to change anything, we have to back way up and say, problems in the cloud are software problems, and we have to treat them that way.
Because if we don’t, if we continue to represent the cloud as some evolution of the old environment, where you just have this perimeter of pre-existing infrastructure that you’re deploying things onto, and there’s a guy with a neckbeard in the basement who is unplugging cables from a switch and plugging them back in, and that’s how networking problems are solved, then I think you miss the idea that all of these abstraction layers introduce the very complexity that needs to be solved back in the build space. But that requires visibility into what actually happens when it gets deployed. The way I tend to think of it is, there’s this firewall in place. Everybody wants to say, you know, we’re doing DevOps or we’re doing DevSecOps, right? And that’s a lie a hundred percent of the time, right? No one is actually, I think, adhering completely to those principles.
Corey: That’s why one of the core tenets of ClickOps is lying about doing anything in the console.
Clinton: Absolutely, right? And that’s why shadow IT becomes more and more prevalent the deeper you get into modern development, not less and less prevalent because it’s fundamentally hard to recognize the entirety of the potential implications, right, of a decision that you’re making. So, it’s a lot easier to just go in the console and say, “Okay, I’m going to deploy one EC2 to do this. I’m going to get it right at some point.” And that’s why every application that’s ever been produced by human hands has a comment in it that says something like, “I don’t know why this works but it does. Please don’t change it.”
And then three years later because that developer has moved on to another job, someone else comes along and looks at that comment and says, “That should really work. I’m going to change it.” And they do and everything fails, and they have to go back and fix it the original way and then add another comment saying, “Hey, this person above me, they were right. Please don’t change this line.” I think every engineer listening right now knows exactly where that weak spot is in the applications that they’ve written and they’re terrified of that.
And I think any tool that’s designed to help developers fundamentally has to get into the mindset, get into the psychology of what that is, like, of not fundamentally being able to understand what those applications are doing all of the time, but having to write code against them anyway, right? And that’s what leads to, I think, the fear that you’re going to get woken up because your pager is going to go off at 3 a.m. because the building is literally on fire and it’s because of code that you wrote. We have to solve that problem, and it has to be those people whose psychology we get into to understand, how are you working and how can we make your life better, right? And I really do think it comes down to that: the noise reduction, the understanding of complexity, and really just being humble and saying, like, “We get that this job is really hard and that the only way it gets better is to begin admitting that to each other.”
Corey: I really wish that there were a better way to articulate a lot of these things. This is the reason that I started doing a security newsletter; it’s because cost and security are deeply aligned in a few ways. One of them is that you care about them a lot right after you failed to care about them sufficiently, but the other is that you’ve got to build guardrails in such a way that doing the right thing is easier than doing the wrong thing, or you’re never going to gain any traction.
Clinton: I think that’s absolutely right. And you use the key term there, which is guardrails. And I think that’s where in their heart of hearts, that’s where every security professional wants to be, right? They want to be defining policy, they want to be understanding the risk posture of the organization and nudging it in a better direction, right? They want to be talking up to the board, to the executive team, and creating confidence in that risk posture, rather than talking down or off to the side—depending on how that org chart looks—to the engineers and saying, “Fix this, fix that, and then fix this other thing.” A, B, and C, right?
I think the problem is that everyone in a security role at an organization of any size at this point is doing 90% of the latter and only about 10% of the former, right? They’re acting as gatekeepers, not as guardrails. They’re not defining policy; they’re spending all of their time creating Jira tickets and tracking down who owns the piece of code that got deployed to this pod on EKS that’s throwing all these errors on my console, and how can I get that person to make a decision to actually take an action that stops these notifications from happening, right? So, all they’re doing is throwing footballs down the field without knowing if there’s a receiver there, right, and I think that takes away from the job that our security analysts really should be doing, which is creating those guardrails, which is having confidence that the policy they set is readily understood by the developers making decisions, and that’s happening in an automated way without them having to create friction by bothering people all the time. I don’t think security people want to be [laugh] hated by the development teams that they work with, but they are. And the reason they are is I think, fundamentally, we lack the tooling, we lack—
Corey: They are the barrier method.
Clinton: Exactly. And we lack the processes to get the right intelligence in a way that’s consumable by the engineers when they’re doing their jobs, and not after the fact, which is typically when the security people have done their jobs.
Corey: It’s sad but true. I wish that there were a better way to address these things, and yet here we are.
Clinton: If only there were a better way to address these things.
Clinton: Look, I wouldn’t be here at Snyk if I didn’t think there were a better way, and I wouldn’t be coming on shows like yours to talk to the engineering communities, right, people who have walked the walk, right, who have built those Terraform files that contain these misconfigurations, not because they’re bad people or because they’re lazy, or because they don’t do their jobs well, but because they lacked the visibility, they didn’t have the understanding that that default is actually insecure. Because how would I know that otherwise, right? I’m building software; I don’t see myself as an expert on infrastructure, right, or on Linux packages or on cyclomatic complexity or on any of these other things. I’m just trying to stay in my lane and do my job. It’s not my fault that the software has become too complex for me to understand, right?
But my management doesn’t understand that, and so I constantly have white knuckles worrying that, you know, the next breach is going to be my fault. So, I think the way forward really has to be, how do we make our developers stakeholders in the risk that the software they write introduces to the organization? And that means everything we’ve been talking about: it means prioritization; it means understanding how the different layers of the stack affect each other, especially the cloud pieces; it means an extensible platform that lets me write code against it to inject my own reasoning, right? The piece that we haven’t talked about here is that risk calculation doesn’t just involve technical aspects; there’s also business intelligence that’s involved, right? What are my critical applications, right, what actually causes me to lose significant amounts of money if those services go offline?
We at Snyk can’t tell that. We can’t run a scanner to say these are your crown jewel services that can’t ever go down, but you can know that as an organization. So, where we’re going with the platform is opening up the extensible process, creating APIs for you to be able to affect that risk triage, right, so that as the creators have guardrails as the security team, you are saying, “Here’s how we want our developers to prioritize. Here are all of the factors that go into that decision-making.” And then you can be confident that in their environment, back over in developer-land, when I’m looking at IntelliJ, or, you know, or on my local command line, I am seeing the guardrails that my security team has set for me and I am confident that I’m fixing the right thing, and frankly, I’m grateful because I’m fixing it at the right time and I’m doing it in such a way and with a toolset that actually is helping me fix it rather than just telling me I’ve done something wrong, right, because everything we do at Snyk focuses on identifying the solution, not necessarily identifying the problem.
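The kind of extensible risk triage Clinton describes, where a security team's business context reshapes what developers see first, can be sketched roughly as follows. This is a purely hypothetical illustration; the field names, weights, and the "crown jewels" multiplier are invented for the example and do not reflect Snyk's actual API:

```python
# Hypothetical sketch of extensible risk triage: combine a scanner's
# technical severity with business context the security team supplies.
# All names and weights here are illustrative, not any real product's API.

SEVERITY_WEIGHT = {"low": 1, "medium": 3, "high": 7, "critical": 10}

def risk_score(issue, crown_jewels):
    """Rank an issue higher when it affects a business-critical service."""
    base = SEVERITY_WEIGHT[issue["severity"]]
    # Business intelligence a scanner cannot infer on its own:
    multiplier = 5 if issue["service"] in crown_jewels else 1
    return base * multiplier

issues = [
    {"id": "VULN-1", "severity": "critical", "service": "internal-wiki"},
    {"id": "VULN-2", "severity": "medium", "service": "payments-api"},
]
# The security team, not the scanner, declares which services are critical:
crown_jewels = {"payments-api"}

ranked = sorted(issues, key=lambda i: risk_score(i, crown_jewels), reverse=True)
print([i["id"] for i in ranked])  # the payments issue outranks the "critical" wiki one
```

The point of the sketch is the inversion Clinton describes: a medium-severity finding on a revenue-critical service can legitimately outrank a critical finding on an internal tool, and only the organization can supply that judgment.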
It’s great to know that I’ve got an unencrypted S3 bucket, but it’s a whole lot better if you give me the line of code and tell me exactly where I have to copy and paste it so I can go on to the next thing, rather than spending an hour trying to figure out, you know, where I put that line and what I actually have to change it to, right? I often say that the most valuable currency for a developer, for a software engineer, it’s not money, it’s not time, it’s not compute power or anything like that; it’s the right context, right? I actually have to understand the implications of the decision that I’m making, and I need that to be in my own environment, not after the fact, because that’s what creates friction within an organization: when I could have known earlier and I could have known better, but instead, I had to guess, I had to write a bunch of code that relies on the thing that was wrong, and now I have to redo it all for no good reason other than the tooling just hadn’t adapted to the way modern software is built.
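The "give me the fix, not just the finding" idea above can be sketched as a toy checker. Assuming we already have Terraform resources parsed into plain dictionaries (the resource shapes and the suggested HCL snippet below are illustrative assumptions, not any real scanner's output format), the checker pairs each unencrypted bucket with the configuration block a developer would paste in:

```python
# Hypothetical sketch: flag unencrypted S3 buckets in parsed Terraform
# resources and emit the remediation snippet alongside each finding.
# Resource shapes and the fix text are illustrative assumptions.

FIX_SNIPPET = (
    "server_side_encryption_configuration {\n"
    "  rule {\n"
    '    apply_server_side_encryption_by_default { sse_algorithm = "aws:kms" }\n'
    "  }\n"
    "}"
)

def suggest_fixes(resources):
    """Return (bucket_name, fix) pairs for buckets lacking encryption config."""
    findings = []
    for res in resources:
        if res["type"] != "aws_s3_bucket":
            continue
        if "server_side_encryption_configuration" not in res["config"]:
            findings.append((res["name"], FIX_SNIPPET))
    return findings

resources = [
    {"type": "aws_s3_bucket", "name": "logs", "config": {}},
    {"type": "aws_s3_bucket", "name": "assets",
     "config": {"server_side_encryption_configuration": {}}},
]

for name, fix in suggest_fixes(resources):
    print(f"{name}: add the following block ->\n{fix}")
```

The design choice mirrors the conversation: the expensive part for the developer is not learning that the bucket is unencrypted, it is working out what to change, so the tool's output carries the solution with the problem.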
Corey: So, one last question before we wind up calling it a day here. We are now heavily into what I will term pre:Invent where we’re starting to see a whole bunch of announcements come out of the AWS universe in preparation for what I’m calling Crappy Cloud Hanukkah this year because I’m spending eight nights in Las Vegas. What are you doing these days with AWS specifically? I know I keep seeing your name in conjunction with their announcements, so there’s something going on over there.
Clinton: Absolutely. No, we’re extremely excited about the partnership between Snyk and AWS. Our vulnerability intelligence is utilized as one of the data sources for Amazon Inspector, particularly around open-source packages. We’re doing a lot of work around things like the code suite, building Snyk into CodePipeline, for example, to give developers using that code suite earlier visibility into those vulnerabilities. And really, I think the story kind of expands from there, right?
So, we’re moving forward with Amazon, recognizing that it is, you know, sort of the de facto. When we say cloud, very often we mean AWS. So, we’re going to have a tremendous presence at re:Invent this year, I’m going to be there as well. I think we’re actually going to have a bunch of handouts with your face on them is my understanding. So, please stop by the booth; would love to talk to folks, especially because we’ve now released the Snyk Cloud product and really completed that story. So, anything we can do to talk about how that additional context of the cloud helps engineers because it’s all software all the way down, those are absolutely conversations we want to be having.
Corey: Excellent. And we will, of course, put links to all of these things in the [show notes 00:35:00] so people can simply click, and there they are. Thank you so much for taking all this time to speak with me. I appreciate it.
Clinton: All right. Thank you so much, Corey. Hope to do it again next year.
Corey: Clinton Herget, Field CTO at Snyk. I’m Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you’ve enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you’ve hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry comment telling me that I’m being completely unfair to Azure, along with your favorite tasting color of Crayon.
Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com
to get started.
Announcer: This has been a HumblePod production. Stay humble.