Exposing the Latest Cloud Threats with Anna Belak

Episode Summary

Anna Belak, Director of the Office of Cybersecurity Strategy at Sysdig, joins Corey on Screaming in the Cloud to discuss the findings in this year’s newly released Sysdig Global Cloud Threat Report. Anna explains the challenges teams face in making sure their cloud is truly secure, from the quantity of data versus its quality to the weaponization of automation. Corey and Anna also discuss how much faster cloud attacks now unfold, and Anna shares practical insights into what can be done to make your cloud environment more secure.

Episode Show Notes & Transcript

About Anna

Anna has nearly ten years of experience researching and advising organizations on cloud adoption with a focus on security best practices. As a Gartner Analyst, Anna spent six years helping more than 500 enterprises with vulnerability management, security monitoring, and DevSecOps initiatives. Anna's research and talks have been used to transform organizations' IT strategies and her research agenda helped to shape markets. Anna is the Director of The Office of Cybersecurity Strategy at Sysdig, using her deep understanding of the security industry to help IT professionals succeed in their cloud-native journey.

Anna holds a PhD in Materials Engineering from the University of Michigan, where she developed computational methods to study solar cells and rechargeable batteries.



Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.



Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. This promoted guest episode is brought to us by our friends over at Sysdig. And once again, I am pleased to welcome Anna Belak, whose title has changed since last we spoke to Director of the Office of Cybersecurity Strategy at Sysdig. Anna, welcome back, and congratulations on all the adjectives.



Anna: [laugh]. Thank you so much. It’s always a pleasure to hang out with you.



Corey: So, we are here today to talk about a thing that has been written. And we’re in that weird time thing where while we’re discussing it at the moment, it’s not yet public but will be when this releases. The Sysdig Global Cloud Threat Report, which I am a fan of. I like quite a bit the things it talks about and the ways it gets me thinking. There are things that I wind up agreeing with, there are things I wind up disagreeing with, and honestly, that makes it an awful lot of fun.



But let’s start with the whole, I guess, executive summary version of this. What is a Global Cloud Threat Report? Because to me, it seems like there’s an argument to be made for just putting all three of the big hyperscale clouds on it and calling it a day because they’re all threats to somebody.



Anna: To be fair, we didn’t think of the cloud providers themselves as the threats, but that’s a hot take.



Corey: Well, an even hotter one is what I’ve seen out of Azure lately with their complete lack of security issues, and the attackers somehow got a Microsoft signing key and the rest. I mean, at this point, I feel like Charlie Bell was brought in from Amazon to head cybersecurity and spent the last two years trapped in the executive washroom or something. But I can’t prove it, of course. No, you target the idea of threats in a different direction, towards what people more commonly think of as threats.



Anna: Yeah, the bad guys [laugh]. I mean, I would say that this is the reason you need a third-party security solution, buy my thing, blah, blah, blah, but [laugh], you know? Yeah, so we are—we have a threat research team like I think most self-respecting security vendors these days do. Ours, of course, is the best of them all, and they do all kinds of proactive and reactive research of what the bad guys are up to so that we can help our customers detect the bad guys, should they become their victims.



Corey: So, there was a previous version of this report, and then you’ve, in long-standing tradition, decided to go ahead and update it. Unlike many of the terrible professors I’ve had in years past, you don’t just slap on a new version number, change the answers to some things, and force all the students to buy a new copy of the book every year because that’s your retirement plan; you actually have updated data. What are the big changes you’ve seen since the previous incarnation of this?



Anna: That is true. In fact, we start from scratch, more or less, every year, so all the data in this report is brand new. Obviously, it builds on our prior research. I’ll say one clearly connected piece of data is, last year, we did a supply chain story that talked about the bad stuff you can find in Docker Hub. This time we upleveled that and we actually looked deeper into the nature of said bad stuff and how one might identify that an image is bad.



And we found that 10% of the malicious things hiding inside images actually can’t be detected by most of your static tools. So, if you’re thinking, like, static analysis of any kind, SCA, vulnerability scanning, just, like, looking at the artifact itself before it’s deployed, you actually wouldn’t know it was bad. So, that’s a pretty cool change, I would say [laugh].



Corey: It is. And I’ll also say what’s going to probably sound like a throwaway joke, but I assure you it’s not: you’re right, there is a lot of bad stuff on Docker Hub, and part of the challenge is disambiguating malicious-bad and shitty-bad. But there are serious security concerns with code that is not intended to be awful but is anyway, and it leads to something that this report gets into a fair bit, which is the idea of, effectively, lateraling from one vulnerability to another vulnerability to another vulnerability to the actual story. I mean, Capital One was a great example of this. They didn’t do anything that was outright negligent like leaving an S3 bucket open; it was a determined, sophisticated attacker who went from one mistake to one mistake to one mistake to, boom, keys to the kingdom. And that at least is a little bit more understandable, even if it’s not great when it’s your bank.



Anna: Yeah. I will point out, in the these-things-are-really-bad department, that it was 10% of all the things that were actually really bad. So, there were many things that were just shitty, but we had pared it down to the things that were definitely malicious, and then 10% of those things you could only identify if you had some sort of runtime analysis. Now, runtime analysis can be a lot of different things. It’s just that if you’re relying on preventive controls, you might have a bad time, like, one time out of ten, at least.



But to your point about, kind of, chaining things together, I think that’s actually the key, right? Like, that’s the most interesting moment is, like, which things can they grab onto, and then where can they pivot? Because it’s not like you barge in, open the door, like, you’ve won. Like, there’s multiple steps to this process that are sometimes actually quite nuanced. And I’ll call out that, like, one of the other findings we got this year that was pretty cool is that the time it takes to get through those steps is very short. There’s a data point from Mandiant that says that the average dwell time for an attacker is 16 days. So like, two weeks, maybe. And in our data, the average dwell time for the attacks we saw was more like ten minutes.



Corey: And that is going to be notable for folks. Like, there are times where I have—in years past; not recently, mind you—I have—oh, I’m trying to set something up, but I’m just going to open this port to the internet so I can access it from where I am right now and I’ll go back and shut it in a couple hours. There was a time that that was generally okay. These days, everything happens so rapidly. I mean, I’ve sat there with a stopwatch after intentionally committing AWS credentials to Gif-ub—yes, that’s how it’s pronounced—and 22 seconds until the first probing attempt started hitting, which was basically impressively fast. Like, the last thing in the entire sequence was, and then I got an alert from Amazon that something might have been up, at which point it is too late. But it’s a hard problem and I get it. People don’t really appreciate just how quickly some of these things can evolve.



Anna: Yeah. And I think the main reason, at least from what we see, is that the bad guys are into the cloud too, right? Like, we good guys love the automation, we love the programmability, we love the immutable infrastructure, like, all this stuff is awesome and it’s enabling us to deliver cool products faster to our customers and make more money, but the bad guys are using all the same benefits to perpetrate their evil crimes. So, they’re building automation, they’re stringing cool things together. Like, they have scripts that they run that basically just scan whatever’s out there to see what new things have shown up, and they also have scripts for reconnaissance that will just send a message back to them through Telegram or WhatsApp, letting them know like, “Hey, I’ve been running, you know, for however long and I see a cool thing you may be able to use.” Then the human being shows up and they’re like, “All right. Let’s see what I can do with this credential,” or with this misconfiguration or what have you. So, a lot of their initial, kind of, discovery into what they can get at is heavily automated, which is why it’s so fast.



Corey: I feel like, on some level, this is an unpleasant sharp shock for an awful lot of executives because, “Wait, what do you mean attackers can move that quickly? Our crap-ass engineering teams can’t get anything released in less than three sprints. What gives?” And I don’t think people have a real conception of just how fast bad actors are capable of moving.



Anna: I think we said—actually [unintelligible 00:07:57] last year, but this is a business for them, right? They’re trying to make money. And it’s a little bleak to think about it, but these guys have a day job and this is it. Like, our guys have a day job, that’s shipping code, and then they’re supposed to also do security. The bad guys just have a day job of breaking your code and stealing your stuff.



Corey: And on some level, it feels like you have a choice to make about which side you go with. And it’s, like, which one of those do I spend more time in meetings with? And maybe that’s not the most legitimate way to pick a job; ethics do come into play. But yeah, it takes a certain similar mindset, on some level, to be able to understand just how the security landscape looks from an attacker’s point of view.



Anna: I’ll bet the bad guys have meetings too, actually.



Corey: You know, you’re probably right. Can you imagine the actual corporate life of a criminal syndicate? There’s a sitcom in there that just needs to happen. But again, I’m sorry, I shouldn’t talk about that. We’re on a writers’ strike this week, so there’s that.



One thing that came out of the report that makes perfect sense—and I’ve heard about it, but I haven’t seen it myself and wanted to dive into it here—is that automation has been weaponized in the cloud. Now, it’s easy to misinterpret that the first time you read it—like I did—as, “Oh, you mean the bad guys have discovered the magic of shell scripts? No kidding.” It’s more than that. You have reports of people using things like CloudFormation to stand up resources that are then used to attack the rest of the infrastructure.



And it’s, yeah, it makes perfect sense. Like, back in the data center days, it took a very determined attacker to go through the process of getting an evil server stuffed into a rack somewhere. But in the cloud, it’s an API call away. I’m surprised we haven’t seen this before.



Anna: Yeah. We probably have; I don’t know if we’ve documented it before. And sometimes it’s hard to know that that’s what’s happening, right? I will say that both of those things are true, right? Like, the shell scripts are definitely there, and to your point about how long it takes, you know, to stopwatch these things: on the short end of our dwell time data set, it’s zero seconds. It’s zero seconds from, like, A to B because it’s just a script.



And that’s not surprising. But the comment about CloudFormation specifically, right, is that we’re talking about people, kind of, figuring out how to create policy in the cloud to prevent bad stuff from happening because they’re reading all the best-practices ebooks and whatever, watching the YouTube videos. And so, you understand that you can, say, write policy to prevent users from doing certain things. But sometimes we forget that, like, if you don’t want a user to be able to attach a user policy to something, but you didn’t write the rule that says you also can’t do that through CloudFormation, then suddenly you can’t do it from the command line, but you can do it in CloudFormation. So, there are, kind of, things like this, where for every tool that enables this beautiful, programmable, immutable-infrastructure paradigm, you now have to make sure that you have security policies that prevent those same tools from being used against you to deploy evil things, because you didn’t explicitly say that you can’t deploy evil things with this tool and that tool and that other tool in this other way. Because there are so many ways to do things, right?
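To make the gap Anna is describing concrete, here is a minimal, hypothetical sketch; it is not taken from the report, and the account ID, role name, and policy wording are invented. One commonly documented variant of this class of gap is a user who is denied direct IAM privilege escalation but can still pass a privileged service role to CloudFormation, at which point the stack performs the IAM actions as that role and the user-level deny never applies.

```python
import json

# Guardrail a well-meaning admin might write: block this user from
# escalating privileges through direct IAM calls (console, CLI, SDK).
deny_direct_iam = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "NoDirectPrivilegeEscalation",
        "Effect": "Deny",
        "Action": ["iam:AttachUserPolicy", "iam:PutUserPolicy"],
        "Resource": "*",
    }],
}

# What the same user may still be allowed to do: hand a privileged service
# role (hypothetical name and account ID) to CloudFormation and let the
# stack do the IAM work. When a service role is supplied, the stack acts
# as that role, not as the user, so the Deny above never fires.
still_allowed = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "LooksHarmlessOnItsOwn",
        "Effect": "Allow",
        "Action": ["cloudformation:CreateStack", "iam:PassRole"],
        "Resource": [
            "arn:aws:cloudformation:*:123456789012:stack/*",
            "arn:aws:iam::123456789012:role/DeploymentAdminRole",
        ],
    }],
}

print(json.dumps(deny_direct_iam, indent=2))
print(json.dumps(still_allowed, indent=2))
```

The specific policies matter less than the shape of the problem: every deployment tool, whether CloudFormation, Terraform, or a CDK pipeline, is another door the same guardrail has to cover.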



Corey: That’s part of the weird thing, too: back when I was doing the sysadmin dance, it was a matter of taking a bunch of tools that did one thing well—or, you know, aspirationally well—and then chaining them together to achieve things. Increasingly, it feels like that’s what cloud providers have become, where they have all these different services with different capabilities. One of the reasons I now have a three-part article series, each one titled some variation of “17 Ways to Run Containers on AWS,” adding up to a grand total of 51 different AWS services you can use to run containers, isn’t just to make fun of the duplication of effort, because they’re not all like that. Rather, it’s that each of those containers can have bad-acting behavior inside of it. And are you monitoring what’s going on across that entire threat landscape?



People were caught flat-footed to discover that, “Wait, Lambda functions can run malware? Wow.” Yes, effectively, anything that can bang two bits together and return a result is capable of running a lot of these malware packages. It’s something that I’m not sure a number of, shall we say, non-forward-looking security teams have really wrapped their heads around yet.



Anna: Yeah, I think that’s fair. And I mean, I always want to be a little sympathetic to the folks, like, in the trenches because it’s really hard to know all the 51 ways to run containers in the cloud and then to be like, oh, 51 ways to run malicious containers in the cloud. How do I prevent all of them, when you have a day job?



Corey: One point the report makes here is about who the attacks seem to be targeting. And this is my own level of confusion that I imagine we can probably wind up eviscerating neatly. Back when I was running, like, random servers for myself for various projects I was working on—or working at small companies—there was a school of thought in some quarters that, well, security is not that important to us. We don’t have any interesting secrets. Nobody actually cares.



This was untrue because a lot of these attacks are running on autopilot; they don’t have enough insight to know that you’re boring, and you have to defend just like everyone else does. But then you see what can only be described as dumb attacks. Like, there was the attack on Twitter a few years ago where a bunch of influential accounts tweeted about some bitcoin scam. It’s like, you realize, with the access you had, you had so many other opportunities to make orders of magnitude more money if you wanted to go down that path, or to start geopolitical conflict, or all kinds of other stuff. I have to wonder how much attacks these days are targeted versus, well, we found an endpoint that doesn’t seem to be very well secured; we’re just going to exploit it.



Anna: Yeah. So, that’s correct intuition, I think. We see tons of opportunistic attacks, like, non-stop. But it’s just, like, hitting everything, honeypots, real accounts, our accounts, your accounts, like, everything. Many of them are pretty easy to prevent, honestly, because it’s like just mundane stuff, whatever, so if you have decent security hygiene, it’s not a big deal.



So, I wouldn’t say that you’re safe if you’re not special, because none of us are safe and none of us are that special. But what we’ve done here is we actually deliberately wanted to see which industries would be attacked, and in what proportion, right? So, we deployed a honeynet that was indicative of what a financial org would look like, or what a healthcare org would look like, to see who would bite, right? And what we expected to see is that—we thought finance would be higher because, obviously, that’s always top tier. But we also thought that people would go more for defense or for healthcare.



And we didn’t see that. We only saw, like, 5% I think for health—very small numbers for healthcare and defense and very high numbers for financial services and telcos, like, around 30% apiece, right? And so, it’s a little curious, right, because you—I can theorize as to why this is. Like, telcos and finance, obviously, it’s where the money is, like, great [unintelligible 00:14:35] for fraud and all this other stuff, right?



Defense, again, maybe people don’t think defense and cloud. Healthcare arguably isn’t that much in cloud, right? Like, a lot of healthcare stuff is on-premises, so if you see healthcare in cloud, maybe you, like, think it’s a honeypot, or you don’t [laugh] think it’s worth your time? You know, whatever. Attacker logic is also weird. But yeah, we were deliberately trying to see which verticals were the most attractive for these folks. So, these attacks were in effect targeted, because the victim looked like the kind of thing they should be looking for if they were into that.



Corey: And how does it look in that context? I mean, part of me secretly suspects that an awful lot of terrible startup names, where they’re so frugal they don’t buy vowels, are a defense mechanism. Because when you wind up with something that looks like a cat falling on a keyboard as a company name, no attacker is going to know what the hell your company does, so they’re not going to target you specifically. Clearly, that’s not quite how it works. But what are those signals, where someone gets into an environment and says, “Ah, this is clearly healthcare,” versus telco versus something else?



Anna: Right. I think you would be right. If you had, like… hhhijk as your company name, you probably wouldn’t see a lot of targeted attacks. But here we’re saying either the company name looks like a provider of that kind, and-slash-or it actually contains some sort of credential or data inside the honeypot that appears to be, like, a credential for a certain kind of thing. So, it’s really just creatively naming things so they look delicious.



Corey: For a long time, it felt like—at least from a cloud perspective, because this is how it manifested—the primary purpose of exploiting a company’s cloud environment was to attempt to mine cryptocurrency within it. And I’m not sure if that was ever the actual primary approach, or whether that was just the approach people noticed because, suddenly, their AWS bill looks a lot more like a telephone number than it did yesterday, so they can, as a result, see that it’s happening. Are these attacks these days, effectively, just to mine Bitcoin, if you’ll pardon the oversimplification, or are they focused more on doing damage in different ways?



Anna: The analyst answer: it depends. So, again, to your point about how no one’s safe, I think most attacks by volume are going to be opportunistic attacks, where people just want money. So, the easiest way right now to get money is to mine coins and then sell those coins, right? Obviously, if you have the infrastructure as a bad guy to get money in other ways, like, you could do extortion through ransomware, you might pursue that. But the overhead on ransomware is, like, really high, so most people would rather not if they can get money other ways.



Now, because, by volume, APTs, or Advanced Persistent Threats, are much smaller in number than all the opportunistic guys, they may seem like they’re not there or we don’t see them. They’re also usually better at attacking people than the opportunistic guys, who will just spam everybody and see what they get, right? But even folks who are not necessarily nation-states, right, like, we see a lot of attacks that probably aren’t nation-states, but they’re quite sophisticated, because we see them moving through the environment and pivoting and creating things and leveraging things that are quite interesting, right? So, one example is that they might go for a vulnerable EC2 instance—right, because maybe you have Log4j or whatever exposed—and then once they’re there, they’ll look around to see what else they can get. So, they’ll pivot to the cloud control plane, if it’s possible, or they’ll try to.



And then in a real scenario we actually saw in an attack, they found a Terraform state file. So, somebody was using Terraform for provisioning whatever. And it requires an access key and this access key was just sitting in an S3 bucket somewhere. And I guess the victim didn’t know or didn’t think it was an issue. And so, this state file was extracted by the attacker and they found some [unintelligible 00:18:04], and they logged into whatever, and they were basically able to access a bunch of information they shouldn’t have been able to see, and this turned into a data [extraction 00:18:11] scenario and some of that data was intellectual property.
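As an illustration of why a readable state file is such a prize: Terraform state is plaintext JSON, and for some resource types it includes secret material. The fragment below is fabricated, with obviously fake key values, and the scanning helper is just a sketch of how one might audit a state file for credential-shaped attributes; it is not the incident from the report.

```python
import json

# Fabricated Terraform state fragment (format version 4). Real state files
# for resources like aws_iam_access_key store the secret in plaintext,
# which is why a readable state bucket is such an attractive find.
FAKE_STATE = {
    "version": 4,
    "resources": [
        {
            "mode": "managed",
            "type": "aws_iam_access_key",
            "name": "ci_user_key",
            "instances": [
                {
                    "attributes": {
                        "user": "ci-deploy",
                        "id": "AKIAXXXXXXXXEXAMPLE",    # fake access key ID
                        "secret": "not-a-real-secret",  # fake secret
                    }
                }
            ],
        }
    ],
}

SUSPICIOUS_KEYS = {"secret", "password", "private_key", "token"}

def find_plaintext_secrets(state: dict):
    """Yield (resource, attribute) pairs that look like credential material."""
    for res in state.get("resources", []):
        for inst in res.get("instances", []):
            for attr in inst.get("attributes", {}):
                if attr.lower() in SUSPICIOUS_KEYS:
                    yield f'{res["type"]}.{res["name"]}', attr

for resource, attr in find_plaintext_secrets(FAKE_STATE):
    print(f"plaintext credential material: {resource} -> {attr}")
```

The usual mitigations are the boring ones: keep state in a locked-down, encrypted backend, avoid writing long-lived secrets into state at all where you can, and treat the state bucket as being exactly as sensitive as the credentials inside it.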



So, maybe that wasn’t useful and maybe that wasn’t their target. I don’t know. Maybe they sold it. It’s hard to say, but we increasingly see these patterns that are indicative of very sophisticated individuals who understand cloud deeply and who are trying to do intentionally malicious things other than just like, I popped [unintelligible 00:18:30]. I’m happy.



Corey: This episode is sponsored in part by our friends at Calisti.



Introducing Calisti. With Integrated Observability, Calisti provides a single pane of glass for accelerated root cause analysis and remediation. It can set, track, and ensure compliance with Service Level Objectives.



Calisti provides secure application connectivity and management from datacenter to cloud, making it the perfect solution for businesses adopting cloud native microservice-based architectures. If you’re running Apache Kafka, Calisti offers a turnkey solution with automated operations, seamless integrated security, high-availability, disaster recovery, and observability. So you can easily standardize and simplify microservice security, observability, and traffic management. Simplify your cloud-native operations with Calisti. Learn more about Calisti at calisti.app.



Corey: I keep thinking of ransomware as being a corporate IT-side problem. It’s the sort of thing you’ll have on your Windows computers in your office, et cetera, et cetera, despite the fact that intellectually I know better. There were a number of vendors talking about ransomware attacks encrypting data within S3, and initially, I thought, “Okay, this sounds exactly like a story people would talk up, something that isn’t really happening, in order to sell their services to guard against it.” And then AWS did a blog post saying, “We have seen this, and here’s what we have learned.” It’s, “Oh, okay. So, it is in fact real.”



But it’s still taking me a bit of time to adapt to the new reality. I think part of this is also because back when I was hands-on-keyboard, I was unlucky, and as a result, I was kept from taking my aura near anything expensive or long-term like a database, and instead, it’s like, get the stateless web servers. I can destroy those and we’ll laugh and laugh about it. It’ll be fine. But it’s not going to destroy the company in the same way. But yeah, there are a lot of important assets in cloud that if you don’t have those assets, you will no longer have a company.



Anna: It’s funny you say that because I became a theoretical physicist instead of experimental physicist because when I walked into the room, all the equipment would stop functioning.



Corey: Oh, I like that quite a bit. It’s one of those ideas of, yeah, your aura just winds up causing problems. Like, “You are under no circumstances to be within 200 feet of the SAN. Is that clear?” Yeah, same type of approach.



One thing I particularly like that showed up in the report, and that has honestly been near and dear to my heart, is where you talk about mitigations around compromised credentials at one point. When an AWS credential winds up on GitHub, AWS has scanners and a service that will catch that and apply a quarantine policy to those IAM credentials. The problem is that the policy goes nowhere near far enough. I wound up having a fun thought experiment a while back, not necessarily focused on attacking the cloud so much as on a denial-of-wallet attack: with a quarantined key, how much money can I cost someone? And I had to give up around the $26 billion mark.



And okay, that project can’t ever see the light of day because it’ll just cause grief for people. The problem is that mitigations built around trying to list the bad things and enumerate them mean that you’re forever trying to enumerate something that is innumerable in and of itself. It feels like a hard policy of “once this is compromised, it’s not good for anything” would be the right answer. But people argue with me on that.
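For what it’s worth, the harder line Corey is arguing for can be automated. The quarantine policy he mentions is a deny list of specific high-risk actions rather than a full lockout, so anything not on the list keeps working; a blunter response is to deactivate the exposed key and attach a deny-everything policy to the principal until a human has looked at it. A rough boto3 sketch, with hypothetical user and key values standing in for whatever your secret-scanning alert hands you:

```python
import json
import boto3

iam = boto3.client("iam")

# Hypothetical values: in practice these come from your alerting pipeline
# (a GitHub secret-scanning event, a GuardDuty finding, and so on).
USER_NAME = "ci-deploy"
ACCESS_KEY_ID = "AKIAXXXXXXXXEXAMPLE"

# 1. Kill the credential itself so it stops authenticating at all.
iam.update_access_key(
    UserName=USER_NAME, AccessKeyId=ACCESS_KEY_ID, Status="Inactive"
)

# 2. Belt and suspenders: deny everything on the principal until someone
#    has investigated, instead of relying on a deny list of known-bad actions.
iam.put_user_policy(
    UserName=USER_NAME,
    PolicyName="incident-full-deny",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Deny", "Action": "*", "Resource": "*"}],
    }),
)
```

The obvious objection is that this can break whatever production pipeline was (wrongly) sharing that key, which is exactly the argument people make; whether a dead key is worse than a live attacker is the judgment call Corey is pointing at.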



Anna: I don’t think I would argue with you on that. I do think there are moments here—again, I have to have sympathy for the folks who are actually trying to be administrators in the cloud, and—



Corey: Oh God, it’s hard.



Anna: [sigh]. I mean, a lot of the things we choose to do as cloud users and cloud admins are things that are very hard to check for security goodness, if you will, right? Like, the security quality of the naming convention of your user accounts, or something like that, right? One of the things we actually saw in this report—and it almost made me cry, like, how visceral my reaction was to this thing—is, there were basically admin accounts in this cloud environment, and they were named according to a specific convention, right? So, it was, like, admincorey and adminanna: if you were an admin, you got an adminanna-style account, right? And then there were a bunch of rules that were written, like, policies that would prevent you from doing things to those accounts so that they couldn’t be compromised.



Corey: Root is my user account. What are you talking about?



Anna: Yeah, totally. Yeah [laugh]. They didn’t. They did the thing. They did the good accounts. They didn’t just use root for everybody. So, everyone had their own account; it was very neat. And all that happened is, like, one person barely screwed up the naming of their account, right? Instead of a lowercase admin, they used an uppercase Admin, and so all of the policy written for lowercase admin didn’t apply to them, and so the bad guy was able to attach all kinds of policies and basically create a key for themselves to then go have a field day with this admin account that they just found lying around.



Now, they did nothing wrong. It’s just, like, a very small mistake, but the attacker knew what to do, right? The attacker went and enumerated all these accounts or whatever, like, they see what’s in the environment, they see the different one, and they go, “Oh, these suckers created a convention, and like, this joker didn’t follow it. And I’ve won.” Right? So, they know to check for that stuff.



But our guys have so much going on that they might forget, or they might just, you know, typo, like, whatever. Who cares? Is it case-sensitive? I don’t know. Is it not case-sensitive? Like, some policies are, some policies aren’t. Do you remember which ones are and which ones aren’t? And so, it’s a little hopeless and painful as, like, a cloud defender to be faced with that, but that’s sort of the reality.



And right now, we’re kind of in a “preventive security is the way to save yourself in the cloud” mode, and these things just, like, don’t come up on, like, the benchmarks and the configuration checks and all this other canned stuff that just goes, you know, “Did you put MFA on your user account?” Like, yeah, they did, but [laugh] like, they gave it the wrong name and now it’s a bad na—so it’s a little bleak.
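One cheap defense against exactly this class of drift is to audit the convention itself, since most policy string matching is case-sensitive and a human eyeball will happily read Adminanna as adminanna. A small, illustrative boto3 sketch; the lowercase prefix and the shape of the convention are assumptions for the example, not details from the report.

```python
import boto3

iam = boto3.client("iam")

CONVENTION_PREFIX = "admin"  # the lowercase convention the guardrails were written for

drifted = []
paginator = iam.get_paginator("list_users")
for page in paginator.paginate():
    for user in page["Users"]:
        name = user["UserName"]
        # Looks like an admin account to a human...
        if name.lower().startswith(CONVENTION_PREFIX):
            # ...but would not match a guardrail keyed to the exact
            # lowercase prefix (e.g. "Adminanna" vs "adminanna").
            if not name.startswith(CONVENTION_PREFIX):
                drifted.append(name)

print("accounts outside the naming convention:", drifted or "none")
```

Run something like this on a schedule and the one capitalized account shows up long before an attacker goes looking for it.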



Corey: There’s too much data. Filtering it becomes nightmarish. I mean, I have what I think of as the Dependabot problem, where every week, I get this giant list of Dependabot freaking out about every repository I have on Gif-ub and every dependency thereof. And some of the stuff hasn’t been deployed in years and I don’t care. Other stuff is, okay, I can see how that markdown parser could have malicious input passed to it, but it’s for an internal project that only ever has very defined things allowed to talk to it so it doesn’t actually matter to me.



And then at some point, it’s like, you expect to read, three-quarters of the way down the list of a thousand things, something like, “Oh, and by the way, the basement’s on fire,” and then have it keep going. Filtering the signal from the noise is such a problem that it feels like people only discover the warning signs when they’re doing forensics, after something has already happened, rather than when it’s early enough to be able to fix things. How do you get around that problem?



Anna: It’s brutal. I mean, I’m going to give you, like, my [unintelligible 00:24:28] vendor answer: “It’s just easy. Just do what we said.” But I think [laugh] in all honesty, you do need to have some sort of risk prioritization. I’m not going to say I know the answer to what your algorithm has to be, but our approach of, like, oh, let’s just look up the CVSS score on the vulnerabilities. Oh, look, 600,000 criticals. [laugh]. You know, you have to be able to filter past that, too. Like, is this being used by the application? Like, has this thing recently been accessed? Like, does this user have permissions? Have they used those permissions?



Like, these kinds of questions that we know to ask, but you really have to kind of like force the security team, if you will, or the DevOps team or whatever team you have to actually, instead of looking at the list and crying, being like, how can we pare this list down? Like anything at all, just anything at all. And do that iteratively, right? And then on the other side, I mean, it’s so… defense-in-depth, like, right? I know it’s—I’m not supposed to say that because it’s like, not cool anymore, but it’s so true in cloud, like, you have to assume that all these controls will fail and so you have to come up with some—



Corey: People will fail, processes will fail, controls will fail, and great—



Anna: Yeah.



Corey: How do you make sure that one of those things failing isn’t winner-take-all?



Anna: Yeah. And so, you need some detection mechanism to see when something’s failed, and then you, like, have a resilience plan because you know, if you can detect that it’s failed, but you can’t do anything about it, I mean, big deal, [laugh] right? So detection—



Corey: Good job. That’s helpful.



Anna: And response [laugh]. And response. Actually, mostly response yeah.



Corey: Otherwise, it’s, “Hey, guess what? You’re not going to believe this, but…” it goes downhill from there rapidly.



Anna: Just like, how shall we write the news headline for you?
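Coming back to Anna’s point a moment ago about paring the list down: the questions she lists (is the package actually in use, is the thing exposed, have the permissions been exercised) translate directly into a filter. A deliberately simplified sketch with made-up fields and thresholds, not Sysdig’s scoring model:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float
    package_in_use: bool      # vulnerable package actually loaded at runtime?
    internet_exposed: bool    # workload reachable from outside?
    fix_available: bool

def worth_a_ticket(f: Finding) -> bool:
    """Keep findings only when severity and real-world context stack up."""
    return f.cvss >= 7.0 and f.package_in_use and (f.internet_exposed or f.fix_available)

findings = [
    Finding("CVE-2021-44228", 10.0, package_in_use=True, internet_exposed=True, fix_available=True),
    # The two entries below use placeholder IDs, not real CVEs.
    Finding("CVE-0000-00001", 9.8, package_in_use=False, internet_exposed=True, fix_available=True),
    Finding("CVE-0000-00002", 7.5, package_in_use=True, internet_exposed=False, fix_available=False),
]

shortlist = [f for f in findings if worth_a_ticket(f)]
print([f.cve_id for f in shortlist])  # only the Log4Shell-style finding survives
```

The exact rule matters less than the habit of iterating on it: each pass should shrink the list to something a team can actually work through, which is the “pare this list down, anything at all” loop Anna describes.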



Corey: I have to ask, given that you have just completed this report and are in a place where you have a sort of bird’s-eye view on the industry at just the right time: over the past year, we’ve seen significant macro changes affect an awful lot of different areas, the hiring markets, the VC funding markets, the stock markets. How has, I guess, the threat space evolved—if at all—during that same timeframe?



Anna: I’m guessing the bad guys are paying more than the good guys.



Corey: Well, there is part of that and I have to imagine also, crypto miners are less popular since sanity seems to have returned to an awful lot of people’s perspective on money.



Anna: I don’t know if they are because, like, even fractions of cents are still cents once you add up enough of them. So, I don’t think [they have stopped 00:26:49] mining.



Corey: It remains perfectly economical to mine Bitcoin in the cloud, as long as you use someone else’s account to do it.



Anna: Exactly. Someone else’s money is the best kind of money.



Corey: That’s the VC motto and then some.



Anna: [laugh]. Right? I think it’s tough, right? I don’t want to be cliché and say, “Look, oh, automate more stuff.” I do think that if you’re in the security space on the blue team and you are, like, afraid of losing your job—you probably shouldn’t be afraid if you do your job at all, because there’s a huge lack of talent, and that pool is not growing quickly enough.



Corey: You might be out of work for dozens of minutes.



Anna: Yeah, maybe even an hour if you spend that hour, like, not emailing people, asking for work. So yeah, I mean, blah, blah, skill up in cloud, like, automate, et cetera. I think what I said earlier is actually the more important piece, right? We have all these really talented people sitting behind these dashboards, just trying to do the right thing, and we’re not giving them good data, right? We’re giving them too much data and it’s not good quality data.



So, whatever team you’re on or whatever your business is, like, you will have to try to pare down that list of impossible tasks for all of your cloud-adjacent IT teams to a list of things that are actually going to reduce risk to your business. And I know that’s really hard to do, because you’re now asking folks who are very technical to communicate with folks who are very non-technical, to figure out how to, like, save the business money and keep the business running, and we’ve never been good at this, but there’s no time like the present to actually get good at it.



Corey: Let’s see, what is it: the best time to plant a tree was 20 years ago; the second-best time is now. Same sort of approach. I think I’m seeing less of the obnoxious whining that I saw for years about how there’s a complete shortage of security professionals out there. It’s, “Okay, have you considered taking promising people and training them to do cybersecurity?” “No, that will take six months to get them productive.” Then they sit there for two years with the job req open. It’s, hmm. Now, I’m not a professor here, but I also sort of feel like there might be a solution that benefits everyone. At least that rhetoric seems to have tamped down.



Anna: I think you’re probably right. There’s a lot of awesome training out there, too. There are, like, folks giving stuff away for free that are super resources, so I think we are doing a good job of training up security folks. And everybody wants to be in security because it’s so cool. But yeah, I think the data problem is this decade’s struggle, more so than any other decade’s.



Corey: I really want to thank you for taking the time to speak with me. If people want to learn more, where can they go to get their own copy of the report?



Anna: It’s been an absolute pleasure, Corey, and thanks, as always for having us. If you would like to check out the report—which you absolutely should—you can find it ungated at www.sysdig.com/2023threatreport.



Corey: You had me at ungated. Thank you so much for taking the time today. It’s appreciated. Anna Belak, Director of the Office of Cybersecurity Strategy at Sysdig. This promoted guest episode has been brought to us by our friends at Sysdig and I’m Cloud Economist Corey Quinn.



If you’ve enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you’ve hated this podcast, please leave a five-star review on your podcast platform of choice along with an insulting comment that no doubt will compile into a malicious binary that I can grab off of Docker Hub.


Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.


