Security Can Be More than Hues of Blue with Ell Marquez

Episode Summary

We shouldn’t forget security until the very end—and Ell Marquez, Security Research Advocate at Intezer, is here to keep us honest! Ell’s job title only hints at the breadth of her work in security, which can be so much more than blue-hued charts (ahem, AWS)! Ell offers solid insight on how to step up the security game, shining a light on the esoteric nature of the field and the importance of entire organizations sharing the security responsibility. She gives her take on third-party contractors and the Target kerfuffle, and chats with Corey about Pokémon’s security choices. Ell’s security chops come to the forefront too, as she and Corey debate the importance of various practices and talk about her work as a podcaster and more!

Episode Show Notes & Transcript

About Ell
Ell, former SysAdmin, cloud builder, podcaster, and container advocate, has always been a security enthusiast. This enthusiasm and driven curiosity have helped her become an active member of the InfoSec community, leading her to explore the exciting world of Genetic Software Mapping at Intezer.


Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.


Corey: It seems like there is a new security breach every day. Are you confident that an old SSH key, or a shared admin account, isn’t going to come back and bite you? If not, check out Teleport. Teleport is the easiest, most secure way to access all of your infrastructure. The open source Teleport Access Plane consolidates everything you need for secure access to your Linux and Windows servers—and I assure you there is no third option there—Kubernetes clusters, databases, and internal applications like AWS Management Console, Jenkins, GitLab, Grafana, Jupyter Notebooks, and more. Teleport’s unique approach is not only more secure, it also improves developer productivity. To learn more visit: goteleport.com. And no, that is not me telling you to go away; it is: goteleport.com.


Corey: This episode is sponsored by our friends at Oracle Cloud. Counting the pennies, but still dreaming of deploying apps instead of "Hello, World" demos? Allow me to introduce you to Oracle's Always Free tier. It provides over 20 free services and infrastructure: networking, databases, observability, management, and security. And—let me be clear here—it's actually free. There's no surprise billing until you intentionally and proactively upgrade your account. This means you can provision a virtual machine instance or spin up an autonomous database that manages itself, all while gaining the networking, load balancing, and storage resources that somehow never quite make it into most free tiers needed to support the application that you want to build. With Always Free, you can do things like run small-scale applications or do proof-of-concept testing without spending a dime. You know that I always like to put asterisks next to the word free. This is actually free, no asterisk. Start now. Visit snark.cloud/oci-free, that's snark.cloud/oci-free.


Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. If there’s one thing we love doing in the world of cloud, it’s forgetting security until the very end, going back and bolting it on as if we intended to do it that way all along. That’s why AWS says security is job zero: because they didn’t want to renumber all of their slides once they realized they forgot security. Here to talk with me about that today is Ell Marquez, security research advocate at Intezer. Ell, thank you for joining me.


Ell: Of course.


Corey: So, what does a security research advocate do, for lack of a better question, I suppose? Because honestly, you look at that, it’s like, a security research advocate, it seems, would advocate for doing security research. That seems like a good thing to do. I agree, but there’s probably a bit more nuance to it than I can pick up just by the [unintelligible 00:01:17] reading of the title.


Ell: You know, we have all of these white papers that you end up getting, the pen test reports that are dropped on your desk that nobody ever gets to because they become low priority. My job is to actually advocate that you do something with the information that you get. And part of that just involves translating it into plain English so anyone can go with it.


Corey: I’ve got to say, if you want to give the secrets of the universe and make sure that no one ever reads them, make sure that it has a whole bunch of academic-style citations at the beginning, and ideally put it behind some academic paywall, and it feels like people will claim to have read it but never actually read the thing.


Ell: Don’t forget charts.


Corey: Oh yes, with the charts. In varying shades of blue. Apparently that’s the only color you’re allowed to do some of these charts in; despite having a full universe of color palettes out there, we’re just going to put it in varying shades of corporate blue and hope that people read it.


Ell: Yep, that sounds about right for security. [laugh].


Corey: So, how much of, I guess, modern security research these days is coming out of academia versus coming out of industry?


Ell: In my experience, you know, in the research I’ve done researching researchers, it all really revolves around actual practitioners these days, people who are on the front lines, you know, monitoring their honeypots and actually reporting back on what they’re seeing, not just theoretical.


Corey: Which I guess brings us to the question of, I wind up watching all of the keynotes that all the big cloud providers put on and they simultaneously pat me on the head and tell me that their side of security is just fine with their shared responsibility model and the rest, whereas all of the breaches I’m ever going to deal with, the only way anyone can ever see my data, is if I make a mistake in configuring something. And honestly, does that really sound like something I would do? Probably not, but let’s face it, they claim that they are more or less infallible. How accurate is that?


Ell: I wish that I could find the original person that said this, but I’ve heard it so many times. And it’s actually the ‘cloud irresponsibility model.’ We have this blind faith that if we’re paying somebody for it, it’s going to be done correctly. I think you may have seen this with billing. How many people are paying for redundant security services with a cloud provider?


Corey: I’ve once—well, more than once have noticed that if you were to configure every AWS security service that they have and enable it in your account, that the resulting bill would be larger than the cost of the data breach it was preventing. So, on some level, there is a point at which it just becomes ridiculous and it’s not necessarily worth pursuing further. I honestly used to think that the shared responsibility model story was a sales pitch, and then I grew ever more cynical. And now my position on it is that it’s because if you get breached, it’s your fault is what they’re trying to say. But if you say it outright to someone who just got breached, they’re probably not going to give you money anymore. So, you need to wrap that in this whole involved 45-minute presentation with slides, and charts, and images and the rest, because people can’t refute one of those quite the way that they can a tweet-sized sentence of, “It’s your fault.”


Ell: I kind of have to agree with them in the end that it is your fault. Like, the buck stops with you, regardless. You are the one that chose to trust that the cloud provider was going to do everything, because your security team might make a mistake, but the cloud provider is made up of humans as well who can make just as many mistakes. At the end of the day, I don’t care what cloud provider you used; I care that my data was compromised.


Corey: One of the things that irks me the most is when I read about a data breach from a vendor that I had either trusted knowingly with my data or worse, never trusted but they somehow scraped it somewhere and then lost it, and they said, “Oh, a third-party contractor that we hired.” It’s, “Yeah, look, I’m doing business with you, ideally, not the people that you choose to do business with in turn. I didn’t select that contractor. You did, you can pass out the work and delegate that. You cannot delegate the responsibility.” So no, Verizon, when you talk about having a third-party contractor have a data breach of customer data, you lost the data by not vetting your contractors appropriately.


Ell: Let’s go back in time to hopefully something everybody remembers: Target. Target being compromised because of their HVAC provider. Yet how many people—you know this is being recorded in the holiday season—are still shopping at Target right now? I don’t know if people forget or they just don’t care.


Corey: A year later, their stock price was higher than it was before the breach. Sure, they had a complete turnover of their C-suite at that point; their CIO and CEO were forced out as a result, but life went on. And they continue to remain a going concern despite quite literally having a bullseye painted on the building. You’d think that would be a metaphor for security issues. But no, no, that is something they actually do.


Ell: You know, when you talk about, you know, the CEO being let go or, you know, being run out—what part did he honestly have to do with it? They’re talking about, oh, well, they made the decisions and they were responsible. What, because they got that, you know, list of just 8,000 papers with the charts on it?


Corey: As I take a look at a lot of the previous issues that we’ve seen—I’ve been doing my whole S3 Bucket Negligence Awards for a while, but once I actually had a bucket engraved and sent to a company years ago, the Pokémon Company, based upon a story that I read in the Wall Street Journal, how they declined to do business with a prospective vendor because, going through their onboarding process, they noticed, among other things, insufficient security controls around a whole bunch of things including S3 buckets, and it’s holy crap, a company actually making a meaningful decision based upon security. And say what you will about the Pokémon Company, their audience is—at least theoretically—children and occasionally adults who believe they’re children—great, not here to shame—but they understand that this is not something you can afford to be lax in and they kiboshed the entire deal. They didn’t name the vendor, obviously, but that really took me aback. It was such a rarity to see that, and it’s why I unfortunately haven’t had to make a bucket like that since. I wish I did. I wish more companies did things like this. But no, it’s just a matter of, well, we claim to do the right thing, and we checked all the boxes and called it good, and oops, these things happen.


Ell: Yes, but even when it goes that way, who actually remembers what happened, and did you ever follow up on whether there were any consequences, anyone going, “Okay, third-party. You screwed up, we’re out. We’re not using you.” I can’t name a single time that happened.


Corey: Over at The Duckbill Group, we have large enterprise customers. We have to be respectful and careful with their data, let’s be very clear here. We have all of their AWS billing data going back for some fixed period of time. And it worries me what happens if that data gets breached. Now, sure, I’ve done the standard PR crisis comms thing, I have statements and actions prepared to go in the event that it happens, but I’m also taking great pains to make sure it doesn’t.


It’s the idea of okay, let’s make sure that we wind up keeping these things not just distinct from the outside world, but distinct from individual clients so we’re not mixing and matching any of this stuff. It’s one of those areas where if we wind up having a breach, it’s not because we didn’t follow the baseline building blocks of doing this right. It’s something that goes far beyond what we would typically expect to see in an environment like this. This, of course, sets aside the fact that while a breach like that would be embarrassing, it isn’t actually material to anyone’s business. This is not to say that I’m not taking it seriously, because we have contractual provisions that we will not disclose a lot of this stuff, but it does not mean the end of someone’s business if this stuff were to go public in the same way that, for example, back when I worked at Grindr many years ago, in the event that someone’s data had been leaked there, people could theoretically have been killed. There’s a spectrum of consequences here, but it still seems like you just do the basic block-and-tackling to make sure that this stuff isn’t publicly exposed, then you start worrying about the more advanced stuff. But with all these breaches, it seems like people don’t even do that.


Ell: You have Tesla, right, who’s working on going to Mars, sending people there, who had their S3 buckets compromised. At that point, if we’ve got technology that giant out there, I think we’re safe to do that whole, “Hey, assume breach, assume compromise.” But when I say that, it drives me up the wall how many people just go, “Okay, well, there’s nothing we can do. We should just assume that there’s going to be an issue,” and just have this mentality where they give up. No, that gives you a starting point to work from, but that’s not the way it’s being seen.


Corey: One of the things that I’ve started doing as I built up my new laptop recently has been all right, how do I work with this in such a way that I don’t have credentials that are going to grant access to things in any long-lived way ever residing on disk? And so that meant with AWS, I started using SSO to log into a bunch of things. It goes through a website, and then it gives a token and the rest that lasts for 12 hours. Great.


Okay, SSH keys, how do I handle that? Historically, I would have them encrypted with a passphrase, but then I found for macOS an app called Secretive that stores them in the Secure Enclave. I have to either type in a password or prove it with biometric Touch ID nonsense every time something tries to access the key. It’s slightly annoying when I’m checking out five or six Git repos at once, but it also means that nothing that I happen to have compromised in a browser or whatnot is going to be able to just grab the keys, send them off somewhere, and then I’ll never realize that I’ve been compromised throughout. It’s the idea of at least theoretically defense in depth, because it’s me, it’s my personal electronics, in all likelihood, that are going to be compromised, more so than it is properly configured, locked-down S3 buckets. And if not me, someone else in my company who has access to these things.
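For anyone following along at home, here is a minimal sketch of the two pieces Corey describes, assuming AWS CLI v2 and Secretive’s default agent socket; the start URL, account ID, and role name below are placeholders, not anything from the episode.

    # ~/.aws/config: short-lived credentials via AWS SSO instead of static keys
    # (start URL, account ID, and role name are placeholders)
    [profile work]
    sso_start_url = https://example.awsapps.com/start
    sso_region = us-east-1
    sso_account_id = 111111111111
    sso_role_name = ReadOnly
    region = us-east-1

    # ~/.ssh/config: route key requests to Secretive's agent socket,
    # so private keys never leave the Secure Enclave
    Host *
        IdentityAgent ~/Library/Containers/com.maxgoedjen.Secretive.SecretAgent/Data/socket.ssh

Running aws sso login --profile work then mints a session that expires on its own, and each use of the SSH key prompts for Touch ID.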


Ell: I’m going to give you the best advice you’re ever going to get, and people are going to go, “Duh,” but it’s happening right now: Don’t get complacent, don’t get lazy. How many of us are, “Okay, we’re just going to put the key over here for a second,” or, “We’re just going to do this for a minute,” and then we forget? I recently, you know, did some research into Emotet—you know, the new virus and the group behind it—and you know how they got caught? When they were raided, everything was in plain text. They forgot to use their VPN for a while; all the files that they’d gotten had no encryption. These were the people that knew that’s exactly what investigators would be looking for, but you get lazy.


Corey: I’ve started treating at least the security credential side of doing weird things, even one-off bash scripts, as if they were in production. I stuff the credentials into something like AWS’s Parameter Store, and then just have a one-line snippet of code that retrieves them at runtime. Would it be easier to just slap them in there in the code? Absolutely, of course it would. But I also look at my newsletter production pipeline, and I count the number of DynamoDB tables that are in active use that are labeled Test or Dev, and I realize, huh, I’m actually kind of bad at taking something that was in Dev and getting it ready for production. Very often, I just throw a load at it and call it good. So, if I never get complacent around things like that, it’s a lot harder for me to get yelled at for checking secrets into Git, for example.
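That one-line retrieval is about as small as it sounds. Here is a hedged sketch with boto3; the parameter path is a made-up placeholder, not Corey’s actual setup:

    # Fetch a SecureString from AWS SSM Parameter Store at runtime,
    # so the credential never has to live on disk or in the script.
    import boto3

    ssm = boto3.client("ssm")
    api_key = ssm.get_parameter(
        Name="/newsletter/prod/api-key",  # hypothetical parameter path
        WithDecryption=True,              # decrypt the SecureString via KMS
    )["Parameter"]["Value"]

The same pattern works from a bash script with the AWS CLI’s aws ssm get-parameter --with-decryption.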


Ell: Probably not the first time that you’ve heard this but, Corey, I’m going to have to go with you’re abnormal because that is not what we’re seeing in a day-to-day production environment.


Corey: Oh, of course not. And the reason I do this is because I was a grumpy old sysadmin for so long, and have gotten burned in so many weird ways of messing things up. And once it’s in Git, it’s eternal—we all know that—and I don’t ever want to be in a scenario where I open-source something and surprise, surprise, come to find out in the first two days of doing something, I had something on disk. It’s just better not to go down that path if at all possible.


Ell: Being a former sysad as well, I must say, what you’re able to do within your environment, on your own computer, is almost impossible within a corporate environment. Because as a sysad, I’m looking at, “What did the devs do again? Oh, man, what’s the security team going to do?” And you’re stuck in the middle, trying to figure out how to solve a problem and then manage it through that entire environment.


Corey: I never really understood intrinsically the value of things like single sign-on until I wound up starting this company. Because first, it was just me for a few years. And yeah, I can manage my developer environments and my AWS environments in such a way that if they get compromised, it’s not going to be through a basic, “Oops, I forgot that’s how computers work,” type of moment. It’s going to be at least something a little bit more difficult, I would imagine. Because if you—all right, if you managed to wind up getting my keys and the passphrase, and in some cases, the MFA device, great, good, congratulations, you’ve done something novel and probably deserve the data.


Whereas as soon as I started bringing other people in who themselves were engineers, I sort of still felt the same way. Okay, we’re all responsible adults here, and by and large, since I wasn’t working with junior people, that held true. And then I started bringing in people who did not come from a deeply computer-y technical background, doing things like finance, and doing things like sales, and doing things like marketing, all of which are themselves deeply technical in their own way, but data privacy and data security are not really something that aligns with that. So, it got into the weeds of, “How do I make sure that people are doing responsible things on their work computers, like turning on disk encryption, and forcing a screensaver, and a password, and the rest?” And forcing them to at least do some responsible things, like having 1Password for everyone, was great until I realized a couple of people weren’t even using it, and oh dear. It becomes a much more difficult problem at scale when you have to deal with people who, you know, have actual work to do rather than sitting around trying to defend the technology against any threat they can imagine.


Ell: In what you just said, though, there is one flaw: we tend to focus on, like you said, marketing and finance and all these organizations who—don’t get phished, don’t click on this link. But we kind of just take it on faith that your security team, your sysads, your developers are going to know best practices. And then we focus on Windows because that’s what the researchers are doing. And then we focus on Windows because that’s what marketing is using, that’s what finance is using. So, what, there’s no way to compromise a Mac or Linux box? That’s a huge, huge open area that you’re allowing for attackers.


Corey: Let’s be very clear here. We don’t have any Windows boxes—of which I’m aware—in the company. And yeah, the technical folk we have brought in, most of them I’d worked—or at least the early folks—I’d worked with previously. And we had a shared understanding of security. At least we all said the right things.


But yeah, as you—right, as you grow, as you scale, this becomes a big deal. And I also think there’s something intrinsically flawed about a model where the entire instruction set is: it all falls on you to not click the link, or you’re going to doom us all. Maybe if someone can click a link and doom us all, the problem is not with them; it’s the fact that we suck at building secure systems that respect defense in depth.


Ell: Something that we do wrong, though, is we split it up. We have endpoint protection when we’re talking about, you know, our Windows boxes, our Linux boxes, our Mac boxes. And then we have server-side and cloud security. Those connect. Think about it: there’s a piece of malware called EvilGNOME. It goes in on a Linux box, and it has access to my camera, keylogging, and watching exactly what I’m doing. I’m your sysad; they then cat out my SSH keys and go into your box. They now have the password, but we don’t look for that. We just assume that those two aren’t really that connected, and that if we monitor our network and we monitor these devices, we’ll be fine. But we don’t connect the two pieces.


Corey: One thing that I did at a consulting client back in 2012 or so that really raised eyebrows whenever I told people about it was that we wound up going to some considerable trouble building an allow list within Squid—a proxy server that those of us in Linux-land are all too familiar with, in some cases—so everything in production could only talk to the outside world via that proxy; it was not allowed to establish any outbound connections other than through that proxy. So, it was at that point only allowed to talk to specified update servers, specified third-party APIs, and the rest, so at least in theory—I haven’t checked back on them since—I don’t imagine that the log4yay nonsense that we’ve seen recently would necessarily work there. I mean, sure, you have the arbitrary execution of code—that’s bad—but reaching out to random endpoints on the internet would not have worked from within that environment. And I liked that model, but oh my God, was it a pain in the butt to set up properly, because it turns out, even in 2012, just to update a Linux system reasonably, there’s a fair number of things it needs to connect to from time to time, once you have all the things like New Relic instrumentation in, and the app repository you’re talking to, and whatever container source you’re using, and, and, and. Then you wind up looking at challenges like, oh, I don’t know, if you’re looking at an AWS-style environment, like most modern things are, okay, we’re only going to allow it to talk to AWS endpoints. Well, that’s kind of the entire internet now. The goalposts move, the rules change, the game marches on.
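The Squid side of that setup is only a few lines; here is a hedged sketch with placeholder domains rather than anything from the client’s actual configuration:

    # squid.conf: default-deny egress. Production may only reach
    # explicitly listed destinations, and only through this proxy.
    acl allowed_dst dstdomain .ubuntu.com .newrelic.com .amazonaws.com
    http_access allow allowed_dst
    http_access deny all

As Corey says, the config is the easy part; the pain is discovering and maintaining the list of everything a system legitimately needs to reach.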


Ell: On an even simpler point, with that you’re assuming only outbound traffic through those devices. Are they not connected to anything within the internal network? Is there no way for an attacker to pivot between systems? I pivot over to that, I get the information, and I make an outbound connection on something that’s not configured that way.


Corey: We had—you’re allowed to talk outbound to the management subnet, which was on its own VLAN, and that could make established connections into other things, but nothing else was allowed to connect into that. There was some defense in depth and some thought put into this. I didn’t come up with most of this, to be clear; this was smart people sitting around. And yeah, if I sit here and think about this for a while, of course there’s going to be ways to do it. This was also back in the days of doing it in physical data centers, so you could have a pretty good idea of what was connected to the outside world just by looking at where the cables went. But there was also always the question of how does this—does this do what I think it’s doing, or what have I overlooked? Security’s job is never done.


Ell: Or what was misconfigured in the last update. It’s an assumption that everything goes correctly.


Corey: Oh, there is that. I want to talk, though, about the things I had to worry about back then; it seems like in many cases they get kicked upstairs to the cloud providers that we’re using these days. But then we see things like Azurescape, where security researchers were able to gain access to the Azure control plane, where customers using Cosmos DB—Azure’s managed database service, one of them—could suddenly have their data accessed by another customer. And Azure is doing its clam-up thing and not talking about this publicly other than a brief disclosure, but how is this even possible from a security architecture point of view? It makes me wonder, if it hadn’t been disclosed publicly by the researcher, would they have ever said something? Most assuredly not.


Ell: I’ve worked with several researchers, in Intezer and outside of Intezer, and the amount of frustration that I see within responsible disclosure just blows my mind. You have somebody threatening to sue the researcher if they bring it out. You have a company going, “Okay, well, we’ve only had six weeks. Give us three more weeks.” And next thing we know, it’s six months.


There is just this pushback about what we can actually bring out to the public on why organizations are vulnerable. So, we’re put in this catch-22 as researchers. At what point is my responsibility to the public, and at what point is my responsibility to protect myself, to keep myself from getting sued personally, to keep my company from going down? How can we win when we have small research groups and these massive cloud providers?


Corey: This episode is sponsored in part by something new. Cloud Academy is a training platform built on two primary goals: having the highest quality content in tech and cloud skills, and building a good community that is rich and full of IT and engineering professionals. You wouldn’t think those things go together, but sometimes they do. It’s both useful for individuals and large enterprises, but here’s what makes it new. I don’t use that term lightly. Cloud Academy invites you to showcase just how good your AWS skills are. For the next four weeks, you’ll have a chance to prove yourself. Compete in four unique lab challenges, where they’ll be awarding more than $2000 in cash and prizes. I’m not kidding; first place is a thousand bucks. Pre-register for the first challenge now, one that I picked out myself on Amazon SNS image resizing, by visiting cloudacademy.com/corey. C-O-R-E-Y. That’s cloudacademy.com/corey. We’re gonna have some fun with this one!


Corey: For a while, I was relatively confident that we had things like Google’s Project Zero, but then they started softening their disclosure timelines and the rest, and we had the Full Disclosure security distribution list that has been shuttered, to my understanding. Increasingly, it’s become risky—to yourself—to wind up publishing something that has not been patched and blessed by the providers and the rest. For better or worse, I don’t have those problems, just because I’m posting about funny implications of the bill. Yeah, worst case, AWS is temporarily embarrassed, and they can wind up giving credits to people who were affected and be mad at me for a while, but there’s no lasting harm in the way that there is with, well, people were just able to look at your data for six months, and that’s our bad, oops-a-doozy. Especially given the assertions that all of these providers have made to governments, to banks, to tax authorities, to all kinds of environments where security really, really matters.


Ell: The last statistic that I heard, and it was earlier this year, is that it takes over 200 days for a compromise even to be detected. How long is it going to take for them to backtrack and figure out how it got in? Have they already patched those systems so that vulnerability is gone, but the attackers managed to establish persistence somehow? The layers that go into actually doing your digital forensics only delay the amount of time before any of that is going to come out, where they have some information to present to you. We keep going, “Oh, we found this vulnerability. We’re working on patches. We have it fixed.” But does every single vendor already have it patched? Do they know how it actually interacted within one customer’s environment that allowed that breach to happen? It’s just ridiculous to think that’s actually occurring, and that every company is now protected because that patch came out.


Corey: As I take a look at how companies respond to these things, you’re right, the number one concern most of them have is image control, if I’m being honest with you. It’s the reputational management of: we are still good at security, even though we’ve had a lapse here. Like, every breach notification starts out with, “Your security is important to us.” Well, clearly not that important, because look at the email you had to send. And it’s almost taken on aspects of a comedy piece where it [drips 00:23:10] with corporate insincerity. On some level, when you tell a company that they have a massive security vulnerability, their first questions are not about data privacy; it’s about how do we spin this to make ourselves come out of this with the least damage possible. And I understand it, but it’s still crappy.


Ell: Us tech folk talk to each other. When we have security and developers speaking to each other, we’re a lot more honest than when we’re talking to the public, right? We don’t try to hold that PR umbrella over ourselves. I was recently on a panel speaking with developers, head SRE folk—who else was there? I think there was a CISO on there—and one of the developers just honestly came out and said, “At the end, my job is to say, ‘How much is that breach going to cost, versus how much money will the company lose if I don’t make that deployment?’” The first thing that you notice there is that whole ‘how much money you’ll lose’ framing. The second part is: why is the developer the one looking at the breach?


Corey: Yeah. The work flows downward. One of the most depressing aspects to me of the CISO role is that it seems like the job is to delegate everything, sign binding contracts in your name, and eventually get fired when there’s a breach and your replacement comes in to sign different papers. All the work gets delegated, none of the responsibility does, ideally—unless you’re SolarWinds and try and blame it on an intern; I mean, I wish I had an ablative intern or two around here to wind up casting blame they don’t deserve on them. But that’s a separate argument—there is no responsibility-taking as I look at this. And that’s really a depressing commentary on the state of the world.


Ell: You say there’s no responsibility taken, but there is a lot of blame assigned. I love the concept of post-mortems into why a breach happened, but the only people in the room are the security team, as if they had that much control over anything. Companies as a whole need a scapegoat, and more and more, security teams are being blamed for every single compromise even as more and more of the privileges and visibility into what’s going on are being taken away from them. Those two just don’t balance. And I think it’s causing a lot of complacency and almost a giving-up from our security teams.


Corey: To be clear, when we talk about blameless post-mortems for things like this, I agree with it wholeheartedly within the walls of a company. However, externally, as someone whose data has been taken in some of these breaches, oh, I absolutely blame the company. As I should, especially when it’s something like, well, we have inadvertently leaked your browsing history. “Why were you collecting that in the first place?” is sort of the next logical question.


I don’t believe that my ISP needs that to serve me better. But now you have Verizon sending out emails recently—as of this recording—saying that unless anyone opts out, all the lines in our cell account are going to wind up being data mined effectively, so they can better target advertisements and understand us better. It’s no, I absolutely do not want you to be doing that on my phone. Are you out of your mind? There are a few things in this world that we consider more private than our browsing histories. We ask the internet things we wouldn’t ask our doctors in many cases, and that is no small thing as far as the level of trust that we place in our ISPs that they are now apparently playing fast and loose with.


Ell: I’m going to take this step back because you do a lot of work with cloud providers. Do you think that we actually know what information is being collected about our companies and what we have configured internally and externally by the cloud provider?


Corey: That’s a good question. I’ve seen this before, where people will give me the PDF exploded view of last month’s AWS bill, and they’ll laugh, because what information can I possibly get out of that? It just shows spend on services. But I could use that to start sketching out a pretty good idea of what their architecture looks like from that alone. There’s an awful lot of value in the metadata.


Now, I want to be clear, I do not believe of any provider—except possibly Azure, because who knows at this point—that if you encrypt the data using their encryption facilities—with AWS, I know it’s KMS, for example—that they can arbitrarily decrypt it and then scan for whatever it is they’re looking for. I do not believe that they are doing that, because as soon as something like that comes out, it puts the lie to a whole bunch of different audit attestations that they’ve made and brings the entire empire crumbling down. I don’t think they’re going to get any useful data from that. However, if I’m trying to build something like Amazon Prime Video and I can just look at the bill from the Netflix account, well, that tells me an awful lot about things that they might be doing internally; it’s highly suggestive. Could that be used to give them an unfair advantage? Absolutely.
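To make “their encryption facilities” concrete, here is a hedged sketch of envelope encryption with KMS; the key alias is a placeholder, and this is the standard pattern, not anything Corey described using:

    # Envelope encryption with AWS KMS: KMS hands back a data key;
    # the plaintext copy encrypts the payload locally and is discarded,
    # and every later decrypt is an API call against your own key.
    import boto3

    kms = boto3.client("kms")
    data_key = kms.generate_data_key(
        KeyId="alias/billing-data",  # hypothetical customer managed key
        KeySpec="AES_256",
    )
    # Encrypt locally with data_key["Plaintext"], store only the
    # ciphertext blob next to the data, then forget the plaintext key.
    recovered = kms.decrypt(CiphertextBlob=data_key["CiphertextBlob"])["Plaintext"]

Every decrypt call shows up in CloudTrail, which is about as much visibility as a tenant gets into that trust boundary.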


I had a tweet a while back that I don’t believe that Google’s Gmail division is scanning inboxes for things that look like AWS invoices to target their sales teams, but I sure would feel better if they would assure me that was the case. No one was able to ever assure me of that. I don’t mean to be sitting here slinging mud, but at the same time, given that when you don’t explicitly say you’re not doing something as a company, there’s a great chance you might be doing it, that’s the sort of stuff that worries me. It’s a bunch of unfair, dirty-trick-style stuff.


Ell: Maybe I’m just cynical, or maybe I just focus on these topics too much, but after giving a presentation on cloud security, I had two groups, both, you know, from three-letter government agencies, come up to me and say, “How do I have these conversations with the cloud provider?” In the conversation, they say, “We’ve contacted them several times; we want to look at this data; we want to see what they’ve collected, and we get ghosted, or we end up talking to attorneys. And despite over a year of communication, we’ve yet to be able to sit down with them.”


Corey: Now, that’s an interesting story. I would love to have someone come to me with that problem. I don’t know how I would solve that yet. But I have a couple ideas.


Ell: Hey, maybe they’re listening, and they’ll reach out to you. But—


Corey: You know, if you’re having that problem of trying to understand what your cloud provider is doing, please talk to me. I would love to go a little more in depth on that conversation, under an NDA or six.


Ell: I was at a loss because the presentation that I was giving was literally about the compromise of managed service providers, whether that be an outsourced security group, whether that be your cloud provider, we’re seeing attack groups going after these tar—think about how juicy they are. Why do I need to compromise your account or your company if I can compromise that managed service provider and have access to 15 companies?


Corey: Oh, yeah. It’s why would someone spend time trying to break into my NetApp when they could break into S3 and get access to everyone’s data, theoretically? It’s a centralization of security model risk.


Ell: Yeah, it seems to so many people as just this crazy idea. It’s so far out there. We don’t need to worry about it. I mean, we’ve talked about how Azure Functions has been compromised. We talked about all of these cloud services that people are specifically going after and being able to make traction in these attacks.


It’s not just this crazy idea. It’s something that’s happening now, and with the progress that attackers are making, criminal groups are making, this is going to happen pretty soon.


Corey: Sometimes when I’m out for a meal with someone who works with AWS in the security org, there’ll be an appetizer where, “Oh, there’s two of you. I’m going to bring three of them,” because I guess waitstaff love to watch people fight like that. And whenever I want the third one, all I have to do is say, “Can you imagine a day in which, just imagine hypothetically, IAM failed open and allowed every request to go through regardless of everything else?” Suddenly, they look sick, lose their appetite, and I get the third one. But it’s at least reassuring to know that even the idea of that is that disgusting to them, and it’s not the, “Oh, that happened three weeks ago, but don’t tell anyone.” Like, there’s none of that going on.


I do believe that the people working on these systems at the cloud providers are doing amazingly good work. I believe they are doing far better than I would be able to do in trying to manage all those things myself, by a landslide. But nothing is ever perfect. And it makes me wonder that if and when there are vulnerabilities, as we’ve already seen—clearly—with Azure, how forthcoming and transparent would they really be? And that’s the thing that keeps me up at night.


Ell: I keep going back during this talk, but just the interaction with the people there and the crowd was just so eye-opening. And I don’t want to be that person, but I keep getting to these moments of, “I told you so.” And I’m not going to go into SolarWinds. Lord, that has been covered, but shortly after that, we saw the same group going through and trying to—I’m not sure if they successfully did it, but they were targeting networks for cloud computing providers. How many companies focused outside of that compromise at that moment to see what it was going to build out to?


Corey: That’s the terrifying thing: if you can compromise a cloud service provider at this point, well, you could sell that exploit on the dark web to someone. Yeah, if you can get a remote code execution and be able to look into any random cloud account, there’s almost no amount of money that is enough for something like that. You could think of the insider trading potential of just compromising Slack. A single company, but everyone talks about everything there, and Slack retains data in perpetuity. Think of the sheer M&A discussions you could come up with. Think of what you could figure out with a sort of God’s-eye view of something like that, and then realize that they run on AWS, as do an awful lot of other companies. The damage would be incalculable.


Ell: I am not an attacker, nor do I play one on TV, but let’s just, kind of, build this out. If I was to compromise a cloud provider, the first thing I would do is lay low. I don’t want them to know that I’m there. The next thing I would do is start getting into company environments and scanning them. That way I can see where the vulnerabilities are, I can compromise them that way, and not give out the fact that I came in through that cloud provider. Look, I’m just me sitting here. I’m not a nation state. I’m not somebody who is paid to do this from nine to five, I can only imagine what they would come up with.


Corey: It really feels like this is no longer a concern just for those folks who have managed to get on the bad side of some country’s secret service. It seems like APTs, Advanced Persistent Threats, are now theoretically something almost anyone has to worry about.


Ell: Let me just set the record straight right now on what I think we need to move away from: the whole ‘APTs are nation-states.’ Not anymore. An APT is anyone who has advanced tactics, anyone who’s going to be persistent—because you know what, it’s not that they’re targeting you; it’s that they know that they eventually can get in. And of course, they’re a threat to you. When I was researching my work into Advanced Persistent Threats, we had a group named TeamTNT that said, “Okay, you know what? We’re done.”


So, I contacted them and I said, “Here’s what I’m presenting on you. Would you mind reviewing it and telling me if I’m right?” They came back and said, “You know what? We’re not an APT because we target open Docker API ports. That’s how easy it is.” So, these big attack groups are not even having to rely on advanced methods anymore. The line around what counts as an APT is just completely blurring.


Corey: That’s the scariest part to me is we take a look at this across the board. And the things I have to worry about are no longer things that are solely within my arena of control. They used to be, back when it was in my data center, but now increasingly, I have to extend trust to a whole bunch of different places. Because we’re not building anything ourselves. We have all kinds of third-party dependencies, and we have to trust that they’re doing the right things as they go, too, and making sure that they’re bound so that the monitoring agent that I’m using can’t compromise my entire environment. It’s really a good time to be professionally paranoid.


Ell: And who is actually responsible for all this? Did you know that 70% of the vulnerabilities on our systems right now are at the application level? Yet security teams have to protect it? That doesn’t make sense to me at all. And yet, developers can pull in any third-party repository that they need in order to make that application work because, hey, we’re on a deadline. That function needs to come out.


Corey: Ell, I want to thank you for taking the time to speak with me. If people want to learn more about how you see the world and what kind of security research you’re advocating for, where can they find you?


Ell: I live on Twitter to the point where I’m almost embarrassed to say, but you can find me at @Ell_o_Punk.


Corey: Excellent. And we will wind up putting a link to that in the [show notes 00:35:37], as we always do. Thanks so much again for your time. I appreciate it.


Ell: Always. I’d be happy to come again. [laugh].


Corey: Ell Marquez, security research advocate at Intezer. I’m Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you’ve enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you’ve hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry comment that ends in a link begging me to click it that somehow looks simultaneously suspicious and frightening.


Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.


Announcer: This has been a HumblePod production. Stay humble.

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: It seems like there is a new security breach every day. Are you confident that an old SSH key, or a shared admin account, isn’t going to come back and bite you? If not, check out Teleport. Teleport is the easiest, most secure way to access all of your infrastructure. The open source Teleport Access Plane consolidates everything you need for secure access to your Linux and Windows servers—and I assure you there is no third option there. Kubernetes clusters, databases, and internal applications like AWS Management Console, Yankins, GitLab, Grafana, Jupyter Notebooks, and more. Teleport’s unique approach is not only more secure, it also improves developer productivity. To learn more visit: goteleport.com. And not, that is not me telling you to go away, it is: goteleport.com.

Corey: This episode is sponsored by our friends at Oracle Cloud. Counting the pennies, but still dreaming of deploying apps instead of "Hello, World" demos? Allow me to introduce you to Oracle's Always Free tier. It provides over 20 free services and infrastructure, networking, databases, observability, management, and security. And—let me be clear here—it's actually free. There's no surprise billing until you intentionally and proactively upgrade your account. This means you can provision a virtual machine instance or spin up an autonomous database that manages itself all while gaining the networking load, balancing and storage resources that somehow never quite make it into most free tiers needed to support the application that you want to build. With Always Free, you can do things like run small scale applications or do proof-of-concept testing without spending a dime. You know that I always like to put asterisks next to the word free. This is actually free, no asterisk. Start now. Visit snark.cloud/oci-free that's snark.cloud/oci-free.

Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. If there’s one thing we love doing in the world of cloud, it’s forgetting security until the very end, going back and bolting it on as if we intended to do it that way all along. That’s why AWS says security is job zero because they didn’t want to remember all of their slides once they realized they forgot security. Here to talk with me about that today is Ell Marquez, security research advocate at Intezer. Ell, thank you for joining me.

Ell: Of course.

Corey: So, what does a security research advocate do, for lack of a better question, I suppose? Because honestly, you look at that, it’s like, security research advocate, it seems, would advocate for doing security research. That seems like a good thing to do. I agree, but there’s probably a bit more nuance to it, then I can pick up just by the [unintelligible 00:01:17] reading of the title.

Ell: You know, we have all of these white papers that you end up getting, the pen test reports that are dropped on your desk that nobody ever gets to, they become low priority, my job is to actually advocate that you do something with the information that you get. And part of that just involves translating that into plain English, so anyone can go with it.

Corey: I’ve got to say, if you want to give the secrets of the universe and make sure that no one ever reads them, make sure that it has a whole bunch of academic-style citations at the beginning, and ideally put it behind some academic paywall, and it feels like people will claim to have read it but never actually read the thing.

Ell: Don’t forget charts.

Corey: Oh yes, with the charts. In varying shades of blue. Apparently that’s the only color you’re allowed to do some of these charts in; despite having a full universe of color palettes out there, we’re just going to put it in varying shades of corporate blue and hope that people read it.

Ell: Yep, that sounds about security there. [laugh].

Corey: So, how much of, I guess, modern security research these days is coming out of academia versus coming out of industry?

Ell: In my experience in, you know, research I’ve done in researching researchers, it all really revolves around actual practitioners these days, people who are on the front lines, you know, monitoring their honey pots, and actually reporting back on what they’re seeing, not just theoretical.

Corey: Which I guess brings us to the question of, I wind up watching all of the keynotes that all the big cloud providers put on and they simultaneously pat me on the head and tell me that their side of security is just fine with their shared responsibility model and the rest, whereas all of the breaches I’m ever going to deal with and the only way anyone can ever see my data is if I make a mistake in configuring something. And honestly, does that really sound like something I would do? Probably not, but let’s face it, they claim that they are more or less infallible. How accurate is that?

Ell: I wish that I could find the original person that said this, but I’ve heard it so many times. And it’s actually the ‘cloud irresponsibility model.’ We have this blind faith that if we’re paying somebody for it, it’s going to be done correctly. I think you may have seen this with billing. How many people are paying for redundant security services with a cloud provider?

Corey: I’ve once—well, more than once have noticed that if you were to configure every AWS security service that they have and enable it in your account, that the resulting bill would be larger than the cost of the data breach it was preventing. So, on some level, there is a point at which it just becomes ridiculous and it’s not necessarily worth pursuing further. I honestly used to think that the shared responsibility model story was a sales pitch, and then I grew ever more cynical. And now my position on it is that it’s because if you get breached, it’s your fault is what they’re trying to say. But if you say it outright to someone who just got breached, they’re probably not going to give you money anymore. So, you need to wrap that in this whole involved 45-minute presentation with slides, and charts, and images and the rest because people can’t refute one of those quite the way that they can a—it’s in a tweet sentence of, “It’s your fault.”

Ell: I kind of have to agree with them in the end that it is your fault. Like, the buck stops with you, regardless. You are the one that chose to trust that cloud provider was going to do everything because your security team might make a mistake, but the cloud provider is made up of humans as well who can make just as many mistakes. At the end of the day, I don’t care what cloud provider you used; I care that my data was compromised.

Corey: One of the things that irks me the most is when I read about a data breach from a vendor that I had either trusted knowingly with my data or worse, never trusted but they somehow scraped it somewhere and then lost it, and they said, “Oh, a third-party contractor that we hired.” It’s, “Yeah, look, I’m doing business with you, ideally, not the people that you choose to do business with in turn. I didn’t select that contractor. You did, you can pass out the work and delegate that. You cannot delegate the responsibility.” So no, Verizon, when you talk about having a third-party contractor have a data breach of customer data, you lost the data by not vetting your contractors appropriately.

Ell: Let’s go back in time to hopefully something everybody remembers: Target. Target being compromised because of their HVAC provider. Yet how many people—you know this is being recorded in the holiday season—are still shopping at Target right now? I don’t know if people forget or they just don’t care.

Corey: A year later, their stock price was higher than it was before the breach. Sure they had a complete turnover of their C-suite at that point; their CSO and CEO were forced out as a result, but life went on. And they continue to remain a going concern despite quite literally having a bull’s eye painted on the building. You’d think that would be a metaphor for security issues. But no, no, that is something they actually do.

Ell: You know, when you talk about, you know, the CEO being let go or, you know, being run out, but what part did he honestly have to do with it? They’re talking about, oh, well, they made the decisions and they were responsible. What because they got that, you know, list of just 8000 papers with the charts on it?

Corey: As I take a look at a lot of the previous issues that we’ve seen with I’ve been doing my whole S3 Bucket Negligence Awards for a while, but once I actually had a bucket engraved and sent to a company years ago, the Pokémon Company, based upon a story that I read in the Wall Street Journal, how they declined to do business with a prospective vendor because going through their onboarding process, they noticed among other things, insufficient security controls around a whole bunch of things including S3 buckets, and it’s holy crap, a company actually making a meaningful decision based upon security. And say what you will about the Pokémon Company, their audience is—at least theoretically—children and occasionally adults who believe they’re children—great, not here to shame—but they understand that this is not something you can afford to be lax in and they kiboshed the entire deal. They didn’t name the vendor, obviously, but that really took me aback. It was such a rarity to see that, and it’s why I unfortunately haven't had to make a bucket like that since. I wish I did. I wish more companies did things like this. But no it’s just a matter of, well, we claim to do the right thing, and we checked all the boxes and called it good, and oops, these things happen.

Ell: Yes, but even when it goes that way, who actually remembers what happened, and did you ever follow up if there were any consequences to not going, “Okay, third-party. You screwed up, we’re out. We’re not using you.” I can’t name a single time that happened.

Corey: Over at The Duckbill Group, we have large enterprise customers. We have to be respectful and careful with their data, let’s be very clear here. We have all of their AWS billing data going back for some fixed period of time. And it worries me what happens if that data gets breached. Now, sure, I’ve done the standard PR crisis comms thing, I have statements and actions prepared to go in the event that it happens, but I’m also taking great pains to make sure it doesn’t.

It’s the idea of okay, let’s make sure that we wind up keeping these things not just distinct from the outside world, but distinct from individual clients so we’re not mixing and matching any of this stuff. It’s one of those areas where if we wind up having a breach, it’s not because we didn’t follow the baseline building blocks of doing this right. It’s something that goes far beyond what we would typically expect to see in an environment like this. This, of course, sets aside the fact that while a breach like that would be embarrassing, it isn’t actually material to anyone’s business. This is not to say that I’m not taking it seriously because we have contractual provisions that we will not disclose a lot of this stuff, but it does not mean the end of someone’s business if this stuff were to go public in the same way that, for example, back when I worked at Grindr many years ago, in the event that someone’s data had been leaked there, people could theoretically been killed. There’s a spectrum of consequences here, but it still seems like you just do the basic block-and-tackling to make sure that this stuff isn’t publicly exposed, then you start worrying about the more advanced stuff. But with all these breaches, it seems like people don’t even do that.

Ell: You have Tesla, right, who’s working on going to Mars, sending people there who had their S3 buckets compromised. At that point, if we’ve got this technology, just giant there, I think we’re safe to do that whole, “Hey, assume breach, assume compromise.” But when I say that, it drives me up the wall how many people just go, “Okay, well, there’s nothing we can do. We should just assume that there’s going to be an issue,” and just have this mentality where they give up. No, that gives you a starting point to work from, but that’s not the way it’s being seen.

Corey: One of the things that I’ve started doing as I built up my new laptop recently has been: all right, how do I work with this in such a way that I don’t have credentials that grant access to things in any long-lived way ever residing on disk? And so that meant with AWS, I started using SSO to log into a bunch of things. It goes through a website, and then it gives me a token that lasts for 12 hours. Great.
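
For anyone wanting to follow along at home, here is a minimal sketch of that setup using a recent AWS CLI v2’s sso-session support; the session name, start URL, account ID, and role below are all made up:

```ini
# ~/.aws/config: a hypothetical IAM Identity Center (SSO) profile.
# No long-lived access keys ever land on disk with this layout.
[sso-session my-org]
sso_start_url = https://my-org.awsapps.com/start
sso_region = us-east-1

[profile billing-readonly]
sso_session = my-org
sso_account_id = 111111111111
sso_role_name = ReadOnlyAccess
region = us-east-1
```

Running `aws sso login --profile billing-readonly` kicks off the browser flow described here; the cached token expires on its own, so nothing long-lived sits on disk.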

Okay, SSH keys, how do I handle that? Historically, I would have them encrypted with a passphrase, but then I found for macOS an app called Secretive that stores them in the Secure Enclave. I have to either type in a password or prove it with the biometric Touch ID nonsense every time something tries to access the key. It’s slightly annoying when I’m checking out five or six Git repos at once, but it also means that nothing I happen to have compromised in a browser or whatnot is going to be able to just grab the keys, send them off somewhere, and leave me never realizing that I’ve been compromised. It’s the idea of at least theoretical defense in depth, because it’s me, it’s my personal electronics, in all likelihood, that are going to be compromised, more so than correctly configured, locked-down, properly managed S3 buckets. And if not me, someone else in my company who has access to these things.
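
The wiring for that is a couple of lines of SSH configuration. This is a sketch based on the setup instructions Secretive itself displays; verify the socket path in the app, since it may change between versions:

```
# ~/.ssh/config: point SSH at Secretive's agent socket so the private key
# never leaves the Secure Enclave. The path below is what Secretive's setup
# screen suggested at the time of writing; confirm the current value in-app.
Host *
    IdentityAgent ~/Library/Containers/com.maxgoedjen.Secretive.SecretAgent/Data/socket.ssh
```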

Ell: I’m going to give you the best advice you’re ever going to get, and people are going to go, “Duh,” but it’s happening right now: don’t get complacent, don’t get lazy. How many of us are, “Okay, we’re just going to put the key over here for a second,” or, “We’re just going to do this for a minute,” and then we forget? I recently did some research into Emotet—you know, the malware and the group behind it—and you know how they got caught? When they were raided, everything was in plain text. They forgot to use their VPN for a while; all the files that they’d gotten had no encryption. These were people who knew exactly what investigators would be looking for, but you get lazy.

Corey: I’ve started treating at least the security credential side of doing weird things, even one-off bash scripts, as if they were in production. I stuff the credentials into something like AWS’s Parameter Store, and then just have a one-line snippet of code that retrieves them at runtime. Would it be easier to just slap it in there in the code? Absolutely, of course it would. But I also look at my newsletter production pipeline, and I count the number of DynamoDB tables that are in active use that are labeled Test or Dev, and I realize, huh, I’m actually kind of bad at taking something that was in Dev and getting it ready for production. Very often, I just throw a load at it and call it good. So, if I never get complacent around things like that, it’s a lot harder for me to get yelled at for checking secrets into Git, for example.
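
That one-line pattern looks something like this; the parameter path is hypothetical, and the secret itself lives as a SecureString in Parameter Store rather than in the script:

```sh
#!/usr/bin/env bash
# Pull a secret from SSM Parameter Store at runtime instead of hardcoding it.
# The parameter name is made up for illustration; --with-decryption is needed
# for SecureString values.
API_TOKEN="$(aws ssm get-parameter \
  --name /newsletter/prod/api-token \
  --with-decryption \
  --query 'Parameter.Value' \
  --output text)"
```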

Ell: Probably not the first time that you’ve heard this but, Corey, I’m going to have to go with you’re abnormal because that is not what we’re seeing in a day-to-day production environment.

Corey: Oh, of course not. And the reason I do this is because I was a grumpy old sysadmin for so long, and have gotten burned in so many weird ways of messing things up. And once it’s in Git, it’s eternal—we all know that—and I don’t ever want to be in a scenario where I open-source something and surprise, surprise, come to find out in the first two days of doing something, I had something on disk. It’s just better not to go down that path if at all possible.

Ell: Being a former sysad as well, I must say, what you’re able to do within your environment, your computer is almost impossible within a corporate environment. Because as a sysad, I’m looking at, “What did the devs do again? Oh, man, what’s the security team going to do?” And you’re stuck in the middle trying to figure out how to solve a problem and then manage it through that entire environment.

Corey: I never really understood intrinsically the value of things like single-sign-on, until I wound up starting this company. Because first, it was just me for a few years. And yeah, I can manage my developer environments and my AWS environments in such a way that if they get compromised, it’s not going to be through basic, “Oops, I forgot that’s how computers work,” type of moment. It’s going to be at least something a little bit more difficult, I would imagine. Because if you—all right, if you managed to wind up getting my keys and the passphrase, and in some cases, the MFA device, great, good, congratulations, you’ve done something novel and probably deserve the data.

Whereas as soon as I started bringing other people in who themselves were engineers, I sort of still felt the same way. Okay, we’re all responsible adults here, and by and large, since I wasn’t working with junior people, that held true. And then I started bringing in people who did not come from a deeply computer-y technical background, doing things like finance, and doing things like sales, and doing things like marketing, all of which are themselves deeply technical in their own way, but data privacy and data security are not really something that aligns with that. So, it got into the weeds of, “How do I make sure that people are doing responsible things on their work computers like turning on disk encryption, and forcing a screensaver, and a password and the rest.” And forcing them to at least do some responsible things like having 1Password for everyone was great until I realized a couple people weren’t even using it for something, and oh dear. It becomes a much more difficult problem at scale when you have to deal with people who, you know, have actual work to do rather than sitting around trying to defend the technology against any threat they can imagine.

Ell: In what you just said, though, there is one flaw: we tend to focus on, like you said, marketing and finance and all these organizations—don’t get phished, don’t click on this link—but we kind of just take it on faith that your security team, your sysads, your developers are going to know best practices. And then we focus on Windows because that’s what the researchers are doing. And then we focus on Windows because that’s what marketing is using, that’s what finance is using. So what, there’s no way to compromise a Mac or Linux box? That’s a huge, huge open area that you’re allowing for attackers.

Corey: Let’s be very clear here. We don’t have any Windows boxes—of which I’m aware—in the company. And yeah, the technical folk we have brought in—or at least the early folks—I’d worked with previously. And we had a shared understanding of security. At least we all said the right things.

But yeah, as you grow, as you scale, this becomes a big deal. And I also think there’s something intrinsically flawed about a model where the entire instruction set is that it all falls on you to not click the link or you’re going to doom us all. Maybe if someone can click a link and doom us all, the problem is not with them; it’s the fact that we suck at building secure systems that respect defense in depth.

Ell: Something that we do wrong, though, is we split it up. We have endpoint protection when we’re talking about, you know, our Windows boxes, our Linux boxes, our Mac boxes. And then we have server-side and cloud security. Those connect. Think about it: there’s a piece of malware called EvilGNOME. It gets in on a Linux box, and now someone has access to my camera, keylogging, and watching exactly what I’m doing. I’m your sysad. I cat out your SSH keys, they watch me do it, and now they have what they need to get into your box. But we don’t look for that. We just assume that those two aren’t really that connected, and that if we monitor our network and we monitor these devices, we’ll be fine. But we don’t connect the two pieces.

Corey: One thing that I did at a consulting client back in 2012 or so that really raised eyebrows whenever I told people about it was that we went to some considerable trouble building an allow list within Squid—a proxy server that those of us in Linux-land are all too familiar with in some cases—so everything in production could only talk to the outside world via that proxy; it was not allowed to establish any outbound connections other than through that proxy. So, at that point, it was only allowed to talk to specified update servers, specified third-party APIs, and the rest. At least in theory (I haven’t checked back on them since), I don’t imagine that the log4yay nonsense that we’ve seen recently would necessarily work there. I mean, sure, you have the arbitrary execution of code—that’s bad—but reaching out to random endpoints on the internet would not have worked from within that environment. And I liked that model, but oh my God, was it a pain in the butt to set up properly, because it turns out, even in 2012, just to update a Linux system reasonably, there’s a fair number of things it needs to connect to from time to time, once you have all the things like New Relic instrumentation in, and the app repository you’re talking to, and whatever container source you’re using, and, and, and. Then you wind up looking at challenges like, oh, I don’t know, if you’re looking at an AWS-style environment, like most modern things are: okay, we’re only going to allow it to talk to AWS endpoints. Well, that’s kind of the entire internet now. The goalposts move, the rules change, the game marches on.
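
The core of that Squid setup is only a few lines; the allowlist file path and its contents are illustrative, and a real deployment would also need rules for CONNECT tunneling, internal source ranges, and so on:

```
# squid.conf: sketch of a default-deny egress proxy. Everything not on the
# allow list is refused, so a compromised host cannot phone home to
# arbitrary endpoints.
acl allowed_dst dstdomain "/etc/squid/allowlist.txt"
http_access allow allowed_dst
http_access deny all
```

The allowlist file would then enumerate update mirrors, third-party APIs, and the like, one domain per line.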

Ell: On an even simpler point, with that you’re assuming only outbound traffic through those devices. Are they not connected to anything within the internal network? Is there no way for an attacker to pivot between systems? I pivot over to that, I get the information, and I make an outbound connection on something that’s not configured that way.

Corey: We had—you were allowed to talk outbound to the management subnet, which was on its own VLAN, and that could make established connections into other things, but nothing else was allowed to connect into it. There was some defense in depth and some thought put into this. I didn’t come up with most of this, to be clear; this was smart people sitting around. And yeah, if I sit here and think about this for a while, of course there are going to be ways to do it. This was also back in the days of doing it in physical data centers, so you could have a pretty good idea of what was connected to the outside world just by looking at where the cables went. But there was also always the question of: does this do what I think it’s doing, or what have I overlooked? Security’s job is never done.
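
For the curious, the host-level expression of that posture might look like the following iptables sketch, with a made-up 10.0.100.0/24 standing in for the management VLAN:

```sh
# Default-deny egress: drop all outbound traffic unless it is a reply to an
# established connection or destined for the management subnet. The subnet
# is hypothetical; a real box would also need rules for loopback, DNS, the
# proxy, and so on.
iptables -P OUTPUT DROP
iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -d 10.0.100.0/24 -j ACCEPT
```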

Ell: Or what was misconfigured in the last update. It’s an assumption that everything goes correctly.

Corey: Oh, there is that. I want to talk, though, about how the things I had to worry about back then seem in many cases to get kicked upstairs to the cloud providers that we’re using these days. But then we see things like Azurescape, where security researchers were able to gain access to the Azure control plane, where customers using Cosmos DB—Azure’s managed database service, one of them—could suddenly have their data accessed by another customer. And Azure is doing its clam-up thing and not talking about this publicly other than a brief disclosure, but how is this even possible from a security architecture point of view? It makes me wonder: if it hadn’t been disclosed publicly by the researcher, would they have ever said something? Most assuredly not.

Ell: I’ve worked with several researchers, in Intezer and outside of Intezer, and the amount of frustration that I see around responsible disclosure just blows my mind. You have somebody threatening to sue the researcher if they bring it out. You have a company going, “Okay, well, we’ve only had six weeks. Give us three more weeks.” And next thing we know, it’s six months.

There is just this pushback about what we can actually bring out to the public on why they’re vulnerable in organizations. So, we’re put in this catch-22 as researchers. At what point is my responsibility to the public, and at what point is my responsibility to protect myself, to keep myself from getting sued personally, to keep my company from going down? How can we win when we have small research groups and these massive cloud providers?

Corey: This episode is sponsored in part by something new. Cloud Academy is a training platform built on two primary goals: having the highest quality content in tech and cloud skills, and building a good community that is rich and full of IT and engineering professionals. You wouldn’t think those things go together, but sometimes they do. It’s both useful for individuals and large enterprises, but here’s what makes it new. I don’t use that term lightly. Cloud Academy invites you to showcase just how good your AWS skills are. For the next four weeks, you’ll have a chance to prove yourself. Compete in four unique lab challenges, where they’ll be awarding more than $2,000 in cash and prizes. I’m not kidding; first place is a thousand bucks. Pre-register for the first challenge now, one that I picked out myself on Amazon SNS image resizing, by visiting cloudacademy.com/corey. C-O-R-E-Y. That’s cloudacademy.com/corey. We’re gonna have some fun with this one!

Corey: For a while, I was relatively confident that we had things like Google’s Project Zero, but then they started softening their disclosure timelines and the rest, and the Full Disclosure security mailing list has been shuttered, to my understanding. Increasingly, it’s become risky—to yourself—to publish something that has not been patched and blessed by the providers and the rest. For better or worse, I don’t have those problems, just because I’m posting about funny implications of the bill. Yeah, worst case, AWS is temporarily embarrassed, and they can wind up giving credits to people who were affected and be mad at me for a while, but there’s no lasting harm in the way that there is with, well, people were just able to look at your data for six months, and that’s our bad, oops-a-doozy. Especially given the assertions that all of these providers have made to governments, to banks, to tax authorities, to all kinds of environments where security really, really matters.

Ell: The last statistic that I heard, and it was earlier this year, is that it takes over 200 days for a compromise even to be detected. How long is it going to take for them to backtrack and figure out how it got in? Have they already patched those systems, so that vulnerability is gone, but the attackers managed to establish persistence somehow? The layers that go into actually doing your digital forensics only delay how long it takes before they have any information to present to you. We keep going, “Oh, we found this vulnerability. We’re working on patches. We have it fixed.” But does every single vendor already have it patched? Do they know how it actually interacted within one customer’s environment that allowed that breach to happen? It’s just ridiculous to think that’s actually occurring, and that every company is now protected because that patch came out.

Corey: As I take a look at how companies respond to these things, you’re right: the number one concern most of them have is image control, if I’m being honest with you. It’s the reputational management of, we are still good at security, even though we’ve had a lapse here. Every breach notification starts out with, “Your security is important to us.” Well, clearly not that important, because look at the email you had to send. And it’s almost taken on aspects of a comedy piece where it drips with corporate insincerity. On some level, when you tell a company that they have a massive security vulnerability, their first questions are not about data privacy; it’s about how do we spin this to make ourselves come out of it with the least damage possible. And I understand it, but it’s still crappy.

Ell: Us tech folk talk to each other. When we have security and developers speaking to each other, we’re a lot more honest than when we’re talking to the public, right? We don’t try to hold that PR umbrella over ourselves. I was recently on a panel speaking with developers, head SRE folk—what else was there? I think there was a CISO on there—and one of the developers just honestly came out and said, “At the end, my job is to ask how much that breach is going to cost, versus how much money the company will lose if I don’t make that deployment.” The first thing that you notice there is the whole “how much money will the company lose” framing. The second part is: why is the developer the one weighing the cost of a breach?

Corey: Yeah. The work flows downward. One of the most depressing aspects to me of the CISO role is that it seems like the job is to delegate everything, sign binding contracts in your name, and eventually get fired when there’s a breach so your replacement can come in and sign different papers. All the work gets delegated; none of the responsibility does—unless you’re SolarWinds and try to blame it on an intern. I mean, I wish I had an ablative intern or two around here to absorb blame they don’t deserve, but that’s a separate argument. There is no responsibility-taking as I look at this. And that’s really a depressing commentary on the state of the world.

Ell: You say there’s no responsibility taken, but there is a lot of blame assigned. I love the concept of post-mortems for why a breach happened, but the only people in the room are the security team, as if they had that much control over anything. Companies as a whole need a scapegoat, and more and more, security teams are being blamed for every single compromise, even as more and more responsibility is piled on them while privileges and visibility into what’s going on are being taken away. Those two just don’t balance. And I think it’s causing a lot of complacency, and almost giving up, from our security teams.

Corey: To be clear, when we talk about blameless post-mortems for things like this, I agree with it wholeheartedly within the walls of a company. However, externally, as someone whose data has been taken in some of these breaches, oh, I absolutely blame the company. As I should, especially when it’s something like, well, we have inadvertently leaked your browsing history. “Why were you collecting that in the first place?” is sort of the next logical question.

I don’t believe that my ISP needs that to serve me better. But now you have Verizon sending out emails recently—as of this recording—saying that unless anyone opts out, all the lines in our cell account are going to wind up being data mined effectively, so they can better target advertisements and understand us better. It’s no, I absolutely do not want you to be doing that on my phone. Are you out of your mind? There are a few things in this world that we consider more private than our browsing histories. We ask the internet things we wouldn’t ask our doctors in many cases, and that is no small thing as far as the level of trust that we place in our ISPs that they are now apparently playing fast and loose with.

Ell: I’m going to take this step back because you do a lot of work with cloud providers. Do you think that we actually know what information is being collected about our companies and what we have configured internally and externally by the cloud provider?

Corey: That’s a good question. I’ve seen this before, where people will give me the PDF exploded view of last month’s AWS bill, and they’ll laugh, because what information could I possibly get out of that? It just shows spend on services. But I could use that to start sketching out a pretty good idea of what their architecture looks like from that alone. There’s an awful lot of value in the metadata.

Now, I want to be clear: I do not believe of any provider—except possibly Azure, because who knows at this point—that if you encrypt the data using their encryption facilities—with AWS, I know it’s KMS, for example—they can arbitrarily decrypt it and then scan for whatever it is they’re looking for. I do not believe that they are doing that, because as soon as something like that comes out, it puts the lie to a whole bunch of different audit attestations that they’ve made and brings the entire empire crumbling down. I don’t think they’re going to get any useful data from that. However, if I’m trying to build something like Amazon Prime Video and I can just look at the bill from the Netflix account, well, that tells me an awful lot about things that they might be doing internally; it’s highly suggestive. Could that be used to give them an unfair advantage? Absolutely.

I had a tweet a while back saying that I don’t believe Google’s Gmail division is scanning inboxes for things that look like AWS invoices to target their sales teams, but I sure would feel better if they would assure me that was the case. No one was ever able to assure me of that. I don’t mean to be sitting here slinging mud, but at the same time, given that when you don’t explicitly say you’re not doing something as a company, there’s a great chance you might be doing it, that’s the sort of stuff that worries me. It’s a bunch of unfair, dirty-tricks-style stuff.

Ell: Maybe I’m just cynical, or maybe I just focus on these topics too much, but after giving a presentation on cloud security, I had two groups, both, you know, from three letter government agencies, come up to me and say, “How do I have these conversations with the cloud provider?” In the conversation, they say, “We’ve contacted them several times; we want to look at this data; we want to see what they’ve collected, and we get ghosted, or we end up talking to attorneys. And despite over a year of communication, we’ve yet to be able to sit down with them.”

Corey: Now, that’s an interesting story. I would love to have someone come to me with that problem. I don’t know how I would solve that yet. But I have a couple ideas.

Ell: Hey, maybe they’re listening, and they’ll reach out to you. But—

Corey: You know, if you’re having that problem of trying to understand what your cloud provider is doing, please talk to me. I would love to go a little more in depth on that conversation, under an NDA or six.

Ell: I was at a loss because the presentation that I was giving was literally about the compromise of managed service providers, whether that be an outsourced security group, whether that be your cloud provider, we’re seeing attack groups going after these tar—think about how juicy they are. Why do I need to compromise your account or your company if I can compromise that managed service provider and have access to 15 companies?

Corey: Oh, yeah. It’s why would someone spend time trying to break into my NetApp when they could break into S3 and get access to everyone’s data, theoretically? It’s a centralization of security model risk.

Ell: Yeah, it seems to so many people like just this crazy idea: it’s so far out there, we don’t need to worry about it. I mean, we’ve talked about how Azure Functions has been compromised. We’ve talked about all of these cloud services that people are specifically going after and gaining traction in these attacks.

It’s not just this crazy idea. It’s something that’s happening now, and with the progress that attackers are making, criminal groups are making, this is going to happen pretty soon.

Corey: Sometimes when I’m out for a meal with someone who works with AWS in the security org, there’ll be an appetizer where, “Oh, there’s two of you. I’m going to bring three of them,” because I guess waitstaff love to watch people fight like that. And whenever I want the third one, all I have to do is say, “Can you imagine a day in which, just imagine hypothetically, IAM failed open and allowed every request to go through regardless of everything else?” Suddenly, they look sick, lose their appetite, and I get the third one. But it’s at least reassuring to know that even the idea of that is that disgusting to them, and it’s not the, “Oh, that happened three weeks ago, but don’t tell anyone.” Like, there’s none of that going on.

I do believe that the people working on these systems at the cloud providers are doing amazingly good work. I believe they are doing far better than I would be able to do in trying to manage all those things myself, by a landslide. But nothing is ever perfect. And it makes me wonder that if and when there are vulnerabilities, as we’ve already seen—clearly—with Azure, how forthcoming and transparent would they really be? And that’s the thing that keeps me up at night.

Ell: I keep going back to this talk, but the interaction with the people there and the crowd was just so eye-opening. And I don’t want to be that person, but I keep getting to these moments of, “I told you so.” And I’m not going to go into SolarWinds—Lord, that has been covered—but shortly after that, we saw the same group going through and trying to—I’m not sure if they successfully did it, but they were targeting networks for cloud computing providers. How many companies looked beyond that compromise at that moment to see what it was going to build out to?

Corey: That’s the terrifying thing: if you can compromise a cloud service provider at this point, well, you could sell that exploit on the dark web to someone. If you can get remote code execution and be able to look into any random cloud account, there’s almost no amount of money that is enough for something like that. Think of the insider trading potential of just compromising Slack. A single company, but everyone talks about everything there, and Slack retains data in perpetuity. Think of the sheer M&A discussions you could dig up. Think of what you could figure out with a sort of God’s-eye view of something like that, and then realize that they run on AWS, as do an awful lot of other companies. The damage would be incalculable.

Ell: I am not an attacker, nor do I play one on TV, but let’s just, kind of, build this out. If I was to compromise a cloud provider, the first thing I would do is lay low. I don’t want them to know that I’m there. The next thing I would do is start getting into company environments and scanning them. That way I can see where the vulnerabilities are, I can compromise them that way, and not give out the fact that I came in through that cloud provider. Look, I’m just me sitting here. I’m not a nation state. I’m not somebody who is paid to do this from nine to five, I can only imagine what they would come up with.

Corey: It really feels like this is no longer a concern just for those folks who have managed to get on the bad side of some country’s secret service. It seems like APTs, Advanced Persistent Threats, are now theoretically something almost anyone has to worry about.

Ell: Let me just set the record straight right now on what I think we need to move away from: the whole idea that APTs are nation-states. Not anymore. An APT is anyone who has advanced tactics, anyone who’s going to be persistent—because you know what, it’s not that they’re targeting you; it’s that they know they eventually can get in. And of course, they’re a threat to you. When I was doing my research into Advanced Persistent Threats, we had a group named TeamTNT that said, “Okay, you know what? We’re done.”

So, I contacted them and I said, “Here’s what I’m presenting on you. Would you mind reviewing it and telling me if I’m right?” They came back and said, “You know what? We’re not an APT because we target open Docker API ports. That’s how easy it is.” So, these big attack groups are not even having to rely on advanced methods anymore. The line on what counts as advanced is just completely blurring.
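
To make concrete how low that bar is: a Docker daemon exposed on its conventional unencrypted TCP port will answer an unauthenticated HTTP request. This is a defensive self-check to run against your own hosts, with YOUR_HOST as a placeholder; any JSON reply means the engine API is wide open:

```sh
# If this returns version JSON, the Docker Engine API is reachable without
# authentication, which is exactly the open door groups like TeamTNT scan for.
curl -s --max-time 5 http://YOUR_HOST:2375/version
```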

Corey: That’s the scariest part to me is we take a look at this across the board. And the things I have to worry about are no longer things that are solely within my arena of control. They used to be, back when it was in my data center, but now increasingly, I have to extend trust to a whole bunch of different places. Because we’re not building anything ourselves. We have all kinds of third-party dependencies, and we have to trust that they’re doing the right things as they go, too, and making sure that they’re bound so that the monitoring agent that I’m using can’t compromise my entire environment. It’s really a good time to be professionally paranoid.

Ell: And who is actually responsible for all this? Did you know that 70% of the vulnerabilities on our systems right now are at the application level? Yet security teams have to protect against them. That doesn’t make sense to me at all. And yet, developers can pull in any third-party repository they need in order to make that application work because, hey, we’re on a deadline. That function needs to come out.

Corey: Ell, I want to thank you for taking the time to speak with me. If people want to learn more about how you see the world and what kind of security research you’re advocating for, where can they find you?

Ell: I live on Twitter to the point where I’m almost embarrassed to say, but you can find me at @Ell_o_Punk.

Corey: Excellent. And we will wind up putting a link to that in the [show notes 00:35:37], as we always do. Thanks so much again for your time. I appreciate it.

Ell: Always. I’d be happy to come again. [laugh].

Corey: Ell Marquez, security research advocate at Intezer. I’m Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you’ve enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you’ve hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry comment that ends in a link that begs me to click it that somehow it looks simultaneously suspicious and frightening.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.
