Scott Piper is an AWS security consultant at Summit Route, a company he founded in 2017. He’s also the developer of flaws.cloud and an organizer for the virtual fwd:cloudsec conference. Scott brings 15 years of tech experience to his current position, having worked as director of security at a cybersecurity company, a security engineer at Yelp, and a software engineer at the NSA, among other positions. Join Corey and Scott as they talk about how Scott created a game to help teach people AWS security; how Scott likely got a red flag thrown on his account indicating he’s a hassle to deal with; what fwd:cloudsec is, why it was named the way it was, and how it came about; some of the reasons why virtual conferences are better than in-person conferences; why in-person conferences likely aren’t coming back anytime soon; what Scott thinks AWS does well and what he thinks AWS does not do well; what Scott believes the best security boundary on AWS is; and more.
Episode Show Notes & Transcript
About Scott Piper
Scott is an independent consultant helping companies secure their AWS environments through private trainings. He created the free training sites flaws.cloud and flaws2.cloud, along with the open-source projects CloudMapper, Parliament, and more.
- Connect with Scott Piper on...
- Twitter: @0xdabbad00
- Company website: Summit Route
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Cloud Economist Corey Quinn. This weekly show features conversations with people doing interesting work in the world of Cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
This episode is sponsored by our friends at New Relic. If your environment is like most, you probably have an incredibly complicated architecture, which means that monitoring it is going to take a dozen different tools. And then we get into the advanced stuff. We all have been there and know that pain, or will learn it shortly, and New Relic wants to change that. They’ve designed everything you need in one platform with pricing that’s simple and straightforward, and that means no more counting hosts. You also can get one user and a hundred gigabytes a month, totally free. To learn more, visit newrelic.com. Observability made simple.
Corey: When you think about feature flags—and you should—you should also be thinking of LaunchDarkly. LaunchDarkly is a feature management platform that lets all your teams safely deliver and control software through feature flags. By separating code deployments from feature releases at massive scale—and small scale, too—LaunchDarkly enables you to innovate faster, increase developer happiness—which is more important than you’d think—and drive transformation throughout your organization. LaunchDarkly enables teams to modernize faster. Awesome companies have used them, large, small, and everything in between. Take a look at launchdarkly.com, and tell them that I sent you. My thanks again for their sponsorship of this episode.
Corey: Welcome to Screaming in the Cloud, I'm Corey Quinn. I'm generally the person that people think of when there's an AWS billing problem, but when there's an AWS security problem, the one person I think of before anyone else is AWS Security Consultant Scott Piper. Scott, welcome to the show.
Scott: Thanks for having me, Corey.
Corey: It's been fascinating, just sort of, I guess, passing like ships in the night for the last, well, three and a half, four years or so. You're an independent consultant, a one-man band—like I was for the first two years before I had the good sense to hire someone whose primary language is spreadsheets—but it's been really interesting seeing you grow and evolve. And, honestly, you have actual expertise in the whole security space, whereas with billing, I mostly faked it for a while.
Scott: Yeah. And I faked it myself for a while because I did not come in with strong AWS experience at all. At a previous job, I was trying to wear a lot of hats: I was the sole security person at a startup, and as a result, I was doing not only our CloudSec but also our AppSec, our CorpSec, our physical security—badge readers, surveillance cameras—just every aspect of our security in different ways. And doing all of it poorly, especially on the cloud security side; that was an area where I felt I was very weak. I didn't have a lot of experience in it, and I didn't really know what my concerns should be there.
And so I started to really just try to understand what are the past incidents that have happened in AWS? And what are the things that I want to make sure that our DevOps guy is aware of when he's trying to build out our AWS infrastructure? And as, kind of, a challenge to myself, I figured, “Hey, I'll turn this into kind of a training program, kind of a game, that I can make online and available to everybody all the time.” And ended up releasing that as flaws.cloud. And so that is available even today. And we're not—
Corey: Yeah, I’m going to just put a plug in right there for that. flaws.cloud is one of the foundational learn-as-you-go exploration stories for learning by doing. An adventure game is probably the best way I can think of to describe it: there's an escalating series of open S3 buckets and things like that, where you go from level to level. It's, what, seven levels or something like that?
Scott: I think five levels for that one. And then I ended up creating flaws2.cloud a number of years later. But the initial flaws.cloud, I released it, and I figured if I release it on the right day on Twitter, maybe a dozen people will come and check it out.
And instead in the first month, 30,000 people visited the site. And so I was just floored. “Like, oh my gosh, there's something here.” Like I didn't know very much about AWS security, and apparently, a lot of other people out there are interested but don't know very much either, and are trying to learn about it. And so—
Corey: Oh, I’ll take a step further than that. The reason I think that what you've built is so compelling is that it's real world. It shows what is going on and how this is supposed to work. But if you look across the entire landscape, every security story out there is, one, pushed by a vendor trying to sell a product of some sort, and two, boring as hell. It’s, “Sit down and learn how this whole nonsense works for this fixed period of time.” I have to ask though, given that flaws.cloud and flaws2.cloud are both operating intentionally vulnerable environments, how many freaked out phone calls or emails have you gotten from the AWS security folks over the years for this?
Scott: Yeah, so I have received a number of emails, especially when, as part of flaws.cloud, I give people access to an AWS access key in my environment. And so, that access key has found its way onto a number of GitHub repos over the years, of people creating their own little test utilities, and they needed an access key, and they didn't want to use one in their own environment, so they just grabbed mine and put it in a GitHub repo somewhere. So, as a result, I get a number of emails along those lines. Eventually, AWS told me that I had to change the access key. And I told them, “No, I'm not going to do it. I've changed it a bunch of times already. This is just annoying. I know that it's not a security issue.” And so eventually, they somehow put some type of flag on my account to say, like, this guy is just a hassle to deal with. Just stop reaching out to him all the time.
Corey: I suspect that I have that flag, too. I'm sure it's something obscene as far as the naming convention goes. Yeah, that's my approach, too, very often. When they release something, like the original version of API Gateway, that is so convoluted to configure, I'll just bind an access key to that service, throw it on the internet, and then see what attackers do with it. “Oh, that's how it gets configured. Awesome.” I'm mostly kidding, but also not entirely.
Scott: Yep. [laugh].
Corey: Sometimes you learn by watching people break or misuse something in fascinating ways. I want to also highlight that it seems like you have aspects of the same problem that I do, and before you take that as a deadly insult, let me be more specific. It feels like if I ask you for the elevator pitch of, “What is it, you do exactly?” You've got to sabotage the elevator because it's not just the independent security stuff; it's not just the tool stuff; you're also the creator behind fwd:cloudsec. What is that?
Scott: So, fwd:cloudsec came about as a result of a number of security researchers and other security folks having attended AWS’s re:Inforce, which was the security conference they started in 2019. We attended that, and there were some frustrations that we had with it. Specifically, we recognized that a lot of people are running on multiple clouds—whether they want to or not, whether they know it or not, they are running in many cloud environments—and obviously, AWS’s security conference is only going to be about AWS. So, that was one aspect.
Another aspect is that at AWS’s conferences, because it's their conference, they're going to control the message and make sure that AWS and the features and services they release are always viewed in the best light; they don't want to talk about the limitations so much. And that, really, as practitioners, is what you're most interested in: does this work in all regions? Does it have these various integrations? Does it have CloudFormation support? All those different aspects of it.
And so we wanted to make sure that we had a conference or a platform where we could talk about a lot of those things. So, there's that. We wanted to be able to talk about attacks because we didn't want to just talk about, “Hey, here are some features on AWS that you can use to prevent security misconfigurations.” We wanted to dive into, what are those misconfigurations? What are attackers doing?
What beyond just a technical fix is something that people could try to use to mitigate these issues? So, we wanted to dive into all of that. And then, finally, AWS conferences are very large, and just meeting people at them can take a long time when you have to walk from one end of a conference center to the other. And so we wanted a smaller place where we could have a lot of the hallway conversations to talk about things. And so as a result of all of that, we ended up creating fwd:cloudsec to basically become a cloud security conference for practitioners that focused on all clouds and was able to dive into the limitations, the attack research, the other types of defenses that can be used, all those different types of things.
And then on top of that, a number of the organizers, they really wanted to make this have benefits for the greater community as well. And so it's a nonprofit. And so as a result of that, when—in 2020, we originally planned on having an in-person conference in June. Obviously, that did not work out due to this pandemic that has happened. But when we were going to have that in-person conference, we had planned on having basically scholarships for college students that couldn't otherwise afford to attend some of these conferences.
Corey: If there's one thing I'm taking away from the pandemic, it’s that the experience is so much better when you're not limiting these conferences to the folks who can get a week off of work, travel to Las Vegas, put themselves up there, and pay the $2,000 ticket fee. There's just so much else that it could be. And I am a huge fan of just that entire model. I'm also a huge fan, by the way, of you following AWS’s wonderful footsteps when it comes to naming things, and naming fwd:cloudsec after an email subject line.
Scott: Yep. That is what we decided to do with the name. We were throwing around a whole bunch of different names, and we're like, “You know what? Let's just make fun of AWS with our name.” And so, with AWS using ‘re’ for everything—re:Inforce, re:Invent, re:MARS—we decided to use the other email subject header: forward, F-W-D colon.
Corey: Yeah, I am really looking forward to next year’s. The challenge was I believe the first year of this conference was co-located with re:Inforce, wasn't it?
Scott: Yeah. So, re:Inforce was supposed to have been in Houston this past year. And, you know, it was canceled. And so we decided, though, to continue to move forward with things. And so, we had it as a virtual conference.
And we originally had it planned to be in Houston; it was going to be the day before re:Inforce. And so we're currently trying to figure out what to do for next year, because the next re:Inforce has not been announced yet. We don't know what city it will be in, what date, or anything like that, but we have made a couple of decisions upfront, one of them being that we do want it to be streamed live the day of the conference, because we recognize—going back to your earlier point—that one of the benefits of the virtual conferences is that anybody can have access to the conference in some way. And we don't know how vaccines are going to play out; we don't know whether or not we'll be able to have an entirely in-person conference. Our hope is that we will, but again, there's a lot of unknowns there.
Corey: It’s also the last sort of thing that's going to come back. It's, all right, I'm going to take a risk now and go out to a restaurant or get my haircut; but traveling to a different city, basically to sit in a vendor expo for two days and wind up effectively sharing air with 20,000 people? It feels like on some level, it's, “So, which of our staff are we sending there?” “Oh, the expendable ones, of course.”
Scott: [laugh]. Yeah. And so even in the best-case scenario where we're able to have an in-person conference, we still recognize that the people who are able to attend are going to be people from countries that have access to the vaccine—not all countries do—and people who can probably make a plane purchase within a short timeframe based on that decision-making, and so potentially spend more money on that flight. And so as a result of all that, we want to make sure that fwd:cloudsec is accessible to people all around the world, no matter what their current economic situation is or whether or not they have access to the vaccine. So, that is one of the decisions we have made for it. But a lot of the things are still up in the air.
Corey: One thing I've really come around to is the idea that with online conferences, I love the idea of live streaming the talk, but I feel like those talks should generally be pre-recorded. Whenever you do them live, it feels like you're, one, taunting the demo gods, which never goes well. But what I've also really enjoyed is participating in the live chat Q&A as a part of whatever conference program you're using, and answering questions on the fly as you go. Or if you're me, slash psychotic, live-tweeting your own talk.
Scott: Yeah, and I've seen some amazing conference talks this year that had been pre-recorded and professionally edited. They had cutscenes back and forth between demos of things and actual physical demonstrations as well. So, yeah, those are all things that we're still trying to figure out how to make work in some way, especially given that there are so many unknowns as to how this next year is going to play out.
Corey: Oh, absolutely. And again, I think that people are extraordinarily patient when it comes to these sorts of things. Do you have any idea when the call for papers is going to open?
Scott: So, we still haven't settled on a date for the conference, specifically, but we expect probably in the next maybe two months.
Corey: For those listening, we are recording in the very last days of the wonderful year 2020. So, yeah, it's always interesting when people listen to these things, and it's a point in time, and sometimes I embarrass myself. Wow, you recorded that episode six months ago. What's the deal with that? And the answer is, generally, legal review. But I digress.
Scott: So, probably February of 2021. But again, we still may change that, and we're still trying to make some decisions on things.
Corey: Absolutely. I do want to get your take, as a security expert in all things AWS—largely self-appointed, but again, it’s not like there’s a certification board for these things, and I've seen enough of your work to say that I unreservedly trust you. When you tell me something is true in the world of security, I take that at face value because I've yet to see you proven wrong.
Scott: Thank you. [laugh].
Corey: My position on this—talk about saying controversial things that get people in trouble down the road when this is played back, probably by a client of yours—but my take on the shared responsibility model, which is AWS's overly complicated way of saying, “Here's what the cloud provider worries about versus here's what the customer is responsible for,” is that it's basically an overwrought song-and-dance, because the answer actually fits in a tweet. But try sitting down with someone who's just suffered a breach and telling them the truth of it, which is, “If you get breached in the Cloud, it is almost certainly your fault.”
Corey: “You messed up.” That's all it says, because the breaches are not people driving trucks into data centers, loading racks into the back, and peeling out. It's misconfigurations of S3 buckets. It's, oh, it turns out ‘kitty’ was a terrible password. It turns out the OWASP Top 10 hasn't really changed its list of top ten security vulnerabilities in web apps in the past decade because people still aren't sanitizing their inputs, or cross-site scripting; what does that mean exactly?
It's always the same old stuff and there's nothing new under the sun. But it's not oh, the cloud provider forgot to wipe the disk volume after you were done with it and present it to someone else. They have those operational aspects down to a science.
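For readers who want the concrete version of the “sanitizing their inputs” point above: the standard fix is parameterized queries, where user input is passed to the database driver as data rather than spliced into the SQL string. A minimal sketch, using Python's built-in sqlite3 as a stand-in for a web app's database (the table and the ‘kitty’ password are illustrative):

```python
import sqlite3

# In-memory database standing in for a real application's backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'kitty')")

user_input = "' OR '1'='1"  # classic SQL injection payload

# Vulnerable: user input spliced directly into the SQL string, so the
# payload rewrites the query logic and matches every row.
vulnerable = conn.execute(
    f"SELECT name FROM users WHERE password = '{user_input}'"
).fetchall()

# Safe: the driver binds the input as a parameter, so it is compared as
# a literal string and never interpreted as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE password = ?", (user_input,)
).fetchall()

print(vulnerable)  # → [('alice',)]
print(safe)        # → []
```

The same principle applies to the cross-site scripting item Corey mentions: treat user input as data (escape it for the output context) rather than letting it become code.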
Scott: Yeah, so the shared responsibility model really comes down to: anything that you can secure is your responsibility to secure. Any type of configuration change that you can make is your responsibility to make. With the shared responsibility model, though, the confusion and frustration come down to the fact that there are some things that AWS does very, very well: they are able to operate this amazing cloud infrastructure that very rarely goes down—it has had some definite hiccups, but it tends to stay up—it scales fairly well, they have backwards compatibility; there are a number of things they do well. But then there are some things they don't do well, such as having good user interfaces, for example, or helping you better understand your cloud environment in different ways.
And those limitations, I think, are where a lot of these security issues come into play: people don't understand their cloud environments as well as they would like to, and AWS is not really helping people understand their environments that well. So, that is where I think a lot of the misconfigurations come from. There are some other aspects of this issue, too; for example, a number of their security services don't have as broad a coverage as we would like. So, for example—
Corey: Oh, I’ll take it a step further; a lot of the security services suck.
Corey: Okay, we're going to put all this stuff into CloudTrail logs, but no one ever reads them, so we're going to consume them with GuardDuty. Oh, and that's super noisy too, so we're going to build Detective on top of that. Now, they're charging you all the way up this ladder, and at some level, you look at all the security offerings that they have—and I've looked at some of the big consultancies' security architectures for all this stuff—and I'm looking at this because I focus on the world of billing. And I am almost certain that a data breach would be less expensive than running these services.
Scott: [laugh]. Yeah, it can get pretty crazy. And there are these various aspects of their services that really are AWS’s own responsibility to improve them. So, you brought up CloudTrail, for example; there are a number of API calls on AWS that are not recorded in CloudTrail anywhere. So, a number of these are going to be your data-related calls.
And so there are configuration changes that you can make to CloudTrail to be able to see S3 object accesses, but, for example, CloudWatch's PutMetricData: that call is not recorded anywhere; you have no ability to see that call anywhere. So, as a result, AWS's guidance on implementing a least-privilege strategy becomes difficult to follow, because one way of accomplishing that is to look at your historical access and remove privileges that have not been used. But because a number of those actions are not recorded anywhere, you have no way to know whether those privileges have actually been used. Furthermore, you can start using some more advanced concepts, like client-side monitoring, where you can flip some environment variables in your application and get a local log of some of the actions that are made. However, that recording of events does not include what resources were accessed; it only records what API calls were made.
And so as a result, if you were to leverage client-side monitoring to try and implement a least-privilege strategy, you would not be able to restrict down to specific resources or apply certain conditions, because you could only restrict down to the specific actions that are made. So, yeah, there's a number of limitations that I do think AWS still needs to improve on themselves.
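For reference, the client-side monitoring Scott describes is enabled purely through environment variables; no code changes are needed. A minimal sketch of turning it on from Python (the host and port shown are the SDK defaults):

```python
import os

# Client-side monitoring (CSM): the AWS SDKs check these environment
# variables and, when enabled, emit one JSON event per API call as a
# UDP datagram to the given host/port. Any boto3 client created after
# this point in the process will emit them.
os.environ["AWS_CSM_ENABLED"] = "true"
os.environ["AWS_CSM_HOST"] = "127.0.0.1"  # SDK default
os.environ["AWS_CSM_PORT"] = "31000"      # SDK default

# Each event names the Service and Api that were called (for example,
# "CloudWatch" / "PutMetricData") but carries no resource ARNs, which
# is exactly the least-privilege limitation described above.
print(f"CSM -> udp://{os.environ['AWS_CSM_HOST']}:{os.environ['AWS_CSM_PORT']}")
```

A collector listening on that UDP port can then aggregate the events into a list of actions actually used, even the ones CloudTrail never records.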
Corey: I would absolutely agree with that. The problem is, I feel like security and cost are spiritually aligned, insofar as people really care about them only after they didn't care about them and now they have egg on their face. I'm fortunate on my side of the world where it's just, “Well, we spent a little bit too much money that particular month and now we feel bad.” As opposed to security where it's, “So, what is your primary means of breach detection?” And the answer honestly is, “The front page of the New York Times.”
Scott: Yep, that or their AWS bill, because there are a number of attacks on AWS that result in massive bills. The most common one is just going to be cryptocurrency mining. If you put an access key up on GitHub, the first thing that's going to happen—well, there's a race that happens: can AWS alert you about this problem, and can you take action on it, before something bad happens? Because there are a number of bots continuously monitoring GitHub, and they will find that access key and spin up EC2 instances in your environment to do cryptocurrency mining. But beyond that, there's also the concept of denials—
Corey: [00:20:42 crosstalk] the only thing you have for a complete inventory. Periodically, I run into scenarios with smaller companies where, “Okay, so tell me about those instances in Australia?” “Oh, we don't have anything in that region.” I believe you are being sincere when you say this. However, somewhat paradoxically, above a certain point of scale, you can't really notice those breaches anymore via the bill. I mean, if you're spending, I don't know, $18 million a month on AWS, that's an awful lot of Bitcoin you have to mine.
Scott: Yep. [laugh].
Corey: It just disappears into the rest of the noise.
Scott: And so yeah, trying to create billing alarms for things works when you have a free-tier account, to some degree; it is not going to work when you are actively spending a lot of money otherwise. So, doing that type of monitoring is difficult. But I want to touch a little bit on the concept of denial-of-wallet attacks, which really didn't occur in the data center world but are now an opportunity for attackers in the cloud world. What that really means is that if you were someone who didn't like a company for some reason, previously you could have DDoS’ed them; you could have tried to send a lot of bandwidth over to that company to shut down their servers because they're not able to keep up with the amount of traffic you're sending them. But in AWS, and in the cloud world across the cloud providers, you now have the ability to increase that company's amount of spend on AWS.
And so if you are able to get access to an access key, you can spin up some of these resources or start making some of these very expensive AWS calls, such as reserving instances for the next three years with SQL Server licensing or some other licensing option on them. And suddenly, yeah, you can actually burn—I think there's a single AWS call you can make that will cost a company $64 million, just because you're spinning up a whole bunch of licensed resources all at once.
Corey: Single API call? That much? I think that's right around the cap of the default console limits for maxing out savings plans.
Scott: [laugh]. Yeah, there's a lot of these different opportunities. There's the possibility of using the AWS Marketplace to purchase something from one of the vendors there, or the opportunity to commit various types of white-collar crime: if you were to create your own AWS Marketplace offering and then, from the company that you work for during the day, make a purchase of it, suddenly you're able to make side cash that the company probably isn't going to be very aware of, just because you are purchasing something from a vendor who happens to be yourself, moonlighting. So, there are all those types of things that can potentially happen on AWS, and all the cloud providers.
Corey: Oh, absolutely. I don't think that AWS is particularly vulnerable. If anything, I would argue their security posture is the best of all the major cloud providers, though I know that GCP would dispute that point strenuously. Where do you stand?
Scott: So, I am a big advocate of AWS. On Twitter, I will make fun of them all day; I will call them out on every single minor mistake that they make—
Corey: That’s why we get along so well.
Scott: [laugh]. But at the end of the day, I advocate to everybody to use AWS. I use it for all of my personal things; everything in my life is backed up on AWS in some way. I trust AWS. And if you look at it, there are a number of government agencies running on AWS in different ways.
Some of them have that special, fancy isolated partition that is not connected to the internet and is used for classified information. AWS and Amazon are able to secure things well, and part of that is just the economies of scale. They are receiving tons and tons of money from customers, and as such, they can afford to have their own DDoS response teams; they can have secure enclaves—their Nitro Enclaves—and all sorts of different features; their automated reasoning, for example. These are things that you cannot do on your own. And so I do recommend people use AWS, even though I do give them a hard time.
Corey: This episode is sponsored by ExtraHop. ExtraHop provides threat detection and response for the Enterprise (not the starship). On-prem security doesn’t translate well to cloud or multi-cloud environments, and that’s not even counting IoT. ExtraHop automatically discovers everything inside the perimeter, including your cloud workloads and IoT devices, detects these threats up to 35 percent faster, and helps you act immediately. Ask for a free trial of detection and response for AWS today at extrahop.com/trial.
Corey: I agree with you wholeheartedly. I think that you might have a more fraught relationship dynamic, just because the best approach from the cloud provider perspective when someone discovers a security issue is, “Cool. Could you tell us very quietly? We will never confirm or deny anything there. We will quietly fix it, and you will go away forever.”
Which is kind of not how you build a reputation for excellence in this space. So, there's that constant tension. I gave a talk at re:Invent last year—two years ago now, I guess, since this is going to be 2021 when people are listening—about the vulnerability disclosure program that AWS runs and how they do it. It was a fantastic story based upon some stuff that we collaboratively found. And the thing that surprised me was, every time I find other stuff in the billing area, or something that doesn't make sense, they're super friendly and thrilled for me to bring it to their attention. They try not to talk much about anything that's even vaguely security-related, if they can help it.
Scott: Yeah. And for those types of issues, I mean, it makes sense then, but at the same time, like, for me, it can be frustrating at times. And I see that from the community, as well. I mean, we are each in a position where I think a lot of people send us DMs about a lot of different things. We get a lot of emails from people privately asking us, telling us different things.
And one of those things I sometimes get messages from people about is security issues with AWS that they will report to me prior to reporting to AWS, in the expectation that I have some ability to fix it or something like that. Or they just want my advice; they're concerned because there are a number of vendors out there that take legal action against security researchers, and so they're worried about how that's going to play out if they communicate these issues to AWS. And so I will say that, in my experience, AWS does not take legal action against people—I've never heard of them doing this. Obviously, that's going to depend on how they’re—
Corey: No, they tend to reserve that for their employees as best I can tell.
Scott: Yeah, [laugh] yeah. I mean, it's going to depend on how you're finding these issues, and where you're finding them, and what you're doing. There is an extent to the leniency of what they can do. But yeah, I mean, I think that AWS is very good about interacting with researchers, and that's especially been true because of Zack Glick, specifically, over there at AWS.
Corey: Oh, he's amazing. He was my co-presenter.
Scott: Yeah. And he is kind of the liaison between security researchers and AWS. A couple of years ago, he moved into that position of becoming that liaison specifically because AWS was falling flat on responding to security issues. Specifically, I had found, for example, some issues with AWS managed policies—policies that were documented by AWS and that customers were advocated to use—and there were a number of flaws in them that kept those policies from working the way that was expected. And I tried reporting that to AWS repeatedly, trying to get their attention; I couldn't get their attention, and finally was lucky enough to be working with a client at the time that was spending enough money with AWS that they basically told AWS, “Hey, you need to respond to these issues. These are actual problems, and we're concerned about your security posture if you're not resolving them.” And so as a result of that, AWS now responds much better to security researchers.
Corey: They do. For a while, they had specific pen testing requirements or disclosures for port scanning and the rest, and they've really loosened that, which is nice. There's something to be said for not getting in the way.
Scott: Yep. And those—oh, those pen testing requirements were impossible to try and follow in different ways, just because you were supposed to tell them which EC2 instances were going to be pen tested, which isn't going to work if you have auto-scaling groups that are spinning up and spinning down instances all the time.
Corey: I'm going to be doing—oh, nevermind, they're gone. Just kidding, they're back.
Scott: Yeah, it was a mess. So, it is good that they removed some of those requirements. They still do have some pen testing requirements, and still, some services are supposed to be off-limits. However, I've never heard of anybody getting in trouble for doing pen tests in different ways against these services. I don’t—I’m not going to—
Corey: No, I imagine if anything, they wind up IP blocked or, at worst case, they get a phone call or an email of, “Would you mind knocking that off?”
Scott: I mean, I haven't heard of them doing any of those things. Just because, again, a pen test that you're performing on AWS is not going to be any worse than the day-to-day operations of a lot of companies out there. And also, a lot of AWS services are publicly accessible and are constantly getting free pen tests that aren't being reported in any way, just because there are attackers out there that are constantly port-scanning, constantly trying to brute-force or break into different services.
Corey: One question I have for you, I've been seeing all the announcements about it, and I feel like I’ve almost been gaslit into believing that I'm the one that's looking at this the wrong way, but they've made a strong push towards attribute-based access control, primarily tags, and that terrifies me because historically, for the last 15 years, everyone gets access to tag everything because that was the way—the only way you could reasonably do cost allocation. Suddenly, if you go down that path, everything with tagging permission across your entire estate now becomes a massive security vector. Am I wrong on that?
Scott: So, historically, the best and really only security boundary on AWS was having separate AWS accounts. And people realized that they have these large monolithic accounts, and they want to try and segment access in some way. And so the concept of attribute-based access control came about to allow people to tag resources, and to only allow certain services or certain developers to work on those resources that are tagged in a certain way. There are a number of limitations with this, though.
The biggest one that people ran into is just that a number of resources didn't support tagging at all, anywhere. And so AWS has gotten better about that; however, there are still a number of resources that don't support tag-on-create. And so even though they may support tags, you cannot restrict what tags someone can use when they're creating that resource.
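The pattern Scott is describing can be sketched as an IAM identity policy that only allows launching an instance when the request tags it appropriately. This is a minimal illustration, not a complete policy: the `Team` tag key and its use here are assumptions, and in practice `ec2:RunInstances` also needs permissions on the AMI, subnet, security group, and volume resources.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "RunInstancesOnlyWithOwnTeamTag",
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "StringEquals": {
          "aws:RequestTag/Team": "${aws:PrincipalTag/Team}"
        },
        "ForAllValues:StringEquals": {
          "aws:TagKeys": ["Team"]
        }
      }
    }
  ]
}
```

For a service that doesn't support tag-on-create, the `aws:RequestTag` condition has nothing to match against, which is exactly the limitation being discussed.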
Corey: I think tag-on-create for Elastic IP addresses was only added during re:Invent this year, or last year, which is just… what is that?
Scott: There are still a number of them, and it's just kind of mind-boggling that they are trying to push this concept, and yet they don't provide you with the ability to really utilize it except for a limited number of use cases. So, as a result of that, the best security boundary on AWS still remains having separate AWS accounts. However, the other big limitation of attribute-based access control is the lack of tooling around it. So, if you want to understand who has access to a certain S3 bucket or other resource on AWS, it's difficult to figure that out. I mean, you can use tools like Access Analyzer to figure out, generally, which AWS accounts have access, and whether it's public or not, but if you want to identify the specific IAM role, and when those roles have different restrictions based on tags or other conditions and things like that, it becomes really complicated to figure that out.
So, I think that is one of the big issues: the lack of tooling. It's hard to understand this, and it's hard to audit those different policies that you may have in some way. So, that all becomes just a big frustration. So, I currently still do not advocate that people attempt, really, to use attribute-based access control except for maybe a few limited situations, or unless that's all they can do because they have—
Corey: Or it’s greenfield, potentially. But what's truly greenfield in the world of identity and access control? It's a company that just started.
Scott: But even then, the best practice still is to have multi-account, to have different accounts for your different applications, and to try doing things that way. But then again, you run into another problem, which is that as you end up with this large number of AWS accounts, people start connecting those AWS accounts in different ways. They create these trust relationships via IAM roles, via S3 bucket policies, between your AWS accounts. And so your security boundaries start becoming blurred, or those security boundaries basically get erased, because now you can have two AWS accounts which really become equivalent: if someone was to compromise one account and obtain admin privileges in it, they can then assume into an admin role inside the other account. So, again, that is another problem that we're starting to run into: how do you understand the relationships between your different accounts?
What are the trust connections that exist between them? And I think that is another area that, one, I would really like to see AWS do more in that area, but, two, I think that that's an opportunity for people to create different types of tooling around that.
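The kind of trust relationship Scott describes can be as simple as a role trust policy like this sketch (the account IDs are placeholders): once an admin role in account 111111111111 trusts account 222222222222's root, any principal in the second account with `sts:AssumeRole` permission can reach it, so compromising either account effectively compromises both.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::222222222222:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

Narrowing the `Principal` to a specific role ARN, and adding conditions such as `sts:ExternalId` for vendor access, keeps the account boundary meaningful.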
Corey: One of these days, I want to just give them a giant wish list of things in security, just from a usability perspective. The idea of being able to throw IAM policies into warn-if-reject mode. In other words, in a test account, let me give something basically admin rights, have the Lambda function step through its code paths—which should not be that many—and at the end of it, everything it just did, that's the only thing I want it to ever be allowed to do, and it blocks everything else out. That would be amazing.
Scott: Yep, that type of thing has been on people's wish lists, and people have tried to use that concept of client-side monitoring to try to accomplish that. But again, there's those limitations of client-side monitoring that just don't allow it to work as effectively as you'd like.
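As a rough, hypothetical approximation of that wish-list item: run the function with broad rights in a test account, then mine the CloudTrail events it generated for the actions it actually used. This sketch assumes simplified event records, and note one real caveat: IAM action names do not always match CloudTrail's `eventName` field exactly.

```python
# Hypothetical sketch: derive a candidate least-privilege action list from
# recorded CloudTrail events. Event shapes follow CloudTrail's JSON format
# but are simplified; this is illustration, not a production tool.
import json

def actions_from_cloudtrail(records):
    """Map each event's eventSource/eventName to an IAM-style action string."""
    actions = set()
    for record in records:
        # "dynamodb.amazonaws.com" -> service prefix "dynamodb"
        service = record["eventSource"].split(".")[0]
        actions.add(f"{service}:{record['eventName']}")
    return sorted(actions)

def skeleton_policy(actions):
    """Build a skeleton allow policy; Resource still needs manual scoping."""
    return {
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow", "Action": actions, "Resource": "*"}],
    }

# Sample events as a test-account CloudTrail might record them.
sample = [
    {"eventSource": "dynamodb.amazonaws.com", "eventName": "GetItem"},
    {"eventSource": "dynamodb.amazonaws.com", "eventName": "PutItem"},
    {"eventSource": "logs.amazonaws.com", "eventName": "CreateLogStream"},
]
acts = actions_from_cloudtrail(sample)
print(json.dumps(skeleton_policy(acts), indent=2))
```

The output still needs its `Resource` fields scoped by hand, which is part of why this remains a wish-list item rather than a solved problem.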
Corey: I was playing around with the SAM CLI for a while, and by default, it will build Lambda functions that, when you're talking to DynamoDB, have full access to DynamoDB: every table in the account, every permission. And getting that narrowed down to something that is much more bounded to a particular table requires an awful lot of messing around, which tells me that most people aren't doing it.
Scott: Correct. So, what I do for my business is doing assessments for companies. And so as a result, I get to see all sorts of IAM policies, and I will one hundred percent agree with you that people have very open IAM policies that are not as restricted as this ideal utopian world that AWS tries to tell people could potentially exist.
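For contrast, a policy scoped the way Corey describes might look like the following sketch, with specific actions against a single table ARN (the region, account ID, and table name are placeholders) instead of full DynamoDB access on every resource.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:Query"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:111111111111:table/OrdersTable"
    }
  ]
}
```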
Corey: So, looking at the sheer complexity around security, it feels like the easiest solution is to give up on some level, rather than attempting and obviously failing to get it right. Because let's not kid ourselves: if Capital One, which has its faults but they don't hire dumb, if they, with all the assets that they have to protect, can fall victim to this, what chance do the rest of us have? And with the sheer complexity of service offerings, ignoring first-party, just the third-party stuff across the board, with everyone trying to sell me something, how is there any hope?
Scott: Yeah. And I mean, that was one of the big concerns that came out of the Capital One breach. They are known as being one of the best at AWS security. And so they have some open source tools; they have a number of people there that are highly respected. And yeah, that breach, unfortunately, happened to them.
So, the big thing, though, that you can do, I think, is to try and do that account separation, because that does allow you to make some of those mistakes, specifically with regards to least privilege, and still have your other accounts not catch on fire when one account gets compromised completely, you know, someone has admin access in it. It separates that blast radius there. The other thing that I think is still really in its infancy is using SCPs—or Service Control Policies—to start better protecting things. And so for example, if you have an incident response role that you want to allow your security team to be able to assume into so they can remediate issues, or just investigate potential issues, you want to make sure that you have an SCP that protects that IAM role, so that if one of your AWS accounts gets compromised and that attacker starts deleting the incident response role, they won't be able to do that, because it would be protected by the SCP. The SCP cannot be bypassed, even by the root user of an AWS account.
And so that allows you, basically, to put in those guardrails, to put in those restrictions, so that not only can you stop someone from turning off CloudTrail, or GuardDuty, or some of those other security features, but you can also, for example, if you are using a vendor to do auto-remediation or monitoring, use an SCP to protect that vendor's IAM role, or whatever access they have into that account, so that an attacker cannot disrupt that monitoring from happening. So, I think that that's another powerful thing that people need to do. But along with that—this is another area where I think AWS is still weak—if you, as a legitimate user in an account, try to turn off CloudTrail or try to do something that is somehow protected by one of these guardrails, you do not have the ability to know whether that was stopped by your IAM policy, or by an SCP, or by an IAM permissions boundary, or a session policy. Or if you're trying to mess with an S3 bucket in some way, is that stopped by one of the resource policies on the S3 bucket? There are all these different places in which you can define privileges on AWS.
And you do not know what is stopping you. And furthermore, even if you do have access to all that IAM information and are able to see all those things, as just a developer who is not an expert in IAM, you can have very convoluted policies put on yourself that are difficult to understand. And so I think there should be some mechanism for AWS to be able to tell you, “Hey, you are not able to create that EC2 instance because you did not tag it with the required tag,” or, “You did not specify that it should have this EC2 instance type,” or something like that. AWS is just going to tell you, “Access denied,” and they're not going to give you any further information. And so I think that is an area where I would really like to see AWS provide you with more information, whether that's in the CloudTrail event, so you're able to dig into that, or maybe it's some additional privilege, like a debug privilege, that will tell you why you were denied some action that you were trying to take.
Corey: The why of these failures is the hardest part to work around. And then people go ahead and work around it temporarily, and over-provision things, and they'll fix it later. Yeah. One of the biggest lies we tell ourselves.
Scott: Which again is the reason why I think that having those separate AWS accounts is so important: to not be working inside a monolithic account, to not have your dev, test, and sandbox environments within your production account, but to have those as separate AWS accounts. So, again: have those different AWS accounts, and start focusing on building up those guardrails via SCPs, or auto-remediations where SCPs are not possible. I think those are two things that people can really start focusing on.
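The guardrails Scott describes might be sketched as a single SCP like this one (the role name is a placeholder for illustration): the first statement denies every principal except the incident response role itself from touching that role, and the second stops anyone in the member accounts, including root, from disabling CloudTrail.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ProtectIncidentResponseRole",
      "Effect": "Deny",
      "Action": "iam:*",
      "Resource": "arn:aws:iam::*:role/IncidentResponse",
      "Condition": {
        "ArnNotLike": {
          "aws:PrincipalArn": "arn:aws:iam::*:role/IncidentResponse"
        }
      }
    },
    {
      "Sid": "ProtectCloudTrail",
      "Effect": "Deny",
      "Action": [
        "cloudtrail:StopLogging",
        "cloudtrail:DeleteTrail",
        "cloudtrail:UpdateTrail"
      ],
      "Resource": "*"
    }
  ]
}
```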
Corey: Yeah, that's a reasonable starting point. I mean, security is a destination. It's a spectrum. It's not a journey. Nothing is completely secure until it's been blasted to pieces. And it's just a question of where your risk tolerance lies.
I strongly believe you will be more secure in a public cloud provider that is not IBM than you will be in your on-premises data centers in almost every case. So, thank you so much for taking the time to go through all this with me. If people want to learn more about who you are and what you do, where can they find you?
Scott: So, summitroute.com is probably the main entrance point for trying to figure out who I am and how to contact me. I am active on Twitter with almost entirely just AWS security-related things. Unfortunately, I created my Twitter handle back in the day when I was interested in reverse engineering, so it is @0xdabbad00, written all in hex letters. So, I would recommend just going to summitroute.com and figuring things out from there. Or just searching for me on Twitter as Scott Piper.
Corey: Excellent. We will, of course, throw links to all of that in the [00:40:41 show notes]. Thanks so much for taking the time to speak with me today. I really appreciate it, as always.
Scott: Yeah, thank you.
Corey: Scott Piper, AWS security consultant, I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with a comment allowing access to a single DynamoDB table written without consulting the documentation.
Announcer: This has been this week’s episode of Screaming in the Cloud. You can also find more Corey at screaminginthecloud.com, or wherever fine snark is sold.
This has been a HumblePod production. Stay humble.