Exposing Vulnerabilities in the World of Cloud Security with Tim Gonda

Episode Summary

Tim Gonda, Technical Director of Cloud at Praetorian, joins Corey on Screaming in the Cloud to discuss the complexities of exposing vulnerabilities in the world of cloud security. How do large amounts of technical debt impact vulnerabilities? When you’ve found a vulnerability, how do you let the affected company know in a way that will ensure they prioritize and address it? And why is it that when something happens, it’s (seemingly) always cloud security? Tim answers all these questions and more in this episode of Screaming in the Cloud.

Episode Show Notes & Transcript

About Tim

Tim Gonda is a Cloud Security professional who has spent the last eight years securing and building Cloud workloads for commercial, non-profit, government, and national defense organizations. Tim currently serves as the Technical Director of Cloud at Praetorian, influencing the direction of its offensive-security-focused Cloud Security practice and the Cloud features of Praetorian's flagship product, Chariot. He considers himself lucky to have the privilege of working with the talented cyber operators at Praetorian and considers it the highlight of his career.

Tim is highly passionate about helping organizations fix Cloud Security problems the first time they are found and, most importantly, about addressing the People/Process/Technology challenges that cause them in the first place. In his spare time, he embarks on adventures with his wife and ensures that their two feline bundles of joy have the best playtime and dining experiences possible.

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: This episode is sponsored in part by our friends at Thinkst Canary. Most companies find out way too late that they’ve been breached. Thinkst Canary changes this. Deploy Canaries and Canarytokens in minutes and then forget about them. Attackers tip their hand by touching ’em, giving you the one alert, when it matters. With 0 admin overhead and almost no false positives, Canaries are deployed (and loved) on all 7 continents. Check out what people are saying at canary.love today!

Corey: Kentik provides Cloud and NetOps teams with complete visibility into hybrid and multi-cloud networks. Ensure an amazing customer experience, reduce cloud and network costs, and optimize performance at scale — from internet to data center to container to cloud. Learn how you can get control of complex cloud networks at www.kentik.com, and see why companies like Zoom, Twitch, New Relic, Box, eBay, Viasat, GoDaddy, booking.com, and many, many more choose Kentik as their network observability platform.

Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. Every once in a while, I like to branch out into new and exciting territory that I’ve never visited before. But today, no, I’d much rather go back to complaining about cloud security, something that I tend to do an awful lot about. Here to do it with me is Tim Gonda, Technical Director of Cloud at Praetorian. Tim, thank you for joining me on this sojourn down what feels like an increasingly well-worn path.

Tim: Thank you, Corey, for having me today.

Corey: So, you are the Technical Director of Cloud, which I’m sort of short-handing to okay, everything that happens on the computer is henceforth going to be your fault. How accurate is that in the grand scheme of things?

Tim: It’s not too far off. But at Praetorian, we like to call it nebulous. Nebulous meaning that it’s Schrödinger’s problem: it both is and is not the problem. Here’s why. We have a couple key focuses at Praetorian, some of them focusing on more traditional pen testing, where we’re looking at hardware: hit System A, hit System B, branch out, get to goal.

On the other side, we have hitting web applications and [unintelligible 00:01:40]: this insecure app leads to this XYZ vulnerability, or this medical appliance is insecure and therefore we’re able to do XYZ item. One of the things that frequently comes up is that more and more organizations are no longer putting their applications or infrastructure on-prem, so some part of the assessment ends up being in the cloud. And that is the unique rub that I’m in: I’m responsible for leading the direction of the cloud security focus group, which may not dive into the specific specialties that some of these other teams dig into, but has similar responsibilities and a similar engagement style.

And in this case, if we discover something in the cloud as an issue, or even in your own organization where you have a cloud security team, a web application security team, and your core information security team that defends your environment through many different methods and means, you’ll frequently find that the cloud security team is the hot button for, "Hey, the server was misconfigured at a certain level." However, the cloud security team didn’t quite know that this web application was vulnerable. We did know that it was exposed to the internet, but we can’t necessarily turn off all web applications from the internet because that would no longer serve the purpose of a web application. And we also may not know that a particular underlying host’s patch is out of date, because technically, that would be siloed off into another problem.

So, what ends up happening is that on almost every single incident that involves a cloud infrastructure item, you might find that cloud security will be right there alongside the incident responders. And yep, this [unintelligible 00:03:20] is here, it’s exposed to the internet via here, and it might have the following application on it. And they get cross-exposure with other teams that say, "Hey, your web application is vulnerable. We didn’t quite inform the cloud security team about it, otherwise this wouldn’t have been allowed to go to the public internet," or on the infrastructure side, "Yeah, we didn’t know that there was a missing patch underneath it, we figured that we would let the team handle it at a later date, and therefore this is also vulnerable." And what ends up happening sometimes is that the cloud security team might bear the onus, or might be the hot button in the room, of saying, "Hey, it’s broken. This is now your problem. Please fix it by changing cloud configurations or directing a team to make this change on our behalf."

So, in essence, cloud both is and is not your problem when a system is vulnerable or exposed, or, worst-case scenario, ends up being breached and you’re performing incident response. That’s one of the reasons why it’s important to involve others in the cloud security problem, to be very specific about what the role of a cloud security team is, and to define where cloud security has to have certain boundaries or has to involve certain extra parties in the process. Or, when it does its own threat modeling, to say: okay, we have to take a look at certain cloud findings, or findings within our security realm, and treat the underlying components as if they are vulnerable, whether or not they are, and report on them as if they are vulnerable. Even if it means that a certain component of the infrastructure has to be assumed to have a vulnerability or some sort of misconfiguration that allows an outside attacker to execute attacks against whatever the [unintelligible 00:05:06] is, we have to adjust our security posture and respond accordingly.

Corey: One of the problems that I keep running into, and I swear it’s not intentional, but people would be forgiven for believing otherwise, is that I will periodically, inadvertently point out security problems via Twitter. And that was never my intention because it starts with, "Huh, that’s funny, this thing isn’t working the way that I would expect it to," or, "I’m seeing something weird in the logs in my test account. What is that?" And then, "Oh, you found a security vulnerability or something akin to one in our environment. Oops. Next time, just reach out to us directly at the security contact form." That’s great, if I’d known I was stumbling blindly into a security issue. But it feels like the discovery of these things is not heralded by an, "Aha, I found it," but by a, "Huh, that’s funny."

Tim: Of course. Absolutely. And that’s where some of the best vulnerabilities come from: you accidentally stumble on something and think, "Wait, does this work the way I think it does?" Click click. "Oh boy, it does."

Now, I will admit that certain cloud providers are really great about proactive security reach-outs. You either file a ticket or some other form of notification, or just flag your account rep and say, "Hey, when I was working on this particular cloud environment, the following occurred. Does this work the way I think it does? Is this a problem?" And they usually get back to you, reporting it to their internal team, so on and so forth. But let’s say it’s an application, an open-source framework, or just an organization at large where you might have stumbled upon something. The best thing to do is to look up whether they have a public bug bounty program, whether they have a security contact or form you can use to reach out or email them, or whether you know someone at the organization that you can send a quick email saying, "Hey, I found this."

Some combination of those is usually the best way to go, along with being able to provide context to the organization: "Hey, the following exists." And the most important thing to consider when you’re sending this sort of information is that they get these sorts of emails almost daily.

Corey: One of my favorite genres of tweet is when Tavis Ormandy of Google’s Project Zero winds up doing a tweet like, "Hey, do I know anyone over at the security apparatus at insert company here?" It’s like, "All right. I’m sure people are shorting stocks now [laugh], based upon whatever he winds up finding there."

Tim: Of course.

Corey: It’s kind of fun to watch. But there’s no cohesive way of getting in touch with companies on these things because, as soon as you have something like that, it feels like it’s subject to abuse: Comcast hasn’t fixed my internet for three days, so now I’m going to email their security contact instead of going through the normal preferred process of waiting in the customer queue so they can ignore you.

Tim: Of course. And that’s something else you want to consider. If you broadcast that a security vulnerability exists without letting the entity or company know, you’re also almost giving a green light, where other security researchers are going to go dive in on this and see, one, does this work how you described it. That actually can be a positive thing at some point, where either you’re unable to get the company’s attention, or maybe it’s an open-source organization, or maybe you’re not fully sure that something is the case. However, when you do submit something to the company and you want them to take it seriously, here are a couple of key things that you should consider.

One, provide evidence that whatever you’re talking about has actually occurred. Two, provide repeatable steps, in layman’s terms, that even an IT support person can follow to reproduce the same vulnerability or the same security condition. And three, most importantly, detail why this matters. Is this something where I can adjust a user’s password? Is this something where I can extract data? Is this something where I’m able to extract content from your website that I otherwise shouldn’t be able to? And that’s important for the following reason.

You need to inform the business of the financial impact: why leaving this unpatched becomes an issue for them. If you do that, that’s how those security vulnerabilities get prioritized. It’s not necessarily because the coolest vulnerability exists; it’s because it costs the company money, and therefore the security team is going to immediately jump on it and try to contain it before it costs them any more.

Corey: One of my least favorite genres of security report is the one that simply says, "I found a vulnerability." It's like, that’s interesting. I wasn’t aware that I ran any public-facing services, but all right, I’m game; what have you got? And it’s usually something along the lines of, "You haven’t enabled SPF to hard-fail email that doesn’t originate explicitly from this list of IP addresses. Bug bounty, please." And it’s, "No, genius. That is very much an intentional choice. Thank you for playing."

It also comes down to this: whenever I have reported security vulnerabilities in the past, the pattern I always take is, "I’m seeing something that I don’t fully understand. I suspect this might have security implications, but I’m also more than willing to be proven wrong." Because showing up with, "You folks are idiots and have a security problem," is a terrific invitation to be proven wrong and look like an idiot. Because the first time you get that wrong, no one will take you seriously again.
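
To make the SPF aside concrete: the difference between a soft fail and a hard fail is one token at the end of a domain's SPF TXT record, and leaving it at a soft fail is often a deliberate choice rather than a bug. Below is a minimal sketch of checking that, assuming the dnspython package is installed and using example.com purely as a placeholder domain.

    # Sketch: look up a domain's SPF record and report its failure policy.
    # Assumes `pip install dnspython`; "example.com" is a placeholder domain.
    import dns.resolver

    def spf_policy(domain: str) -> str:
        for rdata in dns.resolver.resolve(domain, "TXT"):
            txt = b"".join(rdata.strings).decode()
            if not txt.startswith("v=spf1"):
                continue
            if txt.endswith("-all"):
                return "hard fail: unlisted senders are rejected outright"
            if txt.endswith("~all"):
                return "soft fail: unlisted senders are accepted but flagged"
            return "neutral or none: SPF is advisory only"
        return "no SPF record published"

    print(spf_policy("example.com"))

A scanner that flags "~all" is reporting a configuration preference, not a vulnerability, which is exactly the distinction being drawn above.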

Tim: Of course. And as you’ll find with most bug bounty programs, if you participate in those, for the first couple you submit the customer might even tell you, "Yeah, we’re aware that that vulnerability exists; however, we don’t view it as a core issue and it cannot affect the functionality of our site in any meaningful way, therefore we’re electing to ignore it." Fair.

Corey: Very fair. But then when people write up those things, well, they’ve decided this is not an issue, so I’m going to do a write-up on it. Like, "You can’t do that. The NDA doesn’t let you expose that." "Really? Because you just said it’s a non-issue. Which is it?"

Tim: And the key to that, I guess, would also be: is there an underlying technology that doesn’t necessarily have to be attributed to said organization? Can you also say that, if I provide a write-up or put up my own personal blog post—let’s say we go back to some of the OpenSSL vulnerabilities, including OpenSSL 3.0, that came out not too long ago, but since that’s an open-source project, it’s fair game—let’s just say that if there was a technology such as that, or maybe a wrapper around it that another organization could be using or implementing a certain way, you don’t necessarily have to call the company out by name; rather, just say, here’s the core technology reason, here’s the core technology risk, and here’s the way I’ve demoed exploiting it. And if you publish an open-source blog like that and then you tweet about it, you can actually gain security community support around the issue and then fight for the research.

An example would be that I know a couple of pen testers who have reported things in the past, and while the first time they reported it, the company was like, "Yeah, we’ll fix it eventually," later, when another researcher reported this exact same finding, the company was like, "We should probably take this seriously and jump on it." Sometimes it’s just getting in front of that and providing frequency, or providing enough people to say, "Hey, this really is an issue in the security community and we should probably fix this item," and to keep pushing those organizations on it. A lot of times, they just need additional feedback. Because, as you said, somebody runs an automated scanner against your email and says, "Oh, you’re not checking SPF as strictly as the scanner would have liked," because it’s a benchmarking tool. It’s not necessarily a security vulnerability; rather, it’s just how you’ve chosen to configure something, and if it works for you, it works for you.

Corey: How does cloud change this? Because a lot of what we’ve talked about so far could apply to anything. Go back in time to 1995 and a lot of what we’re talking about mostly holds true. It feels like cloud adds a significant layer of complexity on top of all of this. How do you view the differentiation there?

Tim: So, I think it differentiates things in two ways. One, certain services or certain vulnerability classes that are handled by the shared responsibility model are, for the most part, probably secured better than you might be able to do yourself, just because there’s a lot of research and the team has [experimented 00:13:03] a lot of time on this. For example, if there’s a particular, like, spoofing or network interception vulnerability that you might see on a local LAN, you probably are not going to have the same level of access to execute that in a virtual private cloud or VNet, or some other virtual network within a cloud environment. Now, something that does change with the paradigm of cloud is the fact that if you accidentally publicly expose something that you’ve created, or don’t set a setting to be private or specific only to your resources, a couple of things could happen. The vulnerability’s exploitability increases based on where it lives. It used to be just, "Hey, I left a port open on my own network. Somebody from HR or somebody from IT could possibly interact with it."

However, in the cloud, you’ve now exposed this to the entire world, to people who might have the resources or motivation to go after this product. Using services like Shodan, which are continually mapping the internet for open resources, they can quickly grab that and say, "Okay, I’m going to attack these targets today," and might continue to poke a little bit further, whether it’s an internal person who’s bored at work or a pen tester on one specific engagement. Especially in the case where, let’s say, what you’re working on has sparked the interest of a nation-state and they want to dig in a little bit further, they have the resources to dedicate time, people, and maybe tools and tactics against whatever vulnerability you’ve given the example of previously. Maybe there’s a specific ID in a URL that just needs to be guessed right to give them access to something; they might spend the time trying to brute-force that URL, brute-force that value, and eventually go after what you have.

The main paradigm shift here is that there are certain things we might consider less of a priority because the cloud has already taken care of them under the shared responsibility model, and rightfully so, and there are other things we have to give heightened awareness: cases where we’ve exposed something to the entire internet, or to all cloud accounts in existence. And that’s actually something we see commonly. In fact, one thing we see very commonly is that all AWS users, regardless of whether they’re in your account or somewhere else, might have access to your SNS topic or SQS queue. Which doesn’t seem like that big of a vulnerability on its own: I can change messages, I can delete messages, I can view your messages. But rather, what’s connected to those? Let’s talk about Lambda functions, where a developer has written source code to handle those messages and may not have built in logic to handle malicious input. Maybe there’s a piece of that code that could be abused, where part of a message might allow an attacker to send something to your Lambda function and then execute something on that attacker’s behalf.

You weren’t aware of it, you weren’t thinking about it, and now you’ve exposed it to almost the entire internet. And since anyone can go sign up for an AWS account—or an Azure or GCP account—they’re able to start poking at that same piece of code that you might have developed thinking, "Well, this is just for internal use. It’s not a big deal. That one static code analysis finding probably isn’t too relevant." Now, it becomes hyper-relevant, something you have to consider with more attention and dedicated time, making sure that these things you’ve written or deployed are, in fact, safe. Because once something is misconfigured or mis-exposed and the entire world starts knocking at it, the risk that it really is a problem, and the severity of that issue, can increase dramatically.
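
To ground Tim's SQS example, the gap between "any AWS principal can touch this queue" and "only my account can" comes down to the queue's resource policy. Here is a minimal, hedged sketch using boto3; the queue URL, ARN, account ID, and region are placeholders rather than anything from the episode.

    # Sketch: replace a wide-open SQS queue policy with one scoped to a single account.
    # boto3 and configured credentials are assumed; every identifier below is made up.
    import json
    import boto3

    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111111111111/example-queue"
    QUEUE_ARN = "arn:aws:sqs:us-east-1:111111111111:example-queue"

    # The risky pattern described above: Principal "*" lets any AWS user, in any
    # account, send to or read from the queue. Shown only for contrast; never apply it.
    open_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["sqs:SendMessage", "sqs:ReceiveMessage"],
            "Resource": QUEUE_ARN,
        }],
    }

    # A scoped-down alternative: only principals in your own account may use the queue.
    scoped_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111111111111:root"},
            "Action": ["sqs:SendMessage", "sqs:ReceiveMessage"],
            "Resource": QUEUE_ARN,
        }],
    }

    sqs = boto3.client("sqs")
    sqs.set_queue_attributes(QueueUrl=QUEUE_URL, Attributes={"Policy": json.dumps(scoped_policy)})

With the scoped policy in place, the Lambda handler downstream only ever sees messages from principals you trust, which shrinks the input-handling problem Tim describes.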

Corey: As you take a look across, let’s call it the hyperscale clouds, the big three—which presumably I don’t need to define out—how do you wind up ranking them in terms of security from top to bottom? I have my own rankings that I like to dole out and basically, this is the, let’s offend someone at every one of these companies, no matter how we wind up playing it. Because I will argue with you just on principle on them. How do you view them stacking up against each other?

Tim: So, an interesting view on that is based on who has been around longest and who has accumulated the most technical debt. A lot of these security vulnerabilities or security concerns stem from a decision made long ago that might have made sense at the time; now the company is stuck with that particular technology, decision, or framework, and is having to apply security Band-Aids to it until it gets resolved. I would say, ironically, AWS is actually at the top of having that technical debt, and actually has so many different types of access policies that are very complex to configure and not very intuitive unless you speak fluent JSON or YAML or some other markup language that can tell you whether or not something was actually set up correctly. Now, there are a lot of security experts who make their money on knowing how to configure these policies or assess whether they are actually the issue. So by default and by design, among the big three, I would actually put AWS on the lower end, based on complexity and ease of configuration.

The next one that would also go into that pile, I would say, is probably Microsoft Azure, which [sigh] admittedly decided, "Okay, let’s take something that was very complicated and that everyone really loved to use as an identity provider, Active Directory, and try to use that as a model." Even though they made it extensively different, and it is not the same as on-prem Active Directory, they used it as the framework for how people would configure their identity provider for a new cloud provider. The one that I would say actually comes out on top, just based on usability and complexity, might be Google Cloud. They came to a lot of these security features first.

They’re acquiring new companies on a regular basis, with the acquisition of Mandiant, the creation of their own security tooling, and their own unique security approaches. In fact, they probably wrote the book on Kubernetes security. They would be on top, I guess, from a usability standpoint, as in: I don’t want to have to manage all these different types of policies. Here are some buttons I would like to flip, and I’d like my resources, for the most part, to be configured correctly by default. And Google does a pretty good job of that.

Also, one of the things they do really well is entity-based role assumption. Inside of AWS, you can provide access keys by default, or I have to provide a role ID; or in Azure, I’m going to say, "Here’s a [unintelligible 00:19:34] policy for something specific that I want to grant access to a specific resource." Google does a pretty good job of saying, okay, everything is treated as an email address. This email address can be associated in a couple of different ways: it can be given the following permissions, it can have access to the following things. And if I want to remove access to something, I just take that email address off of whatever access policy it was on, and then it’s taken care of. But they do have some other items, such as their design of least privilege, which is something you have to account for given their resource hierarchy.

I’m not going to say that they’re without fault in that area. Until they added certain capabilities more recently, as far as scoping on certain key pieces, like, say, tags or something within a specific sub-project in the hierarchy, there were cases where you might have granted access at a higher level and that same level of access came all the way down, where least privilege is required to be enforced or you break their security model. So, I like them for how simple it is to set up security at times; however, they’ve also made it unnecessarily complex at other times, so they don’t have the flexibility that the other cloud service providers have. On the flip side of that, that flexibility also leads to complexity at times, which I also view as a problem: customers think they’ve done something correctly based on their best knowledge, the best documentation, and the best Medium articles they’ve been researching, and what they’ve actually done is inadvertently make assumptions that led to core anti-patterns in, like, [unintelligible 00:21:06] what they’ve deployed.
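
Tim's "everything is an email address" point is easiest to see in the shape of a GCP IAM policy: revoking access really is just removing one member string from a binding. The sketch below shows that data structure and the removal step; the project, role, and member names are made up, and in practice the edited policy would be applied back through gcloud or the Resource Manager API.

    # Sketch: a GCP IAM policy is a list of bindings, each mapping one role to a
    # list of member "email addresses". All values here are hypothetical.
    policy = {
        "bindings": [
            {
                "role": "roles/storage.objectViewer",
                "members": [
                    "user:alice@example.com",
                    "serviceAccount:ci-builder@example-project.iam.gserviceaccount.com",
                ],
            },
        ],
    }

    def revoke(policy: dict, member: str) -> dict:
        """Remove one member everywhere it appears; drop bindings left empty."""
        for binding in policy["bindings"]:
            if member in binding["members"]:
                binding["members"].remove(member)
        policy["bindings"] = [b for b in policy["bindings"] if b["members"]]
        return policy

    # Taking Alice's email off the policy is the whole revocation step Tim describes.
    revoke(policy, "user:alice@example.com")

This is the same shape that gcloud projects get-iam-policy returns, which is why access reviews in GCP often reduce to diffing lists of member strings.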

Corey: This episode is sponsored in part by our friends at Uptycs, because they believe that many of you are looking to bolster your security posture with CNAPP and XDR solutions. They offer both cloud and endpoint security in a single UI and data model. Listeners can get Uptycs for up to 1,000 assets through the end of 2023 (that is next year) for $1. But this offer is only available for a limited time on UptycsSecretMenu.com. That’s U-P-T-Y-C-S Secret Menu dot com.

Corey: I think you’re onto something here. When I’ve been asked, historically and personally, to rank security, I have viewed Google Cloud as number one and AWS as number two. And my reasoning has been that, from an absolute security-of-the-platform perspective, a pure, let’s call it math, perspective, it really comes down to which of the two of them had what for breakfast on any given day; they’re that close. But in a project that I spin up in Google Cloud, everything inside of it can talk to each other by default and I can scope that down relatively easily, whereas over in AWS land, by default, nothing can talk to anything. And that means that every permission needs to be explicitly granted, which in an absolutist sense and in a vacuum, yeah, makes sense, but here in reality, people don’t do that. We’ve seen a number of AWS blog posts over the last 15 years (they don’t do this anymore) that started off with, "Oh, yeah, we’re just going to grant [* on * 00:22:04] for the purposes of this demo."

“Well, that’s horrible. Why would you do that?” “Well, if we wanted to specify the IAM policy, it would take up the first third of the blog post.” How about that? Because customers go through that exact same thing. I’m trying to build something and ship.

I mean, the biggest lie in any environment or any codebase ever is the comment that starts with "TODO." Yeah, that is load-bearing. You will retire with that TODO still exactly where it is. You have to make doing things the right way the path of least friction, because no one is ever going to come back and fix it after the fact. It’s never going to happen, as much as we wish that it did.
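
For readers who want to see the "grant star on star for the purposes of this demo" shortcut next to what the TODO was supposed to become, here is a hedged sketch. The bucket, role, and policy names are placeholders, and a real least-privilege policy would be tailored to the actual workload.

    # Sketch: the demo-ware IAM policy versus a least-privilege one for the same job
    # (a role that only needs to read objects from one bucket). All names are made up.
    import json

    demo_policy = {  # the "grant * on *" shortcut; shown only for contrast
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}],
    }

    scoped_policy = {  # what the TODO comment was supposed to turn into
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-demo-bucket",
                "arn:aws:s3:::example-demo-bucket/*",
            ],
        }],
    }

    # Attaching it is one boto3 call; writing the scoped version is the part that
    # gets deferred. Uncomment with real names and credentials to apply it.
    # import boto3
    # boto3.client("iam").put_role_policy(
    #     RoleName="example-demo-role",
    #     PolicyName="read-demo-bucket",
    #     PolicyDocument=json.dumps(scoped_policy),
    # )
    print(json.dumps(scoped_policy, indent=2))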

Tim: At least until after the week of the breach when it was highlighted by the security team to say that, “Hey, this was the core issue.” Then it will be fixed in short order. Usually. Or a Band-Aid is applied to say that this can no longer be exploited in this specific way again.

Corey: My personal favorite thing, and I wouldn’t say it’s a lie, but the thing I see in all of these announcements, right after the "Your security is very important to us," right after it very clearly has not been sufficiently important to them, is, "We show no signs of this data being accessed." Well, that can mean a couple different things. It can mean, "We have looked through the audit logs for the service going back to its launch and have verified that nothing has ever done this except the security researcher who found it." Great. Or it can mean, "What even are logs, exactly? We’re just going to close our eyes and assume things are great." No, no.

Tim: So, one thing to consider there is that that entire communication has probably been vetted by the legal department to make sure the company is not opening itself up to liability. I can say from personal experience that when this occurs, unless it can be proven that the breach was attributable to your user specifically, the default response is, "The security response of XYZ organization has determined that your data was not at risk at any point during this incident." Which might be true—and we’re quoting Star Wars on this one—from a certain point of view. And unfortunately, post-breach, their security, at least from a regulation standpoint where they might be facing a really large fine, probably is their top priority at that very moment. But it hadn’t come to the surface before because, for most organizations, until this becomes a financial reason where they have to act, where their reputation is on the line, they’re not necessarily incentivized to fix it. They’re incentivized to push more products, push more features, keep the clients happy.

And a lot of the time, going back and saying, "Hey, we have this piece of technical debt," doesn’t really excite the user base or help gain a competitive edge in the market, so it’s considered an afterthought until the crisis occurs, and the information security team rejoices because this is the time they actually get to see their stuff fixed. Even though it might be a super painful time for them in the short run, they get to see these things fixed, they get to see them put to bed. And if there’s ever a happy medium, where maybe there was a legacy feature that wasn’t being very well taken care of, or maybe this feature was also causing the security team a lot of pain, we get to see that feature, that item, that service get better, and the security team doesn’t have to be woken up on a regular basis because XYZ incident happened or XYZ item keeps coming up in a vulnerability scan. If it’s finally put to bed, we consider that a win for all. And one thing to consider in security, when we talk about the relationship between developers and security, or product managers and security, is that if we can make it a win-win-win situation for all, that’s the happy path we really want to get to. If there’s a way we can make the experience better for customers, keep the security team from being woken up on a regular basis because an incident happened, and give developers less friction when they want to implement something, you find that the secure feature, function, whatever, tends to be the happy path forward and the path of least resistance for everyone around it. And those are sometimes the happiest stories that can come out of some of these incidents.

Corey: It’s weird to think of there being any happy stories coming out of these things, but it’s definitely one of those areas that there are learnings there to be had if we’re willing to examine them. The biggest problem I see so often is that so many companies just try and hide these things. They give the minimum possible amount of information so the rest of us can’t learn by it. Honestly, some of the moments where I’ve gained the most respect for the technical prowess of some of these cloud providers has been after there’s been a security issue and they have disclosed either their response or why it was a non-issue because they took a defense-in-depth approach. It’s really one of those transformative moments that I think is an opportunity if companies are bold enough to chase them down.

Tim: Absolutely. And in a similar vein, think of certain cloud providers’ outages that exposed, like, a major core flaw of their design, and whether it kept happening. Again, these outages are analogous to an incident or a security flaw, meaning that it affected us; it was something that actually happened. In the case of, let’s say, the S3 outage of, I don’t know, 2017 or 2018, it turns out that there was a core DNS system inside of us-east-1, which is actually very close to where I live, that for whatever reason malfunctioned and caused a major outage. Coming out of that, in this specific example, they had to look at how to not have a single point of failure, even in a very robust system, to make sure this doesn’t happen again.

And there were a lot of learnings to be had, a lot of in-depth investigation, probably a lot of development and research, and sometimes, on the other side of an incident, you really get to understand why a system was built a certain way or why a condition exists in the first place. It can be fascinating to dig into that deeper and really understand what the core problem is. And now that we know what the issue is, we can actually work to address it. That’s actually one of the best parts about working at Praetorian in some cases: a lot of the items we find, we get to find early, before they become one of these issues. But the most important thing is we get to learn so much about why a particular issue is such a big problem, and you have to really solve the core business problem, or maybe even help inform, "Hey, this is an issue for reasons like this."

However, this isn’t necessarily all bad, in that if you make these adjustments, you get to retain this really cool feature, this really cool thing that you built, and you also get to say, here are some extra added benefits to the customers that weren’t really there before. And, going back to the old adage of, "It’s not a bug, it’s a feature," sometimes it’s exactly what you pointed out: an incident isn’t necessarily all bad. It’s also a learning experience.

Corey: Ideally, we can all learn from these things. I want to thank you for being so generous with your time and talking about how you view this increasingly complicated emerging space. If people want to learn more, where’s the best place to find you?

Tim: You can find me on LinkedIn, which will be linked in this podcast’s description. You can also go look at articles that the team is putting together at praetorian.com. Unfortunately, I’m not very big on Twitter.

Corey: Oh, well, you must be so happy. My God, what a better decision you’re making than the rest of us.

Tim: Well, I like to, like, run a little bit under the radar, except on opportunities like this where I can talk about something I’m truly passionate about. But I try not to pollute the airwaves too much. LinkedIn is a great place to find me, and the Praetorian blog for the stuff the team is building. And if anyone wants to reach out, feel free to hit up the contact page on praetorian.com. That’s one of the best places to get my attention.

Corey: And we will, of course, put links to that in the [show notes 00:30:19]. Thank you so much for your time. I appreciate it. Tim Gonda, Technical Director of Cloud at Praetorian. I’m Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you’ve enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you’ve hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry comment talking about how no one disagrees with you based upon a careful examination of your logs.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.