Optimizing Cloud Spend at Airbnb with Melanie Cebula

Episode Summary

Melanie Cebula is a staff software engineer at Airbnb who’s focused on cloud infrastructure. She’s a 2016 graduate of UC Berkeley, where she earned a bachelor of arts degree in computer science. Prior to joining Airbnb full-time, she interned there on the payments team. She’s also worked as a teaching assistant at UC Berkeley (CS 164 - Programming Languages and Compilers and CS 61B - Data Structures) and has interned at Facebook, too. Melanie has many opinions, which are her own, and which do not reflect the opinions or views of her employer. Join Corey and Melanie as they discuss the differences between junior, senior, staff, and principal engineers, what a staff engineer’s job looks like at Airbnb, why cloud cost efficiency is a hard-but-great problem to work on, why some engineers are hesitant to turn anything off, how much of optimizing cloud spend involves picking off low-hanging fruit, why it’s more fun to talk to technologists about cloud problems than vendors, how Airbnb uses Kubernetes and what that means for AWS spend analysis, and more.

Episode Show Notes & Transcript

About Melanie Cebula


Melanie Cebula is an expert in Cloud Infrastructure, where she is recognized worldwide for explaining radically new ways of thinking about cloud efficiency and usability. She is an international keynote speaker, presenting complex technical topics to a broad range of audiences, both international and domestic. Melanie is a staff engineer at Airbnb, where she has experience building a scalable modern architecture on top of cloud-native technologies.


Besides her expertise in the online world, Melanie spends her time offline on the “sharp end” of rock climbing. An adventure athlete setting new personal records in challenging conditions, she appreciates all aspects of the journey, including the triumph of reaching ever higher destinations.


On and off the wall, Melanie focuses on building reliability into critical systems, and making informed decisions in difficult situations. In her personal time, Melanie hand whisks matcha tea, enjoys costuming and dancing at EDM festivals, and she is a triplet.


Links Referenced:




Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Cloud Economist Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.



Corey: Are you better than the average bear with AWS? If you're listening to this podcast, the answer is almost certainly yes. Want to turn those skills into money? If you're US-based and have an AWS certification, sign up as an expert on AWS IQ today and help customers with their problems. Visit snark.cloud/IQ to learn more.



Corey: This episode is brought to you by Trend Micro Cloud One™. A security services platform for organizations building in the Cloud. I know you're thinking that that's a mouthful because it is, but what's easier to say? I'm glad we have Trend Micro Cloud One™, a security services platform for organizations building in the Cloud, or, “Hey, bad news. It's going to be a few more weeks. I kind of forgot about that security thing.” I thought so. Trend Micro Cloud One™ is an automated, flexible all-in-one solution that protects your workflows and containers with cloud-native security. Identify and resolve security issues earlier in the pipeline, and access your cloud environments sooner, with full visibility, so you can get back to what you do best, which is generally building great applications. Discover Trend Micro Cloud One™, a security services platform for organizations building in the Cloud. Whew. At trendmicro.com/screaming.



Welcome to Screaming in the Cloud. I'm Corey Quinn. I'm joined this week by Melanie Cebula, staff engineer at Airbnb. Melanie, welcome to the show.



Melanie: Thanks for having me, Corey.



Corey: So, let's start at the very beginning, I guess. What is a staff engineer, and how did you become such a thing?



Melanie: To understand a staff engineer, you probably need to understand that there are levels in engineering. So, a lot of people, when they're new to engineering, just know that there are software engineers or developers. But when you join most large companies, they need a way of tiering people based on experience and the kind of work they do. So, there's essentially junior engineer, software engineer, senior, staff, principal, and it goes up from there. 



And it varies from company to company, but essentially, the kind of work that you do at different levels does change a lot. And so it is helpful to frame them differently. So, staff engineers usually are given less scoped problems; they’re sort of given problem spaces, and they kind of work within that space and provide direction for the company in that space.



Corey: At most companies that aren't suffering from egregious title inflation, which I can state Airbnb is not, it tends to mean an engineer of a sufficient level of experience and depth, where folks are more or less, as you said, trusted to provide not just solutions but insight, and effectively begin to weigh in on strategic-level concerns rather than just, what's the most elegant way to write this particular code section? Is that a somewhat fair assessment?



Melanie: I think that's fair.



Corey: Okay. So, Airbnb, and we're not here to talk specifically about your environment but rather in a general sense, is an interesting company because it tends to be, from the world's perspective, a giant company with a massive web presence. But unlike a lot of other folks that I get to talk to on this show, you're not yourselves a cloud provider. You're not trying to sell any of the infrastructure services that you're using to run your environment. Instead, you're coming at this from the perspective of a customer, just like most of us tend to be customers of one or more cloud providers, with the exception being that you just tend to have bigger numbers to deal with in some senses than the rest of us do. Not me necessarily. I mean, I tend to run Twitter for Pets at an absolute world-spanning scale because I know it's going to take off any day now. But most sensible people don't operate at that level.



Melanie: Yeah, so I think what's so empowering about being a user of all these technologies is you can be really pragmatic about how you use things. You don't have to look at, “Well, this is what Google does, and this is what this other big company does.” Or, “This is what this vendor is pushing this month. This is AWS’s newest, latest technology.” You look at what you need, and what the problems you have are, and what the solutions out there are, and you can actually try them out and find the best one, and then you can share that with everyone else. “Hey, for our scale, for these kinds of problems we have, we’ve found this technology works for us.” Or, as the case usually is, “With a lot of work on our end, we've made this technology work well enough.” And I think that's really refreshing. I love talking to other users of technology and coming to what is actually the best solution for your problem because you just don't get that from vendors. They're trying to sell you something.



Corey: Right, it comes down to questioning the motives of people who are having conversations around specific areas where they have things to offer. Apropos of absolutely nothing, what problem areas do you tend to focus on these days?



Melanie: In the last few years, I’ve worked a lot on our infrastructure platform: what makes our infrastructure more usable, easier to operate, and more functional for developers? And then for the past few months, instead, I've started working on cost efficiency and cloud savings.



Corey: A subject that is near and dear to my heart. When I started my consulting company a few years back, the big question I had was, “Great, what problem can I solve with a set of engineering skills in my background, but I want it to be an expensive business problem? Absolutely nobody wants to see me write code. And, oh yeah, because of some horrible environments I’d worked in previously, I refuse to work in anything that requires me to wake up in the middle of the night, so this has to be restricted to business problems.” The AWS bill was really where I landed when I was putting all those things together. And for better or worse, it seems to have caught on to the point where I fail to go out of business pretty consistently, every single month.



Melanie: Yeah, it's a really hard problem. And I do agree: I've been on call before, and I've worked on different problems, and I think cost is actually a really great problem to work on. And when I first started working on cost, I was wondering, hey, is this actually a problem worth working on? Because when you think about some of these other problems, there's more sophistication: companies have reliability orgs or developer tooling orgs. And most companies just don't have that sophistication when it comes to cloud savings, so it is a new and exciting place. And so I'm happy to be working on it, and the problems do not stop. So, I think it's quite interesting.



Corey: I would absolutely agree with you. It was one of those things that turned out to be surprisingly easy to get into, because when you call yourself an expert on something like the AWS bill, no one is going to challenge you on it; who in the world would ever claim such a thing if it weren't true? And I figured it would be a lot drier and less technically interesting than it turned out to be. The more I do this, the more I'm realizing that it is almost entirely an architecture story past a certain point. It's not about the basic arithmetic story of adding up all the bill items and making sure that the numbers agree. That's arithmetic, and it's not nearly as interesting or, frankly, as challenging a problem. 



The part that's neat to me is that past a certain point of scale—and that point is not generally in someone's personal test environment—spending significant time and energy on not just reducing the bill, but understanding and allocating portions of the bill to different teams, environments, et cetera, is something that companies begin to turn their attention to. At a certain point of scale, which it's clear you folks are at, having an engineer or engineers focusing on that problem makes an awful lot of sense. It seems that some folks try to get there a bit too soon by hiring an engineer to do this who costs more than their entire AWS bill. That seems like it might be an early optimization, but I'm not one to judge. So, my question for you that I want to start with is: what do you see as the most interesting thing you've learned about AWS billing in the last 6 to 12 months?



Melanie: That is a big question. [laughs]. I’d have to say that the most interesting thing that I have learned has been around architecture and architecting for, basically, efficient compute and cost savings. And so things like the way that we send data between services, and configuring retention for that data, aren't always straightforward. And another big one has been data transfer; I mean, that's been huge. 



When people think about availability and being available in multiple zones across multiple data centers around the world, I don't think cost goes into that equation. And what I've found in my own experience so far is that it definitely should, because to solve that problem you're looking at building enough provisioning into your compute layer—having enough so that if one data center goes down, the other ones can spin up fast enough and get that compute in time to handle an outage. You're looking at changes in your service mesh to send traffic to different sources within the same availability zone. I mean, the list goes on: Kafka clusters needing to send traffic, setting one up in every AZ. And for me, that's been one of the most fascinating things: when I first started working on cost savings, I didn't think that much architecture work would be involved. And certainly there's a mixture of things: I’ve had to build little tools, little scripts, some automation. I've had to do some of the, oh, let's just get rid of that manually. But it goes from these basic, easy wins to, we need to really rethink this entire piece of infrastructure, and so that's kind of exciting.



Corey: The hard part about data transfer pricing is that it's inscrutable from the outside, and it's not at all intuitive as far as understanding what makes sense from a logical billing perspective. It costs the same, for example, to move data from one availability zone to another as it does from one region to another for most workloads—there are exceptions to virtually everything that we talk about in this, which is part of what makes this fun. And, as a general rule of thumb—this isn't quite right, but if you're looking at it from a gross estimation perspective—storing data in S3 for one month is roughly the same cost as moving it once between availability zones or between regions. So, if you're passing the same piece of data back and forth four or five times, maybe just store it more than once and stop moving it around, rather than processing and reprocessing the data. I mean, you talked about Kafka. There's always the challenge that, historically, compression wasn't as good; some of the newer pull requests have merged in new forms of compression that tend to offer a better ratio. And there's a pull request that was merged somewhat recently where you can query the local follower. 
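For a rough sense of the rule of thumb Corey describes, here is a back-of-the-envelope sketch in Python. The prices are illustrative list prices (roughly us-east-1 at the time of recording) and will drift over time, so treat the output as an example of the comparison rather than a quote:

```python
# Back-of-the-envelope comparison of storing data vs. repeatedly moving it.
# Prices are illustrative us-east-1 list prices and change over time; check
# current AWS pricing before relying on these numbers.

S3_STANDARD_PER_GB_MONTH = 0.023   # S3 Standard storage, $/GB-month
CROSS_AZ_PER_GB = 0.02             # $0.01/GB out + $0.01/GB in between AZs
CROSS_REGION_PER_GB = 0.02         # typical inter-region transfer, $/GB

def cost_of_moving(gb: float, trips: int, per_gb: float) -> float:
    """Cost of shuttling `gb` gigabytes back and forth `trips` times."""
    return gb * trips * per_gb

def cost_of_storing(gb: float, months: float) -> float:
    """Cost of keeping `gb` gigabytes in S3 Standard for `months`."""
    return gb * months * S3_STANDARD_PER_GB_MONTH

if __name__ == "__main__":
    gb = 10_000  # a 10 TB working set
    print(f"Store 10 TB in S3 for a month: ${cost_of_storing(gb, 1):,.0f}")
    print(f"Move 10 TB across AZs once:    ${cost_of_moving(gb, 1, CROSS_AZ_PER_GB):,.0f}")
    print(f"Move 10 TB across AZs 5 times: ${cost_of_moving(gb, 5, CROSS_AZ_PER_GB):,.0f}")
```

At these example rates, one round of shuffling the data costs about as much as a month of storing it, and five rounds costs several times more, which is the point of the rule of thumb.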



But you're right, you have to have things like your service mesh understand that you can now route those things differently, and what your replication factor looks like becomes a challenge. And a lot of this, at least in my experience, has always come down to a more strategic question, where there's a spectrum between cost efficiency and durability and you have to pick the point on it you're going to target. I mean, things are super cheap if you only ever run one of them compared to running three of them. But if you accidentally fat-finger the wrong S3 bucket, you don't have a company anymore, in some cases. So, aligning business risk and technical risk with something that is cost-efficient is a balancing act. And anyone who tells you that stuff is simple is selling something.



Melanie: Yes. For me, what I've found useful is framing the problems almost along a matrix of, well, we could go all the way on making this as cheap as possible. That's not ever anyone's preference. People want some amount of durability, and availability, and redundancy, and I think that's great. What I've seen in a lot of strong engineering organizations—and this is normally a good thing—is that engineers really want to do the best thing, at least in my experience. 



And so they're very optimistic about some of the engineering work they do and how available they want things to be, and the pricing of some of these things just needs to be considered and architected for. So, I don't think you necessarily have to accept that being this available and this durable is prohibitively expensive, but the ways that people do it naively can be. And having to think through the ways to solve the problem, I think, is really interesting. Another example of that is EBS versus EC2 costs. We recently discovered that if you're trying to run a certain kind of job on instances that need storage attached to them, there are multiple ways to solve that problem, and so what we're really looking at is different ways to solve it with different AWS resources. And the pricing can matter on those kinds of things.



Corey: Oh, absolutely. And there are edge cases that cut people to ribbons all the time. Almost every time you see io1 EBS volumes, my default response is, “That's probably not what you mean to be doing.” You can get gp2, which is less expensive, to similar performance profiles up to a certain point, but before you hit that, an awful lot of the instances will wind up having instance throughput limits. And that's the fun part is, no matter what AWS service you look at, by and large, there are going to be interesting ways to optimize once you hit a certain point of scale. 
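To make the io1-versus-gp2 point concrete, here's a small sketch comparing monthly costs for the same volume size and IOPS target. The per-GB and per-IOPS prices are illustrative (roughly us-east-1 list prices) and change over time:

```python
# Rough gp2 vs. io1 cost comparison for the same capacity and IOPS target.
# Illustrative us-east-1 list prices; verify against current EBS pricing.

GP2_PER_GB_MONTH = 0.10
IO1_PER_GB_MONTH = 0.125
IO1_PER_IOPS_MONTH = 0.065

def gp2_cost(size_gb: int) -> float:
    return size_gb * GP2_PER_GB_MONTH

def gp2_baseline_iops(size_gb: int) -> int:
    # gp2 gets 3 IOPS per provisioned GB, with a floor of 100 and ceiling of 16,000.
    return min(max(3 * size_gb, 100), 16_000)

def io1_cost(size_gb: int, provisioned_iops: int) -> float:
    return size_gb * IO1_PER_GB_MONTH + provisioned_iops * IO1_PER_IOPS_MONTH

if __name__ == "__main__":
    size, iops_needed = 1_000, 3_000  # a 1 TB volume with a 3,000 IOPS target
    print(f"gp2 1 TB: ${gp2_cost(size):.0f}/mo, baseline {gp2_baseline_iops(size)} IOPS")
    print(f"io1 1 TB @ {iops_needed} IOPS: ${io1_cost(size, iops_needed):.0f}/mo")
```

In this example the gp2 volume already meets the IOPS target at roughly a third of the price, which is why io1 is often not what people mean to be running—subject, as Corey notes, to instance-level throughput limits and other exceptions.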



The hard part in some cases is finding an environment that's using a particular service in such a way where you get to spend time doing some of those deep dives. For example, everyone loves to use EC2—or rather, they use it; whether they love it or not is a subject of some debate—but it turns out that Amazon Chime: maybe there's ways to optimize the bill. We wouldn't know; we've never seen an actual customer. So, finding things that align with everyone, and hitting the big numbers on the bill before working on the smaller ones, is generally an approach that I think, for some reason, sails past people because the bill is organized alphabetically. But we're also seeing that folks tend to wind up getting focused on things that are complicated and interesting to solve from an engineering perspective rather than, step one, turn things you're not using off, because the cloud is not billing you based on what you use so much as on what you're forgetting to turn off. But that's not fun or interesting, so instead, we're going to build this custom bot that powers down developer environments out of hours. And that's great, but development in some cases is 3 percent of your spend, and you haven't bought a reserved instance in two years. Maybe fix that one first.
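As a sketch of that "check the boring stuff first" point, this is roughly how you could ask Cost Explorer how much of your EC2 running-hour footprint reservations actually cover before investing in fancier automation. It assumes boto3 credentials with Cost Explorer access; the dates are placeholders:

```python
# Minimal reservation-coverage check via the Cost Explorer API (boto3).
# Dates and account setup are placeholders; requires ce:GetReservationCoverage.

import boto3

ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer is served out of us-east-1

resp = ce.get_reservation_coverage(
    TimePeriod={"Start": "2020-01-01", "End": "2020-04-01"},
    Granularity="MONTHLY",
)

for period in resp["CoveragesByTime"]:
    hours = period["Total"]["CoverageHours"]
    print(
        period["TimePeriod"]["Start"],
        f'{hours["CoverageHoursPercentage"]}% of running hours covered by reservations',
        f'({hours["OnDemandHours"]} on-demand hours left uncovered)',
    )
```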



Melanie: It is so interesting that you say that because I do know an engineer who has built a bot to spin down development instances and, actually, I think it was really quite effective because the development instances were so expensive, and a lot of developers were not using them. So, in that case, it was a really great tool. What they didn't know is that a lot of the development instances were actually at one point spun up as CI jobs. Someone had an interesting idea where we could run integration tests on these development machines, so they glued everything together and it didn't work that well, and then they forgot to turn them off, and the bot didn't account for those. So, over time, what we saw is that the bot was running, but machines weren't going down, and it was because there were so many of this other kind of machine up. 



And so what it really came back down to is: what are you actually running? Or, what EC2 instances do you have running that you're not using? And even in this case, the idea that there were a lot more development instances up than we thought were being used still didn't get root-caused at the right level. But I definitely have found that a lot of the tooling I've built and a lot of the solutions that I worked on, at least initially, were just low-hanging fruit. There are a lot of things that I think companies don't realize they're not using. Like, if they knew they weren't using it, they wouldn't be paying for it, but they just don't know. Some costs I've seen around that are S3 buckets without lifecycle policies, where you have a lot of data being stored that is never removed, and you're just not accessing it; you're not using it. And another example I've seen is EBS volumes becoming unattached and then never being cleaned up. And so that also can cost a bit over time.
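Here's a minimal sketch of the kind of low-hanging-fruit sweep Melanie describes: list unattached EBS volumes and flag S3 buckets with no lifecycle configuration at all. It only reports rather than deletes, since context matters; it assumes boto3 and the relevant read-only IAM permissions, and is generic, not Airbnb's actual tooling:

```python
# Report (don't delete) two common sources of quiet spend:
#  1. EBS volumes in the "available" state, i.e. attached to nothing.
#  2. S3 buckets with no lifecycle policy configured.

import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2")
s3 = boto3.client("s3")

# 1. Unattached EBS volumes quietly accruing $/GB-month.
paginator = ec2.get_paginator("describe_volumes")
for page in paginator.paginate(Filters=[{"Name": "status", "Values": ["available"]}]):
    for vol in page["Volumes"]:
        print(f'unattached volume {vol["VolumeId"]}: {vol["Size"]} GiB, created {vol["CreateTime"]}')

# 2. Buckets with no lifecycle configuration (candidates for expiration/tiering rules).
for bucket in s3.list_buckets()["Buckets"]:
    try:
        s3.get_bucket_lifecycle_configuration(Bucket=bucket["Name"])
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchLifecycleConfiguration":
            print(f'no lifecycle policy on bucket {bucket["Name"]}')
        else:
            raise
```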



Corey: Oh, yeah. Part of the problem, too, is a lot of tooling in this space claims to solve these problems perfectly. The challenge is that all of them lack context. “Hey, that data in that S3 bucket has never been accessed, so we can get rid of it” is probably accurate if you're referring to build logs from four years ago; probably not if you're referring to the backups of the payment database. So, there's always going to be a strange story around what you can figure out programmatically versus what requires in-depth investigation by someone who has the context to see what is happening inside that environment. 



I think a lot of the spinning things up and never turning them off is, in some respects, a culture problem. First, people are never as excited to clean up after themselves as they are to make a mess, but in some companies this is worse. Back in the days of data centers, you wanted a new server? Great, if you have an IT team that's really on the ball, you can get something racked and ready to go in only six short weeks. So, once you've run your experiment, would you ask them to turn it off? Absolutely not. If you have to run it again, it'll be six weeks until they wind up getting you another one, so you keep it around. I've seen some shops where they run idle nonsense, like Folding@home, on fleets just to keep utilization up so accounting doesn't bother them. It's really a strange and perverse incentive, but this idea of needing to make sure that people aren't spinning things up unnecessarily can counterintuitively cause more waste than it solves for.



Melanie: So, the basic idea is that engineers want to hold on to the things they spin up?



Corey: They want to hold on to things if it's painful to turn them off.



Melanie: Okay, yes.



Corey: Or rather, if it's painful to get it spun back up where, if it takes you three hours of work to get something up and running, once it's up and running, you're going to leave it there because you don't want to go through that process of spinning it up again. Whereas if it's push-button and receive this thing that you were using almost with no visible latency, then people are way more willing to turn things off.



Melanie: Where I have seen hesitancy in turning things off, it generally comes from a state where they know that it's painful to spin up again, and they really don't want to ever do that again, and from cases where people just aren't sure if it's safe; they just don't know, and they're afraid of there being consequences. And so, with our old-school development environments, I mean, that was another problem with them: people didn't want to spin them down even if they hadn't used them in quite a while, because of the way those machines were configured. They're not using containers, there's a lot of stateful data, and that means people want to hold on to them because spinning them up again is so painful. And what I’ve found with us moving to a lot of containerized technology and stateless things is that, at least in those cases, spinning things down to the right utilization has been a lot less controversial. 



And another strategy has been making it not the developers' problem. So, when you don't have autoscaling, and you don't have sophisticated capacity management in place yet, a lot of developers tend to over-provision things because that's how they handle traffic spikes. They just try to make sure they always have enough compute to handle the traffic spike, but that's not necessarily efficient. So, you make it not their problem and you just say, okay, well, this service can target 50 percent CPU utilization, let's say. And then we can, behind the abstraction layer, sort of spin things up and down—or in this case, Kubernetes is doing it. You can also use auto scaling groups with EC2 or other solutions. I have found that, in general, trying to get rid of that problem and, sort of, get rid of the attachment is one way to solve it. But you'll always find cases where you have to kind of deal with that, sort of, perverse incentive.
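As one concrete version of "make it not the developer's problem," here's a hedged sketch of a target-tracking policy that holds an EC2 Auto Scaling group at roughly 50 percent average CPU; the group name is a placeholder. In Kubernetes, the equivalent behind-the-abstraction mechanism would be a Horizontal Pod Autoscaler targeting CPU:

```python
# Target-tracking scaling policy: the group adds and removes instances to hold
# ~50% average CPU, so individual teams don't have to over-provision "just in case."
# The ASG name is a placeholder; requires autoscaling:PutScalingPolicy.

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-service-asg",     # placeholder group name
    PolicyName="target-50-percent-cpu",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 50.0,
    },
)
```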



Corey: Right. And you also find that there's this idea as well that, oh, we're going to build tooling to solve all of this. But it turns out that mistakes made while turning things off can show, and the first time in most shops that a cost savings initiative takes production down, you're often not allowed to try to save money anymore because, “Well, we tried that once and it ended badly.” There often need to be better safeguards; people dive into these things with the best of intentions, but without the real-world experience—or at least the scars that come from real-world experience—of having tried such things in the past.



Melanie: Yeah. I think if you're an inexperienced shop trying to work on cost savings, be really careful, I guess would be my advice. What you're doing, really, with cost savings, is you're trying to run—like with compute, you're trying to run things more efficiently. I mean, there's services and applications that just have never run this hot before, and they might not perform well under those kinds of circumstances. I can imagine cases where you don't think anyone's using this S3 bucket and you delete the bucket and, well, now you're in that situation. 



And so I think when you're looking at cost savings, every operation is a risky one. And so for me, taking my reliability background and applying that to this problem has been really helpful. I mean, having runbooks, having operation plans, having the needed metrics and introspection to make these changes. One of the biggest changes I've made here at Airbnb was in places where we just didn't know what was being used. There just was no observability. 



And Amazon offers products for this, like S3 analytics and metrics on usage and things like that, and just enabling those for the buckets where it made sense has been really helpful. I will say that, like you've said, there's an edge case for everything, so there are certain buckets that have such an extraordinary number of objects that enabling this kind of observability would be very expensive. And that's the other category I've seen: people not understanding where the bill can become exponential, and S3 buckets with a lot of objects are one of them.
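For the observability-first step, here's a small sketch that reads the free daily S3 storage metrics S3 already publishes to CloudWatch (bucket size and object count), which avoids the per-object cost concerns Melanie raises. The bucket name is a placeholder, and boto3 is assumed:

```python
# Read S3's free daily storage metrics from CloudWatch: bucket size and object
# count, without enabling per-bucket analytics or inventory. Bucket name is a
# placeholder.

from datetime import datetime, timedelta
import boto3

cw = boto3.client("cloudwatch")
bucket = "example-bucket"   # placeholder

def latest_daily_s3_metric(metric_name: str, storage_type: str) -> float:
    resp = cw.get_metric_statistics(
        Namespace="AWS/S3",
        MetricName=metric_name,
        Dimensions=[
            {"Name": "BucketName", "Value": bucket},
            {"Name": "StorageType", "Value": storage_type},
        ],
        StartTime=datetime.utcnow() - timedelta(days=3),
        EndTime=datetime.utcnow(),
        Period=86400,          # these metrics are published roughly once a day
        Statistics=["Average"],
    )
    points = sorted(resp["Datapoints"], key=lambda p: p["Timestamp"])
    return points[-1]["Average"] if points else 0.0

size_gb = latest_daily_s3_metric("BucketSizeBytes", "StandardStorage") / 1e9
objects = latest_daily_s3_metric("NumberOfObjects", "AllStorageTypes")
print(f"{bucket}: ~{size_gb:,.1f} GB, ~{objects:,.0f} objects")
```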



Corey: Oh, yes. I saw one once that was just shy of 300 billion objects in a single bucket.



Melanie: Yep.



Corey: You try and iterate through those, it'll complete two weeks after the earth crashes into the sun. And their response, when you asked them about it—what is this?—was that they'd tried to build some custom database-style thing, and they said, “This may not have been the best approach.” I'm going to stop you there. It was not. But at some point, things become so big you can't instrument them using traditional methods and have to start looking at new and creative ways. Things that are super easy when you do a test case on a small handful of resources explode in fire, ruin, and pain when you get to a point of scale.



Melanie: Yeah, and the other thing I noticed is that AWS’s billing doesn't necessarily have these safeguards right off the bat. So, one of the first things I would do is implement some safeguards so that you don't shoot yourself in the foot and make it worse. And the other one is to build an understanding of what observability you need and what you can enable. And so that's been helpful.
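One minimal form of the safeguards Melanie alludes to is a budget alert, sketched here with the AWS Budgets API via boto3 so a runaway experiment pages someone before the invoice does. The budget amount, threshold, and email address are placeholders, not recommendations:

```python
# Basic monthly cost guardrail via AWS Budgets: email an alert at 80% of a
# placeholder budget. Requires budgets:CreateBudget and sts:GetCallerIdentity.

import boto3

account_id = boto3.client("sts").get_caller_identity()["Account"]
budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId=account_id,
    Budget={
        "BudgetName": "monthly-cost-guardrail",
        "BudgetType": "COST",
        "TimeUnit": "MONTHLY",
        "BudgetLimit": {"Amount": "50000", "Unit": "USD"},   # placeholder amount
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,                # alert at 80% of the budget
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "cloud-costs@example.com"}  # placeholder
            ],
        }
    ],
)
```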



This episode is sponsored in part by ParkMyCloud, fellow worshipers at the altar of turn that [BLEEP] off. ParkMyCloud makes it easy for you to ensure you're using public cloud like the utility it's meant to be. Just like water and electricity, you pay for most cloud resources when they're turned on, whether or not you're using them. And just like water and electricity, keep them away from the other computers. Use ParkMyCloud to automatically identify and eliminate wasted cloud spend from idle, oversized, and unnecessary resources. It's easy to use and you can start reducing your cloud bills today. Get started for free at parkmycloud.com/screaming.


Sponsorships can be a lot of fun sometimes. ParkMyCloud asked, "Can we have one of our execs do a video webinar with you?" My response was, "Here's a better idea: how about I talk to one of your customers instead, so you can pay me to make fun of you?" And it turns out I'm super convincing. So that's what's happening.

Join me and ParkMyCloud's customer Workfront on July 23rd for a no-holds-barred discussion about how they're optimizing AWS costs, and whatever other fights I manage to pick before ParkMyCloud realizes what's going on and kills the feed. Visit parkmycloud.com/snark to register. That's parkmycloud.com/snark.



Corey: Something else that I think is not well understood by folks who are used to much smaller environments is, if I were to check my AWS credentials into GitHub—or GIF-huhb, depending upon pronunciation choice—then I would notice that I had done so pretty much immediately when my $200 a month bill is now $15,000. Past a certain point of scale, even incredibly hilarious spin-ups of all kinds of instances that are being exploited, or misconfigurations that are causing meteoric growth disappear into the low-level background noise because it takes a lot to have even a 10 percent shift in big numbers, versus in my case, if I have a Lambda function get out of hand, I can have a 10 percent shift.



Melanie: Absolutely. And I think that is what makes cost such a snowballing problem. People don't understand why the bill is increasing at this rate, and it's because the crazier the bill gets, the more things get hidden by the crazy bill, and the harder it gets to go after and fix all these things in a way that is systematic and prevents it from happening again. And what I found is we had to build a lot of custom tooling, and one of the most important pieces is not necessarily showing alphabetically what's the most expensive or anything like that, but showing the difference. Like, these costs that we have tagged, their delta over the last day or the last three days is this big, and so what actually goes to the top of the list is that delta, the change in spend.
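As a toy sketch of the delta-first view Melanie describes (and, per Corey's warning just below, only a sketch): group the last two days of spend by a cost-allocation tag via the Cost Explorer API and sort by the change rather than the absolute amount. The tag key "service" is an assumption:

```python
# Rank tagged costs by day-over-day delta rather than absolute spend.
# Assumes boto3, ce:GetCostAndUsage access, and a cost-allocation tag named
# "service" (placeholder).

from collections import defaultdict
from datetime import date, timedelta
import boto3

ce = boto3.client("ce", region_name="us-east-1")
end = date.today()
start = end - timedelta(days=2)

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "service"}],
)

per_day = defaultdict(dict)  # tag value -> {day: cost}
for day in resp["ResultsByTime"]:
    day_start = day["TimePeriod"]["Start"]
    for group in day["Groups"]:
        tag_value = group["Keys"][0]          # e.g. "service$search-api"
        cost = float(group["Metrics"]["UnblendedCost"]["Amount"])
        per_day[tag_value][day_start] = cost

deltas = []
for tag_value, days in per_day.items():
    ordered = [days[d] for d in sorted(days)]
    if len(ordered) >= 2:
        deltas.append((ordered[-1] - ordered[0], tag_value))

# Biggest increases first: the things worth looking at today.
for delta, tag_value in sorted(deltas, reverse=True)[:20]:
    print(f"{tag_value:<40} {delta:+10.2f} USD/day")
```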



Corey: I would like to point out, just for the record, that if someone else is listening to this thinking, “Oh, I'm going to go build some custom cost tooling myself,” don't do that. You don't want to do that. You want to go ahead and see if there's something else out there first before you start building your own things. I promise, having fallen down that trap myself, please learn from my mistake. 



Something I want to talk to you about that is, well, how do I put this in whatever the opposite of least confrontational way possible is. Okay, so at Airbnb, you run an awful lot of really interesting, well built, very clearly defined awesome technologies, and also Kubernetes. What have you found that makes Kubernetes interesting—if anything—from an AWS billing perspective?



Melanie: From an AWS billing perspective, what I would say is, when you work with Amazon Web Services, a lot of the time you're working with the different services that they define, and so their billing can show you how you use their resources. When you run your own infrastructure on top of EC2—in this case, we run our own Kubernetes clusters on top of EC2—you don't get the same insight. And so when you're looking at cost, what you get is the cost of your Kubernetes clusters. It's not that helpful to know that this Kubernetes cluster got more expensive from one day to the next. What's helpful is to know which namespaces or which services, essentially, got more expensive and why. And so having to do that second-level attribution, as I call it, is necessary to understand your compute costs. And so I do think there is a lag when you use some of the latest technologies that aren't AWS services: you take on a lot of the maintenance of owning and running that technology, and also the cost savings work for that technology.
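Here's a rough sketch of that second-level attribution: apportion the cluster's EC2 bill to namespaces by their share of requested CPU. It assumes the kubernetes Python client, a local kubeconfig, and a placeholder figure for the cluster's monthly cost; it illustrates the idea rather than Airbnb's actual tooling:

```python
# Apportion one opaque "Kubernetes cluster" EC2 bill to namespaces by requested CPU.
# Assumes the `kubernetes` Python client and a kubeconfig with read access to pods.

from collections import defaultdict
from kubernetes import client, config

CLUSTER_MONTHLY_EC2_COST = 100_000.0   # placeholder: pull the real number from billing

def cpu_to_cores(value: str) -> float:
    # Handles the common request forms: "500m" (millicores) or "2" / "1.5" (cores).
    return float(value[:-1]) / 1000 if value.endswith("m") else float(value)

config.load_kube_config()
pods = client.CoreV1Api().list_pod_for_all_namespaces(watch=False)

requested = defaultdict(float)  # namespace -> requested CPU cores
for pod in pods.items:
    for container in pod.spec.containers:
        reqs = container.resources.requests if container.resources else None
        cpu = (reqs or {}).get("cpu")
        if cpu:
            requested[pod.metadata.namespace] += cpu_to_cores(cpu)

total = sum(requested.values()) or 1.0
for namespace, cores in sorted(requested.items(), key=lambda kv: -kv[1]):
    share = cores / total
    print(f"{namespace:<30} {cores:8.1f} cores  ~${share * CLUSTER_MONTHLY_EC2_COST:,.0f}/mo")
```

A real attribution pipeline would also account for memory, node types, data transfer, and idle headroom, which is exactly why the tagging and standardization Melanie mentions next matter so much.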



Corey: Part of the challenge, too, is that folks who are really heavily invested in Kubernetes are inherently trying to solve infrastructure problems, or engineering problems, and that's great. No one is setting out to deploy Kubernetes—I could stop that sentence there and probably have a decent argument—but no one is setting out to deploy Kubernetes from a cost optimization, or more importantly, cost allocation perspective. So, whenever you wind up with a weird billing story on top of Kubernetes, a lot of things weren't done early on, and now there's a bit of a mess because, from the cloud provider’s perspective, you have one application that is running on top of a bunch of EC2 instances, or otherwise. And that application is called Kubernetes, and it is super weird because sometimes it does all kinds of weird data transfer, sometimes it beats the crap out of S3, sometimes it winds up having weird disk access patterns, but figuring out which workload inside of Kubernetes is causing a particular behavior is almost impossible without an awful lot of custom work. Today, I'm not aware of anything generic that works across the board from that perspective. Are you?



Melanie: I am not aware.



Corey: I was hoping you'd have a different answer to that.



Melanie: Well, I can say that because of some standardization and implementation details of how we implemented Kubernetes and how we hosted it on AWS, we were able to come up with a strategy for tagging different namespaces and getting them attributed to the right services, and therefore the right service owners. But I think we did have some insight about standardization and a very opinionated usage of Kubernetes. I think if you didn't have that, you would be in a much tougher position. And I also think that's probably why it is tough to find a solution out there so far: you can use this super-pluggable, flexible infrastructure in a lot of different ways, and so it's hard to build tooling that just works. I mean, how you define namespaces, I think, would be really huge for knowing what is increasing the Kubernetes data transfer costs, or whatever it is.



Corey: A further problem that goes beyond that, too, is every time I've looked at workloads inside of Kubernetes, as we talked about earlier, there's not a lot of zone affinity built into this. When a service is going to ask a different microservice—because everything's a microservice, because why shouldn't every outage become a murder mystery instead?—it reaches out to whatever's defined with no awareness of the fact that that very well might be someplace super expensive versus free. And again, AWS helps with approximately none of this, because data transfer with AWS is super expensive because bandwidth is a rare and precious thing, unless it's bandwidth into AWS, in which case it's free; put all your data there, please. Have you found that there's any answer to that other than just building more intelligent service discovery on top of it, and then having to shoehorn it into various apps?



Melanie: Well, what I will say is that I think we were probably one of the first companies to truly try to run AZ-aware workloads on Kubernetes on AWS. And so we did run into interesting problems and ways of solving this. Right away, on the orchestration layer, we found bugs in the Kubernetes scheduler that prevented AZ balance from happening. An engineer on one of Airbnb's infra teams actually made some changes to the Kubernetes scheduler upstream. Once we had that fixed, it was possible to have pods be balanced across AZs, so we could do AZ-aware routing in a way that didn't just blow up one zone because it was not balanced. 



And then, because we've been working on using Envoy, which has quite good AZ-aware routing support, we've been able to use that to route traffic. But there's this other idea in Kubernetes about scheduling preferences: scheduling pods in such a way that the same AZ is preferred, but not required. You want to prefer this because it's much cheaper and it's a better solution, but if you choose the required option, what you'll get is actually an outage if you have problems with an AZ. So, that's a little bit in the weeds, but what I have found is we had to solve it at multiple layers.
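For readers who want the "preferred, not required" distinction spelled out, here is a generic pod-affinity fragment expressed as a Python dict, the shape you would feed to the Kubernetes API. The label selector and topology key are illustrative examples, not Airbnb's actual configuration:

```python
# A generic illustration of soft (preferred) zone affinity vs. a hard requirement.
# The app label and topology key are placeholders for whatever your services use.

preferred_same_zone_affinity = {
    "podAffinity": {
        # Soft preference: the scheduler tries to co-locate these pods in the
        # same zone as their upstream, which keeps traffic (and cross-AZ data
        # transfer charges) local when it can...
        "preferredDuringSchedulingIgnoredDuringExecution": [
            {
                "weight": 100,
                "podAffinityTerm": {
                    "labelSelector": {"matchLabels": {"app": "upstream-service"}},
                    "topologyKey": "topology.kubernetes.io/zone",
                },
            }
        ]
        # ...whereas using "requiredDuringSchedulingIgnoredDuringExecution" here
        # would refuse to schedule pods at all when that zone is unhealthy,
        # turning a cost optimization into an availability problem.
    }
}
```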



Corey: And that's sort of the problem because it feels like Kubernetes was sold, perhaps incorrectly, as this idea of you have silos between Dev and Ops, but that's okay because you don't have to have any communication between those groups. That was never true, but this is one of those areas where that historical separation seems like it's coming home to roost a bit. One of the whole arguments behind containerization early on, was that, oh, now you can have developers build their application, they don't have to worry at all about the infrastructure piece, and then they throw it over the wall, more or less, and let operations take it from there. When you have to build things into applications to be aware of zonal affinity, and the infrastructure absolutely has to be aware of that, it feels like by the time that becomes an expensive enough problem to start really addressing, there's already a large enough environment that was built without any close coupling between Dev and Ops to build that type of cost-effective architecture in the base level. So, it has to almost be patched in after the fact.



Melanie: Yeah, so I think Kubernetes actually does bill itself as DevOps empowerment, which is interesting. The idea that you can sort of create your own configuration and apply it yourself. And in practice, I'd say Airbnb has had a really strong DevOps culture historically, so a lot of our engineers are on call for their own services, their own outages, essentially, and so when services do have a configuration problem, like a Kubernetes problem, it is generally that team that is paged. There is a problem, definitely, with the platform that we've built, where if you have a general issue with supporting AZs, that's going to fall to the infrastructure team to solve, really, because at that point it's just so in the weeds that I think a regular application developer would probably be horrified if you asked them to try to solve AZ-aware routing in Kubernetes for their service.



Corey: I am the opposite of an application developer and I'm still horrified at the idea. It's one of those complicated problems with no right answer that also becomes a serious problem when you're trying to have this built out for anything that is not just a toy problem on someone's laptop. Oh, wow—because not only do you have to solve for this, but you also have to be able to roll this out to something at significant scale. And scale adds its own series of problems that tend to come to light for everyone experiencing them for the first time.



Melanie: Yeah, and I think it's interesting because I think infrastructure is in a Renaissance period right now, where people are really excited about all these technologies, but the whole category is kind of immature. And so there are these growing pains, and I think when you're in my position, you see those growing pains. I think a lot of people are starting to acknowledge that now. And you can still be really excited about all these technologies and possibly be willing to run them in production, but when we use these kinds of technologies, there will be trade-offs, there will be growing pains. These paradigm shifts in how infrastructure is used come in waves, and service mesh, I think, is probably the biggest example: when you are dealing with all these microservices, these other technologies become kind of crucial to running them at a certain scale.



Corey: Yeah, there's really not a great series of stories that apply universally, yet. And I think you're right: things come in waves, where you wind up with more and more layers of abstraction; the complexity increases to the point where something happens, and that all collapses down on itself into something a human being can understand again, and then it continues to repeat. It's almost a sawtooth graph of complexity measured over decades. I think that Kubernetes is one of those areas now where it's starting to get more complex, and at some point you look at all the different projects that are associated with it under the CNCF, and you look around for the hidden camera because you're almost positive you're being punked.



Melanie: [laughs]. A lot of these technologies, I'm really excited by all the development of them, but I can also acknowledge that it's far too complex for, not just the average use case, but any use case. No one wants to be running anything this complex. And it's also fair to say that it would be hard to build something that supports all of these use cases and not end up as complex. But every time we go through this development cycle and iteration, we learn things, and we build it better the next time. 



And so what I'm actually seeing is a proliferation of opinionated platforms being built. So, Kubernetes is actually one of the older ones at this point, although it's surprising to say that with Borg being its predecessor, and now these other opinionated platforms that are really quite new. I mean, you look at the serverless movement, AWS Lambda, Knative as another iteration on Kubernetes. And so I think what we are seeing is people trying out different opinionated platforms and building tooling around it, and I think we are moving in the right direction, but we'll see these waves of complexity until we get there.



Corey: I think that's probably a very fair assessment. And I wish I could argue with it, but everything old becomes new again, sooner or later.



Melanie: Yeah, and I think what we'll find is in some areas we’re really insightful, in other areas we kind of just went too far, and we’ll course correct over time. And, yeah, I think that's what we're seeing right now is I think people are agreeing that it's quite complex, and it has all these implications all the way down to cost. And yeah, I think there'll be—there already is a lot of development there. I think people are actually hopeful that there'll be one platform that comes out, and everyone just tells them to use the platform and it's great. I don't actually think that's going to happen. I think what we'll get is a few specialized platforms that are really good at what they do. So, I'm excited for that future.



Corey: I am too. I'm looking forward to seeing how it shakes out. Melanie, thank you so much for taking the time to speak with me today. If people want to hear more about what you have to say, where can they find you?



Melanie: So, you can reach out to me at @MelanieCebula on Twitter or on my website.



Corey: Excellent. We'll throw the links to both of those in the [show notes]. Melanie Cebula, staff engineer at Airbnb. I am Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on Apple Podcasts. If you've hated this podcast, please leave a five-star review on Apple Podcasts, and then leave a comment incorrectly explaining AWS data transfer.



Announcer: This has been this week’s episode of Screaming in the Cloud. You can also find more Corey at ScreamingintheCloud.com, or wherever fine snark is sold.



This has been a HumblePod production. Stay humble.


