HEY, We’re Building Better Email with Blake Stoddard

Episode Summary

Blake Stoddard is a senior site reliability engineer at Basecamp who’s tasked with running and maintaining Ruby on Rails applications on-premises, in AWS ECS, and in Kubernetes. Previously, he served as the chief executive officer at Coursix, Inc., a company that built software solutions for schools. In 2018, Blake earned a bachelor’s in business management with a concentration in IT from North Carolina State University. Join Corey and Blake as they discuss the recent saga of Basecamp taking on Apple, Basecamp’s email platform HEY and Blake’s role in its development, tracking pixels and why they’re a terrible thing, how HEY solves the tracking pixel problem, how everything Basecamp designs is intended to last until the end of the internet, what Basecamp’s hybrid cloud environment looks like, why organizations shouldn’t simply move to the cloud to transfer CAPEX to OPEX, how Basecamp uses Kubernetes, and more.

Episode Show Notes & Transcript

About Blake Stoddard

Blake is a Senior System Administrator on Basecamp’s Operations team who spends most of his time working with Kubernetes and AWS in some capacity. When he’s not deep in YAML, he’s out mountain biking.



Links Referenced:

- HEY: https://hey.com
- Signal v. Noise: https://signalvnoise.com
- Blake on Twitter: https://twitter.com/t3rabytes


Transcript


Announcer: Hello, and welcome to Screaming in the Cloud with your host, Cloud Economist Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.



Corey: This episode is sponsored by a personal favorite: Retool. Retool allows you to build fully functional tools for your business in hours, not days or weeks. No front-end frameworks to figure out or access controls to manage; just ship the tools that will move your business forward fast. Okay, let's talk about what this really is. It's Visual Basic for interfaces. Say I needed a tool to, I don't know, assemble a whole bunch of links into a weekly sarcastic newsletter that I send to everyone. I can drag various components onto a canvas: buttons, checkboxes, tables, etc. Then I can wire all of those things up to queries with all kinds of different parameters: POST, GET, PUT, DELETE, etc. It all connects to virtually every database natively, or you can do what I did and build a whole crap-ton of Lambda functions, shove them behind API Gateway, and use that instead. It speaks MySQL, Postgres, Dynamo—not Route 53, in a notable oversight, but nothing's perfect. Any given component then lets me tell it which query to run when I invoke it. Then it lets me wire up all of those disparate APIs into sensible interfaces. And I don't know front-end; that's the most important part here: Retool is transformational for those of us who aren't front-end types. It unlocks a capability I didn't have until I found this product. I honestly haven't been this enthusiastic about a tool for a long time. Sure, they're sponsoring this, but I'm also a customer, and a super happy one at that. Learn more and try it for free at retool.com/lastweekinaws. That's retool.com/lastweekinaws, and tell them Corey sent you because they are about to be hearing way more from me.



Corey: Normally, I like to snark about the various sponsors that sponsor these episodes, but I'm faced with a bit of a challenge because this episode is sponsored in part by A Cloud Guru. They're the company that's sort of famous for teaching the world to cloud, and it's very, very hard to come up with anything meaningfully insulting about them, so I'm not really going to try. They've recently improved their platform significantly, and it brings together both the benefits of A Cloud Guru that we all know and love and the recently acquired Linux Academy. That means there's now an effective, hands-on, and comprehensive skills-development platform for AWS, Azure, Google Cloud, and beyond. Yes, "and beyond" is doing a lot of heavy lifting right there in that sentence. They have a bunch of new courses and labs that are available. For my purposes, they have a terrific learn-by-doing experience that you absolutely want to take a look at. And they also have business offerings as well under ACG for Business. Check them out: visit acloudguru.com to learn more. Tell them Corey sent you and wait for them to instinctively flinch. That's acloudguru.com.




Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. I'm joined this week by Blake Stoddard, senior site reliability engineer at a company that has been in the news a fair bit lately: Basecamp. Blake, welcome to the show.



Blake: Thanks for having me, Corey.



Corey: So, Basecamp was always this sort of aberration in tech circles. You rather famously did not take VC funding; you were originally called, I believe it was 37signals. Campfire was an early Slack-alike, only less awful in some ways than Slack has become; and recently you launched a second product, for lack of a better term, HEY, a controversial email client. Controversial because it doesn't let people track everyone who uses it to read email while they're sleeping.



Blake: Totally. That is exactly how it went.



Corey: So, it's been an interesting ride, most notably where people would have heard about this, is—in many cases—in small publications like The Wall Street Journal and The New York Times because post-launch, Apple decided that it was inconceivable that anyone could make money without giving Apple a 30 percent cut, and one of your f—your two founders, Jason Fried and David Heinemeier Hansson—did I pronounce that correctly?



Blake: Mm-hm. Yep.



Corey: —or DHH as we all know him, more or less went nuclear against Apple, which, given that my entire world professionally revolves around kicking a trillion-and-a-half-dollar company right in the shins, really resonates with me. But in my case, I do it as a labor of love, not out of a higher principle, necessarily, or a business perspective. In fact, many of my business advisors urge me constantly to stop making as much fun of Amazon as I do, so then I make fun of them until they leave me alone. What's it been like? What has the experience been for someone who more or less starts off with, "Yeah, I'm a senior SRE; my entire job here is to keep the servers going, and oh, there's our company in The New York Times." It's got to be a trippy experience.



Blake: It's been fun, yeah. I feel like HEY, the infrastructure behind HEY has been under my wing for a long time, for probably nine months to a year. I was really the sole infrastructure engineer working on HEY, so I’ve really seen it from the beginning. And as we got closer and closer to launch, we did all this load testing, we predicted like, okay, in the first couple months, we want to see, I don't know, x hundred thousand customers, and we knew the resource amounts needed to meet that. And we expected to get there in, I don't know, maybe six months. That sounded great. 



And then leading up to the launch, and the day of the launch, everything just goes viral, and it's in the news here and there. And then all of a sudden, we're looking at traffic graphs, and we're seeing the numbers that we expected to see six months into the launch within, like, two weeks of the launch. It's been a wild ride to scale this app up, and to be able to do it with very little customer-facing effect. Everything has gone to plan, which is an interesting thing to say from an ops perspective. It's all worked great. We're proud of how it's gone.



Corey: To be clear, in the interest of full disclosure, I am a HEY customer. I am a huge fan of the idea of what HEY is built around, specifically the way it calls out and shames tracking pixels and various sketchy things that have—I'm just going to say it—infested email for a very long time, and it is terrific seeing a company take a stand against this.



Blake: Tracking pixels are something that we hold near and dear to our hearts—in the sense that we want to extinguish them from the earth. And HEY helps us do that.



Corey: Oh yeah. To be very transparent, the Last Week in AWS newsletter does have a tracking pixel at the end that tracks opens in aggregate. There's also a custom link fuzzer that I have built on top of this, which tells me, in the aggregate, cool, I wound up sending 28 links last week to—I don't know, what is it now—21,000 people at the time of this recording, and I want to know how many people clicked any given link, and to see a list of the top-performing links. What are the top five? I care, in the aggregate, that some number of individuals have clicked the link, but I could not possibly care less about which individual person clicked a link. I want to know what's resonating with the audience versus what isn't, in other words. And frankly, no one ever hits reply, so there's no real other good way to get that data.



Blake: Yeah, and I think Last Week in AWS does this perfectly. It sits between the ideal case of no tracking pixels whatsoever and the current norm of every email marketing company including a tracking pixel that's linked back to the individual user.
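For readers curious what that aggregate-only approach looks like in practice, here is a minimal sketch: links redirect through a counter that increments a per-link total and records nothing about who clicked. This is a stdlib-only illustration with hypothetical names, not the actual Last Week in AWS tooling.

```python
# Minimal sketch of aggregate-only click tracking: each newsletter link
# points at /r/<link-id>; we bump a per-link counter and redirect, never
# recording anything about the person who clicked.
from collections import Counter
from http.server import BaseHTTPRequestHandler, HTTPServer

LINKS = {"k8s-post": "https://example.com/kubernetes-deep-dive"}  # link ID -> destination
clicks = Counter()  # per-link totals only; no user identifiers, no cookies

class Redirector(BaseHTTPRequestHandler):
    def do_GET(self):
        link_id = self.path.removeprefix("/r/")
        dest = LINKS.get(link_id)
        if dest is None:
            self.send_error(404)
            return
        clicks[link_id] += 1  # the only thing recorded: an aggregate count
        self.send_response(302)
        self.send_header("Location", dest)
        self.end_headers()

if __name__ == "__main__":
    # Every link in the newsletter points at http://host:8000/r/<link-id>.
    HTTPServer(("", 8000), Redirector).serve_forever()
```

Sorting `clicks` gives the top-five report Corey describes, with no way to reconstruct who clicked what.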



Corey: What I'm hoping, personally, is that HEY takes off and launches a bit of a revolution. A bit of a revolution—that sounds like a weird way of min-maxing in the same phrase—but I'm hoping that it sparks something where it becomes acceptable to—so our media kit can say we send this to x thousand people, and that's all we've got. But we already have that level of lack of transparency around podcasts, where when we put this podcast out, we have no earthly idea who's going to listen. The single metric we get is, oh, this many downloads. And from there, it's all a great mystery. 



And that's kind of fun in some ways because there's no good way to track people and the only other way to do it is horrifying. For the first three or four months I was doing this podcast, I thought I'd forgotten to turn the microphone on because I got no feedback. Then I went to a conference, and more or less got swarmed by people. “I love the podcast.” “You listen to that?” And it became this really interesting journey of discovery for me. Turns out, it's easier to hit reply to a newsletter—although almost no one does that either—than it is to, “I’m going to pull over while I'm commuting, pull out my phone, and yell at whoever is on this podcast right now.” Turns out you have to have a really, really bad take for that to be someone's response.



Blake: Podcasts are interesting, too, because I feel like—I'm not very old; I'm 23 at the time that we're recording this—back when I was, I don't know, in middle school or high school, I read about podcasts, these cool things that anybody could do. And I really wanted to do one, but the audience just wasn't there. But now we've come around to podcasts being the new wave of things that influence the way people think, and it's been a wild thing to watch change. Back on the tracking pixel thing, I feel like we've started to make a dent in the worldview there. We were talking to Mailchimp recently; now any new Mailchimp campaign doesn't come with tracking enabled by default. Email marketers have started writing blog posts about, like, what are we going to do now that HEY is blocking how we're able to get information? And then we've seen other blog posts from companies where they say, "Okay, HEY is blocking our thing now; here are all the sleazy things we're going to do instead." And then we get to go back and block all those to make their life even more fun.



Corey: I will say, I sometimes look at the feed—because again, when you build a platform, at some point, you're sort of interested to see, "Oh, wow, who's signing up for this stuff?" "How many people sign up when I do X this week?" Or, "How many sign up when I do Y?" I have no idea where these people are coming from, but there's been a pretty steady organic growth of about 100 net new subscribers every week, almost since launch. And I smile, I nod, and it's like when I'm in an airplane: I don't think too hard about the physics of how this works because if you question it, it stops working. That is how I believe these things work. 



And it's nutty, but I started seeing a bunch of people signing up from hey.com email addresses, which is awesome. I signed up myself from my HEY account to see how it went. And there were remarkably few scary shame-y things around what I send, which is kind of awesome, and all in all, it's been a great experience. Now, of course, I have a laundry list of feature enhancements, things that annoy me about HEY. I mean, it is a piece of software, and let's not kid ourselves, the purpose of software is solely to piss people off for not doing things as they would have them done.



Blake: Yes, HEY is definitely one of those pieces of software where you have to follow the way that we envision the software being used, or you're going to have a bad time.



Corey: I will also say, since I had not been tracking the development super closely, it's Imbox: I-M—as in Mike, or Mancy for those who have watched a certain show—box. I look at this, and my immediate response when that pop-up showed up was, "Oh my god, they launched this thing after all this work, and they had a—"



Blake: With a typo.



Corey: "—egregious typo at the top of the screen." And then it was, "Oh, it's not a typo, it's cutesy. I hate it. Thanks." And then, okay, now I've just become accustomed to it. Mostly.



Blake: Well, I think our corporate stance is that your Imbox is for important things; that's what email should be for. And so that's the take we have: it's a box for important mail. It's actually gone as far as customers creating Chrome extensions that find every reference to the word 'imbox' in the app and change it to 'inbox,' to appease their—



Corey: Oh, I've got to get me one of those.



Blake: [laughs]. Appease their brain.



Corey: Yeah, I like it quite a bit. It feels like it needs a few more features before I start taking it incredibly seriously as a mail client, in that, right now, it only easily works with a hey.com address. And as trendy as that is these days, and as much as I appreciate the long-term perspective that Basecamp has brought to all of its stuff, it feels like it's only a half step removed from, "Oh, you can email me at [email protected]," where this is tied to an ISP or provider that very well may not be around for the long haul. 



I was checking the other day: my vanity domain for my personal stuff, sequestered.net, was registered back in 2001, and I have gone through so many life changes and iterative steps forward. At one point, the domain lived on a Postfix box in a rack in downtown LA, and I was down there at least once a month fixing things that I'd horribly broken because, it turns out, remote out-of-band access was not something I had figured out in those days—all the way to now, when it lives at Gmail. I'm not super thrilled with it, but it works well enough for the time being. It's been this iterative process throughout, but the addresses remain the same. Having to trust whatever happens in the future to the hey.com email domain makes me reluctant to give it out to various companies where I'm going to need to continue an ongoing relationship with that contact point in perpetuity.



Blake: Sure. And one of the policies that we have at Basecamp is that everything we create will live until the end of the internet, and we've exemplified that, actually. Our very first product release, Ta-da List—we still run it. It's running on AWS; we moved it from on-prem, and it's, like, shuffled around through all of the different iterations of how we run our infrastructure, but it's still going today. And that's the plan [unintelligible]: we're going to run it forever because that's the policy. We'll run things until the end of the internet.



Corey: That's a very AWS-like policy as opposed to GCP where someone shakes their car keys just off-screen and, oop, forget this. I'm going to go chase that fun noise with the shiny thing.



Blake: What is it—SimpleDB?—that's been around since, I don't know, the dinosaurs were here, and AWS still runs it.



Corey: Andy Jassy, CEO of AWS, is on record in a press interview calling it a failed product, but you can still get it. People say, oh, but it's not in the console anymore. Spoiler: it never was. And I checked the other day—the job posting is currently filled, apparently—they still hire for the SimpleDB team, which feels, on some level, like, wow, I didn't realize you'd hire people directly into it. I assumed someone would screw up, you'd put them on a PIP, and that was the digital Siberia you would ship them off to.



Blake: [laughs]. Oh, that's a great take.



Corey: That's got to be the saddest team there, just because sure everyone is going to insult your products on the internet, but when the CEO of your company does it in a press interview, that can't feel great.



Blake: No, not at all. And I feel like, on the other hand, I'm wary of putting anything on a Google product because I don't know if six months from now it will still be available.



Corey: And that brings us to what I really wanted to talk about by having you on the show. You spoke with the A Cloud Guru folks somewhat recently, and you alluded to some things that I wanted to dive into a bit. Specifically, you mentioned that you are historically an on-prem shop but talked about launching the infrastructure for HEY on top of AWS—which is kind of awesome—and using Kubernetes—which is the exact opposite of awesome. So, tell me a little bit about what would take you from an on-prem environment, where you are currently happily living with—by all accounts—no intent to leave, into launching something on top of a public cloud provider? What's your strategy around that? What's the story?



Blake: So, the current status of our infrastructure is that we are, I guess, technically a hybrid-cloud company. We still have on-premise data centers—we have two of them—with several racks in each, and we still run several of our large revenue-generating apps there. We've actually run applications with their front-end compute in a major cloud provider and their database still on-prem, because it's cost- and performance-prohibitive to run the database in the Cloud. So, we had a mandate to explore the Cloud as an option, to see if we could run the same workloads that we run in our own data centers in a cloud provider's environment at the same or cheaper price, with access to additional managed services that allow us to do more with the same size operation [unintelligible].



Corey: So, it acts more or less as a capability store slash force multiplier, in other words?



Blake: Sure. Yeah. And in fact, since we've moved to the Cloud, we have only grown the team by two, while growing the number of applications that we run by one. So, I guess it's not a great ratio. [laughs].



Corey: [laughs]. True, but at the same time—depending upon the actual percentages, okay, that's a little sketchy at scale—with small numbers, when you look at the amount of time it takes to do these things versus what the alternative is, that's not bad at all.



Blake: No, not at all. And I think the typical Silicon Valley VC-backed startup thing to do would be, "Oh, we're going to launch a new product. Let's hire 300 people just because we can." And in Basecamp's case, that is totally not what happened at all.



Corey: In what you might be forgiven for mistaking for a blast from the past, today, I want to talk  about New Relic. They seem to be a relatively legacy monitoring company, and I would have agreed with that assessment up until relatively recently, but they did something a little out there: they reworked everything. They went open source, they made it so you can monitor your whole stack in one place. And most notably from my perspective, they simplified their pricing into something that is much more affordable for almost everyone. There's even a free tier with one user and a hundred gigs per month, totally free.



Check it out at newrelic.com.




Corey: One thing that I see periodically, when you have an on-prem environment that then decides to expand into the Cloud for some aspects of it, is—how do I put this politely? I guess I don't. You wind up with a VMware model: the payday lender of technical debt, where you're going to just run a bunch of VMs, but now also in the Cloud. You're not really leveraging cloud in that story so much as you are making it look like a version of your data center. You are effectively worsening the cloud environment in order to slightly improve your capability data-center-side. Not that that's inherently a bad thing, but it's not what I would call cost-effective either.



Blake: No, not at all. That's one of the things that I think should be a prime tenet of any corporation looking to move to the Cloud: it should not be seen as a way to shift CAPEX to OPEX, because if you're going to the Cloud, you should be doing it to gain additional value. In our case, when we decided to explore the Cloud as an option, the mandate was explicitly do not lift and shift. In fact, we had an internal typo recently where 'lift and shift' came out as something less polite, and I think that's a pretty accurate representation of how some corporate cloud moves go. 



So, when we started looking at the Cloud, we knew that we wanted to use containers as a way to orchestrate how we run our apps. The things that we run on-premise, we run on bare metal: we deploy them with Capistrano, and we use Chef to manage the boxes. We don't use containers on-premise, but going to the Cloud and being able to use a managed container orchestration service gave us a great chance to look at containerizing our apps—not just because it's the cool thing to do, but because we gain value in being able to bin-pack them better and use compute more efficiently than we could if we were just running them on a fleet of t2.nano instances.
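To make the bin-packing point concrete, here is a toy first-fit-decreasing packer. The numbers are made up and this is purely illustrative—in practice the Kubernetes scheduler does this placement—but it shows why shared nodes beat one small instance per app.

```python
# Toy first-fit-decreasing bin packing: place each container's CPU request
# on the first node with room, opening a new node only when nothing fits.
# Made-up numbers, purely to illustrate the consolidation win.

def pack(requests_millicores, node_capacity=4000):
    nodes = []  # remaining capacity (in millicores) on each node
    for req in sorted(requests_millicores, reverse=True):
        for i, free in enumerate(nodes):
            if req <= free:
                nodes[i] -= req
                break
        else:  # no existing node had room
            nodes.append(node_capacity - req)
    return len(nodes)

# Hypothetical per-container CPU requests, in millicores.
apps = [250, 500, 1500, 250, 1000, 750, 500, 250]

print("one small instance per app:", len(apps))     # 8 instances
print("bin-packed onto 4-vCPU nodes:", pack(apps))  # 2 nodes
```

Eight single-app instances collapse onto two shared nodes; that consolidation of idle headroom is the efficiency Blake is describing.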



Corey: So, tell me a little bit more about the Kubernetes decision. Is this something that predated your HEY build-out? Is that something you've been dabbling with for a long time? I've got to say that Basecamp as a company has seemed relatively immune to hype-driven development in many respects, so seeing that you folks were on Kubernetes was a little bit surprising.



Blake: Yes, it did predate HEY development. When we first moved to the Cloud and decided that containers were the way forward for us, we started out using AWS's Elastic Container Service, or ECS as they prefer to call it. ECS is fine. It works well enough, but our take was that the service was not gaining features at a good enough pace, and we were running into problems with it sometimes being a very big black box, where things would happen, and you didn't know why, and your fix was to open a support ticket and hope they responded to you quickly and helpfully. And our experience with support was that neither of those two things happened.



Corey: Right? You just shame them publicly and loudly in increasingly public places.



Blake: Yes. And sometimes it helps a lot. [laughs]. Yeah, beyond that, when we decided that ECS wasn't the way forward, but we still wanted to use containers, Kubernetes was the thing to do here. And around the same time, we also started looking at leaving AWS in favor of Google Cloud, because if you want to use containers, Google's GKE product is the way to go. I mean, for a project that came out of Google, using their managed version of the product is the way to go. 



And in fact, we did do that. Basecamp 2, we actually ran on GKE for several months with minimal issues, until Google started having a few bad days a lot. And at that point, we decided to move Basecamp 2 back on-prem, but we didn't want to leave Kubernetes as a whole. By this point, we had already started work on HEY, and HEY was living on GKE, too. But when we moved Basecamp 2 away from GKE and brought it on-prem, we decided that we were not going to use Google at all. And since we already had the infrastructure, we were able to make use of the flagship feature of Kubernetes—portability—and move HEY to another Kubernetes platform with very little additional infrastructure work. From there, we moved to EKS, and it's been living there fine since then.



Corey: What made you decide to go EKS instead of rolling your own control plane?



Blake: The price of a managed Kubernetes cluster is less than the price of the number of engineering hours it would take to run Kubernetes on bare metal, or on EC2.



Corey: Nope, very fair. To be blunt, I wish more people accepted that. Something that also struck me as interesting about your exploration of this was the idea of using Kubernetes on top of Spot. That's something I've been advocating for from an economic perspective for a long time, but in practice—we talk about the Cloud as, oh, it's elastic: you can scale up when load increases and scale down when it doesn't, which makes everyone feel super good about not doing exactly that. Everything winds up at the same baseline level of usage. And oh, we'll get to it next sprint, as if suddenly you're going to stop making poor decisions right after this one.



Blake: Using Spot requires a bunch of additional thought and process around getting workloads onto Kubernetes, but once they're there, you're able to reap the benefits of not having committed compute capacity just sitting around when you don't need it; you're able to use all of the instance types that AWS offers you, all while saving money doing it. Now, on the other hand, Spot looks great on paper, it looks great from a billing perspective, but I'd argue that Spot is one of the stickiest services that AWS offers because you can't replicate it on-premise. Google's Preemptible Instance [unintelligible] only gives you a thirty-second window, versus Amazon's two-minute window. Amazon's implementation of Spot works really well for us because we're able to split our workloads into two buckets: we know that some workloads are okay being terminated with the two-minute warning, and the things that we know aren't okay being terminated with a two-minute warning, we run on on-demand instances, and they're able to work well.
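For context, that two-minute warning surfaces through the EC2 instance metadata service: the spot/instance-action endpoint returns 404 until an interruption is scheduled. A minimal sketch of watching for it from on the instance (assuming IMDSv2) so a worker can drain before termination:

```python
# Minimal sketch of watching for the EC2 Spot two-minute interruption
# warning via the instance metadata service (IMDSv2). Run on the instance.
import time
import urllib.error
import urllib.request

IMDS = "http://169.254.169.254/latest"

def imds_token():
    """Fetch a short-lived IMDSv2 session token."""
    req = urllib.request.Request(
        f"{IMDS}/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "60"},
    )
    return urllib.request.urlopen(req).read().decode()

def interruption_notice():
    """Return the interruption notice JSON, or None (endpoint 404s until one exists)."""
    req = urllib.request.Request(
        f"{IMDS}/meta-data/spot/instance-action",
        headers={"X-aws-ec2-metadata-token": imds_token()},
    )
    try:
        return urllib.request.urlopen(req).read().decode()
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return None
        raise

while interruption_notice() is None:
    time.sleep(5)  # poll every few seconds; the warning arrives ~2 minutes ahead
print("Spot interruption scheduled; drain connections and checkpoint work now.")
```

On Kubernetes, a node-level handler like this would typically cordon and drain the node so pods reschedule elsewhere before the instance disappears.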



Corey: So, you said—or implied, at least—that Spot is one of those things that's not able to be replicated on-prem; you're fundamentally not going to get there in the same way. Tell me a little bit more about that. What do you think the Spot market does in terms of lock-in? Do you think this is something that, once people learn to adapt to something like the Spot market, makes it in some ways harder to leave AWS than any pure technology buy?



Blake: No, I don't think that it's a product that causes you to say, "Oh, this is absolutely amazing. What will we do without it?" I think that it makes you feel really nice inside when you see the cost savings, and when you see the ability you have to pull from vast resource pools that you can't replicate on-premise.



Corey: It's neat to start seeing capability stories like that around the things, to be very direct, that people are using in AWS. Because they get on stage, they talk a lot about their ridiculous nonsense, “Oh, it's a machine learning musical keyboard.” Or, “Hey, it's this ridiculous thing that winds up leveraging 18 different implementations of blockchain.” But if you look at what people are actually using/spending the money on, it’s EC2, it’s data transfer, it’s S3, it’s database store, it’s disk. It's the boring building blocks that no one wants to talk about in keynotes because it's not interesting or exciting anymore the way that it once was, but it's what the world runs on. So, things like Spot seemed like they are aligned with that vision of the future.



Blake: Totally. And using Spot isn't just, click a button and your workloads will be fine. It requires taking the time to look through them, seeing what can deal with being torn down with a very short amount of notice, and accepting that maybe your workloads aren't good for Spot. Maybe you don't want to use this. But for a lot of workloads, you totally can, especially for front-end web workloads like ours, we have no reason to not use Spot.



Corey: One of the things that I find compelling is something like that does force you to refresh your instances—or at least have a plan to refresh them—at virtually any moment. Whereas in practice, we talk—in many environments—oh, we believe in cattle instead of pets, and then you look at their environment, like, “Oh, great. So, I can turn any one of these things off?” “Absolutely. Except for that one, that one, that one, that one and, oh my god, that one.” And it comes down to the, ‘what we say versus what we do’ story.



Blake: Yeah, totally. And that's actually one of the things that Kubernetes helps us with a ton when using Spot instances: we can let things come and go in the auto-scaling groups and actually treat these instances like cattle, because Kubernetes takes care of scheduling things onto other nodes when they become available. It keeps track of what's running now and what needs to be running, and we have controllers in the cluster that notice, "Oh, we've lost an instance. We have pods that need to be scheduled, but we don't have that capacity. Add it to the cluster." Kubernetes helps us a ton there.
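That notice-and-add-capacity loop is essentially what the Kubernetes cluster autoscaler does. A stripped-down sketch of the idea using the official kubernetes Python client—illustrative only; scale_up_node_group is a hypothetical stand-in for bumping an EC2 Auto Scaling group, not Basecamp's actual controller:

```python
# Illustrative "notice pending pods, add capacity" loop -- the core idea
# behind the Kubernetes cluster autoscaler, reduced to its skeleton.
# Requires the official client: pip install kubernetes
import time

from kubernetes import client, config

def scale_up_node_group():
    # Hypothetical stand-in: in practice you'd raise the desired capacity
    # of an EC2 Auto Scaling group via the AWS API here.
    print("requesting another node from the auto-scaling group")

def unschedulable_pods(v1):
    """Pods stuck Pending because no node has room for them."""
    pending = v1.list_pod_for_all_namespaces(field_selector="status.phase=Pending").items
    return [
        pod for pod in pending
        if any(cond.reason == "Unschedulable" for cond in (pod.status.conditions or []))
    ]

def main():
    config.load_incluster_config()  # use config.load_kube_config() outside a cluster
    v1 = client.CoreV1Api()
    while True:
        if unschedulable_pods(v1):
            scale_up_node_group()
        time.sleep(30)

if __name__ == "__main__":
    main()
```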



Corey: So, when you take a look across the landscape of what cloud providers offer, what you're able to achieve on-prem, do you think the future is pure public cloud for the sort of things you do? Do you foresee there's going to be Basecamp data centers for the next century or something else entirely?



Blake: I think hybrid cloud is going to continue to be the thing that we see more and more of. I think Google Cloud is pushing it a ton with Anthos. We're doing it ourselves. We've done it before, where we run front-end compute on Kubernetes and we keep databases on-prem, connected over Direct Connects and GCP interconnects. Some things are just not feasible to do in the Cloud, whether that's from a cost standpoint, whether it's from a performance standpoint, or whether there's some service that you want to run that isn't offered in a managed form in AWS. I think hybrid cloud is going to be the end goal. And we've even seen companies like Netflix, which started out being all-in on AWS, decide, "Oh. This is kind of expensive," and run their own data centers with their own hardware sometimes. So, hybrid cloud, I think, will end up being the end goal of what we do, but I think the implementation will be different for everybody.



Corey: What do you think is currently the most misunderstood thing about Kubernetes in the larger ecosystem?



Blake: That you're not required to use it. [laughs].



Corey: [laughs]. I like that quite a bit. This is coming from Basecamp—again, one of your founders was the creator of Ruby on Rails. There's a very anti-trendy-JavaScript-framework philosophy there that, frankly, I wish the rest of the industry shared. So, it's easy to dismiss this as, oh, those are just a few countercultural folks who are trying to stir up trouble. I don't believe that's true. I think there's a lot to be said for using technologies that have been shown to work, and for focusing on the parts of the story that are, I guess, more in line with what sensible people with a business interest are concerned about. And it feels like, on some level, the number one problem everyone's trying to solve for remains their own resume.



Blake: Especially for a company the size of Basecamp. We won't do things unless we see value coming out of them. We didn't look at the Cloud and decide to do it just because it was a cool thing to do. We didn’t look at Kubernetes and decide to start using it just because it was a cool thing to do. We started using Kubernetes because it gave us a path to accomplish our end goal, which was making our compute more efficient, being able to run it in ways that we can't do on-prem, and being able to do more with the same number of operations staff members. Kubernetes and public cloud are great for those things. 



For a product like HEY, if we were running that on-premise, we would have massively over-provisioned the hardware, spending hundreds of thousands of dollars on hardware for a product that we couldn't guarantee was going to perform the way we thought it would. We would have had to hire additional people to be in charge of racking and stacking that gear, and that's just not something we have to worry about when we're running on a managed cloud product and using something like Kubernetes.



Corey: It seems increasingly like solving for business value, rather than for hype, is taking a renewed focus at a lot of companies. I suspect that the longer this pandemic drags on, the better enterprise tech is going to become, just due to lack of executive exposure to ads in airports, which historically seems to have driven an awful lot of very strange decision-making. Now we're seeing "well, what is the business value for this?" type conversations coming out. And I'm optimistic that this is going to usher in a new era of good decision-making. But, on balance, these are human beings we're talking about, and, well, we have some track record showing how that is not true.



Blake: My hope is that we start seeing executive sponsorship lean more into how does the operations team want to run things? How can they run things efficiently? Let the people who do the work make the informed decisions about how they want to work, how they can work most efficiently, how they can meet the goals of the business, rather than just handing it down from the top with no discussion to the implementers.



Corey: If people want to hear more about what you have to say about this and other topics, where can they find you?



Blake: I do a lot of ranting about Kubernetes and public cloud on Twitter at @t3rabytes—the word 'terabytes' with a 3 instead of an E. And I've also been writing more on our company blog, Signal v. Noise, which is at signalvnoise.com.



Corey: And we will put links to those in the show notes, of course. Thank you so much for taking the time to speak with me today. I appreciate it.



Blake: Absolutely. Thanks for having me, Corey.



Corey: Blake Stoddard, senior site reliability engineer at Basecamp. I’m Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on Apple Podcasts, whereas if you hated it, please leave a five-star review on Apple Podcasts and a tracking pixel in the comments.



Announcer: This has been this week’s episode of Screaming in the Cloud. You can also find more Corey at ScreamingintheCloud.com, or wherever fine snark is sold.



This has been a HumblePod production. Stay humble.


