Episode Show Notes & Transcript
Corey: On this show, I talk an awful lot about architectural patterns that are horrifying. Let’s instead talk for a moment about something that isn’t horrifying. CHAOSSEARCH. Architecturally, they do things right. They provide a log analytics solution that separates out your storage from your compute. The data lives inside of your S3 buckets, and you can access it using APIs you’ve come to know and tolerate, through a series of containers that live next to that S3 storage. Rather than replicating massive clusters that you have to care for and feed yourself, you now get to focus on just storing data, treating it like you normally would other S3 data and not replicating it, storing it on expensive disks in triplicate, and fundamentally not having to deal with the pains of running other log analytics infrastructure. Check them out today at CHAOSSEARCH.io.
I talk a lot about databases on this show. There are a bunch of reasons for that, but they mostly distill down to the fact that databases are, and please don't quote me on this as I'm not a DBA, where the data lives. If I blow up a web server, it can have hilarious consequences for a few minutes, but it's extremely unlikely to have the potential to do too much damage to the business. That's the nature of stateless things. They're easily replaced, and it's why the infrastructure world has focused so much on the recurring mantra of cattle, not pets.
But I digress. This episode is not about mantras. It's about databases. Today's episode of the AWS Morning Brief: Whiteboard Confessional returns to the database world with a story that's now safely far enough in the past that I can talk about it without risking a lawsuit. We were running a fairly standard three-tiered web app. For those who haven't had the pleasure because their brains are being eaten by the microservices worms, these three tiers are web servers, application servers, and database servers. It's a model that my father used to deploy, and his father before him.
But I digress. This story isn't about my family tree. It's about databases. We were trying to scale, which is itself a challenge, and scale is very much its own world. It's the cause of an awful lot of truly terrifying things. You can build an application that does a lot for you on your own laptop. But now try scaling that application to 200 million people. Every single point of your application architecture becomes a bottleneck long before you'll get anywhere near that scale, and you're gonna have oodles of fun re-architecting it as you go. Twitter very publicly went through something remarkably similar about a decade or so ago, the fail whale was their error page when Twitter had issues, and everyone was very well acquainted with it. It spawned early memes and whatnot. Today, they've solved those problems almost entirely.
But I digress. This episode isn't about scale, and it's not about Twitter. It's about databases. So my boss walks in as we're trying to figure out how to scale a MySQL server for one reason or another, and casually suggests that we run the database on top of NFS.
Yes, I said NFS. That's Network File System. Or, if you've never had the pleasure, the protocol that underlies AWS’s EFS offering, or Elastic File System. Fun trivia story there: I got myself into trouble, back when EFS first launched, with Wayne Duso, AWS’s GM of EFS, among other things, by saying that EFS was awful. At launch, EFS did have some rough edges, but in the intervening time, they've been fixed to the point where my only remaining significant gripe about EFS is that it's NFS. Because today, I mostly view NFS as something to be avoided for greenfield designs, though you've got to be able to support it for legacy things that are expecting it to be there. There is, by the way, one notable exception: using EFS with Fargate for persistent storage.
But I digress. This episode isn't about Fargate. It's about databases.
Corey: In the late 19th and early 20th centuries, democracy flourished around the world. This was good for most folks, but terrible for the log analytics industry because there was now a severe shortage of princesses to kidnap for ransom to pay for their ridiculous implementations. It doesn’t have to be that way. Consider CHAOSSEARCH. The data lives in your S3 buckets in your AWS accounts, and we know what that costs. You don’t have to deal with running massive piles of infrastructure to be able to query that log data with APIs you’ve come to know and tolerate, and they’re just good people to work with. Reach out to CHAOSSEARCH.io. And my thanks to them for sponsoring this incredibly depressing podcast.
So I'm standing there, jaw agape at my boss. This wasn't one of those many mediocre managers I've had in the past that I've referenced here. He was and remains the best boss I've ever had. Empathy and great people management skills aside, he was also technically brilliant. He didn't suggest patently ridiculous things all that often, so it was sad to watch his cognitive abilities declining before our eyes. “Now, hang on,” he said, “before you think that I've completely lost it. We did something exactly like this before at my old job. It can be done safely, sanely, and offer great performance benefits.” So, I'm going to skip what happens next in this story because I was very early in my career. I hadn't yet figured out that it's better not to actively insult your boss in a team meeting based only upon a half-baked understanding of what they've just proposed. To his credit, he took it in stride, and then explained how to pull off something that sounds on its face to be truly monstrous.
Now, I've doubtless forgotten most of the technical nuance here, preferring instead to use much better databases like Route 53. But the secret that made this entire monstrosity work was that we didn't just use crappy servers with an open-source file server daemon running on top of them as our NFS server. Oh no, we decided to solve this problem properly, by which I mean we used NetApp Filers. Now, I want to pause here to make a few points. First, and most importantly, NetApp is not a sponsor of this podcast in any way. I'm not here to shill for them. In fact, there's a laundry list of reasons not to use NetApps, not the least of which is that they were, and remain, ungodly expensive. Second, over a decade on from the time this story takes place, there are way better ways to get the IOPS you need than shoving the MySQL data volume onto something that's being accessed from the database server via NFS. All of those ways are better than what we did.
Thirdly, in the era of cloud we're in today, which we were assuredly not over a decade ago, you can't get a NetApp Filer shoved into us-east-1 without a whole lot of bribery and skullduggery. So, this model won't even work in a cloud environment in the first place. And fourthly, if you're still trying to do this in the cloud, you absolutely must have total control of the network between the Filer and the database servers, and in a cloud environment, you're not going to be able to do that. So, this entire approach is entirely off the table. But, if you're back in 2009, and you're trying to solve this problem with the exact constraints I've laid out, there are worse approaches you can take.
But I digress. This episode isn't about time travel. It's about databases. This led to other problems, too, once we got this thing up and running. Backups were painful, for example, because while NetApp Snapshots were now the right way to back up the data store (NetApp Snapshots being, of course, awesome), we had to run a script on the database instances that were talking to those volumes in order to quiesce the database. The reason here is that databases are, to be very clear, terrible. To go into slightly more detail, you want all of your in-flight transactions to be written to disk so the snapshot of your database volume isn't captured in an inconsistent state. If you capture a database volume in the middle of a write, there's a great chance that that backup won't be usable, or you'll have data corruption sneaking in.
This wasn't difficult, and we figured it out before it ever bit us, but it was annoying: the snapshot effectively had to block every write to that database (and reads too, as it turned out, because of the way we'd structured things; see a previous episode of this show for more on that). We then had to wait for the in-flight writes to finish, wait for the snapshot to complete and report that it had completed, and then unlock the database again. That meant, realistically, a one- to two-second pause for everything that was using that database during each snapshot, and you could see it on the graphs as plain as day. This wasn't a huge deal, but it definitely was annoying for some of the high-performance workloads involved.
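The quiesce-then-snapshot dance described above can be sketched roughly like this. The SQL statements are standard MySQL; everything else (function names, the snapshot hook) is illustrative, not the actual script from the story:

```python
# A minimal sketch of quiescing MySQL around a storage-level snapshot.
# Note: MySQL releases FLUSH TABLES WITH READ LOCK as soon as the session
# that took it disconnects, so all three steps must run on ONE long-lived
# database session.

def snapshot_with_quiesce(execute_sql, take_snapshot):
    """Flush and block writes, snapshot the volume, then unlock.

    execute_sql: runs a SQL statement on a single persistent session.
    take_snapshot: triggers the storage snapshot (e.g., a NetApp call).
    """
    # 1. Flush in-flight writes to disk and block new ones.
    execute_sql("FLUSH TABLES WITH READ LOCK")
    try:
        # 2. Snapshot while the volume is in a consistent state.
        take_snapshot()
    finally:
        # 3. Always release the lock, even if the snapshot fails,
        #    so a broken backup doesn't freeze the application.
        execute_sql("UNLOCK TABLES")
```

The lock-everything window is exactly the one- to two-second pause described above; the `finally` block is what keeps a failed snapshot from leaving the database locked indefinitely.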
We also, of course, had to stripe our database volumes across a whole bunch of spinning disks living in a NetApp shelf, because this was during an era when large-scale enterprise SSDs weren't really a thing. And as this continued to grow, you wouldn't buy just one more disk; you had to buy an entire shelf. So, there was definitely a step-function shape to what this was going to cost at scale. It wasn't a nice linear ramp like you would see with a cloud provider. And our NetApp reseller, as a direct result, renamed their corporate jet after us because, oh my stars, was this entire thing expensive to pull off, start to finish. A modern cloud architecture is better than what I've just described in virtually every way, unless you're a NetApp reseller. So, now at least you know a little bit more about the root of my aversion to using NFS. It's less to do with the protocol's shortcomings and more to do with, as with oh so very many other things, my tendency to see it as a database.
This has been the AWS Morning Brief: Whiteboard Confessional. I'm Cloud Economist Corey Quinn, and I'll talk to you next week.
Thank you for joining us on Whiteboard Confessional. If you have terrifying ideas, please reach out to me on Twitter at @QuinnyPig and let me know what I should talk about next time.
Announcer: This has been a HumblePod production. Stay humble.