The Podcast
11.14.2019
Networking in the Cloud Fundamentals, Part 3
About Corey Quinn
Over the course of my career, I’ve worn many different hats in the tech world: systems administrator, systems engineer, director of technical operations, and director of DevOps, to name a few. Today, I’m a cloud economist at The Duckbill Group, the author of the weekly Last Week in AWS newsletter, and the host of two podcasts: Screaming in the Cloud and, you guessed it, AWS Morning Brief, which you’re about to listen to.
Transcript



This episode of Networking in the Cloud is sponsored by ThousandEyes. Their 2019 Cloud Performance Benchmark Report is now live as of yesterday. Find out which clouds do what well: AWS, Azure, GCP, Alibaba, and IBM Cloud all have their networking capabilities raced against each other. Oracle was not invited, because we are talking about actual cloud providers here, not law firms. Get your copy of the report today at Snark.Cloud/realclouds. That's Snark.Cloud/realclouds, and it's completely free. Download it, let me know what you think; I'll be cribbing from it in future weeks. Now, for the third week of our AWS Morning Brief Screaming in the Network, or whatever we're calling it, mini-series on how computers talk to one another, let's talk about the larger internet.


Specifically, we begin with BGP, or Border Gateway Protocol. This matters because it's how different networks talk to one another. If you have a whole bunch of different computer networks gathered into a super network, or internet as some people like to call it, how do those networks know where each one lives? Now, from a home user's perspective, or even in some enterprises, that seems like sort of a silly question, because it is. You have a network that lives on your end of things, you plug a single cable in, and every other network lives through that cable. When you're talking about large disparate networks, though, how do they find each other? More to the point, because of how the internet was built, it's designed so that a failure in any single network can be routed around. There are multiple paths to get to different places: some biased for cost, some biased for performance, some biased for consistency. And all of those decisions have to be made globally. BGP is the lingua franca of how those networks talk to one another. BGP is also a hot mess.


It's the routing protocol that runs the internet, and it comprises different networks, in this parlance autonomous systems, or ASes. It was originally designed for a time before jerks ruled the internet, and that's jerks in terms of people causing grief for others, as well as shady corporate interests that are publicly traded on NASDAQ. There's no authentication tied to BGP. Effectively, it is trusted to contain correct data; there's no real signing or authentication proving that someone who announces something through BGP is authorized to do it, and it's sort of amazing the whole thing works in the first place. What happens is, when a large network with other networks behind it winds up doing an announcement, it says, oh, I have routes to these following networks, and it passes them on to its peers. They in turn pass those announcements on: oh, behind me, two hops this way, is this other series of networks, and so on and so forth.
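If you want to see the shape of that announcement-passing in code, here's a toy path-vector sketch: each network prepends its own AS number before re-announcing a route, and a network refuses any path that already contains itself, which is BGP's loop prevention. The AS numbers and topology here are made up for illustration; real BGP has far more policy than "shortest path wins."

```python
# Toy path-vector propagation: each AS prepends itself to the AS path
# before re-announcing to its neighbors. Loop detection = refusing any
# path that already contains your own AS number.
from collections import deque

def propagate(origin_as, prefix, neighbors):
    """Flood an announcement through the graph, recording the AS path
    each network learns. `neighbors` maps AS -> list of peer ASes."""
    learned = {origin_as: [origin_as]}          # AS -> best-known path
    queue = deque([(origin_as, [origin_as])])
    while queue:
        current, path = queue.popleft()
        for peer in neighbors[current]:
            if peer in path:                    # loop prevention
                continue
            new_path = [peer] + path
            # keep the shortest AS path heard so far, like BGP's default
            if peer not in learned or len(new_path) < len(learned[peer]):
                learned[peer] = new_path
                queue.append((peer, new_path))
    return learned

topology = {
    64500: [64501, 64502],
    64501: [64500, 64503],
    64502: [64500, 64503],
    64503: [64501, 64502],
}
routes = propagate(64500, "198.51.100.0/24", topology)
print(routes[64503])   # [64503, 64501, 64500]: two hops back to the origin
```

Notice what's missing: nothing checks whether 64500 is actually allowed to originate that prefix, which is exactly the hole described above.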


Now, this can cause hilariously bad problems that occasionally make the front page of the newspaper when a bad announcement gets out. A few years ago there was an announcement from an ISP that said, oh, all of YouTube lives behind us. That announcement should never have gone out, their upstream ISP should have quashed it, and they didn't. So suddenly a good swath of the internet was trying to reach YouTube through a relatively small link. As you can imagine, TCP terminated on the floor. Not every link can handle exabytes of traffic. Who knew? That gets us to another interesting point: how do these large networks communicate with each other? You have this idea of one network talking to another network. Does money change hands? Well, in some cases, no. If traffic volumes are roughly equal and desirable on both sides, we'll have our networks talk to one another, and no money changes hands. This is commonly known as peering.
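The mechanism behind that YouTube incident is worth a quick sketch: routers prefer the most specific matching prefix, so a bogus, more-specific announcement steals traffic from the legitimate, broader one. The prefixes and AS numbers below mirror how that 2008 incident was widely reported, but this two-entry table is wildly simplified for illustration.

```python
# Longest-prefix match: among all routes covering a destination, the
# most specific one (largest prefix length) wins. A bogus /24 therefore
# beats a legitimate /22 for every address inside it.
import ipaddress

routing_table = {
    ipaddress.ip_network("208.65.152.0/22"): "AS36561 (legitimate origin)",
    ipaddress.ip_network("208.65.153.0/24"): "AS17557 (bogus announcement)",
}

def next_hop(destination):
    """Pick the most specific route that covers the destination."""
    dest = ipaddress.ip_address(destination)
    matches = [net for net in routing_table if dest in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]

print(next_hop("208.65.153.238"))  # the hijacking /24 wins over the /22
print(next_hop("208.65.152.10"))   # only the /22 covers this address
```

No tie-breaking on trust or authorization happens anywhere in that lookup, which is the whole problem.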


 
At that point, everything is mostly grand, because as traffic continues to climb, you increase the links. Both parties generally wind up paying to operate infrastructure on their own side and in between, and traffic continues to grow. Other times it doesn't work that way: you have one network with a lot of traffic, another network that doesn't really have much of any, and people who want to get from one end to the other. Very often this is covered by a transit agreement, and money changes hands, usually from the smaller network to the bigger network, but occasionally in the other direction depending on the specifics of the business model. At that point, every byte passing through is metered and generally charged for. Usually this is handled by large ISPs and carriers and businesses behind the scenes, but occasionally it spills out into public view. Comcast and Netflix, for example, have been having a fantastic public spat from time to time, and this manifests itself when there's congestion and you're on Comcast.


If so, I'm sorry for you, and your Netflix stream starts degrading into lower picture quality. Occasionally it skips or whatnot, and strangely, whenever Comcast and Netflix come to an agreement, of course under undisclosed terms, magically these problems go away almost instantly. Originally this sort of thing was frowned upon, and the FCC got heavily involved, but with the demise of network neutrality in the United States, suddenly it's okay to start preferring some traffic over other traffic through a legalistic framework. This has led to a whole bunch of either malfeasant behavior or normal behavior that people believe is malfeasant, and that doesn't leave anyone in a terrifically good place. I'm not here to talk about politics, but it does wind up leading to an interesting place, because there's an existential problem with the business model for an awful lot of ISPs out there. Generally speaking, when you wind up plugging into your upstream provider, maybe it's Comcast, maybe it's AT&T, maybe it doesn't matter, you're generally trying to use them as a dumb pipe to the internet.


The problem is, they don't want to be a dumb pipe. There's a finite number of dollars that everyone is going to pay for access to the internet, and that is a naturally self-limiting business model, so they're trying to add value with services that don't really tend to add much value at all. My wireless carrier, for example, wants to sell me free storage, and an email address, and a bunch of other things that I just don't care about, because I already have an email solution that works out super well for me. The cloud storage I care about is either Dropbox, something in AWS, or other nonsense; I don't need Verizon's cloud storage. But they keep trying to find alternative business models. Some of these are useful and beneficial to everyone, and others are, well, to be honest, less so.


Comcast, for example, isn't going to build you a search engine that rivals Google's, which is kind of weird on some level, because from a customer service perspective, Comcast and Google are about on equal footing. But a localized ISP isn't going to be able to deliver the kind of user experience that a lot of the global providers do. So they're not able to sell value-added services to end users, and they're not able to effectively shake down upstream providers; I mean, can you imagine if you had to pay Comcast extra to access Google, or if magically YouTube was not accessible through one ISP? People would storm their offices. Discussions around paid peering and transit, and trying to shake down upstream providers, are sort of how a lot of folks are trying to wring more money out of being dumb pipes, but there is an existential business question for them.



That's more to come in another episode, presumably, but now, speaking of interesting behavior that varies between different providers, as mentioned, this is sponsored by ThousandEyes. Their public cloud performance benchmark is terrific. Is AWS Global Accelerator worth the money? Well, for that one, tune in next week, or the week after; I'm not sure what the order is, but we will be doing a deep dive into Global Accelerator. Do all cloud providers pay the same latency toll when they cross China's Great Firewall? There are a bunch of different questions that are answered, and things you may not have expected surface in the report, which you can read now. Go take a look. Sorry, Oracle, you are not invited to have your cloud networking performance tested in the report this year, but there's always next year; just grow a little bit and sign a customer or three. Take a look at Snark.Cloud/realclouds. It's absolutely free and it's fascinating. Thanks again to ThousandEyes for their sponsorship of this ridiculous podcast mini-series.


Now, Netflix has been a famous AWS marquee customer for a long time. They spend presumably boatloads of money on AWS, and they wind up having an awful lot of ridiculously impressive conference talks about how they do what they do, and they're very open about their work with AWS.


But what's not necessarily as well known is that when you fire up a Netflix stream, it doesn't stream from AWS, because the data transfer bills would be monstrous, to start. Instead, they do what any large streaming company does, or any company with significant static assets they need to get to customers: they use a CDN. In Netflix's case, they built their own because of what they do. They call this the Open Connect Project, and details of it are on their website, but what it fundamentally means is that they build boxes that have a whole bunch of hard drive space in them, and they ship them to various ISPs. At times in the United States, Netflix is over a third of internet traffic, so having those ISPs pay for peering or transit and upgrade equipment as Netflix saturates their links isn't a great plan. "Here's a box with all the popular stuff on it that you can put in your data centers and just stream out to your users" is compelling. That's a win for most folks.


Now, most of us aren't shipping boxes places, but there are CDNs you can use: AWS's CloudFront is an example, plus Fastly, Akamai, Cloudflare, and a whole bunch of others that specialize in different things, and a lot of websites use them. What is a CDN? Well, say you have static assets, like CSS (Cascading Style Sheets), images, video, or JavaScript includes, that you don't necessarily want customers half a world away grabbing from your web server. You can have a CDN handle a lot of these things. They can provide hosting for those static files, or they can cache them at edge points of presence, or POPs, much closer to your customers. The benefit is that they put things that would otherwise add significant page load time, or significant latency because of bandwidth concerns, way closer to customers, meaning that each request is fulfilled far sooner. Sure, it might only be a hundred milliseconds or so per request, but if you take a look at modern crappy web design, there are often 30 to 60 different elements gathered just to load a relatively simple page.
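The cache-at-the-edge behavior described above can be sketched in a few lines: serve from local storage on a hit, fall back to the origin on a miss, and refetch once the content goes stale. This is a minimal illustration, not any particular CDN's logic; the origin here is a stub function, where a real edge would make an HTTP request and honor real Cache-Control headers.

```python
# Minimal sketch of a CDN edge POP cache: hit -> serve locally,
# miss or stale -> fetch from origin and remember the result.
import time

class EdgeCache:
    def __init__(self, fetch_from_origin, max_age=300):
        self.fetch = fetch_from_origin   # callable: path -> bytes
        self.max_age = max_age           # seconds, like Cache-Control: max-age
        self.store = {}                  # path -> (body, fetched_at)

    def get(self, path):
        entry = self.store.get(path)
        if entry is not None:
            body, fetched_at = entry
            if time.monotonic() - fetched_at < self.max_age:
                return body, "HIT"       # served from the edge, origin untouched
        body = self.fetch(path)          # miss (or stale): go back to the origin
        self.store[path] = (body, time.monotonic())
        return body, "MISS"

origin_requests = []
def origin(path):
    origin_requests.append(path)
    return b"body { margin: 0 }"

edge = EdgeCache(origin)
edge.get("/static/site.css")   # first request: MISS, hits the origin
edge.get("/static/site.css")   # second request: HIT, origin sees nothing
print(len(origin_requests))    # 1 -- the origin was contacted exactly once
```

Every request the edge absorbs is latency your far-away customer doesn't pay and load your origin never sees.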


Ad networks make this far, far, far worse. Part of the value proposition behind content delivery networks, or CDNs, is that they're also generally terrific at infrastructure. Very often they'll have their own private links that let them talk back to an origin faster than traversing the public internet. There's a lot more on that, by the way, in the Cloud Performance Benchmark report at Snark.Cloud/realclouds. And they're also able, in many cases, to withstand distributed denial of service attacks. This goes back to the aforementioned jerks on the internet. A DDoS, for those who aren't familiar, is when bad actors throw a bunch of garbage traffic at various websites in an attempt to take them down. CDNs are generally used to seeing this, and have a bunch of different mitigations in place. Some of them are technical in nature, as far as being able to identify bad traffic and drop it early, whereas others solve the problem rather handily with giant piles of bandwidth. It's somewhat hard to flood an enormous pipe when it can handle more traffic than you can throw at it.
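"Identify bad traffic and drop it early" covers a lot of ground, but one of the simplest building blocks is a per-client token bucket rate limiter: well-behaved clients never notice it, while a flood burns through its tokens and gets dropped at the edge. This is a generic illustration with arbitrary parameters, not any specific CDN's mitigation.

```python
# Per-client token bucket: requests spend a token; tokens refill at a
# steady rate. A flood exhausts the bucket and gets dropped early.
class TokenBucket:
    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.tokens = capacity
        self.refill = refill_per_sec
        self.last = 0.0

    def allow(self, now):
        """Return True if a request arriving at time `now` should be served."""
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False          # out of tokens: drop the request

bucket = TokenBucket(capacity=5, refill_per_sec=1)
# A burst of 20 requests within 20 milliseconds: only the initial
# capacity of 5 gets through before the bucket runs dry.
results = [bucket.allow(now=0.001 * i) for i in range(20)]
print(results.count(True))   # 5
```

Real mitigations layer things like this with reputation, fingerprinting, and raw bandwidth, but the drop-early principle is the same.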


The best part of all is that CDNs generally tend to be single-purpose and relatively easy to switch between. So if you're looking to have your static assets close to an end user, paying a company that specializes in solving that specific problem, and that has already invested the not-insignificant infrastructure cost to build it out, makes an awful lot of sense. There have been a number of different approaches to figuring out which CDN is best, and the easy answer is: the one that works for you. Every CDN tends to have different strengths and weaknesses. For example, AWS's CloudFront is fantastic at a lot of things, but it takes what feels like years to update a distribution. In practice it's only 20 to 30 minutes, but I sometimes lose interest in the middle of writing a tweet. I don't have that kind of attention span.


To sum all of this up, what really is incredible about the internet is how much goes on under the hood just to make very basic, low-level things work. What's amazing is not that all this complexity is there and you don't have to think about it, but that it works at all, because there's so much that can cause problems. From a technical perspective, whenever you're dealing with real-world infrastructure, it's expensive and it takes a long time to fix. But when's the last time, working with the cloud, that you had to think about any of these things?


 
Of course, if you work at one of the cloud providers, that does not apply to you. Thank you for thinking about these things so those of us building Twitter for Pets and obnoxious troll websites don't have to. That sums up the third week of what I'm calling Networking in the Cloud. I am cloud economist Corey Quinn. If you're enjoying this mini-series, please leave it a five-star review on iTunes. If you're hating this mini-series, you don't have to listen to it, but please leave a five-star review on iTunes anyway, because gamification is how this works. I will be back next week. Thank you for listening to this show, and thanks again to ThousandEyes for their generous sponsorship of my ridiculous nonsense.


Announcer:  This has been a HumblePod Production. Stay humble.