Google’s Biggest Partner with Miles Ward

Episode Summary

Miles Ward is the Chief Technology Officer at SADA, a global business and cloud consulting firm that is Google’s largest partner. Prior to this role, Miles worked as the director and global lead for solutions at Google Cloud for five years and served as a senior manager of solutions architecture at Amazon Web Services for four years. He’s also held director-level positions at Visible Technologies and Insurgent Technologies. Join Corey and Miles as they discuss hybrid and multi-cloud environments, what Andy Jassy believes is the biggest impediment to AWS’ growth, why Miles decided to leave Google after a life-changing five-year run, how managing a team of 80 makes it nearly impossible to get your hands dirty with tech, what a solutions architect does and whether the job description changes from company to company, the product Miles killed at Google and what the experience was like, how much Miles believes it costs Google to turn off products, what the Achilles heel of every public cloud is, and more.

Episode Show Notes & Transcript

About Miles Ward


As Chief Technology Officer at SADA, Miles Ward leads SADA’s cloud strategy and solutions capabilities. His remit includes delivering next-generation solutions to challenges in big data and analytics, application migration, infrastructure automation, and cost optimization; reinforcing SADA’s engineering culture; and engaging with customers on their most complex and ambitious plans around Google Cloud.


Previously, Miles served as Director and Global Lead for Solutions at Google Cloud. He founded Google Cloud’s Solutions Architecture practice, launched hundreds of solutions, built the Style-Detection and Hummus AI APIs, built CloudHero, and designed the pricing and TCO calculators. He also helped thousands of customers, including Twitter, which migrated the world’s largest Hadoop cluster to public cloud; Audi USA, which replatformed to Kubernetes before it was out of alpha; and Banco Itaú, whose intercloud architecture for the bank of the future he helped design.


Before Google, Miles helped build the AWS Solutions Architecture team. He wrote the first AWS Well Architected framework, proposed Trusted Advisor and the Snowmobile, invented GameDay, worked as a core part of the Obama for America 2012 “tech” team, helped NASA stream the Curiosity Mars Rover landing, and rebooted Skype in a pinch.
Miles earned his Bachelor of Science in Rhetoric and Media Studies from Willamette University, and is a three-time technology startup entrepreneur who also plays a mean electric sousaphone.


Links


Transcript


Announcer: Hello, and welcome to Screaming in the Cloud with your host, Cloud Economist Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.



Corey: This episode is brought to you by DigitalOcean, the cloud provider that makes it easy for startups to deploy and scale modern web applications with, and this is important to me, no billing surprises. With simple, predictable pricing that’s flat across 12 global data center regions, and a UX that developers around the world love, you can control your cloud infrastructure costs and have more time for your team to focus on growing your business. See what businesses are building on DigitalOcean and get started for free at do.co/screaming. That’s D-O-Dot-C-O-slash-screaming, and my thanks to DigitalOcean for their continuing support of this ridiculous podcast.





Corey: This episode is brought to you by Spot.io, the continuous cloud cost optimization platform, saving businesses millions of dollars each year on their cloud bills. It's used by some of the world's largest enterprises and fastest-growing startups, like Intel and Samsung (those are enterprises) and Duolingo (that's a startup). Spot.io delivers the optimal balance of cost and performance by leveraging spot instances, reserved capacity, and on-demand. Give your workloads the infrastructure they deserve: always available, always scalable, and always at the lowest possible cost. Visit Spot.io to learn more.






Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. I'm joined this week by the CTO of SADA, Miles Ward. Miles, welcome to the show.



Miles: Hey, thank you so much, Corey. I'm super excited to be here.



Corey: So, until today, my previous interaction with you has largely been busting my chops anytime I say something unflattering about GCP. Now, in your defense, I tend to phrase things in the most obnoxious way possible. But a little background on you first. You were at GCP for a while; did an awful lot of things there. Before that you were at AWS and launched the first version of the Well-Architected Framework, better known as the Well-Actually Framework. But now you're over, doing CTO style work at what I believe to be the largest Google partner out there, if I'm not mistaken.



Miles: That's exactly the gig. You know, I helped build the solutions architecture practice at AWS. I was the fifth member of that team, which is now well over 1,000 folks. And when I came to Google, I came to build solutions architecture inside GCP. 



But in both of those roles, there was always this little gap at the end where customers actually go use those solutions, and implement them, and run production on top of them, and make promises to their customers based on them. And because of the requirements of large-scale companies, there's just always a friction between really being deeply accountable to a customer and, frankly, interacting with them in those complex scenarios. So, I took a role as CTO here at SADA to plug into customers and do the last mile to deliver the solutions that I had been spending a decade designing.



Corey: So, let’s—I guess, rather than looking into the ancient history story, because again, any cloud provider 10 years ago looks very different than it does today—



Miles: Yes. 



Corey: —the modern landscape is very interesting to me. When I'm talking to customers—and my perspective is one of, believe it or not, being relatively impartial. I focused on AWS because that is where my customers tend to live. But we do see an ever-growing constituency representing Azure, representing GCP, Oracle—if you read Oracle press releases—and the entire space is growing. 



So, trying to race clouds against one another is, I guess, an interesting hobby, but doesn't seem to get you very far. At least, that's my personal perspective. So, I guess my first question for you is this: multi-cloud/hybrid-cloud; is it a thing, or is it just something analysts make up as fan-fiction?



Miles: So, I think there's a couple layers there. First, it was Andy Jassy, at one of the re:Invent conferences, who described the biggest impediment to AWS's growth as the lack of a viable competitor. And so, I took that as a bit of a personal challenge, and I'm happy to see the degree to which GCP certainly takes individual customers and has great growth, but I think is firmly cemented as one of the companies that you can evaluate when you're thinking about an infrastructure that you don't have to manage yourself. 



You know, I think the concept of hybrid is something that's been screwed up by everybody involved. The idea that all of us are not persistently in a hybrid operations mode is bizarre, really. I know of no company, anywhere, that consumes the entirety of its technology infrastructure from a single vendor. I actually think such a thing is impossible. How do you get a phone from Microsoft, or a desktop operating system from our friends in Amazon? You need all of the building blocks that are involved. 



So, the fact that we are in hybrid all the time and will be in hybrid forever suggests that, for our friends in the procurement department, the adults who are responsible for the care and feeding of all these, sort of, fun toys that we get to play with, you don't have to be a double-masters in procurement to have gotten the, probably, first line of the first page of the book that talks about best practices for procurement, which is: you shall have more than one vendor for things on which you are critically dependent. So, I think that kind of thinking drives a bunch of behavior from customers who are eager to be able to run in more than one environment, to play these providers off each other, to retain power in the negotiation, and to do so at a cost that's not totally exorbitant and insane. Because you can imagine if you were doing this all from scratch, by hand, on your own: you implement against three different SDKs; you learn, effectively, three different environments, nuances, and all the details of their products. That's just going to be particularly complicated. So, we're watching businesses work really hard to reduce that impediment to the best practice, or that impediment to what the business people in the building require.



Corey: And that is, I think, an interesting story. My view, at least in the hybrid space, is that "we're hybrid-cloud" means we started doing a full-on cloud migration, realized halfway through it's super hard to migrate some workloads, gave up, planted a flag, declared victory, and now we're hybrid. Multi-cloud feels a bit different, in that everyone wants to have a different story as you go down that particular path. So, a clear question: you were at Google for a while, after having done an awful lot of AWS. And again, nothing good can stay at Google because it gets deprecated. Why did you leave? Or you didn't so much leave, as you were Google Reader-ed?



Miles: [laughs], no, quite the opposite. I received what I can only consider a totally shocking retention offer from the Google people. So, I wasn't Reader-ed. I had a great time there, and the people that I worked with I love very deeply, and I say that in the most human and personal way possible. There are a lot of people that will be friends of mine for life, and it was an incredibly hard decision to leave, but there was this requirement, right? I mean, you cannot learn more about what you're doing if you don't actually do it. Hands-on is the way for me. 



I suppose you could do synthetic research about the effectiveness of individual solutions writ large across whole sets of customers, but I just knew we were missing data about this last step, where we actually go out and implement them and hold customers' hands and onboard them to the details and bear the risks together with them for those deployments. So, a big driver for leaving was really needing to be involved in that way, hands-on, with the deployment of customers. 



Another big driver, another important thing was, frankly, I have seen a bit of this movie before. I was at AWS; I helped put together the Partner Programs, the systems integrator and reseller programs at Amazon, together with Dorothy, who's rad. And I watched a bunch of those partners capture big markets and do incredible things as Amazon passed these sort of operational thresholds: they got to double-digit billions in revenue, and they started to do a regional sales model, and they got to capacity management problems because all of a sudden they were getting customer demand that they didn't expect in different areas, and they started to have bunches of regions available so you can do a really global deployment. 



Google has crossed all those same thresholds that Amazon crossed in 2014. So, being able to participate at that scale, I thought, was really clear: the work that was going to happen in systems integrators was going to be some of the most interesting work of this generation, or this phase of the expansion of the Google environment. And then, the last area, the place where, I think, there was some really hard thinking to do, was to unpack what kind of gig I wanted to have. I had started to manage. The solutions architecture team on the Google side had gotten to over 80 people, which doesn't sound very big in comparison to AWS. But think of it more like the—



Corey: It's both larger than two pizzas, no matter how you slice them.



Miles: It is substantially larger than two pizzas. It is also more than one calibration meeting, and the performance management, and personnel management load had become fairly high. So, I'm a dweeb, and I wanted to do stuff with my hands, and it was much easier for me to do that when I wasn't also managing 80 incredibly smart, challenging, hard-working people.



Corey: So, this does lead to an actual question: is solutions architecture the same thing between the two companies—or basically everywhere? Does SA work differ based upon the culture in which you find yourself? That does sound like a leading question because I can't imagine the answer being no, but tell me about it. 



Miles: Oh, sure. So, we were trying to figure out what to call it on the Amazon side, and we had this meeting, and it was Rudy Valdez whose suggestion was solutions architecture, because he had met some Oracle folks that had that title and he thought that just sounded cool. That was basically as much aggressive thought as went into it, which is a little wild for however many thousands of people do that role now in the public cloud context.

The guide for those, and I think the dimensions on which engineers consider a role that has as much customer-facing time and as much of a communication requirement as I think all solutions architecture gigs share, is where you sit in the org, relative to things like quota or not. Are you really in the sales org, or are you, sort of, an overlay that doesn't bear the individual deal responsibility? Another is how much access or interaction you have with product and product engineering. Are you an advisor to product management? Have folks moved from solutions architecture into product management and vice versa? How is that balance struck?

And then, many orgs differentiate (Google certainly thinks of product management as different than the core technical leadership for an individual product), so how much access do you have to, like, literally the software developers that are building the services that you're out representing? These things change rapidly enough that it's really critical, I think, to be able to make a positive impact on customers that you have that kind of access. 



So, one of the reasons that I pushed very hard in the structural definition for solutions architecture on the Google side was to resolve several problems that I thought made solutions architects choose between recognition and high performance inside the AWS business, and a successful outcome for their customers, or the very best recommendations to them. If you bear quota, if you have any kind of tie to this individual customer's outcome, I think it can be hard to always take the high road with every customer about telling them the right thing to do, as opposed to maybe the slightly more expensive way of doing a given thing. It's also very difficult, I think, to step back and think about higher-level recommendations or higher-level structures that you might need to build to enable every one of the solutions architects to do well. 



So, early on, we were typing up what was really a checklist for Reddit, to make it so that they would get the heck out of a single zone and stop going offline when Amazon did exactly what it promised Reddit it would do by having zones go offline. The Well-Architected Framework was one of the attempts to try and help all the solutions architects do well, not just to individually succeed with the customers that I was working with. And so, I wanted to build a program on the Google side that was deeply oriented in that way, where you took more responsibility over scaled impacts, over multi-participant positive benefit, as opposed to the individual successes with individual customers, because those come and go.



Corey: You definitely sound like you have a deep and abiding knowledge and perspective on customers, but a lot of people like to claim that they worked at Google, and that often just sounds to me like one of those lies you tell on your resume. So, let's do a spot check here. If you really worked at Google, what product or service did you kill? 



Miles: [laughs], yes, that's true. If you're going to be in a leadership position over any substantial period of time and you don't slay something, can you really stick Google on your resume? I posit that perhaps you can't, right? I mean, there's just an incredible number of services that have been turned off, and the only number that is bigger, obviously (because, well, that's a tautology), is the number of services that have been created. So—



Corey: But only by a small margin.



Miles: [laughs]. Right. Necessarily, but only by a small margin. So, I spent a bunch of cycles with marketing and PR leadership, as well as with product management and senior executive leadership, speaking very specifically to this issue: that when we, in the public's eye, seemingly arbitrarily turn off products, or services, or features, or capabilities, or worse, change them after the fact, or change them as they're expanding and growing, we erode public trust, and we make ourselves seem unprepared for the real-world expectations of enterprises and major customers. I know that only because of the personal feedback I received when I, yes, did turn off a Google product. And I think it's probably worth exploring what that looked like, and my decision-making as it went into it, because I think in many, many cases, it's the same kind of analysis that other product managers and product owners are making as they decide to pull that trigger on the Google side. Is that worth taking the time?



Corey: I think so.



Miles: Okay. So, the product is called the TCO calculator, for Total Cost of Ownership.



Corey: Oh, yes, every cloud provider has one of these, and it's always a very polite fiction—



Miles: Yeah, oh no—



Corey: —because, it's just, what do you include? What do you not? What story do you want to tell? Tell me the conclusion; I can get your data to back it up.



Miles: Well, the story we wanted to tell was that Amazon is hideously expensive and that Google is better. And so, in order to be able to tell that story, we had to be able to unpack the real-world pricing differences, just the raw unit cost differences between the two environments, but we also had to show the differences in pricing model. So, for example, AWS has this thing called a Reserved Instance; you may be familiar with those. Reserved Instances fix values like which operating system you're running, or which individual instance family you've chosen, or which zone in which region that you intend to consume. Google's model has—



Corey: Since supplanted by Savings Plans; same discount, none of the restrictions.



Miles: Yeah, which actually solved a whole bunch of these issues at the time—



Corey: Oh, god, yes. I’d been complaining about the Reserved Instance limitations for years. I’m glad it finally got fixed. Credit where due. I want to call out when they have fixed a thing.



Miles: Yeah, and the Savings Plans are dramatically better. I think there remain some advantages in the simpler parts of Google's model. One example of those being sustained usage discounts, where without any action on your part, if there is any way for us to infer that you have consumed any amount of virtual machine resources that can be considered one thing—for the purposes of calculations—over any more than 25 percent of a month, you are automatically getting a discount. You cannot opt out, you cannot click the wrong button, you literally can't screw it up. And that, as a difference from AWS, was one that was very difficult for modelers to include in their analysis. 
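To make those mechanics concrete, here is a minimal sketch of how that kind of tiered, automatic discount can be modeled. The tier percentages are the classic published sustained-use tiers for N1-style machine types (each successive quarter of the month billed at 100, 80, 60, and 40 percent of the base rate); the $0.0475 hourly rate is just an assumed example price, not a quote.

```python
# Classic sustained-use discount tiers (assumed: N1-style machine types):
# each successive quarter-month of usage is billed at a lower rate.
TIER_RATES = [1.00, 0.80, 0.60, 0.40]

def sustained_use_cost(base_hourly: float, hours_used: float,
                       hours_in_month: float = 730.0) -> float:
    """Discounted monthly cost for a VM used `hours_used` hours."""
    quarter = hours_in_month / 4
    cost, remaining = 0.0, hours_used
    for rate in TIER_RATES:
        tier_hours = min(remaining, quarter)
        cost += tier_hours * base_hourly * rate
        remaining -= tier_hours
        if remaining <= 0:
            break
    return cost

# Full-month usage nets a 30% discount automatically, with no opt-in:
full = sustained_use_cost(base_hourly=0.0475, hours_used=730)  # example rate
undiscounted = 0.0475 * 730
print(f"effective discount: {1 - full / undiscounted:.0%}")  # -> 30%
```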



So, the TCO tool did include it and was able to incorporate that, as well as things like the committed use discounts, the higher throughput networking, a bunch of other building blocks that were advantaged on the Google side. But that's not the story I'm trying to tell. I'm trying to tell the story about killing the thing. So, we had built it, to be totally honest, in a mad rush to respond to Gartner feedback that said you don't have a real cloud if you don't have one of these. I blame that squarely on very smart folks like you who are working very hard on this pricing analysis stuff, and all of the great customers who really do want to understand these problems, as much as there is always going to be assumption and fictionalization and madness built into those calculations. So, we built it in a week. Literally Urs Hölzle and Ben Treynor—Urs is the de facto CTO, the SVP for—



Corey: Oh, we've spoken once or twice on Twitter. He loves calling me out when I go a step too far, which, please keep doing this. That's what I'm there for. If no one calls you out, have you gone too far? That's always the question.



Miles: That's right. That's right. Urs, I have an incredible amount of respect for the impact on the planet that Urs has had. He is an incredible person. And then, Treynor is no small part of that. He is the head of operations for all of Alphabet. So, his LinkedIn says, "If Google's down, it's my fault." And the two of them, to have them working in a Google Sheet with me, is a little [laughs] bit of a terrifying afternoon, and you really double-check your multiplication. 



But we were able to pull together a model that they found viable, and that several of the customers we were working with thought was useful, and get that out and published on the web, and then connected to the cloud.google.com website for exposure to customers. And that all went great for about four and a half months, at which point both Amazon and Google had changed some of their pricing. So, I said, "Hey, I think I'm going to make some updates and changes to this thing." And because it had now been online for a while, it was a part of the production management and review cycles for all normal products on the Google side. I was like, "Great, I would love input and feedback. Help me make sure that I'm doing this stuff the right way." 



And the core of that was a requirement for the operations story for this product. And I was like, "Oh, that's no problem. I will get up in the middle of the night and I will track when there's been a change in pricing, and then I will adjust the model myself to make sure that it reflects reality, and then I'll publish the changes. The whole app sits on App Engine, so there's no operations of any kind, because App Engine is great, and everything will work slick." 



And they're like, "Oh, no, no, no, that doesn't work at all. You need to be staffed for a full-time member of the site reliability engineering organization, to observe and manage that product, and to ensure that your releases happen in a timely manner, to hold things in the right stead." I was like, "Awesome. So, you assign that person?" They're like, "No, no, they come out of your team. You have to staff that person out of your resourcing." I was like, "Well, then that's me. I'm the person who's the SRE." And they go, "Well, no, it has to be a different person than the developer who's in charge of the business." I was like, "Well, I'm the only one on my team," at the time, so—



Corey: Oh, yes. These are common problems. And the challenge is that a lot of them seem to start from how things have to fit into a matrix internally, rather than what is best for a customer in this story. Now, there are reasons that a company would do things that suit internal needs, but the customers a) don't care and b) don't love it when whatever it is that gets implemented deviates from a successful outcome in their eyes.



Miles: Oh, yeah. No, I mean, I really see how it would land for a customer who had set up a meeting with their boss to walk through the decision-making for buying GCP versus something else, was expecting to go to cloud.google.com/products/tco and see this tool, and then had the thing not show up. So, I wrote long-form papers describing why it is this was valuable, and here's our customer traffic, and this is the sort of support and responsibility that I need from a shared group to be able to participate in this. And I'm pretty persuasive, and so it ended up living for about a year, based purely on my ability to cajole others into supporting this piece that we had onboarded. 



But as new leadership and new people came through, I wasn't persuasive enough. And so, I really get how, as a product owner, there's this balancing act and granularity between, do we only take bets that we can sustainably resource for a thousand years, or are we allowed to take bets where we don't have a clear worldview on how we would sustainably resource them for a thousand years? Because all of Google is a bet that cannot be sustainably resourced for a thousand years. That's what the company is. It is a super-dynamic, driven business, looking at the market, identifying new ways to be able to organize and make useful the world's information. And so, I don't think it has the kind of institutional will that would be required (and, frankly, the cost overheads and the resource commitments that would be required) to be able to sustain the thousands and thousands of experiments, and it absolutely thinks of them as experiments internally, which, because Google is so big, become things that businesses depend on.



Corey: You'll notice the word experiment never appears in marketing dialogue. 



Miles: My product had a big asterisk at the bottom that said, "We are providing this as a service of our communications and technical teams. We reserve the right to take it offline as pricing changes or other requirements adjust." So, I don't know if I quite called it an experiment, but I certainly described the extent to which it was provided on a best-case-scenario kind of basis.



Corey: This episode is sponsored in part by ChaosSearch. Now their name isn’t in all caps, so they’re definitely worth talking to. What is ChaosSearch? A scalable log analysis service that lets you add new workloads in minutes, not days or weeks. Click. Boom. Done. ChaosSearch is for you if you’re trying to get a handle on processing multiple terabytes, or more, of log and event data per day, at a disruptive price. One more thing, for those of you that have been down this path of disappointment before, ChaosSearch is a fully managed solution that isn’t playing marketing games when they say “fully managed.” The data lives within your S3 buckets, and that’s really all you have to care about. No managing of servers, but also no data movement. Check them out at chaossearch.io and tell them Corey sent you. Watch for the wince when you say my name. That’s chaossearch.io.



Corey: True, but you've been at both shops. I mean, AWS could be said to have all of these same constraints around being forced to sustain whatever it is that they release indefinitely, but to their credit, they have. You don't see deprecations, effectively, ever on the AWS side of the house. Which is why it's interesting, because both GCP and AWS have the same language around deprecation timelines in their terms and conditions. No one ever brings it up in a serious context around AWS; they do constantly with GCP, and it's in no small part due to people's experience on the consumer side of the house with Google deprecating things that are beloved. Reader, I am still avenging you.



Miles: [laughs]. Yeah, no. And I think it's important for customers to do the balancing act analysis, to say, "Okay, I'm interacting with a provider that has a different worldview than AWS, or Oracle, or IBM, or any of these other competitors." Google is a different company. There are some really positive benefits of that worldview. It is experimenting on your behalf a lot more, in my view, person for person or dollar for dollar of revenue, than those other businesses are. 



Then the downsides are that I think it has, by way of policy, absolutely structured itself in a way that means that more of those experiments get turned off. We think there is not nearly enough work done today to capture the feedback and interest from customers who would characterize those experiments as critical. Right now, there's not nearly enough of that sort of color or anecdote or communication with customers to be able to get that final detail that says, "I need to have this. This thing is now a critical part of my workflow." If they had more of that view, I think it would be easier for them to make the case internally. 



You know, really quickly, I ran to be able to provide basically a user feedback form in the TCO calculator, because I wanted all of those anecdotes to be able to identify for my stakeholders: no, really, people are in this thing all the time and using it. I can show you the traffic, and you won't care about that because it's a lot smaller than YouTube, but you certainly should care about which kinds of customers are in there and participating. So, I think Google could go further, to spend more time thinking in that way, and push product managers harder to pay very close attention to that. I think it's worth a million dollars, at least, in marketing cost every time they turn a product off at this point, because—



Corey: I would argue more than that, depending on what side of the fence it's on.



Miles: Yeah, no, I agree. I think that there's culpable risk there, and a part-time SRE costs a lot less than that.



Corey: Oh, yes. Not to belabor this one, but when GKE went from free to "just kidding, it's going to start charging per cluster for the control plane now," there was a lot of response, correctly so, that $73 a month is not going to break the bank for anyone's Kubernetes cluster. But first, there's the problem of trying to dictate to customers, "well, it's fine, that's not a big amount of money, you'll get used to it." "Well, excuse you, who are you to tell me that?" Secondly, it's the moving of the goalposts: when you were building things out, your cost model was "this thing is going to be free, and the workload on it is what will be costing us," and suddenly changing that changes how it is perceived. 
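For context, that $73 is just a month of the announced per-cluster management fee; a quick sanity check of the arithmetic, assuming the $0.10-per-hour rate:

```python
# Assuming GKE's announced management fee of $0.10 per cluster per hour.
hourly_fee = 0.10
hours_per_month = 8760 / 12           # average hours in a month = 730
monthly_fee = hourly_fee * hours_per_month
print(f"${monthly_fee:.2f} per cluster per month")  # -> $73.00
```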



And given that Google already has a strange reputation for changing the deal after people are using things and loving them, it makes for a very dangerous question for a lot of the Big-E enterprise folks. If, for example, Microsoft Excel changed the way that they wound up running formulas at random and pushed it out to some subset of their users, they would be burned alive out of their offices by people whose attitude is, "you don't change anything we're doing at a Big-E enterprise without a minimum of 18 months of roadmap notice," and even that is tight for some shops. It's a different culture than exists internally at Google, and with a lot of the born-in-the-cloud startups. And if that is where Google is serious about competing, it needs to be able to assuage those customer concerns, even if they're not directly articulated to them.



Miles: Yeah, I think there is a balance there. I think it does need to get better, and be serious about the places where there is a real impact to customers in the changes that they make, and not just on the impression of it, but the actual outcomes for those customers. The one I'm thinking of is Google Maps: they made a substantial change in the pricing structure for Maps, and there's a bunch of customers for whom that's just, sort of, taken as a shot across the bow, and they, sort of, evaluate if there's some sort of alternative there, and now they have to do this in situ reevaluation of the business structure. And that's not something you should do with thousands to hundreds of thousands of customers, even if it costs a percentage or two on your side, or maybe more than that, to be able to keep things going. 



So, I say yes, and sitting as a partner outside of Google, working together with them every day, we advocate and push on that issue all the time. We are one of the signals into them describing the criticality of that issue with our major customer opportunities and the places where we're interacting. I think the far side of that, the other end, though, is big enterprises do have to weigh how much value they get out of an experimental culture, and what are the kinds of things that maybe they should be working with Google and others to figure out: the best practices for interacting with something that is more volatile than they're used to. Because, I don't know, what I saw was Google was able to wrap itself around and solve really gnarly problems really rapidly because of this dynamic internal behavior. So, I don't want them to lose that for the benefit of keeping a couple of products going a little longer than maybe they otherwise would.



Corey: Absolutely, and this is the challenge too, is that a lot of the reputational risk factors are not themselves directly aimed at things that are quantifiable in the traditional sense. When you wind up trying to decide what cloud provider do we go with, no one realistically, or at least what rounds to no one is going to say, “Not Google, because they turn stuff off.” But it will be used as a bullet point in the list of a larger argument. It shores up the question of would we be able to wind up having these conversations in anything approaching a realistic way if that's how it were to play out with this thing that we cared about? It winds up adding wood to the arrows that people use to take down a certain cloud provider, and it's ground that I don't believe GCP needs to give up as easily as it does.



Miles: Mm-hm, yeah, and I think that's another driver in this overall multi-cloud/hybrid-cloud issue. If you must pick a single provider for a decade, and it will cost you a billion dollars to change providers, these issues are paramount, and you really need to make a good bet. And you really can't afford the kind of complexity or overhead that a switch might entail. If, on the other hand, these things become increasingly commoditized, it is trivial to move workloads between them. 



Corey: Ah, but not data. The Achilles heel of every public cloud is the cost of data transfer out. Well, that's, sort of, a two-part problem. One is the actual cost. Secondly, the sheer level of complexity in modeling what that's going to be before trying it to see.
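That modeling complexity comes partly from the fact that internet egress is typically priced in volume tiers, so the blended per-gigabyte rate shifts with how much you move. Here's a toy calculator illustrating the shape of the problem; the tier sizes and rates below are illustrative only (roughly the classic AWS data-transfer-out schedule, ignoring the small free allowance), not anyone's current price list.

```python
# Illustrative tiered data-transfer-out pricing: (tier size in GB, $/GB).
TIERS = [
    (10_000, 0.090),        # first 10 TB
    (40_000, 0.085),        # next 40 TB
    (100_000, 0.070),       # next 100 TB
    (float("inf"), 0.050),  # everything beyond 150 TB
]

def egress_cost(gb: float) -> float:
    """Blended cost of moving `gb` gigabytes out in one month."""
    cost, remaining = 0.0, gb
    for size, rate in TIERS:
        chunk = min(remaining, size)
        cost += chunk * rate
        remaining -= chunk
        if remaining <= 0:
            break
    return cost

# 50 TB out lands across two tiers, so no single $/GB rate applies:
print(f"${egress_cost(50_000):,.0f}")  # -> $4,300
```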



Miles: Oh, yeah, as one of the guys who helped design the 800 gigabit a second connectivity from Twitter's primary data centers to Google Cloud, which is, I believe, more than the throughput of the public Internet—



Corey: I don't think I can post nonsense nearly that quickly.



Miles: [laughs], it was an incredible amount of shitposting per millisecond flying back and forth between those two environments. So, the bandwidth is a thing. Dedicated Interconnect and Direct Connect and the private linkages between these different environments and their customers are certainly a way to offset some of that, but I also think it's a model component where Google, following Amazon in structure, bluntly, was really, I think, smart in trying to land on a model that enabled customers to be able to pick and choose. So, they landed on this egress tier, Premium, that lets you use Google's primary network to do all of the interregional transmission; all that stuff is built in, and all of it costs more. And then there's the lower-cost one, if you want to use a dirty internet connection like Amazon would provide to people, where it only ever comes out of a single regional provider. And if that outbound linkage goes down, it gets worse. 



Lots of customers that I talked to tolerate, in their own environments, even riskier network setups, and that's the place where I think none of the clouds have really landed on a great offering: super-unreliable, super-high-latency, weird, but great-throughput, low-cost connectivity in and out of their systems. I don't think they do that, as much as I think it is easy to assume, to say, "oh, they make the egress prices higher so you can't take the data out, so they can lock you in." I think Amazon landed on that model early, and the others followed them into it because it was the standard structure for this stuff. And I didn't see, either on Amazon's or on Google's side, them talking about, "oh, sweet, we've got them all locked in. Now they can't get their data out of there." I mean, the whole point of putting data anyplace is to be able to use it.



Corey: Ideally. Yeah, the fun part is that I feel like a lot of the pricing things are outgrowths of how things used to be in people's historical use cases, which means that for some things, it is not a fit for certain modern patterns, and having to play those games and fight it out is challenging. I don't know that there is any, I guess, golden path forward for all folks. I think everyone's use case is different, and making sure that every provider has a story that resonates with that person's particular use case is going to be paramount. Whether or not various providers are able to deliver on that really remains to be seen.



Miles: Yeah, I wonder if there's a model available. I'm thinking in the same way as Google offers sole-tenant nodes, or Amazon has dedicated hosts or bare-metal instances, things like that, where you are getting lower down in the stack to make longer-term commitments for bigger chunks of what are the real-world building blocks. I mean, as a customer, if I were able to purchase whole connectivity links and provide them to a Google, or an Amazon, or an Azure, or whoever, and be responsible for a bunch of the operational overhead and structural complexity of maintaining and managing that link, I would expect that you'd also then be able to match that up with some lower fixed rate for the internal networking costs that are currently zero on cloud providers when you move data inside of a single zone or around that zone. 



That stuff's free, but it sure isn't free to build. So, I think there's some kind of a model there where (and I think this is a negative externality) rather than embedding the cost of internal networking in the egress rate, if instead they were able to pry that back out and allow you to go do the legwork to purchase egress in whatever way you think you can do a better job than the Google side or the Amazon side can provide, I think that's a model that would be attractive, especially for customers that are themselves telecommunications companies, or customers that have unreasonable access to this kind of bandwidth.



Corey: I think that's very fair. So, if people want to hear more about what you have to say on this, and many, many other topics, where is the best place they can find you?



Miles: Sure. I'm not a deeply organized person, I think if you interact with me—



Corey: Guilty as charged.



Miles: —oh, yeah. So, I spent a lot of cycles working together with Google marketing. I delivered about half of Google Cloud's keynote sessions over the course of the last five years, so we're close buddies. And SADA is, literally, Google's partner of the year. So, we work together with them to participate at Google events all over the place. So, if you come to a Google Cloud Platform event, there's a pretty good chance that I, or somebody from my team, will be there presenting and answering questions and plugging in. 



At SADA.com, you'll find a whole running list of the small-format events that we do that are closer to customers, that allow people to interact with us one-on-one; that's got my travel schedule going. I participate mostly on LinkedIn and Twitter, so both of those are just my full name, Miles Ward, so that's pretty easy to hunt down. And then, we also have, on our side, our own version of this same sort of thing. We call it Cloud N Clear. Both the CEO, Tony Safoian, and I have produced sessions of that. And so, maybe the next place for people to hear about us would be when we're interviewing you there. It'd be super fun.



Corey: Well, we'll certainly see if that happens to take place, especially because at the time of this recording, it's questionable whether anyone will ever attend a conference event in person again—



Miles: Yeah.



Corey: —due to a pandemic. 



Miles: Yeah, that's right. No, we're actually doing some really interesting work together with Google marketing to think this through, because they have canceled Google Next in person. And, you know—



Corey: I was going to be there with bells on.



Miles: Oh, yeah, no, I had my multicolored Converse shoes and my electric tuba ready to go. So, we're working with them now to do a bunch of the emergency creative thought about how you translate a bunch of this stuff, although we still haven't figured out a way to have the, sort of, hanging out at the lobby bar at two in the morning by way of a Google Hangout. There's just something that doesn't quite translate. Although we have been working quite a lot with some of our partners and customers; like, Domino's is a customer of ours, and we're trying to figure out how to ship everybody pizza, and we've been hanging out with DoorDash quite a bit. So, maybe we'll Dash people a bunch of chow while we're on a Hangout or something. We'll see how it all goes.



Corey: Excellent. Sounds like a plan to me. One way or another, we'll make it work. Thanks again for taking the time to speak with me. I appreciate it. 



Miles: Thanks, Corey. Good to see you, too.



Corey: Miles Ward, CTO at SADA. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on Apple Podcasts. If you've hated this podcast, please leave a five-star review on Apple Podcasts and a separate five-star review on Google Podcasts to embrace a multi-cloud strategy.



Announcer: This has been this week’s episode of Screaming in the Cloud. You can also find more Corey at ScreamingintheCloud.com, or wherever fine snark is sold.



This has been a HumblePod production. Stay humble.