Innovating in the Cloud with Craig McLuckie

Episode Summary

This week Craig McLuckie, VP of the Modern Applications Platform Business Unit at VMware, sits down with Corey to discuss his start with Google Compute Engine in the early days of the cloud and his time at the forefront of Kubernetes and Docker. He also discusses VMware, what exactly modern applications are meant to achieve there, and what the next steps look like. Craig has always been at the forefront of innovation, especially in regard to the cloud, and that storied history sits at the center of his contributions to the field. Craig and Corey’s conversation covers a range of topics that mirrors Craig’s own trajectory; tune in for the whole story!

Episode Show Notes & Transcript

About Craig
Craig McLuckie is a VP of R&D at VMware in the Modern Applications Business Unit. He joined VMware through the Heptio acquisition, where he was CEO and co-founder. Heptio was a startup that supported the enterprise adoption of open-source technologies like Kubernetes. He previously worked at Google, where he co-founded the Kubernetes project, was responsible for the formation of the CNCF, and was the original product lead for Google Compute Engine.

Links:

Transcript


Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at the Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.


Corey: This episode is sponsored in part by Cribl LogStream. Cribl LogStream is an observability pipeline that lets you collect, reduce, transform, and route machine data from anywhere, to anywhere. Simple, right? As a nice bonus, it not only helps you improve visibility into what the hell is going on, but also helps you save money almost by accident. Kind of like not putting a whole bunch of vowels and other letters that would be easier to spell in a company name. To learn more, visit: cribl.io


Corey: This episode is sponsored in part by Thinkst. This is going to take a minute to explain, so bear with me. I linked against an early version of their tool, canarytokens.org, in the very early days of my newsletter, and what it does is relatively simple and straightforward. It winds up embedding credentials, files, that sort of thing in various parts of your environment, wherever you want to; it gives you fake AWS API credentials, for example. And the only thing that these things do is alert you whenever someone attempts to use those things. It’s an awesome approach. I’ve used something similar for years. Check them out. But wait, there’s more. They also have an enterprise option that you should be very much aware of: canary.tools. You can take a look at this, but what it does is it provides an enterprise approach to drive these things throughout your entire environment. You can get a physical device that hangs out on your network and impersonates whatever you want to. When it gets Nmap scanned, or someone attempts to log into it, or access files on it, you get instant alerts. It’s awesome. If you don’t do something like this, you’re likely to find out that you’ve gotten breached the hard way. Take a look at this. It’s one of those few things that I look at and say, “Wow, that is an amazing idea. I love it.” That’s canarytokens.org and canary.tools. The first one is free. The second one is enterprise-y. Take a look. I’m a big fan of this. More from them in the coming weeks.


Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. My guest today is Craig McLuckie, who’s a VP of R&D at VMware, specifically in their modern applications business unit. Craig, thanks for joining me. VP of R&D sounds almost like it’s what’s sponsoring a Sesame Street episode. What do you do exactly?


Craig: Hey, Corey, it’s great to be on with you. So, I’m obviously working within the VMware company, and my charter is really looking at modern applications. So, the modern application platform business unit is really grounded in the work that we’re doing to make technologies like Kubernetes and containers, and a lot of developer-centric technologies like Spring, more accessible to developers to make sure that as developers are using those technologies, they shine through on the VMware infrastructure technologies that we are working on.


Corey: Before we get into, I guess, the depths of what you’re focusing on these days, let’s look a little bit backwards into the past. Once upon a time, in the dawn of the modern cloud era—I guess we’ll call it—you were the original product lead for Google Compute Engine or GCE. How did you get there? That seems like a very strange thing to be—something that, “Well, what am I going to build? Well, that’s right; basically a VM service for a giant company that is just starting down the cloud path,” back when that was not an obvious thing for a company to do.


Craig: Yeah, I mean, it was as much luck and serendipity as anything else, if I’m going to be completely honest. I spent a lot of time working at Microsoft, building enterprise technology, and one of the things I was extremely excited about was, obviously, the emergence of cloud. I saw this as being a fascinating disrupter. And I was also highly motivated at a personal level to just make IT simpler and more accessible. I spent a fair amount of time building systems within Microsoft, and then even a very small amount of time running systems within a hedge fund.


So, I got, kind of, both of those perspectives. And I just saw this cloud thing as being an extraordinarily exciting way to drive out the cost of operations, to enable organizations to just focus on what really mattered to them which was getting those production systems deployed, getting them updated and maintained, and just having to worry a little bit less about infrastructure. And so when that opportunity arose, I jumped with both feet. Google obviously had a reputation as a company that was born in the cloud, it had a reputation of being extraordinarily strong from a technical perspective, so having a chance to bridge the gap between enterprise technology and that cloud was very exciting to me.


Corey: This was back in an era when, in my own technical evolution, I was basically tired of working with Puppet as much as I had been, and I was one of the very early developers behind SaltStack, once upon a time—which since then you folks have purchased, which shows that someone didn’t do their due diligence because something like 41 lines of code in the current release version is still assigned to me as per git-blame. So, you know, nothing is perfect. And right around then, I started hearing about this thing that was at one point leveraging SaltStack, kind of, called Kubernetes, which, “I can’t even pronounce that, so I’m just going to ignore it. Surely, this is never going to be something that I’m going to have to hear about once this fad passes.” It turns out that the world moved on a little bit differently.


And you were also one of the co-founders of the Kubernetes project, which means that it seems like we have been passing each other in weird ways for the past decade or so. So, you’re working on GCE, and then one day you, what, sat up and decided, “I know, we’re going to build a container orchestration system because I want to have something that’s going to take me 20 minutes to explain to someone who’s never heard of these concepts before.” How did this come to be?


Craig: It’s really interesting, and a lot of it was driven by necessity, driven by a view that to make a technology like Google Compute Engine successful, we needed to go a little bit further. When you look at a technology like Google Compute Engine, we’d built something that was fabulous and Google’s infrastructure is world-class, but there’s so much more to building a successful cloud business than just having a great infrastructure technology. There’s obviously everything that goes with that in terms of being able to meet enterprises where they are and all the—


Corey: Oh, yeah. And everything at Google is designed for Google scale. It’s, “We built this thing and we can use it to stand up something that is world-scale and get 10 million customers on the first day that it launches.” And, “That’s great. I’m trying to get a Hello World page up and maybe, if I shoot for the moon, it can also run WordPress.” There’s a very different scale of problem.


Craig: It’s just a very different thing. When you look at what an organization needs to use a technology, it’s nice that you can take that, sort of, science-fiction data center and carve it up into smaller pieces and offer it as a virtual machine to someone. But you also need to look at the ISV ecosystem, the people that are building the software, making sure that it’s qualified. You need to make sure that you have the ability to engage with the enterprise customer and support them through a variety of different functions. And so, as we were looking at what it would take to really succeed, it became clear that we needed a little more; we needed to, kind of, go a little bit further.


And around that time, Docker was really coming into its own. You know, Docker solved some of the problems that organizations had always struggled with. A virtual machine is great, but it’s difficult to think about. And inside Google, containers were a thing.


Corey: Oh, containers have a long and storied history in different areas. From my perspective, Docker solves the problem of, “Well, it works on my machine,” because before something like Docker, the only answer was, “Well, back up your email because your laptop’s about to be in production.”


Craig: [laugh]. Yeah, that’s exactly right. You know, I think when I look at what Docker did, it was this moment of clarity because a lot of us had been talking about this and thinking about it. I remember turning to Joe while we were building Compute Engine and basically saying, “Whoever solves packaging the way that Google did internally, and makes that accessible to the world, is ultimately going to walk away with the game.” And I think Docker put lightning in a bottle.


They really just focused on making some of these technologies that underpinned the hyperscalers, that underpinned the way that, like, a Google, or a Facebook, or a Twitter tended to operate, just accessible to developers. And they solved one very specific thing which was that packaging problem. You could take a piece of software and you could now package it up and deploy it as an immutable thing. So, in some ways, back to your own origins with SaltStack and some of the technologies you’ve worked on, it really was an epoch of DevOps; let’s give developers tools so that they can code something up that renders a production system. And now with Docker, you’re able to shift that all left. So, what you produced was the actual deployable artifact, but that obviously wasn’t enough by itself.


Corey: No, there needed to be something else. And according to your biography, it says here that, I quote, “You were responsible for the formation of the CNCF, or Cloud Native Computing Foundation,” and I’m trying to understand: is that something that you’re taking credit for or being blamed for? It really seems like it could go either way, given the very careful wording there.


Craig: [laugh]. Yeah, it could go either way. It certainly got away from us a little bit in terms of just the scope and scale of what was going on. But the whole thesis behind Kubernetes, if you just step back a little bit, was we didn’t need to own it; Google didn’t need to own it. We just needed to move the innovation boundary forwards into an area where we had some very strong advantages.


And if you look at the way that Google runs, it kind of felt like when people were working with Docker, and you had technologies like Mesos and all these other things, they were trying to put together a puzzle, and we already had the puzzle box in front of us because we saw how that technology worked. So, we didn’t need to control it, we just needed people to embrace it, and we were confident that we could run it better. But for people to embrace it, it couldn’t be seen as just a Google thing. It had to be a Google thing, and a Red Hat thing, and an Amazon thing, and a Microsoft thing, and something that was really owned by the community. So, the inspiration behind CNCF was to really put the technology forwards to build a collaborative community around it and to enable and foster this disruption.


Corey: At some point after Kubernetes was established, and it was no longer an internal Google project but something that was handed over to a foundation, something new started to become fairly clear in the larger ecosystem. And it’s sort of a microcosm of my observation that the things that startups are doing today are what enterprises are going to be doing five years from now. Every enterprise likes to imagine itself a startup; the inverse is not particularly commonly heard. You left Google to go found Heptio, where you were focusing on enterprise adoption of open-source technologies, specifically Kubernetes, but it also felt like it was more of a cultural shift in many respects, which is odd because there aren’t that many startups, at least in that era, that were focused on bringing startup technologies to the enterprise, and sneaking in—or at least that’s how it felt—the idea of culture change as well.


Craig: You know, it’s really interesting. Every enterprise has to innovate, and people tend to look at startups as being a source of innovation or a source of incubation. What we were trying to do with Heptio was to go the other way a little bit, which was, when you look at what West Coast tech companies were doing, and you look at a technology like Kubernetes—or any new technology: Kubernetes, or Knative, or some of these new observability capabilities that are starting to emerge in this ecosystem—there’s this sort of trickle-across effect, where it starts with the West Coast tech companies that build something, and then it trickles across to a lot of the progressive, forward-leaning enterprise organizations that have the scale to consume those technologies. And then over time, it becomes mainstream. And when I looked at a technology like Kubernetes, and certainly through the lens of a company like Google, there was an opportunity to step back a little bit and think about, well, Google’s really this West Coast tech company, and it’s producing this technology, and it’s working to make that more enterprise-centric, but how about going the other way?


How about meeting enterprise organizations where they are—enterprise organizations that aspire to adopt some of these practices—and build a startup that’s really about just walking the journey with customers, advocating for their needs, through the lens of these open-source communities, making these open-source technologies more accessible. And that was really the thesis around what we were doing with Heptio. And we worked very hard to do exactly as you said which is, it’s not just about the tech, it’s about how you use it, it’s about how you operate it, how you set yourself up to manage it. And that was really the core thesis around what we were pursuing there. And it worked out quite well.


Corey: Sitting here in 2021, if I were going to build something from scratch, I would almost certainly not use Kubernetes to do it. I’d probably pick a bunch of serverless primitives and go from there, but what I respect and admire about the Kubernetes approach is companies can’t generally do that with existing workloads; you have to meet them where they are, as you said. ‘Legacy’ is a condescending engineering phrase for ‘it makes money.’ It’s, “Oh, what does that piece of crap do?” “Oh, about $4 billion a year.” So yeah, we’re going to be a little delicate with what it does.


Craig: I love that observation. I always prefer the word ‘heritage’ over the word ‘legacy.’ You got to—


Corey: Yeah.


Craig: —have a little respect. This is the stuff that’s running the world. This is the stuff that every transaction is flowing through.


And it’s funny, when you start looking at it, often you follow the train along and eventually you’ll find a mainframe somewhere, right? It is definitely something that we need to be a little bit more thoughtful about.


Corey: Right. And as cloud continues to eat the world, well, as of the time of this recording, there is no AWS/400, so there is no direct mainframe option in most cloud providers, and so there has to be a migration path; there has to be a path forward that doesn’t include, “Oh, and by the way, take 18 months to rewrite everything that you’ve built.” And containers, particularly with an orchestration model, solve that problem in a way that serverless primitives, frankly, don’t.


Craig: I agree with you. And it’s really interesting to me as I work with enterprise organizations. I look at that modernization path as a journey. Cloud isn’t just a destination: there’s a lot of different permutations and steps that need to be taken. And every one of those has a return on investment.


If you’re an enterprise organization, you don’t modernize for modernization’s sake, you don’t embrace cloud for cloud’s sake. You have a specific outcome in mind: “Hey, I want to drive down this cost,” or, “Hey, I want to accelerate my innovation here,” or, “Hey, I want to be able to set my teams up to scale better this way.” And so a lot of these technologies, whether it’s Kubernetes or even serverless, which is becoming increasingly important, is a capability that enables a business outcome at the end of the day. And when I think about something like Kubernetes, it really has, in a way, emerged as a Goldilocks abstraction. It’s low enough level that you can run pretty much anything, and it’s high enough level that it hides away the specifics of the environment that you want to deploy it into. And ultimately, it renders up what I think is economies of scope for an organization. I don’t know if that makes sense. Like, you have these economies of scale and economies of scope.


Corey: Given how down I am on Kubernetes across the board—at least, as it’s presented—and don’t take that personally; I’m down on most modern technologies. I’m the person that said the cloud was a passing fad, that virtualization was only going to see limited uptake, that containers were never going to eat the world. And I finally decided to skip ahead of the Kubernetes thing for a minute and now I’m actually going to be positive about serverless. Given how wrong I am on these things, that almost certainly dooms it. But great, I was down on Kubernetes for a long time because I kept seeing these enterprises and other companies talking about their Kubernetes strategy.


It always felt like Kubernetes was a means to an end, not an end in and of itself. And I want to be clear, I’m not talking about vendors here because if you are a software provider to a bunch of companies and providing Kubernetes is part and parcel of what you do, yeah, you need a Kubernetes strategy. But the blue-chip manufacturing company that is modernizing its entire IT estate doesn’t need a Kubernetes strategy as such. Am I completely off base with that assessment?


Craig: No, I think you’re pointing at something which I feel as well. I mean, I’ll be honest, I’ve been talking about [laugh] Kubernetes since day one, and I’m kind of tired of talking about Kubernetes. It should just be something that’s there; you shouldn’t have to worry about it, you shouldn’t have to worry about operationalizing it. It’s just an infrastructure abstraction. It’s not in and of itself an end, it’s simply a means to an end, which is being able to start looking at the destination you’re deploying your software into as being more favorable for building distributed systems, not having to worry about the mechanics: what happens if a single node fails? What happens if I have to scale this thing? What happens if I have to update this thing?


So, it’s really not intended—and it never was intended—to be an end unto itself. It was really just intended to raise the waterline and provide an environment into which distributed applications can be deployed that felt entirely consistent, whether you’re building those on-premises, in the public cloud, or increasingly out to the edge.


Corey: I wound up making a tweet a couple of years back, specifically in 2019, that was the nuclear hot take: “Nobody will care about Kubernetes in five years.” And I stand by it, but I also think that’s been wildly misinterpreted because I am not suggesting in any way that it’s going to go away and no one is going to use it anymore. But I think it’s going to matter in the same way the operating system is starting to, the way that the Linux virtual memory management subsystem does now. Yes, a few people in specific places absolutely care a lot about those things, but most companies don’t because they don’t have to. It’s just the way things are. It’s almost an operating system for the data center, or the cloud environment, for lack of a better term. But is that assessment accurate? And if you don’t wildly disagree with it, what do you think of the timeline?


Craig: I think the assessment is accurate. The way I always think about this is you want to present your engineers, your developers, the people that are actually taking a business problem and solving it with code, you want to deliver to them the highest possible abstraction. The less they have to worry about the infrastructure, the less they have to worry about setting up their environment, the less they have to worry about the DevOps or DevSecOps pipeline, the better off they’re going to be. And so if we as an industry do our job right, Kubernetes is just the water in which IT swims. You know, like the fish doesn’t see the water; it’s just there.


We shouldn’t be pushing the complexity of the system—because it is a fancy and complex system—directly to developers. They shouldn’t necessarily have to think, “Oh, I need to understand all of the XYZ of how this thing works to be able to build a system.” There will be some engineers that benefit from it, but there are going to be other engineers that don’t. The one thing that I think is a potential change to what you said is, we’re going to see people starting to program Kubernetes more directly, whether they know it or not. I don’t know if that makes sense, but things like the ability for Kubernetes to offer up a way for organizations to describe the desired state of something, and then using some of the patterns of Kubernetes to make the world into that shape, is going to be quite pervasive, and I’m really starting to see signs of it.


So yes, most developers are going to be working with higher abstractions. Yes, technologies like Knative and all of the work that we at VMware are doing within the ecosystem will render those higher abstractions to developers. But there’s going to be some really interesting opportunities to take what made Kubernetes great beyond just, “Hey, I can put a Docker container down on a virtual machine,” and start to think about reconciler-driven IT: being able to describe what you want to have happen in the world, and then having a really smart system that just makes the world into that shape.
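For readers who want to see what that “reconciler-driven IT” idea looks like in practice, here is a minimal, hypothetical sketch in Go, the language Kubernetes itself is written in. This is not a Kubernetes API; the State type and the observe and act functions are made up for illustration, but the loop shows the pattern Craig describes: declare a desired state, observe the actual state, and keep acting until the two match.

    // A toy reconciler: keep nudging the actual state toward the desired state.
    package main

    import (
    	"fmt"
    	"time"
    )

    // State is a stand-in for whatever you might declare: replica counts,
    // open ports, attached volumes, and so on.
    type State struct {
    	Replicas int
    }

    // observe would normally query the running system; here it just reads memory.
    func observe(actual *State) State {
    	return *actual
    }

    // act takes one corrective step toward the desired state.
    func act(actual *State, desired State) {
    	switch {
    	case actual.Replicas < desired.Replicas:
    		actual.Replicas++
    		fmt.Println("scaled up to", actual.Replicas)
    	case actual.Replicas > desired.Replicas:
    		actual.Replicas--
    		fmt.Println("scaled down to", actual.Replicas)
    	}
    }

    func main() {
    	desired := State{Replicas: 3} // the declared, desired shape of the world
    	actual := &State{Replicas: 0} // what the world currently looks like

    	// The reconcile loop: compare, act, repeat until they converge.
    	for observe(actual) != desired {
    		act(actual, desired)
    		time.Sleep(100 * time.Millisecond)
    	}
    	fmt.Println("reconciled at", actual.Replicas, "replicas")
    }

Real controllers run this loop continuously and watch for drift rather than exiting, but the shape is the same: the desired state is data, and the system’s job is to make reality match it.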


Corey: This episode is sponsored by our friends at Oracle. HeatWave is a new high-performance accelerator for the Oracle MySQL Database Service, although I insist on calling it “my squirrel.” While MySQL has long been the world’s most popular open-source database, shifting from transacting to analytics required way too much overhead and, ya know, work. With HeatWave you can run your OLTP and OLAP (don’t ask me to ever say those acronyms again) workloads directly from your MySQL database and eliminate the time-consuming data movement and integration work, while also performing 1100X faster than Amazon Aurora and 2.5X faster than Amazon Redshift, at a third of the cost. My thanks again to Oracle Cloud for sponsoring this ridiculous nonsense.


Corey: So, you went from driving Kubernetes adoption into the enterprise as the founder and CEO of Heptio to, effectively, being acquired by one of the most enterprise-y of enterprise companies, in some respects, VMware, and your world changed. So, I understand what Heptio does because, to my mind, a big company is one that is 200 people. VMware has slightly more than that at last count, and I sort of lose track of all the threads of the different things that VMware does and how it operates. I could understand what Heptio did. What I don’t understand is what, I guess, your corner of VMware does. Modern applications means an awful lot of things to an awful lot of people. I prefer to speak it with a condescending accent when making fun of those legacy things that make money—not a popular take, but it’s there—how do you define what you do now?


Craig: So, for me, when you talk about modern application platform, you can look at it one of two ways. You can say it’s a platform for modern applications, and when people have modern applications, they have a whole variety of different ideas in the head: okay, well, it’s microservices-based, or it’s API-fronted, it’s event-driven, it’s supporting stream-based processing, blah, blah, blah, blah, blah. There’s all kinds of fun, cool, hip new patterns that are happening in the segment. The other way you could look at it is it’s a modern platform for applications of any kind. So, it’s really about how do we make sense of going from where you are today to where you need to be in the future?


How do we position the set of tools that you can use, as they make sense, as your organization evolves, as your organization changes? And so I tend to look at my role as bringing these capabilities to our existing product line, which is, obviously, the vSphere product line, and it’s almost a hyperscaler unto itself, but it’s really about that private cloud experience historically, and making those capabilities accessible in that environment. But there’s another part to this as well, which is, it’s not just about running technologies on vSphere. It’s also about how can we make a lot of different public clouds look and feel consistent without hiding the things that they are particularly great at. So, every public cloud has its own set of capabilities, its own price-performance profile, its own service ecosystem, and richness around that.


So, what can we do to make it so that as you’re thinking about your journey from taking an existing system, one of those heritage systems, and thinking through the evolution of that system to meet your business requirements, to be able to evolve quickly, to be able to go through that digital transformation journey, and package it up and deliver the right tools at the right time in the right environment, so that we can walk the journey with our customers?


Corey: Does this tie into Tanzu, or is that a different VMware initiative slash division? And my apologies on that one, just because it’s difficult for me to wrap my head around where Tanzu starts and stops, if I’m being frank.


Craig: So, [unintelligible 00:21:49] is the heart of Tanzu. So Tanzu, in a way, is a new branch, a new direction for VMware. It’s about bringing this richness of capabilities to developers running in any cloud environment. It’s an amalgamation of a lot of great technologies that people aren’t even aware of that VMware has been building, or that VMware has gained through acquisition, certainly Heptio and the ability to bring Kubernetes to an enterprise organization is part of that. But we’re also responsible for things like Spring.


Spring is a critical anchor for Java developers. If you look at the Spring community, we participate in one and a half million new application starts a month. And you wouldn’t necessarily associate VMware with that, but we’re absolutely driving critical innovation in that space. Things like full-stack observability, being able to not only deploy these container-packaged applications, but being able to actually deal with the day-two operations, and how to deal with the APM considerations, et cetera. So, Tanzu is an all-in push from VMware to bring technologies like Kubernetes and everything that exists above Kubernetes to our customers, but also to new customers in the public cloud that are really looking for consistency across those environments.


Corey: When I look at what you’ve been doing for the past decade or so, it really tells a story of transitions, where you went from product lead on GCE, to working on Kubernetes. You took Kubernetes from an internal Google reimagining of Borg into an open-source project that has been given over to the CNCF. You went from running Heptio, which was a startup, to working at one of the least startup-y companies, by some measures, in the world. So, you seem to have gone from one thing to almost its exact opposite, repeatedly, throughout your career. What’s up with that theme?


Craig: I think if you look back on the transitions and those key steps, the one thing that I’ve consistently held in my head, and I think my personal motivation, was really grounded in this view that IT is too hard, right? IT is just too challenging. So, the transition from Microsoft, where I was responsible for building packaged software, to Google, which was about cloud, was really marking that transition of, “Hey, we just need to do better for the enterprise organization.” The transition from focusing on a virtual machine-based system, which was the state of the art at the time, to unlocking these modern, orchestrated, container-based systems was in service of that need, which was, “Hey, you know, if you can start to just treat a number of virtual machines as a destination that has a distributed operating system on top of it, we’re going to be better off.” The need to transition to a community-centric outcome came about because, while Google is amazing in so many ways, being able to benefit from the perspective that traditional enterprise organizations brought to the table was significant. That led to transitioning into a startup where we were really serving enterprise organizations and providing that interface back into the community, and ultimately to joining VMware because, at the end of the day, there’s a lot of work to be done here.


And when you’re selling a startup, it’s—you’re either selling out or you’re buying in, and I’m not big on the idea of selling out. In this case, it was having access to the breadth of VMware, having access to the place where most of the customers we really cared about were living, and all of those heritage systems that are just running the world’s business. So, for me, it’s really been about walking that journey on behalf of that individual that’s just trying to make ends meet; just trying to make sure that their IT systems stay lit; that are trying to make sure that the debt that they’re creating today in the IT environment isn’t payday-loan debt, it’s more like a mortgage: I can get into an environment that’s going to serve me and my family well. And so, each of those transitions has really just been marked by need.


And I tend to look at the needs of that enterprise organization that’s walking this journey as being an anchor for me. And I’m pleased with every transition I’ve made. Like, at every point, Joe and myself, who’ve been on this journey for a while, have been able to better serve that individual.


Corey: Now, I know that it’s always challenging to talk about the future, but do you think you’re done with those radical transitions, as you continue to look forward to what’s coming? I mean, it’s impossible to predict the future, but you’re clearly where you are for a reason, and I’m assuming part of that reason is because you see an opportunity; you see a transformation that is currently unfolding. What does that look like from where you sit?


Craig: Well, I mean, my work at VMware [laugh] is very far from done. There’s just an amazing amount of continued opportunity to deliver value, not only to those existing customers where they’re running on-prem, but to make the public cloud more intrinsically accessible and to increasingly solve the problems as more computational resources fan back out to the edge. So, I’m extremely excited about the opportunity ahead of us from the VMware perspective. I think we have some incredible advantages because, at the end of the day, we’re both a neutral party—you know, we’re not a hyperscaler. We’re not here to compete with the hyperscalers on the economies of scale that they render.


But we’re also working to make sure that, as the hyperscalers are offering up these new services and everything else, we can help the enterprise organization make best use of that. We can help them make best use of that infrastructure environment, we can help them navigate the complexities of things like concentration risk, or being able to manage through the lock-in potential that some of these things represent. So, I don’t want to see the world collapse back into the mainframe era. I think that’s the thing that really motivates me. The transition from mainframe to client-server, the work that Wintel did—the Windows-Intel consortium—to unlock that ecosystem, just created massive efficiencies and massive benefits for everyone. And I do feel like, with the combination of technologies like Kubernetes and everything that’s happening on top of that, and the opportunity that an organization like VMware has to be a neutral party, to really bridge the gap between enterprises and those technologies, we’re in a situation where we can create just tremendous value in the world: making it so that modernization is a journey rather than a destination, helping customers modernize at a pace that’s reasonable to them, and ultimately serving both the cloud providers, in terms of bringing some critical workloads to the cloud, and the customers, so that as they live with the harsh realities of a multi-cloud universe, where I don’t know one enterprise organization that’s just all-in on one cloud, we can provide some really useful capabilities and technologies to make them feel more consistent, more familiar, without hiding what’s great about each of them.


Corey: Craig, thank you so much for taking the time to speak with me today about where you sit, how you see the world, where you’ve been, and little bits of where we’re going. If people want to learn more, where can they find you?


Craig: Well, I’m on Twitter, @cmcluck, and obviously, on LinkedIn. And we’ll continue to invite folks to attend a lot of our events, whether that’s the Spring conferences that VMware sponsors, or VMworld. And I’m really excited to have an opportunity to talk more about what we’re doing and some of the great things we’re up to.


Corey: I will certainly be following up as the year continues to unfold. Thanks so much for your time. I really appreciate it.


Craig: Thank you so much for your time as well.


Corey: Craig McLuckie, Vice President of R&D at VMware in their modern applications business unit. I’m Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you’ve enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you’ve hated this podcast, please leave a five-star review on your podcast platform of choice along with a comment that I won’t bother to read before designating it legacy or heritage.


Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need the Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.


Announcer: This has been a HumblePod production. Stay humble.

