Screaming in the Cloud
The New Docker with Donnie Berkholz
Episode Summary
Donnie Berkholz, Ph.D., is VP of Products at Docker. Prior to this position, he was an executive in residence at Scale Venture Partners, VP of IT Service Delivery of CWT, director of development, DevOps, and IT operations at 451 Research, an open-source leader at Gentoo Linux, a senior analyst at RedMonk, and a research fellow at the Mayo Clinic, among other positions. He earned his Ph.D. in biochemistry and biophysics from Oregon State University in 2009. Join Corey and Donnie as they talk about the new iteration of Docker and how the company has reinvented itself in the past year and a half, the blurring line between developers and operations, how no container runs in isolation, why multi-cloud is possible but not realistic, how Docker doesn’t want to be the runtime platform in production, what Donnie thinks Docker will look like 15 years from now, and more.
Episode Show Notes and Transcript

About Donnie

Donnie is VP of Products at Docker and leads product vision and strategy. He manages a holistic products team including product management, product design, documentation & analytics. Before joining Docker, Donnie was an executive in residence at Scale Venture Partners and VP of IT Service Delivery at CWT leading the DevOps transformation. Prior to those roles, he led a global team at 451 Research (acquired by S&P Global Market Intelligence), advised startups and Global 2000 enterprises at RedMonk and led more than 250 open-source contributors at Gentoo Linux. Donnie holds a Ph.D. in biochemistry and biophysics from Oregon State University, where he specialized in computational structural biology, and dual B.S. and B.A. degrees in biochemistry and chemistry from the University of Richmond.



Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: This episode is sponsored in part by Thinkst. This is going to take a minute to explain, so bear with me. I linked against an early version of their tool in the very early days of my newsletter, and what it does is relatively simple and straightforward. It winds up embedding credentials, files, that sort of thing in various parts of your environment, wherever you want to; it gives you fake AWS API credentials, for example. And the only thing that these things do is alert you whenever someone attempts to use them. It’s an awesome approach. I’ve used something similar for years. Check them out. But wait, there’s more. They also have an enterprise option that you should be very much aware of. You can take a look at this, but what it does is it provides an enterprise approach to drive these things throughout your entire environment. You can get a physical device that hangs out on your network and impersonates whatever you want it to. When it gets Nmap scanned, or someone attempts to log into it, or access files on it, you get instant alerts. It’s awesome. If you don’t do something like this, you’re likely to find out that you’ve gotten breached the hard way. Take a look at this. It’s one of those few things that I look at and say, “Wow, that is an amazing idea. I love it.” The first one is free. The second one is enterprise-y. Take a look. I’m a big fan of this. More from them in the coming weeks.

Corey: This episode is sponsored in part by our friends at Lumigo. If you’ve built anything from serverless, you know that if there’s one thing that can be said universally about these applications, it’s that they turn every outage into a murder mystery. Lumigo helps make sense of all of the various functions that wind up tying together to build applications. It offers one-click distributed tracing so you can effortlessly find and fix issues in your serverless and microservices environment. You’ve created more problems for yourself; make one of them go away. To learn more, visit


Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. Today I’m joined by Donnie Berkholz, who’s here to talk about his role as the VP of Products at Docker, whether he knows it or not. Donnie, welcome to the show.

Donnie: Thanks. I’m excited to be here.

Corey: So, the burning question that I have that inspired me to reach out to you is fundamentally, and very bluntly and directly, Docker was a thing in, I want to say the 2015-ish era, where there was someone who gave a parody talk for five minutes where they got up and said nothing but the word Docker over and over again, in a bunch of different tones, and everyone laughed because it seemed like, for a while, that was what about half of all tech conference talks were. It’s years later now, and it’s 2021 as of the time of this recording. How is Docker relevant today?

Donnie: Great question. And I think one that a lot of people are wondering about. The way that I think about it, and the reason that I joined Docker, about six months back now, was, I saw the same thing you did in the early 2010s, 2013 to 2016 or so. Docker was a brand new tool, beloved of developers and DevOps engineers everywhere. And they took that, gained the traction of millions of people, and tried to pivot really hard into taking that bottom-up open-source traction and turning it into a top-down, kind of, sell to the CIO and the VP of operations, orchestration management, kind of classic big-company approach. And that approach never really took off to the extent that would let Docker become an explosive success commercially in the same way that it did across the open-source community and in building out the usability of containers as a concept.

Now, new Docker, as of November 2019, divested all of the top-down operations production environment stuff to Mirantis and took a look at what else there was. And the executive staff at the time, the investors thought there might be something in there, it’s worth making a bet on the developer-facing parts of Docker to see if the things that built the developer love in the first place were commercially viable as well. And so looking through that we had things left like Docker Hub, Docker Engine, things like Notary, and Docker Desktop. So, a lot of the direct tools that developers use on a daily basis to get their jobs done when they’re working on modern applications, whether that’s twelve-factor, whether that’s something they’re trying to lift and shift into a container, whatever it might look like, it’s still used every day. And so the thought was, there might be something in here.

Let’s invest some money, let’s invest some time and see what we can make of it because it feels promising. And fast-forward a couple of years—we’re in early 2021—we just announced our Series B investment because the past year has shown that there’s something real there. People are using Docker heavily; people are willing to pay for it, and where we’re going with it is much higher level than just containers or just a registry. I think there’s a lot more opportunity 
there. When I was watching the market as a whole drifting toward Kubernetes, what you can see is, to me, it’s a lot like a repeat of the old OpenStack days where you’ve got tons of vendors in the space, it’s extremely crowded, everybody’s trying to sell the same thing to the same small set of early adopters who are ready for it.

Whereas if you look at the developer side of containers, it’s very sparsely populated. Nobody’s gone hard after developers in a bottom-up self-service kind of way and helped them adopt containers and helped them be more productive doing so. So, I saw that as a really compelling opportunity and one where I feel like we’ve got a lot of runway ahead of us.

Corey: Back in the early days—this is a bit of a history lesson that I’m sure you’re aware of, but I want to make sure my understanding aligns with yours—Docker was transformative when it was announced—I want to say 2012, in Santa Clara, but don’t quote me on that one. Containerization was not a new idea; we had that with LPARs on mainframes way before my time, and it’s sort of iterated forward ever since. What Docker fundamentally solved was the tooling around those things, where suddenly it got rid of the problem of, “Well, it worked on my machine.” And the rejoinder from the grumpy ops person—which I very much was—was, “Great. Then back up your email because your laptop’s about to go into production.”

By having containers, suddenly you have an environment or an application that was packaged inside of a mini-environment that was able to be run basically anywhere. And it was, write once, deploy basically as many times as you want. And over time, that became incredibly interesting, not just for developers, but also for folks who were trying to migrate applications. You can stuff basically anything into a container. Whether you should or not is a completely separate conversation that I am going to avoid by a wide margin. Am I right so far in everything that I have said there?

Donnie: Yep. Absolutely.
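
The packaging story described above can be sketched with a minimal, hypothetical Dockerfile; the base image, file names, and port here are illustrative, not details from the episode:

```dockerfile
# Build the application into a self-contained image: the same image
# that runs on a laptop runs unchanged anywhere else.
FROM python:3.9-slim

WORKDIR /app

# Dependencies are baked into the image, not installed on the host.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# The application code itself.
COPY app.py .

EXPOSE 8000
CMD ["python", "app.py"]
```

Built once with `docker build -t myapp .`, the resulting image carries its runtime and dependencies with it, which is what dissolves the “works on my machine” argument.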

Corey: Awesome. So, then we have this container runtime that handles the packaging piece. And then people replaced Docker in that cherished position in their hearts—which is the thing that they talk about, even when you beg them to stop—with Kubernetes, which is effectively an orchestration system for containers, invariably Docker. And now people 
are talking about that constantly and consistently. If we go back to looking at similar things in the ecosystem, people used to care tremendously about what distribution of Linux they ran.

And then—well, okay. If not the distro, definitely the OS wars of, is this Windows or is this a Linux workload? And as time has gone on, people care about that less and less, where they just want the application to work; they don’t care what it’s running on under the hood. And it feels like the container runtime has gotten to that point as well. And soon, my belief is that we’re going to see the orchestrator slip below that surface level of awareness of things people have to care about, if for no other reason than if you look at Kubernetes today, it is fiendishly complicated, and that doesn’t usually last very long in this space before there’s an abstraction layer built that compresses all of that into something you don’t really have to think about, except for a small number of people at very specific companies. Does that in any way change, I guess, the relevance of Docker to developers today? Or am I thinking about this the wrong way by viewing Docker as a pure technology instead of an ecosystem?

Donnie: I think it changes the relevance of Docker much more to platform teams and DevOps teams—as much as I wish that wasn’t a word or a term—operations groups that are running the Kubernetes environments, or that are running applications at scale in production, where maybe in the early days, they would run Docker directly in prod, then they moved to running Docker as a container runtime within Kubernetes, and more recently, to the core of Docker—containerd—as a replacement for Docker itself, which Kubernetes had talked to through dockershim. So, I think the change here is really around, what does that production environment look like? And where we’re really focusing our effort is much more on the developer experience. I think that’s where Docker found its magic in the first place: taking incredibly complicated technologies and making them really easy in a way that developers love to use. So, we continue to invest much more in the developer tools part of it, rather than in what the shape of the production environment looks like.

And how do we horizontally scale this to hundreds or thousands of containers? Not interesting problems for us right now. We’re much more looking at things like how do we keep it simple for developers so they can focus on a simple application. But it is an application and not just a container, so we’re still thinking of moving to things that developers care about. They don’t necessarily care about containers; they care about their app.

So, what’s the shape of that app, and how does it fit into the structure of containers? In some cases, it’s a single container, in some cases, it’s multiple containers. And that’s where we’ve seen Docker Compose pick up as a hugely popular technology. When we look at our own surveys, when we look at external surveys, we see on the order of two-thirds of people who use Docker using Compose to do it, either for ease of automation and reproducibility or for ease of managing an application that spans across multiple containers as a logical service, rather than try and shove it all in 
one and hope it sticks.
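
For the multi-container application shape described here, a Compose file is the usual expression. This is an illustrative sketch—the service names, images, and credentials are hypothetical, not anything from the episode:

```yaml
# docker-compose.yml: one logical application spanning two containers.
version: "3.8"
services:
  web:
    build: .            # the application container, built from a local Dockerfile
    ports:
      - "8000:8000"
    depends_on:
      - db
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/app
  db:
    image: postgres:13  # a supporting service, pulled as a prebuilt image
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: app
```

A single `docker compose up` brings both containers up together, which is the reproducibility and multi-container management win Donnie mentions.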

Corey: I used to be relatively, I guess, cynical about Docker. In fact, one of my first breakout talks started life as a lightning talk called “Heresy in the Church of Docker,” where I just came up with a list of a few things that were challenging and that I didn’t fully understand. It was mostly jokes, and the first half of it was set to the backstory of an embarrassing chocolate coffee explosion that a boss of mine once had. And that was great. Like, what’s the story here? What’s the relevance? Just a story of someone who didn’t understand the failure modes of containers in production. Cue laugh.

And that was great. And someone came up to me and said, “Hey, can you give the full version of that talk at ContainerCon?” To which my response was, “There’s a full version?” Followed immediately by, “Absolutely.” And it sort of took life from there.

Now, I want to say that talk hasn’t aged super well because everything that I highlighted in that talk has since been fixed. I was just early and being snarky, and I genuinely, when I gave that first version, didn’t understand the answers. And I was expecting to be corrected vociferously by an awful lot of folks. Instead, it was, “Yeah, these are challenges.” At which point I realized, “Holy crap, maybe everyone isn’t 80 years ahead of me in technical understanding.” And for 
better or worse, it’s set an interesting tone.

Donnie: Absolutely. So, what do you think people really took out of that talk that surprised you?

Corey: The first thing that I think, from my perspective, that caught me by surprise was that people are looking at me as some sort of thought leader—their term, not mine—and my response was, “Holy crap. I’m not a thought leader. I’m just a loud, white guy in tech.” And yep, those are pretty much the same thing in some circles, which is its own series of problems. But further, people were looking at this and taking it seriously, as in, “Well, we do need to have some plans to mitigate this.”

And there are different discussions that went back and forth with folks coming up with various solutions to these things. And my first awareness, at least, that pointing out problems where you don’t know the answer is not always a terrible thing; it can be a useful thing as well. And it also—let me put a bit of a flag there as far as a point in time because, looking back at that talk, it’s naive. I’ve done a bunch of things since then with Docker. I mean, today, I run Docker on my overpowered Mac to have a container that’s listening for syslog.

And I have a bunch of devices around the house that are spitting out their logs there, so when things explode I have a rough idea of what happened. It solves weird problems. I wind up doing a number of deployment processes here for serverless nonsense via Docker. It has become this pervasive technology that if I were to take an absolutist stance that, “Oh, Docker is terrible. I’m never going to use Docker.”
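
As a rough illustration of the home-lab setup described here—assuming a syslog-capable image such as `balabit/syslog-ng` (the image name, port, and paths are assumptions, not details from the episode)—it might look something like:

```shell
# Run a syslog server in a container, listening on the standard UDP port,
# with collected logs persisted to the host so they survive restarts.
docker run -d \
  --name home-syslog \
  --restart unless-stopped \
  -p 514:514/udp \
  -v "$HOME/syslog-data:/var/log/syslog-ng" \
  balabit/syslog-ng:latest

# Point each device's syslog output at the host's IP on UDP 514,
# then tail the collected logs when something explodes:
docker logs -f home-syslog
```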

It’s still here for me, and it’s still available and working. But I want to get back to something you said a minute ago because my use of Docker is very much the operations sysadmin-with-title-inflation whatever we’re calling them this week; that use case and that model. Who is Docker viewing as its customer today? Who as a company are you identifying as the people with the painful problem that you can solve?

Donnie: For us, it’s really about the developer, rather than the ops team. And specifically it’s about the development team. And this to me is a really important distinction because developers don’t work in isolation; developers collaborate together on a daily basis, and a lot of that collaboration is very poorly solved. You jump very quickly from, “I’m doing remote pairing in my code editor,” to, “It’s pushed to GitHub, and it’s now instantly rolling into my CI pipeline on its way to production.” There’s not a lot of intermediate ground there.

So, when we think about how developers are trying to build, share, and run modern applications, I think there’s a ton of whitespace in there. We’ve been sharing a bunch of experiments, for anybody who’s interested. We do community all-hands every couple of months where we share, here’s some of the things we’re working on. And importantly, to me, it’s focused on problems. Everything you were describing in that heresy talk was about problems that exist, and pointing out problems.

And those problems, for us, when we talk to developers using Docker, those problems form the core of our roadmap. The problems we hear the most often as the most frustrating and the most painful, guess what? Those are the things we’re going to focus on as great opportunities for us. And so we hear people talking about things like they’re using Docker, or they’re using containers, but they have a really hard time finding the good ones. And they can’t create good ones, they are just looking for more guidance, more prescription, more curation, to be able to figure out where’s this good stuff amidst the millions of containers out there? How do I find the ones that are worth using, for me as an individual, for me as a team, and for me as a company. I mean, all of those have different levels of requirements and 
expectations associated with them.

Corey: One of the perceptions I’ve had of the DevOps movement—as someone who started off as a grumpy Linux systems administrator—is the sense that they’re trying to converge application developers with infrastructure engineers at some point. And I started off taking a very, “Oh, I’m not a developer. I don’t write code.” And then it was, “Huh. You know, I am writing an awful lot of configuration, often in something like Ruby or Python.” And of course, now it seems like everyone has converged as developers with the lingua franca of all development everywhere, which is, of course, YAML. Do you think there’s a divide between the ops folks and the application developers in 2021?

Donnie: You know, I think it’s a long journey. Back when I was at RedMonk, I wrote up a post talking about the way those roles were changing, the responsibilities were shifting over time. And you step back in time, and it was very much, you know, the developer owns the dev stack, the local stack, or if there’s a remote developer environment, they’re 100% responsible for it. And the ops team owned production, 100% responsible for everything in that stack. And over the past decade, that’s clearly been evolving.

Developers could still own their code in production and get the value out of understanding how that code was used, the value of fast iteration cycles, without having to own it all, everywhere, all of the time, and without having to focus their time on things that they had no time or interest to spend it on. So, to me, those things have both been happening, though not quite in parallel; I think DevOps in terms of ops learning development skillsets and applying those has moved faster than development teams taking ownership of that full lifecycle and that iteration all the way to production, and then back around. Part of that is cultural in terms of what developer teams have been willing to do. Part of it is cultural in terms of what the old operations teams—now becoming platform engineering teams—have been willing to give up, and their willingness to sacrifice control. There are always good times like PCI compliance, and how do you fight those sorts of battles.

And when I think about it, it’s been rotating. And first, we saw infrastructure teams, ops teams, take more ownership for being a platform, in a lot of cases, either guided by the emerging infrastructure automation config management tools like CFEngine back in the early 90s, which turned into Puppet and Chef, which turned into Ansible and Salt, which now continue to evolve beyond those. A lot of those enabled that rotation of responsibilities where infrastructure could be a platform rather than an ops team that had to take ownership of overall production. And that was really, to me, it was ops moving into a development mindset, and development capabilities, and development skillsets. Now, at the same time, development teams were starting to have the ability to take over ownership for their code running into production without having to take ownership over the full production stack and all the complexities involved in the hardware, and the data centers, and the colos, or the public cloud production environments, whatever they may be.

So, there’s a lot of barriers in the way, but to me, those have been all happening alongside, time-shifted a little bit. And then really, the core of it was as those two groups become increasingly similar in how they think and how they work, breaking down more of the silos in terms of how they collaborate effectively, and how they can help solve each other’s problems, instead of really being separate worlds.

Corey: This episode is sponsored by ExtraHop. ExtraHop provides threat detection and response for the Enterprise (not the starship). On-prem security doesn’t translate well to cloud or multi-cloud environments, and that’s not even counting IoT. ExtraHop automatically discovers everything inside the perimeter, including your cloud workloads and IoT devices, detects these threats up to 35 percent faster, and helps you act immediately. Ask for a free trial of detection and response for AWS today at

Corey: Docker was always described as a DevOps tool. And well, “What is DevOps?” “Oh, it’s about breaking down the silos between developers and the operations folks.” Cool, great. Well, let’s try this. And I used to run DevOps teams. I know, I know, don’t email me. When you’re picking your battles, team naming is one of the last ones I try to get to.

But then we would, okay, I’m going to get this application that is in a container from development. Cool. It’s—don’t look inside of it, it’s just going to make you sad, but take these containers and put them into production and you can manage them regardless of what that application is actually doing. It felt like it wasn’t so much breaking down a wall, as it was giving a mechanism to hurl things over that wall. Is that just because I worked in terrible places with bad culture? If so, I don’t know that I’m very alone in that, but that’s what it felt like.

Donnie: It’s a good question. And I think there’s multiple pieces to that. It is important. I just was rereading the Team Topologies book the other day, which talks about the idea of a team API, and how do you interface with other teams as people as well as the products or platforms they’re supporting? And I think there’s a lot of value in having the ability to throw things over a wall—or down a pipeline; however you think about it—in a very automated way, rather than going off and filing a ticket with your friendly ITSM instance, and waiting for somebody else to take action based on that.

So, there’s a ton of value there. The other side of it, I think, is more of the consultative role, rather than the take work from another team and then go do another thing with it, and then pass it to the next team down and then so on, unto eternity. Which is really, how do you take the expertise across all those teams and bring it together to solve the problems when they affect a broader radius of groups. And so, that might be when you’re thinking about designing the next iteration of your application, you might want to have somebody with more infrastructure expertise in the room, depending on the problems you’re solving. You might want to have somebody who has a really deep understanding of your security requirements or compliance requirements if you’re redesigning an application that’s dealing with credit card data.

But all those are problems that you can’t solve in isolation; you have to solve them by breaking down the barriers. Because the alternative is you build it, and then you try and release it, and then you have a gatekeeper that holds up a big red flag, delays your release by six months so you can go back and fix all the crap you forgot to do in the first place.

Corey: While on the topic of being able to, I guess, use containers as these sort of agnostic components, and the effects that that has, I’d love to get your take on this idea that I see that’s relatively pervasive, which is, “I can build an application inside of containers”—and that is, let’s be clear, the way an awful lot of containers are being built today. If people are telling you otherwise, they’re wrong—“And then just run it in any environment. You’ve built an application that is completely cloud agnostic.” And what cloud you’re going to run it in today—or even your own data center—is purely a question of either, “What’s the cheapest one I can use today?” Or, “What is my mood this morning?” And you press a button and the application lives in that environment flawlessly, regardless of what that provider is. Where do you stand on that, I guess, utopian vision?

Donnie: Yeah, I think it’s almost a dystopian vision, the way I think about it. The least-common-denominator approach to portability limits your ability to focus on innovation rather than on managing that portability layer. There are cases where it’s worth doing because you’re at significant risk, for some reason, of depending on one specific platform versus another, but the bulk of the time, to me, it’s about how do you focus your time and effort where you can create value for your company? Your company doesn’t care about containers; your company doesn’t care about Kubernetes; your company cares about getting value to their customers more quickly. So, whatever it takes to do that, that’s where you should be focusing as much time and energy as possible. So, the container interface is one API of an application, one thing that enables you to take it to different places, but there’s lots of other ones as well.

I mean, no container runs in isolation. I think there’s some quote, I forget the author, but, “No human is an island” at this point. No container runs in isolation by itself. No group of containers do, either. They have dependencies, they have interactions, there’s always going to be a lot more to it, of how do you interact with other services?

How do you do so in a way that lets you get the most bang for your buck and focus on differentiation? And none of that is going to be from only using the barest possible infrastructure components and limiting yourself to something that feels like shared functionality across multiple cloud providers or multiple other platforms.

Corey: This gets into the sort of the battle of multi-cloud. My position has been that, first, there are a lot of vendors that try and push back against the idea of going all-in on one provider for a variety of reasons that aren’t necessarily ideal. But the transparent thing that I tend to see—or at least I believe that I see—is that well, if fundamentally, you wind up going all-in on a provider, an awful lot of third-party vendors will have nothing left to sell you. Whereas as long as you’re trying to split the difference and ride multiple horses at once, well, there’s a whole lot of painful problems in there that you can sell solutions to. That might be overly cynical, but it’s hard to see some stories like that.

Now, that’s often been misinterpreted as meaning that I believe you should always have every workload on a single provider of choice and that’s it. I don’t think that makes sense, either. I mean, I have my email system run in G Suite, which is part of Google Cloud, for whatever reason, and I don’t use Amazon’s offering for the same because I’m not nuts. Whereas my infrastructure does indeed live in AWS, but I also pay for GitHub as an example—which is also in the Azure business unit because of course it is—and different workloads live in different places. That’s a naive oversimplification, but in large companies, different workloads do live in different places.

Then you get into stories such as acquisitions of different divisions that are running in completely different providers. I don’t see any real reason to migrate those things, but I also don’t see a reason why you have to have single points of control that reach into all of those different application workloads at the same time. Maybe I’m oversimplifying, and I’m not seeing a whole subset of the world. Curious to hear where you stand on that one?

Donnie: Yeah, it’s an interesting one. I definitely see a lot of the same things that you do, which is lots of different applications, each running in their own place. A former colleague of mine used to call it ‘best execution venue’ over at 451. And what I don’t see, or almost never see, is that unicorn of the single application that seamlessly migrates across multiple different cloud providers, or does the whole cloud-bursting thing where you’ve got your on-prem or colo workload, and it seamlessly pops over into AWS, or Azure, or GCP, or wherever else, during peak capacity season, like tax season if you’re at a tax company, or something along those lines. You almost never see anything that realistically does that because it’s so hard to do and the payoff is so low compared to putting it in one place where it’s the best suited for it and focusing your time and effort on the business value part of it rather than on the cost minimization part and the risk mitigation part of, if you have to move from one cloud provider to another, what is it going to take to do that? Well, it’s not going to be that easy. You’ll get it done, but it’ll be a year and a half later, by the time you get there and your customers might not be too happy at that point.

Corey: One area I want to get at is, you talk about, now, addressing developers where they are and solving problems that they have. What are those problems? What painful problem does a developer have today as they’re building an application that Docker is aimed at solving?

Donnie: When we put the problems that we’re hearing from our customers into three big buckets, we think about that as building, sharing, and running a modern application. There’s lots of applications out there; not all of them are modern, so we’re already trying to focus ourselves on a segment of those groups where Docker and containers are really well suited to solve those problems, rather than something where you’re kind of forklift-ing it in and trying to make it work to the best of your ability. So, when we think about that, what we hear a lot of is three common themes. Around building applications, we hear a lot about developer velocity, about time being wasted, both sitting at gatekeepers, but also searching for good reusable components. So, we hear a lot of that around building applications, which is, give me developer velocity, give me good high-trust content, help me create the good stuff so that when I’m publishing the app, I can easily share it, and I can easily feel confident that it’s good.

And on the sharing note, people consistently say that it’s very hard for them to stay in sync with their teams if there’s multiple people working on the same application or the same part of the codebase. It’s really challenging to do that on anything resembling a real-time basis. You’ve got the repository, which people tend to think of—whether that’s a container repository, or whether that’s a code repository—as, “I’m publishing this.” But where do you share? Where do you collaborate on things that aren’t ready to publish yet?

And we hear a lot of people who are looking for that sort of middle ground: how do I keep in sync with my colleagues on things that aren’t ready for that stamp, where I feel like it’s done enough to share with the world? And then the third theme that we hear a lot about is around running applications. And when I distinguish this against old Docker, the big difference here is we don’t want to be the runtime platform in production. What we want to do is provide developers with a high-fidelity, consistent kind of experience, no matter which environment they’re working with. So, whether they’re on their desktop, in their CI pipeline, working with a cloud-hosted developer environment, or even in production, we want to provide them with that same kind of experience.

And so an example of this was last year, we built these Compose plugins that we call code-to-cloud plugins, where you could deploy to ECS, or you could deploy to Azure Container Instances (ACI), in addition to being able to do a local Compose up. And all of that gives you the same kind of experience because you can flip between one Docker context and the other and run, essentially, the same set of commands. So, we hear people trying to deal with productivity, trying to deal with collaboration, trying to deal with complex experiences, and trying to simplify all of those. So, those are really the big areas we’re looking at: the build, share, and run themes.
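[For reference, the context-switching flow Donnie describes looked roughly like this, based on the Docker ECS and ACI integrations of that era; the context names here are made up for illustration:]

```shell
# Run the app locally against the default Docker context
docker compose up

# Create a context backed by AWS ECS (prompts for an AWS profile/credentials)
docker context create ecs myecs

# Create a context backed by Azure Container Instances
docker login azure
docker context create aci myaci

# Deploy the same Compose file to the cloud by switching contexts
docker context use myecs
docker compose up

# Or target a context for a single command without switching
docker --context myaci compose up
```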

Corey: What does that mean for the future of Docker? What is the vision that you folks are aiming at that goes beyond just, I guess—I’m not trying to be insulting when I say this, but the pedestrian concerns of today? Because viewed through the lens of the future, looking back at these days, every technical problem we have is going to seem, on some level, like it’s, “Oh, it’s easy. There’s a better solution.” What does Docker become in 15 years?

Donnie: Yeah, I think there’s a big gap between where people edit their code, where people save their source code, and that path to production. And so, we see ourselves as providing really valuable development tools that—we’re not going to be the IDE and we’re not going to be the pipeline, but we’re going to be a lot of that glue that ties everything together. One thing that has only gotten worse over the years is the amount of fragmentation that’s out there in developer toolchains and developer pipelines. Similarly, with the rise of microservices over the past decade, it’s only gotten more complicated: more languages, more tools, more things to support, and an exponentially increasing number of interconnections where things need to integrate well together. And so that’s the problem we’re really solving: all those things are super-complicated, a huge pain to make work consistently, and we think there’s a huge amount of value in tying that together for the individual and for the team.

Corey: Donnie, thank you so much for taking the time to speak with me today. If people want to learn more about what you’re up to, where can they find you?

Donnie: I am extremely easy to find on the internet. If you Google my name, you will track down, probably, ten different ways of getting in touch. Twitter is the one where I tend to be the most responsive, so please feel free to reach out 
there. My username is @dberkholz.

Corey: And we will, of course, put a link to that in the [show notes 00:29:58]. Thanks so much for your time. I really appreciate the opportunity to explore your perspective on these things.

Donnie: Thanks for having me on the show. And thanks everybody for listening.

Corey: Donnie Berkholz, VP of products at Docker. I’m Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you’ve enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you’ve hated this podcast, please leave a five-star review on your podcast platform of choice along with an insulting comment that explains exactly why you should be packaging up that comment and running it in any cloud provider just as soon as you get Docker’s command-line arguments squared away in your own head.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

This has been a HumblePod production. Stay humble.
