Ev Kontsevoy is the CEO of Gravitational, where he and other engineers build open-source tools that help developers securely deliver cloud apps to restricted and regulated environments. Besides computers, Ev’s obsessed with trains and old film cameras.
- Gravitational website: https://gravitational.com/
- Gravitational GitHub: https://github.com/gravitational
- Teleport GitHub: https://github.com/gravitational/teleport
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Cloud Economist Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey: This episode is sponsored in part by Catchpoint. Look, 80 percent of performance and availability issues don’t occur within your application code or your data center itself. They occur well outside those boundaries, so it’s difficult to understand what’s actually happening. What Catchpoint does is make it easier for enterprises to detect, identify, and of course, validate how reachable their application is, and of course, how happy their users are. It helps you get visibility into reachability, availability, performance, reliability, and of course, absorbency, because we’ll throw that one in, too. And it’s used by a bunch of interesting companies you may have heard of, like, you know, Google, Verizon, Oracle—but don’t hold that against them—and many more. To learn more, visit www.catchpoint.com, and tell them Corey sent you; wait for the wince.
Corey: This episode is sponsored in part by strongDM. Transitioning your team to work from home, like basically everyone on the planet is? Managing gazillions of SSH keys, database passwords, and Kubernetes certificates? Consider strongDM. Manage and audit access to servers, databases—like Route 53—and Kubernetes clusters no matter where your employees happen to be. You can use strongDM to extend your identity provider, and also Cognito, to manage infrastructure access. Automate onboarding, offboarding, waterboarding, and moving people within roles. Grant temporary access that automatically expires to whatever team is unlucky enough to be on call this week. Admins get full audit ability into whatever anyone does: what they connect to, what queries they run, what commands they type. Full visibility into everything; that includes video replays. For databases like Route 53, it’s a single unified query log across all of your database management systems. It’s used by companies like Hearst, Peloton, Betterment, Greenhouse, and SoFi to manage their access. It’s more control and less hassle. StrongDM: Manage and audit remote access to infrastructure. To get a free 14-day trial, visit strongDM.com/sitc. Tell them I sent you, and thank them for tolerating my calling Route 53 a database.
Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. This week's promoted guest is Ev Kontsevoy. Ev, you are the CEO of a company called Gravitational. First, welcome to the show, and secondly, what's Gravitational do?
Ev: First, thank you for having me, and second, Gravitational enables developers to have secure access to all of their production environments, and it also allows you to run applications on any environment anywhere in the world.
Corey: It's definitely the right direction to go in, compared to, I don't know, giving developers insecure access to other people's production environments, which seems to be incredibly in vogue in some circles these days. But—
Corey: —it's important to solve problems like this, where I want to access different environments that, if there's a just and loving God, are actually separated from one another. It's the old saw of everyone has a test environment; some people are lucky enough to have it be separate from their production environment. It becomes a challenge that everyone struggles with on some level. And everyone who talks about having solved it is basically lying, in my experience. “Oh, yeah, we have a completely distinct test environment that's a high fidelity copy of production.” And then you scratch a little bit, and it turns out the, “Well, you know, except for the continuous deployment server because that thing talks to everything, and please don't ask us about our security posture there.” So, there's a lot of directions you can go in when we're talking about securing access into environments. Where do you folks start, and where do you stop?
Ev: Well, it's an interesting thing that you used this word 'environment' so many times. We believe, fundamentally, that having to maintain computing environments is a major limitation for us software developers today. Like, back in the day, I could just build my software, put it in a box, give it to you, and that would be the end of the transaction, right? You would put it on your laptop, or put it on your server in the basement, and you would just run it yourself, or the software would just run by itself. But as we transition to this cloud world, now in addition to building your software, you also need to build and maintain the environment.
You know, we celebrate this whole infrastructure-as-code movement, but what it leads to is just enormous complexity. So, essentially, we're not just building our applications, we're also building computers to run our applications on, because that's what these environments are all about. So, ultimately, we want to enable you, the engineer, to not even think about the environment. Don't you think that engineers should just do a git commit, git push and go home, and have software magically run anywhere in the world where users need it? So, that's really where we're going. But we're starting with access. Access today is the problem that we solve really well. And if you're accessing your servers using something else, you're probably either not doing it well, or you're very inconvenienced.
Corey: I talk from time to time about the ridiculous serverless system that I built that is my newsletter publication system, and aspects of that do have an "everything that hits git automatically gets deployed" model. Now, it sounds awesome—and it is from a developer perspective—but let's be real for a second here: there's no test built into this. I'm a terrible developer. Security? Yeah. There's one user account in this thing: it's mine, it has full access to everything, and my default security posture is, sure. It's not so much a posture as it is an unfortunate slouch.
Ev: You know why that is?
Corey: Because, again, what you talk about is right, it's a colossal pain in the butt. When I don't have to worry about things like insider threats; this is a back-end system, at the end of the day, so if things break, there is no user that is impacted beyond me; and it gets everything else out of my way. There's no value to me in increasing the security posture because, in my perspective and the perspective of many other folks, security is a continuum between, effectively, being completely wide open and being completely unusable. Where on that spectrum do you wind up wanting to fall? I bias for getting things done quickly.
Ev: Absolutely. If you don't mind, I will be quoting you in the future. You basically said, there is no value in security. This is why I don't like it when people call Gravitational a security company. We don't give you security, we give you instant access to whatever you need to be productive right now.
So, one of our open-source projects is called Gravitational Teleport. Why do we call it Teleport? Because it creates this illusion that you can teleport any computing device your company owns into the same room with you, so you can instantly access it using protocols like SSH and Kubernetes, with support for other protocols coming. It’s on GitHub; go check it out. But it completely erases this mental partitioning that we apply to all these environments, like test, production, Amazon, GCP, VMware, on-prem, basement, satellite, self-driving car.
All of those are going to be instantly accessible to you without compromising security. Security, it's almost like when you're building a bridge over a river. Is security a benefit? Of course, you don't want that bridge to collapse, right? So, it's secure in that sense, but the benefit is getting there, getting to the other side of the river. So, that's what we believe we enable with Teleport, specifically.
Corey: Unplug the computer and it's way more secure, but your customers are probably not going to be happy.
Ev: Yes. You're losing access.
Corey: You're right when you, I guess, tweak my quote to say that there's no value in security. There's a lot of truth to that because I'm very reassured constantly by a wide variety of companies that security is incredibly important to them. They always tell me this in their announcements of data breaches involving my data, where it was very clear that security was not very important to them. It was a backburner priority, and now they're doing damage control.
It's, “Oh, what is your intrusion detection system?” “The front page of The New York Times. We check that thing every day.” It becomes an afterthought because you're not going to improve security and get to your next business milestone by doing that, in most cases. It is not directly aligned with the stated goal of most businesses to improve security posture. It's something that has to be done; it's not a value add.
Ev: Look, it's just expensive, too, and there is always the talent shortage to remember. If you think of major tech companies here in the Valley, you know, Google, Netflix, Facebook, go and ask them, “How do you SSH into anything?” And you will see that most of these companies, they built quite sophisticated internal products to do that. They do not use off-the-shelf, completely unmodified components, like OpenSSH, for example. OpenSSH is just the building block.
Who else can do that? That's really my question. I went to a couple of dinners with CTOs of other companies here in the Valley, and they all confessed to me that they're all struggling with it, that it's basically—it's the wild, wild west out there, how access to infrastructure is implemented. And you can check it yourself. Go and ask your engineer friends, “Hey, do you still have access to the production environment of your former employer?” You know how many of them will say, “Yes, I do.”
Isn't that scary? Like, there are companies out there running applications holding data of their customers, and their former employees still have access to it. Yeah, because of this trade-off of security and convenience. If you don't make it convenient for engineers, they are not going to be productive, or they are going to build backdoors. I've seen that happening.
Corey: One of the more, I guess, amusing anecdotes from earlier in my career was when I was unceremoniously fired from a company—that kind of happened a lot based upon, you know, my personality, and everything I say, combined with everything that I do—and for the next day or so, there was a repeated series of outages on the company's service. My perspective is and always has been, once I no longer work here, I don't care about you anymore; I'm certainly not vengeful, or wrathful. But what I strongly suspect happened—because remember, I ran the ops teams at these places—is that suddenly they were having to rotate credentials across the board for the shared service role stuff, which is absolutely the right move. “Eh, we wound up letting someone go. Maybe they're bitter, maybe they're vengeful but, oh crap, that person had access to all of these secrets that are difficult to change. We'd better get started right away.” And frankly, as a former employee, I want them to do that. Two years later, if there's a data breach there, I don't want to be even a remote consideration—
Corey: —as the cause of having done that. Because it's, “I don't work here anymore, please lock me out.”
Ev: Absolutely. I would even say that rotating credentials is an anti-pattern. You're not even supposed to have long-lasting credentials, which removes the need for rotation. Technically, this benefit is called reducing the operational overhead of implementing proper security. Well, it starts with using the proper tools.
I have a somewhat controversial statement, for example: “Hey, if you're using SSH keys today to access your infrastructure, even if you're storing those keys in a secure vault, you're doing it wrong. You're not supposed to be using SSH keys to access servers. You're supposed to be using certificates, and certificates need to be issued with automatic expiration for you every day.” So, it's just as convenient to use, but it removes the need to rotate anything. So, if you just don't show up for work the next day, your access will be automatically revoked.
So, that's the way you do SSH security, for example. And this list goes on and on and on. If you do it right, or if you are using a solution that does this right by default, without you having to configure thousands of config files all over the world, then the operational overhead and pain kind of go away. And that is something that we, with our open-source project, are trying to promote and enable.
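Ev's short-lived certificate workflow can be sketched with stock OpenSSH tooling; this is an illustrative example rather than how Teleport implements it, and the user name and 24-hour lifetime here are made up:

```shell
# Sketch of certificate-based SSH with plain OpenSSH. In practice the CA
# key would live on a hardened host (or inside a tool like Teleport),
# not next to the user key as it does here.
cd "$(mktemp -d)"

# 1. The organization's certificate authority key pair.
ssh-keygen -q -t ed25519 -f ca -N '' -C 'example-ca'

# 2. The engineer's ordinary key pair.
ssh-keygen -q -t ed25519 -f user_key -N '' -C 'alice'

# 3. Sign the user's public key, producing a certificate that expires
#    in 24 hours -- nothing to rotate; access simply lapses.
ssh-keygen -s ca -I alice -n alice -V +24h user_key.pub

# 4. Inspect it: user_key-cert.pub carries the principal and a
#    "Valid: from ... to ..." window.
ssh-keygen -L -f user_key-cert.pub
```

A CA-aware setup like this (servers trust the CA via `TrustedUserCAKeys`) is what tools such as Teleport or HashiCorp Vault's SSH secrets engine automate, reissuing the certificate transparently so it stays as convenient as a plain key.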
Corey: The challenge I run into is, I agree with everything you just said from a high-level perspective, but then it turns into almost conference-ware on some level, where the idea of what you say versus the real-world consequences slash implementation of that winds up breaking down. For my exact use case, for example, I have an EC2 instance that I use as my development environment. When I'm traveling, I only take an iPad with me and I use an SSH client called Blink to get into this thing. It also speaks Mosh, which piggybacks on the SSH handshake, but that's neither here nor there. It does not support certificates.
And it's an iOS app, so getting it wedged in is going to be a little bit of a challenge. So, I could see in an environment where I'm doing that, yeah, we don't use SSH key pairs, except for that thing. Same story with GitHub repositories. To my understanding, they also don't support SSH certificates and whatnot. So, this turns into edge-case exception territory pretty easily, where, yeah, we generally believe as a best practice that you should not be using SSH key pairs, but here's the long list of things that require them, so we do it anyway.
Ev: Absolutely. You could also talk about network equipment. How many routers have you purchased, even for your own house or apartment, that have a baked-in SSH server that only supports keys?
Corey: Someone has fancy network equipment. Mine is still stuck on Telnet.
Ev: [laughs]. Oh, yeah, that's even worse. But, seriously, this is why we are doing it in the open, because at the end of the day, if you have this vision for how the future is going to look, and that's a better future than the present, you need to find as many like-minded people as possible. And the best way to do it is just to put everything you're building in the open out there, and make it easily accessible. So, if someone is working on the next generation of iPad SSH client, they could just go, and take, and use our code to make it support certificates.
Or if someone is struggling to set up certificate authentication for SSH with existing open-source tools, here's another open-source alternative, and it does certificates by default, so there is no complexity, kind of, associated with it. So, by doing it in the open, and interacting with the community, and sitting here chatting with you, that's how I think we will proceed moving forward: slowly reshaping the future, towards simplicity, first of all; ease of use, first of all; and then security, compliance, and everything else that comes with it.
Corey: So, tell me a little bit about what the getting started process looks like. It's one of those ideas where, on some level—we see this on conference stages all the time—the one that really stuck with me was an AWS blog post about how the whole point and value of Kubernetes—which, anything they say after this that isn't, “It's good on your resume,” is kind of a lie, and that's a hill that I understand is unpopular but also completely correct—and they say part of the value here is that you never have to SSH into your environment ever again. And that was great. When I finished reading the blog post, I checked what else had come out that day, and oh, now they've launched this new integration with Systems Manager Session Manager that lets you get a shell inside your containers so you don't have to SSH into them anymore. It’s, oh, that's right, the things we say versus the things we do. What does the getting started process look like that helps make the ideal city-on-a-hill version a little bit closer to reality?
Ev: So, before we get to getting started, I think the question of whether you should or should not SSH into pods—or SSH into the machines that Kubernetes is running on—is really up to you. It's up to your organization; it's up to your operational philosophy. I don't think there is a single answer, or industry best practice, where you just go out and say, “You should always do that.” And when companies come out with these messages, you're right, it just feels not genuine. There is a way to do it simply and securely.
And if you want to SSH into the same infrastructure that Kubernetes is running on, go ahead and do it. However, make sure that you use the exact same credentials; make sure that they are consistent. So, for example, Kubernetes has role-based access control. SSH at its core does not have role-based access control, so you should use an SSH implementation that enables you to set the same kind of roles and same permissions. For example, you could say, developers must never touch production data.
So, your SSH layer needs to understand what is production and what is staging, right? And then when you're accessing a machine, your SSH layer needs to be aware if there is any customer data on that machine, or if that machine gives you access to customer data. Traditional open-source SSH tools are just too low-level to understand this modern cloud complexity. And this is why some companies just say, no, we're going to disable SSH completely; you should only use Kubernetes. And then they run into other issues when they do that.
So, going back to our project, how do you get started? Well, go to github.com/gravitational/teleport and look at the readme. It's a very small readme, and it tells you that Teleport is a single binary, same as sshd. So, you put it on your servers, and then you give it very little configuration. And it gives you all of these things: it gives you the proxy that you use to access all of your infrastructure, it gives you role-based access control—which is coming to the open-source version, by the way—and it gives you integration with single sign-on, so you could do something like GitHub authentication, or Okta, or whatever, to get into infrastructure. And it does it in the same way as you access Kubernetes.
So, if you are a member of the developers team in Kubernetes RBAC, you are going to be a member of the developers team in your SSH RBAC. And the same rules that you set for developers will be applied to both protocols. So, you could have your cake and eat it, too. But yeah, then you click the download link to play with it, and then there is documentation and a quickstart. It's basically the same experience as we're all accustomed to when playing with well-packaged open-source solutions.
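The consistent-roles idea Ev describes can be sketched as a role definition of the kind Teleport uses; the exact schema varies by Teleport version, and the team name, logins, and labels here are hypothetical:

```shell
# Hypothetical Teleport-style role: developers get an SSH login on
# staging-labeled nodes and are denied anything labeled production.
# The schema is a sketch; consult the docs for your Teleport version.
cat > developer-role.yaml <<'EOF'
kind: role
version: v5
metadata:
  name: developer
spec:
  allow:
    logins: ["ubuntu"]
    node_labels:
      environment: "staging"
  deny:
    node_labels:
      environment: "production"
EOF
grep 'name:' developer-role.yaml
```

The point is that one role object like this governs both SSH and Kubernetes access, applied with something along the lines of `tctl create -f developer-role.yaml`.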
Corey: A lot of marketing on your site talks about using this to get into clusters of machines or, of course, Kubernetes, which is the third rail I am not touching at the moment because, oh dear stars, do I get letters whenever I do. And that's great and all, but my primary use for most of what I do with EC2 these days is as a developer environment. I don't have a cluster because it turns out that some of the EC2 instances I use are really big and they keep making bigger ones, so the problem gets way easier.
Ev: [laughs]. It's kind of interesting you mentioned this. All right, let me step down from, you know, the CEO-of-Gravitational role here, and as an engineer, I do find it quite interesting that we are now getting these enormous machines with 64 cores and terabytes of RAM—thanks to AMD stepping up their game. So, it is kind of questionable: do you even need an entire environment with this kind of auto-scaling stuff if computing is now so cheap? And engineering your application to run on a single box is actually much simpler.
So, part of me looks at this whole thing, and I'm thinking, how many startups are out there, how many, just, web applications would do way better running on a single box? And you could have the other one for failover, but the point is that I'm quite fascinated with the progress we're making on computing, again. But going back to your question on, what is a cluster? Do I even need a cluster? This is basically a question about language.
It's the word ‘environment’ I like to use. Like, you have an environment you want to go to: it could be a single machine, it could be two, it could be two thousand. But then you have these things like regions, right? You could have a single node in one region on AWS, then you could have another node in another region. So, what do you call those?
And then you have, for example, systems like Kubernetes, and they do use the word ‘cluster.’ So, we try to use language that is as agnostic as possible. Just think of a cluster as just a collection of machines, and a single node is a cluster. And that's another problem that we're struggling with: what do you call a server these days?
If you use the word server, some people will say, “Oh, no, no, no, I don't need to access servers. I need to access VMs.” [laughs]. Or they will say, “No, I don't need to access VMs, I need to access computing instances.” So, what is this thing you're accessing? Is it an instance, or maybe it's a Kubernetes pod?
So, we try to use this language that's kind of neutral. We use ‘cluster’ to describe a collection of any computing devices you may have, and we use the word ‘node’ to describe anything you can SSH into. It could be a pod, it could be a VM, it could be an instance, it could be a server, it could be a Raspberry Pi or a self-driving vehicle. So, it's all a node from Teleport’s point of view.
Corey: In what you might be forgiven for mistaking for a blast from the past, today I want to talk about New Relic. They seem to be a relatively legacy monitoring company, and I would have agreed with that assessment up until relatively recently. But they did something a little out there: they reworked everything. They went open source, they made it so you can monitor your whole stack in one place and, most notably from my perspective, they simplified their pricing into something that is much more affordable for almost everyone. There's even a free tier with one user and 100 gigs per month, totally free. Check it out at newrelic.com.
Corey: To be clear, it has its own standalone installer, or am I in NPM hell, if I want to be using it?
Ev: It's a single binary. You don't need installers. Again, we as a company, we are addicted to comple—to simplicity.
Corey: Oh, you almost misspoke there as, “We're addicted to complexity.” And yeah, I actually have a list about 15 companies, I could absolutely put that as their tagline.
Ev: And you understand why this happens, especially in the open-source world. If you make your product so easy to use, and so dead simple, then people will just start saying, “You know what, why would I pay you money? It's open-source, it's just this magic dust, I could sprinkle it all over my infrastructure and call it a day.” So, then the open-source companies, they're basically forced to make the product more complex. And they say, “Well, there are these 57 features that exist only in the enterprise version, and you really need some consulting help to set them up.” So, I could see why that is the case sometimes.
But in our case, I think it kills the value proposition. If you have the stamina and talent to deal with complexity today, you could build yourself a fantastic access solution using OpenSSH. Go ahead and do it. But if you want something that requires as little time commitment as possible, and even expertise—you want the right thing by default—then go and download Teleport and see how easy it is. It's a single binary. It's the simplest thing we could possibly think of.
Corey: Getting up and running quickly and easily is helpful. I absolutely agree with that. This is one of those stories where I am in no way, shape, or form an expert in the area that you have built an entire company around; however, I'm an overconfident white guy on the internet, so of course, I'm going to make unfounded wild speculation and present it as fact. My experience with the open-source world has always been that people are thrilled to pitch in on open-source projects and get them to a point where they scratch their own itch, but a lot of what makes software usable and approachable by various folks is accessibility. It's UX, it's polish. And that's not really fun for people to pitch in on, in most cases, on a volunteering basis. So, I've always sort of taken the perhaps overly charitable position that the reason so much open-source software is crappy is that the stuff that makes it easier to work with isn't the fun stuff to build. You need to start paying people to do those things.
Ev: Oh, that's absolutely true.
Corey: And I do understand that I'm conflating a bunch of things that don't necessarily agree. Open-source does not mean volunteers only: a lot of people are paid to work on open-source; there's a variety of different governance models, et cetera, et cetera. Please, please, please don't write me letters on this one.
Ev: I was expecting you to say something more controversial, but honestly, everything you said, I think most of us will agree with. Yes, it is true that what motivates people to begin open-source projects is to scratch their own itch. For example, why did we start Teleport? The previous company that we—the same team here—built was Mailgun: email delivery, you've probably heard of it.
And after Mailgun's acquisition by Rackspace, we joined Rackspace, this much, much larger cloud company. And Rackspace, understandably, told us, well, you have to migrate Mailgun from SoftLayer, which is now IBM, to a Rackspace data center. So, think about it. You have this cloud environment that you set up, and you just need to take it and move it somewhere. How long do you think it took us? Wild guess?
Corey: I'm going to guess… well, that's a problem. There's two different directions I could take that in. I could come out with something, “Oh, 30 seconds.” And then it's like, “Well, no. We're not that good.” Which is never a good thing. Or I could go the opposite direction, “18 years?” And the answer is, “No, no, no, we're a defense contractor. It took us 40.” So, there's no good answer as far as how to come up with that. So, I would hesitate to even hazard a guess.
Ev: But why didn't you say 18 seconds? Because if you think about it—so someone asked these guys to move a bunch of software. So, what is software? Software is just text files. So, how big are those files?
I don't know, like a megabyte, five megabytes. How much code can you type in three years? So, even if it's, let's say, five megabytes, you take five megabytes and divide by the speed of the internet, and that's how long it takes software to travel between data centers, so why isn't it seconds? That's the question to ask. And my non-technical friends, when they asked me, “So what are you doing post-acquisition?” I said, “Well, we're moving our software from one data center to another.” And they said, “Well, how long does it take to copy a bunch of files? Why is it a project? So, how long did it take you?” And I said, “Six months.” And people are like, “Wow. Why is it six months?”
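Ev's back-of-the-envelope math holds up: five megabytes of source code is a sub-second transfer on any modern link, which is what makes "six months" so striking. A quick sketch, with illustrative link speeds:

```shell
# Time to move 5 MB of source code over two hypothetical link speeds.
# 5 MB = 5 * 1024 * 1024 bytes; multiply by 8 to get bits on the wire.
awk 'BEGIN {
  bits = 5 * 1024 * 1024 * 8
  printf "100 Mbit/s: %.2f seconds\n", bits / (100 * 1000 * 1000)
  printf "  1 Gbit/s: %.2f seconds\n", bits / (1000 * 1000 * 1000)
}'
```

Under half a second either way; the six months went into rebuilding the environment, not moving the bytes.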
It's because of all of this complexity that we've attached to all these environments. And that's why we started to work on Teleport once we were out of Rackspace. Because even setting up similar infrastructure security takes you a while, okay? So yes, we did scratch our own itch. And the second open-source project we work on is called Gravity. Gravity allows you to take all of your software, like your entire Kubernetes cluster—so that's one important limitation, that Gravity only works with Kubernetes clusters—so it takes your Kubernetes cluster—
Corey: And I personally really hope that the next words out of your mouth to finish that sentence are, “And throw it in the garbage.” But I have a sneaking suspicion, that is not the case.
Ev: You know what? It makes it easier. But let me finish that sentence.
Corey: [laughs]. Of course.
Ev: So, it takes your Kubernetes cluster, and packages it into a single file, similar to a Docker image. It says, “This file is your software.” So, if you want to throw it into the garbage, you can literally drag and drop it into the garbage on your desktop. But if you want to drag and drop it into a different data center, you could do that, too. So, now when the CIA comes to you, and they say, “We want to use your software, but we don't want your SaaS. We want to host it ourselves on our own top-secret cloud, and we're not going to let your DevOps people touch it,” then you will use Gravity to give it to them and say, “Here is a file. And that file is the software you want.”
This level of simplicity is what I've personally been missing since I moved from, kind of, more traditional server programming to this cloud world, where modern cloud applications don't even feel like software sometimes. They feel like an advanced form of configuration for your environment. It's this thin layer of stuff that you're spreading across many, many instances on Amazon or something. And then you can't even tell where my software is. It's just, like, it's everywhere. It's 15 different repositories, and a couple of Docker registries, and no one really knows how to collect it all together. So, that's what Gravity does: it allows you to say, “This file is my software,” and then it will just run anywhere by itself.
So, yes, going back to your original question, we did build these things just to scratch our own itch, but ease of use is probably the most important internal metric that we share when we work on these projects: simplicity. And that is because it just so happens that making things simple and making them management-free is also scratching our own itch. Just think of Gravitational engineers. We've all done our share of DevOps and system administration in the past, and we are also engineers, developers, so we don't really like babysitting hardware. Because when you are babysitting your environment, that's really what you're doing.
So, my ideal version of any kind of software product is that it needs to be unmanaged. You know what people say: “Oh, buy managed Kubernetes, managed database, managed this, managed that.” It's like, “Why do we need to manage software?” We're dreaming about having self-driving cars. You see the irony here? We want cars to be self-driving, but we want to manage software? Why can't we make software that's self-driving first, before attempting something even riskier?
Corey: Oh, absolutely. It's the idea that the things we claim to want and the things we actually want are worlds apart. Great, we have this complete CI/CD system. We had to add a step where a human being can click a button for audit compliance. Yeah, that's not great. And then they automate something that hits a key every 20 seconds on a keyboard, with some physical IoT robot, to get around that. It's, okay, at this point, you've built a ridiculous Rube Goldberg contraption. But that's in every CI/CD system on a long enough timeline.
Ev: Let me give one little example. Sometimes investors would book a meeting with us, and they would ask questions like, “Do you have any plans to add this advanced user interface, or a user interface for this or that?” And sometimes I just give them a straight answer: if you make a feature robust enough that it just works, you don't really need to manage it, so you don't really need a user interface for it. But I like to give them this joke. Like, look. You have an SSD controller in your laptop right now. Every single employee at your company has that SSD in their machine.
These SSDs have controllers. Those controllers run a complex piece of software on them. Do you look for a single pane of glass to manage SSD controllers across all of your employees? Of course you don't look for that. It's silly. Why? Because it just works. So, why don't we make our cloud software be like SSD controllers, so then we don't need to have these massive control panels with knobs, and charts, and graphs, and logs? That is the future that we're optimizing for, and I can't wait for this to happen. I'm personally tired of managing environments.
Corey: That's, I think, something that everyone wants to say: that they're tired. They're tired of doing the stuff that is garbage and doesn't add direct value. Working with AWS Organizations—which is where I tend to focus—really emphasizes this. I spend more time setting up subordinate accounts for isolation of workloads or teams than I often do working within those accounts. It's a painful process, it's boilerplate, and solutions are slowly evolving in that direction, but if I didn't have to do a lot of that stuff, I wouldn't. Which means that this ties back to the whole problem of, what are we trying to achieve? And if whatever I'm working on now doesn't directly align with that, I'm probably going to axe it as soon as humanly possible.
Ev: Yes. Honestly, I thought that Heroku-like environments were going to be the future. It's been now, what, almost 10 years since Heroku was launched. I don't feel we have made enough progress in that area. Some people call it serverless, but what I see is this serverless movement being hijacked by cloud providers simply saying, “Hey, we're going to manage this serverless framework on top of servers, basically, for you.”
But ultimately, that's something I don't want to think about. I want to think about this entire environment: AWS, a region, an access point. That's my computer. So, I'm going to push my software into it, preferably as a single file. It just makes it easy for me to reason about it this way. And just have it run there by itself. That would be the dream. And I don't want to know what a load balancer is. I don't even understand: if my application has a defined entry point, why can't this thing make it accessible for me? Why do I need to understand different types of LBs and scaling groups? I think we're pretty close to closing this gap in our abstractions. So, I think it's about time, and Gravitational is working on it.
Corey: Before we call this an episode, there's one more thing I want to talk to you about. Now, for folks who have not been on this podcast before—which I'm told is still more people than have been, because I don't have that many episodes yet—one of the things I do in the background is start a quick conversation before I whack the record button. And I ask a very small list of questions. One of them—which leads to fascinating answers—is, what do you want to make sure that we don't talk about? Because this is not a case of I only tell the stories people want to have told, but if I sit here and beat someone up over PR missteps that their company has made, for example, it's not a good episode. It's awkward, it's uncomfortable, and no one likes it, so I want to avoid those things, if possible. And I've gotten a range of hilarious answers over the years of asking that question. But you gave me a great one, which is specifically, don't ask you to shit on other companies' technology. I love that answer. Tell me more.
Ev: Well, I believe that we are much better off when we build on top of lessons that we learn from each other. No single company—even as powerful as Microsoft, Google, or Amazon—is capable of solving every single problem in the best possible way. So, if I make a mistake, I don't mind that some other open-source project will come in and correct me for it. That keeps me honest. And I will do the same to other companies who are working in open-source space.
And also, as an engineer, I love stitching together the best open-source tools from different authors, from different vendors, to assemble a solution that works for me. So, by criticizing each other's projects, we're not really helping anyone make these choices. But what I do think is fair is criticizing approaches. For example, what I said earlier about SSH keys. It's just not a very scalable way of doing it, and I could argue that using very technical arguments. But genuinely, I do believe that every open-source project, every product out there deserves some attention, and ultimately we should be trying to integrate with each other, hopefully using open-source, open standards, and that leads to this outcome that I'm dreaming about.
Corey: People think that a lot of my brand is built on crapping on companies' technologies, and they're kind of right, but I'm careful to punch up. There's a reason that I own twitterforpets.com. If I make fun of someone's actual small startup, I'm a jerk because that's people's hopes, dreams, et cetera. Most of the companies I make fun of, other than a few very egregious examples, are either multibillion-dollar entities or are publicly traded.
At that point, you've opened yourself up to scrutiny and criticism, and frankly, you can weather my slings and arrows in a reasonable way. If people are listening to what I say and feeling bad as a result, I've failed somewhere. I also think that when companies try this—our entire marketing brand and persona is going to be built on crapping on other people's work—it doesn't look good at all.
Ev: I agree. Look, even going after larger companies, you have to remember that the companies are not monsters. Take, for example, Amazon, famous for the two-pizza teams. So, if you pick a particular Amazon offering and you go after them, there are basically, like—what—10, 15, 20 people behind it. So, that's really the group you're having an argument with; it's not all of Amazon. And they all have feelings, and we all work hard, and I do believe that, at the end of the day, we love technology, we like computers, we like what we do, so creating drama unnecessarily, that's not something I can be excited about.
Corey: Yeah. At that point, the feud becomes the story, rather than the actual value of what it is that you've built.
Ev: [laughs]. Exactly. There's just too much of that happening in the open-source space, and that is unfortunate.
Corey: It's the rage-fueled equivalent of we don't have much useful to say, so we're going to throw a big party at a conference.
Corey: So, one last question to get us slightly back on topic before we wind up calling it a show. As we look at what's happened over the course of the industry, progressing from running things on mainframes to the whole data center story, to the thin client, thick client, et cetera, and back, now we're in a world of cloud. Do you think this is the end state, I guess the pinnacle, of computing? Nothing to go beyond this; shut it all down, we're done. Where do we go from here?
Ev: I think there is definitely going to be another thing that will come to replace the cloud. And I hope that soon we will be able to say, “Oh, if you're doing cloud-native computing or cloud-native applications, that's the legacy way of doing things.” I don't know what this next thing is going to be called, but I do like to think about it a lot, like, what it will look like. And I think one area for improvement is for us to close this gap.
We keep saying the data center is the new computer. Like, the data center is a computer. Kubernetes is an operating system for the data center as a computing device. DC/OS from Mesosphere, remember? But it just hasn't happened yet.
Kubernetes still feels like a collection of primitives to manage a bunch of containers, plus a million other things—it just feels like you're dealing with drivers. If you compare Kubernetes to an operating system, I'd say that the drivers are too exposed. If I'm building an application for Linux, or Mac, or Windows, I don't think about USB drivers if I want to take sound from a USB mic. But modern cloud environments make us think about load balancers, volumes, and all this stuff that is just too low-level. So, I believe that we will arrive at this post-cloud world where a data center truly becomes a computer; where the process of creating and publishing an application for macOS, or iPhone, or AWS will be extremely similar, where you say, “Here's my file. This file is an image; it's my application. You could put it into an AWS account, and it will run.”
That's what I believe the post-cloud world is going to look like, and it's almost like Gravitational's vision for serverless. Because today, when we talk about serverless, it's basically just another framework on top of something like Kubernetes. But we believe that serverless is when the process of building an application ends with a single build artifact: this file is my software. Where it will run, I don't want to care. I don't want to know. That's true serverless to me. So, hopefully, once that happens, we can finally say goodbye to the cloud-native world. Welcome to this post-cloud world that Gravitational is trying to enable.
Corey: And you're doing a better job of articulating that vision and that story than a certain large company whose cloudless hashtag resulted, very quickly, in their being, effectively, cyber-bullied off the internet because the entire premise was ludicrous. I think that you're right. I think there is definitely something that has to come next. If not, then what are we all doing here? We could not have imagined, 15 years ago, a lot of the things that we take for granted today. And I imagine that trend is not likely to slow down anytime soon. I feel like I'm tempting fate by making that observation in 2020, but I mean it from an optimistic point of view.
Ev: Agreed. Agreed. I'm not going to talk about those other companies because that's specifically something I asked you not to ask me about.
Corey: Exactly. We're not going to name names; it's fine. But if it helps anything, they're worth tens of billions—sorry, my apologies. They're worth hundreds of billions of dollars. So, again, I don't consider it punching down, which is really, I think, shorthand for how I view this.
I learned that you can still punch down at big companies when they're trying something new and you're crapping on them, or when they're talking about their journey and how they wound up going somewhere. I got that one wrong in the early days. And, well, “I'm sorry, I'll do better,” is sometimes the only thing you can say.
Ev: Sounds like a plan.
Corey: So, if people want to learn more about what you have to say, what you have to show them, and in some cases what you have to sell them, where can they find you?
Ev: They could go to gravitational.com. They can click on Teleport to learn how we do this magical access to everything, or they can click on Gravity to learn about packaging applications as a single file. Or they could go on GitHub and just dive straight into the code. It's github.com/gravitational.
Corey: Excellent. And thank you so much for taking the time to speak with me. I really appreciate it.
Ev: Thank you for having me, Corey.
Corey: Ev Kontsevoy is the CEO of Gravitational. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on Apple Podcasts, whereas if you've hated this podcast, please leave a five-star review on Apple Podcasts along with a comment explaining why everything Gravitational is building is overly complicated and unnecessary because all you really need is Telnet.
Announcer: This has been this week’s episode of Screaming in the Cloud. You can also find more Corey at ScreamingintheCloud.com, or wherever fine snark is sold.
This has been a HumblePod production. Stay humble.