Five Slot Machines at Once: Chris Weichel on the Future of Software Development

Episode Summary

On this episode of Screaming in the Cloud, Corey welcomes back Chris Weichel, CTO of Ona (formerly Gitpod). Chris explains the rebrand and why Ona is building for a future where coding agents, not just humans, write software.

Episode Show Notes & Transcript

They discuss what changes when agents spin up environments, why multi-agent workflows feel addictive, and how Ona is solving the scaling and safety challenges behind it.
If you’re curious about the next wave of software engineering and how AI will reshape developer tools, this episode is for you.

About Chris: Chris Weichel is the Chief Technology Officer at Ona (formerly Gitpod), where he leads the engineering team behind the company’s cloud-native development platform. With more than two decades of experience spanning software engineering and human–computer interaction, Chris brings a rare combination of technical depth and user-centered perspective to the systems he helps design and build.

He is passionate about creating technology that empowers people and tackling complex engineering challenges. His expertise in cloud-native architecture, programming, and digital fabrication has earned him multiple publications, patents, and industry awards. Chris is continually exploring new opportunities to apply his broad skill set and enthusiasm for building transformative technology in both commercial and research settings.

Show Highlights
(00:00) Introduction to Modern Software Interfaces
(00:55) Welcome to Screaming in the Cloud
(01:02) Introducing Chris Weichel and Ona
(02:23) The Evolution from Gitpod to Ona
(03:26) Challenges and Insights on Company Renaming
(05:16) The Changing Landscape of Software Engineering
(05:54) The Role of AI in Code Generation
(12:04) The Importance of Development Environments
(15:44) The Future of Software Development with Ona
(21:31) Practical Applications and Challenges of AI Agents
(30:01) The Economics of AI in Software Development
(38:11) The Future Vision for Ona
(39:41) Conclusion and Contact Information

Links:
Christian Weichel LinkedIn: https://www.linkedin.com/in/christian-weichel-740b4224/?originalSubdomain=de

Sponsor: Ona: https://ona.com/

Transcript

Chris: Fundamentally, the interfaces that we as software engineers use today aren't built for this. They're built to do one thing very deeply at a time: writing code. Now our interfaces need to change, and the environments in which we do that work need to change. My laptop is built to do one thing at a time.

I mean, anyone who's tried to run different Python versions on one machine knows what I'm talking about, and so are my IDEs. So my environments and my interfaces need to change to get the productivity out of these agents. And that's very fundamentally what Ona does: it gives you as many of these environments as you need, perfectly set up for the task at hand, and it gives you an interface that helps you find flow and joy in doing more things in parallel.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn, and my guest today has been on the show before. Chris Weichel is the CTO of Ona, which we have not spoken about on the show before, because once upon a time, until recently, they were known as Gitpod. Chris, thank you for returning.

Chris: Thank you for having me again.

Corey: This episode is brought to you by Ona, formerly Gitpod. Are you tired of coding agents pushing your S3 bucket to the wrong AWS account, or having them commit your entire Downloads folder because they thought your tax documents were part of the code base? All while using privacy policies that basically say your customer data deserves a nice vacation on random cloud servers.

Introducing Ona, where your coding agents run in isolated sandboxes, securely within your own VPC. Ona lets you run agents at scale with workflows: bulk-open pull requests that finally tackle that Java migration you started in 2019, or automatically fix CVEs when your scans find them. Ona also supports private AI models through Amazon Bedrock that your corporate overlords might even approve of. Head to Ona, that's ona.com, and get $200 of free credits using the code "screaming in the cloud," because your laptop wasn't designed to babysit overcaffeinated rogue coding agents with root access.

As you might have picked up from that intro, I have a leading question I would like to begin with. Gitpod was an interesting name for the company, because it was, oh, it's like GitHub? No, actually. And you sort of got to a point of understanding what it was. And now, after all that work we did to teach people what that word was and that it was pronounced with a hard G instead of a soft one, well, now we're changing it again to something else.

Chris: Why? There are a number of reasons. One is that Gitpod as a name really doesn't make sense anymore. We famously left Kubernetes, so the "pod" part is out, and Git isn't at the center of it all. It's a very important piece of technology, for sure, but so much of what you can do with Gitpod, now Ona, isn't centered around Git anymore.

So the name really has become a bit of a misnomer. And to be frank, given the number of times we've been confused for GitHub or GitLab, or spelled with a capital P for no apparent reason, I'm just very glad we can leave all that behind us. Hence the rename.

Corey: I am always a little bit leery of company renames, and that is in many ways unfair to you.

The one that sticks out in my mind was Mesosphere after they renamed to D2iQ. Even now, I had to look that up to make sure I was getting those letters right, and it turns out the correct name is now "acquired by Nutanix." So, oh, okay. Brand equity is super freaking hard. It takes a long time to teach people things, and "okay, we're going to be changing our name, our logos, et cetera" is hard.

I saw that Facebook was able to do that with Meta, and I would've bet anything that that wouldn't have worked, because it's been how many years since Google did that with Alphabet? And to this day, every time a newspaper article mentions it, it's "Alphabet (Google's parent company)," close paren.

It's one of those things where sometimes it sticks, but usually it feels like it's going to have that parenthetical forever. What's your sense on this one?

Chris: My take on this is: as a company, if you want to rename, and you're small enough, it doesn't matter because no one knows you. If you're big enough, everyone's going to hear about it.

So, you know, it's fine if you do. And then there's the trough in the middle where it's a bit hit or miss. I think for us, the main reason we did it is because we're really at the precipice of a pretty fundamental change in how software is written. With that, Ona isn't just a rename; it's really a refounding of what it is that we do.

It isn't a pivot, you know, it's not like we're doing something else, but it marks a new chapter on this trajectory that we've been on since the inception of the company. And with that, we also want to be known for leading where software engineering as a whole is going, and so the new name signifies that ambition.

Corey: Normally I would discount this, to be direct, as "oh, well, everything is changing about software engineering. Is it, though?" But I've been beating code into submission for longer than is appropriate given how terrible my code still is, and I think it is difficult to make the straight-faced assertion that nothing is different about writing code in 2025 compared to 2020.

The world has foundationally changed. You can debate where AI is making inroads versus not, but one area in which it has excelled has been in code generation.

Chris: Absolutely. The way we think about this is that we've gone through three waves of how we write code. The very first one is where we've essentially artisanally handcrafted every single line, save for code generation and autocomplete.

And this is how we've been writing code for really the longest time, certainly for as long as I can remember. Then, a few years ago, when AI first entered the scene, we started to have coding assistants like Copilot and the like that gave us better autocomplete. They reduced the time it took to write code, but they didn't fundamentally change the pattern.

You know, it was still a human sitting there typing stuff and hitting tab-tab every once in a while to get better code, or, I don't know, not necessarily better, but more, at least. Then, not too long ago, agents essentially entered the scene, and they very fundamentally changed the pattern, because now it's no longer humans writing code, but machines writing code.

And to what extent, and how much, and how well, that's all debatable; I'm happy to talk about that. But certainly the truth of the matter is that we now have these things that can write and modify code for us at a level of abstraction that's arguably a level up from the programming languages we've been using thus far.

And so that's a very fundamental change in how software is being written. Not unlike the changes from assembly code to higher-level languages like C, to object-oriented languages, to now. I mean, it's almost a beaten saying at this point that English is the new programming language.

I don't believe that to be true, not least because we're bad at that one too; under-specification is a key problem, and that still holds. So I'm not saying this is a new language. It is a new abstraction, though, and it's a very fundamentally different way of communicating with a machine about code, of interacting with code.

Corey: I keep observing that I don't know how to live in this current world that we're in, because we spent enough money and made the computers expensive and powerful enough that they are simultaneously capable of doing what we mean instead of what we say,

and are bad at math while they do it. So I don't fully understand this world I find myself in, and I'm starting to wonder: does this mean I've finally lived too long? Maybe other people would argue that I definitely have. But I have young children, and how do I explain to them how computers work on a month-to-month basis?

It's shifting under me.

Chris: It certainly moves very, very quickly. I mean, at the time that we are recording this, Sonnet 4.5 literally just dropped.

Corey: Yeah. Within the last hour of us whacking the record button, so we have no idea whether it's good, whether it's bad, who supports it. At the moment it's just Anthropic out there alone.

I'm sure all the "me too, we support this now" announcements are compiling as we literally speak. But it is weird, because the state of the art is still moving rapidly. It's not the meteoric growth curve it's been over the last couple of years, things have slowed down now, but it is definitely still showing the ability to surprise us.

Chris: Oh, absolutely. And you know, in the half hour before this show, I literally had Ona add Sonnet 4.5 support to itself.

Corey: See, okay. The first product I've heard of supporting it is you. Good work; your timing is excellent. Now, I have to ask, in a bit of a confession of my own: we are in the process of renaming our company from The Duckbill Group to simply Duckbill, as we expand into a software offering as well as pure services.

"The Group" doesn't really carry the same weight, and internally it is hard for us to correct ourselves after eight years of inertia of saying it the way that we have. So my two questions for you are: one, do you still find yourself referring to the company as Gitpod internally? And two, if I were to grep for the term "gitpod" in your code base and count the hits, how many would I find?

Chris: Okay. Do I still say Gitpod every once in a while? I do, but surprisingly rarely; I expected it to be a lot more. The usual save is "Gitpod, now Ona," and then you carry on. In terms of the word count, if we looked at the ratio of Gitpod to Ona in our code base, it's orders of magnitude more Gitpod than Ona.

Corey: Yeah. Oh, we had a working name for our product for the first three weeks we were building it, and that legacy name is still in our code base, because those "eh, fix that later" naming decisions become load-bearing. We don't think anything is going to break if we just do a global find-and-replace,

but it might. So that's a question of, okay, how much extra work do we want to create for ourselves today? Mm. We're going to keep kicking that can down the road. Surely this problem won't get worse with time.

Chris: I mean, we have customers who obviously rely on our API, and we're not going to break them. Our API contracts are holy to us.

We won't break them. So clearly we will have Gitpod in our code base for all eternity. The ratio is going to shift, though.

Corey: Yeah. And it has to. Has the product itself changed significantly? That's the other question, because I find that shifting names, if it's not exactly an atomic operation, is pretty close.

I mean, you only have one logo at a time in the upper-left-hand corner, but the product itself has to simultaneously serve the use case it has been sold to solve for before, while also pivoting to embrace new things. I will say, I give you folks credit, more so than I do most companies. Everyone now has slapped AI above the fold on their landing page: "we are an AI company and have been for years."

Funny, because I look back at your conference talks from three years ago and see no mention of it, but we'll let that slide. In your case, you've taken it deeper. You have renamed the company; you have made a public declaration that this is what you are about. And whether it is the right path or the wrong path, no one can deny that you're committing to it.

Chris: Yeah. The thing that we've been building for a very long time now is essentially the automation of development environments: the ability to create a development environment with the click of a button. That's incredibly useful for humans, because it removes a lot of work and toil from setting up development environments and maintaining them, five hours per week of it, studies and data show.

And that's very helpful for humans, and it's existential for agents. If you want an agent to scale beyond your machine, and you want to run five of them in parallel, or even just avoid that agent accidentally sending an email with some unkind words to your boss, or accessing production, because all of this happens to be available on the same laptop you run your terminal agent in.

If you want to avoid all that, you need to put them in isolated, readily set up development environments.

Corey: You are not wrong. I have problems with Cursor constantly, because I have set up my zsh prompt to reflect what I need as a human being editing the thing. It uses some Powerline nonsense and some other stuff as well, because I've had, you know, an afternoon to kill. And now, in most terminal environments, until they get set up,

it has glyphs that don't render properly and fonts that aren't present, and as a result everything looks janky and broken in most of these tools, because I have gotten my shell working for me as a human. Computers have not yet caught up to that.

Chris: Absolutely. There's a reason Claude Code calls it "dangerously skip permissions" if you want to give it a blanket check to do anything and everything.

Corey: Yeah, I can't run that on my laptop; I have client data there. It is a hard stop. So I give it its own dedicated EC2 instance, and for one side project, its own unbounded AWS account via an instance role. So there's dangerous, and then there's whatever the hell this is, with basically an unbounded blank check to go ahead and spin up NAT gateways to its heart's content.

There's no way this will wind up being a hilariously expensive joke at my expense.

Chris: Yeah, that's a brave choice. Dare I say, the slightly more sensible choice is to have this in a controlled, guarded development environment setup. And that's fundamentally what Ona is, what we built at Gitpod for a long time and have now extended for agents.

The heart of the product, the environments, remains. We now speak of them as Ona environments, and within these environments we run an agent, Ona Agent, that does its work and is subject to the same guardrails that previously existed for these environments, plus agent-specific guardrails. So you can decide what it has access to. If you want to, you can give it unbounded access to your AWS account;

I would not recommend that. By default, it obviously comes locked down, with sane defaults. But the key point here is that we renamed the company because it signified the next step on the trajectory we've been on all along. It's not a pivot. It's not a random "we gotta do something with AI" offshoot.

It followed so naturally that these development environments we built for humans also work very well for machines. In fact, when we architected the platform, we thought of machine use cases, not necessarily agents at the time, but it was clear there would be more machine use cases that become relevant and that also need development environments. And agents fit the bill so perfectly now.

Corey: There's a lot to be said for the ability of systems to interface with each other well. I would argue that MCP is potentially a revolution in its infancy, just because it goes beyond APIs. These are things that self-describe to each other, in a parsable way, what the tool is and what this endpoint lets you do. That has legs that extend far beyond a particular iteration of these things.

It's effectively, from my old-person perspective, the sense of: what if every time you connected to an endpoint, it gave you the equivalent of a man page that told you what it did, how it worked, what arguments it could take, and "for best results, do the following"? That is non-trivial. I'm sort of annoyed we didn't come up with that as a standard long before now.
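For a rough sense of what that self-description looks like, here is a sketch of a single tool entry of the kind an MCP server advertises through its tools/list response; the field names follow the MCP specification, while the example tool itself is made up:

```python
# A hypothetical tool entry of the sort an MCP server returns from tools/list.
# The "man page" quality Corey describes lives in the description and input schema.
example_tool = {
    "name": "query_billing_data",          # made-up tool name for illustration
    "description": "Look up AWS spend for an account over a date range.",
    "inputSchema": {                        # JSON Schema describing the arguments
        "type": "object",
        "properties": {
            "account_id": {"type": "string", "description": "12-digit AWS account ID"},
            "start_date": {"type": "string", "format": "date"},
            "end_date": {"type": "string", "format": "date"},
        },
        "required": ["account_id", "start_date", "end_date"],
    },
}
```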

Chris: I mean, at least you didn't spend decades trying to push the Semantic Web. I'm pretty sure there are some people who are even more annoyed at the success of something as simple as MCP than you are.

Corey: I think part of the problem, and the reason we're seeing it work here, is that you cannot universally change the way humans interact with something.

Some people will still be calling you Gitpod 20 years from now in some corners of the world. But when you have a shift that's powered by LLMs, suddenly there is that sort of global context and Overton window that moves extraordinarily rapidly. In fact, that's one of the challenges I suspect you'll have: it's going to take some time for LLMs themselves to get word of the name change.

I've found that whenever I'm building something new and just vibe coding something shitposty, it'll often get parked on Vercel for a front end. Now, I don't have strong opinions about front end; I just know I'm bad at it globally. But that's the one the LLM picks, and who am I to correct the robot?

Chris: Absolutely. The name Gitpod is essentially before the cutoff of most models right now, but that too will change. Obviously there are new models; I mean, Sonnet 4.5 just dropped. So that too will change, and the models will adapt and learn the new thing. That said, I actually like the idea that we're so well known that even 20 years from now someone is going to refer to us as Gitpod.

Corey: The question is whether that's because people are actively using it then, or because someone is just so ornery and obstinate that they refuse to accept that anything after 2023 exists. I'm starting to see the joys of being a curmudgeon. So these days, since people have to take a step back and ask the question a little bit differently, and since I imagine the nuances of the answer are there: what does Ona do?

Chris: Very fundamentally, the thinking goes: we now have these machines that can do work for us, that we can give a task and, with varying degrees of autonomy, they can do that work. A mental model that we've found very helpful is time between disengagements. It comes from self-driving cars, and it describes the time between the car disengaging and the human having to take over.

It's a measure of autonomy: seconds is essentially lane assist, and minutes to hours is the back seat of a Waymo. With agents, we're seeing the same thing. We're coming from this tab-tab autocomplete lane assist, and we're moving to minutes or hours of sensible autonomous work; Claude Code, Codex, and Ona Agent all demonstrate that.

Now, the question is how we turn this increasing autonomy into productivity, because that's obviously what we're asked for. Fundamentally, software creation is an economic endeavor, so it needs to be economical. How can we turn this into more productivity? And the only way we can really do that is by doing more things in parallel.

If I now need to sit there and watch the agent do its thing, I didn't gain much, because it's my time as a human that's expensive. It's human attention that's expensive. So how do we scale human attention?

Fundamentally, the interfaces that we as software engineers use today aren't built for this. They're built to do one thing very deeply at a time writing code. Right. Now our interfaces need to change, and the environments in which we do that work needs to change. My laptop is built to do one thing at a time.

I mean, anyone who's tried to run different Python versions on one machine knows what I'm talking about, and so are my IDEs. So my environments and my interfaces need to change to get the productivity out of these agents. And that's very fundamentally what Ona does: it gives you as many of these environments as you need, perfectly set up for the task at hand, and it gives you an interface that helps you find flow and joy in doing more things in parallel.

Corey: This episode is brought to you by Ona, formerly Gitpod. Are you tired of coding agents pushing your S3 bucket to the wrong AWS account, or having them commit your entire Downloads folder because they thought your tax documents were part of the code base? All while using privacy policies that basically say your customer data deserves a nice vacation on random cloud servers.

Introducing Ona, where your coding agents run in isolated sandboxes, securely within your own VPC. Ona lets you run agents at scale with workflows: bulk-open pull requests that finally tackle that Java migration you started in 2019, or automatically fix CVEs when your scans find them. Ona also supports private AI models through Amazon Bedrock that your corporate overlords might even approve of. Head to Ona, that's ona.com, and get $200 of free credits using the code "screaming in the cloud," because your laptop wasn't designed to babysit overcaffeinated rogue coding agents with root access.

At some level, I'm starting to feel that my ADHD inattentiveness, pivoting from thing to thing to thing, has become something of an asset

when you have agent-driven stuff. I would like it a little bit more if there were a healthy medium somewhere between "you have full access to everything, go ahead and never ask for feedback" and "oh, am I allowed to read this file that I just wrote?" There's a sliding scale of comfort with it, and of the things for which I wish to be interrupted and need to give human input on.

And conversely, there are times I see it doing things where I have to see how fast I can hit Control-C, because no, no, no, no, no, I happen to know that sort of thing very well, and down that path lies madness.

Chris: Absolutely. I think there are two key elements that you brought up here. One is: globally, what is the thing allowed to do, and what isn't it allowed to do?

Right now, as an industry, we're working with these reasonably simplistic denylists, where you tell an agent, hey, you're not allowed to run aws, because I don't want you to drop my production RDS instance. But the agent is going to get very, very clever, and it doesn't care about compliance at all. Agents don't care about getting fired, so it's going to try to make it happen anyway.

Corey: I've worked with people like that. Please continue.

Chris: Yeah, it's not only agents. So just denying "hey, you can't run the aws command" isn't going to do much good. It needs to go deeper than that, and that's something that we're exploring right now.
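As a toy illustration of why that kind of simple, name-based denylist is easy for an agent to route around, here is a minimal sketch; it's hypothetical, not Ona's actual guardrail mechanism:

```python
import shlex

# Hypothetical name-based denylist of the kind described above.
DENYLIST = {"aws", "terraform", "kubectl"}

def is_allowed(command: str) -> bool:
    """Allow a shell command unless its first token is a denylisted binary."""
    argv = shlex.split(command)
    return bool(argv) and argv[0] not in DENYLIST

# Blocked as intended:
print(is_allowed("aws rds delete-db-instance --db-instance-identifier prod"))  # False

# Same blast radius, different entry points -- the check never fires:
print(is_allowed("python -c 'import boto3; boto3.client(\"rds\")'"))  # True
print(is_allowed("bash ./definitely-not-aws.sh"))                     # True
```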

Chris: How can we bake that into the environment? How can we make these guardrails more sophisticated? That's one. The other is: if you're doing five things in parallel, how do you steer this agent? How do you get good feedback? How do you give good feedback? And here I think we've hit a very nice form factor that lets you guide the agent as it does its work.

It's going to pick up your messages when it thinks it's the right time, and we've worked hard on making a sort of emergency stop button: you can hit Escape and it's going to stop dead in its tracks, because it's really, really important for you to retain control over what the thing is doing.

Corey: There's also this idea that it is, in some ways, forcing rigor. I'm seeing people actually care about making things reproducible: huh, I really will need a rollback strategy here, instead of hand-waving my way around it. Because sometimes it'll do disastrous things, and we've seen some public examples of it doing those sorts of things, where it becomes really clear that people have paid insufficient attention to a lot of this.

Like, "hey, I just deleted my entire database, what do I do about that?" Well, ideally you make different, slash better, choices.

Chris: Absolutely. One interesting effect of this is that I now raise PRs that I need to review myself. I have the agent write code and create a draft PR, and then I review that draft PR as though it were written by someone else.
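One way to sketch that "open a draft PR for the agent's work, then review it yourself" step is with the GitHub CLI. The branch name, title, and body below are placeholders, and this is an illustration rather than how Chris or Ona actually drives it:

```python
import subprocess

def open_draft_pr(branch: str, title: str) -> None:
    """Push the agent's branch and open a draft PR for human review."""
    subprocess.run(["git", "push", "-u", "origin", branch], check=True)
    subprocess.run(
        ["gh", "pr", "create", "--draft", "--head", branch,
         "--title", title,
         "--body", "Agent-generated change; needs human review before marking ready."],
        check=True,
    )

# Hypothetical branch and title:
open_draft_pr("agent/fix-flaky-test", "Fix flaky integration test (agent-generated)")
```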

Chris: So code that has my name on it, I now need to make sure it's worthy of having my name on it; it's still my reputation on the line here. And so there's an interesting change in dynamic. One other thing: it's actually incredibly addictive. For a long time I was really worried about how we were going to find joy and flow in this multitasking, ADHD-feeding,

what sounds like a nightmare, to be honest. Had someone told me two years ago that, hey, the thing you're really going to do is work on five things at the same time, I would've told that person to [insert expletive here]. So for me, this has been a really interesting question: how can we find flow and joy in this?

And it turns out that, one, it's an interface question, but also, as software engineers, we arguably have a somewhat addictive, addict, if that's the word, mindset to begin with. Because who among us hasn't thought: ah, just one more change and then my tests are going to pass; just one more change and then it's going to work?

How many nights have we spent doing that? So arguably there's some addictive pattern here already. We're essentially playing a slot machine: just one more change and it's going to work. What agents have done is make it incredibly cheap to play five slot machines at the same time.

Corey: Yeah, that's a good way of putting it.

Chris: It's so addictive that I've contemplated adding parental controls for myself.

Corey: I've seen git worktrees being used explicitly for this, where you can check out different branches into different directories and let these things run in parallel, either on different issues or as an "all right, we're going to have a bake-off and see which one of you comes up with the best answer."

What I'm waiting for is the agent that supervises those things and makes those evaluations. I want the project manager at this point: something that can say, oh, this doesn't pass muster. Or, okay, here's a whole bunch of tasks, or, if I'm trying to one-shot it, we're going to break it down and pass it out to each of you in sequence.
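For the curious, here is a minimal sketch of the worktree-based bake-off Corey describes: one checkout per branch, each in its own directory where an agent can run. The branch names and the my-agent command are placeholders, not any particular tool's CLI:

```python
import subprocess
from pathlib import Path

REPO = Path(".")  # run from inside the repository
BRANCHES = ["agent/approach-a", "agent/approach-b", "agent/approach-c"]  # hypothetical

procs = []
for branch in BRANCHES:
    worktree = REPO / ".worktrees" / branch.replace("/", "-")
    # One isolated checkout per branch; `git worktree add -b` creates the branch too.
    subprocess.run(["git", "worktree", "add", "-b", branch, str(worktree)], check=True)
    # Kick off whatever agent you use in that directory (placeholder command).
    procs.append(subprocess.Popen(["my-agent", "--task", "fix the flaky test"], cwd=worktree))

for p in procs:
    p.wait()  # then review each branch and keep the best attempt
```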

Chris: I think this is a very interesting space, this sort of multi-agent interaction. I don't think anyone's cracked that yet. There are very interesting ideas out there; this is certainly something that will come. Also, what we see right now as a key skill is being able to decompose and break down a problem into a chunk that works for an agent.

Agents are tools, and so you need to learn how to use them: how to prompt them, how to use them well, what size of problem they can attack. Doing this decomposition is a very valuable skill right now, and obviously one we'd all want to be able to outsource to yet another agent.

Corey: That is a constant problem we're all dealing with right now.

It's a universal problem. We are pushing the frontier bounds here and seeing what's possible. I think if you only played with this stuff a few months ago and thought, eh, it was okay, it's time to reevaluate it. This is one of those rapidly advancing areas, and I generally want to call out hype when I see it.

Yes, we are in a hype bubble here; I think that is not particularly controversial. But unlike the insane blockchain hype bubble, there's clearly something of value here. This is not a solution in search of a problem in quite the same way. This is something that is transforming the way some things are being done.

Now, maybe we're a little too eager to map that onto everything else, but there is some kernel here of: this has staying power.

Chris: Absolutely. And are agents going to replace humans? I personally don't think so. They're going to augment humans. They're going to make people more effective, but they're not going to replace them.

Also, Jevons paradox is very real: the moment we make something cheaper, we do more of it. So we're now making software production cheaper, so we're going to do more of it. Simply put, we're going to write more software.

Corey: We're going to write more software. Historically, that has been the antipattern. Think about how it used to be: oh, we're going to solve our own custom problem in house.

We're going to write it ourselves. I've worked in too many environments where there's such strong not-invented-here syndrome that everyone builds custom stuff, and it becomes a maintenance nightmare. It got to a point at a lot of shops, my own included, where we had historically been down this path of "we're going to build our own custom tooling for my newsletter."

It is a rat's-nest nightmare of different things bolted together to build a production system. And when someone asked me why I didn't use Curated.co, my question was: wait, why didn't I use what? Because I didn't know it existed, or I would have, and it would've saved me so much effort. But we're seeing that invert now, where there are a bunch of little things I need to do throughout the course of my workday.

I am not going to hire a developer to do these things, and I'm not going to sit around and build all of these tools or pay for them. But hey, every week I need to find my top ten most-engaged posts on Bluesky so I can put them in the hidden newsletter Easter egg that's in every episode.

I could write a dumb script that does that; instead, I tell an agent to do it, I go get a cup of coffee, and it's done by the time I get back. Suddenly, for the first time, "writing more software" is non-sarcastically the thing that'll fix it, because usually that's a sarcastic thing to say: oh, I'm going to write more software.

Great, that'll fix it. But this will fix it, because it's the glue between things.
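As a flavor of the kind of throwaway glue script in question, here is a minimal sketch that ranks an account's recent Bluesky posts by engagement. It assumes the public Bluesky AppView endpoint, its app.bsky.feed.getAuthorFeed method, and the like/repost/reply count fields on each post; the handle is a placeholder:

```python
import requests

HANDLE = "example.bsky.social"  # hypothetical handle; swap in your own
URL = "https://public.api.bsky.app/xrpc/app.bsky.feed.getAuthorFeed"

resp = requests.get(URL, params={"actor": HANDLE, "limit": 100}, timeout=30)
resp.raise_for_status()
posts = [item["post"] for item in resp.json().get("feed", [])]

def engagement(post: dict) -> int:
    # Crude score: likes + reposts + replies.
    return post.get("likeCount", 0) + post.get("repostCount", 0) + post.get("replyCount", 0)

# Print the top ten posts by engagement, truncated for readability.
for post in sorted(posts, key=engagement, reverse=True)[:10]:
    text = post.get("record", {}).get("text", "").replace("\n", " ")
    print(f"{engagement(post):>5}  {text[:80]}")
```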

Chris: Absolutely. Also, we no longer need to generalize excessively, because the creation of software has become so cheap. I can solve this one specific problem, and I don't need to solve it for these other three instances, because I can just ask an agent to solve it for those other three specific instances specifically.

And so software becomes both more plentiful and simpler that way.

Corey: It also changes the way I think we view the cost of doing software. The pricing models for all these agent things are very strange. I've seen the leaderboards for people who are using the $200-a-month Claude subscription and how much value they're getting out of it.

If they were paying per inference, it would be tens of thousands of dollars a month in some cases. It makes me worry: is this as economically sustainable as I want it to be? Because I'm not going back to writing JavaScript by hand; I'm just not. So I'm very interested in getting local inference to a point where it can at least do the fancy tab-complete-style thing, even if it's not as good as the frontier models.

There are many things I don't need it to reach out to the internet for. I don't need the very latest and greatest Claude Sonnet 4.5 to go ahead and indent my YAML properly. I feel like that's the sort of thing a model from three years ago can do.

Chris: And there's that token short squeeze article that was all the hype on the orange website not too long ago.

The key premise of it is that tokens get ever cheaper. If you just look at GPT-4-level intelligence, so roughly LMSYS Elo 1300, a year ago as compared to now, the price dropped by a factor of 140. At the same time, we're using about 10,000 times more tokens, so our token usage grew by roughly two orders of magnitude more than the price dropped.
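Spelling out the arithmetic behind those figures, using the numbers as quoted (rough estimates from the conversation, not verified data):

```python
# Price per token at a fixed capability level fell ~140x over the year,
# while token consumption grew ~10,000x (per the figures quoted above).
price_drop_factor = 140
usage_growth_factor = 10_000

# Net change in total token spend for that capability level:
spend_multiplier = usage_growth_factor / price_drop_factor
print(f"Total spend changed by roughly {spend_multiplier:.0f}x")  # ~71x higher, not lower
```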

Chris: Ultimately, we'll need to see two things. One is, as you point out, more precise models that make that cost-intelligence trade-off work, because this one-size-fits-all approach isn't going to scale. The other is that we'll need to recognize that AI doesn't make the creation of software free;

it just changes the economics. Scaling a model is much easier than scaling humans, and this is why we can produce more software, but that doesn't make it free. And this time right now, where we live in VC-money-subsidized token land, will need to come to an end eventually. So I think we're going to see a proliferation of different models that make that trade-off better, and we'll need to see,

and we are already seeing, pricing models that are much more aligned with the value you're getting rather than a flat fee.

Corey: Yes and no, because we're not seeing outcome-based pricing on any of these things. It's not like, okay, I'll only charge you if the code works. That would be an interesting gamble,

But I don't know anyone who'd want to take the other side.

Chris: That's a really tough one. Finding a way to make that one really work, I think, is extremely interesting, because it aligns incentives so well. The question is: what is the outcome? Code working? An agent can show you that the code works.

Does it do the right thing? I don't know. Does it solve your business problem? No idea. So what is the outcome you're optimizing for? Which is why I reckon most don't price this way yet: it's an incredibly tough nut to crack.

Corey: Yeah. I think this is where some of the most interesting stuff is yet to come.

So I've been doing a lot of weird work lately on random shitposting things, and it's great watching it just get done and wait for me. In some cases it'll even ping me when it's ready, by hooking it into the right notification service. But I've been doing it hanging out on an EC2 instance, and it's doing that in a tmux session.

And that's great, but it's a colossal pain in the butt to do that from Blink. I can do it, but it's not pleasant and it makes me sad. Do you see a future where this gets easier to do on mobile devices, as we're out walking around, not staring at the big screen but instead looking at the smaller, happier screen?

Chris: Actually, this is already a reality. With Ona spinning up development environments that aren't bound to the machine you're using Ona from, you can absolutely use it from your phone, and in fact we've optimized the web experience for mobile as well. The way I talk about it now is that I'm three times more productive on my phone than I was six months ago on my laptop.

Let me make this very concrete. At this point I have a four-month-old son, and many evenings I'll sit with him on one arm as he's falling asleep, but I don't dare put him down.

Corey: Oh, you can't do that; that restarts the cycle.

Chris: Exactly, exactly. Then I have to shush him and try to put him to sleep again.

So clearly I can't use my laptop, but I can use my phone. And so many ideas for prototypes or actual changes that before would've been mere notes are now actual prototypes. I put them into Ona, and by the time I wake up the next morning, the conversations I've had have turned into actionable code. And that's a very fundamentally different way of working.

So being able to do this from mobile is already a reality, and you don't have to use tmux or screen to do it.

Corey: Yeah, with the weird control characters and custom keyboards and the rest. Okay, you've convinced me to try it out. A question I have for you, and something I've encountered a fair bit here, is the multimodal approach to these things.

I can tell an agent to build a thing, and it can go vibe code its heart out. Great. To the point where I'll even find myself stuck in that paradigm for things I really shouldn't be, like, oh, go ahead and change this one string here because I want to change the capitalization of something. I should just be able to pop into vi or whatnot and edit that.

It feels like I have to pick a paradigm and stick with it, maybe past the point where it makes logical sense. How do you see that?

Chris: Yeah. A lot of agents really are built for a future that isn't here yet, and maybe never will be, where the agent goes a hundred percent of the way. And I guess the set of problems for which that is true is increasing as agents get more capable.

But there are some things that LLMs simply aren't good at, or where source code just is the better way of specifying it. If I want that color to be green instead of red, it's much more likely that changing the hex value myself is faster than trying to describe that to an agent. Ona is very much built around that idea, where you can engage with code at the right level.

You can choose to not engage with it directly at all and simply be in the conversation. Or you can fold open a side panel, and there's VS Code right there on the web, on the exact same environment. And if that's not enough, you can open a classic IDE, Emacs if you have twelve fingers, or vi, or VS Code if you want,

and interact with that code more deeply in the same environment. I very much believe that agents get you very far, and they'll go further and further, but there needs to be a way to engage with the code at that level.

Corey: Yeah. Right now it just feels like that's the expensive context switch, almost as much as switching between entire projects, which I've gotten used to. But the "ooh, different tool now" feeling, it feels like even the key bindings are different, and I don't like it.

Chris: Absolutely. You also want that conversation to be there. What you don't want is to open an editor and all of a sudden all your conversation, all that context, no pun intended, is gone. You really want continuity between these different levels of engagement.

Corey: Yeah.

And then there's the other problem, too, of: all right, when do I want to get rid of that context, start fresh on this code base, and have it take a different approach? There's no right answer yet.

Chris: Absolutely. I think this is really where it comes back to learning how to use this tool and the tool making it easy for you to work with it.

So, for example, we essentially copied Claude Code's /clear command, so in Ona you can also just type /clear and it's going to reset the conversation. It's features like that, but also behaviors like that, that I think will change over time as agents become more capable and as we all learn what the right ergonomics for these tools are.

Corey: It is still an evolving space. So my closing question for you is: in that future, as we see this evolving, where does Ona stand?

Chris: Ona, very fundamentally, is mission control: the platform for humans and agents writing software. That's where we stand. And 99% of software isn't written on weekends; it's written in enterprises, it's written in large organizations, and that's who we serve.

We want to be able to bring these technologies and this way of working to everyone. If you're a weekend warrior, please go try Ona: go to ona.com, sign up, try it, use it; it works well for you. If you work at an enterprise, use Ona. And this is the thing I find really exciting, that we can say this.

What we're really looking to do is bring environments and agents to folks in regulated industries and large organizations who right now really struggle to get these tools in house. As an engineer, of course I want the latest tools. Of course I do. My CISO might not be so happy with me putting my company's source code, or this company's source code, into some arbitrary cloud or untrackable LLM.

Where Ona stands is bringing these tools and capabilities to large organizations and individuals alike.

Corey: I like that. I am curious to see how this story continues to evolve. I really want to thank you for taking the time to speak with me. If people want to learn more, where's the best place for them to find you?

Chris: The best place is to head over to ona.com and check out the product right there. And then, of course, Twitter and LinkedIn, the usual places to reach out. And thank you so much for having me.

Corey: And thank you, Chris Weichel, co-founder and CTO at Ona. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud.

If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice. Whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry, insulting comment that you don't even have to write. We'll let the LLM do it for you, and don't worry.

It'll probably turn out fine.
