Episode Show Notes & Transcript
Links
- Wiz Cloud Security Health Scan: https://www.wiz.io/crying-out-cloud
- Moderne (Jonathan Schneider's company): https://moderne.ai
- LinkedIn (Jonathan Schneider): https://www.linkedin.com/in/jonathanschneider/
Transcript
Jonathan: These were all the sort of basic primitives. And then, you know, at some point we said, well, recipes could also emit structured data in the form of tables, just rows and columns of data. And we would allow folks to run those over thousands or tens of thousands of these lossless semantic tree artifacts and extract data out.
This wound up being fertile ground for when LLMs eventually arrived, because we had thousands of these recipes emitting data in various forms. And if you could just expose, as tools, all of those thousands of recipes to a model and say, okay, I have a question for you about this business unit.
The model could select the right recipe, deterministically run it on potentially hundreds of millions of lines of code, get the data table back, reason about it, combine it with something else. And that's, I think, the foundation for large language models to help with large-scale transformation and impact analysis.
Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn, and my guest today has been invited to the show because I've been experiencing, this may shock you, some skepticism around a number of things in the industry, but we'll get into that. Jonathan Schneider is the CEO of Moderne, and before that, you've done a lot of things.
Jonathan, first, thank you for joining me.
Jonathan: Yeah. Thanks for having me here, Corey. Such a pleasure.
Corey: Wiz transforms cloud security and is on a mission to help every organization rapidly identify and remove critical risks in their cloud environments. Purpose-built for the cloud, Wiz delivers full-stack visibility, accurate risk prioritization, and enhanced business agility.
In other words, nobody beats the Wiz. Leading organizations, from the Fortune 100 to high-growth companies, all trust Wiz to innovate at the pace of cloud while staying secure. Screaming in the Cloud podcast listeners, that would be you, can also get a free cloud security health scan by going to wiz.io/scream.
That's wiz.io/scream. We always have to start with a book story, because honestly, I'm envious of those who can write a book. I've just written basically 18 volumes of Twitter jokes over the years, but never actually sat down and put anything cohesive together. You were the author of SRE with Java Microservices and the co-author of Automated Code Remediation: How to Refactor and Secure the Modern Software Supply Chain.
So you are professionally depressed, I assume.
Jonathan: I mean, like most software engineers, I hate writing documentation. So somehow that translated into, you know, writing a full-scale book instead. I honestly don't remember how that happened.
Corey: A series of escalating poor life choices is my experience of it.
I think no one wants to write a book. Everyone wants to have written a book, and then you went and did it a second time.
Jonathan: Yeah, a much smaller one, that second one, just a 35-pager, luckily. But still, it's always quite the effort.
Corey: So one thing that I wanted to bring you in to talk about is the core of what your company does, which is, please correct me if I'm wrong on this, software rewrites, software modernization.
Effectively, you were doing what Amazon Q Transform purports to do before everyone went AI-crazy.
Jonathan: Yeah, it started for me almost 10 years ago now at Netflix, on the engineering tools team, where I was responsible, in part, for making people move forward. But they had that freedom-and-responsibility culture, so I could tell them, you're not where you're supposed to be.
And they would say, great, do it for me. Otherwise, I've got other things to do. And so really that forced our team into trying to find ways to automate that change on their behalf.
Corey: I never worked at quite that scale in production. I mean, I've consulted in places like that, but that's a very different experience, because you're hyperfocused on a specific problem.
But even at the scales I've operated at, there was never an intentional decision that someone's going to start out today and write this in a language and framework that are 20 years old. This stuff has been extant for a while. It has grown roots, it has worked its way into business processes, and a bunch of things have weird dependencies, in some cases on bugs. People are not declining to modernize software stacks because they haven't heard that there's a new version out. It's because this stuff is painfully hard, because people, and the organizations that they build, are painfully hard. I'm curious, in your experience having gone through this at scale, with zeros on the end of it: what are the sticking points?
Why don't people migrate? Is it more of a technological problem or is it more of a people problem?
Jonathan: Well, first I would start, hopefully, with a sympathetic viewpoint for the developer, which is: pretend I haven't written any software yet and I'm actually starting from today. I look at all the latest available things, I make the perfectly optimal choices for every part of my tech stack today, and I write this thing completely clean. Six months from now, those are no longer the optimal choices.
Corey: Oh God, yes. The worst developer I ever met is me six weeks ago. It's awful. What was this idiot thinking? You do a git blame and it's you, and wow, we need not talk about that anymore. But yeah, past me was terrible at this.
Jonathan: That's right. And always will be. Future you will be the next past you.
So there's never an opportunity where we can say we're making the optimal choice and that optimal choice will continue to be right going forward. I think that's paired with one other fact, which is that the tools available to us have essentially industrialized software production, to the point where we can write net-new software super quickly using off-the-shelf and third-party open source components. And we're expected to, because you have to ship fast. Then what do you do when that stuff evolves at its own pace?
So nobody's really been good at it. And with the more authorship automation that we've developed for ourselves, from rule-based IDE intention actions to now, you know, AI authorship, the time that we spend maintaining what we've previously written has continued to go up.
Corey: I would agree.
I think that there has been a shift, a proliferation really, of technical stacks and software choices. And as you say, even if you make the optimal selection of every piece of the stack, which incidentally is where some people tend to founder, they spend six months trying to figure out the best approach, pick a direction and go; even a bad decision can be made to work.
But there are so many different paths that it's a near certainty that whatever you have built sits on one of a wide variety of them. You've effectively become a unicorn pretty quickly, regardless of how mainstream each individual choice might be.
Jonathan: That's right. Yep. That's just the nature of software development.
Corey: I am curious, since you did bring up the Netflix freedom-and-responsibility culture. One thing that has made me skeptical historically of Amazon Q's transform abilities, and many large companies have taken a bite at this apple, is that they train these things and build these things inside of a culture that has a very particular point of view that drives how software development is done. How many people have we met who have left large tech companies to go found a startup, tried to build the exact same culture they had at the large company, and just foundered on the rocks almost immediately?
Because the culture shapes the company and the company shapes the culture. You can't cargo-cult it and expect success. How varied do you find these modernization efforts are, based upon culture?
Jonathan: I'm glad to say that for my own story, I had a degree of indirection here. I didn't go straight from Netflix to founding something.
So, I was at Netflix. I think that freedom-and-responsibility culture meant that Netflix in particular had far less self-similarity or consistency than, say, a Google that has a very prescriptive standard for formatting and everything and the way they do things. And so I left Netflix. I went to Pivotal, VMware, working with large enterprise customers like JPMorgan, Fidelity, Home Depot, et cetera, on an unrelated problem in continuous delivery, and saw them struggling with the same kinds of migration and modernization problems everybody does. And what struck me was that even though they're very different cultures, JPMorgan much more strongly resembles Netflix than it does Google.
Netflix's lack of consistency was by design, by culture, intentional. And JPMorgan's is just by the sheer nature of the fact that they have 60,000 developers and 25 years of history and development. And so a solution that works well for a company that is dissimilar by design actually works well in the typical enterprise, which is probably closer to Netflix than it is to Google.
Corey: Yeah, a lot of it depends on constraints too. JPMorgan is obviously highly reg, sorry, JPMorgan Chase, they're particular about the naming. They're obviously highly regulated, and mistakes matter in a different context than they do when your entire business is streaming movies and also creating original content that you then cancel just when it gets good.
Jonathan: Right, right, right.
Corey: Right. Yes. So there is that question, I guess, of how this stuff evolves. But taking it a bit away from the culture side of it: how do you find that modernization differs between programming languages? I don't know if people are watching this on the video or listening to it, we take all kinds, but you're wearing a hat right now that says JVM, so I'm just going to speculate wildly that Java might be your first love, given that you did in fact write a book on it.
Jonathan: It was one of my first loves, yeah. I'm technically a Java Champion right now. Although, you know, I actually started in C++, and I hated Java for the first few years I worked in it. But I actually think, uh...
Corey: Stockholm Syndrome can work miracles.
Jonathan: It sure can. It absolutely can. I don't know that the problems are that different. There are a lot of different engineering challenges: how statically typed is something, how dynamically typed is it, how accurate can a transformation provably be made to be? But in general, I think the social engineering problems are harder than the specifics of the transformation that's being made.
And those social engineering problems are like: do I build a system that issues mass pull requests from a central team to all the product teams and expect that everybody's going to merge them, because they love it when random things show up in their PR queue? Or do product teams perceive that like unwelcome advice coming from an in-law, where they're just looking for a reason to reject it, and they would prefer instead an experience where, when they're about to undergo a large-scale transformation, they pull or initiate the change and then merge it themselves?
So those are the things that I think are highly similar regardless of the tech stack or company, because people are people kind of everywhere.
Corey: Now, you take the suite of Amazon Q Transform options, and they have a bunch of software modernization capabilities, but also getting people off of VMware due to, you know, extortion, as well as getting off of the mainframe. That last one is probably the thing I'm the most skeptical of; companies have been trying to get off of the mainframe for 40 years.
The problem is not that you can't recreate something that does the same processing. It's that there are thousands of business processes that are critically dependent on that thing, and you can't migrate them one at a time in most cases. I am highly skeptical that "just pour some AI on it" is necessarily going to move that needle in any material fashion.
Jonathan: I think there are two different kinds of activities here. One is code authorship, net-new authorship; that's what the copilots are doing, what Amazon Q is doing, et cetera. It's really assisting in that respect. And then there's code maintenance, which is: I need to get this thing from one version of a framework to another.
Maintenance can also include: I'm trying to consolidate two feature-flagging vendors down to one. But when I think of something like COBOL to a modern stack, JVM or .NET or whatever the case might be, I honestly see that less as a maintenance activity and more as an authorship activity. You're writing net-new software on a different stack, with a different set of expectations and assumptions.
And so I'm skeptical too. I don't think there's a magic wand. But to the extent that our authorship tools help us accelerate net-new development, the cost of those problems goes down over time, I think.
Corey: Yeah, that tracks and makes sense with how I tend to think about these things.
But at the same time that the cost of these things goes down and the technology improves, it still feels like these applications, decades old in some cases, are still exploding geometrically with respect to complexity.
Jonathan: That's right. Yeah.
Corey: Like how do you outrun it all?
Jonathan: Well, to me there's not just one approach here, but for my own sake, where my focus is, is really trying to reclaim developer time in some area so that we can refocus that effort elsewhere.
And one thing I hear pretty consistently is that because of that explosion in software under management right now, a developer is spending like 30 or 40 percent of their time just kind of re-siting applications and keeping the lights on. That's something we need to get rid of, or minimize as much as possible, so that the next feature they're developing isn't just a net-new feature but is actually pulling some old system into a more modern framework as well.
That's just another activity they can take on.
Corey: That does track. The scary part, too, having lived through some of these myself: we know that we need to upgrade the thing, to break off the monolith, to master the wolf, et cetera, et cetera, et cetera, and it feels like there's never time to focus on that, because you still have to ship features. But every feature you're shipping feels like it's digging the technical debt hole deeper.
Jonathan: It is. It is. Yeah. And this is what I mean: if we can take the assets that we have under management right now and keep them moving forward, then we have less drift and less complexity to deal with overall. It's an important piece of that puzzle, I think.
Corey: As you said, you've been working on this for 10 years. Gen AI really took off at the end of 2022, give or take, well, during 2023. And I'm curious to get your take on how that has evolved. I mean, yes, we all have to tell a story on some level around that. Your URL is moderne.ai, so clearly there is some marketing element to this, but you're a reasonable person on this stuff, and you go deeper than most do.
Jonathan: I think a lot of what I've developed over the last several years, or our team has, has been, you know, accidentally leading toward this moment, where we've got a set of tools that an LLM can take advantage of. So the first thing was, you know, when I'm looking at a codebase, the text of the code is insufficient.
I think even the abstract syntax tree of the code is insufficient. So things like Tree-sitter, and I won't mention all the things built on top of Tree-sitter, but if it's just an abstract syntax tree, there's often not enough information for a model to latch onto to know how to make the right transformation.
And the reason I started OpenRewrite at the very beginning, 10 years ago, was because the very first problem I was trying to solve at Netflix was moving from Blitz4j, an internal logging library, to not-Blitz4j. We were just trying to kill off something we regretted. And yet that logging library looked almost identical in syntax to SLF4J and any of the other ones.
And so just looking at log.info, well, that looks exactly like log.info from another library. I couldn't, you know, narrowly identify where Blitz4j still was, even in the environment. So I had to go one level deeper, which is: what does the compiler know about it? And it is actually a really difficult thing to just take the text of code and parse it into an abstract syntax tree.
You can use Tree-sitter for that. To go one step further and actually exercise the compiler and do all the symbol solving, well, that means you have to exercise the compiler in some way. How is that done? What are the source sets? What version does it require? What build tools does it require? This winds up being a hugely complex decision matrix to encounter an arbitrary repository and build out that LST.
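To make the identical-call-sites point concrete, here is a minimal, self-contained Java sketch. The two logger classes are stand-ins invented for this example, not the real Blitz4j or SLF4J APIs; the point is that both call sites parse to the same syntax, and only compiler-resolved type information can tell a migration tool which one to rewrite.

```java
// Stand-in stub classes, hypothetical for illustration only.
class BlitzStyleLogger {              // stand-in for the library being removed
    void info(String msg) { System.out.println("[blitz] " + msg); }
}

class Slf4jStyleLogger {              // stand-in for the library being kept
    void info(String msg) { System.out.println("[slf4j] " + msg); }
}

public class WhySyntaxIsNotEnough {
    static BlitzStyleLogger blitzLog = new BlitzStyleLogger();
    static Slf4jStyleLogger slf4jLog = new Slf4jStyleLogger();

    public static void main(String[] args) {
        // Both call sites parse to the same shape: a method invocation named
        // "info" with one String argument. Only the compiler's symbol resolution
        // knows the receiver's declared type, which is what a migration tool
        // needs in order to rewrite one call and leave the other alone.
        blitzLog.info("migrate me");
        slf4jLog.info("leave me alone");
    }
}
```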
We built out that LST, the lossless semantic tree, and then we started building these recipes, which could modify them. And those recipes stacked on other recipes, to the point where a Spring Boot migration has 3,400 steps in it. These were all the sort of basic primitives. And then, you know, at some point we said, well, recipes could also emit structured data in the form of tables, just rows and columns of data.
And we would allow folks to run those over thousands or tens of thousands of these lossless semantic tree artifacts and extract data out. This wound up being fertile ground for when LLMs eventually arrived, because we had thousands of these recipes emitting data in various forms. And if you could just expose, as tools, all of those thousands of recipes to a model and say, okay, I have a question for you about this business unit, the model could select the right recipe, deterministically run it on potentially hundreds of millions of lines of code, get the data table back, reason about it, and combine it with something else. And that's, I think, the foundation for large language models to help with large-scale transformation and impact analysis.
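A rough Java sketch of that recipes-as-tools shape. The interfaces here are hypothetical, written for this example rather than taken from OpenRewrite's actual API; what it shows is the division of labor: the model only chooses which recipe to run, the execution stays deterministic, and only the resulting rows and columns go back into the model's context for reasoning.

```java
import java.util.List;
import java.util.Map;

// The structured output of a recipe: just rows and columns, small enough
// to hand back to a model as context.
record DataTable(List<String> columns, List<Map<String, String>> rows) {}

// One deterministic, pre-built analysis. The name and description are what
// a model sees when deciding which tool to call.
interface Recipe {
    String name();
    String description();
    DataTable run(List<String> sourceFiles);
}

public class RecipeToolbox {
    private final Map<String, Recipe> recipes = new java.util.HashMap<>();

    RecipeToolbox(List<Recipe> all) {
        for (Recipe r : all) recipes.put(r.name(), r);
    }

    // The model supplies only a recipe name; the source code itself never
    // enters the model's context, only the resulting data table does.
    DataTable invoke(String recipeName, List<String> sourceFiles) {
        Recipe r = recipes.get(recipeName);
        if (r == null) throw new IllegalArgumentException("unknown recipe: " + recipeName);
        return r.run(sourceFiles);
    }

    public static void main(String[] args) {
        // A toy recipe emitting fabricated counts, purely for demonstration.
        Recipe lineCount = new Recipe() {
            public String name() { return "line-count"; }
            public String description() { return "Rows of (file, lineCount)."; }
            public DataTable run(List<String> files) {
                List<Map<String, String>> rows = files.stream()
                        .map(f -> Map.of("file", f, "lines", "42"))
                        .toList();
                return new DataTable(List.of("file", "lines"), rows);
            }
        };
        RecipeToolbox toolbox = new RecipeToolbox(List.of(lineCount));
        System.out.println(toolbox.invoke("line-count", List.of("A.java", "B.java")));
    }
}
```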
Corey: This episode is sponsored by my own company, The Duckbill Group. Having trouble with your AWS bill? Perhaps it's time to renegotiate a contract with them. Maybe you're just wondering how to predict what's going on in the wide world of AWS. Well, that's where The Duckbill Group comes in to help. Remember, you can't duck the Duckbill Bill, which I am reliably informed by my business partner is absolutely not our motto.
To give a somewhat simplified example, it's easy to envision because some of us have seen this: we'll have code that winds up cranking on data and generating an artifact, and then it stashes that object into S3, because that is the de facto storage system of the cloud.
Next, it picks up that same object and runs a different series of transformations on it. Now, from a code perspective, there is zero visibility into whether that artifact being written to S3 is simply an inefficiency that can be written out, with the data passed directly to that subroutine, or whether there's some external process, potentially another business unit, that needs to touch that artifact for something. Reporting for quarterly earnings is a terrific example of where a lot of this stuff sometimes winds up getting surfaced. It is impossible to get there without having conversations, in many cases with people in other business units entirely.
That's the stumbling block that I have seen historically. Is that the sort of thing that you wind up having to think about when you're doing these things, or am I contextualizing this from a very different layer?
Jonathan: I do think of this process of large-scale transformation and impact analysis very much like what you're describing, as a data warehouse, ETL-type thing, which is: I need to take a source of data, which is the text of the code, and enrich it into something that has everything the compiler knows, all the dependencies, and everything else. That's a computationally expensive thing to do.
But once I have that data, there are a lot of different applications of that same data source.
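For concreteness, here is a minimal sketch of the staging pattern Corey describes, using the AWS SDK for Java v2; the bucket and key names are hypothetical. Nothing in the code itself reveals whether the S3 hop is a removable inefficiency or a contract some other team's reporting job depends on, which is exactly the stumbling block.

```java
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

public class StagedArtifactPipeline {
    public static void main(String[] args) {
        String bucket = "example-pipeline-bucket";      // hypothetical
        String key = "artifacts/quarterly-report.json"; // hypothetical

        try (S3Client s3 = S3Client.create()) {
            // Stage 1: crank on data, stash the artifact in S3.
            s3.putObject(
                    PutObjectRequest.builder().bucket(bucket).key(key).build(),
                    RequestBody.fromString("{\"rows\": 42}"));

            // Stage 2: immediately read the same artifact back for the next
            // transform. Nothing here says whether the S3 round trip could be
            // replaced by a direct call, or whether another business unit also
            // reads this key; that answer lives outside the repository.
            String artifact = s3.getObjectAsBytes(
                    GetObjectRequest.builder().bucket(bucket).key(key).build())
                    .asUtf8String();

            transform(artifact);
        }
    }

    static void transform(String artifact) {
        System.out.println("next stage sees: " + artifact);
    }
}
```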
Corey: I should point out that I have been skeptical of AI in a number of ways for a while now, and I want to be clear that when I say skeptical, I do mean I'm middle-of-the-road on it.
I see its value. I'm not one of those "it's just a way to kill trees and it's a dumb Markov chain generator" people. No, that is absurd. I'm also not quite on the side of "this changes everything and every business application should have AI baked into it." I am very middle-of-the-road on it, and the problem that I see as I look through all of this is that it feels like it's being used to paper over a bunch of these problems where you have to talk to folks.
I've used a lot of AI coding assistants, and I see where these things tend to fall short and fall down. A big one is that they seem incapable of saying, "I don't know, we need to go get additional data." Instead, they're extraordinarily confident and authoritative, and also wrong.
I say this as a white dude who has two podcasts; I am conversant with the being-authoritatively-wrong point of view. It's sort of my people's culture. So how do you meet in the middle on that? How do you get the value without going too far into the realm of absurdity?
Jonathan: Well, I do think that these things need to collaborate together. And so it is with Amazon Q Code Transformation, which is working to provide migrations for Java and other things. You see that Amazon Q Code Transformation actually uses OpenRewrite, a rule-based or deterministic system, behind it to actually make a lot of those changes.
Corey: An open source tool that, incidentally, you were the founder of, if I'm not mistaken.
Jonathan: That's right, yeah. And that's what our technology is really based on as well. And it's not just Amazon Q Code Transformation, as we've seen. You know, the IBM Watson migration assistant is built on top of OpenRewrite, Broadcom's application advisor is built on top of OpenRewrite, and Microsoft's GitHub Copilot AI migration assistant, I think that's the current name, is also built on that.
And they're better together. I mean, that tool runs OpenRewrite to make a bunch of deterministic changes and then follows that up with further verification steps. That is the golden path, I think: trying to find ways in which non-determinism is helpful, and to stitch together systems that are deterministic at their core as well.
Corey: I hate to sound like an overwhelming cynic on this, but it's one of the things I'm best at. The Python 2 to Python 3 migration, because Unicode, there was no other real discernible reason, took a decade, in no small part because the single biggest breaking change was that print statements were now handled as a function.
And you could get around that by importing from the __future__ module, which covered a lot of the 2-to-3 migration stuff. But it still took a decade for the system tools around the Red Hat ecosystem, for example, just to run package management, to be rewritten to take advantage of this. And that was, and please correct me if I'm wrong on this, a relatively trivial, straightforward uplift from Python 2 to Python 3.
There was just a lot of it. Going from that migration to anything that's even slightly more complicated feels, past a certain point of scale, like an impossibility. You clearly feel differently, given that you've built a successful company and an open source project around this.
Jonathan: Yeah. I think actually one of the characteristics that made that Python 2-to-3 migration difficult is that there were things like what you described, fairly simple changes made at the language level, but alongside them came a host of other library changes.
Not really because of Python 2 to 3, but because there was an opportunity: they're breaking things, so we'll break things, everybody, let's just break things, right? And so a lot of people got stuck not just on the language-level changes, but on all those library changes that happened at the same time.
And that's an interesting problem, because it's kind of an unknown-scoped problem, right? How much breakage you have in your libraries very much depends on the libraries that you're using. So, as I mentioned earlier, the Spring Boot 2-to-3 migration recipe in OpenRewrite right now has 3,400 steps.
I promise there's some part of 2 to 3 that we don't cover yet. I don't know what that is, but somebody will encounter it. And for them,
Corey: In production, most likely.
Jonathan: Yeah, they're going to be trying to run the recipe, and they're going to find something. Oh, you don't cover Camel or something, great. And that's fine, you know, and we will encounter that.
And probably if they use Camel in one place, they use it in a bunch of places. So it'll be worth it then to build out that additional recipe that deals with the Camel migration, and then, boom, that gets contributed back for the benefit of everybody else. I think what makes this approachable, or tractable, is really that we're all building on the same substrate of third-party and open source stuff, from JPMorgan all the way down to a tiny, you know, 15-person engineering team like Moderne.
Corey: Oh, we can't overemphasize just how much open source has changed everything. Back in the Bell Labs days, the seventies and eighties, everyone had to basically build their own primitives from the ground up.
Jonathan: Yeah, it was all completely bespoke.
Corey: Yeah. Now it's almost become a trope: so, implement quicksort on a whiteboard. Why would I ever need to do that? Okay. I guess another angle on my skepticism here is that I work with AWS bills, and the AWS billing ecosystem is vast. But the billing space is a bounded problem space, unlike programming languages, which are Turing-complete; you can build anything your heart desires.
Even in the billing space, I just came back from FinOps X in San Diego, and none of the vendors are really making a strong AI play. And I'm not surprised by this, because I have done a number of experiments with LLMs on AWS billing artifacts, and they consistently make the same types of errors, errors that seem relatively intractable. "Go ahead and make this optimization," when that optimization is dangerous without a little more context fed into it. So I guess my somewhat sophomoric perspective has been: if you can't solve these things with AI in a bounded problem space, how can you begin to tackle them in these open-ended problem spaces?
Jonathan: I'm with you, actually. And there's a counterpoint to this, which is that I think all the large foundation models are somewhat undifferentiated. I mean, one of them takes pole position at any given time, but...
Corey: Right. Two weeks later, the whole ecosystem is different.
Jonathan: Yeah. They kind of all roughly have the same capabilities, and, like we said, there are very useful things they can do.
There's some utility there. You know, there are places where non-determinism is useful, and to the extent that you can apply that non-determinism, then great. That's fantastic. But I'm not in a position where I think a Spring Boot 2-to-3 upgrade or a Python 2-to-3 upgrade applied to 5 billion lines of code is going to be acceptable if done non-deterministically, either now or six months from now or a year from now.
And maybe I'll be a fool and wrong, but I don't think so.
Corey: Oh yeah. Honestly, this whole AI revolution has turned my entire understanding of how computers work on its head. Short of a rand function, you knew what the output of a given piece of code was going to be, given a certain input.
Now, it kind of depends.
Jonathan: It does. It really does.
Corey: Yeah, the problem I run into is that no matter how clever I have been able to be, and the people I've worked with, who are far smarter than I am, have been able to pull off, there are always migration challenges and things breaking in production, just because of edge and corner cases that we simply hadn't considered.
The difference now is that, because there's a culture in any healthy workplace of not throwing Steven under the bus, well, throwing the robot under the bus is a very different proposition. "I told you AI was crap," says the half of your team that's AI-skeptic, and "it's not the AI, it's your fault," say the people who are big into the AI-business-daddy logic.
And the reality is probably that these things are complicated. Neither computer nor man nor beast is going to be able to catch all of these things in advance. That is why we have jobs.
Jonathan: I've noticed this just in managing our own team. You know, I catch people when they say, but Junie said this, or, you know...
Don't pass through to me what your assistant said. You're the responsible party when you tell me something. You have a source, you check that source, you verify the integrity of the source, and then you pass it to me. Right?
Corey: You can outsource the work, but not the responsibility. A number of lawyers are finding this to be the case when they're not checking what paralegals have done, or are at least blaming the paralegals for it.
Jonathan: I'm sure. Exactly. Always has been.
Corey: I also do worry that a lot of the skepticism around this, even my own aspect of it, comes from a conscious or unconscious level of defensiveness, where I'm worried this thing is going to take my job away. So the first thing I do, just to rationalize it to myself, is point out the things I'm good at that this thing isn't good at, at the moment.
Well, that's why I'll always have a job. Conversely, I don't think computers are going to take jobs away from all of us in the foreseeable future. The answer is probably a middle ground, in a similar fashion to the way the Industrial Revolution did a number on people who were independent artisans.
So it's an evolutionary process, and I just worry that I am being too defensive, even unconsciously.
Jonathan: I think that's sometimes true, too. I really do feel like this is just a continuum of productivity improvement that's been underfoot for a long time with different technologies. I mean, I remember the very first Eclipse release, and the very first Eclipse release is when they were providing, you know, rule-based refactorings inside the IDE.
And I remember being super excited every two or three months when they dropped another one, just looking at the release notes and seeing all the new things. And what did that do? It made me faster at writing new code. And here we've got another thing that has very different characteristics.
It's almost good at all the things that IDE-based refactoring wasn't good at, but I still guide it. And, you know, I think the, uh, [inaudible] CEO said, or their CTO said, IDEs will be obsolete by the end of the year. I don't believe this at all. I don't believe this at all. I think we're still driving them.
Corey: I am skeptical in the extreme on a lot of that, because, again, let's be honest here: these people have a thing they need to sell, and they have billions and billions and billions of other people's money riding on the outcome. Yeah, that would shape my thinking in a bunch of ways, both subtle and gross, too.
I try to take a more neutral stance on this, but who knows?
Jonathan: I think it's not just neutral, it's a mature stance, and it's one with a lot of experience behind it. I think that you're right. I don't think we're anywhere close to being obsolete.
Corey: No. And frankly, I say this coming from an operations background, a sysadmin-turned-SRE type, where I have been through enough cycles of seeing today's magical technology become tomorrow's legacy shit that I have to support, that I have a natural skepticism built into almost every aspect of this, based on history if nothing else.
Jonathan: You know what vibe coding reminds me of? It reminds me of model-driven architecture about 25 years ago.
Like, you know, just produce a UML diagram and don't worry, the tool will just generate the rest of the application. Or it reminds me of behavior-driven development, when we said, oh, we'll just put it in business people's hands, they'll write the tests, and, you know, we don't want engineers writing the tests, we want the business to.
Like, I feel like we've seen this play out many, many, many times in various forms, and maybe this time's different. I don't think so.
Corey: And to be honest, I like to say that, well, computers used to be deterministic, but let's be honest with ourselves: we long ago crossed the threshold where no individual person can hold the entirety of what even a simple function is doing in their head.
They are putting their trust in the magic think-box.
Jonathan: That's right. Yes. That's absolutely right.
Corey: So I really want to thank you for taking the time to speak with me. If people want to go and learn more, where's the best place for them to find you?
Jonathan: I think it's easy to find me on LinkedIn these days, or, you know, go find me at Moderne, M-O-D-E-R-N-E, moderne.ai.
Either place. Send me a DM; I'm always happy to answer questions.
Corey: And we'll of course put that into the show notes. Thank you so much for your time. I appreciate it.
Jonathan: Okay, thank you Corey.
Corey: Jonathan Schneider, CEO at Moderne. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice.
Whereas if you hated this podcast, please leave a five-star review on your podcast platform of choice, along with an insulting comment that maybe you can find an AI system to transform into something halfway literate.