Episode Summary
From Cursor’s AI-driven customer service fail to why enterprises are consolidating from 15+ observability vendors, this conversation dives into the gap between AI hype and operational reality, and why the companies not shouting the loudest about AI might be the ones actually using it best.
Show Highlights
(00:48) – Introductions and what Dynatrace actually does
About Wayne Segar
Sponsor
Transcript
Wayne Segar: Just as it was imperative that you were observing the health of everything before, it becomes very imperative that you monitor the backend of the AI componentry that you're putting in place now, particularly because in a lot of cases you don't necessarily even know what outcome is gonna happen, right? It is, to an extent, non-deterministic. You wanna understand, what's the performance of this model I'm working with? How much is it costing me? That's another big thing, depending on how I'm using it. So there's all these different things that come into it that become just as critical, because it's even more complex than the complexity you already had running your main applications.
Corey Quinn: Welcome to Screaming in the Cloud. I'm Corey Quinn. I'm joined this week on this Promoted Guest episode by Wayne Segar, who is the Director of Field CTOs at Dynatrace. Wayne, thank you for joining me.
Wayne Segar: Thank you very much. Pleasure to be here.
Corey Quinn: This episode is brought to us by our friends at Dynatrace. Today's development teams do more than ever, but challenges like fragmented tooling, reactive debugging, and rising complexity can break flow and stall innovation.
Dynatrace makes troubleshooting outages easy with a unified observability platform that delivers AI-powered analysis and live debugging. That means less time grappling with complexity, more time writing code, and a frictionless developer experience. Try it free at dynatrace.com. One of the challenges of large companies is that as they start having folks involved in various different adjectives, there's always an expansion, specifically of job titles.
People start to collect adjectives like they're going out of style or whatnot. What is Dynatrace and what do you do there?
Wayne Segar: Yeah, so Dynatrace, at the very definition of it: we are an observability and security company, right? We kinda live in that space.
What we do is certainly very broad. Really, we like to say our goal, our vision, is to make software work perfectly, or a world where software works perfectly. Now, very aspirational; I think we would all like to be there, but we also know that that's certainly challenging. Still, that is what we do and what we strive to do.
And so what we predominantly focus on is helping customers and businesses understand the health of their systems, whether there are problems and where said problems are, and ultimately how to fix them in a timely fashion. Or, what we'd really like to do is automate those things so they don't even impact people at all.
Right? So the best way I describe it to everybody, even somebody like my mom or somebody who's not in the technology space, is this: everybody takes a flight, everybody takes an Uber, everybody checks into a hotel.
Most of the time you're doing that through a digital interaction. At some point when it doesn't work, it's a really bad experience. We work to prevent those bad experiences.
Corey Quinn: Unfortunately, we've hit a point where saying, oh, we're an observability company, is basically a half step better than, oh, we are an AI company.
It's, well, you have just told me a hemisphere that you live in. Great. Who are the customers that are your bread and butter? Where do you folks start? Where do you folks stop?
Wayne Segar: Yeah, so where our customer base is, I would say, is the larger-scale enterprises. Now, certainly we're not excluding anybody by saying that, but in terms of where a lot of our install base is, you can start to think about a Global 15,000 or something like that in terms of rankings. It's really a lot of those types of customers.
But like I said, it can also span down into smaller companies, because at the end of the day, if they've got a digital property that somebody's using, they need to ensure that it's actually working and that the experience is done well.
Corey Quinn: It's strange. I tend to live on both sides of a very weird snake, where I build stuff for fun that runs all of 7 cents a month.
I confess, I've never used Dynatrace for any of these things, yet when I fix AWS bills for very large companies, you folks are everywhere. So it's always interesting to realize that, yeah, some folks think that you are the end-all, be-all of observability, and others will misspell your name because they're that unfamiliar with it.
I want to give you folks some credit as well. When I visited your website at dynatrace.com, as of this recording, and I'm sure some marketing person will fix this before we publish, AI is nowhere above the fold. Yes, it's the first thing below the fold, but you have not rebadged your entire company as the AI company, which frankly is laudable.
Wayne Segar: Yeah, I appreciate that. And there is a little bit of a reason for that, or maybe a little bit of history to it, which I won't bore you with too much. But basically, since we launched the platform that we know of today, our flagship product, which was about 10 years ago at this point, we actually did build AI slash machine learning at the core of it. And that was a while ago, so it's very much been a part of the platform for a long time. Back then we actually had AI in a lot of the marketing, and at that point people would not even believe us. They would say, that's not a thing, that doesn't work.
Corey Quinn: You were AI washing before AI washing got big.
Wayne Segar: It was, exactly right. And so we've obviously changed a little bit of that. We find it interesting now that, right, every company out there, like you just said, has AI plastered on something. Whether they're doing it or not, they're marketing towards it.
And so that's kind of how we look at it: we've evolved the platform, of course, as AI has evolved, but we've actually had it as a core piece since its inception.
Corey Quinn: Right. I guess you're orthogonal to what we do, in that we both sort of overlap; not really orthogonal. If I learn to use words correctly, that would be fantastic. But we are alike in that we both have large data sets that we have to make decisions based upon. So, are you using AI in this? Well, I don't really know how to answer that. If you're not using machine learning, I have several questions about what you think I would be doing to wind up getting to reasonable outcomes. But am I just throwing this all into an LLM API? Categorically, no. That would cause way more harm than it would good. So where AI starts and stops is honestly, increasingly, a question for the philosophers.
Wayne Segar: Exactly. Yeah. Agreed.
Corey Quinn: So I wanna talk about something that you folks have done that I find fascinating, because I care about it very much but see it from a potentially different angle, and that is observability of gen AI. And that can mean two different things; I don't wanna conflate it with, oh, we're using AI to tell you what's going on in your environment. You have customers across the spectrum, but biasing toward the large, who are clearly doing a bunch of gen AI things. How are you thinking about observability for what is effectively the only workload people are allowed to talk about this year?
Wayne Segar: Yeah. So what we're definitely seeing, and where we've kind of focused, is that customers, specifically, like you said, on the larger side, are all doing some sort of project. People are working on it, they're talking about it. Now, is it the most wide-scale? Is it running their most critical, revenue-line application yet? No, not really, at least I'm not seeing that. But that's maybe an aspiration, of course.
What we do see is that these AI projects and these new workloads they're developing are becoming a piece of their broader application landscape. So just as it was imperative that you were observing the health of everything before, it becomes very imperative that you monitor the backend of the AI componentry you're putting in place now, particularly because in a lot of cases you don't necessarily even know what outcome is gonna happen. It is, to an extent, non-deterministic. And you wanna understand, what's the performance of this model I'm working with? How much is it costing me? That's another big thing, depending on how I'm using it. So there are all these different things that come into it that become just as critical, because it's even more complex than the complexity you already had running your main applications.
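To make the monitoring Wayne describes concrete, here is a minimal sketch, in Python, of the per-call telemetry a GenAI observability setup typically captures: which model was used, how long it took, how many tokens it consumed, and roughly what that cost. This is an illustration only; the model names, token prices, and the call_model stub are assumptions, not Dynatrace functionality or any particular vendor's API.

```python
import time
from dataclasses import dataclass

# Illustrative per-1K-token prices; real pricing varies by provider and model.
PRICE_PER_1K_TOKENS = {"frontier-model": 0.015, "budget-model": 0.002}

@dataclass
class LLMCallRecord:
    model: str
    latency_s: float
    input_tokens: int
    output_tokens: int
    estimated_cost_usd: float

def observe_llm_call(model, prompt, call_model):
    """Wrap a model call and capture the telemetry an observability backend would ingest.

    `call_model(model, prompt)` stands in for whatever SDK is actually in use and is
    assumed to return (text, input_tokens, output_tokens).
    """
    start = time.monotonic()
    text, input_tokens, output_tokens = call_model(model, prompt)
    latency = time.monotonic() - start
    cost = (input_tokens + output_tokens) / 1000 * PRICE_PER_1K_TOKENS.get(model, 0.0)
    record = LLMCallRecord(model, latency, input_tokens, output_tokens, cost)
    # In a real setup this record would be exported as a span or metric, not printed.
    print(record)
    return text, record
```

Tracking cost and latency per call, tagged by model, is what lets the later "swap the model out and compare" conversation happen with real numbers.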
Corey Quinn: Yep. I see it from the cost side, but it tends to be more at the micro level than the macro. Companies generally aren't sitting there saying, well, we're spending a few hundred million a year on AWS, so in our next contractual commitment we're gonna boost that by a hundred million because of all the gen AI.
But they do care that, okay, we have a workload that effectively can run indefinitely, continue to refine the outputs, have agents discuss these things. At what point do we hit a budget cap on that workload and say, okay, now this result is what we're going with? You see that sometimes with model refinements as well.
Wayne Segar: Exactly. That was actually the next point I was gonna make, and that's what we're also seeing: people look at it from the perspective of, well, we'll change the model out and we'll see, could this have been more cost-effective in the more macro world? So maybe, like I said, it's kind of a rounding error in terms of our cloud bill today, but it now gives us the opportunity to understand how we can be efficient with this when we do scale, which inevitably will happen.
Corey Quinn: I do find that when people are trying to figure out whether the thing even works, there's not even a question: they reach for the latest and greatest top-tier frontier model to figure out, is this even possible? Because once it is, okay, yes, it turns out, take a toy application I built to generate alt text for images before I put them up on the internet. Can it do this? Terrific. Great. Now, if I start using this at significant scale and it starts costing money, I can switch over from Claude 4 Sonnet all the way back to, I dunno, Amazon Nova or an earlier Claude version or wherever the economics make sense. But at the moment, the cost for this stuff on a monthly basis rounds up to a dime.
So I really don't care all that much about cost at the current time. And that seems to be where a lot of folks are with their experiments of, does this even work? If it's expensive, so be it. People's time and energy, and the lack of focus on other things, is already more expensive than this thing is going to be, by far. At least that's how I'm seeing it.
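The "prove it on a frontier model, downshift once the economics matter" pattern Corey describes can be as simple as a spend-aware model picker. A minimal sketch follows, assuming a monthly budget; the model identifiers and thresholds are illustrative, not anything from the episode or from actual provider pricing.

```python
# Illustrative model tiers, ordered from most capable (and most expensive) to cheapest.
# Each entry is (model_id, month-to-date spend ceiling in USD before falling back).
MODEL_TIERS = [
    ("claude-sonnet-4", 50.00),
    ("claude-3-haiku", 200.00),
    ("amazon-nova-lite", float("inf")),
]

def pick_model(month_to_date_spend_usd: float) -> str:
    """Return the first model tier whose spend ceiling hasn't been crossed yet."""
    for model_id, ceiling in MODEL_TIERS:
        if month_to_date_spend_usd < ceiling:
            return model_id
    return MODEL_TIERS[-1][0]

# A hobby workload whose spend "rounds up to a dime" never leaves the frontier model.
print(pick_model(0.10))    # -> "claude-sonnet-4"
# A workload that has burned through $120 this month drops to a cheaper tier.
print(pick_model(120.00))  # -> "claude-3-haiku"
```

The same idea covers Corey's earlier point about budget caps on effectively unbounded agentic workloads: once a cap is hit, stop iterating and ship the result you have.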
Wayne Segar: I think I see that as well. I'm kind of in agreement with you there, 'cause it's certainly not at any scale yet. You're right, it's very much in the stage of, can we make something that works? And then I think people are starting to think about, can we not only make it work, but does it actually provide value back? Which I think is the other big thing that people have struggled with. It's like, this is cool, but cool doesn't necessarily make or save us any money.
Corey Quinn: Right. And in many cases it seems that companies are taking what they've been doing for a decade and a half and now calling it AI, which, okay. And there are a lot of bad takes on it. People say, oh, we're gonna replace our frontline customer service folks with a chatbot. Cool. I've yet to find a customer that's happy with that.
To give one great example I found: I was poking around on Reddit last night, looking at a few technical things as I sometimes do when I'm looking for inspiration, and someone mentioned that they canceled Cursor, the first time in 20 years they'd canceled anything purely over poor customer service. And I had the same experience. I emailed in about a billing issue and got a reply from a robot; the fact that it was a robot was in very fine gray text at the bottom, so I missed it the first time. Then it basically chastised me when I sent a second email a couple days later: this will not improve response times. Okay. I understand the business rationale for why you would do that. People, as people, don't like it. They want to be able to reach out and talk to humans. That's something the big enterprise clouds had to learn: if you're talking about large-value transactions, people want a human to get on the end of the phone, or take them out to dinner, or whatnot. That doesn't necessarily scale from small user all the way up to giant enterprise. Customers have different profiles and need to be handled differently.
Wayne Segar: That's exactly right. Yeah. And I find your story kind of funny, in the sense that the whole promise of AI is supposed to be making things more efficient, and it literally just did the same thing: it was rude to you and said that it couldn't even do the job any faster.
Corey Quinn: Exactly. Where I do see value for things like that with frontline support is: okay, a ticket comes in, you look at it in your ticketing portal as a human customer service person, and it has already picked up the tone, drafted a couple of different responses, and linked you to internal resources that are likely to help. The only ways I've seen customer-facing AI things like that make sense are where it is very clear that it's an AI thing coming back, or it gets human review before going out the door.
Wayne Segar: Yeah. And that's even what we're seeing too: what I like to call human in the loop.
Corey Quinn: Yeah, that's a good expression. I like that.
Wayne Segar: That is what I see, and that's where some people are doing it today. This is more when we start talking about autonomous, or more autonomous-like, operations, where it's, okay, great, something can tell me the problem. Like Dynatrace, of course, we can point you to the causation of a problem, even tell you in some cases what you should resolve or do, or at least give you suggestions of what those actions should be. Now, would you have some other agent interact with that, make the change, and immediately go push it out and say we're done? Probably not. And that's where, again, human in the loop is where I'm starting to see a lot of people. That's more of where people are envisioning it right now; it's more of the strategy.
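A rough sketch of the human-in-the-loop gate both speakers are describing, whether the "draft" is a support reply or a suggested remediation: the automation proposes, and a person approves before anything customer-facing or production-facing happens. Everything here (the RemediationAction shape, the risk field, the ask_human and apply_change placeholders) is a hypothetical illustration, not Dynatrace behavior.

```python
from dataclasses import dataclass

@dataclass
class RemediationAction:
    problem_id: str
    description: str   # e.g. "roll back deployment X", produced by the analysis engine
    risk: str          # "low" | "medium" | "high"

def apply_change(action: RemediationAction) -> None:
    # Stand-in for whatever actually performs the change (CI job, config push, etc.).
    print(f"Applying remediation for {action.problem_id}: {action.description}")

def handle_suggestion(action: RemediationAction, ask_human) -> bool:
    """Auto-apply only trivially low-risk actions; everything else waits for a person.

    `ask_human(action)` is a placeholder for a ticket, chat prompt, or approval UI
    that returns True once someone has reviewed and approved the change.
    """
    if action.risk == "low" or ask_human(action):
        apply_change(action)
        return True
    print(f"Remediation for {action.problem_id} rejected; leaving the system untouched.")
    return False
```

The design choice is where to draw the auto-apply line; "probably not" pushing changes straight to production, as Wayne puts it, means keeping that threshold conservative until trust is earned.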
Corey Quinn: It's somewhat reductive to look at observability as, it tells me when the site is down. Well, great; past a certain point of scale, the question has to become, how down is it? But it goes significantly further than that. Since you have that position of seeing these entire workflows start to finish, how do you find that companies, specifically in the enterprise space, are taking these projects from development to small-scale production to large-scale production, while, I guess, being respectful of the enterprise concerns? Obviously there's performance and security, but compliance starts to play a large role in it as well. What are you seeing?
Wayne Segar: Yeah. I would say that, and again, the space moves very quickly, but if you went back only six or eight months ago, people were raising their hands saying that compliance was the biggest blocker of everything. Meaning you could maybe test some things out, but in terms of what you could do and what data you could use, it became, let's say, very challenging.
Now I've seen that start to open up a bit, because companies have created AI centers of excellence internally that are, let's say, a little more designed to understand what the compliance needs should be, what is acceptable, and certainly what isn't. That's given people guidance on how to fast-track their projects, or at least on what's acceptable within them. So that's one of the things I've seen become more predominant, this actual AI Center of Excellence that has come up in a lot of enterprises, and that's helped a lot. With that in mind, it's allowed customers, and people in the company, to basically come up with a better strategy for what they really want to end up with at the end of the day.
So it's like working backwards. That's the other thing I've seen people start to do a little bit more: think backwards from, yeah, this idea sounds cool, but what would it do? How would it improve things? Maybe it's an internal customer experience, which I think is where a lot of people are starting: begin with the internal applications that service our internal users. What can we do to improve their lives, make things more self-service, by creating apps, or AI-based apps, that do that? That gives us a lot of learnings to ultimately transition and move things to stuff that may be more external-facing. So that's the progression I've started to see, and I'm still seeing it, obviously, as it matures.
Corey Quinn: Trusting your AI stack is non-negotiable. That's why Dynatrace pairs perfectly with Amazon Bedrock.
Together they deliver unparalleled observability across your generative AI workflows. Monitor everything from model performance to token anomalies, all in real time. See how Dynatrace enhances AI with Amazon Bedrock. Start your free trial at dynatrace.com. Something I have found is that a lot of these enterprises are over-indexing, from where I sit, on how precious their data is, as if it's core IP that, if it got out into the world, would destroy their business.
I've always been something of a skeptic on a lot of these claims. Even take the stuff we've built at The Duckbill Group for internal tooling and whatnot: if that were to suddenly leak because we're suddenly terrible at computer security, it doesn't change our business any. It's not really a threat, because the value is the relationships we've built, how to apply the outputs of these things, the context, the understanding. If I get access to all of AWS's code to run their hypervisors, I'm not gonna be a threat to AWS; I'm not gonna build a better cloud now. And I think a lot of companies find themselves in that position, but they still talk about it as if it's the end of days if a prompt leaks, for example.
Wayne Segar: I agree. I mean, I think there's probably some correctness to the concern in certain industries.
Corey Quinn: Oh, I'm painting with a very broad brush. I want to be clear here.
Wayne Segar: Yeah, yeah. But I think there is probably some over-hesitancy, and that, again, is where I've seen it change when people have created these AI centers of excellence and brought on people who aren't, let's say, your traditional compliance folks who look at things in a very black-and-white manner. They're usually more of a, okay, what does this really mean? If this data gets out there, does it matter? They look at it from that perspective, versus a more traditional black-and-white compliance person who would say, data getting out there equals bad, that never happens, right? That's an immediate no. So I'm seeing some of that come around and change, and that's maybe one of the driving forces that has somewhat greased the skids to allow enterprises to adopt things a little more readily.
Corey Quinn: Yeah. And I think also that people at companies tend to get a little too insular. I've seen it with my customers; I've seen it in my own career. I find it incredibly relaxing to work in environments where we're only dealing with money, 'cause I had a job once in an environment where, if the data leaked, people could die. That is something that weighs on you very heavily. But when all you do is work at a bank, for example, it's easy to think that the only thing holding the forces of darkness at bay is the ATMs spitting out the right balance. Maybe you're closer to that than I want to acknowledge, but there is a sense of, at some point, what are we actually doing? What is the actual risk? What is the truly sensitive data, versus the stuff that just makes us feel bad, or is embarrassing, or puts us in violation of some contractual breach issue? It's a broad area, and there's a lot of nuance to it.
I'm not saying people who care about this stuff are misguided, but it does lead to an observation that a lot of the upstart AI companies are able to innovate far faster and get further than a lot of the large enterprises, specifically because, as a natural course of growth, at some point your perspective shifts from capturing upside to protecting against downside, to risk management. If a small company building a coding assistant winds up having it say unhinged, ridiculous things, well, that's a PR experience that could potentially end in hilarious upside. Whereas for a giant hyperscaler with very serious customers, that could be disastrous. So they put significant effort into guardrails rather than innovating forward on capabilities, because you have to choose at some point.
Wayne Segar: That's right.
Yeah, I agree. And I see that as well, right? That is the big thing: it depends on the company size and, ultimately, like you said, what risk they can accept at the end of the day if the worst-case scenario actually happens, right?
Corey Quinn: Oh, absolutely. It's the same mentality that causes companies to freak out about things that frankly they don't need to. Like, well, I was about to sign a deal for multiple millions of dollars with this company, except that one dude on Glassdoor had a bad time working there for three months, so I don't know, never mind. That does not happen among reasonable people. But I understand the reflexive, oh dear God, what's happening here?
Wayne Segar: Yeah, I agree. And I'm kind of in that camp as well, which is that there are always gonna be positive and negative things out there about any company, right? But I tend to live on the more rational side of things, which I think is where you get to: some bad things, or some negative Reddit posts that happen because of a bad experience, are not gonna destroy a company, right? People are gonna buy because they like the people, they like the product, they like the technology.
Corey Quinn: I agree; that is what sensible people tend to do. Getting back to, I guess, your place in the ecosystem on some level: one thing that becomes a truism with basically every workload past a certain point of scale is that you don't have an observability vendor so much as you have an observability pipeline. Different tools doing different things from different points of view. As gen AI proliferates into a variety of workloads in a variety of different ways, why is it that customers are going to Dynatrace for this, instead of saying, ah, we have 15 observability vendors and now we're gonna add number 16 that purely does the AI piece?
Wayne Segar: Yeah, really good question. I think the answer to that lies, particularly again in the more enterprise space, in the shift that I've observed happening, pun observed, right? Even before the AI boom, before people really went into AI, people were consolidating things; they were really on more of a consolidation play. Which is to say, we're trying to get down from the 15 different vendors that do very similar things, like in your example, and maybe get down to four. I just made up a number, right? It's not always many-to-one, it rarely is, but it can be many-to-few. And there are a ton of reasons why that's beneficial: there's economics, there are efficiency gains, and all that stuff. So I've seen that start to happen.
And that's why, going back to your question, somebody would look at someone like Dynatrace when they get into the AI observability space, instead of finding some point solution that maybe covers that specific niche. It goes back again to the consolidation play, because customers just don't want to have to manage a portfolio of 16 or 20 things that are very similar.
Corey Quinn: Oh, I agree wholeheartedly with that. Absolutely it does. But the reality as well is that there are a bunch of terrific observability companies that, once they reach a certain growth tipping point, need to do all things, and I think it's to their detriment. If you have a company that, I dunno, emphasizes logging, that is their bread and butter and what they grew up doing, and now it's, okay, we need to check a box, so we're gonna do metrics now. A week after they launch, someone who's never heard of them before stumbles across them, implements their metrics solution, and concludes, this doesn't seem well baked at all; I guess it's all terrible across the board. It becomes harder and harder to distinguish in what areas a particular vendor shines versus which is more or less a checked box as part of a platform play. Ideally, in the fullness of time, they fill in those gaps and become solid across the board. But it still also feels like a bit of the multi-function printer problem, where it does three things, none of them particularly well. How do you square that circle?
Wayne Segar: Yeah, it's a very difficult one to square, like you said. Now, the way that we look at it, and this could be different from company to company, is we try not to be the, oh well, we'll release this thing because it sounds like the market wants it, but it does maybe 5% of what people really need it to do. So we look at it from the perspective of: what is the real pain point? I always try to work backwards. What's the real pain point that customers have? Is there additional value they gain because of the rest of the platform, the synergy you get from the rest of the platform? If the answer to those questions is no, then we have to discuss whether that's a real area we wanna invest in. Because, like you said, it's, great, we do this one little thing very well, but even from a go-to-market standpoint it doesn't usually take off, because you're trying to sell to, or provide value to, an audience for whom you're only doing 5% of what they really need.
Corey Quinn: Yeah, and I think that is the trick and the challenge: you want to be able to provide all things to all people, but you also have to be able to interoperate effectively, because every environment is its own bespoke unicorn past a certain point of scale. No one makes the same decision the same way. So as you start aggregating all those decisions together, things that make perfect sense for one customer might be disastrous for another. And you're always faced with the challenge of, how configurable do you want to make the thing? Do you want it to be highly prescriptive, and it'll be awesome this way? Or do you want someone to basically have to build their own observability from the spare parts you provide them? It's a spectrum.
Wayne Segar: Yeah, I agree a hundred percent. And like I said, going back a little bit, the way that we look at it is that the power in observability now, and the power of observability going forward, is that you're collecting a whole bunch of data, and that's growing exponentially, but it's the data in context, having the context around it, that provides you actual value at the end of the day. So that's why I said, would we go into something that doesn't marry up with that, that doesn't make sense or doesn't provide value to a lot of people? Going back to the AI observability stuff: one of the things we do that is a little bit unique to us is we have a topology model of a customer's environment, a real-time view of, this dependency belongs to this and this and that, all those types of things. And now that you're injecting a brand-new piece into your topology, that being the AI infrastructure, if you're going to do that inside of your application space, you're gonna want that context of how these distributed systems are interacting with it. So there's, like I said, a lot of value in doing it in that case, and that's really what we focus on.
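To illustrate why putting the AI infrastructure into the same topology as everything else matters, here is a minimal, generic dependency-graph sketch, not Dynatrace's actual topology model; the component names and the traversal helper are made up for illustration. Once the LLM pieces are nodes in the graph, you can answer questions like, if the model endpoint degrades, which user-facing services feel it?

```python
from collections import defaultdict

# Hypothetical topology: edges point from a component to the things it depends on.
DEPENDS_ON = {
    "checkout-frontend": ["checkout-service"],
    "checkout-service": ["orders-db", "recommendation-agent"],
    "recommendation-agent": ["llm-gateway"],        # the new AI piece of the topology
    "llm-gateway": ["bedrock-model-endpoint"],
}

def dependents_of(component: str) -> set[str]:
    """Everything that directly or transitively relies on `component`."""
    reverse = defaultdict(set)
    for src, deps in DEPENDS_ON.items():
        for dep in deps:
            reverse[dep].add(src)
    seen, stack = set(), [component]
    while stack:
        for parent in reverse[stack.pop()]:
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

# If the model endpoint degrades, the topology shows the blast radius:
# llm-gateway, recommendation-agent, checkout-service, and checkout-frontend.
print(dependents_of("bedrock-model-endpoint"))
```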
Corey Quinn: Yeah, and I think that's a very fair positioning to have. Not to name names, but I do talk to a lot of companies about observability because, like it or not, the ultimate arbiter of truth of what's really running in your environment is the AWS bill; observability via spend is not a complete joke. And I hear complaints about vendors; it's always the squeaky wheel that gets the grease. I don't hear complaints about you folks very often, though I do find you in these environments. So your positioning, the way that you're talking to customers, and the way you're addressing their problems is very clearly onto something. Believe it or not, you don't just live in one of the boxes of the Gartner Magic Quadrant. You are out there in the real world.
Wayne Segar: Yes, yes, we are. And like you said, we are predominantly deployed, like I said, in the larger enterprise space. The way that we look at it when we deal with customer interactions, to maybe go back to your point before about why we're not coming up as the squeaky wheel, is that we prefer, obviously, to be the valuable wheel, or the valuable cog in the wheel, per se. We work very diligently to ensure we're solving the customer's problem, not just finding a way to get them to use something new. We want to focus on the fact that you have a new challenge and how we can help address that. Where's the value in the platform that can help you address it? Not just, hey, use this new thing because we have it and maybe it'll be cool.
Corey Quinn: I cannot adequately express how much of a differentiator that is between you and other vendors I come across fairly regularly. It's, well, we need to boost market share, so buy this thing too. But I don't want to buy this thing. Well, tough, it's now getting rolled into your next contract. It becomes a challenge where, at some point, you have to start going broad instead of going deep. And I still think this is an emerging enough space that that doesn't work for everyone, nor should we try and force that square peg into the round hole.
Wayne Segar: No, and like I said, at the end of the day, all it does, if you do that, is create frustration on two sides, right? Because, one, as you're saying, customers get frustrated because now they're either paying for something that's not really valuable or it's not working out. And then, from the company standpoint, you've now just created, let's say, a bad taste in somebody's mouth, or a bad relationship, and it doesn't really have benefit in the long term for either side.
Corey Quinn: It doesn't work. It can't work. I like the idea of sustainable companies doing things that are not necessarily as flashy, but that get the work done. I've always had an affinity for quote-unquote boring companies. It's a lot less exciting than living on the edge and being in the news every week, but maybe I don't want my vendors constantly jostling each other for headlines instead of solving the actual problem they're paid to solve.
Wayne Segar: Exactly.
Corey Quinn: I really wanna thank you for taking the time to chat with me. If people wanna learn more, where should they go?
Wayne Segar: Yeah, so the easiest way, of course, is dynatrace.com. Plenty of information there. And really there are two things you can do. You can certainly take a free trial, again, no strings attached; you don't have to put in any payment information, nothing like that. You can just deploy it in your own environment and actually see it work. And then the other thing you can get access to is a playground. So if you don't even wanna try it in your own environment yet, you're not ready or can't do it, we have an actual environment that's running where you can play around with the actual live product.
Corey Quinn: Which is fantastic. I love that easy exposure to, here, play with this and see how it works, in an environment that's somewhat contrived, sure, but not massively so. As opposed to, oh, you wanna learn how our product works? Click here to set up a call with the sales team. I understand that that is how enterprises buy, but there are also small-scale experiments where people just want to see if the thing works, usually at weird hours, and putting in that blocker so people can't actually get to kicking the tires doesn't serve anyone particularly well. Again, it doesn't work for every product, but it should for this.
Wayne Segar: Yep. I agree.
Corey Quinn: Thank you once again for your time. I really do appreciate it. Wayne Segar, Director of Global Field CTOs at Dynatrace. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice. Whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry, insulting comment that we'll have no idea when it showed up, just because, honestly, we don't have great observability into those things.