Episode Show Notes & Transcript
Show Highlights
Rubrik: https://www.rubrik.com/sitc
Transcript
Dev: We started to feel we were onto something, and we felt like we were onto something in two parts. One of the parts was when we were pitching with Rewind, it felt differentiated, and it gave people a safety blanket, essentially, that they didn't really have with their AI systems. In fact, pretty consistently,
one of the things that I heard was, oh, I didn't even know that this would be possible. Just to give you the ten seconds on what Agent Rewind is: Rubrik maintains backups of the most important production systems that an organization or enterprise has. Agent Rewind says if an agent operates on those systems and makes a mistake,
deletes something it shouldn't have, edits a field in the wrong way, we can allow you to correlate those agents' actions with the actual change to the production system and allow you to recover in one click from that healthy snapshot. So that's the Agent Rewind pitch, and it gave people a lot of safety.
That was one thing we learned. But the second thing we learned was this ability to go ahead and recover was still a little bit of a future problem, right? It gives them comfort; it allows them to go ahead and do something that's coming. But in order to build Rewind, we also had to build a deeper understanding of the agent systems.
We call it the agent map and the agent registry. Like, we had to know, well, what agents are running inside of your ecosystem, so we can figure out which ones we might need to rewind. And we need to know what are the types of things that they have access to, what actions can they take, so we can rewind a given action that it actually ended up taking.
Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. This promoted guest episode is brought to us by our friends at Rubrik. Also brought to us by our friends at Rubrik is Dev Rishi, their GM of AI. Dev, thanks for joining me.
Dev: Hey, Corey. Thanks. I'm looking forward to being here.
Corey: Let's be honest about disaster recovery.
Your DR plan assumes everything fails gracefully and your backups are pristine. But what happens when ransomware's already in your backups, or when the credentials your DR runbook depends on are compromised? Most DR strategies are built for accidents, not adversaries. Rubrik gets this. They isolate your backups from production credentials, make them immutable, and can scan them for threats, so you're not restoring malware along with your data when your DR
day comes, and it inevitably will. You need more than a runbook and hope. You need backups that are actually recoverable. Learn more at rubrik.com/sitc. You are a somewhat recent newcomer over to Rubrik. Before this, you figured, you know what I'm gonna do, I'm gonna start an AI company. But you did it hipster style, while it was still underground, before it was cool.
What's the backstory there?
Dev: Yeah. Honestly, I'm not sure AI was ever not cool. I remember even when we started the company in 2021, I actually thought at the time most of AI was a little bit overvalued. I thought it was overhyped. And one of the reasons I thought that was I had spent a long time as a product manager at Google.
I was the first PM for Kaggle, which is the data science and machine learning community, and I saw this massive influx of, I think, citizen data scientists, people that were learning a lot about machine learning and AI and were very excited about it. And I was one of the product managers on the team that eventually became Vertex AI, Google Cloud's machine learning platform.
And candidly, very few enterprises were actually getting value out of enterprise AI, and very few had actually even figured out how to go into production with those systems.
Corey: Yeah. My line for a while back in that era was that machine learning can sort through vast quantities of data and discover anything except a business model.
Dev: Yeah, I think that's about accurate. You know, when we looked at companies that were actually making money in this space, it was data labeling companies. Scale AI, I think, back in the day was making a killing, but their core business was not actually delivering production-ready AI models for the enterprise.
It was just labeling data. And so I started the company in 2021, along with a few of my co-founders, because we just believed in two things. The first was this technology is powerful. It hadn't delivered on that promise yet, but we saw what it was able to do at leading organizations like Google or YouTube or, you know, Uber, where my co-founders came from.
And then the second was we thought we had an abstraction that would make it a little bit more accessible. You know, we'd seen a lot of companies die on this hill of trying to democratize machine learning and democratize deep learning. We thought we'd take our stab at it as well.
Corey: Yeah, I remember back then the use cases that they came up with were either highly specific to the point of uselessness for anyone else, or banal. The two examples that stuck in my mind were: if you are a credit card company and you have a massive transaction volume, you can start to identify fraud via the power of machine learning. Great. I'm not that. The other one was, I think, WeWork did machine learning to analyze traffic patterns and wound up discovering that they could reduce them if they had a second barista at certain hours of the day.
In other words, humans like to drink coffee in the morning was their amazing discovery out of this. And it felt like it was a really interesting space that people struggled to articulate value from. Then we saw this massive gen AI explosion over the past few years, and everyone is doing some experiment with it.
Some folks are rapidly rebranding as fast as they can to be AI companies, but the question of value seems to be one that hangs over the industry as a whole, because while it's terrific when it gets it right, it often doesn't. There are distinct costs associated with it, and people are still trying to figure out how exactly this factors into the thing that they're doing, has been my experience from talking to folks.
Am I missing something key? How are you seeing the industry evolve?
Dev: I want to answer that in just a second, but the first thing you mentioned actually was really interesting. You talked about how use cases back when we started the company in '21 were pretty banal, or, you know, relatively consistent. I had the same thought.
I wanna talk about the technology for a minute and then go into this generative AI section. One of the things that I remember lamenting back at that time was if you looked at the use case section on every one of our competitors' websites, every competitor's use case section looked like the exact same thing.
There's a churn prediction model, there's an LTV prediction model, there's a fraud detection model, and there are all these use cases of look at all the amazing things you can build with the platform. I think our insight at the time was that there was a newer technology whose power was gonna be unlocked.
Or sorry, it wasn't called generative AI at the time; it was called deep learning. And, you know, our insight was that, number one, we thought it would work really well with unstructured data. So if you think about fraud or churn, those were largely structured data problems, and we were really excited about unstructured data: raw text and images and video.
Then the second insight was this rise of what we called pre-trained models. These are models where you didn't have to have a million records, a data science team, and a cleaned dataset to build on; they were pre-trained, so you could just sort of adapt a model that generally understood English to your task.
Those were two of the things that we decided to invest in with deep learning, and we set out on a mission to democratize deep learning. And then I like to say OpenAI and large language models really democratized deep learning better than any of us. But then you asked the question of, outside of consumer, which is I think where generative AI has probably delivered a lot of value
for users today, I would argue. What is the value that enterprises are seeing in the Global 2000? I partially agree with your observation, like, are people getting ROI out of it or not? What's been surprising to me coming into Rubrik is the reason why I think organizations aren't actually getting nearly as much value yet
as has been forecasted, and as I believe they will get in a few years. At Predibase, you know, the startup that we worked on for the last four and a half years, we worked with a lot of digital-native and leading AI engineering organizations; household brands you would know were the ones deploying production models with us.
I would say the biggest difference between them and, coming to Rubrik, the customer base here, Global 2000 enterprises, think about the most important regulated customers in financial services and healthcare, is actually on risk posture, and that converts downstream into ROI.
The reason I think a lot of organizations haven't gotten the type of value that they want in ROI yet is they haven't figured out the framework where they're going to let AI loose in terms of the actual work that will produce value.
Corey: At certain points of scale, companies become less about seizing opportunities and more about risk mitigation and management.
And when you have something that gets it right 80% of the time, and the other 20% goes off the rails to greater or lesser degrees, that becomes almost an unbounded risk vector. I feel like that's why people are taking a very, okay, how do we put guardrails around this thing so that it doesn't destroy the company we have built?
Dev: Totally. Let me ask you, what do you think is the biggest enterprise AI trend of 2025 or 2026? I think the most hyped one is probably agentic AI, right? Like, agents is the thing everybody wants to talk about.
Corey: Even getting a definition of what an agent is is becoming an exercise in tell me what you have to sell me.
Everyone is defining it in ways that align with their view of the world. We saw similar things in the observability space in recent cycles.
Dev: Let me offer you a definition without something to sell you, because we don't sell an agent builder platform. We don't make, you know, something you build agents with.
I always think about agents as just LLMs, or models, with access to tools. And so if you think about that, that really just means a model that can do work or take an action, you know, on your behalf. And I think this comes back towards this idea that at a certain point organizations have to be about risk mitigation.
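Dev's definition, a model with access to tools, reduces to a small loop: the model chooses an action, and a runtime executes it only if policy allows. Here's a minimal sketch in Python; the tool names, the stubbed-out `fake_model`, and the policy set are all invented for illustration, not anyone's actual agent API.

```python
# A toy agent loop: a model picks a tool, and the runtime executes it only
# if policy allows. fake_model() stands in for a real LLM call; the tools
# and the policy set are invented for illustration.

def lookup_order(order_id):
    """A read-only tool: safe, no side effects."""
    return {"id": order_id, "status": "shipped"}

def cancel_order(order_id):
    """A write tool: the kind of access most enterprises withhold today."""
    return {"id": order_id, "status": "cancelled"}

TOOLS = {"lookup_order": lookup_order, "cancel_order": cancel_order}

def fake_model(prompt):
    """Stand-in for the LLM's decision about which tool to call."""
    if "cancel" in prompt:
        return "cancel_order", {"order_id": "A123"}
    return "lookup_order", {"order_id": "A123"}

def run_agent(prompt, allowed_tools):
    """Execute the model's chosen tool, but only if policy permits it."""
    tool_name, args = fake_model(prompt)
    if tool_name not in allowed_tools:
        return {"error": f"tool {tool_name} not permitted"}
    return TOOLS[tool_name](**args)
```

The interesting knob is `allowed_tools`: the read-only posture described in this episode is just a policy set that contains no write tools.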
If you think about a lot of different software trends, the shift to cloud as an example, there was a huge amount of nervousness at the beginning of the shift to cloud, predominantly around security and guardrails, and, you know, this idea of we're gonna go to a multi-tenant architecture in some way or the other, rather than my on-prem data center.
And now with AI, what we're talking about with agentic AI is, look, it's gonna be great. It's gonna be able to do work. It's gonna be able to take action. And every IT and security person I'm talking to is like, wait, so you wanna give a non-deterministic model, meaning a model that will not necessarily operate within a certain defined framework, something that can come up with random answers at times.
You wanna give a non-deterministic model access to my production and enterprise systems, and you want me to be on the hook for it? No way. And so while I think it has a lot of different promise, a lot of the organizations that I speak to are kind of still stuck on square one of how do I get comfortable with the idea of this running around inside of my ecosystem?
Corey: Yeah, it feels like every vibe coding project you start begins with, okay, roll for initiative. And it always goes slightly differently depending upon the fates, for lack of a better term, which is great for some use cases and terrifying for others.
Dev: But coming to your point about ROI, I actually, by the way, didn't know this three months ago.
You know, we were acquired into Rubrik about three to four months ago. When I was working within production AI systems at a lot of leading tech companies, we thought that there was a series of challenges that typically had to do with latency or throughput and, you know, the efficiency of models.
But over the last two to three months, I've had a chance to speak with about 180 to 200 different customers representing, you know, all sorts of swaths of the Global 2000. And the thing that was really interesting to me is that we speak a lot about the promise of what AI could unlock. It can go ahead and do work on our behalf.
Imagine that rote task you had to do to prepare for an interview, or to send out an email; it can actually go read through the systems and write that out for you. But then in practice, if you think about what type of agent or what type of AI every single organization is actually rolling out today, for the ones that actually even are, first of all, they're all in read-only mode, right?
Very few people are giving agents what we call write or delete access. Very few people are giving agents the ability to actually edit a system. And it's not because they can't think of the use case, and it's not because the business value argument isn't there. It's because it feels like the risk is almost uncapped.
There's unlimited downside really in it for them.
Corey: I mean, I do it myself in the test lab, but worst case, something blows up and it's not that hard to restore from backups. Turns out data resilience is a thing. But the idea of doing this with production, customer-side data? Because my laptop has theoretical access to customer environments, I can't let Claude Code run loose on this thing.
I give it a bounded EC2 instance in a dedicated AWS account. Worst case, it can blow up my budget, but that's the end of it. It has no access to data that could cause me a...
Dev: Yeah, exactly. I spoke with our head of InfoSec yesterday, right? And he was talking about how, look, there are people inside of our company that have, quote-unquote, superuser admin privileges.
They can do incredible amounts of damage, but those people, number one, have all been background checked by the company. Number two, there are all of these guardrails that are put in place, you know, a SOC that observes them. And number three, and maybe the most important from his perspective, they operate at human pace.
So, you know, in terms of the amount of actions that you can go ahead and take, you could probably blow up your AWS costs, you know, reasonably easily.
Corey: Yeah, or compensating controls. The way we handle this in polite society: the bank teller theoretically can enrich themselves at your expense. The reason this doesn't happen is because there are audit controls and security flags and alarms that will go off everywhere
the second something like that happens.
Dev: Yeah. And there's probably 30 to 40 years of software that's been developed around validating the employee and the human, and there's a certain pace at which a human can go ahead and do damage. I would say with agents, you know, the line that we use internally is, well, they could do 10x or a hundred x the damage in a tenth of the time.
The pace at which the operation is changing is something that there isn't really a resilience infrastructure for today. And I think that's what's stopping a lot of the ROI.
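The "human pace" point suggests one concrete guardrail: throttle an agent's write actions with the same sliding-window rate limiting you'd apply to any API client. A minimal sketch, with invented limits and class names:

```python
import time
from collections import deque

class ActionRateLimiter:
    """Hypothetical guardrail: cap an agent's write actions at a
    human-scale pace using a sliding time window."""

    def __init__(self, max_actions, per_seconds):
        self.max_actions = max_actions
        self.per_seconds = per_seconds
        self.timestamps = deque()  # times of recently permitted actions

    def allow(self, now=None):
        """Return True if one more action fits in the current window."""
        now = time.monotonic() if now is None else now
        # Drop actions that have aged out of the window.
        while self.timestamps and now - self.timestamps[0] > self.per_seconds:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_actions:
            self.timestamps.append(now)
            return True
        return False
```

An agent capped at, say, two writes a minute can still make mistakes, but it can no longer do ten times the damage in a tenth of the time.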
Corey: It feels like a lot of the value that AI has generated comes at the personal level. You alluded to some of it recently where we're talking about, oh, respond to this email
for me. I have what I like to politely call the asshole-in-email problem. I tend to write relatively tersely. It's a bad habit I get from Twitter. And it turns out that short emails look like I'm being imperious, so: make this polite. But I still have to iterate through it a couple of times. It starts off with, I hope this email finds you well, which is not my style.
I'm likelier to begin with, I hope this email finds you before I do, because that's at least my brand. And you have to tweak it and all, and there's a bit of a human-in-the-loop story, and it's helpful, but, you know, I'm not gonna pay $5,000 a month of personal money for these things. It's helpful, but it doesn't necessarily justify a lot of the investment.
On the enterprise side, that story may look radically different. I have a number of customers, and effectively all of them are doing some experiments with AI. For the ones that are spending meaningfully on it, those use cases start to look a lot more like the B2C aspects of what it is that they do. The B2B companies that I work with are using it, obviously, for a number of things, but their spend is nowhere within orders of magnitude of what we're seeing when it starts getting mass deployed.
How are you seeing the Global 2000 largely emerge, as far as trends go, when it comes to AI?
Dev: I'm seeing a little bit of a bifurcation. I'm seeing a leading edge of companies, and I'm talking about Global 2000 enterprises still right now, right? But I'm seeing a leading edge of companies that are fully leaning in to AI.
And I, you know, spoke with a Fortune 100 company that walked me through how they were thinking about things at a board level. And they were going to brand their company, which is a household name, decades and decades old, into an AI-native company. And I sat there and I was like, this is incredible.
This company, which has more employees than, you know, any other company I can think of, and has such a large distribution network, is now thinking about how to go AI native, thinking about how to do agents and other workflows for every single type of use case. And they're really letting a thousand flowers bloom, but actually investing behind every single one of those flowers to see where am I gonna get value.
The simple reason why: they think that in five to ten years their workforce is gonna look massively different, right? Like, they think the entire way that the work happens. I would say five to ten percent of companies that I speak to are in that bucket. And I think 90% of organizations that I speak to are in the bucket of kind of a wait-and-see posture.
Let's go ahead and run a few different experiments. Let's have a center of excellence. Let's start to experiment. Across both of them, though, the one thing that I see that's really interesting is that typically your first handful of use cases are the ones that take the longest. Use cases or agents one through five
take a lot of doing. There's a lot that you have to establish from a framework standpoint: how do you measure ROI, whose approvals are in there?
Corey: What tasks are they good at? Which do they struggle with?
Dev: But the fascinating thing, Corey, is how quickly organizations go from five to hundreds or thousands.
When we were thinking about what Rubrik's play with AI would be, we had a huge question of, when is this moment gonna come when people are going to have, you know, hundreds or thousands of agents deployed? And our thought process was, okay, well, is this gonna be 12, 24, 36 months away? We think it's going to happen, but what's the timeframe on it?
The thing that's really been interesting is, in a set of conversations we had, we think maybe one in four or one in five or so, people tell us, oh, that's already happening today. I'm actually seeing it kind of scale out. And again, it's only the, I'd say, leading quartile at best that are in that bucket, but you can kind of see those early adopter motions that we think the rest of the market's going to follow.
So I think the very brief summary for what we're seeing in the enterprise: we're seeing most people in that one-to-five experimentation route, but what we see is the people that graduate from it start to scale up very, very quickly.
Corey: And I think that there's a misunderstanding around a lot of it too, where if I'm reaching out to support, for example, from a company I do business with, I don't wanna talk to a chatbot.
However, it would be convenient if the human that I talk to has AI-assisted context on their side: oh, here are the previous support tickets he's opened. Oh, it looks like he might actually know how networks work, so is it plugged in might not be the first thing you lead with. For me, it starts to tailor the responses there.
And that is a neat, transformative customer experience. But so often it shortens to, oh, we can lay off our entire support staff, which generally is not happening. Every time companies have tried this, it seems to have gone disastrously.
Dev: Yeah. You said you don't wanna talk to the chatbot.
How many times have you been on a phone and been like, speak to an agent, speak to an agent, speak to an agent?
Corey: I'm already pissed off 'cause I don't want, I'm a millennial, I'm an older millennial. I don't want to talk to people on the phone. If I did, I would take out a personals ad. If I'm calling in, something has already gone off the rails.
Dev: Yeah. Corey, I'm a millennial. I'll say like, I don't wanna speak to somebody when I have a support issue almost, period. I just want it solved as fast as possible, right? Like, I don't wanna, I don't wanna speak to a chatbot. I also don't really wanna spend a long time explaining to a human what exactly is happening.
Corey: I will use a chatbot on my side because I wanna explain: here's the two hours of logging data that I have for this. Can you skim this down to a concise ticket that shows a skeleton reproduction case? I used to have to do that by hand. That was how I fixed things, and invariably the chatbot will often fix it for me while doing that behavior.
It's incredibly helpful. But the support ticket of it broke? Not the most helpful thing in the universe.
Dev: Let me connect that to what I think actually ties it together, which is, a lot of chatbots are, you know, interactive. The conversations almost feel like they're incentivized to keep you talking.
What I want is to get to the fastest action possible. The reason I wanna speak to a human today is that today, the person who can issue me my refund, or who can cancel the order, who can take that action, is a human. I fundamentally believe that organizations are gonna get to the point where those actions are gonna be done autonomously.
They are gonna be done by an agent or AI system. In order for that agent to do it, it needs to be connected with that tool. It needs to be connected to the database that manages order cancellations, as an example. And I think where you're gonna see that flip in value happen, from hey, the AI-driven chatbot is something that's driving me nuts, to oh, this was a way better experience, is when those chatbots start to get access to tools.
There are a lot of organizations where they already have that, but those are the type of organizations that have a kind of risk-forward posture today.
Corey: Rubrik is sponsoring this segment because they figured out something important. Protecting your AI isn't just about backing up databases. It's about protecting the entire supply chain: the training data that teaches your models, the source code that defines them, and the live data that they're acting on.
Miss any piece of that, and you're basically rebuilding from scratch when things break. Rubrik secures all of it in one platform across your multi-cloud circus. So when your AI inevitably does something creative you didn't expect, you can recover without losing months of work. That's the difference between resilience and resume updating.
Learn more at rubrik.com/sitc. There's a lot of, I ought to say, optimism in the space. There's a lot of hype as well, where companies are suddenly trying to wrap the exact same thing that they've been doing for 15 years in the AI story, despite the fact that they did not revamp their bestselling product to completely be AI native.
And a departure from what it previously was over an 18-month span like that, that would be in some cases lunacy. In some cases it's, oh great, we're an AI company. Like, that's great, I thought you were a bus company. But there's this idea of being able to roll this out in ways that are iterative and transformative and do lead to better outcomes.
Which I guess brings us to Rubrik on some level here. Why did they acquire you, and what are you doing these days now that you're the GM of AI over there?
Dev: Yeah, it's a great question. I'll answer it in two parts, right? The first is, well, why make the acquisition? Rubrik, our co-founder and CEO Bipul Sinha I think has said, is a company the market's been perpetually
confused about how to bucket, right? Like, it started off in a core of data backup, and then data protection and cyber resilience really became a larger market as we saw the rise of ransomware and other attacks. So the reason you need the technology went away from natural disaster, fire, flood.
I think that the kind of executive and founding team here have a long-term ambition in AI. The reason they have a long-term ambition in AI is the view that one of the most fundamental substrates feeding into AI models is the actual customer data that is backing it up.
And Rubrik is one of the largest pre-populated data lakes for every single enterprise customer that it backs up and protects. All of the most important data for our customers' business-resilience applications and day-to-day operations, everything a Rubrik customer has, is backed up by our underlying systems.
So the first observation that I think the Rubrik exec team had motivating the acquisition was, hey, we think data is gonna be a fundamental asset in AI, and we wanna have a unique right to play here. And then the second piece was how that fit into the Predibase platform and, you know, what we did as a technology.
What we did, just as the 30-second overview: our favorite customer quote is, generalized intelligence might be great, but I don't need my point-of-sale system to recite French poetry. We were targeting enterprise applications where you needed something narrow and specific done, and we wanted to help you build out that application.
So the thesis was that what Rubrik had in terms of the data substrate, and then more recently the identity component as well, which helps you understand who has access to what, could be paired with our platform. That gave you the ability to think about more tailored applications with models, and we could build a more resilient enterprise story.
And that was really, I think, 80% of the story. And I think 20% of the story is just the great cultural fit that also existed across the two teams. And then, fast-forwarding to today, what is it exactly that I'm doing here?
I like to kind of say that I don't have any,
Corey: I don't have any idea. I don't know either. Don't tell anyone. Kidding, kidding.
Dev: No, no, no. Actually, very true. I would say, when I come into a newer market, we had our customer base, and now I'm coming into a newer market, the Rubrik customer base, which tends to be IT and security in large parts of Global 2000 enterprises.
I think I walked in with the assumption: I don't know. I actually don't know what the right way is to take the product that we have and the product that Rubrik has and start to retrofit it towards what the future of this specific market needs. So the only way I know how to answer that question is to get out and talk to as many people and customers in the field as I can and hear from them what they're actually struggling with.
So in the last two and a half months, I've had a chance to speak with a little over 180 organizations and, you know, many, many different people inside of each of those organizations, typically IT and security, but also everywhere from the backup admin all the way to the person who's heading AI inside of those organizations, and understand what it is that they're actually struggling with.
And, you know, it's hard to go ahead and have a super open-ended conversation, so we came in with an initial pitch that we called Agent Rewind. Agent Rewind connected what's happening with AI to Rubrik's core around resilience. It was this idea that if agents are gonna do 10x the damage in one-tenth the time, what if we allowed you to revert a destructive agent action?
We started to have, you know, these conversations that were very genuinely not sales oriented, in part because the product was still too early to sell at the time that I was having these conversations.
Corey: I love those conversations. I just wish I could trust it a little bit more.
When people say, I'm not trying to sell you anything, and you have those conversations and it quickly transitions into a sales pitch, it's, yeah, but if I say it's a sales pitch, you won't take my call. And you think lying to me is gonna lead you to a better outcome? I digress. Please continue.
Dev: I had the opposite experience, because I think people were like, well, when can I go ahead and try this, and, you know, how much is this gonna be?
And I was like, we gotta slow you down a little bit there.
Corey: That's how you know you're onto something.
Dev: Yeah, exactly. But what you said is actually what we all started to feel. We started to feel we were onto something, and in two parts. One part was that when we were pitching Rewind, it felt differentiated, and it gave people a safety blanket, essentially, that they didn't really have with their AI systems. In fact, pretty consistently, one of the things that I heard was, "Oh, I didn't even know that this would be possible."
Just to give you the 10 seconds on what Agent Rewind is: Rubrik maintains backups of the most important production systems that an organization or enterprise has. Agent Rewind says, if an agent operates on those systems and makes a mistake, deletes something it shouldn't have, edits a field in the wrong way, we can allow you to correlate those agents' actions with the actual changes to the production system and allow you to recover in one click from that healthy snapshot. So that's the Agent Rewind pitch, and it gave people a lot of safety. That was one thing we learned.
But the second thing we learned was that this ability to go ahead and recover was still a little bit of a future problem, right? It gives them comfort that allows them to go ahead and do something that's coming. But in order to build Rewind, we also had to build a deeper understanding of the agent systems.
We call it the agent map and the agent registry. We had to know, well, what agents are running inside of your ecosystem, so we can figure out which ones we might need to rewind. And we need to know what are the types of things that they have access to, what actions can they take, so we can rewind a given action that it actually ended up taking.
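As a rough illustration of what an agent registry like the one Dev describes might hold, here is a minimal sketch. All the names, fields, and agents below are hypothetical, not Rubrik's actual data model:

```python
from dataclasses import dataclass, field

# Hypothetical agent registry: map each agent to the systems it can touch
# and the actions it can take, so a destructive change can be traced back
# to the agents capable of making it.
@dataclass
class AgentRecord:
    name: str
    systems: set[str] = field(default_factory=set)   # e.g. {"crm-db"}
    actions: set[str] = field(default_factory=set)   # e.g. {"update_field"}

class AgentRegistry:
    def __init__(self) -> None:
        self._agents: dict[str, AgentRecord] = {}

    def register(self, record: AgentRecord) -> None:
        self._agents[record.name] = record

    def who_could_have(self, system: str, action: str) -> list[str]:
        """Agents with both access to `system` and the ability to do `action`."""
        return [
            a.name for a in self._agents.values()
            if system in a.systems and action in a.actions
        ]

registry = AgentRegistry()
registry.register(AgentRecord("billing-bot", {"crm-db"}, {"update_field"}))
registry.register(AgentRecord("support-bot", {"ticketing"}, {"close_ticket"}))
print(registry.who_could_have("crm-db", "update_field"))  # ['billing-bot']
```

The point of the lookup is the correlation step: given a bad change on a specific system, the registry narrows down which agents could plausibly have made it.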
Corey: Yeah, it turns out natively there's no undo for an rm -rf or a DROP TABLE.
Dev: Natively, there's no undo for a DROP TABLE, especially when it's an MCP tool call made from some agent that sits upstream of it.
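To make the "rewind" idea concrete, here is a minimal sketch of the correlation step: match the timestamp of a destructive agent action against a backup timeline and pick the last healthy snapshot before it. This is illustrative only, assuming a simple list of snapshot timestamps, and is not Rubrik's actual API:

```python
from datetime import datetime

# Timestamps of known-good backups of a production system (hypothetical).
snapshots = [
    datetime(2025, 1, 1, 0, 0),
    datetime(2025, 1, 1, 6, 0),
    datetime(2025, 1, 1, 12, 0),
]

def healthy_snapshot_before(action_time: datetime) -> datetime:
    """Most recent snapshot strictly before the agent's bad action."""
    candidates = [s for s in snapshots if s < action_time]
    if not candidates:
        raise ValueError("no snapshot predates the action")
    return max(candidates)

bad_action = datetime(2025, 1, 1, 9, 30)  # e.g. when the agent ran DROP TABLE
print(healthy_snapshot_before(bad_action))  # 2025-01-01 06:00:00
```

Restoring from that snapshot is the "one click" part; the hard part the conversation highlights is knowing which action, by which agent, on which system, to anchor the lookup on.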
Corey: Oh God. We've all had the thing where it's just, "Oh, this file looks off, I'm just gonna overwrite it." Like, that was important! What are you doing? Important to the project, not, uh, not important in the grand scheme of things, but yeah, that's why we have guardrails and test this in constrained environments.
Dev: You'd hope, right? Like, that's why, before, I think people go through it. But it's actually hard to even test these, because in software testing you had unit tests, and you'd essentially say, okay, I wrote the subroutine to be able to do X. I kind of know that it's not going to come out and start reciting French poetry. But testing a non-deterministic system is meaningfully more challenging. I think Rewind gave people a safety blanket, but as we were talking to people, the other thing that really lit up with folks was this:
A lot of folks told us, "I actually don't even know what all the different agents running in my ecosystem are. I don't have that registry right now, and I don't have the ability to go ahead and map out what types of tools and actions they can use." And so we took that, as well as the first-party challenges that Rubrik was dealing with ourselves as we rolled out agents as a company, and we formulated it into what our product vision and thesis ultimately became.
Which, to your point, Corey, I think you said earlier: a lot of these companies are a paperclip company, and now they're like, "Hey, we're an AI company," and, you know, have you changed the core product of the paperclip or not? We kind of took a view that we needed to launch a net-new product, really. So there's the Rubrik Security Cloud, which is the thing that Rubrik has really built its core business around.
And we just announced, most recently, our new product, which is called the Rubrik Agent Cloud. The Rubrik Agent Cloud comes with three key pillars that are based on these conversations we had. The first pillar is monitoring and observability. I often think about monitoring and observability as that base layer you need to have.
You need to know what kind of agents are running. You need to be able to understand a little bit about what the blast radius is. The second pillar is governance and policy enforcement with guardrails, something I see a lot of people talk about but haven't seen a lot of really great software solutions for.
So we talk about guardrails, but what does that actually mean to be able to do at a systems level? That's what we solved, and one of the reasons we decided to solve it is that I saw what it looked like to do AI governance inside of Rubrik first-party. I sat on our AI governance committee, and things were done via documents that legal wrote, Google Sheets, you know, sort of the best of intentions, but the hardest things to enforce in practice. So we wanted to platformize that, and that's our second pillar. And then the third pillar is remediation. Rubrik has always had an assume-breach mentality, and I think everyone listening should probably assume something is going to happen if you deploy agents at the level of promise that they hold.
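The idea of turning policy out of documents and spreadsheets into something machine-enforceable can be sketched very simply: express the policy as data and check every proposed tool call against it before execution. Everything below is a hypothetical illustration, not Rubrik's implementation:

```python
# Hypothetical declarative policy: which tools each agent may invoke.
# In practice this would live in a governed config store, not inline.
POLICY = {
    "billing-bot": {"allowed_tools": {"read_invoice", "update_field"}},
    "support-bot": {"allowed_tools": {"read_ticket"}},
}

def enforce(agent: str, tool: str) -> bool:
    """Return True only if the agent is known and the tool is explicitly allowed."""
    rules = POLICY.get(agent)
    return rules is not None and tool in rules["allowed_tools"]

# Deny-by-default: unknown agents and unlisted tools are both refused.
print(enforce("billing-bot", "update_field"))  # True
print(enforce("support-bot", "drop_table"))    # False
```

The deny-by-default shape is the key design choice: a policy a human forgot to write grants nothing, which is the opposite of a best-intentions document.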
Corey: Defense in depth, especially when the attack is coming from inside the house.
Dev: Exactly, defense in depth across the hyperscaler capabilities, but also the single control plane that gives you access across the different agents. So that's what I'm up to these days: we're working on Rubrik Agent Cloud. We started with this idea that we should be able to rewind these actions, and then realized there's really a need for a broader control system here.
And that's what we've architected.
Corey: One last question I have, because there are few people that can answer this question honestly without sounding, I guess, like they're just imagining and prognosticating here, but you've done it: what lessons did you learn by starting a machine learning company in an era before Gen AI was all the rage?
On some level, I feel like it has to be frustrating, 'cause you've done the work and lived in this space, and now every grifter out there claims to be an AI expert.
Dev: It doesn't bother me so much, because, you know, my career in machine learning started well over a decade ago.
And since I started, I always felt like AI was, candidly, an overhyped technology. The last two years is the first time I've felt like it's underhyped, which is insane to say, because AI has had such a hype cycle come in over the last two years. The reason I say it's potentially underhyped is that I think most organizations haven't gotten to that value discovery point yet, because we haven't figured out the risk components around those models, which is hopefully the easier part compared with figuring out how to make models superintelligent.
But in terms of advice, or how I think about it: the thing that we've often struggled with in machine learning and AI, conventionally, is really two things. The first is tying ML directly to value. I worked with healthcare systems early on that were looking to build models that could help expedite radiology processes.
This seems like a very straightforward use case that would be tied to value, but it was a data science and machine learning team doing it in a silo. There's a big question of, okay, can the actual physician or operator, or the insurance payer, whoever needs to use this, actually use the outputs of that?
So I think the very first thing, and it's a startup lesson that also extends to everyone building toward AI, is: get to the dollars, or the cents, as quickly as possible.
Corey: We'll just sell ads later. It'll be fine.
Dev: Yeah, well, or get to value. If you're a consumer business, get to engagement, right? I would say go for retention and users. If you're a consumer business, the ultimate source of truth is, are people spending time in your property? You will figure the rest out later. If you're a B2B business, the ultimate source of truth is: does somebody trust me enough to legitimately go to their boss and sign a purchase order?
Corey: It doesn't have to be for a lot of money. Just validate. Will they transfer a dollar from their bank account to ours?
Dev: My hot take is, in a consumer business, your job is to make users feel delighted, 'cause your source of truth is, do they refer you to a friend? My hot take for an enterprise business is that your job is to make your champion uncomfortable.
'Cause you want them to feel the discomfort of going to their management or their boss. No one wants to ask for money, and honestly, people don't wanna go through procurement and security and vendor audits. You want them to feel convicted enough that it's worth the discomfort of going through those processes.
'Cause that's when you know that you're onto something. So yeah, the very first piece of advice, the longer chunk of it, is: tie it directly to value and know what truth looks like in your space. And the second is: recognize the importance of small and quick wins.
Conventional machine learning and data science looked like, well, there was a joke that ML was 90% about cleaning data and 10% about complaining about cleaning data. There was a long process you would take before you started to realize value. And I think the other piece is: recognize time to value.
Gen AI has made it faster than ever to build a prototype or an agent. I actually think building agents now is the easy part.
Corey: No, now the problem with the prototype agents I've built is: oh, there's a new Claude Sonnet model out there. Oh crap, where do I have to go and update all the model strings? And I get to play whack-a-mole in my project directory of where did this all live...
Dev: ...and hopefully nothing I built for scaffolding or observability breaks around it.
Corey: And the right answer, frankly, is, oh, you just instantiate Claude Code and have it do it, and just, you know, keep hitting at it. It'll be fine.
Dev: And my hope, Corey, is the right answer will become: you instantiate Rubrik Agent Cloud. There are great models out there, great agent builders out there.
But if you wanna make sure that they all work across your organization, Rubrik Agent Cloud will monitor, govern, and help you remediate anytime something goes wrong. That's really what I'm excited about.
Corey: If people wanna learn more, where's the best place for 'em to find you?
Dev: Yeah, it's a great question.
You can actually go to our website, and there's an agent subpage that'll tell you about it. We've now started to post the webinars online, and also demo videos. Just go ahead and reach out to us. Right now we're in an early access program, so we're starting to selectively onboard customers, and we'd love to chat further.
Corey: And we'll of course put links to that in the show notes. Thank you so much for taking the time to speak with me. I appreciate it.
Dev: Corey, really enjoyed it. Thanks for having me on.
Corey: Dev Rishi, GM of AI at Rubrik. I'm cloud economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this episode, please leave a five-star review on your podcast platform of choice, along with an angry comment written by an agent that has gone completely outside the bounds of what guardrails you thought were there.