The Intersection of AI, Security, and Cloud with Alyssa Miller

Episode Summary

Corey sits down with Alyssa Miller, the CISO at Epiq Global, for a discussion that cuts through the noise of the technology world in this episode of Screaming in the Cloud. Alyssa celebrates her personal journey to becoming a licensed pilot and shares invaluable insights into the current state and future of AI, cloud computing, and security. This episode ventures beyond the typical tech hype, offering a critical look at the realities of AI, the strategic considerations behind cloud computing at Epiq Global, and the importance of explainability in AI within regulated industries. Additionally, Alyssa and Corey highlight the cyclical nature of tech hype, the misconceptions surrounding AI's capabilities, and the impact of startup culture on genuine innovation.

Episode Video

Episode Show Notes & Transcript

Show Highlights:

(00:00) Introduction
(01:33) Corey celebrates Alyssa Miller getting her general aviation license.
(04:10) Considerations of cloud computing at Epiq Global.
(06:45) The hype and reality of AI in today's tech landscape.
(11:49) Alyssa on the importance of explainability in AI within regulated industries.
(14:21) Debunking myths about AI surpassing human intelligence.
(19:30) The cyclical nature of tech hype, exemplified by blockchain and AI.
(24:58) Critique of startup culture and its influence on technology adoption.
(29:01) Alyssa and Corey discuss how tech trends often fail to meet their initial hype.
(31:57) Where to find Alyssa Miller online for more insights.

About Alyssa:

Alyssa is the CISO at Epiq Global, where she directs the company's security strategy and connects corporate security objectives to business initiatives; she previously served as Business Information Security Officer (BISO) for S&P Global Ratings. Additionally, she shares her message about evolving the way people think about and approach security, privacy, and trust through speaking engagements at conferences and other events. When not engaged in security research and advocacy, she is also an accomplished soccer referee, guitarist, and photographer.

Links referenced: 

Transcript

Alyssa Miller: Yeah, and this is, I mean, I hate to minimize it in a way down to this, but it's the same thing we go through with every damn new technology. We get all excited, we scream it and proclaim it's going to fix the whole world's problems, it's going to be revolutionary, and then we start to figure out, okay, this is what it can really do.

Corey Quinn: Welcome to Screaming in the Cloud. I'm Corey Quinn. It's been a couple of years since I caught up with today's guest. Alyssa Miller is now the CISO, or however you want to mispronounce it, at Epiq Global.

Alyssa Miller: How are you? I'm doing well. And as long as you don't say CISO, I think we're okay. I've heard that one before and that one gets a little weird.

Corey Quinn: Yeah, you were, you were a BISO previously at S&P Global. Now you're a C-I-S-O, which tells me your grades are worse. I mean, how, how does that, what does the transition there look like?

Alyssa Miller: I went from a B to a C, I must be worse at my job. Uh, some might argue I am. No, uh, no really exciting stuff, actually, 'cause the, the BISO role, when I was in that role, I kind of knew that that was sort of like that, that last step before I'd probably land in a CISO role somewhere. Happened a little faster than I thought it was going to, but, uh.

You know, when the right opportunity comes knocking at the door, you kind of got to just jump in with it and go, so that's kind of the story of my career.

Corey Quinn: We'll get into that in a bit, but I want to talk about something that you've been fairly active at on the socials for a while now. Specifically, getting your general aviation pilot's license.

Alyssa Miller: Oh yeah, all this stuff up over here. Yeah, um, that, that was a lifelong dream, honestly. And I guess it was one of those dreams that I never really even thought of as a dream, because I just didn't think it would ever happen. When I was a kid, we lived on the northwest side of Milwaukee, and same house my entire life, my entire childhood, right?

But we were like, maybe a mile from this local municipal airport. And so those training planes were flying over all day, every day.

Corey Quinn: I spent summers out in Watertown. I know exactly what you're talking about. They're all there.

Alyssa Miller: All right. Yeah. So you'd like, you'd be driving down the road and they're like landing, you know, 35 feet over the top of your car. And you're like, wow, pretty cool, but never thought it was going to happen. You know, didn't have money back then. Then I, when I had money, I was married with kids. Now I'm divorced. My kids are out of the house. It was kind of like, well, kind of now or never. So yeah, uh, back at the end of 2022, I finally, you know, completed all the necessary training, passed my checkride, and I've been flying like crazy ever since.

Corey Quinn: I looked into doing it myself. It's always been a backburner item for me, and seeing you doing it, like, it's not the, well, if she could do it, how hard could it be? None of that. But it's like, okay, if someone's actually doing it, let me look into it.

Alyssa Miller: I mean, it's one of those things, I think, for a lot of people. It kind of sits there, partially because I don't think people really realize how attainable it is. Like, that's kind of what shocked me, too, is like, you know, oh, like, literally anybody can go to a local airport that has a flight school and take a discovery flight and find out if it's something you actually want to do, and then if you do, then you just start doing it. Taking lessons. And, you know, the only thing that keeps you from getting there is the FAA and their, their medical standards sometimes can be problematic for some folks.

Corey Quinn: Funny that you mentioned that. Because as I was looking into this, I realized that my ADHD acts as a disqualifier for this. And I was talking to someone who started coming up with this.

Well, there are ways around that. It's like, no, no, no, no, no. You misunderstand. Because as soon as I really thought about it, about what actually goes into being a pilot and the fact you have to follow a rote, boring checklist every time without deviation, et cetera. Oh, I should absolutely not be flying a plane. That's, that's a really good point, but yeah, so in my case, yeah.

Alyssa Miller: Yeah. Cause I mean, it's, it is that, that very systematic execution of a checklist every time that helps keep you safe. So yeah, I, I definitely get that, that perspective too.

Corey Quinn: So let's get back to the cloudy stuff. It's not the literal clouds, but more the metaphorical ones.

So, what are you doing mostly these days? Are you, are you seeing your workloads moving to the cloud as everyone tends to? Are you repatriating left and right, like everyone in the tech industry tells me is happening, but I see no evidence of? Where do you fall on that particular spectrum?

Alyssa Miller: So that's been kind of interesting, right?

Because at S&P Global, our entire environment within my division was 100 percent cloud. We were invested in functions and containers. In fact, we had very few EC2 instances anymore, right? It was all, all super ephemeral wonderfulness. I get here at Epiq and we are the most super hybrid, right? So we've got on-prem, we've got multiple clouds, you name it.

So, you know, there is kind of that motion of slowly we're seeing more and more in our cloud environments, but what I actually really appreciate is there's not a, let's do a cloud transformation for the sake of doing cloud transformation. Right? It's, it's one of those things as new products are being launched and it makes sense to launch it in the cloud, great, but, you know, we do eDiscovery work.

We've got a ton, a ton, ton of data. And, you know, trying to put all that storage in the cloud, we'd go broke. You know, I mean, seriously, it's just like, there's so much data that we hold in on prem, you know, storage infrastructure that, you know, if I started moving that into, you know, any type of cloud storage, I can't even imagine what those cloud costs would look like.
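To put a rough, purely illustrative number on that intuition, here is a back-of-the-envelope sketch. The data volume is an assumption rather than Epiq's actual footprint, and the per-GB rate approximates S3 Standard's highest-volume list tier.

```python
# Back-of-the-envelope: what "all that eDiscovery data" might cost in cloud
# object storage. The volume and rate below are illustrative assumptions.

PB = 1_000_000  # gigabytes per petabyte (decimal, as cloud providers bill)

data_gb = 20 * PB            # assume 20 PB of on-prem eDiscovery data
rate_per_gb_month = 0.021    # roughly S3 Standard's highest-volume tier

monthly = data_gb * rate_per_gb_month
print(f"Storage alone: ~${monthly:,.0f}/month, ~${monthly * 12:,.0f}/year")
# Storage alone: ~$420,000/month, ~$5,040,000/year
# ...before request charges, egress, or retrieval fees.
```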

Corey Quinn: Oh, I can. I've seen companies do it. And that's, I think you're right. There's this idea that you need to be in cloud, that's almost like a mind virus that's taken over. And I don't, I don't think it holds water. Conversely, I also don't think that, oh, wait, we're going to migrate out of the cloud now because we'll save a bunch of money on that.

I don't see that happening either. I see workloads where people could do basic arithmetic and realize, oh, moving to cloud is probably going to be financially ruinous for us. We're not going to do it. Or, If it was going to be expensive, they did it because of the value it was going to unlock. But there's some stuff that was never looked at seriously when it came to cloud.

I am seeing proof of concepts not pan out. No, for whatever reason, it's not going to be a cloud workload for us. I am seeing people shifting workloads as they have different environments, and they think that it might be a better fit somewhere else. And I'm seeing more cross cloud stuff as people start using, you know, a cloud provider that understands how infrastructure works, but also want to do some AI nonsense with someone who knows how AI nonsense works.

And in AWS's case, those are not the same thing. So, yeah, you're going to start seeing more cross

Alyssa Miller: You said that word! That word!

Corey Quinn: Yeah!

Alyssa Miller: Oh my god, have we heard enough about AI yet? Uh, speaking, speaking of technologies that people are adopting for the sole purpose of adopting the technology and not because it fits any use case, AI, top of the list, right?

There are so many, I've seen it in every single industry. Everybody wants to say they're using LLMs and generative AI and all these magical words that, you know, at the end of 2022, people had never heard of before, and suddenly they knew all about it, right?

Corey Quinn: I got a pitch this morning about this podcast from some rando saying that, oh, well, you should go ahead and feed the transcript of the podcast into GPT-4 through our service and it'll write a blog post.

Okay, first, what do I need you for if I'm going to do that? Two, what, why should someone bother to read something that I couldn't even be bothered to take the time to write? It's a, it's a complete lack of respect for the audience's time when you start down that path.

Alyssa Miller: Totally. I mean, I think about it even, you know, there's, I've seen use cases for HR teams to use ChatGPT or the LLM of your choice to send rejection emails.

Like, wow, I mean, cause those, we all know, those rejection emails aren't cold enough as it is. Now we're gonna have some AI chat bot write it, because we can't even be bothered to do that.

Corey Quinn: I have seen people who have done something I think is kind of neat. Where they'll wind up taking like an email that's overly terse, or they want to make sure it comes across the right way.

And then they'll launder it through ChatGPT. Then I feel like at some level someone at the other end is going to get this wordy thing. Like, can you just, like, distill this down? It's like a protocol encapsulation, decapsulation on both ends, where it now acts as an API between humans communicating.
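As a minimal sketch of that human-to-human "API," assuming the openai Python SDK (v1 or later), an API key in the environment, and GPT-4 as an example model, the encapsulation and decapsulation ends might look like this:

```python
# A tongue-in-cheek sketch of the "API between humans": expand a terse note
# on the sender's side, summarize it back on the receiver's side.
from openai import OpenAI

client = OpenAI()

def _ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def encapsulate(terse_note: str) -> str:
    """Sender side: wrap the blunt message in polite, wordy prose."""
    return _ask(f"Rewrite this note as a polite, professional email:\n{terse_note}")

def decapsulate(wordy_email: str) -> str:
    """Receiver side: strip the padding back off."""
    return _ask(f"Summarize this email in one blunt sentence:\n{wordy_email}")

if __name__ == "__main__":
    wire_format = encapsulate("Ship the report by Friday.")
    print(decapsulate(wire_format))  # ideally close to: "Ship the report by Friday."
```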

Alyssa Miller: I love it because that's probably already happening because yeah, we, we've done it. I'm literally in the middle of a whole, you know, everybody has to do these quarterly access reviews. You know, we're in the middle of that and I'm escalating to people who aren't completing their reviews and blah, blah, blah.

But so I had my BISO create a, an escalation email and, you know, she fed it through, uh, I think Copilot, and she did two versions. One was like the nice version and one was the very, you know, in-your-face authoritarian, you gotta get this done now kind of thing. And it was, it was fascinating to look at that.

Of course I said, use the nice one, not the mean one. I'll send the mean one later. You know, it is funny because, yeah, somebody will also take that now and pop it through: hey, summarize this for me. It's right there, just hit it, you know, Copilot, summarize this for me. So it's kind of a, I never even thought about it that way, but that is, that is almost sadly comical.

Corey Quinn: It is. And part of the funny, funny aspect of it too, is that I find it's great for breaking through walls of creativity, where I'm staring at a blank screen, I need something up there, great, give me a badly written blog post you think is something I would write, then I can mansplain, angrily correct the robot, fine, that's useful, but I think where people get into trouble is when they start looking at using it in a scaled out way, where there's not going to be any human review of what goes out, which explains this proliferation of nonsense chatbots on everyone's website that just tell lies.

Now, what I love is there was a recent case with Air Canada, where nope, they were held to the terms that their idiot chatbot made up on the fly to answer someone's question about a bereavement fare. Awesome! I think that when you start being held responsible for the things your lying robot tells people, you're going to be a lot more strict about what the lying robot is allowed to tell people, and that's kind of the point.

Alyssa Miller: I actually, I'll have, I have to admit, I am really glad that I kind of was ahead of the curve on a lot of this when I was at S&P Global. So we did, I was at S&P Global Ratings, so we did all the financial ratings, and this was before ChatGPT exploded on the scene and everybody suddenly understood,

supposedly, what LLMs were and what generative AI meant. And we were already looking at how we could use artificial intelligence in ratings, in crafting ratings, and the core concept that kept coming up was the idea of explainability. Right? Because you're talking now about a heavily regulated industry.

The rating, you know, financial ratings, of course, is, you know, the SEC's got a lot to say, as they should, and, you know, if you make a credit rating determination based on AI, if you can't go back and explain how it got there, how did it make that decision? That's, you know, two things. One, you mentioned, like, the hallucinations and that whole concept, but there's also even just that we all understand, hopefully by now, the inherent biases that we're unable to eliminate from our artificial intelligence systems.

So it's like, if we're going to leverage this, we need to go back and, and have the explainability of how decisions are being made. So we can ensure there isn't bias.
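A minimal illustration of what explainability can mean in practice, assuming a simple linear model where each feature's contribution to a decision is just its coefficient times its value. The feature names and data here are invented purely for illustration, not drawn from any actual ratings model.

```python
# Minimal illustration of "explainability": with a linear model, each
# feature's contribution to a decision is coefficient * value, so you can
# answer "why did it decide that?" after the fact.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["debt_to_income", "payment_delinquencies", "years_operating"]

# Toy training data: rows of feature values, 1 = downgrade, 0 = no downgrade.
X = np.array([[0.9, 4, 2], [0.2, 0, 15], [0.7, 2, 5], [0.1, 0, 20],
              [0.8, 3, 3], [0.3, 1, 12]])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

applicant = np.array([0.75, 3, 4])
decision = model.predict([applicant])[0]

# The "explanation": how much each input pushed the score up or down.
contributions = model.coef_[0] * applicant
for name, value in sorted(zip(features, contributions), key=lambda p: -abs(p[1])):
    print(f"{name:>22}: {value:+.3f}")
print("decision:", "downgrade" if decision else "no downgrade")
```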

Corey Quinn: Regulators have caught on to using AI as bias laundering.

Alyssa Miller: So now, when, you know, I'm at Epiq and suddenly, you know, I remember very specifically, you know, November 2022, suddenly all this is going nuts and everybody's talking about it.

And a few months later, everybody's, you know, where does this make sense in our, our, you know, product set? It was great to have already been down that road and say, look, you know, if you're going to make decisions about eDiscovery information, it's got to be explainable. You can't exclude something from eDiscovery or include something into eDiscovery without specifically being able to say why.

Right? Someone's got to be able to explain it at the end of the day if it comes up in court. So, you know, it was, it was one of those things where it was like, it was great to be on the forefront of that, because I've already seen, you know, organizations that have gone headlong into various implications, implementations rather, of AI without understanding that particular concept.

And I'm sure people in my organization are tired of hearing me use that word, explainability, defensibility, whatever you want to call it. Like, but it's so critical if we're going to leverage it. Kind of to your point before about, you know, breaking through that creative barrier: I want to create a blog, but I'm going to go back and read it and then figure out how to make it actually

you know, valuable and sound like it came from me and everything. You know, it's kind of that same concept. Use it to do some work, but you gotta be able to go back and understand what it did, and fix it where it did it wrong.

Corey Quinn: Because if you miss that, you're in trouble. There's this idea of, we're going to make the robot automatically say something, and we're going to assume that it's going to get everything right, because, well, how hard could it really be?

I don't know. That strikes me as a disaster just waiting to happen.

Alyssa Miller: So you know where that comes from? And this is, this is where I'm going to, I'm going to really tick off a lot of the AI people. I saw a video this past week of some guy who was trying to give a basic definition of artificial intelligence.

And he talked about making decisions, or making this, you know, consciousness or whatever, that's better than a human brain. And I'm like, that's not what AI is. AI is never, never. Well, I mean, okay, I won't say never. That's, that's way too absolute. It depends on the human in question. We're way off before AI is going to be better than a human brain.

'Cause for one thing, who's creating it? Humans!

Corey Quinn: With the training data, yeah. It's great at creativity, it's great at doing things rapidly, but it's basically just a parrot on some level. Where

Alyssa Miller: It's still a computer at the end of the day. It is still ones and zeros at the end of the day.

Corey Quinn: A lot of them, and they're very expensive.

Alyssa Miller: Or maybe we go into quantum computing. No, it's not ones and zeros, it's all sorts of... that's a topic for a whole other episode. But no, seriously, it's, you know, and I think that's, that's the core of the issue: people are expecting that artificial intelligence is somehow going to be better than humans when humans are the ones creating it.

Well, how does that work? How do we even know what better is? How do we even know? You know, what is better? What is better than the human brain? We can't say because it's outside of our brain. I mean, seriously, come on. So, we can focus on key aspects, like what are frustrations we have with the human element of things we do, and we can try to address those.

But it's never going to be, in my opinion, and I will use never this time, I don't see it being like this superior decision-making entity over the top of a human brain, because every bit of intelligence that's built into it is the result of things that originated in a human brain.

Corey Quinn: Part of the problem, too, is that if you start really digging into a topic you know an awful lot about with Gen AI, you start noticing that it gets things remarkably wrong, remarkably quickly.

And there's a phrase, Gell-Mann amnesia, where, uh, when you read a mass media article about something you know a lot about, you find all the mistakes, but you forget that, and you take everything they say about things you don't know about at something approaching face value.

And we're seeing that on some level with Gen AI in the same approach, where there's this, this overall overarching idea that it somehow, it knows all these things super well, but we can prove, objectively, that it doesn't know these things in depth. What it does expose, if you want to look at it from a certain point of view, is the sheer amount of bullshit that exists in the world today.

Alyssa Miller: Oh, totally. Absolutely. And what's really interesting, I don't know if you've seen these articles yet, there's at least one study I was reading, they've already proven that AI forgets. For the same reason as the human brain does. It is in there somewhere, but only so many penguins are going to fit on that iceberg.

And so as you keep adding penguins to the iceberg, it's knocking other ones off. Now it can go back and retrieve that, but ultimately, You look at the way AI is designed to work, it's, you know, it's, you know, the concentration of information around a specific concept, and it's the recency of that information.

So the same way we forget things because we keep knocking those penguins off the iceberg as we're adding new ones, AI is starting to do the exact same things. It's looking at what's the most recent information, what have I seen most often in my model, what have I been exposed to the most? Which is of course where bias comes from as well.

And then that's what it's leveraging. And it may be forgetting, or in its case not accessing, that other piece of data that was actually the correct data about that particular topic. And this is, that's where it's like fascinating to me that this is actually occurring. So, you know, again, back to, are we creating anything that's better than the human brain?

Well, if it's forgetting just like the human brain forgets. Well, I know, okay, um, yeah, we've pretty much failed in that pursuit already.
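The "penguins knocked off the iceberg" effect has a concrete analogue in how chat front-ends manage a model's fixed context window: once a conversation outgrows the token budget, the oldest turns simply get dropped. A self-contained sketch, where the four-characters-per-token estimate and the budget are rough assumptions:

```python
# Rough sketch of context-window eviction: when the running conversation
# exceeds the token budget, the oldest turns fall off first.
# The 4-characters-per-token estimate and the budget are rough assumptions.

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # crude heuristic, not a real tokenizer

def trim_to_budget(messages: list[str], budget_tokens: int) -> list[str]:
    """Keep the most recent messages that fit; drop the oldest overflow."""
    kept, used = [], 0
    for message in reversed(messages):       # newest first
        cost = estimate_tokens(message)
        if used + cost > budget_tokens:
            break                            # everything older is "forgotten"
        kept.append(message)
        used += cost
    return list(reversed(kept))

history = [f"turn {i}: " + "blah " * 50 for i in range(40)]
window = trim_to_budget(history, budget_tokens=500)
print(f"{len(history)} turns in history, {len(window)} still in context")
```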

Corey Quinn: It leads to weird places and unfortunate timing. It's, uh, I don't know what the right answer is on most of these things. Truly, I don't. I am curious to figure out where, where we wind up going from here.

And I think that there's going to be a lot of, I think, things that we learn mostly by watching others get it wrong. The thing that blows my mind is it's such an untested area where, okay, you now have a magic parrot that can solve for a number of rote, obnoxious tasks super quickly. I've had it do things like write an email

canceling my service with this company, et cetera. And then it just, it winds up spitting it out. I can copy and paste and call it good. As a CISO, I imagine you'll appreciate the fact that I don't ever put things like account numbers into these things. I can fill that in later. I'll use placeholders for names. I don't put in specific company data.

And I don't have to worry about who's training on what at that point. But apparently that's, that's not a common approach. But then, okay, it speeds those things up and it's great. But all these companies right now are talking about how it's going to revolutionize everything. There's hype, just like the blockchain at its worst, but at least here there's some demonstrable value that can be found.

It's just a question of how much vis-à-vis the hype.
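The placeholder habit Corey describes a moment ago can be automated: scrub obvious identifiers out of the prompt locally, keep a mapping, and restore them in the reply. A rough sketch, with deliberately simplistic patterns that stand in for what a real data-loss-prevention tool would do:

```python
# Sketch of the "placeholders before pasting" habit: swap obvious
# identifiers for tokens locally, keep a mapping, and restore them in the
# model's output afterward. The regexes are deliberately simplistic.
import re

PATTERNS = {
    "ACCOUNT_NUMBER": re.compile(r"\b\d{8,16}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> tuple[str, dict[str, str]]:
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            token = f"[{label}_{i}]"
            mapping[token] = match
            text = text.replace(match, token)
    return text, mapping

def restore(text: str, mapping: dict[str, str]) -> str:
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

prompt, secrets = redact("Please cancel account 1234567890 for pat@example.com.")
print(prompt)  # Please cancel account [ACCOUNT_NUMBER_0] for [EMAIL_0].
# ...send `prompt` to the chatbot, then run restore() on the reply.
```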

Alyssa Miller: Yeah, and this is, I mean, I hate to minimize it in a way down to this, but it's the same thing we go through with every damn new technology. We get all excited, we scream it, we proclaim it's going to fix the whole world's problems, it's going to be revolutionary, and then we start to figure out, okay, this is what it can really do.

You know, blockchain is a great example. Blockchain failed. I mean, I shouldn't say it failed, right? It didn't fail and it's in use. But it didn't become this big, overarching, revolutionary, changing-the-world thing that everybody said it was going to. And the reason why was it was super complex, and where we thought it was going to impact us the most were tasks that didn't need that level of complexity.

Sure. We created cryptocurrency off of it, which, okay.

Corey Quinn: Yay.

Alyssa Miller: We've still yet to see where that's going to end up, but it, it's not going where people who started it thought it was going to go.

Corey Quinn: Real soon, it's been almost, what, 15 years, and we're still waiting for it to, okay, demonstrate like, demonstrate the value, please?

Alyssa Miller: Well, I mean, just the volatility of it. We've not seen that relax yet. And you know, and that's, you, you look at like currency markets and if there's one thing investors and anyone else values in currency markets, It's the relative stability of those markets.

Corey Quinn: Right. So it becomes a speculative asset. And yeah.

Oh yeah. We can rant about this and wind up with the most odious people in the world in the comments if we're not careful.

Alyssa Miller: And we've also seen how manipulatable it is, right? I mean, granted we can, you know, we've, we've seen people, you know, countries, nation states manipulate, you know, monetary values and things like that to some degree, but it's always open and it's always clear what's happening. And there are countermeasures for that.

Corey Quinn: The problem, too, is it's the same people, though, that were hyping cryptocurrency are now pivoting to hyping AI, and it made no sense to me until I realized what's going on. They're NVIDIA's street team. They don't care what it is you're doing with them, they just want you to buy an awful lot of NVIDIA GPUs.

Alyssa Miller: I can see that! You know it would actually make a lot of sense, because who is benefiting from all this? Yes, the GPU makers, NVIDIA being probably the biggest,

Corey Quinn: The only one of relevance. They're now a $2 trillion company, the last I checked. They're larger than Amazon.

Alyssa Miller: So all those bit mining, you know, rigs just turned into AI rigs.

And before, and you know, and then of course you throw in there a little measure of deepfakes, which we, you know, heard that was the opposite, right? That was going to revolutionize the world because it was gonna make everything evil and horrible and terrible. Well, that we started talking about in 2018.

Six years later, I'm still waiting, you know, we hear these little anecdotes: oh, somebody got breached by a deepfake audio.

Corey Quinn: I was reading an article recently about someone who wound up, uh, like, they start off by telling us how intelligent and sophisticated they are. Okay, great. And then the story continues on and they wind up, like, getting the CIA involved and Amazon involved, supposedly, to the point where they're now taking $50,000 in cash out of a bank, putting it in a shoebox and handing it to a stranger in a parking lot.

It's like, okay, there are several points here at which something should have flagged for you that there might be something weird going on here.

Alyssa Miller: And that's exactly my point. It's like, you know, the deepfake wasn't the issue. Deepfake videos, okay, we all know they exist. If deepfake videos did anything wrong, anything that was a threat to society, I think it's that they enabled the tinfoil hat crew to come up with all new levels of conspiracy theory.

Like, any video I see, I can just claim that it's deepfake and I don't have to believe that it's true. And there are people working on that problem and there are solutions for it, but the problem, the fact of the matter is, it's just not that widespread of a problem that anyone needs a technology solution to solve it.

Corey Quinn: They don't. And instead it becomes something that people are looking to slap into everything. Like, how many startups have been founded where it's quite simply just, it's basically an API call to OpenAI, and then it does a little bit of text manipulation and it comes back, and then they're, then they're surprised and terrified when an OpenAI feature release puts them out of business.

How many companies were tanked when the only thing they did is they taught Chad Gippity how to talk to PDF files? Yeah, sure. Like it's never going to be something that they figure out on their own.

Alyssa Miller: Oh my gosh, Corey, you just touched on something now. You ready? I'm just, sorry, this is me getting up on my soapbox now because what you're talking about is startup world in general.

And now I'm really going to piss people off. So I'm, I'm, I'm hearing all the VCs right now saying if she ever tries to start a company, screw her. But here's the reality of startup culture right now. Startup culture is nothing more than find some minimally, you know, useful, uh, incremental improvement on existing stuff.

Declare it a revolutionary new functionality that nobody's doing and it's, it's the greatest thing, and then produce a product on it. Get enough suckers to buy it in three years and then sell. Right? I mean, it is a, it is a cookie-cutter approach, and it's why we have a, just a metric ton of God-awful, you know, different acronyms and things that we have to have all these products for.

And the reality is, these are just incremental features that your existing tool set could just build in. Or maybe you could actually, I don't know, innovate a little and create it yourself to work with your tool sets. And it happens across the board. So yeah, we're seeing it with AI right now. And what happens?

Oh, yeah. Hey, we got this cool new whiz-bang AI blah blah blah. Well, yeah, how's it work? Oh, well, it's based on ChatGPT. So literally all you're doing is, like, scrubbing my prompts and then feeding it to ChatGPT and, you know, parroting back ChatGPT's response to me. Why do I, again, to your point before, why do I need you for that?
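For a sense of just how thin that wrapper can be, here is a hypothetical "talk to a PDF" product in its entirety, assuming the pypdf and openai packages are installed, with a placeholder file name and model. When the first-party product ships the same thing as a feature, this is the whole company that disappears.

```python
# The entire "product": pull text out of a PDF and forward it, plus the
# user's question, to a hosted model. The file name and model name are
# placeholders; an API key is assumed to be in the environment.
from openai import OpenAI
from pypdf import PdfReader

def ask_pdf(path: str, question: str) -> str:
    text = "\n".join(page.extract_text() or "" for page in PdfReader(path).pages)
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Answer using only the provided document."},
            {"role": "user", "content": f"{text[:12000]}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(ask_pdf("contract.pdf", "What is the termination clause?"))
```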

Corey Quinn: I've had a bunch of coworkers, like, there's some people that I've met over the years, where it's, yeah, their job is going to be imperiled by what is effectively an automated bullshit generator. And all of the world is bullshit, and an awful lot of what they did was absolutely bullshit. Um, it's, their job function just potentially winds up getting gotten rid of, but not well.

Alyssa Miller: Right. Not well. And also, okay, what is it in your job function that was so easily replaced by this that then you couldn't expand your expertise, right? And this is the other part. I, I, yes, I do have empathy. God, for anyone who, who feels, you know, that their job is legitimately in threat from this. But it's also like, okay, we've seen that throughout the course of time, right?

That's just, that's the nature of everything in how our capitalist society works. You know, technologies evolve and change. Robots replacing autoworkers? Yeah, they did in a lot of cases, right? We reduced the number of autoworkers and yet we still have autoworkers. What are they doing now? They're engineering the robots.

They're working. I mean, they're programming them. They're, they're maintaining them. They're doing all these other things. So it's just the skillsets have to keep evolving because as we create new skillsets, we create new tools to automate those skillsets. This is, I mean, we can go back, you know, centuries.

And see this occurring. It is nothing new, that concept. So that's why I try to encourage people who do feel threatened. It's like, yeah, it might change your job. It might mean you, you need to retool your brain a little bit to do something else. But this is just the natural progression. It doesn't make AI any more evil than, you know, you know, heavy machinery did when it took over mining.

Corey Quinn: Remarkably few jobs look the same as they did a hundred years ago. Every, everything is upgraded, replaced, et cetera. As far as tooling goes, the only exceptions are specific artisans who are doing things from the perspective of, yeah, no, we want to do this the same way that we've done this for centuries.

Great. But agriculture has been upended. You see it with, uh, with masons, with all kinds of folks doing different things with different tooling. Why would office workers? Necessarily be any different. I also think that some of the enthusiasm comes from the idea that it's very good at making code suggestions because computer programming languages are a lot more syntactically rigid than human languages.

So it becomes easier to, to wind up bounding the problem space there. It doesn't work quite as easily when we're talking about interpersonal communication. There's more ambiguity that creeps in.

Alyssa Miller: Yeah. Or even some level of creativity, right? When we think about code, think about some of those more elegant solutions you see to a problem when you're coding.

We're not. We haven't seen a whole lot of that from AI yet. We're seeing very well written code, as you said, and it follows a lot of very, you know, some of the strictest conventions we wish our programmers would follow, but there's also some of those really just elegant maneuvers that you see developers make that I've not seen AI start doing yet.

And I've looked at a lot of code generation in AI and I've seen a lot of really impressive stuff, but there are just some really elegant things like, do I think AI could, you know, go back to the day that, you know, we first, someone first invented the idea of a bubble sort. Do I think AI is going to create something like that from scratch?

No, because again, it's basing its knowledge off of everything that we've already created. The syntax, the, the, you know, just naming conventions and other rules and things. I don't see that level of innovation coming out of AI at this point yet. Now maybe in the future that changes. Maybe we get better at how we're creating these systems and maybe I'm totally wrong and they will be better than the human brain, but I think we got a long way to go.

Corey Quinn: I think there's an awful lot of boosterism in the space. I think that there are people who have a definite financial incentive to overhype the crap out of this stuff and maybe it works, maybe it doesn't. But I think that there's a... you don't make headlines by being rational or conservative on these things.

You definitely do by saying it's going to change the nature of human existence. I don't know. I, I, I've seen a lot of things make that promise. Very few have delivered.

Alyssa Miller: Oh, the history of technology. Good God. And yeah, for me in cybersecurity, like, God, every new product that comes out is going to change the world.

Where's, where are we at with Zero Trust? How's that going?

Corey Quinn: My god, I am so tired of the, of the constant hyping on this. You think this year at RSA the expo floor is going to be full of AI-powered stuff? Which is of course the same stuff it's always been, but they're going to say AI.

Alyssa Miller: Oh, there's going to be, oh god, I, I'm so glad I'm not going to RSA this year, just for that reason alone.

Corey Quinn: I live here, I'm sort of forced to.

Alyssa Miller: Oh yeah, that's, that's fair, but that's also part of why I'm not coming, because y'all, y'all's city is way too expensive. I've noticed that. Yeah, I'm sure you have. I would love to go, but yeah, uh, that, that's a big chunk of it. But no, I mean, and yeah, we know that last year it was Palo Alto that had Zero Trust plastered on every billboard in the entire city.

And everyone you talked to, it was AI and Zero Trust were the two things you heard about, you know; the year before that it was blockchain and AI, right? I mean, it just, man, it's always just one of the buzzwords, and they just shift, and, you know, everybody's looking for that cool new marketing term they can use.

EDR, MDR, XDR, SASE, whatever, I mean, just keep throwing things out there, right? I mean, and it's, it's all meaningless at the end of the day when it all still ties back to simple concepts that we were talking about 27 years ago when I entered the industry, and probably well before that.

Corey Quinn: And yet, here we are. It seems like so much has changed, but also so little has.

Alyssa Miller: Right? Well, what's changed is the technology, quite honestly. You know, like, I get a board member asking me, so, so, when are we secure enough? My answer to that every time: when technology stops changing and you stop creating new shit. Because as long as we're making new products and we're using new technologies, cybersecurity is always going to be evolving, and technology is always going to keep evolving, because as we create new tech and then we create new tech on top of new tech, and that, that, again, it, it's a tale as old as time.

Oh, now I sound like Disney.

Corey Quinn: You really do. Can you put that to music, please?

Alyssa Miller: I got to say, I think I just, and I'm going to get you sued because, you know, Disney's coming in here with their copyright. I hear they're kind of, uh, very diligent about defending their, their copyright.

Corey Quinn: Oh, yes. Uh, one, one does not play copyright games with the mouse.

Alyssa Miller: No, no.

Corey Quinn: I really appreciate your taking the time to talk to me about how you see things evolving. It's great to catch up with you. If people want to learn more, where's the best place for them to find you?

Alyssa Miller: Oof. God, I don't know. I have to admit, and I say this with a certain level of pain in my stomach, it's probably still Twitter.

Twitter. The thing with the bird. I don't care what he says. Twitter. That said, I'm also on Bluesky. My handle's pretty much the same everywhere. It's AlyssaM underscore Infosec.

Corey Quinn: And we'll put a link to that in the show notes, because that's what we do here.

Alyssa Miller: That's awesome. And I mean, LinkedIn, I know you've got that info too. It's just a dash instead of an underscore because, you know, LinkedIn puts it in the URL and you can't use underscores in URLs. So

Corey Quinn: The joys of computers. Thank you so much for taking the time to speak with me.

Alyssa Miller: I really appreciate it. Yeah. Thank you for having me. It's always a blast. And like, wow, that went by fast.

Corey Quinn: It really did. Alyssa Miller, CISO at Epiq Global. I'm cloud economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice. Whereas if you hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry, insulting comment that you didn't read, because you just had some chatbot create it for you.
