Cloud Compliance and the Ethics of AI with Levi McCormick

Episode Summary

Levi McCormick, Director of Cloud Engineering at Jamf, joins Corey on Screaming in the Cloud to discuss his work modernizing baseline cloud infrastructure and his experience on the compliance side of cloud engineering. Levi explains how he works to ensure the different departments he collaborates with are all on the same page so that differing definitions don’t lead to miscommunication, and why he feels a sandbox environment is an important tool that leads to a successful production environment. Levi and Corey also explore the ethics behind the latest generative AI craze.

Episode Show Notes & Transcript

About Levi

Levi is an automation engineer, with a focus on scalable infrastructure and rapid development. He leverages deep understanding of DevOps culture and cloud technologies to build platforms that scale to millions of users. His passion lies in helping others learn to cloud better.


Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. A longtime friend whom it’s been a while since we’ve had on the show, Levi McCormick, has been promoted or punished for his sins, depending upon how you want to slice that, and he is now the Director of Cloud Engineering at Jamf. Levi, welcome back.

Levi: Thanks for having me, Corey.

Corey: I have to imagine internally, you put that very pronounced F everywhere, and sometimes where it doesn’t belong, like your IAMf policies and whatnot.

Levi: It is fun to see how people like to interpret how to pronounce our name.

Corey: So, it’s been a while. What were you doing before? And how did you wind up stumbling your way into your current role?

Levi: [laugh]. When we last spoke, I was a cloud architect here, diving into just our general practices and trying to shore up some of them. In between, I did a short stint as director of FedRAMP. We are pursuing some certifications in that area and I led, kind of, the engineering side of the compliance journey.

Corey: That sounds fairly close to hell on earth from my particular point of view, just because I’ve dealt in the compliance side of cloud engineering before, and it sounds super interesting from a technical level until you realize just how much of it revolves around checking the boxes, and—at least in the era I did it—explaining things to auditors that I kind of didn’t feel I should have to explain to an auditor, but there you have it. Has the state of that world improved since roughly 2015?

Levi: I wouldn’t say it has improved. While doing this, I did feel like I drove a time machine to work, you know, we’re certifying VMs, rather than container-based architectures. There was a lot of education that had to happen from us to auditors, but once they understood what we were trying to do, I think they were kind of on board. But yeah, it was a [laugh] it was a journey.

Corey: So, one of the things you do—in fact, the first line in your bio talking about it—is you modernize baseline cloud infrastructure provisioning. That means an awful lot of things depending upon who it is that’s answering the question. What does that look like for you?

Levi: For what we’re doing right now, we’re trying to take what was a cobbled-together part-time project for one engineer, we’re trying to modernize that, turn it into as much self-service as we can. There’s a lot of steps that happen along the way, like a new workload needs to be spun up, they decide if they need a new AWS account or not, we pivot around, like, what does the access profile look like, who needs to have access to it, which things does it need to connect to, and then you look at the billing side, compliance side, and you just say, you know, “Who needs to be informed about these things?” We apply tags to the accounts, we start looking at lower-level tagging, depending on if it’s a shared workload account or if it’s a completely dedicated account, and we’re trying to wrap all of that in automation so that it can be as click-button as possible.
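The vending flow Levi describes, deciding on an account, applying account-level tags, and wiring in billing and compliance metadata, could be sketched roughly as follows. The function name, tag keys, and values here are illustrative assumptions, not Jamf’s actual scheme.

```python
# Rough sketch of the tagging step in an account-vending flow.
# Tag keys and values are hypothetical, not Jamf's real scheme.

def vend_tags(workload, owner_team, environment, shared=False):
    """Build the tag set applied to a newly vended AWS account, in the
    Key/Value list format AWS Organizations' TagResource expects."""
    tags = {
        "workload": workload,
        "owner": owner_team,
        "environment": environment,
        # shared workload account vs. completely dedicated account
        "account-model": "shared" if shared else "dedicated",
    }
    return [{"Key": k, "Value": v} for k, v in tags.items()]

# In a real flow this payload would be handed to boto3, e.g.:
#   boto3.client("organizations").tag_resource(
#       ResourceId=account_id, Tags=vend_tags(...))
print(vend_tags("billing-api", "platform", "prod"))
```

The point of keeping this as a plain function is that the same tag set can feed the billing, compliance, and access-review steps Levi mentions without re-deriving it each time.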

Corey: Historically, I found that when companies try to do this, the first few attempts at it don’t often go super well. We’ll be polite and say their first attempts resemble something artisanal and handcrafted, which might not be ideal for this. And then in many cases, the overreaction becomes something that is very top-down, dictatorial almost, is the way I would frame that. And the problem people learn then is that, “Oh, everyone is going to route around us because they don’t want to deal with us at all.” That doesn’t quite seem like your jam from what I know of you and your approach to things. How do you wind up keeping the guardrails up without driving people to shadow IT their way around you?

Levi: I always want to keep it in mind that even if it’s not an option, I want to at least pretend like a given team could not use our service, right? I try to bring a service mentality to it, so we’re talking Accounts as a Service. And then I just think about all of the things that they would have to solve if they didn’t go through us, right? Like, are they managing their finances w—imagine they had to go in and negotiate some kind of pricing deal on their own, right, all of these things that come with being part of our organization, being part of our service offering. And then just making sure, like, those things are always easier than doing it on their own.

Corey: How diverse would you say that the workloads are that are in your organization? I found that in many cases, you’ll have a SaaS-style company where there’s one primary workload that is usually bearing the name of the company, and that’s the thing that they provide to everyone. And then you have the enterprise side of the world where they have 1500 or 2000 distinct application teams working on different things, and the only thing they really have in common is, well, that all gets billed to the same company, eventually.

Levi: They are fairly diverse in how… they’re currently created. We’ve gone through a few acquisitions, we’ve pulled a bunch of those into our ecosystem, if you will. So, not everything has been completely modernized or brought over to, you know, standards, if you will, if such a thing even exists in companies. You know [laugh], you may pretend that they do, but you’re probably lying to yourself, right? But you know, there are varying platforms, we’ve got a whole laundry list of languages that are being used, we’ve got some containerized, some VM-based, some serverless workloads, so it’s all over the place. But you nailed it. Like, you know, the majority of our footprint lives in maybe a handful of, you know, SaaS offerings.

Corey: Right. It’s sort of a fun challenge when you start taking a looser approach to these things because someone gets back from re:Invent, like, “Well, I went to the keynote and now I have my new shopping list of things I’m going to wind up deploying,” and ehh, that never goes well, having been that person in a previous life.

Levi: Yeah. And you don’t want to apply too strict of governance over these things, right? You want people to be able to play, you want them to be inspired and start looking at, like, what would be—what’s something that’s going to move the needle in terms of our cloud architecture or product offerings or whatever we have. So, we have sandbox accounts that are pretty much wide open, we’ve got some light governance over those, [laugh] moreso for billing than anything. And all of our internal tooling is available, you know, like if you’re using containers or whatever, like, all of that stuff is in those sandbox accounts.

And that’s where our kind of service offering comes into play, right? Sandbox is still an account that we tried to vend, if you will, out of our service. So, people should be building in your sandbox environments just like they are in your production as much as possible. You know, it’s a place where tools can get the tires kicked and smooth out bugs before you actually get into, you know, roadmap-impacting problems.

Corey: One of the fun challenges you have is, as you said, the financial aspect of this. When you’ve got a couple of workloads that drive most things, you can reason about them fairly intelligently, but trying to predict the future—especially when you’re dealing with multi-year contract agreements with large cloud providers—becomes a little bit of a guessing game, like, “Okay. Well, how much are we going to spend on generative AI over the next three years?” The problem with that is that if you listen to an awful lot of talking heads or executive types, like, “Oh, yeah, if we’re spending $100 million a year, we’re going to add another 50 on top of that, just in terms of generative AI.” And it’s like, press X to doubt, just because it’s… I appreciate that you’re excited about these things and want to play with them, but let’s make sure that there’s some ‘there’ there before signing contracts that are painful to alter.

Levi: Yeah, it’s a real struggle. And we have all of these new initiatives, things people are excited for. Meanwhile, we’re bringing old architecture into a new platform, if you will, or a new footprint, so we have to constantly measure those against each other. We have a very active conversation with finance and with leadership every month, or even weekly, depending on the type of project and where that spend is coming from.

Corey: One of the hard parts has always been, I think, trying to get people on the finance side of the world, the engineering side of the world, and the folks who are trying to predict what the business was going to do next, all speaking the same language. It just feels like it’s too easy to wind up talking past each other if you’re not careful.

Levi: Yeah, it’s really hard. I’ve recently taken over the FinOps practice, and it’s been really important for me, for us, to align on what our words mean, right? What do these definitions mean? How do we come to a common consensus so that eventually the communication gets faster? We can’t talk past each other. We have to know what our words mean, we have to know what each person cares about in this conversation: what does their end goal look like? What do they want out of the conversation? So, that’s taken a significant amount of time.

Corey: One of the problems I have is with the term FinOps as a whole, ignoring the fact entirely that it was an existing term of art within finance for decades; great, we’re just going to sidestep past that whole mess—the problem you’ll see is that it just seems like it means something different to almost everyone who hears it. And it’s sort of become a marketing term more so than an actual description of what people are doing. Some companies will have a quote-unquote, “FinOps team,” that is primarily run by financial analysts. And others say, “Well, we have one of those lying around, but it’s mostly an engineering effort on our part.”

And I’ve seen three or four different expressions as far as team composition goes and I’m not convinced any of them are right. But again, it’s easy for me to sit here and say, “Oh, that’s wrong,” without having an environment of my own to run. I just tend to look at what my clients do. And, “Well, I’ve seen a lot of things, and they all work poorly in different ways,” is not uplifting and helpful.

Levi: Yeah. I try not to get too hung up on what it’s called. This is the name that a lot of people inside the company have rallied around and as long as people are interested in saving money, cool, we’ll call it FinOps, you know? I mean, DevOps is the same thing, right? In some companies, you’re just a sysadmin with a higher pay, and in some companies, you’re building extensive cloud architecture and pipelines.

Corey: Honestly, for the whole DevOps side of the world, I maintain we’re all systems administrators. The tools have changed, the methodologies have changed, the processes have changed, but the responsibility of ‘keep the site up’ generally has not. But if you call yourself a sysadmin, you’re just saying, “Please pay me less money in my next job.” No, thanks.

Levi: Yeah. “Where’s the Exchange Server for me to click on?” Right? That’s the [laugh]—if you call yourself a sysadmin [crosstalk 00:11:34]—

Corey: God. You’re sending me back into twitching catatonia from my early days.

Levi: Exactly [laugh].

Corey: So, you’ve been paying attention to this whole generative AI hype monster. And I want to be clear, I say this as someone who finds the technology super neat and I’m optimistic about it, but holy God, it feels like people have just lost all sense. If that’s you, my apologies in advance, but I’m still going to maintain the point.

Levi: I’ve played with all the various toys out there. I’m very curious, you know? I think it’s really fun to play with them, but to, like, make your entire business pivot on a dime and pursue it just seems ridiculous to me. I hate that the cryptocurrency space has pivoted so hard into it, you know? All the people that used to be shilling coins are now out there trying to cobble together a couple API calls and turn it into an AI, right?

Corey: It feels like it’s just a hype cycle that people are more okay with being a part of. Like, Andy Jassy, in the earnings call a couple of weeks ago saying that every Amazon team is working with generative AI. That’s not great. That’s terrifying. I’ve been playing with the toys as well and I’ve asked it things like, “Oh, spit out an IAM policy for me,” or, “Oh, great, what can I do to optimize my AWS bill?” And it winds up spitting out things that sound highly plausible, but they’re also just flat-out wrong. And in a lot of these spaces, coming up with a plausible answer isn’t the hard part; coming up with the one that is correct is. And that’s what our jobs are built around.

Levi: I’ve been trying to explain to a lot of people how, if you only have surface knowledge of the thing that it’s telling you, it probably seems really accurate, but when you have deep knowledge on the topic that you’re interacting with this thing, you’re going to see all of the errors. I’ve been using GitHub’s Copilot since the launch. You know, I was in one of the previews. And I love it. Like, it speeds up my development significantly.

But there have been moments where—you know, IAM policies are a great example. I had it crank out a Lambda function’s policy, and it was, frankly, wrong in a lot of places [laugh]. It didn’t quite imagine new AWS services, but it was really [laugh] close. The API actions just flat-out didn’t exist.
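One cheap guard against exactly this failure mode is to diff generated policy actions against a list of actions known to exist before shipping them. A minimal sketch, where `KNOWN_ACTIONS` is a tiny stand-in for the full AWS action list (which you would source from something like the Service Authorization Reference):

```python
# Sanity check for hallucinated IAM actions in a generated policy.
# KNOWN_ACTIONS is a toy subset; a real check would load the complete
# per-service action list.

KNOWN_ACTIONS = {
    "logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents",
    "lambda:InvokeFunction",
}

def unknown_actions(policy):
    """Return every action in the policy that isn't a recognized AWS action."""
    bad = []
    for stmt in policy["Statement"]:
        actions = stmt["Action"]
        if isinstance(actions, str):   # "Action" may be a string or a list
            actions = [actions]
        bad += [a for a in actions if a not in KNOWN_ACTIONS]
    return bad

generated = {  # imagine this came out of a code assistant
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow",
                   "Action": ["logs:PutLogEvents", "logs:WriteLogLine"],
                   "Resource": "*"}],
}
print(unknown_actions(generated))  # prints ['logs:WriteLogLine']
```

A linter like this doesn’t tell you the policy is right, only that its vocabulary is real, which is exactly the class of error Levi is describing.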

Corey: I love that. I’ve had some magic happen early on where it could intelligently query things against the AWS pricing API, but then I asked it the same thing a month later and it gave me something completely ridiculous. It’s not deterministic, which is part of the entire problem with it, too. But it’s also… it can help incredibly in some weird ways I didn’t see coming. But it can also cause you to spend more time chasing that thing than just doing it yourself the first time.

I found a great way for it to help me—writing blog posts. I tell it to write a blog post about a topic, give it some bullet points, and say, “Write in my voice.” Everything it says I take issue with, so then I just copy that into a text editor and mansplain-correct the robot for 20 minutes and, oh, now I’ve got a serviceable first draft.

Levi: And how much time did you save [laugh] right? It is fun, you know?

Corey: It does help because that’s better, for me at least, than staring at an empty page of “what am I going to write?” It gets me past the writer’s block problem.

Levi: Oh, that’s a great point, yeah. Just to get the ball rolling, right? It’s easier to correct something that’s wrong, and you almost are spite-driven at that point, right? Like, “Let me show this AI how wrong it was and I’ll write the perfect blog post.” [laugh].

Corey: It feels like the companies jumping on this, if you really dig into what we’re talking about, are all very excited about the possibility of, “We don’t have to talk to customers anymore because the robots will do that.” And I don’t think that’s going to go the way they want it to. We just have this minor hallucination problem. Yeah, that means it lies and tries to book customers into hotel destinations that don’t exist. Think about this a little more. The failure mode here is just massive.

Levi: It’s scary, yeah. Like, without some kind of review process, I wouldn’t ship that straight to my customers, right? I wouldn’t put that in front of my customer and say, like, “This is”—I’m going to take this generative output and put it right in front of them. That scares me. I think as we get deeper into it, you know, maybe we’ll see… I don’t know, maybe we’ll put some filters or review process, or maybe it’ll get better. I mean, who was it that said, you know, “This is the worst it’s ever going to be?” Right, it will only get better.

Corey: Well, the counterargument to that is, it will get far worse when we start putting this in charge [unintelligible 00:16:08] safety-critical systems, which I’m sure is just a matter of time because some of these boosters are just very, very convincing. It’s just thinking, “How could this possibly go wrong?” Ehhh. It’s not good.

Levi: Yeah, well, I mean, we’re talking impact versus quality, right? The quality will only ever get better. But you know, if we run before we walk, the impact can definitely get wider.

Corey: From where I sit, I want to see this really excel within bounded problem spaces. The one I keep waiting for is the AWS bill because it’s a vast space, yes, and it’s complicated as all hell, but it is bounded. There are a finite—though large—number of things you can see in an AWS bill, and there are recommendations you can make based on top of that. But everything I’ve seen that plays in this space gets way overconfident far too quickly, misses a bunch of very obvious lines of inquiry. Ah, I’m skeptical.

Then you pass that off to unbounded problem spaces like human creativity, and that just turns into an absolute disaster. So much of what I’ve been doing lately has been hamstrung by people rushing to put in safeguards to make sure it doesn’t accidentally say something horrible, safeguards that have stripped out a lot of the fun and the whimsy and the sarcasm in the approach. At one point, I could bully a number of these things into ranking US presidents by absorbency. That’s getting harder to do now because, “Nope, that’s not respectful and I’m not going to do it,” is basically where it draws the line.

Levi: The one thing that I always struggle with is, like, how much of the models are trained on intellectual property or, when you distill it down, pure like human suffering, right? Like, this is somebody’s art, they’ve worked hard, they’ve suffered for it, they put it out there in the world, and now it’s just been pulled in and adopted by this tool that—you know, how many of the examples of, “Give me art in the style of,” right, and you just see hundreds and hundreds of pieces that I mean, frankly, are eerily identical to the style.

Corey: Even down to the signature, in some cases. Yeah.

Levi: Yeah, exactly. You know, and I think that we can’t lose sight of that, right? Like, these tools are fun and you know, they’re fun to play with, it’s really interesting to explore what’s possible, but we can’t lose sight of the fact that there are ultimately people behind these things.

Corey: This episode is sponsored in part by Panoptica.  Panoptica simplifies container deployment, monitoring, and security, protecting the entire application stack from build to runtime. Scalable across clusters and multi-cloud environments, Panoptica secures containers, serverless APIs, and Kubernetes with a unified view, reducing operational complexity and promoting collaboration by integrating with commonly used developer, SRE, and SecOps tools. Panoptica ensures compliance with regulatory mandates and CIS benchmarks for best practice conformity. Privacy teams can monitor API traffic and identify sensitive data, while identifying open-source components vulnerable to attacks that require patching. Proactively addressing security issues with Panoptica allows businesses to focus on mitigating critical risks and protecting their interests. Learn more about Panoptica today at panoptica.app.

Corey: I think it matters, on some level, what the medium is. When I’m writing, I will still use turns of phrase from time to time that I first encountered when I was reading things in the 1990s. And that phrase stuck with me and became part of my lexicon. And I don’t remember where I originally encountered some of these things; I just know I use those phrases an awful lot. And that has become part and parcel of who and what I am.

Which is also why I have no problem telling it to write a blog post in the style of Corey Quinn and then ripping parts of that out, but anything that’s left in there, cool. I’m plagiarizing the thing that plagiarized from me and I find that to be one of those ethically just moments there. But written word is one thing depending on what exactly it’s taking from you, but visual style for art, that’s something else entirely.

Levi: There’s a real ethical issue here. These things can absorb far more information than you ever could in your entire lifetime, right? You can only quote-unquote, you know, “copy, borrow, steal,” from a handful of other people in your entire life, whereas this thing can do hundreds or thousands of people per minute. I think that’s where the calculus needs to be, right? How many people can we impact with this thing?

Corey: This is also nothing new, where originally in the olden times, great, copyright wasn’t really a thing because writing a book was a massive, massive undertaking. That was something that you’d have to do by hand, and then oh, you want a copy of the book? You’d have to have a scribe go and copy the thing. Well then, suddenly the printing press came along, and okay, that changes things a bit.

And then we continue to evolve there to digital distribution where suddenly it’s just bits on a disk that I can wind up throwing halfway around the internet. And when the marginal cost of copying something becomes effectively zero, what does that change? And now we’re seeing, I think, another iteration in that ongoing question. It’s a weird world and I don’t know that we have the framework in place even now to think about that properly. Because every time we start to get a handle on it, off we go again. It feels like if they were being invented today, libraries would absolutely not be considered legal. And yet, here we are.

Levi: Yeah, it’s a great point. Humans just do not have the ethical framework in place for a lot of these things. You know, we saw it even with the days of Napster, right? It’s just—like you said, it’s another iteration on the same core problem. I [laugh] don’t know how to solve it. I’m not a philosopher, right?

Corey: Oh, yeah. Back in the Napster days, I was on that a fair bit in high school and college because I was broke, and oh, I wanted to listen to this song. Well, it came on an album with no other good songs on it because one-hit wonders were kind of my jam, and that album cost 15, 20 bucks, or I could grab the thing for free. There was no reasonable way to consume. Then they started selling individual tracks for 99 cents and I gorged myself for years on that stuff.

And now it feels like streaming has taken over the world to the point where the only people who really lose on this are the artists themselves, and I don’t love that outcome. How do we have a better tomorrow for all of this? I know we’re a bit off-topic from you know, cloud management, but still, this is the sort of thing I think about when everything’s running smoothly in a cloud environment.

Levi: It’s hard to get people to make good decisions when they’re so close to the edge. And I think about when I was, you know, college-age scraping by on minimum wage or barely above minimum wage, you know, it was hard to convince me that, oh yeah, you shouldn’t download an MP3 of that song; you should go buy the disc, or whatever. It was really hard to make that argument when my decision was buy an album or figure out where I’m going to, you know, get my lunch. So, I think, now that I’m in a much different place in my life, you know, these decisions are a lot easier to make in an ethical way because that doesn’t impact my livelihood nearly as much. And I think that is where solutions will probably come out of. The more people doing better, the easier it is for them to make good decisions.

Corey: I sure hope you’re right, but something I’ve found is that, okay, we made it easy for people to make good decisions, and the response is, “Nope, you’ve just made it easier for me to scale a bunch of terrible ones. I can make 300,000 more terrible decisions before breakfast now. Thanks.” And, “No, that’s not what I did that for.” Yet here we are. Have you been tracking lately what’s been going on with the HashiCorp license change?

Levi: Um, a little bit, we use—obviously use Terraform in the company and a couple other Hashi products, and it was kind of a wildfire of, you know, how does this impact us? We dove in and we realized that it doesn’t, but it is concerning.

Corey: You’re not effectively wrapping Terraform and then using that as the basis for how you do MDM across your customer fleets.

Levi: Yeah. You know, we’re not deploying customers' written Terraform into their environments or something kind of wild like that. Yeah, it doesn’t impact us. But it is… it is concerning to watch a company pivot from an open-source, community-based project to, “Oh, you can’t do that anymore.” It doesn’t impact a lot of people who use it day-to-day, but I’m really worried about just the goodwill that they’ve lit on fire.

Corey: One of the problems, too, is that their entire write-up on this was so vague that there’s no way to get an actual answer on whether it’s aimed at you or not without very deep analysis, and hoping that if it ever comes to court, the court’s analysis is sympathetic to yours. What is considered to be a competitor? At least historically, it was pretty obvious with some of these databases: “Okay, great. Am I wrapping their database technology and then selling it as a service? No? I’m pretty good.”

But with HashiCorp, what they do is so vast in a few key areas that no one has that level of certainty. I was pretty freaking certain that I’m not shipping MongoDB with my own wrapper around it, but am I shipping something that looks like Terraform if I’m managing someone’s environment for them? I don’t know. Everything’s thrown into question. And you’re right. It’s the goodwill that’s currently being set on fire.

Levi: Yeah, I think people had an impression of Hashi that they were one of the good guys. You know, the quote-unquote, “Good guys,” in the space, right? Mitchell Hashimoto is out there as a very prominent coder, he’s an engineer at heart, he’s in the community, pretty influential on Twitter, and I think people saw them as not one of the big, faceless corporations, so to see moves like this happen, it… I think it shook a lot of people’s opinions of them and scared them.

Corey: Oh, yeah. They’ve always been the good guys in this context. Mitch and Armon were fantastic folks. I’m sure they still are. I don’t know if this is necessarily even coming from them. It’s market forces, what are investors demanding? They see everyone is using Terraform. How does that compare to HashiCorp’s market value?

This is one of the inherent problems, if I’m being direct, of the end stages of capitalism, where it’s, “Okay, we’re delivering a lot of value. How do we capture ever more of it and grow massively?” And I don’t know. I don’t know what the answer is, but I don’t think anyone’s thrilled with this outcome. Because, let’s be clear, it is not going to meaningfully juice their numbers at all. They’re going to be stirring up a lot of ill will against themselves in the industry, and I don’t see the upside for them. I really don’t.

Levi: I haven’t really done any of the analysis, or looked for it, I should say. Have you seen anything about which providers this might actually impact? Because you’re right, like, what kind of numbers are we actually talking about here?

Corey: Right. Well, there are a few folks that have done things around this that people have named for me: Spacelift being one example, Pulumi being another, and both of them are saying, “Nope, this doesn’t impact us because of X, Y, and Z.” Yeah, whether it does or doesn’t, they’re not going to sit there and say, “Well, I guess we don’t have a company anymore. Oh, well.” And shut the whole thing down and just give their customers over to HashiCorp.

Their own customers would be incensed if that happened and would not go to HashiCorp if that were to be the outcome. I think, on some level, they’re setting the stage for the next evolution in what it takes to manage large-scale cloud environments effectively. I think basically, every customer I’ve ever dealt with on my side has been a Terraform shop. I finally decided to start learning the ins and outs of it myself a few weeks ago, and well, it feels like I should have just waited a couple more weeks and then it would have become irrelevant. Awesome. Which is a bit histrionic, but still, this is going to plant seeds for people to start meaningfully competing. I hope.

Levi: Yeah, I hope so too. I have always awaited releases of Terraform Cloud with great anticipation. I generally don’t like managing my Terraform back-ends, you know, I don’t like managing the state files, so every time Terraform Cloud has some kind of release or something, I’m looking at it because I’m excited, oh finally, maybe this is the time I get to hand it off, right? Maybe I start to get to use their product. And it has never been a really compelling answer to the problems that I have.
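For context, the self-managed backend Levi is describing usually means an S3 bucket for state plus a DynamoDB table for locking. Here is a sketch that just renders the backend block you would commit; the bucket and table names are placeholders, not anyone’s real configuration.

```python
# Render a Terraform S3 backend block with DynamoDB state locking.
# Names are placeholders; real ones would come from your bootstrap stack.

def backend_block(bucket, table, key, region="us-east-1"):
    """Return the HCL backend configuration as a string."""
    return (
        'terraform {\n'
        '  backend "s3" {\n'
        f'    bucket         = "{bucket}"\n'
        f'    key            = "{key}"\n'
        f'    region         = "{region}"\n'
        f'    dynamodb_table = "{table}"\n'
        '    encrypt        = true\n'
        '  }\n'
        '}\n'
    )

print(backend_block("example-tf-state", "example-tf-locks",
                    "workloads/app.tfstate"))
```

The chore Levi wants to hand off is everything implied by those two resource names: creating them, versioning and encrypting the bucket, and locking down who can touch the state.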

And I’ve always said, like, the [laugh] cloud journey would be Google’s if they just released a managed Terraform [laugh] service. And this would be one way for them to prevent that from happening. Because Google doesn’t even have an Infrastructure as Code competitor. Not really. I mean, I know they have their, what, Plans or their Projects or whatever they… their Infrastructure as Code language was, but—

Corey: Isn’t that what Stackdriver was supposed to be? What happened with that? It’s been so long.

Levi: No, that’s a logging solution [laugh].

Corey: That’s the thing. It all runs together. No, it was their operations suite that was—

Levi: There we go.

Corey: —formerly Stackdriver. Yeah. Now, that does include some aspects—yeah. You’re right, it’s still hanging out in the observability space. That’s the problem: all this stuff conflates, companies are terrible at naming, and Google likes to deprecate things constantly. And yeah, but there is no real competitor. CloudFormation? Please. Get serious.

Levi: Hey, you’re talking to a member of the CloudFormation support group here. So, I’m still a huge fan [laugh].

Corey: Emotional support group, more like it, it seems these days.

Levi: It is.

Corey: Oh, good. It got for loops recently. We’ve been asking for basically that to make them a lot less wordy only for, what, ten years?

Levi: Yeah. I mean, my argument is that I’m operating at the account level, right? I need to deploy to 250, 300, 500 accounts. Show me how to do that with Terraform that isn’t, you know, stab your eyes out with a fork.

Corey: It can be done, but it requires an awful lot of setting things up first.

Levi: Exactly.

Corey: That’s sort of a problem. Like, yeah, once you have the first 500 going, the rest are just like butter. But that step one is massive, and then step two becomes easy. Yeah… no, thank you.

Levi: [laugh]. I’m going to stick with my StackSets, thank you.
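The StackSets workflow Levi is referring to fans a single template out to whole organizational units rather than looping over individual accounts. A sketch of building that request; the stack set name and OU ID are placeholders, and in practice the payload would go to boto3’s `cloudformation.create_stack_instances`.

```python
# Build the payload for fanning a CloudFormation stack set out to
# every account under given organizational units. Names/IDs are
# placeholders for illustration.

def stack_instances_request(stack_set_name, ou_ids, regions,
                            max_concurrent_pct=25):
    """Assemble a create_stack_instances-style request targeting OUs,
    with throttling so a bad template doesn't hit all accounts at once."""
    return {
        "StackSetName": stack_set_name,
        "DeploymentTargets": {"OrganizationalUnitIds": ou_ids},
        "Regions": regions,
        "OperationPreferences": {
            "MaxConcurrentPercentage": max_concurrent_pct,
            "FailureTolerancePercentage": 10,
        },
    }

req = stack_instances_request("baseline-guardrails",
                              ["ou-abcd-11111111"],
                              ["us-east-1", "us-west-2"])
print(req["Regions"])  # prints ['us-east-1', 'us-west-2']
```

Targeting OUs instead of explicit account lists is what makes the 250-to-500-account case Levi mentions tractable: new accounts landing in the OU pick up the baseline automatically.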

Corey: [laugh]. I really want to thank you for taking the time to come back on and honestly kibitz about the state of the industry with me. If people want to learn more, where’s the best place for them to find you?

Levi: Well, I’m still active on the space formerly known as Twitter. You can reach out to me there. DMs are open. I’m always willing to help people learn how to cloud better. I’m hoping to make my presence known a little bit more on LinkedIn, so if you happen to be over there, reach out.

Corey: And we will, of course, put links to that in the [show notes 00:30:16]. Thank you so much for taking the time to speak with me again. It’s always a pleasure.

Levi: Thanks, Corey. I always appreciate it.

Corey: Levi McCormick, Director of Cloud Engineering at Jamf. I’m Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you’ve enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you’ve hated this podcast, please leave a five-star review on your podcast platform of choice, along with an insulting comment that tells us that we completely missed the forest for the trees and that your programmfing is going to be far superior based upon generative AI.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.