Jack has been passionate about (obsessed with) information security and privacy since he was a child. Attending 2600 meetings before reaching his teenage years, and DEF CON conferences shortly after, he quickly turned an obsession into a career. He began his first professional, full-time information-security role at the world's first internet privacy company, focusing on direct-to-consumer privacy.
After working the startup scene in the ’90s, Jack realized that true growth required a renaissance education. He enrolled in college, completing almost six years of coursework in a two-year period and studying a variety of disciplines before focusing on his two computer science degrees. University taught humility and empathy, which were key to pursuing and achieving a CSO career lasting over ten years.
Jack primarily focuses his efforts on mentoring his peers (as well as them mentoring him), advising young companies (especially in the information security and privacy space), and investing in businesses that he believes are both innovative and ethical.
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey: LANs of the late ’90s and early 2000s were a magical place to learn about computers, hang out with your friends, and do cool stuff like share files, run websites and game servers, and occasionally bring the whole thing down with some ill-conceived software or network configuration. That’s not how things are done anymore, but what if we could have a ’90s-style LAN experience along with the best parts of the 21st-century internet? (Most of which are very hard to find these days.) Tailscale thinks we can, and I’m inclined to agree. With Tailscale, I can use trusted identity providers like Google, or Okta, or GitHub to authenticate users, and automatically generate and rotate keys to authenticate devices I’ve added to my network. I can also share access to those devices with friends and teammates, or tag devices to give my team broader access. And that’s the magic of it: your data is protected by the simple yet powerful social dynamics of small groups that you trust. Try it now; it’s free forever for personal use. I’ve been using it for almost two years personally, and am moderately annoyed that they haven’t attempted to charge me for what’s become an absolutely-essential-to-my-workflow service.
Corey: Kentik provides Cloud and NetOps teams with complete visibility into hybrid and multi-cloud networks. Ensure an amazing customer experience, reduce cloud and network costs, and optimize performance at scale, from internet to data center to container to cloud. Learn how you can get control of complex cloud networks at www.kentik.com, and see why companies like Zoom, Twitch, New Relic, Box, Ebay, Viasat, GoDaddy, booking.com, and many, many more choose Kentik as their network observability platform.
Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. This promoted episode is brought to us by our friends at Uptycs, and they have once again subjected Jack Roehrig, Technology Evangelist, to the slings, arrows, and other various implements of misfortune that I like to hurl at people. Jack, thanks for coming back. Brave of you.
Jack: I am brave [laugh]. Thanks for having me. Honestly, it was a blast last time and I’m looking forward to having fun this time, too.
Corey: It’s been a month or two, ish. Basically, the passing of time is one of those things that is challenging for me to wrap my head around in this era. What have you folks been up to? What’s changed since the last time we’ve spoken? What’s coming out of Uptycs? What’s new? What’s exciting? Or what’s old with a new and exciting description?
Jack: Well, we’ve GA’ed our agentless architecture scanning system. This is one of the reasons why I joined Uptycs: what was so fascinating to me is that they had kind of nailed XDR. And I love the acronyms: XDR and CNAPP is what we’re going with right now. You know, and we have to use these acronyms so that people can understand what we do without me speaking for hours about it. But in short, our agentless system looks at the current resting risk state of a production environment without the need to deploy agents, you know, as we talked about last time.
And then the XDR piece, that’s the thing that you get to justify the extra money on once you go to your CTO or whoever your boss is and show them all that risk that you’ve uncovered with our agentless piece. It’s something I’ve done in the past with technologies that were similar, but Uptycs is continuously improving, our anomaly detection is getting better, our threat intel team is getting better. I looked at our engineering team the other day. I think we have over 300 engineers or over 250 at least. That’s a lot.
Corey: It’s always wild for folks who work in small shops to imagine what that number of engineers could possibly be working on. Then you go and look at some of the bigger shops and you talk to them and you hear about all the different ways their stuff is built and how they all integrate together and you come away, on some level, surprised that they’re able to work with that few engineers. So, it feels like there’s a different perspective on scale. And no one has it right, but it is easy, I think, in the layperson’s mindset to hear that a company like Twitter, for example, before it got destroyed, had 5000 engineers. And, “What are they all doing?” And, “Well, I can see where that question comes from and the answer is complicated and nuanced, which means that no one is going to want to hear it if it doesn’t fit into a tweet itself.” But once you get into the space, you start realizing that everything is way more complicated than it looks.
Jack: It is. Yeah. You know, it’s interesting that you mention that about Twitter. I used to work for a company called Interactive Corporation. And Interactive Corporation is an internet conglomerate that owns a lot of those things that are at the corners of the internet that not many people know about. And also, like, the entire online dating space. So, I mean, it was a blast working there, but at one point in my career, I got heavily involved in M&A. And I was given the nickname Jack the RIFer. RIF standing for Reduction In Force.
Jack: So, Jack the RIFer was—yeah [laugh] I know, right?
Corey: It’s like Buzzsaw Ted. Like, when you bring in the CEO with the nickname of Buzzsaw in there, it’s like, “Hmm, I wonder who’s going to hire a lot of extra people?” Not so much.
Jack: [laugh]. Right? It’s like, hey, they said they were sending Jack out to “hang out with us,” you know, in whatever country we’re based out of. And I’d go out there and I would drink them under the table. And I’d find out the dirty secrets, you know.
We would be buying these companies because they needed optimizing. But it was amazing to me to see some of these companies that were massive and produced what I thought was so little, and then to go on to analyze everybody’s job and see that they were all intimately necessary.
Corey: Yeah. And the question then becomes: what if you were to redesign what that company did from scratch? Which, again, is sort of an architectural canard; it’s the easiest thing in the world to design an architecture from scratch on a whiteboard with an almost arbitrary number of constraints. The problem is that most companies grow organically, and in order to get to that idealized architecture, you’ve got to turn everything off and rebuild it from scratch. The problem is getting to something that’s better without taking 18 months of downtime while you rebuild everything. Most companies cannot and will not sustain that.
Jack: Right. And there’s another way of looking at it, too, which is something that’s been kind of a thought experiment for me for a long time. One of the companies that I worked with back at IC was Ask Jeeves. Remember Ask Jeeves?
Corey: Oh, yes. That was sort of the closest thing we had at the time to natural language search.
Jack: Right. That was the whole selling point. But I don’t believe we actually did any natural language processing back then [laugh]. So, back in those days, it was just a search index. And if you wanted to redefine search right now and you wanted to find something that was like truly a great search engine, what would you do differently?
If you look at the space right now with ChatGPT and with Google, there’s all this talk about, well, ChatGPT is the next Google killer. And then people are, like, “Well, Google has LaMDA. What are they worried about ChatGPT for?” And then you’ve got the folks at Google who are saying, “ChatGPT is going to destroy us,” and the folks in Google who are saying, “ChatGPT’s got nothing on us.” So, if I had to go and do it all over from scratch for search, it wouldn’t have anything to do with ChatGPT. I would go back and make a directed, cyclic graph and I would use node weight assignments based on outbound links. Which is exactly what Google was with the original PageRank algorithm, right [laugh]?
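The node-weighting idea Jack describes (the original PageRank scheme, where a page's score flows out along its links) can be sketched in a few lines. This is a toy illustration with a made-up three-page graph, not Google's production algorithm:

```python
# Toy PageRank via power iteration. Each node splits its score evenly
# across its outbound links; a damping factor models random jumps.

def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping node -> list of outbound neighbors."""
    nodes = list(links)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iters):
        new = {node: (1 - damping) / n for node in nodes}
        for node, outs in links.items():
            if not outs:
                # Dangling node: spread its score evenly over all nodes.
                for m in nodes:
                    new[m] += damping * rank[node] / n
            else:
                share = rank[node] / len(outs)
                for m in outs:
                    new[m] += damping * share
        rank = new
    return rank

# A links to B and C, B links to C, C links back to A. That link back
# creates a cycle, which is why the web graph is cyclic, not acyclic.
scores = pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]})
```

C ends up with the highest score because both A and B link to it, which is the core intuition: a node is weighted by the rank flowing in from the pages that link to it.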
Corey: I’ve heard this described as almost a vector database in various terms depending upon what it is that—how it is you’re structuring this and what it looks like. It’s beyond my ken personally, but I do see that there’s an awful lot of hype around ChatGPT these days, and I am finding myself getting professionally—how do I put it—annoyed by most of it. I think that’s probably the best way to frame it.
Jack: Isn’t it annoying?
Corey: It is because it’s—people ask, “Oh, are you worried that it’s going to take over what you do?” And my answer is, “No. I’m worried it’s going to make my job harder more than anything else.” Because back when I was a terrible student, great, write an essay on this thing, or write a paper on this. It needs to be five pages long.
And I would write what I thought was a decent coverage of it and it turned out to be a page-and-a-half. And oh, great. What I need now is a whole bunch of filler fluff that winds up taking up space and word count but doesn’t actually get us to anywhere that is meaningful or useful. And it feels like that is what GPT excels at. If I worked in corporate PR for a lot of these companies, I would worry because it takes an announcement that fits in a tweet—again, another reference to that ailing social network—and then it turns it into an arbitrary number of pages. And it’s frustrating for me just because that’s a lot more nonsense I have to sift through in order to get the actual, viable answer to whatever it is I’m going for here.
Jack: Well, look at that viable answer. That’s a really interesting point you’re making. That fluff, right, when you’re writing that essay—that one-and-a-half pages you got out, that’s gold. That’s the stuff you want, right? That’s the good shit [laugh]. Excuse my French. But ChatGPT is what’s going to give you that filler, right?

The GPT-3 dataset, I believe, was [laugh]—I think there’s a lot of Reddit question-and-answers that were used to train it. And the data that it was trained with ceased to be recent in 2021, right? It’s already over a year old. So, if your teacher asked you to write a very contemporary essay, ChatGPT might not be able to help you out much. But I don’t think that kind of gets the whole thing, because you just said filler, right? You can get it to write those extra three-and-a-half pages on top of the page-and-a-half you’re required to write. Well, hey, teachers shouldn’t be demanding that you write five pages anyway.

I once heard a friend of mine arguing about one presidential candidate, saying, “This presidential candidate speaks at a third-grade level.” And the other person said, “Well, your presidential candidate speaks at a fourth-grade level.” And I said, “I wish I could convey presidential ideas at a level that a third or a fourth grader could understand.” You know? Right?
Corey: On some level, it’s actually not a terrible thing because if you can only convey a concept at an extremely advanced reading level, then how well do you understand—it felt for a long time like that was the problem with AI itself and machine-learning and the rest. The only value I saw was when certain large companies would trot out someone who was themselves deep into the space and their first language was obviously math and they spoke with a heavy math accent through everything that they had to say. And at the end of it, I didn’t feel like I understood what they were talking about any better than I had at the start. And in time, it took things like ChatGPT to say, “Oh, this is awesome.” People made fun of the Hot Dog/Not A Hot Dog App, but that made it understandable and accessible to people. And I really think that step is not given nearly enough credit.
Jack: Yeah. That’s a good point. And it’s funny, you mentioned that because I started off talking about search and redefining search, and I think I use the word digraph for—you know, directed gra—that’s like a stupid math concept; nobody understands what that is. I learned that in discrete mathematics a million years ago in college, right? I mean, I’m one of the few people that remembers it because I worked in search for so long.
Corey: Is that the same thing is a directed acyclic graph, or am I thinking of something else?
Jack: Ah you’re—that’s, you know, close. A directed acyclic graph has no cycles. So, that means you’ll never go around in a loop. But of course, if you’re just mapping links from one website to another website, A can link from B, which can then link back to A, so that creates a cycle, right? So, an acyclic graph is something that doesn’t have that cycle capability in it.
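Jack's distinction can be made concrete with a short depth-first search: a directed graph is acyclic exactly when DFS never reaches a node that is still on the current path. A minimal sketch, with made-up link data:

```python
# Detect whether a directed graph contains a cycle, using three-color DFS:
# WHITE = unvisited, GRAY = on the current DFS path, BLACK = finished.

def has_cycle(graph):
    """graph: dict mapping node -> list of neighbors."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY
        for nxt in graph.get(node, []):
            if color.get(nxt, WHITE) == GRAY:
                return True  # reached a node on our own path: a cycle
            if color.get(nxt, WHITE) == WHITE and visit(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[node] == WHITE and visit(node) for node in graph)

# A links to B and B links back to A: cyclic, so not a DAG.
print(has_cycle({"A": ["B"], "B": ["A"]}))           # True
# A -> B -> C with no link back: acyclic.
print(has_cycle({"A": ["B"], "B": ["C"], "C": []}))  # False
```

The web-link case Jack gives (A links to B, B links back to A) is the first example: one back edge is enough to make the graph cyclic.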
Corey: Got it. Yeah. Obviously, my higher math is somewhat limited. It turns out that cloud economics doesn’t generally tend to go too far past basic arithmetic. But don’t tell them. That’s the secret of cloud economics.
Jack: I think that’s most everything, I mean, even in search nowadays. People aren’t familiar with graph theory. I’ll tell you what people are familiar with. They’re familiar with Google. And they’re familiar with going to Google and Googling for something, and when you Google for something, you typically want results that are recent.
And if you’re going to write an essay, you typically don’t care, because only the best teachers out there might not be tricked by ChatGPT—honestly, they probably would be, but the best teachers are the ones that are going to be writing the syllabi that require recency. Almost nobody’s going to be writing syllabi that require essay recency. They’re going to reuse the same syllabus they’ve been using for ten years.
Corey: And even that is an interesting question, because if we talk about the results people want from search, you’re right, the majority of cases absolutely care about recency. But I can think of a tremendous number of counterexamples where I have been looking for things and I do not want recent results, sometimes explicitly. Other times it’s because, no, I’m looking for something that was talked about heavily in the 1960s and not a lot since. I don’t want to basically turn up a bunch of SEO garbage that trawled it from who knows where. I want to turn up some of the stuff that was digitized and then put forward. And that can be a deceptively challenging problem in its own right.
Jack: Well, if you’re looking for stuff that has been digitized, you could use archive.org or one of the web archive projects. But if you look into the web archive community, you will notice that they’re very secretive about their data set. I think one of the best archived internet search indices that I know of is in Portugal. It’s a Portuguese project.
I can’t recall the name of it. But yeah, there’s a Portuguese project that is probably like the axiomatic standard or like the ultimate prototype of how internet archiving should be done. Search nowadays, though, when you say things like, “I want explicitly to get this result,” search does not want to show you explicitly what you want. Search wants to show you whatever is going to generate them the most advertising revenue. And I remember back in the early search engine marketing days, back in the algorithmic trading days of search engine marketing keywords, you could spend $4 on an ad for flowers and if you typed the word flowers into Google, you just—I mean, it was just ad city.
You typed the word rehabilitation clinic into Google, advertisements everywhere, right? And then you could type certain other things into Google and you would receive a curated list. These things are obvious things that are identified as flaws in the secrecy of the PageRank algorithm, but I always thought it was interesting because ChatGPT takes care of a lot of the stuff that you don’t want to be recent, right? It provides this whole other end to this idea that we’ve been trained not to use search for, right?
So, I was reviewing a contract the other day. I had this virtual assistant, and English is not her first language. And she and I red-lined this contract for four hours. It was brutal because I kept having to Google—for lack of a better word—all these different terms to try and make sense of it. Two days later, I’m playing around with ChatGPT and I start typing some very abstract commands to it, and I swear to you, it generated that same contract I was red-lining. Verbatim. I was able to get it to generate multiple [laugh] clauses in the contract. And by changing the wording in ChatGPT to say, “Make it, you know, more plaintiff-friendly,” [laugh] that contract all of a sudden was red-lined in a way that I wanted it to be [laugh].
Corey: This is a fascinating example of this because I’m married to a corporate attorney who does this for a living, and talking to her and other folks in her orbit, the problem they have with it is that it works to a point, on a limited basis, but it then veers very quickly into terms that are nonsensical, terms that would absolutely not pass muster, but sound like something a lawyer would write. And realistically, it feels like what we’ve built is basically the distillation of a loud, overconfident white guy in tech because they don’t know exactly what they’re talking about, but by God is it confident when it says it.
Jack: [laugh]. Yes. You hit the nail on that. Ah, thank you. Thank you.
Corey: And there’s an easy way to prove this: pick any topic in the world in which you are either an expert or damn close to it, or know more than the average bear about, and ask ChatGPT to explain it to you. And then notice all the things that it glosses over, or gets subtly wrong, or is outright wrong about, but it doesn’t ever call that out. It just says it with the same confident air of a failing interview candidate who gets nine out of ten questions absolutely right, but the one they don’t know they bluff on, and at that point, you realize you can’t trust them because you never know if they’re bluffing or they genuinely know the answer.
Jack: Wow, that is a great analogy. I love that. You know, I mentioned earlier that I believe a big portion of the GPT-3 training data was based on Reddit questions and answers. And now, you can’t categorize Reddit into a single community, of course; that would be just as bad as the way Reddit categorizes [laugh] our community. But Reddit did have a problem a wh—I remember, there was the Ellen Pao debacle at Reddit. And I don’t know if it was so much of a debacle or more of a scapegoat situation, but—
Corey: I’m very much left with a sense that it’s the scapegoat. But still, continue.
Jack: Yeah, we’re adults. We know what happened here, right? Ellen Pao is somebody who was going through some very difficult times in her career. She was hired to be a martyr. They had a community called fatpeoplehate, right?
I mean, like, Reddit had become a bizarre place. I used Reddit when I was younger and it didn’t have subreddits. It was mostly about programming. It was more like Hacker News. And then I remember all these people went to Hacker News, and a bunch of them stayed at Reddit and there was this weird limbo of, like, the super pretentious people over at Hacker News.
And then Reddit started to just get weirder and weirder. And then you just described ChatGPT in a way that struck me as so Reddit, you know? It’s like some guy mansplaining some answer. It starts off good and then it overconfidently continues to state nonsensical things.
Corey: Oh yeah, I was a moderator of the legal advice and personal finance subreddits for years, and—
Jack: No way. Were you really?
Corey: Oh, absolutely. Those corners were relatively reasonable. And it’s like, “Well, wait a minute, you’re not a lawyer.” You’re correct, and I’m also not a financial advisor. However, in both of those scenarios, what people were really asking for was, “How do I be a functional adult in society?”
In high school curricula in the United States, we insist that people go through four years of English literature class, but we don’t ever sit down and tell them how to file their taxes or how to navigate large transactions that are going to be the sort of thing that you encounter in adulthood: buying a car, signing a lease. And yeah, at some point you wind up seeing someone with a circumstance where, yeah, talk to a lawyer. Don’t take advice on the internet for this. But other times, it’s no, “You cannot sue a dog. You have to learn to interact with people as a grown-up. Here’s how to approach that.” And that manifests as legal questions or finance questions, but it all comes down to, “I have been left unprepared for the world I live in by the school system. How do I wind up addressing these things?” And that is what I really enjoyed.
Jack: That’s just prolifically, prolifically sound. I’m almost speechless. You’re a hundred percent correct. I remember those two subreddits. It always amazes me when I talk to my friends about finances.
I’m not a financial person. I mean, I’m an investor, right, a private equity investor. And I was on a call with a young CEO that I’ve been advising for a while. He runs a security awareness training company, and he’s like, you know, “You’ve made 39% off of your investment in three months.” And I said, “I haven’t made anything off of my investment.”
I bought a SAFE and, you know—it’s like, this is convertible equity. And I’m sitting here thinking, like, I don’t know any of this stuff. And I talk to my buddies that are financial planners and I ask them about finances, and that’s also interesting to me because financial planning is really just about: when are you going to buy a car? When are you going to buy a house? When are you going to retire? And what are the things—the securities, the companies—what should you do with your money rather than store it under your mattress?
And I didn’t really think about money being stored under a mattress until the first time I went to Eastern Europe where I am now. I’m in Hungary right now. And first time I went to Eastern Europe, I think I was in Belgrade in Serbia. And my uncle at the time, he was talking about how he kept all of his money in cash in a bank account. In Serbian Dinar.
And Serbian Dinar had already gone through hyperinflation, like, ten years prior. Or no, it went through hyperinflation in 1996. So, it was not—it hadn’t been that long [laugh]. And he was asking me for financial advice. And here I am, I’m like, you know, in my early-20s.
And I’m like, I don’t know what you should do with your money, but don’t put it under your mattress. And that’s the kind of data that Reddit—that ChatGPT seems to have been trained on, this GPT-3 data, it seems like a lot of [laugh] Redditors, specifically Redditors sub-2001. I haven’t used Reddit very much in the last half a decade or so.
Corey: Yeah, I mean, I still use it in a variety of different ways, but I got out of both of those cases primarily due to time constraints, as well as my circumstances changing to a point where the things I spent my time thinking about in a personal finance sense no longer applied to an awful lot of folk, because the common wisdom is aimed at folks who are generally on something that resembles a recurring salary, where they can calculate in a certain percentage of raises, in most cases, for the rest of their life, and plan for other things. But when I started the company, a lot of the financial best practices changed significantly. And what makes sense for me to do becomes actively harmful for folks who are not in similar situations. And I just became further and further attenuated from the way that you generally want to give common-case advice. So, it wasn’t particularly useful at that point anymore.
Jack: Very. Yeah, that’s very well put. I went through a similar thing. I watched Reddit quite a bit through the Ellen Pao thing because I thought it was a very interesting lesson in business and in social engineering in general, right? And we saw this huge community, this huge community of people, and some of these people were ridiculously toxic.
And you saw a lot of groupthink, you saw a lot of manipulation. There was a lot of heavy-handed moderation, there was a lot of too-late moderation. And then Ellen Pao comes in and I’m, like, who the heck is Ellen Pao? Oh, Ellen Pao is this person who has some corporate scandal going on. Oh, Ellen Pao is a scapegoat.
And here we are, watching a community being socially engineered, right, into hating the CEO who’s just going to be let go or step down anyways. And now they ha—their conversations have been used to train intelligence, which is being used to socially engineer people [laugh] into [crosstalk 00:22:13].
Corey: I mean, you just listed something else that’s been top-of-mind for me lately, where it is time once again here at The Duckbill Group for us to go through our annual security awareness training. And our previous vendor has not been terrific, so I started looking to see what else is available in that space. And I see that the world basically divides into two factions when it comes to this. The first is something that is designed to check the compliance boxes at big companies. And some of the advice that those things give is actively harmful, as in, when I’ve used things like that in the past, I would have an addendum that I would send out to the team: “Yeah, ignore this part and this part and this part because it does not work for us.”
And there are other things that try to surface it all the time so it becomes a constant awareness thing, which makes sense, but it also doesn’t necessarily check any contractual boxes. So it’s, isn’t there something in between that makes sense? I found one company that offered a Slackbot that did this, which sounded interesting. The problem is it was the most condescendingly rude and infuriatingly slow experience that I’ve had. It demanded a whole bunch of permissions to the Slack workspace just to try it out, so I had to spin up a fake Slack workspace for testing just to see what happens, and it was, start to finish, the sort of thing that I would not inflict upon my team. So, the hell with it, and I’ve moved over to other stuff now. And I’m still looking, but it’s the sort of thing where I almost feel like this is something ChatGPT could have built. And cool: give me something that sounds confident but is often wrong. Go.
Jack: [laugh]. Yeah, Uptycs actually has something called Otto M8—spelled O-T-T-O, space, M, and then the number eight—and I personally think that’s the cutest name ever for a Slackbot. I don’t have a picture of him to show you, but I would personally give him a bit of a makeover. He’s a little nerdy for my likes. But it’s one of those Slackbots.
And I’m a huge compliance geek. I was a CISO for over a decade and I know exactly what you mean with that security awareness training and ticking those boxes because I was the guy who wrote the boxes that needed to be ticked because I wrote those control frameworks. And I’m not a CISO anymore because I’ve already subjected myself to an absolute living hell for long enough, at least for now [laugh]. So, I quit the CISO world.
Corey: Oh yeah.
Corey: And so much of it also assumes certain things. Like, I’ve had people reach out to me trying to shill whatever it is they’ve built in this space. And okay, great. The problem is that they’ve built something that is aimed at engineers and developers. Go, here you go. And that’s awesome, but we aren’t really an engineering-first company.
Yes, most people here have an engineering background and we build some internal tooling, but we don’t need an entire curriculum on how to secure the tools that we’re building as web interfaces and public-facing SaaS, because that’s not what we do. Not to mention, what am I supposed to do with the accountants and the sales folks and the marketing staff who also need to go through training? Do I want to sit here and teach them about SQL injection attacks? No, Jack. I do not want to teach them that.
Jack: No you don’t.
Corey: I want them to not plug random USB things into the work laptop and to use a password manager. I’m not here trying to turn them into security engineers.
Jack: I used to give a presentation and I onboarded every single employee personally for security. And in the presentation, I would talk about password security. And I would have all these complex passwords up. But, like, “You know what? Let me just show you what a hacker does.”
And I’d go and load up dhash and I’d type in my old email address. And oh, there’s my password, right? And then I would copy the cryptographic hash from dhash and I’d paste that into Google. And I’d be like, “And that’s how you crack passwords: you Google the insecure cryptographic hash and hope somebody else has already cracked it.”
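The trick Jack demonstrates works because fast, unsalted hashes like MD5 are deterministic: everyone who picks the same password gets the same digest, so public lookup tables (or a plain search engine) often already contain the reversal. A minimal sketch of the idea, using a hypothetical weak password:

```python
# Why Googling a hash can "crack" it: an unsalted MD5 digest is identical
# for every user who chose that password, so precomputed lookups work.
import hashlib

password = "password123"  # hypothetical weak password
digest = hashlib.md5(password.encode()).hexdigest()

# A leaked-password lookup table is just a reverse map over common choices:
common = ["letmein", "password123", "hunter2"]
table = {hashlib.md5(p.encode()).hexdigest(): p for p in common}

print(table.get(digest))  # recovers the password with no brute force
```

Per-user salts plus a slow password hash (bcrypt, scrypt, Argon2) defeat exactly this kind of precomputed lookup, which is why they are standard practice for stored credentials.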
But yeah, it’s interesting. Security awareness training is absolutely something that’s supposed to be geared toward the fundamental everyman employee. It should not be something entirely technical. I worked at a company where—and I love this, by the way; this is one of the best things I’ve ever read on Slack—and it was not a message that I was privy to. I had to have the IT team pull the Slack logs so that I could read these direct communications. But it was from—I think it was the controller to the Vice President of accounting, and the VP of accounting says, “How could I have done this after all of those phishing emails that Jack sent?” [laugh].
Corey: Oh God, the phishing emails drive me up a wall, too. You’re basically training your staff not to trust you, wasting their time, and playing gotcha. It really creates an adversarial culture. I refuse to do that stuff, too.
Jack: My phishing emails are fun, all right? I did one where I pretended that I’d installed a camera in the break room refrigerator, and I said, “We’ve had a problem with food theft out of the Oakland refrigerator, so we’ve installed this webcam. Log into the sketchy website with your username and password.” And I got, like, a 14% phish rate. I’ve used this campaign at multinational companies.
I used to travel around the world, and I’d grab a mic at the offices that wanted me to speak there, and I’d put the mic real close and say, “Why did you guys click on the link to the Oakland refrigerator?” [laugh]. I said, “You’re in Stockholm, for God’s sake.” Like, it works. Phishing campaigns work.
They just don’t work if they’re dumb, honestly. There’s a lot of things that do work in the security awareness space. One of the biggest problems with security awareness is that people seem to think that there’s some minimum amount of time an employee should have to spend on security awareness training, which is just—
Corey: Right. Like, for example, here in California, we’re required to spend two hours on harassment training every so often—I think it’s every two years—and—
Jack: Every two years. Yes.
Corey: —at least for managerial staff. And it’s great, but that leads to things such as, “Oh, we’re not going to give you a transcript if you can read the video more effectively. You have to listen to it and make sure it takes enough time.” And it’s maddening to me just because that is how the law is written. And yes, it’s important to obey the law, don’t get me wrong, but at the same time, it just feels like it’s an intentional time suck.
Jack: It is. It is an intentional time suck. I think what happens is a lot of people find ways to game the system. Look, when I did security awareness training, my controls, the way I worded them, didn’t require people to take any training whatsoever. The phishing emails themselves satisfied it completely.
I worded that into my control framework. I still held the trainings; they still made people take them seriously. And then if somebody got phished horrifically, and let’s say wired $2 million to Hong Kong—you know who I’m talking about, all right; the person in question is probably not listening to this, thankfully—but [laugh] she did. And I know she didn’t complete my awareness training. I know she never took any of it.
She also wired $2 million to Hong Kong. Well, we never got that money back. But we sure did spend a lot of executive time trying to. I spent a lot of time on the phone, getting passed around from department to department at the FBI. Obviously, the FBI couldn’t help us.
It was wired from Mexico to Hong Kong. Like, the FBI doesn’t have anything to do with it. You know, bless them for taking their time to humor me because I needed to humor my CEO. But, you know, I used those awareness trainings as a way to enforce the Code of Conduct, which required disciplinary action for people who didn’t follow the security awareness training.
If you had taken the 15 minutes of awareness training that I had asked people to do—I mean, I told them to do it; it was the Code of Conduct; they had to—then there would be no disciplinary action for accidentally wiring that money. But people are pretty darn diligent about not doing things like that. It’s just a select few that seem to be the ones that get repeatedly—
Corey: And then you have the group conversations. One person screws something up and then you wind up with the emails to everyone. And then you have the people who are basically doing the right thing thinking they’re being singled out. And—ugh, management is hard, people is hard, but it feels like a lot of these things could be a lot less hard.
Jack: You know, I don’t think management is hard. I think management is about empathy. And management is really about just positive reinforce—you know what management is? This is going to sound real pretentious. Management’s kind of like raising a kid, you know? You want to have a really well-adjusted kid? Every time that kid says, “Hey, Dad,” answer. [crosstalk 00:30:28]—
Corey: Yeah, that’s a good—that’s a good approach.
Jack: I mean, just be there. Be clear, consistent, let them know what to expect. People loved my security program at the places that I’ve implemented it because it was very clear, it was concise, it was easy to understand, and I was very approachable. If anybody had a security concern and they came to me about it, they would [laugh] not get any shame. They certainly wouldn’t get ignored.
I don’t care if they were reporting the same email I had had reported to me 50 times that day. I would personally thank them. And, you know what I learned? I learned that from raising a kid, you know? It was interesting because the kid I was raising, when he would ask me a question, I would give him the same answer every time in the same tone. He’d be like, “Hey, Jack, can I have a piece of candy?” Like, “No, your mom says you can’t have any candy today.” He’d be like, “Oh, okay.” “Can I have a piece of candy?” And I would be like, “No, your mom says you can’t have any candy today.” “Can I have a piece of candy, Jack?” I said, “No. Your mom says you can’t have any candy.” And I’d just be like a broken record.
And pretty soon he stopped asking me for a piece of candy six different times. And I realized the reason he had been asking me for a piece of candy six different times is because he would get a different response the sixth time or the third time or the second time. It was the inconsistency. Providing consistency and predictability in the workforce is key to management, and it’s key to keeping things safe and secure.
Corey: I think there’s a lot of truth to that. I really want to thank you for taking so much time out of your day to talk to me about topics ranging from GPT and ethics to parenting. If people want to learn more, where’s the best place to find you?
Jack: I’m [email protected], and I’m also [email protected]. My last name is spelled—heh, no, I’m kidding. It’s J-A-C-K-R-O-E-H-R-I-G dot com. So yeah, hit me up. You will get a response from me.
Corey: Excellent. And I will of course include links to that in the show notes. Thank you so much for your time. I appreciate it.
Corey: This promoted guest episode has been brought to us by our friends at Uptycs, featuring Jack Roehrig, Technology Evangelist at same. I’m Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you’ve enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you’ve hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment ghostwritten for you by ChatGPT so it has absolutely no content worth reading.
Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.