Into the Year of Documentation with Dr. KellyAnn Fitzpatrick

Episode Summary

It is always a good day when you get to chat with any of the fine folks at RedMonk. So, after some polite badgering by Corey, Kelly Fitzpatrick, a Senior Industry Analyst at RedMonk, has joined the “Screaming” lineup. At the forefront of today’s conversation: what exactly is an industry analyst, and what is an industry analyst firm? Kelly provides some distinction between what RedMonk does versus what RedMonk is. Kelly talks about how they work as an industry analyst firm to bring the latest tech trends to what their customers are building, and how they are getting their work out into the wild. Kelly discusses some of the secrets to RedMonk’s success in the face of the shifting tides of the tech world, her unique background and PhD, and more!

Episode Show Notes & Transcript

About Kelly
KellyAnn Fitzpatrick is a Senior Industry Analyst at RedMonk, the developer-focused industry analyst firm. Having previously worked as a QA analyst, test & release manager, and tech writer, she has experience with containers, CI/CD, testing frameworks, documentation, and training. She has also taught technical communication to computer science majors at the Georgia Institute of Technology as a Brittain Postdoctoral Fellow.

Holding a Ph.D. in English from the University at Albany and a B.A. in English and Medieval Studies from the University of Notre Dame, KellyAnn’s side projects include teaching, speaking, and writing about medievalism (the ways that post-medieval societies reimagine or appropriate the Middle Ages), and running to/from donut shops.

Links:
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.


Corey: Today’s episode is brought to you in part by our friends at MinIO, the high-performance Kubernetes native object store that’s built for the multi-cloud, creating a consistent data storage layer for your public cloud instances, your private cloud instances, and even your edge instances, depending upon what the heck you’re defining those as, which depends probably on where you work. Getting that unified is one of the greatest challenges facing developers and architects today. It requires S3 compatibility, enterprise-grade security and resiliency, the speed to run any workload, and the footprint to run anywhere, and that’s exactly what MinIO offers. With superb read speeds in excess of 360 gigs and a 100 megabyte binary that doesn’t eat all the data you’ve got on the system, it’s exactly what you’ve been looking for. Check it out today at min.io/download, and see for yourself. That’s min.io/download, and be sure to tell them that I sent you.


Corey: This episode is sponsored by our friends at Oracle. HeatWave is a new high-performance query accelerator for the Oracle MySQL Database Service, although I insist on calling it “my squirrel.” While MySQL has long been the world’s most popular open source database, shifting from transacting to analytics required way too much overhead and, ya know, work. With HeatWave you can run your OLAP and OLTP—don’t ask me to pronounce those acronyms again—workloads directly from your MySQL database and eliminate the time-consuming data movement and integration work, while also performing 1100X faster than Amazon Aurora and 2.5X faster than Amazon Redshift, at a third of the cost. My thanks again to Oracle Cloud for sponsoring this ridiculous nonsense.


Corey: Welcome to Screaming in the Cloud, I’m Corey Quinn. It’s always a good day when I get to sit down and have a chat with someone who works over at our friends at RedMonk. Today is no exception because after trying for, well, an embarrassingly long time, my whining and pleading has finally borne fruit, and I’m joined by Kelly Fitzpatrick, who’s a senior industry analyst at RedMonk. Kelly, thank you for, I guess, finally giving in to my always polite, but remarkably persistent requests to show up on the show.


Kelly: Great, thanks for having me. It’s great to finally be on the show.


Corey: So, let’s start at the very beginning because I am always shockingly offended whenever it happens, but some people don’t actually know what RedMonk is. What is it you’d say it is that you folks do?


Kelly: Oh, I love this question. Because it’s like, “What do you do,” versus, “What are you?” And that’s a very big difference. And I’m going to start with maybe what we are. So, we are a developer-focused industry analyst firm. You put all those things, kind of, together.


And in terms of what we do, it means that we follow tech trends. And that’s something that many industry analysts do, but our perspective is really interested in developers specifically and then practitioners more broadly. So, it’s not just, “Okay, these are things that are happening in tech that you care about if you’re a CIO,” but what tech things affect developers in terms of how they’re building software and why they want to build software and where they’re building software?


Corey: So, backing it up slightly because it turns out that I don’t know the answer to this either. What exactly is an industry analyst firm? And the reason I bring this up is I’ve been invited to industry analyst events, and that is entirely your colleague, James Governor’s, fault because he took me out for lunch at I think it was Google Next a few years ago and said, “Oh, you’re definitely an analyst.” “Okay, cool. Well, I don’t think I am. Why should I be an analyst?”


“Oh, because companies have analyst budgets.” “Oh, you said, analyst”—protip: Never get in the way of people trying to pay you to do things. But I still feel like I don’t know what an analyst is, in this sense. Which means I’m about to get a whole bunch of refund requests when this thing airs.


Kelly: I should hope not. But industry analysts, one of the jokes that we have around RedMonk is how do we explain to our families what an industry analyst is? And I think even Steve and James, who are RedMonk’s founders, they’ve been doing this for quite a long time, like, much longer than they ever want to admit that they do, and they still are like, “Okay, how do I explain this to my parents?” Or you know, anyone else who’s asking, and partly, it’s almost like a very—a term that you’ll see in the tech industry, but outside of it doesn’t really have that much, kind of, currency in the same way that you can tell someone that you’re like, maybe a business analyst or something like that, or any of those, almost like spy-like versions of analyst. I think was it The Hunt for Red October, the actual hero of that is an analyst, but not the type of analyst that I am in any way, shape or form.


But you know, industry analyst firms, specifically, it’s like we keep up on what tech is out there. People engage with us because they want to know what to buy for the things that they’re doing and the things that they’re building, or how to better create and sell the stuff that they are building to people who build software. So, in our case, it’s like, all right, what type of tools are developers using? And where does this particular tool that our company is building fit into that? And how do you talk about that with developers in a way that makes sense to them?


Corey: On some level, what I imagine your approach to this stuff is aligns somewhat with my own. Before you became an industry analyst, which I’m still not entirely sure I know what that is—I’m sorry, not your fault; just so many expressions of it out there—before you wound up down that path, you were a QA manager; you wound up effectively finding interesting bugs in software, documentation, et cetera. And, on some level, that’s, I think, what has made me even somewhat useful in the space is I’ll go ahead and try and build something out of something that a vendor has released, and huh, the documentation says it should work this way, but I try it and it breaks and it fails. And the response is always invariably the same, which is, “That’s interesting,” which is engineering-speak for, “What the hell is that?” I have this knack for stumbling over weird issues, and I feel like that aligns with what makes for a successful QA person. Is that directionally correct, or am I dramatically misunderstanding things and I’m just accident-prone?


Kelly: [laugh]. No, I think that makes a lot of sense. And especially coming from QA where it’s like, not just making sure that something works, but making sure that something doesn’t break if you try to break it in different ways, the things that are not necessarily the expected, you know, behaviors, that type of mindset, I think, for me translated very easily to, kind of, being an analyst. Because it’s about asking questions; it’s about not just taking the word of your developers that this software works, but going and seeing if it actually does and kind of getting your hands dirty, and in some cases, trying to figure out where certain problems are, or who broke the build, or why did the build break—which is always kind of a super fun mystery that I love doing—not really, but, like, everyone kind of has to do it—and I think that translates to the analyst world where it’s like, what pieces of these systems, or tech stacks, or just the way information is being conveyed about them is working or is not, and in what ways can people kind of maybe see things a different way that the people who are building or writing about these things did not anticipate?


Corey: From my position, and this is one of the reasons I sort of started down this whole path is if I’m trying to build something with a product or a platform—or basically anything, it doesn’t really matter what—and the user experience is bad, or there are bugs that get in my way, my default response—even now—is not, “Oh, this thing’s a piece of crap that’s nowhere near ready for primetime use,” but instead, it’s, “Oh, I’m not smart enough to figure out how to use it.” It becomes a reflection on the user, and they feel bad as a result. And I don’t like that for anyone, for any product because it doesn’t serve the product well, it certainly doesn’t serve the human being trying to use it and failing well, and from a pure business perspective, it certainly doesn’t serve the ability to solve a business problem in any meaningful respect. So, that has been one of the reasons that I’ve been tilting at that particular windmill for as long as I have.


Kelly: I think that makes sense because you can have the theoretically best, most innovative, going to change everyone’s lives for the better, product in the world, but if nobody can use it, it’s not going to change the world.


Corey: As you take a look at your time at RedMonk, which has been, I believe, four years, give or take?


Kelly: We’re going to say three to four.


Corey: Three to four? Because you’ve been promoted twice in your time there, let’s be very clear, and this is clearly a—


Kelly: That’s a very, very astute observation on your part.


Corey: It is a meteoric rise. And what makes that also fascinating from my perspective, is that despite being a company that is, I believe, 19 years old, you aren’t exactly a giant company that throws bodies at problems. I believe you have seven full-time employees, two of whom have been hired in the last quarter.


Kelly: That’s true. So, seven full-time employees and five analysts. So, we have—of that it’s five analysts, and we only added a fifth analyst at the beginning of this year, with Dr. Kate Holterhoff. [unintelligible 00:08:09], kind of, bring her on the team.


So, we had been operating with, like, kind of, six full-time employees. We were like, “We need some more resources in this area.” And we hired another analyst, which if you talk about, okay, we hired one more, but when you’re talking about hiring one more and adding that to a team of, like, four analysts, it’s such a big difference, just in terms of, kind of, resources. And I think your observation about you ca—we don’t just throw bodies at problems is kind of correct. That is absolutely not the way we go about things at all.


Corey: At a company that is taking the same model that The Duckbill Group does—by which I mean not raising a bunch of outside money is, as best I can tell—that means that you have to fall back on this ancient business model known as making more money than it costs to run the place every month, you don’t get to do this massive scaled out hiring thing. So, bringing on multiple employees at a relatively low turnover company means that suddenly you’re onboarding not just one new person, but two. What has that been like? Because to be very clear, if you’re hiring 20 engineers or whatnot, okay, great, and you’re having significant turnover, yeah, onboarding two folks is not that big of a deal, but this is a significant percentage of your team.


Kelly: It is. And so for us—and Kate started at the beginning of this year, so she’s only been here for a bit—but in terms of onboarding another analyst, this is something I haven’t done before, but, like, my colleagues have, whereas the other new member of our team, Morgan Harris, who is our Account Engagement Manager, and she is amazing, and has also, like, a very interesting background in client success in, like, fashion, which is, you know, awesome when I’m trying to figure out what [unintelligible 00:09:48] fit I need to do, we have someone in-house who can actually give me advice on that. But that’s not something that we have onboarded for that role very much in the past, so bringing on someone where they’re the only person in their role and, like, having to begin to learn the role. And then also to bring in another analyst where we have a little bit more experience onboarding analysts, it takes a lot of patience for everybody involved. And the thing I love about RedMonk and the people that I get to work with is that they actually have that patience and we function very well as, like, a team.


And because of that, I think things that could really have thrown us off course, like losing an Account Engagement Manager or onboarding one and then onboarding a new analyst, like, over the holidays, during a pandemic, and everything else that is happening, it’s going much more smoothly than it could have otherwise.


Corey: These are abnormal times, to be sure. It’s one of those things where it’s, we’re a couple years into a pandemic now, and I still feel like we haven’t really solved most of the problems that this has laid bare, which kind of makes me despair of ever really figuring out what that’s going to look like down the road.


Kelly: Yeah, absolutely. And there is very much the sense that, “Okay, we should be kind of back to normal, going to in-person conferences.” And then you get to an in-person conference, and then they all move back to virtual or, as in your case, you go to an in-person conference and then you have to sequester yourself away from your family for a couple of weeks to make sure that you’re not bringing something home.


Corey: So, I have to ask. You have been quoted as saying that 2022—for those listening, that is this year—is the year of documentation. You’re onboarding two new people into a company that does not see significant turnover, which means that invariably, “Oh, it’s been a while since we’ve updated the documentation. Whoops-a-doozy,” is a pretty common experience there. How much of your assertion that this is the year of documentation comes down to the, “Huh. Our onboarding stuff is really out of date,” versus a larger thing that you’re seeing in the industry?


Kelly: That is a great question because you never know what your documentation is like until you have someone new, kind of, come in with fresh eyes, has a perspective not only on, “Okay, I have no idea what this means,” or, “This is not where I thought it would be,” or, “This, you know, system is not working in any… in any way similar to anything I have ever seen in any other part of my, like, kind of, working career.” So, that’s where you really see what kind of gaps you have, but then you also kind of get to see which parts are working out really well. And not to spend, kind of, too much on that, but one of the best things that my coworkers did for me when I started was, Rachel Stephens had kept a log of, like, all the questions that she had as a new analyst. And she just, like, gave that to me with some advice on different things, like, in a spreadsheet, which I think is—I love spreadsheets so much and so does Rachel. And I think I might love spreadsheets more than Rachel at this point, even though she actually has a hat that says, “Spreadsheets.”


But when Kate started, it was fascinating to go through that and see what parts of that were either no longer relevant because the entire world had changed, or because the industry had advanced, or because there’s all these new things you need to know now that were not on the list of things that you needed to know three years ago. And then what other, even, topics belong down on that kind of list of things to know. So, I think documentation is always a good, like, check-in for things like that.


But going back to, like, your larger question. So, documentation is important, not just because we happened to be onboarding, but a lot of people, I think once they no longer could be in the office with people and rely on that kind of face-to-face conversation to smooth over things began, I think, to realize how essential documentation was to just their day-to-day, kind of, working lives. So, I think that’s something that we’ve definitely seen from the pandemic. But then there are certainly other signals specific to the software industry, which we can go into or not depending on your level of interest.


Corey: Well, something that I see that I have never been a huge fan of in corporate life—and it feels like it is very much a broad spectrum—has been that on one side of the coin, you have this idea that everything we do is bespoke and we just hire smart people and get out of their way. Yeah, that’s more uncontrolled anarchy than it is a repeatable company process around anything. And the other extreme is this tendency that companies have, particularly the large, somewhat slow-moving companies, to attempt to codify absolutely everything. It almost feels like it derives from the what I believe to be mistaken belief that with enough process, eventually you can arrive at the promised land where you don’t have to have intelligent, dynamic people working behind things, you can basically distill it down to follow the script and push the buttons in the proper order, and any conceivable outcome is going to be achieved. I don’t know if that’s accurate, but that’s always how it felt when you start getting too deeply mired in documentation-slash-process as almost religion.


Kelly: And I think—you know, I agree. There has to be something between, “All right, we don’t document anything and it’s not necessary and we don’t need it.” And then—


Corey: “We might get raided by the FBI. We want nothing written down.” At which point it’s like, what do you do here? Yeah.


Kelly: Yeah. Leave no evidence, leave no paper trail of anything like that. And going too far into thinking that process is absolutely everything, and that absolutely anyone can be plugged into any given role and things will be equally successful, or that we’ll just be automated away or become just these, kind of, automatons. And I think that balance, it’s important to think about that because while documentation is important, and you know, I will say 2022, I think we’re going to hear more and more about it, we see it more as an increasingly valuable thing in tech, you can’t solve everything with documentation. You can use it as the, kind of, duct tape and baling wire for some of the things that your company is doing, but throwing documentation at it is not going to fix things in the same way that throwing engineers at a problem is not going to fix it either. Or most problems. I mean, there are some that you can just throw engineers at.


Corey: Well, there’s a company wiki, also known as where documentation goes to die.


Kelly: It is. And those, like, internal wikis, as horrible as they can be in terms of that’s where knowledge goes to die as well, places that have nothing like that, it can be even more chaotic than places that are relying on the, kind of, company internal wiki.


Corey: So, delving into a bit of a different topic here, before you were in the QA universe, you were what distills down to an academic. And I know that sometimes that can be interpreted as a personal attack in some quarters; I assure you, despite my own eighth grade level of education, that is not how this is intended at all. Your undergraduate degree was in medieval history—or medieval studies and your PhD was in English. So, a couple of questions around that. One, when we talk about medieval studies, are we talking about writing analyst reports about Netscape Navigator, or are we talking things a bit later in the sweep of history than that?


Kelly: I appreciate the Netscape Navigator reference. I get that reference.


Corey: Well, yeah. Medieval studies; you have to.


Kelly: Medieval studies, when you—where we study the internet in the 1990s, basically. I completely lost the line of questioning that you’re asking because I was just so taken by the Netscape Navigator reference.


Corey: Well, thank you. Started off with the medieval studies history. So, medieval studies of things dating back to, I guess, before we had reasonably recorded records in a consistent way. And also Twitter. But I’m wondering how much of that lends itself to what you do as an analyst.


Kelly: Quite a bit. And as much as I want to say, it’s all Monty Python references all the time, it isn’t. But the disciplinary rigor that you have to pick up as a medievalist or as anyone who’s getting any kind of PhD ever, you know, for the most part, that very much easily translated to being an analyst. And even more so tech culture is, in so many ways, like, enamored—there’s these pop culture medieval-isms that a lot of people who move in technical circles appreciate. And that kind of overlap for me was kind of fascinating.


So, when I started, like, working in tech, the fact that I was, like, writing a dissertation on Lord of the Rings was this little interesting thing that my coworkers could, like, kind of latch on to and talk about with me, that had nothing to do with tech and that had nothing to do with the seemingly scary parts of being an academic.


Corey: This episode is sponsored in part by our friends at Vultr. Spelled V-U-L-T-R because they’re all about helping save money, including on things like, you know, vowels. So, what they do is they are a cloud provider that provides surprisingly high performance cloud compute at a price that—while sure, they claim it’s better than AWS pricing—and when they say that they mean it is less money. Sure, I don’t dispute that, but what I find interesting is that it’s predictable. They tell you in advance on a monthly basis what it’s going to cost. They have a bunch of advanced networking features. They have nineteen global locations and scale things elastically. Not to be confused with openly, because apparently elastic and open can mean the same thing sometimes. They have had over a million users. Deployments take less than sixty seconds across twelve pre-selected operating systems. Or, if you’re one of those nutters like me, you can bring your own ISO and install basically any operating system you want. Starting with pricing as low as $2.50 a month for Vultr cloud compute, they have plans for developers and businesses of all sizes, except maybe Amazon, who stubbornly insists on having something to scale all on their own. Try Vultr today for free by visiting vultr.com/screaming, and you’ll receive $100 in credit. That’s V-U-L-T-R dot com slash screaming.


Corey: I want to talk a little bit about the idea of academic rigor because to my understanding, in the academic world, the publication process is… I don’t want to say it’s arduous. But if people subjected my blog posts to anything approaching this, I would never write another one as long as I lived. How does that differ? Because a lot of what I write is off-the-cuff stuff—and I’m not just including tweets, but also tweets—whereas academic literature winds up in peer-reviewed journals and effectively expands the boundaries of our collective societal knowledge as we know it. And it does deserve a different level of scrutiny, let’s be clear. But how do you find that shifts given that you are writing full-on industry analyst reports, which is something that we almost never do on our side, just honestly, due to my own peccadilloes?


Kelly: You should write some industry reports. They’re so fun. They’re very fun.


Corey: I am so bad at writing the long-form stuff. And we’ve done one or two previously, and each time my business partner had to basically hold my nose to the grindstone by force to get me to ship, on some level.


Kelly: And also, I feel like you might be underselling the amount of writing talent it takes to tweet.


Corey: It depends. You can get in a lot more trouble tweeting than you can in academia most of the time. Every Twitter person is Reviewer 2. It becomes this whole great thing of, “Well, did you consider this edge corner case nuance?” It’s, “I’ve got to say, in 280 characters, not really. Kind of ran out of space.”


Kelly: Yeah, there’s no space at all. And it’s not what that was intended for. But going back to your original question about, like, you know, academic publishing and that type of process, I don’t miss it. And I have actually published some academic pieces since I became an analyst. So, my book finally came out after I had started as—it came out at the end of 2019 and I had already been at RedMonk for a year.


It’s an academic book; it has nothing to do with being an industry analyst. And I had an essay come out in another collection around the same time. So, I’ve had that come out, but the thing is, the cycle for that started about a year earlier. So, the timeframe for getting things out, especially in the humanities, can be very arduous and frustrating because you’re kind of like, “I wrote this thing. I want it to actually appear somewhere that people can read it or use it or rip it apart if that’s what they’re going to do.”


And then the jokes that you hear on Twitter about Reviewer 2 are often real. A lot of academic publishing is done in, like, usually, like, a double-blind process where you don’t know who’s reviewing you and the reviewers don’t know who you are. I’ve been a reviewer, too, so I’ve been on that side of it. And—


Corey: Which why you run into the common trope of people—


Kelly: Yes.


Corey: —suggesting, “Oh, you don’t know what you’re talking about. You should read this work by someone else,” who is, in fact, the author they are reviewing.


Kelly: Absolutely. That I think happens even when people do know [laugh] whose stuff they’re reviewing. Because it happens on Twitter all the time.


Corey: Like, “Well, have you gotten to the next step beyond where you have a reviewer saying you should wind up looking at the work cited by”—and then they name-check themselves? Have we reached that level of petty yet, or has that still yet to be explored?


Kelly: That is definitely something that happens in academic publishing. In academic circles, there can be these, like, frenemy relations among people that you know, especially if you are in a subfield that is very tiny. You tend to know everybody who is in that subfield, and there’s, like, a lot of infighting. And it does not feel that far from tech, sometimes. [unintelligible 00:21:52] you could look at the whole tech industry, and you look at the little areas that people specialize in, and there are these communities around these specializations that—you can see some of them on Twitter.


Clearly, not all of them exist in the Twitterverse, but in some ways, I think that translated over nicely of, like, the year-long publication and, like, double peer-review process is not something that I have to deal with as much now, and it’s certainly something that I don’t miss.


Corey: You spent extensive amounts of time studying the past, and presumably dragons as well because, you know, it’s impossible to separate medieval studies from dragons in my mind because basically, I am a giant child who lives through fantasy novels when it comes to exploring that kind of past. And do you wind up seeing any lessons we can take from the things you have studied to our current industry? That is sort of a strange question, but they say that history doesn’t repeat, but it rhymes, and I’m curious to how far back that goes. Because most people are citing, you know, 1980s business studies. This goes centuries before that.


Kelly: I think the thing that maybe stands out for me the most, the way that you framed that, is, when we look at the past and we think of something like the Middle Ages, we will often use that term and be like, “Okay, here’s this thing that actually existed, right?” Here’s, like, this 500 years of history, and this is where the Middle Ages began, and here’s where it ended, and this is what it was like, and this is what the people were like. And we look at that as some type of self-evident thing that exists when in reality, it’s a concept that we created, that people who lived in later ages created this concept, but then it becomes something that has real currency and, really, weight in terms of, like, how we talk about the world.


So, someone will say, you know, I like that film. It was very medieval. And it’ll be a complete fantasy that has nothing to do with Middle Ages but has a whole bunch of these tropes and signals that we translate as the Middle Ages. I feel like the tech industry has a great capacity to do that as well, to kind of fold in along with things that we tend to think of as being very scientific and very logical but to take a concept and then just kind of begin to act as if it is an actual thing when it’s something that people are trying to make a thing.


Corey: Tech has a lot of challenges around the refusing to learn from history aspect in some areas, too. One of the most common examples I’ve heard of—or at least one that resonated the most with me—is hiring, where tech loves to say, “No one really knows how to hire effectively and well.” And that is provably not true. Ford and GM and Coca-Cola have run multi-decade studies on how to do this. They’ve gotten it down to a science.


But very often, we look at that in tech and we’re trying to invent everything from first principles. And I think, on some level, part of that comes out as, “Well, I wouldn’t do so well in that type of interview scenario, therefore, it sucks.” And I feel like we’re too willing in some cases to fail to heed the lessons that others have painstakingly learned, so we go ahead and experiment on our own and try and reinvent things that maybe we should not be innovating around if we’re small, scrappy, and trying to focus on one area of the industry. Maybe going back to how we hire human beings should not be one of those areas of innovation that you spend all your time on as a company.


Kelly: I think for some companies, I think it depends on how you’re hiring now. It’s like, if your hiring practices are horrible, like, you probably do need to change them. But to your point, like, spending all of your energy on how are we hiring, can be counterproductive. Am I allowed to ask you a question?


Corey: Oh, by all means. Mostly, the question people ask me is, “What the hell is wrong with you?” But that’s fine, I’m used to that one, too. Bonus points if you have a different one.


Kelly: Like, your hiring processes at Duckbill Group. Because you’ve hired, you know, folks recently. How would you describe that? Like, what parts of that do you think… are working really well?


Corey: The things that have worked out well for us have been being very transparent at the beginning around things like comp, what the job looks like, where it starts, where it stops, what we expect from people, what we do not expect from people, so there are no surprises down that path. We explain how many rounds of interviews there are, who they’ll be meeting with at each stage. If we wind up declining to continue with a candidate in a particular cycle, anything past the initial blind resume submission, we will tell them; we don’t ghost people. Full stop. Originally, we wanted to wind up responding to every applicant with a, “Sorry, we’re not going to proceed,” if the resume was a colossal mismatch. For example, we’re hiring for a cloud economist, and we have people with PhDs in economics, and… that’s it. They have not read the job description.


And then when we started doing that, people would argue with us on a constant basis, and it just became a soul-sucking time sink. So, it’s unfortunate, but that’s the reality of it. But once we’ve had a conversation with you, doing that is the right answer. We try and move relatively quickly. We’re honest with folks because we believe that an interview is very much a two-way street.


And even if we decline to proceed—or you decline to proceed with us; either way—you should still think well enough of us that you would recommend us to people for whom it might be a fit. And if we treat you like crap, you’re never going to do that. Not to mention, I just don’t like making people feel like crap as a general rule. So, that’s stuff that has all come out of hiring studies.


So has the idea of a standardized interview. We don’t have an arbitrary question list that we wind up smacking people with from a variety of different angles, where if you drew the lucky questions, you’ll do fine. We also don’t set this up as pass-fail; we tend to presume that by the time you’ve been around the industry for as long as the role’s years of experience generally expect, we’re not going to suddenly unmask you as not knowing how computers work through some ridiculous series of trivia questions. We don’t ask those.


We also make the interview look a lot like what the job is, which is apparently a weird thing. In a lot of tech companies it’s, “Go and solve whiteboard algorithms for us.” And then, “Great. Now, what’s the job?” “It’s going to be moving around some CSS nonsense.”


It’s like, first, that is very different, and secondly, it’s way harder to move CSS around than to implement quicksort, for most folks. At least for me. So, it’s… yeah, it just doesn’t measure the right things. That’s our approach. I’m not saying we’ve cracked it by any means, to be very clear here. This is just what we have found that sucks the least.


Kelly: Yeah, I think the ‘we’re not going to do obscure whiteboarding exercises’ is probably one of the key things. I think some people are still very attached to those for personal reasons. And I think the other thing I liked about what you said is to make the interview as similar to the job as you can, which, based on my own getting-hired process at RedMonk and then, to some level, being involved in hiring our, kind of, new hires, I really like that. And I think that for me, the process was like, okay, you submit your application. There’d be—I think I had to do a writing sample.


But then it was like, you get on a call and you talk to Steve. And then you get on a call and you talk to James. And talking to people is my job. Like for the most part. I write things, but it’s mostly talking to people, which you may not believe by the level of articulate, articulate-ness, I am stumbling my way through in this sentence.


And then the transparency angle. I think it’s something that most companies are not—may not be able to approach hiring in such a transparent way for whatever reason, but at least the motion towards being transparent about things like salaries, as opposed to that horrible salary negotiation part, which can be a nightmare for people, especially if there’s this code of silence around what your coworkers or potential coworkers are making.


Corey: We learned we were underpaying our cloud economists, so we wound up adjusting the rate advertised; at the same time, we wound up improving the comp for the existing team because, “Yeah, we’re just going to make you apply again to be paid a fair wage for what you do?” No. Not how we play these games.


Kelly: Yeah, which is, you know, one of the things that we’re seeing in the industry now. Of course, the term ‘The Great Resignation’ is out there. But with that comes, you know, people going to new places partly because that’s how they can get, like, the salary increase or whatever it is they want, among other reasons.


Corey: Some of the employees who have left have been our staunchest advocates, both for new applicants as well as new clients. There’s something to be said for treating people as you mean to go on. My business partner and I have been clear that we aspire for this to be a 20, 25-year company, and you don’t do that by burning bridges.


Kelly: Yeah. Or just assuming that your folks are going to stay for three years and move on, which tends to be kind of the lifespan of how long people stay.


Corey: Well, if they do, that’s fine because it is expected. I don’t want people to wind up feeling that they owe us anything. If it no longer makes sense for them to be here because they’re not fulfilled or whatnot (and this has happened to us before), we’ve tried to change their mind, talked to them about what they wanted, and, okay, we can’t offer what you’re after. How can we help you move on? That’s the way it works.


And, like, one thing we do in interviews—and this is something I very much picked up from the RedMonk culture as well—is a lot of writing, so there’s a writing sample: here’s a list of theoretical findings for an AWS bill, if we’re talking about a cloud economist role. Great. Now, the next round is people are going to talk to you about that, and we’re going to roleplay as if we were a client. But let’s be clear, I won’t tolerate abusive behavior from clients toward our team; I will fire a client if it happens. So, we’re not going to wind up bullying the applicant and smacking ‘em around on stuff—or smacking them around, to be clear. That was an ‘em, not a him, let’s be clear.


It’s a problem of not wanting to even set the baseline expectation that you just have to sit there and take it when clients decide to go down unfortunate paths. And I believe it’s happened all of maybe once in our five-and-a-half-year history. So, why would you ever sit around and basically have a bunch of people chip away at an applicant’s self-confidence? By virtue of being in the room and having the conversation, they are clearly baseline competent at a number of things. Now, it’s just a question of fit and whether their expression of skills matches what we’re doing right now as a company.


At least that’s how I see it. And I think that there is a lot of alignment here, not just between our two companies, but between the kinds of companies I look at and can actively recommend that people go and talk to.


Kelly: Yeah. I think that emphasis on, it’s not just about what a company is doing—like, what is their business, you know, how they’re making money—but how they’re treating people, like, on their way in and on the way out. I don’t think you can oversell how important that is.


Corey: Culture is what you wind up with instead of what you intend. And I think that’s something that winds up getting lost a fair bit.


Kelly: Yeah, culture is definitely not something you can just go buy, right? [laugh], where you can, like—this is what our culture will be.


Corey: No, no. But if there is, “Culture-in-a-Box: you may not be able to buy it, but I would love to sell it to you,” seems to be the watchword of a number of different companies out there. Kelly, I really want to thank you for taking the time to speak with me today. If people want to learn more, where can they find you?


Kelly: They can find me on Twitter at @drkellyannfitz, that’s D-R-K-E-L-L-Y-A-N-N-F-I-T-Z—I apologize for having such a long Twitter handle—or for my RedMonk work and that of my colleagues, you can find that at redmonk.com.


Corey: And we will, of course, include links to that in the [show notes 00:33:14]. Thank you so much for your time. I appreciate it.


Kelly: Thanks for having me.


Corey: Kelly Fitzpatrick, senior industry analyst at RedMonk. I’m Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you’ve enjoyed this podcast, please leave a five-star review on your podcast platform of choice whereas if you’ve hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment telling me how terrible this was and that we should go listen to Reviewer 2’s podcast instead.


Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.


Announcer: This has been a HumblePod production. Stay humble.