How to Grade DevOps Teams with Nicole Forsgren, PhD

Episode Summary

Nicole Forsgren grew up in a small farm town in Idaho. After working as a programmer, a software engineer, and a systems administrator at IBM, she went back to school to get her PhD in Management Information Systems. Now, she leads research and strategy at Google and oversees the production of the annual State of DevOps Report. Join Corey and Nicole as they discuss what it’s like to put together said reports, why people are so passionate about their DevOps team’s unique approach, the four metrics you can use to grade DevOps teams, how to scale DevOps teams, and more.

Episode Show Notes & Transcript

About Nicole Forsgren, PhD

Dr. Nicole Forsgren does research and strategy at Google Cloud following the acquisition of her startup DevOps Research and Assessment (DORA) by Google. She is co-author of the Shingo Publication Award winning book Accelerate: The Science of Lean Software and DevOps, and is best known for her work measuring the technology process and as the lead investigator on the largest DevOps studies to date. She has been an entrepreneur, professor, sysadmin, and performance engineer. Nicole’s work has been published in several peer-reviewed journals. Nicole earned her PhD in Management Information Systems from the University of Arizona, and is a Research Affiliate at Clemson University and Florida International University.

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, cloud economist Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.



Corey Quinn: This week's episode of Screaming in the Cloud is sponsored by X-Team. X-Team is a 100% remote company that helps other remote companies scale their development teams. You can live anywhere you like and enjoy a life of freedom while working with first-class companies. I gotta say, I'm pretty skeptical of "remote work" environments, so I got on the phone with these folks for about half an hour, and let me level with you: I believe in what they're doing, and their story is compelling. If I didn't believe that, I promise you I wouldn't say it. If you would like to work for a company that doesn't require that you live in San Francisco, take my advice and check out X-Team. They're hiring both developers and DevOps engineers. Check them out at the letter x, dash, team, dot com, slash cloud. That's x-team.com/cloud to learn more. Thank you for sponsoring this ridiculous podcast.


Corey Quinn: Welcome to Screaming in the Cloud. I'm Corey Quinn. I am joined this week by Dr. Nicole Forsgren. Nicole, welcome to the show.


Nicole Forsgren: Thanks so much for having me.


Corey Quinn: Thank you for joining me. So, you work at Google Cloud these days as a VP of Research and Strategy.


Nicole Forsgren: I mean, let's call that aspirational. I'm not a VP just yet.


Corey Quinn: I understand that Google's org chart has not caught up with your magnificence. Other people are willing to cut them slack. I am not. You are a VP to me. You will remain a VP, and eventually the business cards will reflect that very bright reality.


Nicole Forsgren: I'll take it. Yeah. Right now, my title is research and strategy.


Corey Quinn: Yes, you've done so much that it's difficult to even figure out where to start with what you've done and who you are, so let's take it in stages. You somewhat recently wrote the book Accelerate: The Science of Lean Software and DevOps, which is a fascinating book. I recommend that people check it out if they're at all interested in, I guess, putting a little bit of data to anecdata. But that's not where you really began. To do that, let's go back to the very beginning. Who are you?


Nicole Forsgren: That involves me just starting out in a small farm town in Idaho, but maybe we want to go farther than that. It's interesting, because some people are like, "Oh, you're just a researcher. You're just an academic." But I'm glad you asked this, because I started out as a software engineer. Well, I guess I started as a programmer. I was on mainframe systems, but then I was a software engineer at IBM. So, I was developing systems, and then, and I swear this happens so often, I had to maintain my own systems. So, then I was a sysadmin. I was running my own systems, and then I ended up doing some consulting for a bit, because I wanted to help other people run their systems, and build their systems, and solve more interesting problems.


And then, I actually ended up in hardware for a bit. I was running RAID, which is kind of a blast from the past, right? We don't do RAID the same way we used to do RAID.


Corey Quinn: Well, not on purpose anyway.


Nicole Forsgren: I know, right? And then, I ended up going to get my PhD, because cycling through some of these consulting problems, and even solving some of the problems in larger organizations, because I was bouncing back and forth between consulting and IBM for those last several years, it felt like I was answering many of the same complex, organizational problems in the same way. And in particular, when I was going to management and suggesting solutions, many times they were saying, "Oh, well that won't work here," or, "Well, I know that worked there, but that won't solve this problem."


Nicole Forsgren: And I was thinking, "Well, there has to be some way to solve this that's more generalizable. I wonder if there's a classic problem that can be solved in similar ways." So, that kind of led to the PhD and doing some research.


Corey Quinn: What is your PhD in?


Nicole Forsgren: So, my PhD is in MIS, Management Information Systems. And the reason I chose MIS as opposed to computer science is, I liked the fact that I could link technology and computer-y things with business outcomes, right? MIS is inherently an interdisciplinary field, and back in the day, it was unique because it was really specifically linking and tying computer science concepts to business outcomes. That really is what I've done for over a decade now: find ways to deliver business outcomes, or organizational outcomes, or team outcomes from computer types of things, like capabilities and practices. So, this is such a hipster term, but it's like, "I was doing it before it was called DevOps."


Nicole Forsgren: And really, I kind of was, so I started doing my research in this area in '07, which is pretty parallel to a lot of the DevOps movement. And then, I finished my PhD in '08.


Corey Quinn: Excellent. So, one could say almost that you've brought ivory tower academia into the streets?


Nicole Forsgren: Actually, yeah. In many ways I did. And also, that was in parallel with a handful of other academically rigorous research. So, there were a handful of people, about the same time I was doing my research, at IBM Watson Labs, right? So Kandogan, and Maglio, and Haber, a handful of people there, were studying sysadmins, specifically some of their work practices. I started a bunch of my research with sysadmins as well, going to the LISA Conference; a few years later, I chaired LISA. Then, I expanded my research to include developers and other engineers, software engineers, and a bunch of my work was focusing on how capabilities and practices in tooling, or automation, or process, or culture had impacts at the team, individual, and then organizational level, which, if we think about it, kind of is how we think about and define DevOps now, right?


Nicole Forsgren: It's tooling and automation, it's process, and it's culture, and how that has impacts at largely the software development and delivery, and then organizational level, how we deliver value.


Corey Quinn: All of that is made manifest in this year's State of DevOps Report, an incredibly thorough, academically researched paper, except that a human being can actually read it. That's probably the best way to frame it from my perspective.


Nicole Forsgren: Yes. I often joke that I speak two and a half languages, English, academic English, and a little bit of Spanish.


Corey Quinn: Also, add math to that list.


Nicole Forsgren: Yes, yes, a little bit of math, more statistics than other types of math. And what we try to do is we try to take this really academically rigorous work and translate it, not just translate it, but also make it very, very accessible to people so that they can use it. Right? So, I've been leading, and running, and conducting the State of DevOps Reports for six years now, starting in 2014, now through 2019, so these reports are super accessible. I joke it's like an adult picture book, right? Like we have large type, we have graphics, we have pictures. It's very easy to flip through. It's about 80 pages, but it's like very large print. This is not like dense text.


Corey Quinn: Oh, and it's so gorgeously designed. I had to triple check to validate that you folks were still part of Google.


Nicole Forsgren: I have to say my copy editor and my designer are fantastic. Cheryl Coupe and Siobhan Doyle are unbelievable, unbelievable to work with. I will say the last couple of weeks of copyedit and design are a little intense. They're a little rough, but they turn around the most gorgeously designed work and they really helped me. We worked very closely together to make sure that it's very accessible. It's easy to read, it's easy to navigate. We're working to put out a couple of pages of an executive summary as well. So, if you just want to like flip through and find something that's really quick, that's available as well.


And then, in addition to this, like you mentioned, my co-authors for the book, Jez Humble and Gene Kim, and I also pulled together the first four years of the research into something that's a little more detailed, right? That includes additional descriptions of the capabilities we've researched, additional information about the outcomes that we've measured, more detailed information on the statistical methods and what they mean, and the methodology, where the data comes from, and why we choose the statistical methods that we do. And then, part three included a contribution by previous Shingo winners Karen Whitley Bell and Steve Bell, a case study out of ING Netherlands. And then, the book itself just won a Shingo. And I will take that-


Corey Quinn: Congratulations.


Nicole Forsgren: Thank you. It's the first time, as far as we can tell, that a Shingo has ever been awarded to anything in technology. Now, I will say that came out of 2014 to 2017, so we have two more State of DevOps reports, research projects, that have been published since then. So, my editor keeps pinging me, asking for a second edition. As soon as I take a few naps, we will work on that. And I did want to mention really quickly, I highlighted the authors for the book; for this year's report, I led, I was first author, and Dustin Smith is a researcher who joined this year's report.


He was fantastic. He has a PhD and has been doing top stats work for five years. So, he was wonderful. And joining this year's report, Jez Humble was third author, and then Jessie Frazelle joined as an author this year as well. She was wonderful, wonderful to work with.


Corey Quinn: She's been a previous guest on this show, and we'll absolutely have to have her back to talk more about some of this.


Nicole Forsgren: Yeah, I think she's going to join us on another podcast where we will dig into all sorts of cloud and open source excitement that we covered this year.


Corey Quinn: Excellent. Excellent. So, before we dive in too far into the intricacies of this year's report-


Nicole Forsgren: Oh, there are so many things.


Corey Quinn: And there are, but the problem I've seen in most reviews and most discussions around the State of DevOps report is that no one starts off with a primer for someone who's never heard of it before. So, from that perspective, guide me through it. What is the Accelerate State of DevOps Report? Where did it come from? What is it for, and why do I care?


Nicole Forsgren: So, what would you say it is you do here, Nicole?


Corey Quinn: Exactly.


Nicole Forsgren: So, the nice thing about this report, and the thing that makes this so unique and so different, is that this is not just another vendor report, right? We're not selling a technology; we do not talk about vendor tooling or products anywhere in the report. I think there's one line that lists a whole bunch of tools as an example, right? What we do instead is we investigate the capabilities and practices that are predictive. So, if someone says, "I'm doing the DevOps," or whatever you want to call it, find and replace, whatever your company is doing, whether it's technology transformation, or digital transformation, or DevOps, you might say, "I want to know what types of things are actually impactful, which things are actually predictive of success in a statistically meaningful way."


Now, go back a little bit, right? Hit rewind on this podcast. Remember how I said I used to do consulting, or I used to do these things in my organization, and my manager always said, "Ah, that's not gonna work here"? Well, this helps answer that. It says, in a statistically meaningful way, these things will actually have an impact. There's a high likelihood this will work. So, this research takes an academically rigorous approach; I designed this from a research design, PhD-level standpoint. We designed this research to test a bunch of hypotheses to say, "According to the research, according to existing literature, according to lots of other things, these types of things have a good likelihood of making a difference in lots of different types of organizations. What will actually work?"


Then, we collect a bunch of data, and then we see, "Okay, what works in a statistically meaningful way? What does the evidence show?" Now, I'm going to break that down a bit. I say capabilities and practices, but we don't test tools. The reason we don't test tools is because, well, first of all, there's a million different tools; that's going to be too hard. Also, tools change, right? Feature sets change, capabilities change, lots of different things change. So, instead, what we do is we test capabilities and practices, because what that does is it gives you an evaluative framework. So, then you can go back. You can go back to your organization.


You can go back to your team, whether you're an IC or you're a leader, and you can say, "Okay, these types of things will work. These types of things have a high likelihood of working." Okay, so take CI: CI has a high likelihood of meaning that you will be more successful in developing and delivering software with speed and stability. What does CI mean? Everyone has redefined CI to be their own special thing, so in order for CI to be impactful, what does it have to mean? It means when you check in code, it results in a build of the software. When you check in code, automated tests are run. You need to have automated builds and tests running successfully every day, and developers need to see those results every day.


Those four things need to be happening. Now, anyone can go back to their CI tool set of choice and they can say, "Are these four things happening?"
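To make those four signals concrete, here is a minimal sketch of that evaluative checklist in Python. The signal names and boolean values are hypothetical stand-ins; in practice they would come from your own CI tooling, whatever that happens to be.

```python
# A minimal sketch of the four CI signals described above, as an evaluative
# checklist. The names and values are hypothetical stand-ins; in practice
# they would be derived from your own CI system.
ci_signals = {
    "checking in code triggers a build": True,
    "checking in code runs automated tests": True,
    "automated builds and tests succeed every day": False,
    "developers see those results every day": True,
}

missing = [name for name, happening in ci_signals.items() if not happening]
if missing:
    print("CI is unlikely to be impactful; missing:")
    for name in missing:
        print(" -", name)
else:
    print("All four CI signals are in place.")
```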


Corey Quinn: What I find fascinating about all of this, as I read it, is that, again, first you brought the data. So, every time I see someone starting to argue from an anecdote, or pull a "well, actually" against anything that you ever list in these reports, it's screamingly funny to me. I just immediately cringe and hide behind the tarp, because there's going to be a bloody red mist where that person used to be by the time you're finished with them, metaphorically speaking. You bring the data and they're [crosstalk 00:15:58]-


Nicole Forsgren: I can be polite.


Corey Quinn: You are.


Nicole Forsgren: But, yeah, I've got data.


Corey Quinn: Yes, and what you say is right.


Nicole Forsgren: And we retest many things every year. We revalidate things several ... Some things have been revalidated for six years. Now, not everything needs to be revalidated every single year. We rotate them in and out, but we also do the revalidation thing, right? So, it's like, this really has been revalidated for several years. You can fight with me if you want, but if it's not working for you, maybe you're not actually doing it. Maybe it's not actually automated. Maybe it's hidden behind a manual gate. Like, you're putting it in ServiceNow and you're waiting for a person to click it. I love you, but I award you no points, may God have mercy on your soul.


Corey Quinn: Exactly.


Nicole Forsgren: Like, citing Billy Madison. What part of this thing is not actually working? What part does not match?


Corey Quinn: Right. What I like about this, and I did a lot of digging into it last year when I saw it and really paid attention to it, is that you come up with this idea of performance profiles, where you talk about high performing teams, elite performing teams, low performing teams, and I always wondered, didn't get the time-


Nicole Forsgren: People get real defensive, people get real defensive.


Corey Quinn: Well, that's what I wanted to ask you about, to some extent. Very few people self-identify as, "Yeah, as far as performance goes, our company is complete crap. Thank you for asking." People like to speak aspirationally about their own work, and unless you wind up working at Uber, generally you don't show up hoping to do a crappy job today at most companies. So, there's a question around how you wind up assessing whether a team is high performing, low performing, et cetera. Since this is all based on survey responses, you don't get to actually look at the output of teams other than what people self-report. Correct?


Nicole Forsgren: Right. Or, you know what also is interesting? Occasionally these bands change, and people are like, "Why did it change? How did it change? This should be a static low, medium, high, elite performance category. I need to have a goal to point to, because then I can arrive and I can be done." I've had people tell me that, and I'm like, "But that's not how the world works. The industry is changing, the industry is moving. We don't make software today like we made software 20 years ago. Why would that make sense?" And so, I love this question, because what we do is we collect data along four key metrics. So, we've actually been collecting this data for six years now, and it's interesting.


ThoughtWorks actually started calling them the four key metrics, and enterprises around the world, across all types of industries, have started tracking these and using them as outcome metrics to track their technology transformations. Now, these four metrics fall into two categories: speed metrics and stability metrics. I'm going to come back to these, but I'll explain the process really quickly first. What I do every year, like I said, is not just arbitrarily decide, here's a line, this is low performance; here's another line, this is medium performance and here's where you are; and this is high performance, and here's where we are.


And then, it's like, set it and forget it, and let everyone decide where they are? Because the industry changes, why would it make sense for me to just make something up and let everyone rate themselves against that? We are very data-driven. We want to see what's happening. What's important is for us to set and collect metrics that are outcome metrics. So, we use speed and stability. The reason we choose speed and stability is because they are system-level outcome metrics. We're talking about the DevOps, right? We're talking about pulling together groups with seemingly opposing goals. Developers want to push code as often as possible, which introduces change, and possibly instability, into the systems.


You have operators, sysadmins, who want to have stability in systems, which means they might want to reject changes. They may want to reject code. So, can you see, Corey, how it makes sense that we want to have these two kinds of metrics? The goal of an organization is to deliver value, but you also want to have stable systems. So, we want to have both of those metrics in place, right? It's like a yin and a yang. We capture both of these because if you're only pushing code, that doesn't help. But if you only have stable systems, if I only ever say no, then I never get changes. And it's not just features; it's things like keeping up with compliance and regulatory changes.


It's keeping up with security updates, keeping up with patches. So, I capture these four metrics. Okay, here are my four metrics. For speed, I've got deployment frequency: how often do I push code? This is important to developers; it's important to infrastructure engineers, right? I also have lead time for changes: how long does it take me to get code through my system? I measure this as code commit to code running in production. Now, from the stability point of view, I've got time to restore service. So, how long does it generally take to restore a service?


Anytime I have any type of service incident, or a defect that impacts my service users, like an unplanned outage or a service impairment. And then, I've got change failure rate; that's my fourth metric, my other stability metric. That's the percentage of changes to production that result in any kind of degraded service, anytime it requires someone's attention: a service impairment, a service outage, anytime it requires remediation, like a hotfix, a rollback, a fix forward, a patch. So, what I do is take, like I mentioned, a very data-driven approach. I take these four metrics, I throw them in the hopper, and I see how they group. It's called cluster analysis, because I want to see how they cluster.
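As a toy illustration of that clustering step, here is a sketch assuming scikit-learn is available. The team profiles below are invented, and the report's actual statistical methodology is more involved than a plain k-means; this only shows the shape of the idea.

```python
# Toy sketch of clustering teams on the four key metrics. Assumes scikit-learn;
# the data below is invented for illustration, not drawn from the report.
import numpy as np
from sklearn.cluster import KMeans

# Rows are teams; columns are deploys per day, lead time for changes (days),
# time to restore service (hours), and change failure rate.
teams = np.array([
    [10.0,  0.5,  0.5, 0.05],  # fast and stable
    [ 8.0,  0.8,  1.0, 0.10],
    [ 0.2, 30.0, 72.0, 0.50],  # slow and unstable
    [ 0.1, 45.0, 96.0, 0.55],
])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(teams)
print(labels)  # teams whose metrics "group well together" share a label
```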


And what I have seen for the last six years in a row is that these four metrics cluster in distinct groupings. This year, they fell into four distinct groups. So, you've got a group at the high end where all four metrics group well together. And when I say they group well together, that means deployment frequency is fast: you're deploying on demand. Your lead time for changes is less than a day. Your time to restore service is less than an hour. Your change fail rate is low, between zero and 15%. So, your elite performers are optimizing for all four, right? You're going fast and your stability is good. Okay, so I've got a group up there.


Then, I've got a gap. Then, I've got a group, a cluster. Then, I've got a gap. Then, I've got another cluster. By the way, all of these groups, these clusters, were statistically significant: significantly similar within each group and different from the other groups. So, what that tells me is that speed and stability don't have trade-offs. You don't have to sacrifice speed for stability, or stability for speed. Now, that's not what we heard for a long time. We used to think that in order to be stable, you had to slow down, but that's not what we see, and that's not what we've seen for six years now. The low performance group, their deployment frequency is between once a month and once every six months.


Lead time for changes to get through that pipeline is the same: between a month and six months. Their time to restore service is between a week and a month. And then, their change fail rate is in that range between 46% and 60%. Okay, so now I'm going to get back to a question you just asked me: how can people answer these questions when they're survey questions? You'll notice that I'm asking things in ranges. I'm not asking for millisecond response times. I'm asking for things on a scale, a log scale. People can tell me if they're deploying on demand, or deploying about once a week, or deploying about quarterly, or deploying just a couple of times a year, right?


People can tell me that, or they can tell me, when things go down, how long it takes to restore a service: about a day, about a month. So, because I'm asking in those time increments that go up on a log scale, people can answer those questions. Does that answer it?
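For teams that want to capture these four metrics in broad strokes from their own records rather than via survey, a rough sketch might look like the following. The record shapes and field names here are invented for illustration; real pipelines would pull them from deployment and incident tooling.

```python
# Rough sketch: computing the four key metrics from per-team records over a
# 30-day window. All record shapes and field names here are invented.
from datetime import timedelta

deploys = [
    {"commit_to_prod": timedelta(hours=6),  "needed_remediation": False},
    {"commit_to_prod": timedelta(hours=20), "needed_remediation": True},
    {"commit_to_prod": timedelta(hours=3),  "needed_remediation": False},
]
restore_times = [timedelta(minutes=45), timedelta(hours=2)]  # per incident
window_days = 30

# Deployment frequency: how often code is pushed.
deploys_per_day = len(deploys) / window_days

# Lead time for changes: code commit to code running in production (median).
lead_times = sorted(d["commit_to_prod"] for d in deploys)
median_lead_time = lead_times[len(lead_times) // 2]

# Time to restore service: how long recovery from an incident takes (median).
median_restore = sorted(restore_times)[len(restore_times) // 2]

# Change failure rate: share of production changes needing remediation
# (hotfix, rollback, fix forward, patch).
change_failure_rate = sum(d["needed_remediation"] for d in deploys) / len(deploys)

print(deploys_per_day, median_lead_time, median_restore, change_failure_rate)
```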


Corey Quinn: No, that absolutely does. The question that I have then is, when you assimilate all of this and you read it, there's an awful lot of data in here, and there's an awful lot that, shall we say, inspires passion in people who are reading it. For example, last year there was a kerfuffle over the finding that low performing teams generally tend to outsource an awful lot of technology. This was hotly debated and found to be completely without merit by outsourcing companies.


Nicole Forsgren: By outsourcing companies.


Corey Quinn: Exactly.


Nicole Forsgren: Now, I will say that it was highly correlated, and we did make a careful distinction that it was outsourcing by function. And so, what happens there is, it's outsourcing by function if you take an entire batch of something and you throw it over a wall, and you let them disappear for a while, and then it gets thrown back to you later. So, if you take all of development and you let them go do something and come back later, or if you take all of operations and you throw it away and you never ever see it. That is not what happens if you have a vendor partner that operates with you at the cadence of the work, because what often happens otherwise is you have introduced delay. And introducing delay, I love that you brought this up here, what we've seen is, introducing delay can introduce instability.


Because what happens then is, when you have delay, it causes and leads to batching up of work. Batching up of work leads to a larger blast radius. A larger blast radius, when you finally push to production, leads to greater instability. And that higher likelihood of downtime also means that the larger piece of code you have pushed is harder to debug. So, it's harder to restore the service.


Corey Quinn: You used to be a programmer, as you said at the beginning of this show, so it's always easier to think about what the bug could be in the code that broke the build three minutes ago instead of that code you wrote three weeks ago.


Nicole Forsgren: Yup, exactly. And now, you've got this giant ball of mud that you pushed instead of this nice tiny little tight package that you pushed.


Corey Quinn: Exactly. And this is really, I guess, the point that I'm getting to here, is if people want to read something and then feel bad and not change anything, we have something for that already. It's called Twitter. What impact do you find that these reports have in the world? What changes are companies making based upon these findings?


Nicole Forsgren: So, we've seen huge impact. As I mentioned, we're actually seeing several organizations using these four key metrics as a way to guide their transformation. The nice thing is that it's actually really difficult to fully instrument a metrics platform that captures and correlates metrics across your full tool chain. People are like, "Oh, we'll capture system-based metrics." That can be a two-to-four year journey. Capturing, in broad strokes, your four key metrics of deployment frequency, lead time for changes, mean time to restore, and change fail rate can be at least relatively straightforward.


Nicole Forsgren: You can capture these on a team level to see how well you're doing, and whether you're at least generally moving in the right direction. So, that helps. And then, what you can do is say, "Okay, what types of things should I be focusing on to improve?" Then, you can identify the capabilities that have generally been shown to help, and come up with that list. We actually outlined in this year's report what types of things ... It's sort of choose your own adventure, right? So, in this year's report we have the performance model, which helps you improve your software delivery performance. And then, we have a productivity model. But start with this model: if software delivery performance is what you want to improve, great.


Then work backwards. Which capabilities improve it? Start with that list. Once you have that list, no, that does not mean you start working on every single capability that improves it, because after six years of research, that list is 20 or 30 capabilities long. But that's your candidate list, the list of all the possible things you could improve. You look at that list and you say, "Which things are my biggest problems right now?" So, adopt a constraint-based approach. What's my biggest constraint? What's my biggest hurdle right now? Pick three or four. Devote resources there. Now, I say resources; that doesn't always mean money, although money is nice. It could be time. It could be attention. It can be anything, right?


Focus there first, spend six months there, and then come back and reevaluate: "Is this still my hardest challenge?" It can be automation. It can be process, like, "Am I having a really hard time with WIP limits? Am I having a really hard time breaking my work into small batch sizes? Can I deliver something in a week or less?" It could be that.
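That constraint-based pass over a candidate list can be as simple as the sketch below. The capability names echo ones mentioned in the conversation, but the scores are made up for illustration; in practice the scoring would come from your own assessment of where the hurdles are.

```python
# Sketch of a constraint-based pass: score each candidate capability by how
# big a hurdle it is right now, then focus on the top three or four.
# The capability names echo the conversation; the scores are invented.
candidate_capabilities = {
    "continuous integration": 4,
    "deployment automation": 9,
    "WIP limits": 8,
    "small batch sizes": 7,
    "test automation": 6,
}

focus = sorted(candidate_capabilities, key=candidate_capabilities.get, reverse=True)[:4]
print("Focus for the next six months:", focus)
# After six months, re-score and re-run: is this still the hardest challenge?
```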


Corey Quinn: Well, ask any software engineer, "Oh, I can build that in a weekend." You can deliver anything in a week. It's easy. Just ask them.


Nicole Forsgren: But can I do it without burning myself out?


Corey Quinn: Oh, now you're adding constraints.


Nicole Forsgren: I know heaven forbid.


Corey Quinn: You've been doing this for six years. As you look at this year's State of DevOps Report, what new findings, or I guess old findings for that matter, surprised you the most?


Nicole Forsgren: We had a couple. So, an additional thing that we asked about this year was scaling strategies: what types of things are you seeing in your organization to help you scale DevOps? That's a big question I get constantly: how do I scale? What's the best way to scale? A couple of things aren't big surprises, right? Centers of excellence: not great. Big bang: not great. Big bangs are used most often by low performers. It doesn't necessarily mean that it's a bad thing; it's just that it's usually only used in the most dire of circumstances, when you really have to wipe the slate clean and start over, and you need to be most prepared for a long-term transformation. Something that was a bit of a surprise, but also not, can I answer it that way? A surprise, but also not a surprise, is that dojos aren't commonly used among the highest performers.


What we see is that the highest performers, so those that are high performers and elite performers, the top 43% of our respondents, focus on structural solutions that build community. What does that mean? It means those types of solutions focus on things like building up communities of practice, building up grassroots efforts, and building up proofs of concept, because these types of things will be resilient to re-orgs and product changes. We don't see things like dojos, training centers, and centers of excellence as much, because they require so much investment. They require so many resources. We do see them, but we only see them 9% of the time. When we share this finding with a handful of people, they're shocked, because they hear about it so much.


The thing is, though, they only hear about it from the handful of cases that have been successful, and those successful cases had tons of resources. They had entire buildings set aside, they had entire education teams, they had curriculum teams, they had training teams. They also had amazing PR.


Corey Quinn: Absolutely.


Nicole Forsgren: I think that was something that at first was surprising, because I'm like, "It's so low!" But then I realized I've only heard about it in a couple of cases, and it's the cases where they have immense, immense resources.


Corey Quinn: One of the things I always found incredibly valuable about the reports is that if you go to conferences and listen to people talk about whatever it is they're doing at their own workplaces, everything sounds amazing and wonderful, and it's all a ridiculous fantasy. Everyone's environment is broken, everyone works in a tire fire, and there's not a lot of awareness, I think, in some circles, that that's the case. So, whenever someone looks at their own environment and compares it to what they see on stage, it looks terrible. This starts putting data to some of those impressions and, I guess, contextualizing them in the larger sense. A question that I do have, and I don't know if the study gets into this in any significant depth, is: is it possible for an organization to simultaneously be high performing and low performing, either along different axes, or in different divisions?


Nicole Forsgren: Oh absolutely, and I'm glad you asked that, and we try to highlight this and we never do a good enough job. We do reiterate it throughout the report. The analysis and the classification for performance profiles is always done at the team level. That's because, particularly in large organizations, team performance is different throughout an organization. As I'm sure you've seen, because when you go to really large organizations, some teams are working at a super fast pace and other teams are at a very, very different place. And so, we always do the analysis at the team level.


Corey Quinn: There's an entire section in the report that talks about cloud computing, which is generally what people tune into this podcast to talk about, and we're not going to talk about it today. We're going to have a second podcast episode about that.


Nicole Forsgren: It's so good though. It's so good. Is this where I get to tell people that you did a pre-read on the report for me, and you were like, "Hey, Nicole, you missed this whole section of nuance that you talk about in one sentence, but you have to expand it because otherwise people are gonna scream at you," and I get to thank you for it?


Corey Quinn: I don't think that I framed it quite that way, or if you want to say-


Nicole Forsgren: It's not polite, but it's real.


Corey Quinn: Or, take it the other direction. I practiced that whole statement, "Well, idiot," and then went from there. Yeah, you've got to double down on those things.


Nicole Forsgren: By the way, thanks.


Corey Quinn: No, thank you for asking my opinion on this. I'm astonished that anyone cares what I have to say, that it isn't a ridiculous joke or a terrible pun.


Nicole Forsgren: I mean, it's real though.


Corey Quinn: Well thank you so much for taking the time to speak with me today.


Nicole Forsgren: Yeah.


Corey Quinn: There will be another episode.


Nicole Forsgren: Can I get a quick teaser on the cloud stuff though?


Corey Quinn: You may indeed.


Nicole Forsgren: Okay, so cloud's important and it does help you develop and deliver software better, but only if you do it right. You can't just buy a membership to the gym and then not go to the gym and expect to be in amazing shape. That's what we find.


Corey Quinn: Excellent. And I'm sure that the correct answer to solving that problem is to buy the right vendor tool instead.


Nicole Forsgren: Something like that.


Corey Quinn: Yes. So, I will put a link to the report in the show notes so people can download this wonderful work of art/science, I consider it both, and go from there. Thank you. If people care, beyond that, about what you have to say and how you say it, where can they find you?


Nicole Forsgren: So, they can find all of DORA's research at cloud.google.com/devops, and if they want to snark on me, I am online at nicolefv.com.


Corey Quinn: Excellent. Nicole, thank you so much for taking the time to speak with me today. I appreciate it.


Nicole Forsgren: Hey, thanks so much.


Corey Quinn: Thank you for listening to Screaming in the Cloud. If you've enjoyed this episode, please leave it five stars on iTunes. If you didn't like this episode, please leave it five stars on iTunes. I'm Corey Quinn and this is Screaming in the Cloud.


Announcer: This has been this week's episode of Screaming in the Cloud. You can also find more of Corey at screaminginthecloud.com, or wherever fine snark is sold.


Announcer: This has been a HumblePod production. Stay humble.

