Honeycomb on Observability as Developer Self-Care with Brooke Sargent

Episode Summary

Brooke Sargent, Software Engineer at Honeycomb, joins Corey on Screaming in the Cloud to discuss how she fell into the world of observability by adopting Honeycomb. Brooke explains how observability was new to her in her former role, but she quickly found that it enabled faster learning and even a form of self-care for herself as a developer. Corey and Brooke discuss the differences between working at a large company where observability is a new idea and working at an observability company like Honeycomb. Brooke also reveals the importance of helping people reach a personal understanding of what observability can do for them when introducing it to a company for the first time.

Episode Show Notes & Transcript

About Brooke

Brooke Sargent is a Software Engineer at Honeycomb, working on APIs and integrations in the developer ecosystem. She previously worked on IoT devices at Procter and Gamble in both engineering and engineering management roles, which is where she discovered an interest in observability and the impact it can have on engineering teams.



Links Referenced:


Transcript


Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.



Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. This promoted guest episode—which is another way of saying sponsored episode—is brought to us by our friends at Honeycomb. And today’s guest is new to me. Brooke Sargent is a software engineer at Honeycomb. Welcome to the show, Brooke.



Brooke: Hey, Corey, thanks so much for having me.



Corey: So, you were part of I guess I would call it the new wave of Honeycomb employees, which is no slight to you, but I remember when Honeycomb was just getting launched right around the same time that I was starting my own company and I still think of it as basically a six-person company versus, you know, a couple of new people floating around. Yeah, turns out, last I checked, you were, what, north of 100 employees and doing an awful lot of really interesting stuff.



Brooke: Yeah, we regularly have, I think, upwards of 100 in our all-hands meeting, so definitely growing in size. I started about a year ago and at that point, we had multiple new people joining pretty much every week. So yeah, a lot of new people.



Corey: What was it that drove you to Honeycomb? Before this, you spent a bit of time over at Procter and Gamble. You were an engineering manager and now you’re going—you went from IC to management and now you’re IC again. There’s a school of thought that I vehemently disagree with, that that’s a demotion. I think they are orthogonal skill sets to my mind, but I’m curious to hear your story.



Brooke: Yeah, absolutely. So yeah, I worked at Procter and Gamble, which is a big Cincinnati company. That’s where I live and I was there for around four years. And I worked in both engineering and engineering management roles there. I enjoy both types of roles.



What really drove me to Honeycomb is, my time at Procter and Gamble, I spent probably about a year-and-a-half, really diving into observability and setting up an observability practice on the team that I was on, which was working on connected devices, connected toothbrushes, that sort of thing. So, I set up an observability practice there and I just saw so much benefit to the engineering team culture and the way that junior and apprentice engineers on the team were able to learn from it, that it really caught my attention. And Honeycomb is what we were using and I kind of just wanted to spend all of my time working on observability-type of stuff.



Corey: When you say software engineer, my mind immediately shortcuts to a somewhat outdated definition of what that term means. It usually means application developer, to my mind, whereas I come from the world of operations, historically sysadmins, which it still is except now, with better titles, you get more money. But that’s functionally what SRE and DevOps and all the rest of the terms still currently are, which is, if it plugs into the wall, congratulations. It’s your problem now to go ahead and fix that thing immediately. Were you on the application development side of the fence? Were you focusing on the SRE side of the world or something else entirely?



Brooke: Yeah, so I was writing Go code in that role at P&G, but also doing what I call, like, AWS pipe-connecting, so a little bit of both: writing application code but also definitely thinking about the architecture aspects and lining those up appropriately using a lot of AWS serverless and managed services. At Honeycomb—I’m on the APIs and partnerships team—I find myself writing a lot more code and focusing a lot more on code because we have a separate platform team that is focusing on the AWS aspects.



Corey: One thing that I find interesting is that it is odd, in many cases, to see a strong focus on observability coming from the software engineer side of the world. And again, this might be a legacy of where I was spending a lot of my career, but it always felt like getting the application developers to instrument whatever it was that they were building felt in many ways like it was pulling teeth. And in many further cases, it seemed that they didn’t really understand the value of having that visibility or that perspective into what was going on in their environment until immediately after they really wished they had it, but didn’t. It’s similar to, no one is as zealous about backups as someone who’s just suffered a data loss. Same operating theory. What was it that made you, coming from the software engineering side, give a toss about the idea of observability?



Brooke: Yeah, so working on the IoT—I was working on, like, the cloud side of things, so in Internet of Things, you’re keeping a mobile application, firmware, and cloud synced up. So, I was focused on the cloud aspect of that triangle. And we got pretty close to launching this greenfield IoT cloud that we were working on for P&G, like, we were probably a few months from the initial go-live date, as they like to call it, and we didn’t have any observability. We were just kind of sending things to CloudWatch logs. And it was pretty painful to figure out when something went wrong, from, like, you know, hearing from a peer on a mobile app team or the firmware team that they sent us some data and they’re not seeing it reflected in the cloud that is, like, syncing it up.



Figuring out where that went wrong using just CloudWatch logs was pretty difficult, as was syncing up the request that they were talking about to the specific CloudWatch log that had the information we needed, if we had even logged the right thing. And I was getting a little worried about the fact that people were going to be going into stores and buying these toothbrushes and we might not have visibility into what could be going wrong, or even be able to be proactive about what is going wrong. So, then I started researching observability. I had seen people talking about it as a best-practice thing that you should think about when you’re building a system, but I just hadn’t had the experience with it yet. So, I experimented with Honeycomb a bit and ended up really liking their approach to observability. It fit my mental model and made a lot of sense. And so, I went full-steam ahead with implementing it.



Corey: I feel what you just said is very key: the idea of finding an observability solution that keys into the mental model that someone’s operating with. I found that a lot of observability talk sailed right past me because it did not align with that, until someone said, “Oh yeah, and then here’s events.” “Well, what do you mean by event?” It distills down to logs. And oh, if you start viewing everything as a log event, then yeah, that suddenly makes a lot of sense, and that made it click for me in a way that, honestly, is a little embarrassing that it didn’t before then.
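Corey’s “everything is a log event” framing can be sketched concretely. This is a minimal, vendor-neutral illustration (the field names are made up for the example, not any vendor’s schema): instead of scattering context across many narrow log lines, each unit of work emits one wide, structured event carrying every field you might later want to query on.

```python
import json

def handle_request(user_id, duration_ms, status):
    """Emit one wide event per unit of work instead of many narrow log lines."""
    event = {
        "name": "handle_request",    # what happened
        "user_id": user_id,          # who it happened to
        "duration_ms": duration_ms,  # how long it took
        "status": status,            # how it ended
    }
    # In a real system this record would go to an observability backend;
    # here it is just serialized so we can look at it.
    return json.dumps(event)

print(handle_request("u-123", 42, 500))
```

Because every field lands on the same record, a question like “which users saw 500s, and how slow were they?” becomes a single query rather than a log-correlation exercise.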



But I come from a world before containers and immutable infrastructure and certainly before the black boxes that are managed serverless products, where I’m used to, oh, something’s not working on this Linux box. Well, I have root, so let’s go ahead and fix that and see what’s going on. A lot of those tools don’t work, either at scale or in ephemeral environments or in scenarios where you just don’t have access to the environment. So, there’s this idea that if you’re trying to diagnose something that happened and the container that it happened on stopped existing 20 minutes ago, your telemetry game has got to be on point or you’re just guessing at that point. That is something where I think I did myself a bit of a disservice by getting out of hands-on-keyboard operations roles before that type of architecture really became widespread.



Brooke: Yeah, that makes a lot of sense. On the team that I was on, we were using a lot of AWS Lambda and similarly, tracking things down could be a little bit challenging. And emitting telemetry data also has some quirks [laugh] with Lambda.



Corey: There certainly are. It’s also one of those areas where, on some level, being slow to adopt works to your benefit. Because when Lambda first came out, it was a platform that was almost entirely identified by its constraints. And Amazon didn’t do a terrific job, at least in the way that I tend to learn, of articulating what those constraints are. So, you learn by experimenting and smacking face-first into a lot of those things.



What the hell do you mean you can’t write to the filesystem? Oh, it’s a read-only file system, except /tmp. What do you mean, it’s only half a gigabyte? Oh, that’s the constraint there. Well, what do you mean, it automatically stops after—I think back at that point it was five or ten minutes; it’s 15 these days. But—



Brooke: Right.



Corey: —I guess it’s their own creative approach to solving the halting problem from computer science classes, where after 15 minutes, your code will stop executing, whether you want it to or not. They’re effectively evolving these things as we go and once you break your understanding in a few key ways, at least from where I was coming from, it made a lot more sense. But ugh, that was a rough couple of weeks for me.



Brooke: Yeah [laugh]. Agreed.
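The constraints Corey rattles off are real properties of the Lambda execution environment: the filesystem is read-only except /tmp, ephemeral storage defaults to 512 MB, and execution is capped at 15 minutes. A short sketch of the pattern that follows (run outside Lambda, everything is writable, so this only demonstrates the habit of targeting /tmp):

```python
import os

# Inside a Lambda sandbox, /tmp is the only writable path; the rest of
# the filesystem is read-only, and the function is hard-stopped at its
# configured timeout (15 minutes maximum).
scratch = os.path.join("/tmp", "scratch.txt")

with open(scratch, "w") as f:
    f.write("ephemeral scratch state only; gone when the sandbox is recycled")

print(os.path.exists(scratch))
```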



Corey: So, a topic that you have found personally inspiring is that observability empowers junior engineers in a bunch of ways. And I do want to get into that, but beforehand, I am curious as to the modern-day path for SREs because it doesn’t feel to me like there is a good answer for, “What does a junior SRE look like?” Because the answer is, “Oh, they don’t.” It goes back to the old sysadmin school of thought, which is that, oh, you basically learn by having experience. I’ve lost count of the number of startups I’ve encountered where you have a bunch of early-20-something engineers but the SRE folks are all generally a decade into what they’ve been doing, because the number-one thing you want to hear from someone in that role is, “Oh, the last time I saw this, here’s what it was.” What is the observability story these days for junior engineers?



Brooke: So, with SRE I agr—like, that’s a conversation that I’ve had a lot of times on different teams that I’ve been on, is just can a junior SRE exist? And I think that they can.



Corey: I mean, they have to because otherwise, well, where does an SRE come from? Oh, they spring—



Brooke: [laugh].



Corey: —fully formed from the forehead of some God out of mythology. It doesn’t usually work that way.



Brooke: Right. But you definitely need a team that is ready to support a junior SRE. You need a robust team that is interested in teaching and mentoring. And not all teams are like that, so making sure that you have a team culture that is receptive to taking on a junior SRE is step number one. And then I think that the act of having an observability practice on a team is very empowering to somebody who is new to the industry.



Myself, I came from a self-taught background, learning to code. I actually have a music degree; I didn’t go to school for computer science. And when I finally found my way to observability, it made so many, kind of, light bulbs go off of just giving me more visuals to go from, “I think this is happening,” to, “I know this is happening.” And then when I started mentoring juniors and apprentices and putting that same observability data in front of them, I noticed them learning so much faster.



Corey: I am curious in that you went from implementing a lot of these things and being in a management role of mentoring folks on observability concepts to working for an observability vendor, which is… I guess I would call Honeycomb the observability vendor. They were the first to really reframe a lot of how we considered what used to be called monitoring and now it’s called observability, or as I think of it, hipster monitoring.



Brooke: [laugh].



Corey: But I am curious as to when you look at this, my business partner wrote a book for O’Reilly, Practical Monitoring, and he loved it so much that by the end of that book, he got out of the observability monitoring space entirely and came to work on AWS bills with me. Did you find that going to Honeycomb has changed your perspective on observability drastically?



Brooke: I had definitely consumed a lot of Honeycomb’s blog posts, like, that’s one of the things that I had loved about the company is they put out a lot of interesting stuff, not just about observability but about operating healthy teams, and like you mentioned, like, a pendulum between engineering management and being an IC and just interesting concepts within our industry overall as, like, software engineers and SREs. So, I knew a lot of the thought leadership that the company put out, and that was very helpful. It was a big change going from an enterprise like Procter and Gamble to a startup observability company like Honeycomb, just—and also, going from a company that very much believes in in-person work to remote-first work at Honeycomb, now. So, there were a lot of, like, cultural changes, but I think I kind of knew what I was getting myself into as far as the perspective that the company takes on observability.



Corey: That is always the big, somewhat awkward question because if the answer goes a certain way, it becomes a real embarrassment. But I’m confident enough, having worked with Honeycomb as a recurring sponsor and having helped out on the AWS bill side of the world—since you were a reference client on both sides of that business—I want to be very clear that I don’t think I’m throwing you under a bus on this one. But do you find that the reality, now that you’ve been there for a year, has matched the external advertising and the ethos of the story they tell about Honeycomb from the outside?



Brooke: I definitely think it matches up. One thing that is just different about working inside of a company like Honeycomb versus working at a company that doesn’t have any observability at all yet is that there are a lot of abstraction layers in our codebase and things like that. So, me being a software engineer and writing code at Honeycomb compared to P&G, I don’t have to think about observability as much because everybody in the company is thinking about observability and had thought about it before I joined and had put a lot of thought into how to make sure that we consistently have the telemetry data that we need to solve problems, whereas I was thinking about this stuff on the daily at P&G.



Corey: Something I’ve heard from former employees of a bunch of different observability companies has a recurring theme to it, and that it’s hard to leave. Because when you’re at an observability company, everything is built with an eye toward observability. And there’s always the dogfooding story of, we instrument absolutely everything we have with everything that we sell the customers. Now, in practice, you leave and go to a different company, that is almost never going to be true, if for no other reason than based on simple economics. Turning on every facet of every observability tool that a given company sells becomes extraordinarily expensive and is an investment decision, so companies say yes to some, no to others. Do you think you’re going to have that problem if and when you decide it’s time to move on to your next role, assuming of course, that it’s not at a competing observability company?



Brooke: I’m sure there will be some challenges if I decide to move on from working for observability platforms in the future. The one that I think would be the most challenging is joining a team where people just don’t understand the value of observability and don’t want to invest, like, the time and effort into actually instrumenting their code, and don’t see why they need to do it, versus just, like, they haven’t gotten there yet or they haven’t had enough people hired to do it just yet. But if people are actively, like, kind of against the idea of instrumenting your code, I think that would be really challenging to kind of shift to especially after, over the last two-and-a-half years or so, being so used to having this, like, extra sense when I’m debugging problems and dealing with outages.



Corey: I will say, it was a little surreal the first time I wound up taking a look at Honeycomb’s environment—because I do believe that cost and architecture are fundamentally the same thing when it comes to cloud—and you had clear lines of visibility into what was going on in your AWS bill by way of Honeycomb as a product. And that’s awesome. I haven’t seen anyone else do that yet and I don’t know that it would necessarily work as well because, as you said, there, everyone’s thinking about it through this same shared vision, whereas in a number of other companies, it flat out does not work that way. There are certain unknowns and questions. And from the outside, and when you first start down this path, it feels like a ridiculous thing to do, until you get to a point of seeing the payoff, and yeah, this makes an awful lot of sense.



I don’t know that it would, for example, work as a generic solution for us to roll out to our various clients and say, oh, let’s instrument your environment with this and see what’s going on because first, we don’t have that level of ability to make change in customer environments. We are read-only for some very good reasons. And further, it also seems like it’s a, “Step one: change your entire philosophy around these sorts of things so we can help you spend less on AWS,” seems like a bit of a tall order.



Brooke: Yeah, agreed. And yeah, on previous teams that I’ve been on—and I think it’s absolutely fair—there were things where, especially using AWS serverless services, I was trying to get as much insight as possible into adding some of these services to our traces. Like, AppSync was one example where I could not for the life of me figure out how to get AppSync API requests onto my Honeycomb trace. And I spent a lot of time trying to figure it out. And I had team members that would just be, like, you know, “Let’s timebox this; let’s not, like, sink all of our time into it.” And so, I think as observability evolves, hopefully carving out those patterns continues to get easier so that engineers don’t have to spend all of their time on it.



Corey: It feels like that’s the hard part: the shift in perspective. Instrumenting a given tool into an environment is not the heavy lift compared to appreciating the value of it. Do you find that that was an easy thing for you to overcome back when you were at Procter and Gamble—as in, people had already bought in, on some level, to observability from having seen it in some kind of scenario where it absolutely saved folks’ bacon? Or was it the problem of, first you have to educate people about the painful problem that they have before they realize it is, in fact, A, painful, and B, a problem, and then C, that you have something to sell them that will solve it? Because that pattern is a very hard sales motion to execute in most spaces. But you were doing it from the customer side first.



Brooke: Yeah. Yeah, doing it from the customer side, I was able to get buy-in on the team that I was on, and I should also say, like, the team that I was on was considered an innovation team. We were in a separate building from, like, the corporate building and things like that, which I’m sure played into some of those cultural aspects and dynamics. But trying to educate people outside of our team and trying to build an observability practice within this big enterprise company was definitely very challenging, and it was a lot of spending time sharing information and talking to people about their stack and what languages and tools that they’re using and how this could help them. I think until people have had that, kind of, magical moment of using observability data to solve a problem for themselves, it’s very hard, it can be very hard to really make them understand the value.



Corey: That was always my approach because it feels like observability is a significant and sizable investment in infrastructure alone, let alone the mental overhead, the teams to manage these things, et cetera, et cetera. And until you have a challenge that observability can solve, it feels like it is pure cost, similar to backups, where it’s just a whole bunch of expense for no benefit until suddenly, one day, you’re very glad you had it. Now, the world is littered with stories that are very clear about what happens when you don’t have backups. Most people have a personal story around that, but it feels like it’s less straightforward to point at a visceral story where not having observability really hobbled someone or something.



It feels like—because with the benefit of perfect hindsight, oh yeah, a disk filled up and we didn’t know about that. Like, “Ah, if we just had the right check, we would have caught that early on.” Yeah, coulda, woulda, shoulda, but it was a cascading failure that wasn’t picked up until seven levels downstream. Do you think that that’s the situation these days or am I misunderstanding how people are starting to conceive of this stuff?



Brooke: Yeah. I mean, I definitely have a couple of stories from even once I was on the journey to observability adoption—which I call it a journey because you don’t just, kind of, snap your fingers and have observability—I started with one service, instrumenting that, and just, like, gradually over sprints would instrument more services and pull more team members in to do that as well. But while we were in that process of instrumenting services, there was one service—our auth service, which maybe should have been the first one that we instrumented—where a code change was made and it was erroring every time somebody tried to sign up in the app. And if we had had observability instrumentation in place for that service, it wouldn’t have taken us, like, the four or five hours to find the problem of the one line of code that we had changed; we would have been able to see more clearly what error was happening and what line of code it was happening on and probably fix it within an hour.



And we had a similar issue with a Redshift database that we were running more on the metrics side of things. We were using it to send analytics data to other people in the company and that Redshift database just got maxed out at a certain point. The CPU utilization was at, like, 98% and people in the company were very upset and [laugh] having a very bad time querying their analytics data.



Corey: It’s a terrific sales pitch for Snowflake, to be very direct, because you hear that story kind of a lot.



Brooke: Yeah, it was not a fun time. But at that point, we started sending Redshift metrics data over to Honeycomb as well, so that we could keep a better pulse on what exactly was happening with that database.



Corey: So, here’s sort of the acid test: people tend to build software when they’re starting out greenfield in ways that emphasize their perspective on the world. For example, when I’m building something new—doesn’t matter if it’s tiny or just a one-off shitposting approach—if it touches anything involving AWS, the first thing I do out of the gate is set tags so that I can do cost allocation work on it; someday, I’m going to wonder how much this thing cost. That is, I guess, my own level of brokenness.



Brooke: [laugh].



Corey: When you start building something at work from scratch, I guess this is part ‘you,’ part ‘all of Honeycomb,’ do you begin from that greenfield approach of Hello World of instrumenting it for observability, even if it’s not explicitly an observability-focused workload? Or is it something that you wind up retrofitting with observability insights later, once it hits a certain point of viability?



Brooke: Yeah. So, if I’m at the stage of just kind of trying things out locally on my laptop, kind of outside of, like, the central repo for the company, I might not add observability data because I’m just kind of learning and trying things out on my laptop. Once I pull it into our central repo, there is some observability data that I am going to get, just in the way that we kind of have our services set up. And as I’m going through writing code for whatever new feature I’m trying to build, I’m thinking about what things, when this breaks—not if it breaks; when it breaks [laugh]—am I going to want to know about in the future. And I’ll add those things, kind of, on the fly just to make things easier on myself, and that’s just kind of how my brain works at this point: thinking about my future self, which is, kind of, like, the same definition of self-care. So, I think of observability as self-care for developers.



But later on, when we’re closer to actually launching a thing, I might take another pass at just, like, okay, let’s once again take a look at the error paths and how this thing can break and make sure that we have enough information at those points of error to know what is happening within a trace view of this request.
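Brooke’s habit of annotating error paths for her future self can be sketched with a toy span object. The names below are illustrative stand-ins for a real tracing API (such as OpenTelemetry), not Honeycomb’s actual SDK:

```python
class Span:
    """Toy stand-in for a tracing span: a named bag of queryable fields."""
    def __init__(self, name):
        self.name = name
        self.fields = {}

    def set(self, key, value):
        self.fields[key] = value

def sync_device_data(span, device_id, payload):
    # Record the inputs up front so a failed request is debuggable later.
    span.set("device.id", device_id)
    span.set("payload.bytes", len(payload))
    if not payload:
        # The error path gets the richest annotation; that is the
        # "self-care" payoff when this breaks in production.
        span.set("result", "error")
        span.set("error.message", "empty payload")
        raise ValueError("empty payload")
    span.set("result", "ok")

span = Span("sync_device_data")
try:
    sync_device_data(span, "toothbrush-42", b"")
except ValueError:
    pass
print(span.fields)
```

When the trace shows up during an incident, the fields answer “which device, how big a payload, what exactly failed” without anyone re-reading the code.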



Corey: My two programming languages that I rely on the most are enthusiasm and brute force, and I understand this is not a traditional software engineering approach. But I’ve always found that getting observability in involved a retrofit, on some level. And it was always frustrating to me just because it felt like so much effort in various ways that I’ve always kicked myself: I should have done this early on. But I’ve been on the other side of that, too, and it’s like, should I instrument this with good observability? No, that sounds like work. I want to see if this thing actually works at all first.



And I don’t know what side of the fence is the correct one to be on, but I always find that I’m on the wrong one. Like, I don’t know if it’s, like, one of those, there’s two approaches and neither one works. I do see in client environments where observability is always, always, always something that has to be retrofit into what it is that they’re doing. Does it stay that way once companies get past a certain point? Does observability of adoption among teams just become something that is ingrained into them or do people have to consistently relearn that same lesson, in your experience?



Brooke: I think it depends, kind of, on the size of your company. If you are a small company with a, you know, smaller engineering organization where it’s not, I won’t say easy, but easier to get kind of full team buy-in on points of view and decisions and things like that, it becomes more built-in. If you’re in a really big company like the one that I came from, I think it is continuously, like, educating people and trying to show the value of, like, why we are doing this—coming back to that why—and like, the magical moment of, like, stories of problems that have been solved because of the instrumentation that was in place. So, I guess, like most things, it’s an, ‘it depends.’ But the larger that your company becomes, I think the harder it gets to keep everybody on the same page.



Corey: I am curious, in that I tend to see the world through AWS bills, which is a sad, pathetic way to live that I don’t recommend to basically anyone, but I do see the industry, or at least my client base, forming a bit of a bimodal distribution. On one side, you have companies like Honeycomb, including, you know, Honeycomb, where the majority of your AWS spend is driven by the application that is Honeycomb, you know, the SaaS thing you sell to people to solve their problems. The other side of the world are companies that look a lot more like Procter and Gamble, presumably, where—because I think of oh, what does Procter and Gamble do? And the answer is, a lot. They’re basically the definition of conglomerate in some ways.



So, you look at that, a bill at a big company like that and it might be hundreds of millions of dollars, but the largest individual workload is going to be a couple million at best. So, it feels very much like it’s this incredibly diffuse selection of applications. And in those environments, you have to start thinking a lot more about centralization things you can do, for example, for savings plan purchases and whatnot, whereas at Honeycomb-like companies, you can start looking at, oh, well, you have this single application that’s the lion’s share of everything. We can go very deep into architecture and start looking at micro-optimizations here that will have a larger impact. Having been an engineer at both types of companies, do you find that there’s a different internal philosophy, or is it that when you’re working in a larger company on a specific project, that specific project becomes your entire professional universe?



Brooke: Yeah, definitely at P&G, for the most part, IoT was kind of the center of my universe. But one philosophy that I noticed as being different—and I think this comes from being an enterprise versus a startup—is just the way that thinking about cost and architecture choices, kind of, happened. So, at P&G, like I said, we were using a lot of Lambda, and pretty much any chance we got, we used a serverless or managed offering from AWS. And I think a big part of that reasoning was because, like I said earlier, P&G is very interested in in-person work. So, everybody that we hired had to be located in Cincinnati.



And it became hard to hire people who had Go and Terraform experience because a lot of people in the Midwest are much more comfortable in .NET and Java; there are just a lot more jobs using those technologies. So, we had a lot of trouble hiring and would choose—because P&G had a lot of money to spend—to give AWS that money, since we had trouble finding engineers to hire, whereas Honeycomb really does not seem to have trouble hiring engineers. They hire remote employees and lots of people are interested in working at Honeycomb, and they also do not have the bank account [laugh] that Procter and Gamble has, so thinking about cost and architecture is kind of a different beast. So, at Honeycomb, we are building a lot more services ourselves versus always choosing a serverless or easy, like, AWS-managed option in order to think about it less.



Corey: Yeah, at some level, it’s an unfair question, just because it comes down to, in the fullness of time, even Honeycomb turns into something that looks a lot more like Procter and Gamble. Because, okay, you have the Honeycomb application. That’s great, but as the company continues to grow and offer different things to different styles of customers, you start seeing a diffusion where, yeah, everything’s still observability-focused, but I can see a future where it becomes a bunch of different subcomponents. You make acquisitions of other companies that wind up being treated as separate environments, and the rest. And in the fullness of time, I can absolutely see that that is the path that a lot of companies go down.



So, it might also just be that I’m looking at this through a perspective lens of… just company stage, as opposed to what the internal story of the company is. I mean, Procter and Gamble’s, what, a century old give or take? Whereas Honeycomb is an ancient tech company, by which I mean it’s over 18 months old.



Brooke: Yeah, P&G was founded in 1837. So that’s—



Corey: Almost 200 years old. Wonderful.



Brooke: —quite old [laugh]. Yeah [laugh].



Corey: And for some reason, they did not choose to build their first technical backbone on top of AWS back then. I don’t understand why, for the life of me.



Brooke: [laugh]. Yeah, but totally agree on your point that the kind of difference of thinking about cost and architecture definitely comes from company’s stage rather than necessarily the industry.



Corey: I really want to thank you for taking the time out of your day to talk with me about what you’re up to and how you view these things. If people want to learn more, what’s the best place for them to find you?



Brooke: Yeah, so I think the main place that I still sometimes am, is Twitter: @codegirlbrooke is my username. But I’m only there sometimes, now [laugh].



Corey: I feel like that’s a problem a lot of us are facing right now. Like, I’m more active on Bluesky these days, but it’s still invite only and it feels like it’s too much of a weird flex to wind up moving people to just yet. I’m hoping that changes soon, but we’ll see how it plays. We’ll, of course, put links to that in the [show notes 00:31:53]. I really want to thank you for taking the time out of your day to talk with me.



Brooke: Yeah, thanks so much for chatting with me. It was a good time.


If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Newsletter Footer

Get the Newsletter

Reach over 30,000 discerning engineers, managers, and enthusiasts who actually care about the state of Amazon’s cloud ecosystems.

Sponsor Icon Footer

Sponsor an Episode

Get your message in front of people who care enough to keep current about the cloud phenomenon and its business impacts.