Communicating What an SDET Actually Is with Sean Corbett

Episode Summary

Companies come in many stripes these days, and everybody seems to be a unicorn. But for TheZebra and Sean Corbett, their Senior Software Engineer, this may just be the case. Over the past several years, Sean has “helped create software and proprietary platforms” that help teams understand and improve their own work. Sean and Corey rake over how QA departments are waning in relevance and being replaced by SDETs (Software Development Engineers in Test). Sean clarifies what exactly an SDET is, where the title came from, and how it has changed over the past few years. Sean also reflects on TheZebra and its emphasis on a more collaborative environment, one that embeds engineers and testing into the other teams in an organization.

Episode Show Notes & Transcript

About Sean
Sean is a senior software engineer at TheZebra, working to build developer experience tooling with a focus on application stability and scalability. Over the past seven years, they have helped create software and proprietary platforms that help teams understand and better their own work.


Transcript
Sean: Hello, and welcome to Screaming in the Cloud with your host, Chief cloud economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.


Corey: Today’s episode is brought to you in part by our friends at MinIO, the high-performance Kubernetes native object store that’s built for the multi-cloud, creating a consistent data storage layer for your public cloud instances, your private cloud instances, and even your edge instances, depending upon what the heck you’re defining those as, which depends probably on where you work. Getting that unified is one of the greatest challenges facing developers and architects today. It requires S3 compatibility, enterprise-grade security and resiliency, the speed to run any workload, and the footprint to run anywhere, and that’s exactly what MinIO offers. With superb read speeds in excess of 360 gigs and a 100 megabyte binary that doesn’t eat all the data you’ve got on the system, it’s exactly what you’ve been looking for. Check it out today at min.io/download, and see for yourself. That’s min.io/download, and be sure to tell them that I sent you.


Corey: This episode is sponsored in part by our friends at Sysdig. Sysdig is the solution for securing DevOps. They have a blog post that went up recently about how an insecure AWS Lambda function could be used as a pivot point to get access into your environment. They’ve also gone deep in-depth with a bunch of other approaches to how DevOps and security are inextricably linked. To learn more, visit sysdig.com and tell them I sent you. That’s S-Y-S-D-I-G dot com. My thanks to them for their continued support of this ridiculous nonsense.


Corey: Welcome to Screaming in the Cloud, I’m Corey Quinn. An awful lot of companies out there are calling themselves unicorns, which is odd because if you look at the root ‘uni,’ it means one, but there sure are a lot of them out there. Conversely, my guest today works at a company called TheZebra, with the singular definite article being the key differentiator here, and frankly, I’m a big fan of being that specific. My guest is Senior Software Development Engineer in Test, Sean Corbett. Sean, thank you for taking the time to join me today, and more or less suffer the slings and arrows I will no doubt be hurling in your direction.


Sean: Thank you very much for having me here.


Corey: So, you’ve been a great Twitter follow for a while: You’re clearly deeply technically skilled; you also have a soul, you’re strong on the empathy point, and that is an embarrassing lack in large swaths of our industry. I’m not going to talk about that right now because I’m sure it comes through the way it does when you talk about virtually anything else. Instead, you are a Software Development Engineer in Test, or SDET. I believe you are the only person I’m aware of in my orbit who uses that title, so I have to ask—and please don’t view this as me in any way criticizing you; it’s mostly my own ignorance speaking—what is that?


Sean: So, what is a Software Development Engineer in Test? If you look back—I believe it was Microsoft that originally came up with the title—what it stems from was they needed software development engineers who particularly specialized in creating automation frameworks for testing stuff at scale. And that was over a decade ago, I believe. Microsoft has since stopped using the term, but it persists in areas of the industry.


And what is an SDET today? Well, I think we’re going to find out it’s a strange mixture of things. An SDET today is not just someone that creates automated frameworks or writes tests, or any of those things. An SDET is a strange amalgamation of everything from full-stack to DevOps to even some product management to even a little bit of machine-learning engineer; it’s a truly strange field that, at least for me, has allowed me to basically embrace almost every other discipline and area of current modern engineering, to some degree. So, it’s fun, is what it is. [laugh].


Corey: This sounds similar in some respects to—oh, I think back to a role that I had in 2008, 2009, where there was an entire department that was termed QA, or Quality Assurance, and they were sort of the next step. You know, development would build something, and then deploy it to a test environment or staging environment, and then QA would climb all over this, sometimes with automation—which was still in the early days, back in that era—and sometimes by clicking the button, and going through scripts, and making sure that the website looked okay. Is that aligned with what you’re doing, or is that a bit of a different branch?


Sean: That is a little bit of a different branch from me. The way I would put it is, QA and QA departments are an interesting artifact that, in particular, newer orgs still feel like they might need, and what you quickly realize today, particularly with modern development and this, kind of, DevOps focus, is that having that centralized QA department doesn’t really work. So, SDETs absolutely can do all those things: They can climb over a test environment with automation, they can click the buttons, they can tell you everything’s good, they can check the boxes for you if you want. But if that is what you’re using your SDETs for, you are, frankly, missing out because I guarantee you, the people that you’ve hired as SDETs have a lot more skills than that, and not utilizing those to your advantage is missing out on a lot of potential benefit, both in terms of quality—which is this fantastic concept that, to be frank, gives people a lot of weird feelings [laugh]—and product.


Corey: So, one of the challenges I’ve always had is people talk about test-driven development, which sounds like a beautiful idea in theory, and in practice is something people—you know, just like using the AWS console and then lying about it forms the heart and soul of ClickOps—claim to be using, but that doesn’t seem to be the reality of software development. And again, no judgment on these; things are hard. I built out, more or less by piecing together a whole bunch of toothpicks and string, my newsletter production pipeline. And that’s about 29 Lambdas Function, behind about 5 APIs Gateway, and that was all kinds of ridiculous nonsense.


And I can deploy each of the six or so microservices that do this independently. And I sometimes even do continuous build slash continuous deploy to it because integration would imply I have tests, which is why I bring the topic up. And more often than not—because I’m very bad at computers—I will even have syntax errors make it into this thing, and I push the button and suddenly it doesn’t work. It’s the iterative guess-and-check model that goes on here. So, I introduce regressions a fair bit of the time, and the reason that I’m being so blase about this is that I am the only customer of this system, which means that I’m not out there making people’s lives harder, no one is paying me money to use this thing, no one else is being put out by it. It’s just me smacking into a wall and feeling dumb all the time.


And when I talk to people about the idea of building tests, it’s like, “Oh, you should have unit tests and integration tests and all the rest.” And I did some research into the topics, and a lot of it sounds like what people were talking about 10 to 15 years ago in the world of tests. And again, to be clear, I’ve implemented none of these things because I am irresponsible and bad at computers. But what has changed over the last five or ten years? Because it feels like the overall high level as I understood it from intro to testing 101 in the world of Python—the first 18 chapters are about dependency management, because of course they are; it’s Python—and the rest of it just seems to be the concepts that we’ve never really gotten away from. What’s new, what’s exciting, what’s emerging in your space?


Sean: There’s definitely some emerging and exciting stuff in the space. There’s everything from, like, what Applitools does with using machine learning to do visual regressions—that’s a huge advantage, a huge time saver, so you don’t have to look pixel by pixel and waste your time doing it—to things like what our team at TheZebra is working on, which is, for example, a framework that utilizes Directed Acyclic Graph workflows—the prototype is written in Go—and it allows you to work with these tests not just as these, kind of, blasé scripts that you either keep in a monorepo, or maybe possibly in each individual service’s repo, and run all together clumsily as this, kind of, packaged product, but as a distributed resource that lets you think about tests as these, kind of, user flows and experiences, and to dip between things like the API layer, where you might, for example, introduce a regression [unintelligible 00:07:48] calling to a third-party resource, and if something goes wrong, you can orchestrate that workflow as a whole. Rather than just having to write script after script after script after script to cover all these test cases, you can focus on: well, I’m going to create this block that represents this general action and can accept a general payload that conforms to this spec, and I’m going to orchestrate these general actions, maybe modify the payload a bit, but I can recall those actions with a slightly different payload and not have to write script after script after script after script.
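The shape of what Sean describes—reusable action blocks that accept payloads, arranged as a directed acyclic graph rather than one linear script per test case—can be sketched in a few lines. This is a rough illustration only: TheZebra’s prototype is in Go, and every name here (`Action`, `run_flow`, the step names) is invented for the example.

```python
# Sketch: test steps as reusable "action" blocks orchestrated as a DAG,
# instead of copy-pasting a new linear script for each test case.
# All names here are hypothetical, not TheZebra's actual framework.
from graphlib import TopologicalSorter  # stdlib since Python 3.9

class Action:
    """A reusable test step: a named function that accepts a payload dict."""
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn

def run_flow(actions, deps, payloads=None):
    """Run actions in dependency order; each can be re-run with a different payload."""
    payloads = payloads or {}
    results = {}
    # deps maps each action to the set of actions that must run before it.
    for name in TopologicalSorter(deps).static_order():
        results[name] = actions[name].fn(payloads.get(name, {}))
    return results

# The same "create_user" block can be recalled with different payloads
# across many flows, rather than living inside one monolithic script.
actions = {
    "create_user": Action("create_user", lambda p: {"user": p.get("name", "anon")}),
    "quote_api":   Action("quote_api",   lambda p: {"status": 200}),
    "checkout":    Action("checkout",    lambda p: {"ok": True}),
}
deps = {"quote_api": {"create_user"}, "checkout": {"quote_api"}}

results = run_flow(actions, deps, payloads={"create_user": {"name": "sean"}})
```

Because the flow is a graph rather than a script, swapping a payload or re-wiring one edge changes the scenario without touching the other steps.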


But the problem is that, like you’ve noticed, a lot of test tooling doesn’t embrace those, kind of, modern practices and ideas. It’s still very much the model where your tests—integration tests in particular do this—will exist in one place, a monorepo; they will have all the resources there, they’ll be packaged together, and you will run them after the fact, after a deploy, on an environment. And it makes it so that all these testing tools are very reactive, they don’t encourage a lot of experimentation, and they make it at times very difficult to experiment, in particular because the more tests you add, the more chaotic that code and that framework gets, and the harder it gets to run in a CI/CD environment, the longer it takes. Whereas if you have something like this graph tool that we’re building, these things just become data. You can store them in a database, for the love of God. You can apply modern DevOps practices, you can implement things like Jaeger.
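The “tests just become data” idea can be made concrete with a small sketch: a test flow stored as a plain record (the kind of thing that could live in a database row or JSON column) and interpreted by a generic runner. The step names and spec format below are invented for illustration, not any real framework’s schema.

```python
# Sketch: a test flow as plain data ("store them in a database")
# interpreted by a generic runner, instead of script after script.
# Step names and the record format are hypothetical.
import json

# Registry of general-purpose actions; each accepts a payload dict.
STEPS = {
    "http_get": lambda p: {"status": 200, "url": p["url"]},  # stubbed for the sketch
    "assert_status": lambda p: p["last"]["status"] == p["expect"],
}

# Because the flow is just data, it serializes cleanly to JSON / a DB row.
flow_record = json.loads("""
{"name": "quote smoke test",
 "steps": [{"action": "http_get", "payload": {"url": "/quote"}},
           {"action": "assert_status", "payload": {"expect": 200}}]}
""")

def run(flow):
    """Walk the stored steps; assertion steps check the previous result."""
    last, passed = None, True
    for step in flow["steps"]:
        payload = dict(step["payload"], last=last)
        result = STEPS[step["action"]](payload)
        if step["action"].startswith("assert"):
            passed = passed and bool(result)
        else:
            last = result
    return passed
```

Once flows are records rather than code, the usual data tooling applies: query them, version them, aggregate pass rates, or attach tracing around each step.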


Corey: I don’t think I’ve ever heard storing things in an actual database framed as a plea before. Then again, you can use anything as a database, which is my entire schtick, so great.


Sean: Exactly.


Corey: That’s right, that means the entire world can indeed be reduced to TXT records in DNS, which I maintain is the… the holiest of all databases. I’m sorry, please, continue.


Sean: No, no, no, that’s true. The thing that has always driven me is this idea that, why are we still just, kind of, spitting out code to test things in a way that is very prescriptive and very reactive? And so, the exciting things in test come from places like Applitools and places like the—oh, I forget; it was at a Test Days conference—where they talked about this test framework they developed that was able to auto-generate the models, and it was so good at auto-generating those models for test, they actually ended up auto-generating the models for the actual product. [laugh]. I think it used a degree of machine learning to do so. It was for a flashcard site. A friend of mine, Jacob Evans on Twitter, always likes to talk about it.


This is where the exciting things lie: where people are starting to break out of that very reactive, prescriptive, kind of, test philosophy of, as I like to say, checking the boxes, to, “Let’s stop checking boxes and let’s create, like, insight tooling. Let’s get ahead of the curve. What is the system actively doing? Let’s check in. What data do we have? What is the system doing right at this moment? How ahead of the curve can we get with what we’re actually using to test?”


Corey: One question I have is the cultural changes because back in those early days where things were handed off from the developers to the QA team, and then ideally to where I was sitting over in operations—lots of handoffs; not a lot of integrations there—QA was not popular on the development side of the world, specifically because their entire perception was that of, “Oh, they’re just the critics. They’re going to wind up doing the thing I just worked hard on and telling me what’s wrong with it.” And it becomes a ‘Department of No,’ on some level. One of the, I think, benefits of test automation is that suddenly you’re blaming a computer for things, which is, “Yep. You are a developer. Good work.” But the idea of putting people almost in the line of fire of being either actually or perceived as the person who’s the blocker, how has that evolved? And I’m really hoping the answer is that it has.


Sean: In some places, yes; in some places, no. I think there’s always a little bit more nuance than just, yes, it’s all changed, it’s all better, or just, no, we’re still back in QA are quote-unquote, “the bad guys,” and all that stuff. The perception that QA are the critics and are there to block a great idea from seeing fruition and to block you from that promotion definitely still persists. And it also persists in terms of a number of other attitudes that get directed towards QA folks: that our skill sets are limited to writing stuff like automation tooling for test frameworks, or that we only know how to use certain things—okay, they know how to use Selenium and all this other stuff, but they don’t know how to work a database, they don’t know how an app [unintelligible 00:12:07] up, they don’t know all the work that I put in. That’s really not the case. More and more, folks I’m seeing in test actually have a lot of other engineering experience to back that up.


And so, the places where I do see it moving forward are places like TheZebra, where it’s much more of a collaborative environment, with the SDETs working together with the teams that they’re embedded in to build things that help engineers get ahead of the curve. So, the way I propose it to folks is, “We’re going to make sure you know and see exactly what you wrote in terms of the code, and that you can take full [confidence 00:12:44] in that, so when you walk up to your manager for your one-on-one, you can go, ‘I did this. And it’s great. And here’s what I know it does, and this is where it goes, and this is how it affects everything else, and my test person helped me see all this, and that’s awesome.’” It’s this transition from QA and product as these adversarial relationships to recognizing that there’s no real differentiator at all there once you stop with that reactive mindset in test. Instead of trying to just catch things, you’re trying to get ahead of the curve and focus on insight and that sort of thing.


Corey: This episode is sponsored in part by our friends at Vultr. Spelled V-U-L-T-R because they’re all about helping save money, including on things like, you know, vowels. So, what they do is they are a cloud provider that provides surprisingly high performance cloud compute at a price that—while sure they claim it’s better than AWS pricing—and when they say that they mean it is less money. Sure, I don’t dispute that, but what I find interesting is that it’s predictable. They tell you in advance on a monthly basis what it’s going to cost. They have a bunch of advanced networking features. They have nineteen global locations and scale things elastically. Not to be confused with openly, because apparently elastic and open can mean the same thing sometimes. They have had over a million users. Deployments take less than sixty seconds across twelve pre-selected operating systems. Or, if you’re one of those nutters like me, you can bring your own ISO and install basically any operating system you want. Starting with pricing as low as $2.50 a month for Vultr cloud compute, they have plans for developers and businesses of all sizes, except maybe Amazon, who stubbornly insists on having something to scale all on their own. Try Vultr today for free by visiting vultr.com/screaming, and you’ll receive $100 in credit. That’s V-U-L-T-R dot com slash screaming.


Corey: One of my questions is, I guess, the terminology around a lot of this. If you tell me you’re an SDE, I know that oh, you’re a Software Development Engineer. If you tell me you’re a DBA, I know oh, great, you’re a Database Administrator. If you told me you’re an SRE, I know oh, okay, great. You worked at Google.


But what I’m trying to figure out is I don’t see SDET, at least in the waters that I tend to swim in, as a title, really, other than you. Is that a relatively new emerging title? Is it one that has historically been very industry or segment-specific, or you’re doing what I did, which is, “I don’t know what to call myself, so I described myself as a Cloud Economist,” two words no one can define. Cloud being a bunch of other people’s computers, and economist meaning claiming to know everything about money, but dresses like a flood victim. So, no one knows what I am when I make it up, and then people start giving actual job titles to people that are Cloud Economists now, and I’m starting to wonder, oh dear Lord, have I started the thing? What is, I guess, the history and positioning of SDET as a job title slash acronym?


Sean: So SDET, like I was saying, it came from Microsoft, I believe, back in the double-ohs.


Corey: Mmm.


Sean: And other companies caught on. I think Google actually [unintelligible 00:14:33] as well. And it’s hung on in certain places, particularly places that feel like they need a concentrated quality department. That’s where you usually will see places that have that title of SDET. It is increasingly less common because the idea of having centralized quality—like I said before, particularly with the modern, kind of, DevOps-focused development, Agile, and all that sort of thing—becomes much, much more difficult.


If you have a waterfall type of development cycle, it’s a lot easier to have a central, singular quality department, and then you can have SDET stuff [unintelligible 00:15:08]. That gets a lot harder when you have Agile and you have that, kind of, regular integration, and particularly when you have a DevOps [unintelligible 00:15:14] cycle; it becomes increasingly difficult, so a lot of places have been moving away from that. It is definitely a strange title, but it is not entirely rare. If you want a peek, put SDET on your LinkedIn for about two weeks and see how many offers come in, or how many folks land in your inbox. It is absolutely in demand. People want engineers to write these test frameworks, but that’s an entirely different point; that gets down to the fact that people want people in these roles because a lot of test tooling, frankly, sucks.


Corey: It’s interesting you talk about that as a validation of it. I get remarkably few outreaches on LinkedIn, either for recruiting, which almost never happens, or for trying to sell me something, which happens once every week or so. My business partner has a CEO title, and he winds up getting people trying to sell him things four times a day by lunchtime, and occasionally people reaching out with, “Hey, I don’t know much about your company, but if it’s not going well, do you want to come work on something completely unrelated?” Great. And it’s odd because both he and I have similar settings where neither of us has the ‘looking for work’ box checked on LinkedIn because it turns out that does send a message to your staff, who are depending on their job still being here next month, and that isn’t overly positive because we’re not on the market.


But changing just titles and how we describe what we do and how we do it absolutely has a bearing on how that is perceived by others. And increasingly, I’m spending more of my time focusing less on the technical substance of things and more on how what they do is being communicated. Because increasingly, what I’m finding about the world of enterprise technology and enterprise cloud and all of this murky industry in which we swim, is that the technology is great—anything can be made to work, mostly—but so few companies are doing an effective job of telling the story. And we see it not just in engineering-land, but in most if not all parts of the business. People are not storytelling about what they do, about the outcomes they drive, and we’re falling back to labels and buzzwords and acronyms and the rest.


Where do you stand on this? I know we’ve spoken briefly before about how this is one of those things that you’re paying attention to as well, so I know that we’re not—I’m not completely off base here. What’s your take on it?


Sean: I definitely look at the labels and things of that sort. It’s one of those things where humans like to group and aggregate things. Our brains like that degree of organization, and I’m going to say something that is very stereotypical here: This is helped a lot by social media, which depends on things like hashtags and the ability to group massive amounts of information. And I don’t know if it’s caused by it, but it certainly aggravates the situation.


We like being able to group things with few words. But as you said before, that doesn’t help us. So, in a particular case, with something like an SDET title, yeah, that does absolutely send a signal, and it doesn’t necessarily send the right one: the person that you’re talking to might have vastly different capabilities from the next SDET that you talk to. And it’s where putting up a story that is impact-driven—kind of, that classic way of focusing not just on the labels, but on what was actually done, and who it helped, and who it enabled, and the impact of it—that is key. The trick is trying to balance that with this increasing focus on the cut-down presentation.


You and I’ve talked about this before, too, where you can only say so much on something like a LinkedIn profile before people just turn off their brains and they walk away to the next person. Or you can only put so much on your resume before people go, “Okay, ten pages, I’m done.” And it’s just one of those things where… the challenge I find test people increasingly have is that there was a very particular label applied to us that was rooted in one particular company’s needs, and we have spent the better part of over a decade trying to escape and redefine that, and it’s incredibly challenging. And a lot of it comes down to folks like, for example, Angie Jones, who, simply through pure action and being very open about exactly what they’re doing, change that narrative just by showing. That form of storytelling is show it, don’t say it, you know? Rather than saying, “Oh, well, I bring all of this,” they just show it, and they bring it forward that way.


Corey: I think you hit on something there with the idea of social media, where there is validity to the idea of being able to describe something concisely. “What’s your elevator pitch?” Is a common question in business. “What is the problem you solve? What would someone use you for?”


And if your answer to that requires you sabotage the elevator for 45 minutes in order to deliver your message, it’s not going to work. With some products, especially very early-stage products where the only people who are working on them are the technical people building them, they have a lot of passion for the space, but they aren’t—haven’t quite gotten the messaging down to be able to articulate it. People’s attention spans aren’t great, by and large, so there’s a, if it doesn’t fit in a tweet, it’s boring and crappy is sort of the takeaway here. And yeah, you’re never going to encapsulate volume and nuance and shading into a tweet, but the baseline description of, “So, what do you do?” If it doesn’t fit in a tweet, keep workshopping it, to some extent.


And it’s odd because I do think you’re right, it leads to very yes or no, binary decisions about almost anything, someone is good or trash. There’s no, people are complicated, depending upon what aspect we’re talking about. And same story with companies. Companies are incredibly complex, but that tends to distill down in the Twitter ecosystem to, “Engineers are smart and executives are buffoons.” And anytime a company does something, clearly, it’s a giant mistake.


Well, contrary to popular opinion, Global Fortune 2000 companies do not tend to hire people who are not highly capable at the thing they’re doing. They have context and nuance and constraints that are not visible from the outside. So, that is one of the frustrating parts to me. So, labels are helpful as far as explaining what someone is and where they fit in the ecosystem. For example, yeah, if you describe yourself as an SDET, I know that we’re talking about testing to some extent; you’re not about to show up and start talking to me extensively about, oh, I don’t know, how you market observability products.


It at least gives a direction and bounding to the context. The challenge I always had, and why I picked a title that no one else had, was that what I do is complicated, and once people have a label that they think encompasses where you start and where you stop, they stop listening, in some cases. What’s been your experience, given that you have a title that is not as widely traveled as a number of the more commonly used ones?


Sean: Definitely that experience. I’ve absolutely worked at places where folks do end up just turning off once they have that nice little snippet that they think encompasses who you are—because increasingly nowadays, we like to attach what you do to who you are—and it makes a certain degree of sense, absolutely, but it’s very hard to encompass those sorts of things, let alone, kind of, closely nestle them together, when you have, you know, 280 characters.


Yes, folks like to do that to folks like SDETs. There’s a definite mindset of, ‘stay in your lane,’ in certain shops. I will say that it’s not to the benefit of those shops, and it creates and often aggravates an adversarial relationship that is to the detriment of both, particularly today where the ability to spin up a rival product of reasonable quality and scale has never been easier, slowing yourself down with arbitrary delineations that are meant to relegate and overly-define folks, not necessarily for the actual convenience of your business, but for the convenience of your person, that is a very dangerous move. A previous company that I worked at almost lost a significant amount of their market share because they actively antagonized the SDET team to the point where several key members left. And it left them completely unable to cover areas of product with scalable automation tooling and other things. And it’s a very complex product.


And it almost cost them their position in the industry; potentially, the entire company as a whole got very close to that point. And that’s one of the things we have to be careful of when it comes to applying these labels: when you apply a label to encompass someone, yes, you affect them, but it also will come back and affect you because when you apply that label to someone, you are immediately confining your relationship with that person. And that relationship is a two-way street. If you apply a label that closes off other roads of communication or potential collaboration or work or creativity or those sorts of things, that is your decision and you will have to accept those consequences.


Corey: I’ve gotten the sense that a lot of folks, as they describe what they do and how they do it, are often not thinking longer-term; their careers often trend toward the thing that happens to them rather than a thing that winds up being actively managed. And… like, one of my favorite interview questions whenever I’m looking to bring someone in is always, “Yeah, ignore this job we’re talking about. Magically you get it or you don’t; whatever. That’s not relevant right now. What’s your next job? What’s the one after that? What is the trajectory here?”


And it’s always fun to me to see people’s responses to it. Often it’s, “I have no idea,” versus the, “Oh, I want to do this, and this is the thing I’m interested in working with you for because I think it’ll shore up this, this, and this.” And like, those are two extreme ends of the spectrum. There’s no wrong answer, but it’s helpful, I find, just to ask the question in the final round interview that I’m a part of, just to, I guess sort of like, boost them a bit into a longer-term picture view, as opposed to next week, next month, next year. Because if what you’re doing doesn’t bring you closer to what you want to be doing in the job after the next one, then I think you’re looking at it wrong, in some cases.


And I guess I’ll turn the question on to you. If you look at what you’re doing now, ignore whatever you do next, what’s your role after that? Like, where are you aiming at?


Sean: Ignoring the next position… which is interesting because I always—part of how I learned to operate, kind of in my earlier years was focus on the next two weeks because the longer you go out from that window, the more things you can’t control, [laugh] and the harder it is to actually make an effective plan. But for me, the real goal is I want to be in any position that enables the hard work we do in building these things to make people’s lives easier, better, give them access to additional information, maybe it’s joy in terms of, like, a content platform, maybe it’s something that helps other developers do what they do, something like Honeycomb, for example, just that little bit of extra insight to help them work a little bit better. And that’s, for me, where I want to be, is building things that make the hard work we do to create these tools, these products easier. So, for me, that would look a lot like an internal tooling team of some sort, something that helps with developer efficiency, with workflow.


One of the reasons—and it’s funny because I got asked this recently: “Why are you still even in test? You know what reputation this field has”—wrongly deserved, maybe so—“Why are you still in test?” My response was, “Because”—and maybe with a degree of hubris, stubbornly so—“I want to make things better for test.” There are a lot of issues we’re facing, not just in terms of tooling, but in terms of processes and how we think about solving problems, and like I said before, that kind of reactive nature ends up being an ouroboros eating its own tail: Reactive tools generate reactive engineers, who then create more reactive tools.


Where I want to be in terms of this is creating things that change that, push us forward in that direction. So, I think that internal tooling team is a fantastic place to do that, but frankly, any place where I could do that at any level would be fantastic.


Corey: It’s nice to see that the things you care about involve a lot more around impact, as opposed to raw technologies and the rest. And again, I’m not passing judgment on anyone who chooses to focus on technology or different areas of these things. It’s just, it’s nice to see folks who are deeply technical themselves raising their heads a little bit above it and saying, “All right, here’s the impact I want to have.” It’s great, and lots of folks do, but I’m always frustrated when I find myself talking to folks who think that the code ultimately speaks; code is the arbiter. Like, you see this with some of the smart contract stuff, too.


It’s the, “All right, if you believe that’s going to solve all the problems, I have a simple challenge for you, and then I will never criticize you again: Go to small claims court for a morning, four hours, and watch all the disputes that wind up going through there, and ask yourself how many of those a smart contract would have solved.”


Every time I bring that point up to someone, they never come back and say, “This is still a good idea.” Maybe I’m a little too anti-computer, a little bit too human these days. But again, most of cloud economics, in my experience, is psychology more than it is math.


Sean: I think it’s really the truth. And I think that [unintelligible 00:29:06] that I really want to seize on for a second because code and technology as this ultimate arbiter, we’ve become fascinated with it, not necessarily to our benefit. One of the things you will often see me—to take a line from Game of Thrones—whinging about [laugh] is we are overly focused on utilizing technology, whether code or anything else, to solve what are fundamentally human problems. These are problems that are rooted in human tendencies, habits, characters, psychology—as you were saying—that require human interaction and influence, as uncomfortable as that may be to quote-unquote, “Solve.”


And the reality of it is that the more we insist upon trying to use technology to solve those problems—things like cases of equity in terms of generational wealth and things of that sort, things like helping people communicate issues with one another within a software development engineering team—the more we will create complexity and additional problems, and the more we will fracture people’s focus and ability to stay focused on what the underlying cause of the problem is, which is something human. And just as a side note, the fundamental idea that code is this ultimate arbiter of truth is terrible because if code were the ultimate arbiter of truth, I wouldn’t have a job, Corey. [laugh]. I would be out of business so fast.


Corey: Oh, yeah, it’s great. It’s—ugh, I—it feels like that’s a naive perspective that people tend to have early in their career, and Lord knows I did. Everything was so straightforward and simple, back when I was in that era, whereas the older I get, the more the world is shades of nuance.


Sean: There are cases where technology can help, but I tend to find those a very specific class of solutions, and even then it can only assist a human, maybe by providing some additional context. This is an idea from the Seeking SRE book that I love to reference—I think it’s, like, the first chapter—the head of SRE at Netflix, I think it is, talks about how solving problems is this thing of relaying context, establishing context—and he focuses a lot less on the technology side, a lot more on the human side, and brings in, like, “The technology can help with this because it can give you a little bit better insight into how to communicate context—context is valuable—but you’re still going to have to do some talking at the end of the day and establish these human relationships.” And I think that technology can help with a very specific class of insight or context issues, but I would like to reemphasize that is a very specific class, a very specific sort, and most of the human problems we’re trying to solve with technology don’t fall in there.


Corey: I think that’s probably a great place for us to call it an episode. I really appreciate the way you view these things. I think that you are one of the most empathetic people that I find myself talking to on an ongoing basis. If people want to learn more, where’s the best place to find you?


Sean: You can find me on Twitter at S-C—underscore—code, capital U, capital M. That’s probably the best place to find me. I’m most frequently on there.


Corey: We will, of course, include links to that in the [show notes 00:32:37].


Sean: And then, of course, my LinkedIn is not a bad place to reach out. So, you can probably find me there, Sean Corbett, working at TheZebra. And as always, you can reach me at [email protected]. That is my work email; feel free to email me there if you have any questions.


Corey: And we will, of course, put links to all of that in the [show notes 00:33:00]. Sean, thank you so much for taking the time to speak with me today. I really appreciate it.


Sean: Thank you.


Corey: Sean Corbett, Senior Software Development Engineer in Test at TheZebra—because there’s only one. I’m Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you’ve enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you’ve hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry, ranting comment about how code absolutely speaks and is the ultimate arbiter of truth, and, oh wait, what’s that? The FBI is at the door to make some inquiries about your recent online behavior.


Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.


Announcer: This has been a HumblePod production. Stay humble.


Corey: This episode is sponsored in part by our friends at Sysdig. Sysdig is the solution for securing DevOps. They have a blog post that went up recently about how an insecure AWS Lambda function could be used as a pivot point to get access into your environment. They’ve also gone in-depth with a bunch of other approaches to how DevOps and security are inextricably linked. To learn more, visit sysdig.com and tell them I sent you. That’s S-Y-S-D-I-G dot com. My thanks to them for their continued support of this ridiculous nonsense.

Corey: Welcome to Screaming in the Cloud, I’m Corey Quinn. An awful lot of companies out there are calling themselves unicorns, which is odd because if you look at the root ‘uni,’ it means one, but there sure are a lot of them out there. Conversely, my guest today works at a company called TheZebra, with the singular definite article being the key differentiator here, and frankly, I’m a big fan of being that specific. My guest is Senior Software Development Engineer in Test, Sean Corbett. Sean, thank you for taking the time to join me today, and more or less suffer the slings and arrows I will no doubt be hurling in your direction.

Sean: Thank you very much for having me here.

Corey: So, you’ve been a great Twitter follow for a while: You’re clearly deeply technically skilled; you also have a soul, you’re strong on the empathy point, and that is an embarrassing lack in large swaths of our industry. I’m not going to talk about that right now because I’m sure it comes through the way it does when you talk about virtually anything else. Instead, you are a Software Development Engineer in Test, or SDET. I believe you are the only person I’m aware of in my orbit who uses that title, so I have to ask—and please don’t view this as me in any way criticizing you; it’s mostly my own ignorance speaking—what is that?

Sean: So, what is a Software Development Engineer in Test? If you look back—I believe it was Microsoft that originally came up with the title—what it stems from was they needed software development engineers who particularly specialized in creating automation frameworks for testing stuff at scale. And that was over a decade ago, I believe. Microsoft has since stopped using the term, but it persists in areas of the industry.

And what is an SDET today? Well, I think we’re going to find out it’s a strange mixture of things. An SDET today is not just someone that creates automated frameworks or writes tests, or any of those things. An SDET is this strange amalgamation of everything from full-stack to DevOps to even some product management to even a little bit of machine-learning engineer; it’s a truly strange field that, at least for me, has allowed me to basically embrace almost every other discipline and area of modern engineering, to some degree. So, it’s fun, is what it is. [laugh].

Corey: This sounds similar in some respects to, oh, I think back to a role that I had in 2008, 2009, where there was an entire department that was termed QA, or Quality Assurance, and they were sort of the next step. You know, development would build something, and then deploy it to a test environment or staging environment, and then QA would climb all over this, sometimes with automation—which was still in its early days, back in that era—and sometimes by clicking the button, and going through scripts, and making sure that the website looked okay. Is that aligned with what you’re doing, or is that a bit of a different branch?

Sean: That is a little bit of a different branch from me. The way I would put it is QA and QA departments are an interesting artifact that, I think, newer orgs in particular still feel like they might need, and what you quickly realize today, particularly with modern development and this kind of DevOps focus, is that having that centralized QA department doesn’t really work. So, SDETs absolutely can do all those things: They can climb over a test environment with automation, they can click the buttons, they can tell you everything’s good, they can check the boxes for you if you want. But if that is what you’re using your SDETs for, you are, frankly, missing out, because I guarantee you the people that you’ve hired as SDETs have a lot more skills than that, and not utilizing those to your advantage is missing out on a lot of potential benefit, both in terms of quality—which is this fantastic concept that, to be frank, gives people a lot of weird feelings [laugh]—and product.

Corey: So, one of the challenges I’ve always had is people talk about test-driven development, which sounds like a beautiful idea in theory, and in practice—you know, just like using the AWS console and then lying about it forms the heart and soul of ClickOps—claiming to use test-driven development while not actually doing it seems to be the reality of software development. And again, no judgment on these things; things are hard. I built out, more or less by piecing together a whole bunch of toothpicks and string, my newsletter production pipeline. And that’s about 29 Lambdas Function, behind about 5 APIs Gateway, and that was all kinds of ridiculous nonsense.

And I can deploy each of the six or so microservices that do this independently. And I sometimes even do continuous build slash continuous deploy to it, because integration would imply I have tests, which is why I bring the topic up. And more often than not—because I’m very bad at computers—I will even have syntax errors make it into this thing, and I push the button and suddenly it doesn’t work. It’s the iterative guess-and-check model that goes on here. So, I introduce regressions a fair bit of the time, and the reason that I’m being so blase about this is that I am the only customer of this system, which means that I’m not out there making people’s lives harder, no one is paying me money to use this thing, no one else is being put out by it. It’s just me smacking into a wall and feeling dumb all the time.

And when I talk to people about the idea of building tests, it’s, “Oh, you should have unit tests and integration tests and all the rest.” And I did some research into the topic, and a lot of it sounds like what people were talking about 10 to 15 years ago in the world of tests. And again, to be clear, I’ve implemented none of these things because I am irresponsible and bad at computers. But what has changed over the last five or ten years? Because it feels like the overall high level, as I understood it from intro to testing 101 in the world of Python—the first 18 chapters are about dependency management, because of course they are; it’s Python—the rest just seems to be the concepts that we’ve never really gotten away from. What’s new, what’s exciting, what’s emerging in your space?
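[Editor’s note: for readers who want a concrete picture of the unit-versus-integration distinction mentioned here, a minimal sketch follows. The newsletter helper and its names are hypothetical illustrations, not code from the episode.]

```python
# Hypothetical example: the kind of unit test being described,
# applied to a made-up newsletter-pipeline helper function.

def render_subject(issue_number: int) -> str:
    """Build the subject line for a given newsletter issue (illustrative)."""
    if issue_number < 1:
        raise ValueError("issue numbers start at 1")
    return f"Issue #{issue_number}: Last Week in the Cloud"

def test_render_subject() -> None:
    # Unit test: exercises one function in isolation, with no deployed
    # environment, so a regression surfaces before any deploy happens.
    assert render_subject(42) == "Issue #42: Last Week in the Cloud"

test_render_subject()
```

An integration test, by contrast, would exercise several of those microservices together against a running environment, which is exactly the part that gets skipped in the guess-and-check model described above.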

Sean: There’s definitely some emerging and exciting stuff in the space. There’s everything from, like, what Applitools does with using machine learning to do visual regressions—that’s a huge advantage, a huge time saver, so you don’t have to look pixel by pixel and waste your time doing it—to things like what our team at TheZebra is working on, which is, for example, a framework that utilizes Directed Acyclic Graph workflows—the prototype is written in Go—and it allows you to work with these tests not just as these blasé scripts that you keep in a monorepo, or maybe in each individual service’s repo, and run all together clumsily in a packaged product, but as this distributed resource that lets you think about tests as user flows and experiences, and to dip between things like the API layer, where you might, for example, introduce a regression [unintelligible 00:07:48] calling to a third-party resource, and if something goes wrong, you can orchestrate that workflow as a whole. Rather than having to write script after script after script to cover all these test cases, you can focus on: I’m going to create this block that represents this general action and accepts a general payload that conforms to this spec, and I’m going to orchestrate these general actions, maybe modify the payload, but I can recall those actions with a slightly different payload and not have to write script after script after script.

But the problem is that, like you’ve noticed, a lot of test tooling doesn’t embrace those kinds of modern practices and ideas. It’s still very much: your tests—particularly integration tests do this—will exist in one place, a monorepo; they will have all their resources there; they’ll be packaged together; you will run them after the fact, after a deploy, on an environment. And it makes all these testing tools very reactive, they don’t encourage a lot of experimentation, and they make it at times very difficult to experiment, in particular because the more tests you add, the more chaotic that code and that framework get, and the harder and slower they are to run in a CI/CD environment. Whereas if you have something like this graph tool that we’re building, these things just become data. You can store them in a database, for the love of God. You can apply modern DevOps practices, you can implement things like Jaeger.
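[Editor’s note: the “general actions over a payload, orchestrated as a graph” idea Sean describes can be sketched roughly as below. This is an illustrative Python toy under the editor’s assumptions, not TheZebra’s framework (which, per Sean, is a Go prototype); every name here is hypothetical.]

```python
# Sketch of DAG-style test flows: steps are reusable blocks that accept
# and return a payload, and declared dependencies order the execution,
# so flows are composed as data rather than written script after script.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Step:
    name: str
    action: Callable[[dict], dict]            # a general action over a payload
    depends_on: List[str] = field(default_factory=list)

def run_flow(steps: List[Step]) -> dict:
    """Run steps in dependency order, threading one payload through."""
    done, payload = set(), {}
    while len(done) < len(steps):
        progressed = False
        for step in steps:
            if step.name not in done and all(d in done for d in step.depends_on):
                payload = step.action(payload)
                done.add(step.name)
                progressed = True
        if not progressed:
            # A cycle or missing dependency would otherwise loop forever.
            raise ValueError("cycle or missing dependency in flow")
    return payload

# Reusing blocks with slightly different payloads, instead of new scripts:
login = Step("login", lambda p: {**p, "user": "test-user"})
quote = Step("quote", lambda p: {**p, "quote": 99}, depends_on=["login"])
result = run_flow([quote, login])   # order in the list doesn't matter
```

Because the flow is plain data, it could indeed be stored in a database and traced per-step with something like Jaeger, which is the contrast with monorepo script piles being drawn here.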

Corey: I don’t think there’s anything that’s ever been off-limits as a database. Great, then you can use anything as a database, which is my entire schtick, so great.

Sean: Exactly.

Corey: That’s right, that means the entire world can indeed be reduced to TXT records in DNS, which I maintain is the… the holiest of all databases. I’m sorry, please, continue.

Sean: No, no, no, that’s true. The thing that has always driven me is this idea of: why are we still just spitting out code to test things in a way that is very prescriptive and very reactive? And so, the exciting things in test come from places like Applitools, and places like the—oh, I forget; it was at a Test Days conference—where they talked about a test framework they developed that was able to auto-generate the models, and it got so good at auto-generating those models for test that they actually ended up auto-generating the models for the actual product. [laugh]. I think it used a degree of machine learning to do so. It was for a flashcard site. A friend of mine, Jacob Evans on Twitter, always likes to talk about it.

This is where the exciting things lie: where people are starting to break out of that very reactive, prescriptive kind of test philosophy of, like I like to say, checking the boxes, toward, “Let’s stop checking boxes and let’s create insight tooling. Let’s get ahead of the curve. What is the system actively doing? Let’s check in. What data do we have? What is the system doing right at this moment? How far ahead of the curve can we get with what we’re actually using to test?”

Corey: One question I have is about the cultural changes, because back in those early days, where things were handed off from the developers to the QA team, and then ideally to where I was sitting over in operations—lots of handoffs; not a lot of integration there—QA was not popular on the development side of the world, specifically because the perception was, “Oh, they’re just the critics. They’re going to wind up taking the thing I just worked hard on and telling me what’s wrong with it.” And it becomes a ‘Department of No,’ on some level. One of the, I think, benefits of test automation is that suddenly you’re blaming a computer for things, which is, “Yep. You are a developer. Good work.” But the idea of putting people almost in the line of fire of actually being, or being perceived as, the blocker, how has that evolved? And I’m really hoping the answer is that it has.

Sean: In some places, yes; in some places, no. There’s a little bit more nuance than just, yes, it’s all changed, it’s all better, or, no, we’re still back in QA being quote-unquote, “the bad guys,” and all that stuff. The perception that QA are the critics, there to block a great idea from seeing fruition and to block you from that promotion, definitely still persists. And it persists alongside a number of other attitudes that get directed towards QA folks: that our skill sets are limited to writing stuff like automation tooling for test frameworks, or that—okay, they know how to use Selenium and all this other stuff, but they don’t know how to work a database, they don’t know how an app [unintelligible 00:12:07] up, they don’t know all the work that I put in. That’s really not the case. More and more, the folks I’m seeing in test actually have a lot of other engineering experience to back that up.

And so the places where I do see it moving forward are places like TheZebra, where it’s much more of a collaborative environment, where the engineers are working together with the SDETs embedded in their teams to build things that help engineers get ahead of the curve. So, the way I propose it to folks is, “We’re going to make sure you know and see exactly what you wrote in terms of the code, and that you can take full [confidence 00:12:44] in that, so when you walk up to your manager for your one-on-one, you can go, ‘I did this. And it’s great. And here’s what I know it does, and this is where it goes, and this is how it affects everything else, and my test person helped me see all this, and that’s awesome.’” It’s this transition from QA and product as these adversarial relationships to recognizing that there’s no real differentiator at all there once you stop with that reactive mindset in test. Instead of trying to just catch things, you’re trying to get ahead of the curve and focus on insight and that sort of thing.

Corey: This episode is sponsored in part by our friends at Vultr. Spelled V-U-L-T-R because they’re all about helping save money, including on things like, you know, vowels. So, what they do is they are a cloud provider that provides surprisingly high performance cloud compute at a price that—while sure they claim it’s better than AWS’s pricing—when they say that, they mean it is less money. Sure, I don’t dispute that, but what I find interesting is that it’s predictable. They tell you in advance on a monthly basis what it’s going to cost. They have a bunch of advanced networking features. They have nineteen global locations and scale things elastically. Not to be confused with openly, because apparently elastic and open can mean the same thing sometimes. They have had over a million users. Deployments take less than sixty seconds across twelve pre-selected operating systems. Or, if you’re one of those nutters like me, you can bring your own ISO and install basically any operating system you want. Starting with pricing as low as $2.50 a month for Vultr cloud compute, they have plans for developers and businesses of all sizes, except maybe Amazon, who stubbornly insists on having something to scale all on their own. Try Vultr today for free by visiting vultr.com/screaming, and you’ll receive $100 in credit. That’s V-U-L-T-R dot com slash screaming.

Corey: One of my questions is, I guess, the terminology around a lot of this. If you tell me you’re an SDE, I know that oh, you’re a Software Development Engineer. If you tell me you’re a DBA, I know oh, great, you’re a Database Administrator. If you told me you’re an SRE, I know oh, okay, great. You worked at Google.

But what I’m trying to figure out is I don’t see SDET, at least in the waters that I tend to swim in, as a title, really, other than you. Is that a relatively new emerging title? Is it one that has historically been very industry or segment-specific, or you’re doing what I did, which is, “I don’t know what to call myself, so I described myself as a Cloud Economist,” two words no one can define. Cloud being a bunch of other people’s computers, and economist meaning claiming to know everything about money, but dresses like a flood victim. So, no one knows what I am when I make it up, and then people start giving actual job titles to people that are Cloud Economists now, and I’m starting to wonder, oh dear Lord, have I started the thing? What is, I guess, the history and positioning of SDET as a job title slash acronym?

Sean: So SDET, like I was saying, it came from Microsoft, I believe, back in the double-ohs.

Corey: Mmm.

Sean: And other companies caught on. I think Google actually [unintelligible 00:14:33] as well. And it’s hung on in certain places, particularly places that feel like they need a concentrated quality department; that’s where you usually will see the title of SDET. It is increasingly less common because the idea of having centralized quality—like I said before, particularly with modern, kind of, DevOps-focused development, Agile, and all that sort of thing—becomes much, much more difficult.

If you have a waterfall type of development cycle, it’s a lot easier to have a central, singular quality department, and then you can have SDET stuff [unintelligible 00:15:08]. That gets a lot harder when you have Agile and that kind of regular integration, and particularly a DevOps [unintelligible 00:15:14] cycle; it becomes increasingly difficult, so a lot of places have been moving away from that. It is definitely a strange title, but it is not entirely rare. If you want a peek, put SDET on your LinkedIn for about two weeks and see how many offers come in, or how many folks land in your inbox. It is absolutely in demand. People want engineers to write these test frameworks, but that’s an entirely different point; that gets down to the fact that people want people in these roles because a lot of test tooling, frankly, sucks.

Corey: It’s interesting you talk about that as a validation of it. I get remarkably few outreaches on LinkedIn, either for recruiting, which almost never happens, or for trying to sell me something, which happens once every week or so. My business partner has a CEO title, and he winds up getting people trying to sell him things four times a day by lunchtime, and occasionally people reaching out with, “Hey, I don’t know much about your company, but if it’s not going well, do you want to come work on something completely unrelated?” Great. And it’s odd because both he and I have similar settings, where neither of us has the ‘looking for work’ box checked on LinkedIn, because it turns out that does send a message to your staff, who are depending on their job still being here next month, and that isn’t overly positive, because we’re not on the market.

But changing just titles and how we describe what we do and how we do it absolutely has a bearing on how that is perceived by others. And increasingly, I’m spending more of my time focusing less on the technical substance of things and more on how what they do is being communicated. Because increasingly, what I’m finding about the world of enterprise technology and enterprise cloud and all of this murky industry in which we swim is that the technology is great—anything can be made to work, mostly—but so few companies are doing an effective job of telling the story. And we see it not just in engineering-land; it’s in most, if not all, parts of the business. People are not storytelling about what they do, about the outcomes they drive, and we’re falling back on labels and buzzwords and acronyms and the rest.

Where do you stand on this? I know we’ve spoken briefly before about how this is one of those things that you’re paying attention to as well, so I know that we’re not—I’m not completely off base here. What’s your take on it?

Sean: I definitely look at the labels and things of that sort. It’s one of those things where humans like to group and aggregate things; our brains like that degree of organization. And I’m going to say something that is very stereotypical here: This is helped a lot by social media, which depends on things like hashtags, where the ability to group massive amounts of information is largely facilitated. And I don’t know if it’s caused by it, but it certainly aggravates the situation.

We like being able to group things with few words. But as you said before, that doesn’t help us. So, in a particular case with something like an SDET title, yeah, that does absolutely send a signal, and it doesn’t necessarily send the right one: the person that you’re talking to might have vastly different capabilities from the next SDET that you talk to. And it’s where putting up a story that is impact-driven, that classic way of focusing on not just the labels but on what was actually done, who it helped, who it enabled, and the impact of it, is key. The trick is trying to balance that with this increasing focus on the cut-down presentation.

You and I’ve talked about this before, too: you can only say so much on something like a LinkedIn profile before people just turn off their brains and walk away to the next person. Or you can only put so much on your resume before people go, “Okay, ten pages, I’m done.” And it’s just one of those things where the challenge I find test people increasingly have is that there was a very particular label applied to us that was rooted in one particular company’s needs, and we have spent the better part of over a decade trying to escape and redefine that, and it’s incredibly challenging. And a lot of it comes down to folks like, for example, Angie Jones, who, simply through pure action and being very open about exactly what they’re doing, change that narrative just by showing. That form of storytelling is show it, don’t say it, you know? Rather than saying, “Oh, well, I bring all this,” they just show it, and they bring it forward that way.

Corey: I think you hit on something there with the idea of social media, where there is validity to the idea of being able to describe something concisely. “What’s your elevator pitch?” Is a common question in business. “What is the problem you solve? What would someone use you for?”

And if your answer to that requires you to sabotage the elevator for 45 minutes in order to deliver your message, it’s not going to work. With some products, especially very early-stage products where the only people working on them are the technical people building them, they have a lot of passion for the space, but they haven’t quite gotten the messaging down to be able to articulate it. People’s attention spans aren’t great, by and large, so the takeaway here is sort of: if it doesn’t fit in a tweet, it’s boring and crappy. And yeah, you’re never going to encapsulate volume and nuance and shading into a tweet, but the baseline description of, “So, what do you do?” If it doesn’t fit in a tweet, keep workshopping it, to some extent.

And it’s odd because I do think you’re right, it leads to very yes or no, binary decisions about almost anything, someone is good or trash. There’s no, people are complicated, depending upon what aspect we’re talking about. And same story with companies. Companies are incredibly complex, but that tends to distill down in the Twitter ecosystem to, “Engineers are smart and executives are buffoons.” And anytime a company does something, clearly, it’s a giant mistake.

Well, contrary to popular opinion, Global Fortune 2000 companies do not tend to hire people who are not highly capable at the thing they’re doing. They have context and nuance and constraints that are not visible from the outside. So, that is one of the frustrating parts to me. So, labels are helpful as far as explaining what someone is and where they fit in the ecosystem. For example, yeah, if you describe yourself as an SDET, I know that we’re talking about testing to some extent; you’re not about to show up and start talking to me extensively about, oh, I don’t know, how you market observability products.

It at least gives a direction and bounding to the context. The challenge I always had, and why I picked a title that no one else had, was that what I do is complicated, and once people have a label that they think encompasses where you start and where you stop, they stop listening, in some cases. What’s been your experience, given that you have a title that is not as widely traveled as a number of the more commonly used ones?

Sean: Definitely that experience. I’ve absolutely worked at places where folks end up just turning off once they have that nice little snippet that they think encompasses who you are—because increasingly nowadays, we like to attach what you do to who you are. And it makes a certain degree of sense, absolutely, but it’s very hard to encompass those sorts of things, let alone closely nestle them together, when you have, you know, 280 characters.

Yes, folks like to do that to folks like SDETs. There’s a definite ‘stay in your lane’ mindset in certain shops. I will say that it’s not to the benefit of those shops, and it creates and often aggravates an adversarial relationship that is to the detriment of both. Particularly today, when the ability to spin up a rival product of reasonable quality and scale has never been easier, slowing yourself down with arbitrary delineations that relegate and overly define folks, not for the actual convenience of your business but for your personal convenience, is a very dangerous move. A previous company I worked at almost lost a significant amount of their market share because they actively antagonized the SDET team to the point where several key members left. That left them completely unable to cover areas of the product with scalable automation tooling and other things. And it’s a very complex product.

And it almost cost them their position in the industry; the entire company as a whole got very close to that point. And that’s one of the things we have to be careful of when it comes to applying these labels: when you apply a label to encompass someone, yes, you affect them, but it will also come back and affect you, because when you apply that label to someone, you are immediately confining your relationship with that person. And that relationship is a two-way street. If you apply a label that closes off other roads of communication or potential collaboration or work or creativity, that is your decision, and you will have to accept those consequences.

Corey: I’ve gotten the sense that a lot of folks, as they describe what they do and how they do it, aren’t often thinking longer-term; their careers often trend toward the thing that happens to them rather than a thing that winds up being actively managed. And… like, one of my favorite interview questions whenever I’m looking to bring someone in is always, “Ignore this job we’re talking about. Magically you get it or you don’t; whatever. That’s not relevant right now. What’s your next job? What’s the one after that? What is the trajectory here?”

And it’s always fun to see people’s responses. Often it’s, “I have no idea,” versus, “Oh, I want to do this, and this is the thing I’m interested in working with you for, because I think it’ll shore up this, this, and this.” Those are two extreme ends of the spectrum. There’s no wrong answer, but I find it helpful to ask the question in the final-round interview I’m a part of, just to boost them a bit into a longer-term view, as opposed to next week, next month, next year. Because if what you’re doing doesn’t bring you closer to what you want to be doing in the job after the next one, then I think you’re looking at it wrong, in some cases.

And I guess I’ll turn the question on to you. If you look at what you’re doing now, ignore whatever you do next, what’s your role after that? Like, where are you aiming at?

Sean: Ignoring the next position… which is interesting, because part of how I learned to operate in my earlier years was to focus on the next two weeks; the further you go out from that window, the more things you can’t control, [laugh] and the harder it is to actually make an effective plan. But for me, the real goal is to be in any position that enables the hard work we do in building these things to make people’s lives easier and better, to give them access to additional information. Maybe it’s joy, in terms of, like, a content platform; maybe it’s something that helps other developers do what they do, something like Honeycomb, for example, just that little bit of extra insight to help them work a little bit better. That’s where I want to be: building things that make the hard work we do to create these tools, these products, easier. So, for me, that would look a lot like an internal tooling team of some sort, something that helps with developer efficiency, with workflow.

One of the reasons—and it’s funny, because I got asked this recently: “Why are you still even in test? You know what reputation this field has”—wrongly deserved, maybe so—“Why are you still in test?” My response was, “Because”—and maybe with a degree of hubris, stubbornly so—“I want to make things better for test.” There are a lot of issues we’re facing, not just in terms of tooling, but in terms of processes and how we think about solving problems. And like I said before, that reactive nature ends up being an ouroboros eating its own tail: reactive tools generate reactive engineers, who then create more reactive tools.

Where I want to be in terms of this is creating things that change that, push us forward in that direction. So, I think that internal tooling team is a fantastic place to do that, but frankly, any place where I could do that at any level would be fantastic.

Corey: It’s nice to see that the things you care about involve a lot more around impact, as opposed to raw technologies and the rest. And again, I’m not passing judgment on anyone who chooses to focus on technology or different areas of these things. It’s just nice to see folks who are deeply technical themselves raising their heads a little bit above it and saying, “All right, here’s the impact I want to have.” It’s great, and lots of folks do, but I’m always frustrated when I find myself talking to folks who think that the code ultimately speaks; code is the arbiter. Like, you see this with some of the smart contract stuff, too.

It’s the, “All right, if you believe that’s going to solve all the problems, I have a simple challenge for you, and then I will never criticize you again: go to small claims court for a morning, four hours, and watch all the disputes that wind up going through there, and ask yourself how many of those a smart contract would have solved.”

Every time I bring that point up to someone, they never come back and say, “This is still a good idea.” Maybe I’m a little too anti-computer, a little bit too human these days. But again, most of cloud economics, in my experience, is psychology more than it is math.

Sean: I think that’s really the truth. And there’s something [unintelligible 00:29:06] that I really want to seize on for a second: code and technology as this ultimate arbiter. We’ve become fascinated with it, not necessarily to our benefit. One of the things you will often see me—to take a line from Game of Thrones—whinging about [laugh] is that we are overly focused on utilizing technology, whether code or anything else, to solve what are fundamentally human problems. These are problems rooted in human tendencies, habits, character, psychology—as you were saying—that require human interaction and influence, as uncomfortable as that may be to, quote-unquote, “solve.”

And the reality of it is that the more we insist upon trying to use technology to solve those problems—things like equity in terms of generational wealth, things like helping people communicate issues with one another within a software development engineering team—the more we will create complexity and additional problems, and the more we will fracture people’s focus and ability to stay focused on the underlying cause of the problem, which is something human. And just as a side note, the fundamental idea that code is the ultimate arbiter of truth is terrible because if code was the ultimate arbiter of truth, I wouldn’t have a job, Corey. [laugh]. I would be out of business so fast.

Corey: Oh, yeah, it’s great. It’s—ugh, I—it feels like that’s a naive perspective that people tend to have early in their career, and Lord knows I did. Everything was so straightforward and simple, back when I was in that era, whereas the older I get, the more the world is shades of nuance.

Sean: There are cases where technology can help, but I tend to find those a very specific class of solutions, and even then, they can only assist a human, maybe by providing some additional context. This is an idea from the Seeking SRE book that I love to reference—I think it’s, like, the first chapter—the head of Netflix SRE, I think it is, talks about how solving problems is this process of relaying and establishing context. He focuses a lot less on the technology side and a lot more on the human side, and brings in the idea that technology can help here because it can give you a little bit better insight into how to communicate context. Context is valuable, but you’re still going to have to do some talking at the end of the day and establish these human relationships. And I think that technology can help with a very specific class of insight or context issues, but I would like to reemphasize that it is a very specific class, of a very specific sort, and most of the human problems we’re trying to solve with technology don’t fall in there.

Corey: I think that’s probably a great place for us to call it an episode. I really appreciate the way you view these things. I think that you are one of the most empathetic people that I find myself talking to on an ongoing basis. If people want to learn more, where’s the best place to find you?

Sean: You can find me on Twitter at S-C—underscore—code, capital U, capital M. That’s probably the best place to find me. I’m most frequently on there.

Corey: We will, of course, include links to that in the [show notes 00:32:37].

Sean: And then, of course, my LinkedIn is not a bad place to reach out. So, you can probably find me there, Sean Corbett, working at TheZebra. And as always, you can reach me at [email protected]. That is my work email; feel free to email me there if you have any questions.

Corey: And we will, of course, put links to all of that in the [show notes 00:33:00]. Sean, thank you so much for taking the time to speak with me today. I really appreciate it.

Sean: Thank you.

Corey: Sean Corbett, Senior Software Development Engineer in Test at TheZebra—because there’s only one. I’m Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you’ve enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you’ve hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry, ranting comment about how code absolutely speaks and is the ultimate arbiter of truth—and oh wait, what’s that? The FBI is at the door to make some inquiries about your recent online behavior.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.
