Keep on Rockin’ in the Server-Free World

Episode Summary

Michael Garski is the director of platform engineering at Fender, the famed electric guitar manufacturer. Prior to this position, he worked as a principal software architect at Viant, a principal software architect at MySpace, a manager of internet development at Countrywide Financial, and a manager of system architecture at Fandango, among other positions. He also had a four-year stint in the US Navy, working as an engineering laboratory technician. Join Corey and Michael as they talk about how artists are angels and Fender’s job is to give them wings, how Fender has diversified its offerings in recent years, how serverless is a mindset and how Fender approaches serverless technology, how Fender’s traffic surged during the pandemic and how everything mostly scaled up without a hitch, the challenges of teaching students to play instruments over the internet, the vendor lock-in boogeyman, and more.

Episode Show Notes & Transcript

About Michael

Michael Garski is the Director of Platform Engineering at Fender Musical Instruments, where he leads the teams responsible for service development & testing, DevOps, and data. He’s been with Fender for over 5 years and prior to that worked as a software engineer & architect on back-end systems at Viant, MySpace, Countrywide Home Loans & Fandango. He is passionate about application reliability and observability and their impact on customer satisfaction.


Links:
Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.



Corey: Your company might be stuck in the middle of a DevOps revolution without even realizing it. Lucky you! Does your company culture discourage risk? Are you willing to admit it? Does your team have clear responsibilities? Depends on who you ask. Are you struggling to get buy-in on DevOps practices? Well, download the 2021 State of DevOps report, brought to you annually by Puppet since 2011, to explore the trends and blockers keeping firms stuck in the middle of their DevOps evolution, where they either fail to evolve or die like dinosaurs. The significance of organizational buy-in, and oh, it is significant indeed, and why team identities and interaction models matter. Not to mention whether the use of automation and the cloud translates to DevOps success. All that and more awaits you. Visit www.puppet.com to download your copy of the report now!


Corey: If you’re familiar with Cloud Custodian, you’ll love Stacklet, which is made by the same people who made Cloud Custodian, but with something useful on top of it so you don’t need to be a YAML expert to work with it. They’re hosting a webinar called “Governance as Code: The Guardrails for Cloud at Scale” because it’s a new paradigm that enables organizations to use code to manage and automate various aspects of governance. If you’re interested in exploring this, you should absolutely make it a point to sign up, because they’re going to have people who know what they’re talking about—just kidding, they’re going to have me talking about this. It’s going to be on Thursday, July 22nd at 1 p.m. Eastern. To sign up visit snark.cloud/stackletwebinar and I’ll talk to you on Thursday, July 22nd.


Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. We talk to a lot of people here on this show who are deep in the weeds of SaaS companies, or cloud vendors, or cloud vendors cosplaying as SaaS companies. Today, we’re taking a bit of a different direction. My guest is Michael Garski, Director of Platform Engineering at Fender Musical Instruments. They make guitars among many other things. Michael, thank you for joining me.


Michael: Oh, thanks for having me on, Corey.


Corey: So, one of the things that I really appreciate about what you do as a company is I can, at least presumably, explain it to someone who is not super deep in technical weeds without 45 minutes of explainer first. The easy answer is, “Oh, Fender. You folks make guitars.” These days, no one just does one thing, I have to imagine. How do you describe what the company does?


Michael: Oh, well, to quote Leo Fender, his view was that artists are angels and it’s our job to give them wings. So, in addition to actually making and developing guitars and amplifiers, we’ve branched off into consumer-facing products to actually teach people how to play those instruments.


Corey: You folks have been relatively outspoken about the various things you’re doing at different AWS events. I mean, my approach to that tends to be that if AWS is great at making bricks that you can use to build amazing things with, “Well, great, can you draw a picture of the house that you can build with this?” “No, we’re going to have a customer come out and talk about that stuff instead.” You folks have been focusing on a lot of serverless work, and you’ve been very public about the fact that you are almost entirely serverless-driven in terms of architecture if I’m not mistaken.


Michael: That is true.


Corey: Tell me about that. How did you get there and what brought it about?


Michael: So, I work in the digital division in Fender. We started, let’s see, we’re coming up on five years I’ve been there. So, what we did was, initially, we started building services that could run within a container, or on an EC2 instance, but we started looking at Lambda functions. We had a need to ingest a product catalog, so the IT team was able to drop us off a product catalog into an S3 bucket, and the easiest thing to do then was just trigger a Lambda function to process that file. And it just kind of snowballed from there.
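
That S3-to-Lambda trigger is a common enough pattern that a sketch may help. Below is a minimal, hypothetical example in Go (the language Michael later says Fender’s Lambda functions are written in), using the aws-lambda-go and aws-sdk-go libraries; the processCatalog helper is a made-up stand-in, not Fender’s actual ingestion code.

```go
package main

import (
	"context"
	"fmt"
	"io"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

var s3Client = s3.New(session.Must(session.NewSession()))

// handler fires for each S3 ObjectCreated event and processes the uploaded catalog file.
func handler(ctx context.Context, evt events.S3Event) error {
	for _, rec := range evt.Records {
		bucket, key := rec.S3.Bucket.Name, rec.S3.Object.Key

		obj, err := s3Client.GetObjectWithContext(ctx, &s3.GetObjectInput{
			Bucket: &bucket,
			Key:    &key,
		})
		if err != nil {
			return fmt.Errorf("fetching s3://%s/%s: %w", bucket, key, err)
		}

		err = processCatalog(ctx, obj.Body)
		obj.Body.Close()
		if err != nil {
			return err
		}
	}
	return nil
}

// processCatalog is a hypothetical stand-in for the real catalog ingestion logic.
func processCatalog(ctx context.Context, r io.Reader) error {
	n, err := io.Copy(io.Discard, r) // placeholder: count bytes instead of actually parsing
	fmt.Printf("processed %d bytes of catalog data\n", n)
	return err
}

func main() {
	lambda.Start(handler)
}
```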


Corey: I think the common problem when people hear ‘serverless’ is they think, “Oh, great. More discussions about Lambda functions.” And Lambda is almost getting something of a tarnished reputation in some circles because when we can build amazing things with it ourselves, we love it, but when we ask AWS how to wind up integrating two services, or about a feature gap, their response is, “Oh, use a Lambda function for it.” It starts to feel like they’re using it as spackle and the spackle has become load-bearing. Do you view serverless as being purely function-driven or is it broader than that?


Michael: It’s much broader than that. Serverless is a mindset where you’re looking beyond just Lambda functions to using a lot of third-party services so that you can actually focus on your core business. Like, we use Zuora as a subscription provider for web-based subscriptions; we use Algolia for full-text search; we use a variety of other services so that we can just focus on the core business.


Corey: One thing that’s been on everyone’s mind, somewhat recently, has been the idea of dramatic changes as far as user behavior goes. And in the more traditional environments where you see things like EC2 instances or on-premises data centers, back when the pandemic first hit, companies that were very focused on a model of business that aligned directly with people behaving in certain ways that they suddenly didn’t saw 80% drop-offs or more in their user traffic, but their infrastructure spend just kept hanging out exactly where it was, in a straight line. So, at some level, it feels like yes, the whole point of cloud is that it can be elastic, except no one builds it that way for a variety of reasons. When COVID hit, what changed for your business?


Michael: What changed for our business is we launched a program called Playthrough—we did this about a year ago—where we gave away three months of Fender Play for free. It was a single-use code that a user would redeem, no credit card required, and over a period of five days, we saw our traffic increase by more than ten times. And there were very few changes we needed to make. Everything scaled up, we had no issue with—we use a lot of Lambda functions, DynamoDB, everything just scaled up fine. The only point that became a bottleneck was our Elasticsearch cluster. However, beefing up the nodes and adding a few more resolved that issue immediately.


Corey: So, I’m going to go out on a limb and postulate that you folks increased pickup when the lockdowns hit, if for no other reason than, “Well, I’m trapped at home and I’m tired of staring at the guitar on the wall. I may as well learn to play it.” I would guess. I could be way off base on that.


Michael: No, no, that’s very true. Even since then, even after that program has expired—of course, not everyone converts and sticks around—but many, many did, many more than we thought would, and our usage and our goals were exceeded for this last year. We’re in a healthy place and looking at continuing to grow and expand in the future.


Corey: So, one of the applications that I think gets a fair bit of attention—rightfully so—lately, is something called Fender Play, and as best I can tell, that is an app that works on the web and on mobile, and it’s a video-based instruction tool for guitar at least, but some other instruments as well. How did that come to be? Did that exist before COVID hit? Has that been something that’s been in the works for a while? Or was it, “Well, we’re going to do a two-week sprint and build this thing from scratch?”


Michael: No, we launched that in the summer of 2017—this June we’re coming up on the fourth anniversary since it’s been launched.


Corey: One of the problems I’ve always found is that it’s challenging to learn to do something that is as, I guess, physical and intricate, et cetera, as playing an instrument without having someone in the room looking at you and smacking you with a stick whenever you do things that are wrong. “Nope, that’s a bad habit. If you keep doing that it’s going to hurt you.” How do you approach that as a company from a non-interactive perspective of someone who’s going to watch a video and do things and maybe it’ll work, maybe it won’t? Particularly in light of things like, well, the competition is YouTube, which, you know, I’m going to roll the dice and sometimes I’ll see a great tutorial, sometimes I’ll see one that I don’t realize is teaching me terrible things, and then it’s going to recommend some baseless conspiracy theory because YouTube. How do you differentiate that? What makes Fender Play different?


Michael: So currently, you’re right; it’s just a video-based instruction app. There’s not any way to, like, provide direct feedback to students within the web and mobile applications. However, we do have an online community, and our Fender Play instructors do an office hours feature where they’ll actually answer questions live and talk to students. We are investigating and doing some early research into possibly being able to provide that type of feedback to users, but it’s a very challenging problem, just due to the nature of playing an instrument that has multiple strings, so you’re trying to pick out the chord that they’re playing and the timing. But it’s something we definitely need to add.


Corey: There’s something to be said as well for the kind of care and attention that you folks wind up putting into your media where, “This is how you finger a chord,” and someone on the YouTube video will do it for two-tenths of a second, and they’re filming it with a potato that isn’t focused properly and pointing at the wrong part of the guitar. You folks have a high bar for quality on this. Is that done in-house? Do you wind up just going through a bunch of random folks that you just wind up offering a bunch of gift cards to, or free guitars to do this? How does the program work on the back end?


Michael: So, we have an in-house curriculum team that puts together the lesson plans to really help people learn in small bite-sized lessons so that it’s not too overwhelming at once. And that curriculum then is shot and filmed by an in-house video team that puts it together; they upload the final cut into S3, then that gets transcoded via MediaConvert, and we serve it up via CloudFront.


Corey: It’s rare to wind up talking to a company that is something of a household name about something that they’re doing, and hear the AWS services that they’re using not trend toward a baseline mean if I can be so bold. Normally, you’ll see some of the case studies, like, “Oh, this is an online bank. What services are they using?” “Oh, they’re using EC2, and S3, and load balancing because did you miss the part where it’s a bank?” They’re not going to use these far-future services due to regulatory risk, among other things, in many cases.


You’re using Elemental MediaConvert, which is one of those relatively high-up-the-stack offerings that isn’t broadly known. It’s one of those services that is focused on specific use cases and specific industry verticals in a way that a baseline primitive service isn’t. What does MediaConvert do?


Michael: What it does is it takes the final edit of the video, and we have several different presets so that it will put it into an HLS format with different bitrates so that the user is getting the best quality video depending on their bandwidth.
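
For a sense of what submitting such a job can look like, here is a hedged sketch using the Go SDK for MediaConvert. Every identifier here—the preset names, role ARN, buckets, and endpoint—is a placeholder, and Fender’s actual job settings aren’t described in the episode; this only illustrates the idea of one output per bitrate rung inside a single HLS group.

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/mediaconvert"
)

func main() {
	// MediaConvert uses an account-specific endpoint; this one is a placeholder.
	sess := session.Must(session.NewSession(&aws.Config{
		Region:   aws.String("us-west-2"),
		Endpoint: aws.String("https://abcd1234.mediaconvert.us-west-2.amazonaws.com"),
	}))
	svc := mediaconvert.New(sess)

	// One output per bitrate rung, each referencing a (hypothetical) saved preset,
	// all packaged into one HLS group so the player can switch rungs by bandwidth.
	outputs := []*mediaconvert.Output{
		{Preset: aws.String("fender-play-hls-1080p"), NameModifier: aws.String("_1080p")},
		{Preset: aws.String("fender-play-hls-720p"), NameModifier: aws.String("_720p")},
		{Preset: aws.String("fender-play-hls-480p"), NameModifier: aws.String("_480p")},
	}

	_, err := svc.CreateJob(&mediaconvert.CreateJobInput{
		Role: aws.String("arn:aws:iam::123456789012:role/media-convert-role"), // hypothetical role
		Settings: &mediaconvert.JobSettings{
			Inputs: []*mediaconvert.Input{
				{FileInput: aws.String("s3://source-bucket/final-cut.mp4")}, // hypothetical source
			},
			OutputGroups: []*mediaconvert.OutputGroup{
				{
					Name: aws.String("HLS"),
					OutputGroupSettings: &mediaconvert.OutputGroupSettings{
						Type: aws.String("HLS_GROUP_SETTINGS"),
						HlsGroupSettings: &mediaconvert.HlsGroupSettings{
							Destination:   aws.String("s3://video-bucket/lessons/lesson-42/"),
							SegmentLength: aws.Int64(6),
						},
					},
					Outputs: outputs,
				},
			},
		},
	})
	if err != nil {
		log.Fatalf("creating MediaConvert job: %v", err)
	}
	log.Println("transcode job submitted")
}
```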


Corey: When I looked into it in the early days when it was first launching, I found that it looked an awful lot like Elastic Transcoder, which is a service that they’ve had for a while, only they changed up some of the capabilities. It’s obviously far more capable as a service, but they also added something that felt like 15 different billing dimensions to it. “So, what is this going to cost me?” “Well, we’re going to run it for a month and find out if we’re still in business.” And it seemed like one of those services that’s very difficult to get started with and run experiments on. Now, obviously, services evolve over time. When you started looking into it, was that experience roughly akin to what you felt, or am I completely and unfairly slandering the product?


Michael: We actually started out using Elastic Transcoder and then moved over to MediaConvert, I believe it was last year. We found it to be a little bit easier to use, and the pricing overall in transcoding the videos for us is really a drop in the bucket as compared to actually hosting them and serving them up via CloudFront. And when we switched over to MediaConvert, we adjusted our settings to lower the maximum bitrate for a given video; we found that after a certain point, the quality to the user just doesn’t really improve, and yet we’re paying to serve the larger video.


Corey: One statistic that I found was that in March of 2020—you know which I believe we’re still in at this point; just, it’s the Endless September model, applied to March—you wound up seeing over an order of magnitude in traffic increase within five days, and looking at that through a lens of traditional architecture, that means that nobody sleeps a whole heck of a lot. Given that you’re in on the serverless story, and you have been since before that hit, what was that scaling experience like for you?


Michael: The scaling experience was completely seamless. We use a lot of Lambda, DynamoDB, Kinesis, and SNS to glue things together, and had no problems whatsoever. We just had to bump up our Elasticsearch cluster a bit; that was really the only thing, because we saw some latency starting to rise on some of our APIs.


Corey: Let me ask the uncomfortable question then, because whenever I’ve tried to scale things up quickly in a cloud environment, I’ve smacked into various AWS service limits: what was your experience with those limits as the traffic grew?


Michael: Initially, we actually requested some service limit increases to make sure we weren’t hitting the concurrent Lambda invocation limit, and same thing with Cognito, making sure that we weren’t going to hit any limits as far as sign-ins and things like that. So, we were able to just put in requests, and they turned those around pretty quickly as well.


Corey: It really does seem like there’s a strong benefit on the serverless space, but I had to double-check before we started recording that you do, in fact, work at Fender because you are a staunch advocate for observability. And usually, when someone is that passionate about observability, you can guess that they work at an observability-slash-monitoring company. It’s akin to the idea of someone selling mattresses telling you that mattresses are great and you should have four of them. You’re on the customer side of that and still very passionate about it. Where’d that come from?


Michael: It came from my time years ago, when I worked at MySpace—if anyone can still remember that—working on the search systems there. As the company started winding down and laying people off, and being one of the only people left working on those systems, you just have to be able to know and understand them, so you have to continue to monitor and find ways to monitor. That really ingrained how important instrumentation is, and being able to really understand the health of your application as it’s running, so that you can see, yes, everything is good, and then when something doesn’t look right, you know where to start looking and you can be alerted of a problem.


Corey: So, I tend to view the world in olden terms where monitoring was what we did, and we use something like Nagios, which was the second-worst option out there because everything else felt like it was tied for first. I also take a somewhat regressive view that observability is to monitoring as DevOps is to being a systems administrator. It’s the same thing, but by using the more modern terminology, you can charge more for it. I’m going to go out on a limb and guess that you take a somewhat contrarian [laugh] view to that.


Michael: Yes, yes, I do. It’s about really understanding how your applications are running. It’s not just looking at, oh, how many HTTP 500s am I serving up per hour, or whether I hit a threshold for the last hour? It’s a lot more than that. It’s really being able to dig in and see what the issue is or what’s working really well.


And to that end, we rely on two services for this. We use Honeycomb and Epsagon. Honeycomb, kind of, acts as our top layer because it gives us the really good high-cardinality metrics where I can punch in a user ID and see all the API traffic that this user has performed. Even when we launched Playthrough and our traffic rose, the reason we discovered that our latency was climbing was that a service-level objective on latency was triggered in Honeycomb. And we were able to respond to that before customers really noticed anything at all.
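
To make the high-cardinality idea concrete, here is a rough sketch of that kind of instrumentation using Honeycomb’s Go beeline library; the dataset, service, span, and field names are illustrative assumptions, not Fender’s actual code.

```go
package main

import (
	"context"
	"os"

	beeline "github.com/honeycombio/beeline-go"
)

func main() {
	// Send events to Honeycomb; the dataset and service names here are made up.
	beeline.Init(beeline.Config{
		WriteKey:    os.Getenv("HONEYCOMB_WRITE_KEY"),
		Dataset:     "fender-play-api", // hypothetical dataset name
		ServiceName: "lessons-service", // hypothetical service name
	})
	defer beeline.Close()

	handleGetLesson(context.Background(), "user-1234", "lesson-42")
}

// handleGetLesson attaches the raw user ID to the span, so you can later ask
// Honeycomb "show me every API call this one user made"—the high-cardinality query.
func handleGetLesson(ctx context.Context, userID, lessonID string) {
	ctx, span := beeline.StartSpan(ctx, "get_lesson")
	defer span.Send()

	beeline.AddField(ctx, "app.user_id", userID)     // high-cardinality field
	beeline.AddField(ctx, "app.lesson_id", lessonID) // another queryable dimension

	// ...the real lesson lookup would happen here...
}
```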


Corey: As an Epsagon customer myself, I’m always conflicted when I find myself going into their service and using it to figure out what the heck’s going on with my giant pile of Lambda functions, and API gateways, and whatnot, wired together, because the experience is uniformly excellent, but I’m also frustrated in that it needs a third party to even begin to allude to what’s going on. It feels, on some level, like the vendor that is providing this service to me should be reasonably effective at telling me what it’s doing, and when it’s breaking. I understand that how I wish the world were and how it actually is are two radically different things, but does that ever strike you as well?


Michael: Whether or not AWS should be providing that type of tooling—that seems like more of a space where you can have competition and other vendors that really specialize and get into the weeds on it. I don’t think AWS needs to provide every service you could possibly use for your application. That’s not something I’m too concerned about. I don’t really even think it’s their place, frankly.


Corey: No, no, I understand. The problem I keep running into, on some level, whenever I try to diagnose it natively is, I look at CloudWatch and it’s difficult to understand—in my case because, again, I’m still early days with a lot of these things—is it the API Gateway that’s having the problem? Is it the CloudFront distribution that is tied to that? Is it the Lambda function? Where’s the handoff?


Trying to understand where in a complicated application the failure is occurring is a challenge. And let’s be clear, most of that is a problem of my own making because I didn’t have the good sense to instrument this thing in a reliable repeatable way when I built it. It feels like everything is tied together with duct tape, and baling wire, and spit, and a bit of luck. As a counterpoint, the more companies I talk to, the more I realize that no, no, this is actually how most people feel [laugh] when they look at things that are working. It’s, yeah, it’s terrible. It’s a trash fire, but it makes money so we’re going to roll with it.


And there’s always, on some level, a sense of what we’ve built is very far from the platonic ideal of what we should have built. Does that resonate with you, or do you take a step back and look at what you’ve achieved with a perspective of, “This is awesome. More people should do it exactly like this.” And honestly, if it’s that one, I’d love to take a look at what you’ve built.


Michael: I think there’s always room for us to improve on what we’re doing because we’re constantly learning and evolving, even at such a low level as, “Okay, how do we lay out the files in our service repository so the organization makes the most sense?” all the way up to, “Okay, how are we going to do tracing? And what kind of information do we need to get from that so that we can find problems when they occur?” We’re always looking to learn what others are doing, and talking to others in this space. No one will ever be a hundred percent right. There’s always room for improvement everywhere.


Corey: This episode is sponsored in part by LaunchDarkly. Take a look at what it takes to get your code into production. I’m going to just guess that it’s awful because it’s always awful. No one loves their deployment process. What if launching new features didn’t require you to do a full-on code and possibly infrastructure deploy? What if you could test on a small subset of users and then roll it back immediately if results aren’t what you expect? LaunchDarkly does exactly this. To learn more, visit launchdarkly.com and tell them Corey sent you, and watch for the wince.

Corey: One thing that you folks have done that I think was really interesting, and didn’t get as much play as I think it really deserved, was that, especially in the early days of the pandemic, you wound up seeing that massive increase due to giving out almost a million free three-month subscriptions to Playthrough. Additionally, you also worked closely with LAUSD, the Los Angeles Unified School District, to add Fender Play to their middle school music program’s curriculum to help supplement their remote learning programs. First, was that all in the same timeframe? And, two, what has it been like, I guess, working with an organization that is, I guess, on some level, not particularly cloud-first, I would imagine. When I lived in Los Angeles, I never got the sense that LAUSD was full-on serverless, full on-board with cloud, full on-board with remote learning. And then the pandemic, of course, exacerbates all of that.


Michael: Yeah, so those were really two different projects. The Playthrough project started in March, and we started working with Los Angeles Unified School District last year during their summer school program; we started out with 1,500 students and we put it together very quickly. Essentially, we used the same three-month codes that we used for the Playthrough promotion so that we could set things up very quickly for students, and through our nonprofit arm, the Fender Play Foundation, gave out 1,500 instruments to these students to use during the summer school program. And that program became so successful, we continued on with them in the fall, and now in the current semester, and we will again this summer. I believe there are 7,000 students in the program now.


And working with their IT team has actually been quite nice. And in dealing with partners, you wouldn’t think much of, “Oh, it’s a school district, what do they have?” But as far as just ease of working with them, we actually hooked into their SAML provider in Cognito so that LAUSD students could authenticate when they come in through the remote learning systems. And they were great to work with and very helpful and cooperative.


Corey: One of the arguments that you’ll see come up against serverless, from time to time, is that you are now indelibly linked to your provider—you can’t take what you’ve built with all of these services and just move it over to Azure or GCP on a moment’s whim. Now, in practice, people who tend to build for that just build everything on top of EC2 and very little else, and then run it entirely in AWS and never move it to any of those other places. But was there friction with making that, I guess, architectural commitment to a single vendor?


Michael: Oh, you’re bringing up the vendor lock-in Boogeyman.


Corey: Oh, I absolutely am—though when I bring it up, it’s as a straw man so you can attack it. Most people who bring up the vendor lock-in Boogeyman, “Oh, you have to go multi-cloud,” are either trying to sell you something that is required if you want to go multi-cloud, or they’re a cloud provider themselves who knows that if you go all-in on one provider, it will certainly not be theirs.


Michael: I think if you properly architect your applications with separation of concerns, you could move. Say Lambda wasn’t working out for us anymore, and we needed to take our applications and put them into a container, but we’re going to stay in AWS. Our applications are set up in such a way that Lambda is basically a deployment pattern. We could easily convert those individual function handlers into route handlers with minimal effort because the business logic and the underlying data storage are separated. So, it would be feasible for us if we wanted to, say, move to Azure and use Azure Functions and whatever comparable service they have to DynamoDB. I’m not too familiar with a lot of their offerings.


But that would certainly be possible to do with, obviously, some effort. And really, at the end of the day, the resources you have working on the applications end up costing you much more than any software licensing or specific savings you’re going to get from a cloud vendor, so you might as well go ahead and just use the services they’re providing so that you can focus on the business.
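
The “Lambda as a deployment pattern” separation Michael describes can be sketched roughly like this in Go; all of the type and function names are invented for illustration, and the point is only that both the Lambda handler and an ordinary HTTP route handler are thin adapters over the same business logic.

```go
package main

import (
	"context"
	"encoding/json"
	"net/http"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

type Lesson struct {
	ID    string `json:"id"`
	Title string `json:"title"`
}

// LessonStore hides the storage choice (DynamoDB, memory, anything) from the handlers.
type LessonStore interface {
	Get(ctx context.Context, id string) (Lesson, error)
}

type memStore map[string]Lesson

func (m memStore) Get(_ context.Context, id string) (Lesson, error) { return m[id], nil }

// LessonService is the business logic; neither handler below knows how it is backed.
type LessonService struct{ store LessonStore }

func (s *LessonService) GetLesson(ctx context.Context, id string) (Lesson, error) {
	return s.store.Get(ctx, id)
}

// Lambda deployment pattern: a thin adapter from an API Gateway event to the service.
func lambdaHandler(svc *LessonService) func(context.Context, events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	return func(ctx context.Context, req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
		lesson, err := svc.GetLesson(ctx, req.PathParameters["id"])
		if err != nil {
			return events.APIGatewayProxyResponse{StatusCode: http.StatusInternalServerError}, nil
		}
		body, _ := json.Marshal(lesson)
		return events.APIGatewayProxyResponse{StatusCode: http.StatusOK, Body: string(body)}, nil
	}
}

// Container deployment pattern: the same business logic behind an ordinary HTTP route.
func httpHandler(svc *LessonService) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		lesson, err := svc.GetLesson(r.Context(), r.URL.Query().Get("id"))
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		json.NewEncoder(w).Encode(lesson)
	}
}

func main() {
	svc := &LessonService{store: memStore{"42": {ID: "42", Title: "First chords"}}}
	lambda.Start(lambdaHandler(svc))
	// In a container you would instead run: http.ListenAndServe(":8080", httpHandler(svc))
}
```

The portability argument rests on the design choice that the store interface and business logic never import anything Lambda-specific; only the thin adapters at the edge would change.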


Corey: My approach has almost universally been that, looking at an awful lot of companies and their AWS bills, it is a challenge to find an environment where the resources in the environment cost more than the people who are operating them. In the context of business, AWS bills seem giant and enormous, right up until you look at payroll, and then it’s, “Oh, okay.” That’s counterintuitive for folks who are learning this, and I fall prey to it myself: when I’m playing around as a hobbyist trying to build something I value, my time is free because I’m learning as this goes, and then in that context, especially when I was starting out as a student, it was, “Oh, great. So, this winds up costing me $7 a month. Oh, that’s a lot of money. That’s my ramen budget, so I’m instead going to wind up spending eight hours avoiding having it charge me anything.” It’s the exact opposite of the direction you want staff that you’re paying to work on these things to go in. How do you approach the idea of increasing the cloud cost if it will save time for your team?


Michael: It’s a balance between: do we need to build this ourselves—and then not only build it, but operate it and maintain it—or what is the cost of using this third-party service? That’s really what it comes down to in all of them. Do we actually want to spend time working on a piece of infrastructure that these other people are specializing in and do so well? I’ve got better things I can have people doing than that.


Corey: Speaking of people, one thing that you talk about, as you self-describe, is that you wind up not writing a whole lot of code anymore, but you’re something of a stickler for observability and enforcing consistency between services, so you’ll periodically do things like submit a PR to tweak a log message to put your mind at ease, was one example that you gave. Given that you’re a director, which is generally manager of managers style approaches, how do you avoid having those PRs come across to your team as either micromanagement or a condemnation of what they’ve built? Because I get it; when I see something that’s easy and small to tweak, I want to go ahead and get it fixed immediately. I don’t want to go back and forth and play those games; I just want it done. But I’m also always weighing that against, I don’t want to have people think that I’m judging them somehow for something I’m very much not.


Michael: That’s a very good point. The larger technical decisions on how things are laid out, I generally don’t insert myself into. I let the team go ahead and make those decisions and set that direction, and let them take charge of it, and I take the approach of looking at it as more of guiding, mentoring, and teaching, to really hone and instill the discipline of being able to understand what the applications are doing. And as our team is growing, I have less and less time to even do those things, but I can go through the systems and go, “Hey, how come we’re not tracing this call to the reCAPTCHA servers? Let’s add that in there.” And at this point now, I mainly just write Jira tickets to have someone else actually do the work.


Corey: The more I do this, the more I realize that as complicated as the technology is, the people are, in many ways, far more complicated. And let’s be fair here, non-deterministic things that work super well with one person one month could work entirely differently the following month, or even with the same person, or between teams. It’s a constant balancing act, on some level. And giving people a sense of psychological safety has always been the biggest challenge. The thing that surprised me about management, back when I was running ops teams, was that the more, I guess, responsibility you accrue as you rise from individual contributor into management—or ‘rise’ is sort of the wrong term; it’s an orthogonal transition—the more time you spend on the people problems, and your ability to directly control or affect change diminishes because you have to do everything via influence. You get a lot more responsibility with a lot less direct power [laugh] over the outcome in some ways. Does that align with how you see it, or am I just—do I have very strange approaches on management? Which may be true, and why I got out of it as fast as I could.


Michael: No, that is a good point, because you are having to [unintelligible 00:27:05], like, influence, and guide, and take a higher-level view, as opposed to really getting into the weeds of, like, “Okay, what methods are we going to put on this interface? How are we going to, say, architect the internals of an application?” Those are details I just really don’t have time for anymore. But larger things, as to making sure that we’re okay, it’s like, “What’s the performance of this? And overall, is it something that can be adapted as the business needs change, and as we change? And as we learn, what can we do to modify it?” It’s more just things like guiding, and mentoring, and really taking a higher-level view of that.


Corey: I’m going to selfishly ask about something that I struggle with myself. That goes a bit more into the technical area, but you talk about enforcing consistency across all of your different services. What does that mean? Similar coding style? Similar instrumentation?


Because I look at the things I built and microservices that power my internal nonsense, and each one of those is very different than all the rest. So, whatever your version of consistency is, I know I’m not doing it. But how do you view it?


Michael: So, there are really two types of consistency. The one I refer to the most is in observability. If you’ve got a thousand Lambda functions out there, and each one is logging things slightly differently, that’s just a pain to deal with—realistically, dealing with a thousand unicorns is a real pain. So, for that observability, at least in Lambda, we use an internally developed middleware to make sure that the logging is consistent, and it’s easy enough to use. And then there’s other consistency, like just within projects, of how we lay things out.


That’s something that’s been consistently evolving. What’s the folder structure in how we organize the code? And we’ve kind of been evolving that over the last three years. And within about the last six months, we’ve come up with a really good pattern and a template for the future. And it’s not much different from what we started out with, but it’s a little bit easier, really, to comprehend as a new engineer coming in. It makes more sense.
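
Fender’s internally developed middleware isn’t public, but the general shape of “wrap every Lambda handler so they all log the same way” might look something like this rough Go sketch; the service name and log field names are invented for illustration.

```go
package main

import (
	"context"
	"encoding/json"
	"log"
	"os"
	"time"

	"github.com/aws/aws-lambda-go/lambda"
)

// Handler is the common shape every wrapped function conforms to.
type Handler func(ctx context.Context, payload json.RawMessage) (interface{}, error)

// withLogging wraps a handler so all functions emit the same structured log line:
// service name, duration, and outcome, in one consistent JSON shape.
func withLogging(service string, next Handler) Handler {
	return func(ctx context.Context, payload json.RawMessage) (interface{}, error) {
		start := time.Now()
		out, err := next(ctx, payload)

		entry := map[string]interface{}{
			"service":     service,
			"duration_ms": time.Since(start).Milliseconds(),
			"ok":          err == nil,
		}
		if err != nil {
			entry["error"] = err.Error()
		}
		line, _ := json.Marshal(entry)
		log.Println(string(line))
		return out, err
	}
}

// hello is a trivial handler to wrap; real business logic would live here.
func hello(ctx context.Context, payload json.RawMessage) (interface{}, error) {
	return map[string]string{"message": "hello from " + os.Getenv("AWS_LAMBDA_FUNCTION_NAME")}, nil
}

func main() {
	lambda.Start(withLogging("lessons-service", hello))
}
```

Every function wrapped this way emits the same JSON shape, which is what makes a thousand functions feel like one system rather than a thousand unicorns.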


Corey: I have to ask—and I understand if you don’t want to give a particular endorsement in any direction—but do you go through Serverless Framework, SAM CLI, the CDK, using the console and then lying about it? What is the template that you wind up using for that uniformity? Because even internally, I use three or four of those different things and professional advice: don’t do that.


Michael: Let’s see. So, in our development, QA, production environments, infrastructure is all managed with Terraform. Each engineer has their own personal AWS account so that they can work on things there—


Corey: Oh, that makes billing granularity super easy.


Michael: Oh, yes. You can tell who’s got EC2 instances running for too long. But for the most part, we’ll use Serverless Framework in that regard so that the engineer can just deploy into their local environment, although we are working on ways to reuse the Terraform infrastructure and deploy that. But we have our own build and deployment pipeline that we built using CircleCI, and all of our Lambda functions are in Go.


And so, having to compile, say, 20 binaries in a service gets kind of slow, so one of our DevOps engineers actually came up with a way to use Lambda to build the Lambdas, so that we can build them all in a distributed, parallel fashion during the build process.
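
Fender’s build tooling isn’t described in detail here, so the following is only a sketch of the general idea: from CI, invoke a hypothetical “builder” Lambda once per binary, in parallel, and wait for all of the invocations to finish. The function name, payload shape, and target names are all made up.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"sync"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/lambda"
)

// buildRequest is the payload a hypothetical "go-binary-builder" function would accept.
type buildRequest struct {
	Repo   string `json:"repo"`
	Commit string `json:"commit"`
	Target string `json:"target"` // which cmd/<name> to compile
}

func main() {
	svc := lambda.New(session.Must(session.NewSession()))
	targets := []string{"get-lesson", "list-plans", "redeem-code"} // illustrative handler names

	var wg sync.WaitGroup
	for _, t := range targets {
		wg.Add(1)
		go func(target string) {
			defer wg.Done()
			payload, _ := json.Marshal(buildRequest{Repo: "fender/service", Commit: "HEAD", Target: target})
			_, err := svc.Invoke(&lambda.InvokeInput{
				FunctionName: aws.String("go-binary-builder"), // hypothetical builder function
				Payload:      payload,
			})
			if err != nil {
				log.Printf("build of %s failed: %v", target, err)
				return
			}
			fmt.Printf("built %s\n", target)
		}(t)
	}
	wg.Wait()
}
```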


Corey: One thing that I do love about the whole serverless approach—and it is a neat part about Lambda—is no two people ever seem to do it quite the same way. You can tie things together in so many different and exciting ways, and it’s fun. It’s almost like a modern version of playing with Lego. And I know that if Jeff Barr is listening, he just perked up at that. But I love the concept that you can take so many different ways to achieve similar outcomes. And it almost gives a bigger sense of creativity in how you approach problems. Has that been your experience?


Michael: Oh, definitely. It’s not only the creativity; it’s also the flexibility in how you solve it, and the ability to adapt and evolve as services evolve, or change, or new ones are added. And to the point of AWS, kind of, saying, “Oh, use a Lambda function to do this”—using Lambda functions to customize the behavior of Cognito with the Cognito triggers is, to me, I think, a perfect way to customize the service to do exactly what you need to do.
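
As one concrete flavor of what a Cognito trigger customization can look like, here is a hedged Go sketch of a pre-sign-up trigger; the domain rule is invented purely for illustration and isn’t anything Fender describes doing.

```go
package main

import (
	"context"
	"strings"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

// handler runs on Cognito's PreSignUp trigger: Cognito invokes it before creating
// the user, and the mutated event it returns controls what Cognito does next.
func handler(ctx context.Context, evt events.CognitoEventUserPoolsPreSignup) (events.CognitoEventUserPoolsPreSignup, error) {
	email := evt.Request.UserAttributes["email"]

	// Illustrative rule only: auto-confirm users from a trusted partner domain
	// so they skip the email verification step.
	if strings.HasSuffix(email, "@example-school-district.org") {
		evt.Response.AutoConfirmUser = true
		evt.Response.AutoVerifyEmail = true
	}
	return evt, nil
}

func main() {
	lambda.Start(handler)
}
```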


Corey: I want to thank you so much for taking the time to speak with me today. It’s always appreciated. If people want to hear more about what you have to say and how you view these things, or even, possibly, decide to work with you, where can they find you?


Michael: I’m somewhat active on LinkedIn. LinkedIn is the best place to find me. Please go ahead and connect with me; tell me you heard me on the podcast here.


And yes, we are hiring—all across our technical organization, from client, web, and mobile engineers to data engineers, DevOps, and API. We’re always hiring, and if we don’t have something right now that fits your experience, let me know that you’re interested and I’ll put you on the list, so that when we do have an opening, we’ll reach out right away.


Corey: And we will, of course, include links to that in the [show notes 00:32:20]. Thank you so much for being so generous with your time. I appreciate it.


Michael: Thanks for having me on, Corey. It was nice talking to you.


Corey: Michael Garski, Director of Platform Engineering at Fender Musical Instruments. I’m Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you’ve enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you’ve hated this podcast, please leave a five-star review on your podcast platform of choice, along with a comment telling me that I’m almost certainly doing that chord incorrectly.


Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.


Announcer: This has been a HumblePod production. Stay humble.