Episode Show Notes & Transcript
Corey: It seems like there is a new security breach every day. Are you confident that an old SSH key, or a shared admin account, isn’t going to come back and bite you? If not, check out Teleport. Teleport is the easiest, most secure way to access all of your infrastructure. The open source Teleport Access Plane consolidates everything you need for secure access to your Linux and Windows servers—and I assure you there is no third option there—Kubernetes clusters, databases, and internal applications like AWS Management Console, Jenkins, GitLab, Grafana, Jupyter Notebooks, and more. Teleport’s unique approach is not only more secure, it also improves developer productivity. To learn more visit: goteleport.com. And no, that is not me telling you to go away; it is: goteleport.com.
Corey: This episode is sponsored in part by our friends at Redis, the company behind the incredibly popular open source database that is not the BIND DNS server. If you’re tired of managing open source Redis on your own, or you’re using one of the vanilla cloud caching services, these folks have you covered with the go-to managed Redis service for global caching and primary database capabilities: Redis Enterprise. To learn more and deploy not only a cache but a single operational data platform for one Redis experience, visit redis.com/hero. That’s r-e-d-i-s.com/hero. And my thanks to my friends at Redis for sponsoring my ridiculous nonsense.
Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. It’s often said that the sun never sets on the British Empire, but it’s often very cloudy and hard to see the sun because many parts of it are dreary and overcast. Here to talk today about how we can predict those things in advance—in theory—is Jake Hendy, Tech Lead at the Met Office. Jake, thanks for joining me.
Jake: Hey, Corey, it’s lovely to be here. Thanks for inviting me on.
Corey: There’s a common misconception that it’s startups in San Francisco or the culture thereof, if you can even elevate it to being a culture above something you’d find in a petri dish, that is where cloud stuff happens, where the computer stuff is done. And I’ve always liked cutting against that. There are governments that are doing interesting things with cloud; there are large companies; and ‘move fast and break things’ is the exact opposite of what you generally want from institutions that date back centuries. What’s it like working on cloud, something that for all intents and purposes didn’t exist 20 years ago, in the context of a government office?
Jake: As you can imagine, it was a bit of a foray into cloud for us when it first came around. We weren’t one of the first people to jump. The Met Office, we’ve got our own data centers, which we proudly sit on, containing supercomputers and mainframes as well as a plethora of x86 hardware. So, we didn’t move fast at the start, and nowadays, we don’t move at breakneck speeds, but we like to take advantage of those managed services. It gets the managing of things out of the way for us.
Corey: Let’s back up a second because I tend to be stereotypically American in many ways. What is the Met Office?
Jake: What is the Met Office? The Met Office is the UK’s National Meteorological Service. And what does that mean? We do a lot of things with meteorology, from weather forecasting and climate research at our Hadley Centre—which is world-renowned—down to observations, collections, and partnerships around the world. So, if you’ve been on a plane over Europe, the Middle East, Africa, over parts of Asia, that plane took off because the Met Office provided a forecast for that plane. There’s a whole range of things we can talk about there, if you want, Corey, of what the Met Office actually does.
Corey: Well, let’s ask some of the baseline questions. You think of a weather office in a particular country as, oh okay, it tracks the weather in the area of operations for that particular country. Are you looking at weather on a global basis, on a somewhat local basis, or—as mentioned—since due to a long many-century history it turns out that there are UK Commonwealth territories scattered around the globe, where do you start? Where do you stop?
Jake: We don’t start and we don’t stop. The Met Office is very much a 24/7 operation. So, we’ve got a 24/7 operations center with staff constantly manning it, doing all sorts of things. So, we’ve got defense—we work heavily with our defense colleagues, from UK armed forces to NATO partners; we’ve got aviation, as mentioned; we’ve got marine shipping—most of the listeners in the UK will have heard of the shipping forecast at one point or another. And we’ve got the private sector as well, from transport, to energy, supermarkets, and more. We have a very heavy UK focus, for obvious reasons, but our remit goes wide. You can actually go and see some of our model data on Amazon Open Data. We’ve got MOGREPS, which is our ensemble forecast, as well as global models and UK models, with a 24-hour time lag, but feel free to go and have a play. And you can see the wide variety of data that we produce in just those few models.
Corey: Yeah, just pulling up your website now; looking at where I am here in San Francisco, it gives me a detailed hour-by-hour forecast. There are only two problems I see with it. The first is that it’s using Celsius units, which I—
Corey: —as a matter of policy, don’t believe in because in this country, we don’t really use things that make sense in measuring context. And also, I don’t believe it’s a real weather site because it’s not absolutely festooned with advertisements for nonsense, which is apparently—I wasn’t aware—a thing that you could have on the internet. I thought that showing weather data automatically meant that you had to attempt to cater to the lowest common denominator at all times.
Jake: That’s an interesting point there. So, the Met Office is owned and operated by Her Majesty’s Government. We are a Trading Fund with the Department for Business, Energy and Industrial Strategy. But what does it mean that we’re a Trading Fund? It means that we’re funded by public money. So, that’s called the Public Weather Service.
But we also offer a more commercial venture. So, depending on what extensions you’ve got going on in your browser, there are actually adverts that do run on our website, and we do this to help recover some of the cost. So, the Public Weather Service has to recover some of that. And then lots of things are funded by the Public Weather Service, from observations, to public forecasting. But then there are those more commercial ventures, such as the energy markets, that have more paid products and things like that as well. So, maybe not that many adverts, but definitely more usable.
Corey: Yeah, I disabled the ad blocker, and I’m reloading it and I’m not seeing any here. Maybe I’m just considered to be such a poor ad targeting prospect at this point that people have just given up in despair. Honestly, people giving up on me in despair is kind of my entire shtick.
Jake: We focus heavily on user-centered design, so I was fortunate in my previous team to work in our digital area, consumer digital, which looked after our web and mobile channels. And I can heartily say that a lot of those changes had a lot of heavy research into them. Not just internal—getting [unintelligible 00:06:09] and having a look at it—but what does this actually mean for members of the public? Sending people out doing guerrilla public testing, standing outside Tescos—which is one of our large superstores here—and saying, “Hey, what do you think of this?” And then you’d get a variety of opinions, and then features would be adjusted, tweaked, and so on.
Corey: So, you folks have been a relatively early adopter, especially in an institutional context. And by institution, I mean, one of those things that feels like it is as permanent as the stones in a castle, on some level, something that’s lasted more than 20 years here in California, what a concept. And part of me wonders, were you one of the first UK government offices to use the cloud, and is that because you do weather and someone was very confused by what Cloud meant?
Jake: [laugh]. I think we were possibly one of the first; I couldn’t say if we were the first. Over in the UK, we’ve got a very capable network of government agencies doing some wonderful, and very cloud-focused, things. And the Government Digital Service was an initiative set up—unfortunately I can’t remember the name of the report that caused its creation—but they had a big hand in doing design and cloud-first deployments. In the Met Office, we didn’t take a, “Ah, screw it. Let’s jump in;” we took a measured step into the cloud waters.
Like I said, we’ve been running supercomputers since the ’50s, and mainframes as well, and x86. I mean, we’ve been around for 100 years, so we constantly adapt, and engage, and iterate, and improve. But we don’t just jump in and take a risk because like you said, we are an institution; we have to provide services for the public. It’s not something that you can just ignore. These are services that protect life and property, both at home and abroad.
Corey: You have provided a case study historically to AWS, about your use cases, back in 2014. It was, oh, you’re a heavy user of EC2, and looking at the clock, and oh, it’s 2014. Surprise. But you’ve also focused on other services as well. I believe you personally provided a bit of a case study slash story around your use of Pinpoint of all things, which is a wrapper around SES, their email service, in the hopes of making it a little bit more, I guess, understandable slash fully-featured for contacting people, but in my experience is a great sales device to drive business to its competitors.
What’s it been like working, I guess, simultaneously with the tried-and-true, tested, yadda, yadda, yadda, EC2-and-RDS-style stuff, but then looking at what else is out there? You’re deep into Lambda and DynamoDB, and SQS sort of stands between both worlds, given it was the first service in beta, but it’s also a very modern way of thinking about services. How do you contextualize all of that? Because AWS’s product strategy is clearly, “Yes,” and they build anything for anyone, more or less, it seems. How do you think about the ecosystem of services that are available and apply it to problems that you’re working on?
Jake: So, in my personal opinion, I think the Met Office is one of a very small handful of companies around the world that could use every Amazon service that’s offered, even things like Ground Station. On my first day in the office, I went and sat at my desk and was talking to my new colleagues, and I looked to the left, and a colleague said, “Oh, yeah, that’s a satellite dish collecting data from a satellite passing overhead.” So, we very much pick the best tool for the job. So, for systems which do heavy number crunching and very intense things, we’ll go for EC2.
We have systems that store data that needs relationships and all sorts of things. Fine, we’ll go RDS. In my space, we have over a billion observations a year coming through the system I lead on, SurfaceNet. So, do we need RDS? No. What about if we use something like S3 and Glue and Athena to run queries against this?
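The S3-plus-Glue-plus-Athena pattern Jake mentions might look roughly like this; the table name, columns, and station ID below are invented for illustration, since SurfaceNet’s real schema isn’t public:

```python
# Sketch of querying flat observation files in S3 via Athena instead of RDS.
# Table and column names are hypothetical; the real SurfaceNet schema differs.

def build_observation_query(station_id: str, day: str) -> str:
    """Build an Athena SQL query over a Glue-catalogued observations table."""
    return (
        "SELECT obs_time, air_temp_c, wind_speed_ms "
        "FROM observations "                    # Glue table backed by S3 objects
        f"WHERE station_id = '{station_id}' "
        f"AND dt = '{day}' "                    # partition column keeps scans cheap
        "ORDER BY obs_time"
    )

query = build_observation_query("EXETER_01", "2021-11-01")
print(query)

# Actually running it needs AWS credentials, so this part stays commented out:
# import boto3
# athena = boto3.client("athena")
# athena.start_query_execution(
#     QueryString=query,
#     ResultConfiguration={"OutputLocation": "s3://my-results-bucket/athena/"},
# )
```

The appeal of the pattern is that there is no database to run at all: the observations sit in S3 as flat files, and you pay per query rather than for an always-on RDS instance.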
We’re very fortunate that we can pick the best tool for the job, and we pride ourselves on getting the most out of our tools and getting the most value for money. Because like I said, we’re funded by the taxpayer; the taxpayer wants value for money, and we are taxpayers ourselves. We don’t want to see our money being wasted on a hundred-instance auto-scaling group when we could do it with Lambda instead.
Corey: It’s fascinating talking about some of the forward-looking stuff, and oh, serverless and throw everything at Cloud and be all in on cloud. Cloud, cloud, cloud. Cloud is the future. But earlier this year, there was a press release where the Met Office and Microsoft are going to be joining forces to build the world’s, and I quote, “Most powerful weather and climate forecasting supercomputer.” The government—your government, to be clear—is investing over a billion pounds in the project.
It is slated to be online and running by the middle of next year, 2022, which for a government project as I contextualize them feels like it’s underwear-on-outside-the-pants superhero speed. But that, I guess, is what happens when you start looking at these public-private partnerships in some respects. How do you contextualize that? What is the story behind it? You’re clearly investing heavily in cloud, but you’re also building your own custom enormous supercomputer rather than just waiting for AWS to drop one at re:Invent. What does the decision-making process look like? What is the strategy behind it?
Jake: Oh. [laugh]. So—I’ll have to be careful here—supercomputing is something that we’ve been doing for a long time, since the ’50s, and we’ve grown with that. When the Met Office moved offices from Bracknell in 2002, 2003, we ran two supercomputers for operational resilience. At that point the [unintelligible 00:12:06] building in the new building was ready, and they were like, “Okay, let’s move a supercomputer.” So, it came hurtling down the motorway, got plugged in, and congrats, we’ve now got two supercomputers running again. We’re very fortunate—
Corey: We had one. It got lonely. We wanted to make it a friend. Yeah, I get it.
Jake: Yeah. It’s long distance; it works. And the Met Office is actually very good at running projects. We’ve done many supercomputers over the years, and supercomputing our models, we run some very intense models, and we have more demands. We know we can do better.
We know there’s the observations my group collects, there’s the science that’s continually improving and iterating and getting better, and our limit isn’t poor optimizations or poorly written code. They’re scientists running some fantastic code; we have a team who go and optimize these models, and, you know, in one release they may knock down a model runtime by four minutes. And you think, okay, that’s four minutes, but for example, if that’s four minutes across 400 nodes, all of a sudden you’ve now got 400 nodes that have four minutes more of compute. That could be more research; that could be a different model run. You know, we’re very good at running these things, and we’re fortunate to be very technically capable, to understand the difference between a workload that belongs on AWS and a workload that belongs on a supercomputer.
And you know, a supercomputer has many benefits, which the cloud providers are getting into—you know, we have high-performance clusters on Amazon and Azure, with, you know, InfiniBand networking. But sometimes you really can’t beat a hunking great big ton of metal and super water-cooling sat in a data center somewhere, backed by—we’re very fortunate to have one hundred percent renewable energy for the supercomputer, which, if you look at the power requirements for any supercomputer, is phenomenal, so we’re throwing our credentials behind it for climate change as well. You can’t beat a supercomputer sometimes.
Corey: This episode is sponsored by our friends at Oracle. HeatWave is a new high-performance accelerator for the Oracle MySQL Database Service, although I insist on calling it “my squirrel.” While MySQL has long been the world’s most popular open source database, shifting from transactional to analytics required way too much overhead and, ya know, work. With HeatWave you can run your OLTP and OLAP—don’t ask me to ever say those acronyms again—workloads directly from your MySQL database and eliminate the time-consuming data movement and integration work, while also performing 1100X faster than Amazon Aurora and 2.5X faster than Amazon Redshift, at a third of the cost. My thanks again to Oracle Cloud for sponsoring this ridiculous nonsense.
Corey: I’m somewhat fortunate in that, despite living in a world of web apps these days, my business partner used to work at the Department of Energy at Oak Ridge National Lab, helping with the care and feeding of the supercomputer clusters that they had out there. And you’re absolutely right; that matches my understanding, with the idea that there are certain workloads you’re not going to be able to beat just by having this enormous purpose-built cluster sitting there ready to go. Or even if you can, certainly not economically. I have friends who are in the batch side of the world, the HPC side of the world, over in the AWS organizations, and they keep saying, “Hey, look at this. This thing’s amazing.”
But so much of what they’re talking about seems to distill down to, “I have this one-off giant compute task that needs to get done.” Yes, you’re right. If I need to calculate the weather one time, then okay, I can make an argument for going with cloud, but you’re doing this on what appears to be a pretty consistent basis. You’re not just assuming—as best I can tell—that, “And starting next Wednesday, it will be sunny forever. The end.”
Jake: I’m sure many people would love it if we could do weather on-demand.
Corey: Oh, yes. [unintelligible 00:15:09] going to reserved instance weather. That would be great. Like, “All right. I’d like to schedule some rain, please.” It really seems like it’s one of those areas that is one of the most commonly accepted in science fiction without any real understanding of just what it would take to do something like that. Even understanding and predicting the weather is something that is beyond an awful lot of our current capabilities.
Jake: This is exactly it. So, the Met Office is world-renowned for its research capabilities and those really in-depth, very powerful models that we run. So, I mentioned earlier, something called MOGREPS, which is the Met Office’s ensemble-based models. And what do we mean by ensembles? You may see in the documentation it’s got 18 members.
What does that mean? It means that we actually run a simulation 18 times, and we tweak the starting parameters based on these real-world inputs. And then you have a number of members that iterate through, and the supercomputer runs all of them. And we have deterministic models, which have one set of inputs. And you know, it’s not just, as you say, one time; these models must run.
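The ensemble idea Jake describes can be sketched with a deliberately toy model; nothing below reflects MOGREPS’s actual physics, only the pattern of running the same simulation 18 times from slightly perturbed starting states and reading a probability off the spread:

```python
import random

# Toy ensemble forecast: 18 runs of the same (deliberately simplistic) model,
# each started from a slightly perturbed initial state. Real NWP models are
# vastly more complex; only the ensemble *pattern* is illustrated here.

N_MEMBERS = 18
random.seed(42)  # fixed seed so the sketch is reproducible

def toy_model(initial_humidity: float, steps: int = 24) -> float:
    """Stand-in 'model': a nonlinear update, so small input tweaks grow."""
    h = initial_humidity
    for _ in range(steps):
        h = 3.9 * h * (1.0 - h)  # logistic map: chaotic for r = 3.9
    return h

observed = 0.62  # the 'analysis' each member starts from
members = [toy_model(observed + random.uniform(-0.01, 0.01))
           for _ in range(N_MEMBERS)]

# Probability of 'rain' = fraction of members ending above a threshold.
rainy = sum(1 for h in members if h > 0.5)
print(f"{rainy}/{N_MEMBERS} members predict rain "
      f"-> probability {rainy / N_MEMBERS:.0%}")
```

A deterministic model, by contrast, would be a single call to `toy_model(observed)`: one set of inputs, one answer, and no sense of how confident to be in it.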
There are a number of models we do—models on sea state as well—and they’ve all got to run, so we generally tend to run our supercomputers at top capacity. It’s not often you get to go on a supercomputer and there’ll be some space for your job to execute right this minute. And there’s all the setup as well, so it’s not just okay, the supercomputer is ready to go; there are all the things that go into it, like those observations, whether from the surface or from satellite data passing overhead. We have our own lightning network as well. We have many things, like a radar network that we own and operate. We collaborate with the Environment Agency for rainfall. And all these things feed into these models.
Okay, now we produce a model, and now it’s got to go out. So, it’s got to come off the supercomputer, it’s got to be processed, maybe the grid that we run the models on needs to be reprojected because different people feed maps in different ways. Then it’s got to be cut up, because not every customer wants to know what the weather is everywhere; they’ve got a bit they care about. And of course, these models aren’t small—you know, they can be terabytes—so there’s also the case that customers might not want to download terabytes; that might cost them a lot. They might only be able to process gigabytes an hour.
But then there are other products that we do processing on. So, weather models—it might take 40 minutes to over an hour for a model to run. Okay, that’s great, but you might have missed the first step. Okay, well, we can enrich it with other data that’s come in, things like nowcasting, where we do very short runs for the next six-hour forecast. There’s a whole number of things that run in the office. And we don’t have a choice; they run operationally 24/7, around the clock.
I mentioned to you before we started recording, we had an incident of ‘Beast from the East’ a number of years back. Some of your listeners may remember this; in the UK, we had a front come in from the east and the UK was blanketed with snow. It was a real severe event. We pretty much kept most of our services running. We worked really hard to make sure that they continued working.
And personally I’d say, perhaps when you go shopping on Black Friday, you might go to a retailer and it’s got a queue system up because, you know, it mimics that queue when you’re outside a store, like in Times Square, and it’s raining, and you’re like, oh, I might get a deal in a minute. I think in the Met Office, we have almost the inverse problem. If the weather’s benign, we’re still there. People rely on us to go, “Yeah, okay. I can go out and have fun.” When the weather’s bad, we don’t have a choice. We have to be there—everybody wants us to be there, but we need to be there. It’s not a case of this being an optional service.
Corey: People often forget that yeah, we are living in a world in which, especially with climate change doing what it’s doing, if you get this wrong, people can very easily die. That is not something to take lightly. It’s not just about can I go outside and play a pickup game of basketball today?
Jake: Exactly. So, you know, operationally, we have something called the National Severe Weather Warning Service, where we issue guidance and alerts across the UK, based on severe weather. And there are a number of different weather types that we issue guidance for. And the severity of that goes from yellow to amber to red. And these are manually generated products, so there’s the chief meteorologist who’s on shift, and he approves these.
And these warnings don’t just go out to the members of the public. They go out to the Cabinet Office, they go out to first responders, they go out to a number of people who are interested in the weather and have a responsibility. But the other side is that we don’t issue a weather warning willy-nilly. It’s a measured, calculated decision by our very capable operations team. And once that weather system has passed and the weather story has changed, we’ll review it. We go back and we say, what could we have done differently?
Could the models have predicted this earlier? Could we have new data which would have picked up on this? Some of our next generation products that are in beta, would they have spotted this earlier? There’s a lot of service review that continually goes on because like I said, we are the best, and we need to stay the best. People rely on us.
Corey: So, here’s a question that probably betrays my own ignorance, and that’s okay, that’s what I’m here to do. When I was a kid—first, this is not the era when the world was black and white; I’m a child of the ’80s, let’s be clear here, so this is not old-timey nonsense quite so much—I distinctly remember that it was a running gag how unreliable the weather report always was, and it was a bit hit-or-miss. Like, “Well, the paper says it’s going to be sunny today, but we’re going to pack an umbrella because we know how this works.” It feels, and I could be way off base on this, but it really feels like weather forecasting has gotten significantly more accurate since I was a kid. Is that just nostalgia, and I remember my parents complaining about it, or has there been a qualitative improvement in the accuracy of weather forecasting?
Jake: I wish I could tell you all the scientific improvements that we’ve made, but there’s many groups of scientists in the office who I would more than happily shift that responsibility over to, but quite simply, yes. We have a lot of partners we work with around the world—the National Weather Service, DWD in Germany, Meteo France, just to name but a few; there are many—and we all collaborate with data. We all iterate. You know, the American Meteorological Society holds a conference every year, which we attend. And there have been absolutely leaping changes in forecast quality and accuracy over the years.
And that’s why we continually upgrade our supercomputers. Like I said, yeah, there’s research and stuff, but we’re pulling in all this science, and meteorology is generally a very chaotic system. We’re still discovering many things about how the climate works and how weather systems work. And we’re going to use them to help improve quality of life—early warnings, actually. We can say, oh, in three days’ time, it’s going to be sunny at the beach. It’d be great if you could know that seven days in advance. It would be great if you knew that 14 days in advance.
I mean, we might not do that because, at the moment, we might have an idea, but there’s also the case of understanding, you know, it’s a probability-based decision. And people say, “Oh, it’s not going to rain.” But actually, it’s a case of, well, we said there’s a 20% probability it’s going to rain. That doesn’t mean it’s not going to, but it’s saying, “Two times out of ten, at this time, it’s going to rain.” But of course, if you go out 14 days, that’s a long lead time, and you know, you talk about chaos theory, and the butterfly moves and flaps its wings, and all of a sudden a [cake 00:22:50] changes color from green to pink or something like that, in some other location in the world.
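The chaos-theory point Jake is making can be shown concretely with the classic Lorenz system, the textbook “butterfly effect” model (a standard demonstration, not a Met Office model): two runs whose starting states differ by one part in a million drift completely out of sync, which is exactly why long-lead-time forecasts have to be expressed as probabilities.

```python
# Sensitive dependence on initial conditions, using the Lorenz equations with
# a crude forward-Euler integrator (adequate for a demonstration like this).

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system one Euler step with the classic parameters."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def run_pair(x0a, x0b, steps=3000):
    """Integrate two trajectories side by side, tracking their widest gap in x."""
    sa, sb, max_gap = (x0a, 1.0, 1.05), (x0b, 1.0, 1.05), 0.0
    for _ in range(steps):
        sa, sb = lorenz_step(*sa), lorenz_step(*sb)
        max_gap = max(max_gap, abs(sa[0] - sb[0]))
    return sa, sb, max_gap

# Starting states differ by one part in a million:
sa, sb, max_gap = run_pair(1.0, 1.000001)
print(f"final x: {sa[0]:.2f} vs {sb[0]:.2f}; widest gap seen: {max_gap:.2f}")
```

The tiny initial difference grows exponentially until the two trajectories are effectively unrelated, so past a certain lead time the only honest forecast is a distribution over outcomes, not a single answer.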
These are real systems that have real impacts, so we have to balance the science of pure numbers against what people do with it—and what people can do with it, as well. So, that’s why we talk about having timely data as well. People say, “Well, you could run these simulations and all your products take longer to process and generate,” but, for example, in SurfaceNet, we have five minutes to process an observation once it comes in. We could spend hours fine-tuning that observation to make it perfect, but it needs to be useful.
Corey: As you take a look throughout all of the things that AWS is doing—and sure, not all of these are going to necessarily apply directly to empowering the accuracy of weather forecasts, let’s be clear here—but you have expressed personal interest in, for example, IoT, and a bunch of the serverless nonsense we’re seeing out there. What excites you the most? What has you the most enthusiastic about what the future of the cloud might hold? Because unlike almost everyone else I talk to in this space, you are not selling anything. You don’t have a position—that I’m aware of—of, oh yeah, I super want to see this particular thing win the industry because that means you get to buy a boat.
You work for the Met Office; you know that in some cases, oh, that boat is not going to have a great time in that part of the world anyway. I don’t need one. So, you’re a little bit more objective than most people I have on pushing a corporate story. What excites you? Where do you see the future of this industry going in ways that are neat?
Jake: Different parts of the office will tell you different things, you know. We worked with Google DeepMind on AI and machine learning. We work with many partners on AI and machine learning; we use it internally as well. On a personal level, I like quality-of-life improvements and things that just make my life as a developer fun and interesting. So, CDK was a big thing.
I was a CloudFormation wizard—I still hate writing YAML—but the CDK came along and it was [unintelligible 00:24:52], people wouldn’t say, but it wasn’t, like, you know, when Lambda launched back in, what, 2013? 2014? No, but it made our lives easier. It meant that actually, we didn’t have to worry about, okay, how do we do templating with YAML? Do we have to run some pre-processors or something?
It meant that we could invest a little bit of time upfront on CDK and migrating everything over, and then that freed us up to actually do the things that we need for what we call the business, or the organization—delivering value, you know? It’s great playing with tech, but, you know, I need to deliver value. And I think, what was it, in the Google SRE book, they limit the toil they do—manual tasks that don’t really contribute anything; they’re more like keeping the lights on. Let’s get rid of that. Let’s focus on delivering value.
It’s why Lambda is so great. I could patch an EC2 instance, I could automate it—you know, you’ve got AWS Systems Manager Patch Manager, or… whatever its name is; that can go and manage all those patches for you. But why, when I can do it in a Lambda and I don’t need to worry about it?
Corey: So, one last question that I have for you is that you’re a tech lead. It’s easy for folks to fall into the trap of assuming, “Oh, you’re a government. It’s like an enterprise only bigger, slower, and way, way, way busier.” How many hundreds of thousands of engineers are working at the Met Office along with you?
Jake: So, you can have a look at our public report and you can see the number of staff we have. I think there’s about 1800 staff that work at the Met Office. And that includes our account managers, that includes our scientists, that includes HR and legal. And I’d say there are probably less than 300 people who work in technology, as we call it, which is managing our IT estate, managing our Linux estate, managing our storage area networks—because, funnily enough, managing petabytes of data is not an easy thing—you know, managing a supercomputer, a mainframe.
There really aren’t that many people here at the office, but we do so much great stuff. So, as a technical lead, I’m not just a leader of services; I lead a team of people. I’m responsible for them, for empowering them, and for helping them develop their own careers and their own training. So, it’s me and a team of four that look after SurfaceNet. And it’s not just SurfaceNet; we’ve got other systems we look after that SurfaceNet produces data for—sending messages around the world on the World Meteorological Organization’s Global Telecommunication System. What a mouthful. But you know, these messages go all around the world. And some people might say, “Well, I’d want a huge team for that.” Well, [unintelligible 00:27:27]. We have other teams that help us—I say, help us—in their own right, they transmit that data. But we’re really—I personally wouldn’t say we were huge, but boy, do we pack a punch.
Corey: Can I just say on a personal note, it’s so great to talk to someone who’s focusing on building out these environments and solving these problems for a higher purpose slash calling than—and I will get letters for this—than showing ads to people on the internet. I really want to thank you for taking time out of your day to speak with me. If people want to learn more about what you’re up to, how you do it, potentially consider maybe joining you if they are eligible to work at the Met Office, where can they find you?
Jake: Yeah, so you do have to be a resident in the UK, but www.metoffice.gov.uk is our home on the internet. You can find me on Twitter at @jakehendy, and I could absolutely chew Corey’s ear off for many more hours about many of the wonderful services that the Met Office provides. But I can tell he’s got something more interesting to do. So, uh [crosstalk 00:28:29]—
Corey: Oh, you’d be surprised. It’s loads of fun to—no, it’s always fun to talk to people who are just in different areas that I don’t get to work with very often. It turns out that most of my customers are not focused on telling you what the weather is going to do. And that’s fine; it takes all kinds. It’s just neat to have this conversation with a different area of the industry. Thank you so much for being so generous with your time. I appreciate it.
Jake: Thank you very much for inviting me on. I guess if we get some good feedback, I’ll have to come on and I will have to chew your ear off after all.
Corey: Don’t offer if you’re not serious.
Jake: Oh, I am.
Corey: Jake Hendy, Tech Lead at the Met Office. I’m Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you’ve enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you’ve hated this podcast, please leave a five-star review on your podcast platform of choice along with a comment yelling at one or both of us for having the temerity to rain on your parade.
Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
Announcer: This has been a HumblePod production. Stay humble.