Computing on the Edge with Macrometa’s Chetan Venkatesh

Episode Summary

Chetan Venkatesh, CEO and Co-Founder of Macrometa, joins Corey to discuss the seemingly magical capabilities of edge computing and how Macrometa is flipping cloud computing on its head by focusing on localization rather than centralization. Chetan describes his 20-year journey up the spiral staircase of edge computing, then lays out the three problems with edge today and how Macrometa is working to solve those and other problems, such as the carbon footprint of cloud computing. Chetan also announces an exciting event coming up: Macrometa’s Developer Week.

Episode Show Notes & Transcript

About Chetan

Chetan Venkatesh is a technology startup veteran focused on distributed data, edge computing, and software products for enterprises and developers. He has 20 years of experience in building primary data storage, databases, and data replication products. Chetan holds a dozen patents in the area of distributed computing and data storage.

Chetan is the CEO and Co-Founder of Macrometa – a Global Data Network featuring a Global Data Mesh, Edge Compute, and In-Region Data Protection. Macrometa helps enterprise developers build real-time apps and APIs in minutes – not months.


Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: Forget everything you know about SSH and try Tailscale. Imagine if you didn't need to manage PKI or rotate SSH keys every time someone leaves. That'd be pretty sweet, wouldn't it? With Tailscale SSH, you can do exactly that. Tailscale gives each server and user device a node key to connect to its VPN, and it uses the same node key to authorize and authenticate SSH.

Basically, you're SSHing the same way you manage access to your app. What's the benefit here? Built-in key rotation, permissions as code, connectivity between any two devices, reduced latency, and there's a lot more. There's also a time limit option: you can ask users to reauthenticate for that extra bit of security. Sounds expensive?

Nope, I wish it were. Tailscale is completely free for personal use on up to 20 devices. To learn more, visit Tailscale's site.

Corey: Managing shards. Maintenance windows. Overprovisioning. ElastiCache bills. I know, I know. It's spooky season and you're already shaking. It's time for caching to be simpler. Momento Serverless Cache lets you forget the backend to focus on good code and great user experiences. With true autoscaling and a pay-per-use pricing model, it makes caching easy. No matter your cloud provider, get going for free at gomomento.co/screaming. That's GO M-O-M-E-N-T-O dot co slash screaming.

Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. Today, this promoted guest episode is brought to us basically so I can ask a question that has been eating at me for a little while. That question is, what is the edge? Because I have a lot of cynical sarcastic answers to it, but that doesn’t really help understanding. My guest today is Chetan Venkatesh, CEO and co-founder at Macrometa. Chetan, thank you for joining me.

Chetan: It’s my pleasure, Corey. You’re one of my heroes. I think I’ve told you this before, so I am absolutely delighted to be here.

Corey: Well, thank you. We all need people to sit on the curb and clap as we go by and feel like giant frauds in the process. So let’s start with the easy question that sets up the rest of it. Namely, what is Macrometa, and what puts you in a position to be able to speak at all, let alone authoritatively, on what the edge might be?

Chetan: I’ll answer the second part of your question first, which is, you know, what gives me the authority to even talk about this? Well, for one, I’ve been trying to solve the same problem for 20 years now, which is to build distributed systems that work really fast and can answer questions about data in milliseconds. And my journey’s been sort of a spiral staircase journey, you know: I keep going around in circles, but the view just keeps getting better every time I do one of these things. So I’m on my fourth startup doing distributed data infrastructure, and this time really focused on trying to provide a platform that’s the antithesis of the cloud. It’s kind of like taking the cloud and flipping it on its head, because instead of having a single-region application where all your stuff runs in one place, in us-west-1 or us-east-1, what if your apps could run everywhere? They could run in hundreds and hundreds of cities around the world, much closer to where your users and devices are and, most importantly, where interesting things in the real world are happening.

And so we started Macrometa about five years back to build a new kind of distributed cloud—let’s call it the edge—that kind of looks like a CDN, a Content Delivery Network, but really brings very sophisticated platform-level primitives for developers to build applications in a distributed way: primitives for compute, primitives for data, but also some very interesting things that you just can’t do in the cloud anymore. So that’s Macrometa. And we’re doing something with edge computing, which is a big buzzword these days, but I’m sure you’ll ask me about that.

Corey: It seems to be. Generally speaking, when I look around and companies are talking about edge, it feels almost like it is a redefining of what they already do to use a term that is currently trending and deep in the hype world.

Chetan: Yeah. You know, I think humans, being biologically social beings, just tend to be herd-like, and so when we see a new trend, we like to slap it on everything we have. We did that 15 years back with cloud, if you remember, you know? Everybody was very busy trying to stick the cloud label on everything that was on-prem. Edge is having that same edge-washing moment right now.

But I define the edge very specifically, and it’s very different from the cloud. You know, the cloud is defined by centralization, i.e., you’ve got a giant hyperscale data center somewhere far, far away, where typically electricity, real estate, and those things are reasonably cheap, i.e., not in urban centers, where those things tend to be expensive.

You know, you have platforms where you run things at scale; it’s sort of a ‘your mess for less’ business in the cloud, where somebody else manages that for you. The edge is actually defined by location. And there are three types of edges. The first edge is the CDN edge, which is historically where we’ve been trying to make things faster with the internet and make the internet scale. Akamai came about roughly 20 years back and created this thing called the CDN that allowed the web to scale, and that was the first killer app for the edge, actually. So that’s the first location that defines the edge, where a lot of the peering happens between different network providers and the on-ramp to the cloud happens.

The second edge is the telecom edge. That’s actually right next to you in terms of, you know, logical network topology, because every time you do something on your computer, it goes through that telecom layer. And now we have the ability to actually run web services, applications, and data directly from that telecom layer.

And then the third edge is one people have been familiar with for 30 years: your device—your mobile phone, your internet gateway, the things that you carry around in your pocket or that sit on your desk. You have some compute power there, but it’s very restricted, and it only deals with things that are interesting or important to you as a person, not in a broad range. So those are the three edges. And it’s not the cloud. And these three things are now becoming important as a place for you to build and run enterprise apps.

Corey: Something that I think is often overlooked here—and this is sort of a natural consequence of the cloud’s own success and the joy that we live in a system where companies are required to always grow and expand and find new markets—is visible at, for example, AWS re:Invent, which is a cloud service carnival in the desert that no one in their right mind should ever want to attend but somehow we keep attending. It used to be that, oh, these announcements were generally all aligned with people like me, where I have specific problems and they look a lot like what they’re talking about on stage. And now they’re talking about things that, from that perspective, seem like Looney Tunes. Like, I’m trying to build Twitter for Pets or something close to it, and I don’t understand why there’s so much talk about things like industrial IoT and, “machine learning,” quote-unquote, and other things that just do not seem to align with that. I’m trying to build a web service, like it says in the name of the company; what gives?

And part of that, I think, is that it’s difficult for most of us—especially me—to remember that what they’re coming out with is not your shopping list. Every service is for someone, but not every service is for everyone, so figuring out what it is that they’re talking about and what those workloads look like is something that I think is getting lost in translation. And in our defense—our collective defense—Amazon is not the best at telling stories in a way that helps you realize that, oh, this is not me they’re talking to; I’m going to opt out of this particular thing. You figure it out by getting it wrong first. Does that align with how you see the market going?

Chetan: I think so. You know, I think of Amazon Web Services, or even Google or Azure, as sort of the Costco or, you know, Sam’s Wholesale Club of computing, right? They cater to a very broad audience and they sell a lot of stuff in bulk and cheap. And so it’s sort of a lowest-common-denominator type of model. And so emerging applications, and especially emerging needs that enterprises have, don’t necessarily get solved in the cloud. You’ve got to go and build it up yourself on sort of the crude primitives that they provide.

So okay, go use your bare basic EC2, your S3, and build your own edgy, or whatever, you know, cutting-edge thing you want to build over there. And if enough people are doing it, I’m sure Amazon and Google will start to take an interest and, you know, develop something that makes it easier. So you know, I agree with you; they’re not the best at this sort of thing. The edge is also a phenomenon that’s orthogonal, even diametrically opposite, to the architecture of the cloud and the economics of the cloud.

And we do centralization in the cloud in a big way. Everything is in one place; we make giant piles of data in one database or data warehouse and slice and dice it, and almost all our computer science is great at doing things in a centralized way. But when you take data and chop it into 50 copies and keep it in 50 different places on Earth, with this thing called the internet, the wide area network, in the middle, trying to keep all those copies in sync is a nightmare. So you start to deal with some very basic computer science problems, like distributed state and how to build applications that have a consistent view of that distributed state. There have been attempts to solve these problems for 15, 18 years, but none of those attempts have really cracked the intersection of three things: a way for programmers to do this that doesn’t blow their heads up with complexity; a way to do this cheaply and effectively enough that you can build real-world applications serving billions of users concurrently at a cost point that is actually economical and makes sense; and third, a way to do this with adequate levels of performance, where you don’t die waiting for the spinning wheel on your screen to go away.

So those are the three problems with the edge. And as I said, you know, my team and I have been focused on this for a very long while. My co-founder and I come from this world, and we created a platform uniquely designed to solve these three problems: the complexity for programmers building in a distributed environment like this, where data sits in hundreds of places around the world and you need a consistent view of that data; being able to operate, modify, and replicate that data with consistency guarantees; and third, being able to do that at high levels of performance, which translates to what we call ultra-low latency—latency below the threshold of human perception. That threshold, visually, is about 70 milliseconds. Our finest athletes, the best esports players, are about 70 to 80 milliseconds in their ability to twitch when something happens on the screen. The average human is about 100 to 110 milliseconds.

So in a second, we can maybe do seven things at rapid rates. You know, that’s how fast our brain can process it. Anything that falls below 100 milliseconds—especially if it falls into 50 to 70 milliseconds—appears instantaneous to the human mind and we experience it as magic. And so where edge computing and where my platform comes in is that it literally puts data and applications within 50 milliseconds of 90% of humans and devices on Earth and allows now a whole new set of applications where latency and location and the ability to control those things with really fine-grained capability matters. And we can talk a little more about what those apps are in a bit.
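As a back-of-the-envelope illustration of those numbers (my own sketch, not from the conversation): the speed of light in fiber puts a hard ceiling on how far away a server can sit while still answering inside one of these perception budgets, before the application does any work at all.

```python
# Rough sketch: how far away can a server be and still answer within a
# given round-trip budget, counting only light propagation in fiber?
# Real links add routing, queuing, and processing delay on top of this.
FIBER_KM_PER_S = 200_000  # light in fiber: roughly two-thirds of c

def max_one_way_distance_km(rtt_budget_ms: float) -> float:
    """Upper bound on server distance for a round-trip latency budget."""
    rtt_s = rtt_budget_ms / 1000
    return FIBER_KM_PER_S * rtt_s / 2  # halve it: the answer must come back

# A 50 ms "feels instantaneous" budget caps the server at ~5,000 km away.
print(max_one_way_distance_km(50))   # 5000.0
print(max_one_way_distance_km(100))  # 10000.0
```

Which is why "within 50 milliseconds of 90% of humans" requires many points of presence rather than one big region.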

Corey: And I think that’s probably an interesting place to dive into at the moment, because whenever we talk about the idea of new ways of building things that are aimed at decentralization, people at this point automatically have a bit of an aversion: “Wait, are you talking about some of the Web3 nonsense?” It’s one of those ‘look around the poker table and see if you can spot the sucker’ situations; if you can’t, it’s you. Because there are interesting aspects to that entire market, let’s be clear, but it also seems to be occluded by so much of the grift and nonsense and spam and the rest that, again, sort of characterized the early internet as well. The idea, though, of decentralizing out of the cloud is deeply compelling to anyone who’s ever really had to deal with the egress charges, or even the data transfer charges, inside of one of the cloud providers. The counterpoint is that historically, you either get to pay the tax and go all-in on a cloud provider and get all the higher-level niceties, or you wind up deciding you’re going to have to more or less go back to physical data centers, give or take, and other than the very baseline primitives you get to work with—VMs and block storage and maybe a load balancer—you’re building it all yourself from scratch. It seems like you’re positioning this as setting up a third option. I’d be very interested to hear it.

Chetan: Yeah. And a quick comment on decentralization: good; not so sure about the Web3 pieces around it. We tend to talk about computer science and not the ideology of distributing data. There are political reasons, there are ideological reasons around data and sovereignty and individual human rights, and things like that. There are people far smarter than me who should explain that.

I personally fall into the Nicholas Weaver school of skepticism about Web3 and blockchain and those types of things. And for listeners who are not familiar with Nicholas Weaver, please go look him up online. He teaches at UC Berkeley and is just one of the finest minds of our time. And I think he’s broken down some very good reasons why we should be skeptical about, sort of, Web3 and, you know, things like that. Anyway, that’s a digression.

Coming back to what we’re talking about: yes, it is a new paradigm, but that’s the challenge—I don’t want to introduce a new paradigm. I want to provide a continuum. So what we’ve built is a platform that looks and feels very much like Lambdas plus a poly-model database. I hate the word ‘multi.’ It’s a pretty dumb word, so I’ve started to substitute ‘multi’ with ‘poly’ everywhere I can find it.

So it’s not multi-cloud; it’s poly-cloud. And it’s not multi-model; it’s poly-model. Because what we want is a world where developers have the ability to use the best paradigm for solving problems. And it turns out that when we build applications that deal with data, data doesn’t just come in one form; it comes in many different forms—it’s polymorphic—and so you need a data platform that’s also, you know, polyglot and poly-model to be able to handle that. So that’s one part of the problem: we’re trying to provide a platform that provides continuity by looking like a key-value store like Redis. It looks like a document database—

Corey: Or the best database in the world: Route 53 TXT records. But please, keep going.

Chetan: Well, we’ve got that too, so [laugh] you know? And then we’ve got a streaming graph engine built into it that kind of looks and behaves like a graph database—like Neo4j, for example. And, you know, it’s got columnar capabilities as well. So it’s a really interesting data platform. It’s not open-source; it’s proprietary, because it’s designed to solve these problems of distributing data, putting it in hundreds of locations, and keeping it all in sync, while looking like a conventional NoSQL database. And it speaks PostgreSQL, so if you know PostgreSQL, you can program it, you know, pretty easily.

What it’s also doing is taking away the responsibility for engineers and developers to understand how to deal with very arcane problems like conflict resolution in data. I made a change in Mumbai; you made a change in Tokyo; who wins? Our systems in the cloud—you know, DynamoDB and things like that—have very crude answers for this: something called last-writer-wins. We’ve done a lot of work to build a protocol that brings you ACID-like consistency for these types of problems and makes it easy to reason about state change when you’ve got an application that’s potentially running in 100 locations and each of those places is modifying the same record, for example.
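To make last-writer-wins concrete, here is a minimal sketch of my own (not Macrometa's protocol) showing how a wall-clock tiebreak silently discards one of two concurrent updates:

```python
# My own minimal sketch (not Macrometa's protocol) of last-writer-wins
# conflict resolution, and how it silently drops a concurrent update.
import dataclasses

@dataclasses.dataclass
class Write:
    value: str
    timestamp: float  # wall-clock time at the writing region

def last_writer_wins(a: Write, b: Write) -> Write:
    """Crude resolution: whichever wall-clock timestamp is later wins."""
    return a if a.timestamp >= b.timestamp else b

mumbai = Write(value="set balance to 100", timestamp=1000.0)
tokyo = Write(value="set balance to 250", timestamp=1000.5)

# Tokyo's clock happens to read a hair later, so Mumbai's concurrent
# update vanishes entirely; it is not merged, flagged, or retried.
winner = last_writer_wins(mumbai, tokyo)
print(winner.value)  # set balance to 250
```

A protocol with stronger guarantees has to detect that the two writes were concurrent and merge or order them deliberately, rather than letting clock skew pick the winner.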

And then the second part of it is that it’s a converged platform. So it doesn’t just provide data; it provides a compute layer that’s deeply integrated with the data layer itself. So think of it as Lambdas running like stored procedures inside the database. That’s really what it is. We’ve built a very, very specialized compute engine that exposes containers and functions as stored procedures directly on the database.
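The "functions as stored procedures" framing can be sketched with a toy in-memory data layer. This is a hypothetical illustration of the idea, not Macrometa's actual API: the function is registered with the data layer and runs next to the record it touches, so there is no network round trip between compute and data.

```python
# A hypothetical toy (not Macrometa's actual API) of the "functions as
# stored procedures" idea: handlers are registered with the data layer
# and run next to the records they touch.
class EdgeDB:
    def __init__(self):
        self.records = {}     # the local data partition
        self.procedures = {}  # functions co-located with that data

    def register(self, name, fn):
        """Expose fn as a stored-procedure-like handler."""
        self.procedures[name] = fn

    def call(self, name, key):
        # The procedure receives the local records directly: no fetch
        # over the network, no serialization round trip.
        return self.procedures[name](self.records, key)

def bump_visits(records, key):
    records[key]["visits"] += 1
    return records[key]["visits"]

db = EdgeDB()
db.records["user:1"] = {"visits": 4}
db.register("bump_visits", bump_visits)
print(db.call("bump_visits", "user:1"))  # 5
```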

And so they run inside the context of the database, and you can build apps in Python, Go, your favorite language; it compiles down into a [unintelligible 00:15:02] kernel that actually runs inside the database, among all these different polyglot interfaces that we have. And the third thing that we do is provide the ability for you to have very fine-grained control over your data. Because today, data has become a political tool; it’s become something that nation-states care a lot about.

Corey: Oh, do they ever.

Chetan: Exactly. And [unintelligible 00:15:24] regulated. So here’s the problem. You’re an enterprise architect, your application is going to be consumed in 15 countries, and there are 13 different regulatory frameworks to deal with. What do you do? Well, you spin up 13 different versions, one for each country, and, you know, build 13 different teams, and have 13 zero-day attacks, and all that kind of craziness, right?

Well, data protection is actually one of the most important parts of the edge, because with something like Macrometa, you can build an app once, and we’ll provide all the necessary localization for in-region processing, plus data protection with things like tokenization of data, so you can exfiltrate data securely without violating potentially PII-sensitive data exfiltration laws within countries, things like that. It’s solving some really hard problems by providing an opinionated platform that does these three things. And I’ll summarize it thus, Corey—we can kind of dig into each piece: our platform is called the Global Data Network. It’s not a global database; it’s a global data network. It looks like a frickin’ database, but it’s actually a global network, available in 175 cities around the world.

Corey: The challenge, of course, is where the data actually lives at rest, and there are two reasons people care about that. One is the data residency and locality stuff, which has always, honestly, felt to me like a bit of a cloud provider shakedown—yeah, build a data center here or you don’t get any of the business of anything that falls under our regulation. The other is, what does the egress cost of that look like? Because yeah, I can build a whole multi-center data store on top of AWS, for example, but at minimum, we’re talking two cents a gigabyte of transfer, even within a region in some cases, and many times that externally.

Chetan: Yeah, that’s the real shakedown: the egress costs [laugh] more than the other example you talked about over there. But it’s a reality of how cloud pricing works, and things like that. What we have built is a network that is completely independent of the cloud providers. We’re built on top of five different service providers: some of them are cloud providers, some are telecom providers, some are CDNs.

And so we’re building our global data network on top of routes and capacity provided by transit providers, who have different economics than the cloud providers do. So our cost for egress falls somewhere between two and five cents, for example, depending on which edge locations, which countries, and which things you’re going to use over there. We’ve got a pretty generous egress allowance where, you know, below certain thresholds there’s no egress charge at all, but over those thresholds we start to charge between two and five cents. But even at the higher end of that spectrum—five cents per gigabyte of transfer—the amount of value our platform brings in architecture, in reduction of complexity, and in the ability to build apps is, frankly, mind-boggling. One of my customers is a SaaS company in marketing that uses us to inject offers while people are on their website, you know, browsing. Literally, you hit their website, you do a few things, and then boom, there’s a customized offer for them.

In banking, for example, that’s used where, you know, you’re making your minimum payments on your credit card, but you have a good payment history and you’ve got a decent credit score; well, let’s give you an offer for a short-term loan, for example. So those types of new applications are really at this intersection where you need low latency, you need in-region processing, and you also need to comply with data regulation. And when you’re building a high-value, revenue-generating app like that, the egress cost, even at five cents, tends to be very, very cheap, and the smallest part of, you know, the complexity of building it.

Corey: One of the things that I think we see a lot of is that the tone of this industry is set by the big players, and they have done a reasonable job, by and large, of making anything that isn’t running in their blessed environments—let me be direct—sound kind of shitty: “Oh, do you want to be smart and run things in AWS?”—or GCP, or Azure, I guess—“Or do you want to be foolish and try to build it yourself out of popsicle sticks and twine?” And yeah, on some level, if I’m trying to treat everything like it’s AWS and run a crappy analog version of DynamoDB, for example, I’m not going to have a great experience. But if I start from a perspective of not using the higher-up-the-stack offerings, that experience starts to look a lot more reasonable as we expand outward. It still presents to a lot of us as, well, we’re just going to run things in VMs somewhere and treat them just like we did back in 2005. What’s changed in that perspective?

Chetan: Yeah, you know, I can’t speak for others, but we provide a high-level Platform-as-a-Service, and that platform, the global data network, has three pieces to it. And none of this will translate into anything that AWS or GCP has, because this is the edge, Corey; it’s completely different, right? So the global data network that we have is composed of three technology components. The first one is something that we call the global data mesh. And this is Pub/Sub and event processing on steroids. We have the ability to connect data sources across all kinds of boundaries: you’ve got some data in Germany and you’ve got some data in New York. How do you put these things together and get them streaming so that you can start to do interesting things, like correlating this data, for example?

And you might have to get across not just physical boundaries—they’re sitting in different systems in different data centers—but logical boundaries too: hey, I need to collaborate with data from my supply chain partner and we need to be able to do something dynamic, in real time, you know, to solve a business problem. So the global data mesh is a way to very quickly connect data wherever it might be—in legacy systems, in flat files, in streaming databases, in data warehouses, what have you; we have 500-plus types of connectors—but most importantly, it’s not just getting the data streaming; it’s then turning it into an API and making that data fungible. Because the minute you put an API on it and it becomes fungible, that data actually has a lot of value. And so the data mesh is a way to very quickly connect things up and put an API on them. And that API can now be consumed by front-ends, by other microservices, things like that.
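As a toy illustration of that pattern (my own sketch, not Macrometa's connector API): merge events from two regions onto one subscription, then expose the merged view behind a single function—i.e., "put an API on the stream."

```python
# An illustrative toy (not Macrometa's API): connect two event sources,
# merge them onto one stream, and expose the merged view as a function
# that a front-end or another microservice could call.
from collections import defaultdict

class DataMesh:
    def __init__(self):
        self.streams = defaultdict(list)      # topic -> event log
        self.subscribers = defaultdict(list)  # topic -> handlers

    def publish(self, topic: str, event: dict) -> None:
        self.streams[topic].append(event)
        for handler in self.subscribers[topic]:
            handler(event)

    def subscribe(self, topic: str, handler) -> None:
        self.subscribers[topic].append(handler)

mesh = DataMesh()
merged = []  # a correlated view over sources in two regions
mesh.subscribe("orders.germany", merged.append)
mesh.subscribe("orders.newyork", merged.append)

mesh.publish("orders.germany", {"sku": "A1", "qty": 2})
mesh.publish("orders.newyork", {"sku": "B7", "qty": 1})

def orders_api() -> list:  # the "API on the data" a consumer would hit
    return merged

print(len(orders_api()))  # 2
```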

Which brings me to the second piece, which is edge compute. So we’ve built a compute runtime that is Docker-compatible, so it runs containers; it’s also Lambda-compatible, so it runs functions. Let me rephrase that: it’s not Lambda-compatible, it’s Lambda-like. So no, you can’t just take your Lambda and dump it on us and expect it to work. You have to do some things to make it work on us.

Corey: But so many of those things are so deeply integrated into the ecosystem that they’re operating within, and—

Chetan: Yeah.

Corey: That, on the one hand, is presented by cloud providers as, “Oh, yes. This shows how wonderful these things are.” In practice, talk to customers. “Yeah, we’re using it as spackle between the different cloud services that don’t talk to one another despite being made by the same company.”

Chetan: [laugh] right.

Corey: It’s fun.

Chetan: Yeah. So the second piece, edge compute, allows you to build microservices that are stateful, i.e., they have data that they interact with locally, and to schedule them, along with that data, across our network of 175 regions around the world. So you can build distributed applications now.

Now, your microservice back-end for your banking application, or your HR SaaS application, or your e-commerce application is not running in us-east-1 in Virginia; it’s running literally in the 15, 18, 25 cities where your end-users potentially are. And to take an industrial IoT case, for example, you might be ingesting data from the electricity grid in 15, 18 different cities around the world; you can do all of that locally now. So that’s what edge functions do: they flip the cloud model around, because instead of sending data to where the compute is in the cloud, you’re actually bringing compute to where the data is originating, or where the data is being consumed, such as through a mobile app. So that’s the second piece.

And the third piece is global data protection, which is: hey, now I’ve got a distributed infrastructure; how do I comply with all the different privacy and regulatory frameworks out there? How do I keep data secure in each region? How do I potentially share data between regions in such a way that, you know, I don’t break the model of compliance globally and create a billion-dollar headache for my CIO, CEO, and CFO? So that’s the third piece of capability this provides.

All of this is presented as a set of serverless APIs. So you simply plug these APIs into your existing applications. Some of your applications work great in the cloud; maybe there are just parts of that app that should be on our edge. And that’s usually where most customers start: they take a single web service or two that’s not doing so great in the cloud—because it’s too far away, or it has data sensitivity, location sensitivity, time sensitivity—and they use us as a way to deal with just that on the edge.

And there are other applications that are completely what I call edge-native, i.e., they have no dependency on the cloud; they run completely distributed across our network, consume primarily the edge’s infrastructure, and maybe just send some data back to the cloud for long-term storage or long-term analytics.

Corey: And ingest does remain free. The long-term analytics, of course, means that once that data is there, good luck convincing a customer to move it because that gets really expensive.

Chetan: Exactly, exactly. It’s a speciation—as I like to say—of the cloud into a fast tier where interactions happen, i.e., the edge. So systems of record are still in the cloud; we still have our transactional systems over there, our databases, data warehouses.

And those are great for historical types of data, as you just mentioned. But things that are operational and interactive in nature—where you really need to deal with them because they’re time-sensitive, depleting in value within seconds or milliseconds; where they’re location-sensitive; where there’s a lot of noise in the data and you need to get to just the bits that actually matter and throw the rest away, which is what you do with a lot of telemetry in cybersecurity, for example—those all require a new kind of platform: not a system of record, but a system of interaction. And that’s what the global data network, the GDN, is. And these three primitives—the data mesh, edge compute, and data protection—are the way our APIs are shaped to help our enterprise customers solve these problems. To put it another way: imagine, ten years from now, what DynamoDB with global tables, a really fast Lambda, and Kinesis with event processing actually built directly into it might be like. That’s Macrometa today, available in 175 cities.

Corey: This episode is brought to us in part by our friends at Datadog. Datadog is a SaaS monitoring and security platform that enables full-stack observability for modern infrastructure and applications at every scale. Datadog enables teams to see everything: dashboarding, alerting, application performance monitoring, infrastructure monitoring, UX monitoring, security monitoring, dog logos, and log management, in one tightly integrated platform. With 600-plus out-of-the-box integrations with technologies including all major cloud providers, databases, and web servers, Datadog allows you to aggregate all your data into one platform for seamless correlation, allowing teams to troubleshoot and collaborate together in one place, preventing downtime and enhancing performance and reliability. Get started with a free 14-day trial by visiting, and get a free t-shirt after installing the agent.

Corey: I think it’s also worth pointing out that it’s easy for me to fall into a trap that I wonder if some of our listeners do as well, which is that I live in, basically, downtown San Francisco. I have gigabit internet connectivity here, to the point where when it goes out, it is suspicious and more than a little frightening, because my ISP—Sonic—is amazing and deserves every bit of the praise that you never hear any ISP ever get. But when I travel, it’s a very different experience. When I go to, oh, I don’t know, the conference center at re:Invent last year and find that the internet is patchy at best, or try downtown San Francisco on Verizon today and discover that the internet is almost non-existent, suddenly applications that I had grown accustomed to just working, don’t.

And there are a lot more people who live far away from these data center regions and tier-one backbones than people who live right on top of them. So I think there are a lot of mistaken ideas around exactly what the lower-bandwidth experience of the internet is today. And that is something that feels inadvertently classist, if that makes sense. Geographically bigoted, even?

Chetan: Yeah. No, I think those two points are very well articulated. I wish I could articulate it that well. But yes, if you can afford 5G, some of those things get better. But again, 5G is not everywhere yet. It will be, but 5G can in many ways democratize at least one part of it, which is to provide an overlay network at the edge where, if you left home and switched networks onto wireless, you could still get the same quality of service you’re used to getting from Sonic, for example. So I think it can solve some of those things in the future. But the second part of it—what did you call it? Bigoted?

Corey: Geographically bigoted. And again, that’s maybe a bit of a strong term, but it’s easy to forget that you can’t get around the speed of light. I would say that the most poignant example of that I had was when I was—in the before times—giving a keynote in Australia. So ah, I know what I’ll do, I’ll spin up an EC2 instance for development purposes—because that’s how I do my development—in Australia. And then I would just pay my provider for cellular access for my iPad and that was great.

And I found the internet was slow as molasses for everything I did. Like, how do people even live here? Well, it turns out that my provider would backhaul traffic to the United States. So to log into my session, I would wind up connecting to a local provider, backhauling to the US, then connecting back out from there to Australia across the entire Pacific Ocean, talking to the server, and the response would follow that same path in reverse. Yeah, it turns out that doing laps around the world is not the most efficient way of transferring any data whatsoever, let alone in sizable amounts.
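The speed-of-light arithmetic behind that round trip is easy to sketch. The distances and fiber speed below are rough illustrative figures, not measured values from this anecdote:

```python
# Back-of-the-envelope: why backhauling Sydney -> US -> Sydney hurts.
# Light in optical fiber travels at roughly 2/3 of c, about 200 km per ms.
SPEED_IN_FIBER_KM_PER_MS = 200

def min_rtt_ms(one_way_km: float) -> float:
    """Physical lower bound on round-trip time for a given one-way path."""
    return 2 * one_way_km / SPEED_IN_FIBER_KM_PER_MS

# Talking to a server in the same metro area (~100 km of fiber):
local = min_rtt_ms(100)

# Backhauled path: Sydney to the US West Coast (~12,000 km) and back
# again before the request even reaches a server in Australia.
backhauled = min_rtt_ms(12_000 + 12_000)

print(f"local floor: {local:.0f} ms, backhauled floor: {backhauled:.0f} ms")
```

Even before queueing, routing, and TLS handshakes, the backhauled path starts hundreds of milliseconds behind the local one, and real-world latency only grows from that floor.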

Chetan: And that’s why we decided to call our platform the Global Data Network, Corey. In fact, it’s built around a very simple idea: we have our own network underneath all of this, and we stop this whole ping-pong effect of data going around and help create deterministic guarantees around latency, location, and performance. We’re trying to democratize latency and these types of problems so that programmers shouldn’t have to worry about all this stuff. You write your code, you push publish, it runs on the network, and it all gets there with a guarantee that 95% of all your requests will complete within 50 milliseconds round-trip time, from any device, you know, in these population centers around the world.
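A "p95 under 50 ms" guarantee like that can be checked against measured round-trip times. Here is a minimal sketch using synthetic samples; the numbers are invented for illustration, not Macrometa measurements:

```python
import random

def p95(samples: list[float]) -> float:
    """95th-percentile value of a list of latency samples (nearest-rank)."""
    ordered = sorted(samples)
    return ordered[int(0.95 * (len(ordered) - 1))]

# Synthetic RTTs: mostly fast, with a small tail of slow outliers.
random.seed(42)
rtts = [random.uniform(5, 45) for _ in range(1000)] + [80.0] * 10

slo_ms = 50.0
print(f"p95 = {p95(rtts):.1f} ms, meets SLO: {p95(rtts) <= slo_ms}")
```

The point of a percentile target rather than an average is visible here: the handful of 80 ms outliers barely moves the p95, but they would noticeably drag up a mean.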

So yeah, it’s a big deal. It’s one of the je ne sais quoi pieces of our mission and charter: to democratize latency and access, and get away from this geographical nonsense of, you know, how networks work today, where they dynamically switch topology and just make everything slow in a very non-deterministic way.

Corey: One last topic that I want to ask you about—because I’m near-certain, given your position, you’ll have an opinion on this—what’s your take on, I guess, the carbon footprint of clouds these days? Because a lot of people have been talking about it; there has been a lot of noise made about it, justifiably so. I’m curious to get your take.

Chetan: Yeah, you know, it feels like we’re in the ’30s and ’40s of the carbon movement when it comes to clouds today, right? Maybe there’s some early awareness of the problem, but, frankly, there’s very little we can do other than put a wet finger in the air, compute some carbon offset, and plant some trees. I think those are good building blocks; they’re not necessarily the best ways to solve this problem, ultimately. But one of the things I care deeply about, and my company cares a lot about, is helping make developers more aware of what kind of carbon footprint their code tangibly has on the environment. And so we’ve started two things inside the company. We’ve started a foundation that we call the Carbon Conscious Computing Consortium—the four C’s. We’re going to announce that publicly next year, and we’re going to invite folks to come join us and be a part of it.

The second thing that we’re doing is we’re building a completely open-source, carbon-conscious computing platform that is built on real data that we’re collecting about, to start with, how Macrometa’s platform emits carbon in response to different types of things you build on it. So for example, you wrote a query that hits our database and queries, you know, I don’t know, 20 billion objects inside of our database. It’ll tell you exactly how many micrograms or how many milligrams of carbon—it’s an estimate; not exactly. I got to learn to throttle myself down. It’s an estimate, you know, you can’t really measure these things exactly because the cost of carbon is different in different places, you know, there are different technologies, et cetera.

It gives you a decent estimate—something that reliably tells you, “Hey, you know that query you have over there, that piece of SQL? That’s probably going to emit this many micrograms of carbon at this scale.” You know, if this query were called a million times every hour, this is how much it costs; a million times a day, this is how much it costs, and so on. But the most important thing I feel passionate about is that when we give developers visibility, they do good things.
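The scaling math here is just multiplication over a per-operation estimate. A toy version might look like this, where the emission factor is a made-up placeholder, not a real Macrometa figure:

```python
# Hypothetical per-operation carbon estimator in the spirit described above.
UG_CO2_PER_QUERY = 0.8  # assumed average micrograms of CO2 per query

def carbon_ug(calls: int, factor_ug: float = UG_CO2_PER_QUERY) -> float:
    """Estimated micrograms of CO2 emitted by `calls` invocations."""
    return calls * factor_ug

per_hour = carbon_ug(1_000_000)        # a million calls in an hour
per_day = carbon_ug(1_000_000 * 24)    # the same call rate over a day

print(f"{per_hour / 1e6:.1f} g/hour, {per_day / 1e6:.1f} g/day")
```

In practice, as Chetan notes, the factor itself varies by region and technology, so a real tool would look it up per location rather than hard-code a constant.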

I mean, when we give them good debugging tools, the code gets better, the code gets faster, the code gets more efficient. And Corey, you’re in the business of helping people save money, when we give them good visibility into how much their code costs to run, they make the code more efficient. So we’re doing the same thing with carbon, we know there’s a cost to run your code, whether it’s a function, a container, a query, what have you, every operation has a carbon cost. And we’re on a mission to measure that and provide accurate tooling directly in our platform so that along with your debug lines, right, where you’ve got all these print statements that are spitting up stuff about what’s happening there, we can also print out, you know, what did it cost in carbon.

And you can set budgets. You can basically say, “Hey, I want my application to consume this much carbon.” And down the road, we’ll have AI and ML models that will help optimize your code to fit within those carbon budgets. Don’t get me wrong, I love planting trees, but we live in California and those trees get burned down.

And I was reading this heartbreaking story about how we returned back into the atmosphere a giant amount of carbon because the forest reserve that had been planted, you know, that was capturing carbon, you know, essentially got burned down in a forest fire. So, you know, we’re trying to just basically say, let’s try and reduce the amount of carbon, you know, that we can potentially create by having better tooling.
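The carbon-budget idea Chetan describes could be sketched like this; the function name and figures are hypothetical, not a real Macrometa API:

```python
# Sketch of an application-level carbon budget check.
def max_calls_within_budget(budget_ug: float, per_call_ug: float) -> int:
    """How many invocations fit inside a carbon budget, given a per-call estimate."""
    return int(budget_ug // per_call_ug)

budget_ug = 5_000_000.0   # app budget: 5 grams of CO2
per_query_ug = 0.5        # assumed estimated cost of one query, in micrograms

allowed = max_calls_within_budget(budget_ug, per_query_ug)
print(f"budget allows about {allowed:,} queries")
```

A real system would presumably enforce this at runtime, the same way cost or rate limits are enforced today, rather than only reporting it after the fact.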

Corey: That would be amazing, and I think it also requires something that acts almost as an exchange—a centralized voice that can make sure that, one, the provider is being honest, and two, you’re doing an apples-to-apples comparison and not just discounting a whole lot of negative externalities. Because, yes, we’re talking about carbon released into the environment—okay, great—but what about the water effects where your data centers are located? That can have significant climate impact as well. It’s about trying to avoid the picking and choosing. It’s a hard, hard problem, but I’m unconvinced that there’s anything more critical in the entire ecosystem right now to worry about.

Chetan: So as a startup, we care very deeply about starting with the carbon part. And I agree, Corey, it’s a multi-dimensional problem; there’s lots of tentacles. The hydrocarbon industry goes very deeply into all parts of our lives. I’m a startup, what do I know? I can’t solve all of those things, but I wanted to start with the philosophy that if we provide developers with the right tooling, they’ll have the right incentives then to write better code. And as we open-source more of what we learn and, you know, our tooling, others will do the same. And I think in ten years, we might have better answers. But someone’s got to start somewhere, and this is where we’d like to start.

Corey: I really want to thank you for taking as much time as you have for going through what you’re up to and how you view the world. If people want to learn more, where’s the best place to find you?

Chetan: Yes, so two things on that front. Go to macrometa.com—that’s M-A-C-R-O-M-E-T-A dot com—and that’s our website. You can come and experience the full power of the platform. We’ve got a playground where you can open an account and build anything you want for free, and you can try things out and learn. You just can’t run it in production, because we’ve got a giant network, as I said, of 175 cities around the world. But there are tiers available for you to purchase to build and run apps. I think about 80 different customers—some of the biggest ones in the world, some of the biggest telecom customers, retail and e-tail customers, [unintelligible 00:34:28] tiny startups—are building some interesting things on it.

And the second thing I want to talk about is that November 7th through 11th of 2022—just a couple of weeks out, or maybe, by the time this recording comes out, a week from now—is Developer Week at Macrometa. We’re going to be announcing some really interesting new capabilities: new features like real-time complex event processing with ultra-low latency, data connectors, and a search feature that lets you build search directly into your applications without needing to spin up a giant Elastic Cloud search cluster—providing search locally and regionally, so that you can have search running in 25 cities, each instant to search, rather than sending all your search requests back to one location. There’s all kinds of very cool things happening over there.

And we’re also announcing a partnership with the original—the OG of the edge—one of the largest, most impressive, most interesting CDN players, which has become a partner for us as well. And then we’re also announcing some very interesting experimental work where you, as a developer, can build apps directly on the 5G telecom cloud as well. And then you’ll hear from some interesting companies that are building apps that are edge-native—that are impossible to build in the cloud because they take advantage of these three things that we talked about: geography, latency, and data protection—in some very, very powerful ways. So you’ll hear actual customer case studies from real customers in the flesh, not anonymous BS, no marchitecture. It’s a week of technical talks by developers, for developers. So, you know, come join the fun, let’s learn all about the edge together, and let’s go build something together that’s impossible to do today.

Corey: And we will, of course, put links to that in the [show notes 00:36:06]. Thank you so much for being so generous with your time. I appreciate it.

Chetan: My pleasure, Corey. Like I said, you’re one of my heroes. I’ve always loved your work. The Snark-as-a-Service is a trillion-dollar market cap company. If you’re ever interested in taking that public, I know some investors that I’d happily put you in touch with. But—

Corey: Sadly, so many of those investors lack senses of humor.

Chetan: [laugh]. That is true. That is true [laugh].

Corey: [laugh]. [sigh].

Chetan: Well, thank you. Thanks again for having me.

Corey: Thank you. Chetan Venkatesh, CEO and co-founder at Macrometa. I’m Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you’ve enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you’ve hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry and insulting comment about why we should build everything on the cloud provider that you work for, and then attempt to challenge Chetan for the title of Edgelord.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit to get started.

Announcer: This has been a HumblePod production. Stay humble.