---
title: "S3 Is Not a Filesystem (But Now There’s One In Front of It)"
id: "15301"
type: "post"
slug: "s3-is-not-a-filesystem-but-now-theres-one-in-front-of-it"
published_at: "2026-04-07T19:38:51+00:00"
modified_at: "2026-04-07T21:42:46+00:00"
url: "https://www.lastweekinaws.com/blog/s3-is-not-a-filesystem-but-now-theres-one-in-front-of-it/"
markdown_url: "https://www.lastweekinaws.com/blog/s3-is-not-a-filesystem-but-now-theres-one-in-front-of-it.md"
excerpt: "I’ve been saying “S3 is not a filesystem” for over a decade. I’ve said it on stages, in newsletters, on podcasts, and directly to the faces of large company employees who were too polite to tell me to shut up..."
taxonomy_category:
  - "Uncategorized"
---

I’ve been saying “S3 is not a filesystem” for over a decade. I’ve said it on stages, in newsletters, on podcasts, and directly to the faces of large company employees who were too polite to tell me to shut up before they went back to their FUSE monstrosities. It was one of those reliable truths you could build a career on, like “NAT Gateways are a crime” or “nobody reads the Well-Architected Framework for fun.”

Today, AWS made me a liar. Sort of.

Today’s [launch of S3 Files](https://aws.amazon.com/blogs/aws/launching-s3-files-making-s3-buckets-accessible-as-file-systems) lets you mount an S3 bucket as a shared NFS filesystem—NFS 4.1 and 4.2, specifically—on EC2, Lambda, EKS, and ECS. A `mount` command and suddenly your applications, your teams, and yes, your agents can access S3 data as if it were local files. After twenty years, S3 has stopped pretending to be everything and started actually being everything: objects, files, tables, vectors, and HPC with Express. It is, of course, also a database, but I’ll fight that battle another day.

## What They Actually Built

They didn’t just bolt a POSIX layer on top of S3 and call it a day. That’s been tried, badly. That’s what [s3fs-fuse](https://github.com/s3fs-fuse/s3fs-fuse) was. That’s what [goofys](https://github.com/kahing/goofys) was. That’s what Amazon’s own [Mountpoint for Amazon S3](https://github.com/awslabs/mountpoint-s3) (motto: “you know it’s good because we put it on GitHub”) was. Every single one of those was the engineering equivalent of duct-taping a saddle onto a fish and calling it a horse.

Andy Warfield’s team went a different direction: instead of forcing files and objects to behave identically (which makes everyone miserable, as anyone who’s tried will confirm over drinks), they built a system where each works the way it’s supposed to, with automatic syncing between them. Your authoritative data stays in your S3 bucket. The filesystem maintains a view of your objects and translates filesystem operations into efficient S3 requests. Writes go through the filesystem and sync back to S3.

S3 still isn’t a filesystem. But your S3 data can now be *used with* a filesystem. That distinction matters, because the pricing tells a very specific story: what they built is less “S3 learned to be a filesystem” and more “EFS, but backstopped by S3.”

## The Pricing (Where It Gets Interesting)

This is where I started paying attention, because AWS pricing is where dreams go to get itemized.

S3 Files has two cost dimensions: file system storage (GB-month) and data access charges. The rates: $0.30/GB-month for high-performance storage, $0.03/GB for reads, $0.06/GB for writes. If those numbers look familiar, they should—they’re EFS Performance-optimized Standard pricing. It’s built on EFS. The rates are the same because the infrastructure is the same.

The neat part: you can mount a petabyte bucket and only pay those rates on the terabyte or two you actually touch. Everything else stays at standard S3 rates, doing absolutely nothing, costing you $0.023/GB-month, blissfully unaware it’s part of a filesystem now.
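To see what that means in dollars, here’s a back-of-the-envelope sketch using the rates quoted in this post ($0.023/GB-month S3 Standard, $0.30/GB-month for the high-performance layer); actual rates vary by region, and the function name is mine, not an AWS API:

```python
# Back-of-the-envelope: a 1 PB bucket mounted via S3 Files where only a
# 2 TB hot slice ever lands on the high-performance storage.
S3_STANDARD_GB_MO = 0.023   # $/GB-month, S3 Standard (rate quoted in the post)
FILES_STORAGE_GB_MO = 0.30  # $/GB-month, S3 Files high-performance storage

def monthly_storage_cost(total_gb: float, hot_gb: float) -> float:
    """Whole bucket bills at S3 rates; only the hot slice adds the Files rate."""
    return total_gb * S3_STANDARD_GB_MO + hot_gb * FILES_STORAGE_GB_MO

PB = 1024 * 1024  # GB
TB = 1024         # GB

bucket_only = monthly_storage_cost(PB, 0)          # the petabyte doing nothing
with_hot = monthly_storage_cost(PB, 2 * TB)        # plus a 2 TB working set
print(f"bucket: ${bucket_only:,.0f}/mo, Files surcharge: ${with_hot - bucket_only:,.0f}/mo")
```

The surcharge on the hot slice (~$614/month for 2 TB) is a rounding error next to the ~$24,000/month the petabyte itself costs at Standard rates.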

How they pull this off: you set a file size threshold (defaults to 128 KB). Files smaller than that get loaded onto the high-performance storage when accessed, because small-file latency is where filesystems actually matter vis-à-vis object stores. Reads of 128 KB or larger stream directly from S3 even if the data is already on the fast storage—no S3 Files charge at all. An expiration window (1 to 365 days, defaulting to 30) evicts untouched data from the fast tier automatically.
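The routing rule can be sketched in a few lines (the function and its return strings are mine, for illustration, not anything AWS exposes):

```python
KB = 1024
DEFAULT_THRESHOLD = 128 * KB  # configurable per filesystem; 128 KB is the default

def read_path(file_size_bytes: int, threshold: int = DEFAULT_THRESHOLD) -> str:
    """Where a read is served from, per the threshold rule described in the post."""
    if file_size_bytes < threshold:
        # Small files get imported onto the high-performance storage on access,
        # which is where S3 Files data charges apply.
        return "fast-tier"
    # 128 KB and up streams straight from S3, with no S3 Files data charge.
    return "direct-from-s3"

print(read_path(4 * KB))    # small config file
print(read_path(512 * KB))  # Parquet chunk
```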

The gotcha is in the metering: every data access operation has a **32 KB minimum**. Read a 1-byte file? Metered for 32 KB. Write a 4-byte config update? 32 KB. Metadata operations—listing a directory, checking file attributes, creating or deleting files—cost 4 KB metered as a read. A commit (`fsync` or close-after-write) is 4 KB metered as a write. Everything rounds up to the next 1 KB boundary.

If your workload is millions of tiny metadata-heavy operations—and a lot of ML training checkpointing and agentic workflows are exactly that—those minimums add up. `ls` on a directory with 10,000 files? That’s 10,000 metadata reads at 4 KB each, and if it triggers prefetch, 10,000 writes at 32 KB minimum each. Do that math before you mount anything.
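Here’s that math as code, applying the metering rules above (32 KB data minimum, 4 KB metadata ops, round up to the next 1 KB); this is my sketch of the billing arithmetic, not an AWS calculator:

```python
KB = 1024
GB = 1024 ** 3
READ_RATE, WRITE_RATE = 0.03, 0.06  # $/GB, rates quoted in the post

def metered_bytes(actual_bytes: int, minimum: int = 32 * KB) -> int:
    """Round up to the next 1 KB boundary, then apply the per-op minimum."""
    rounded = -(-actual_bytes // KB) * KB  # ceiling division to 1 KB
    return max(rounded, minimum)

assert metered_bytes(1) == 32 * KB            # 1-byte read meters as 32 KB
assert metered_bytes(33 * KB + 1) == 34 * KB  # rounds up past the minimum

# `ls` on a directory of 10,000 files: 10,000 metadata reads at 4 KB each...
meta_read = 10_000 * 4 * KB
# ...and, if it triggers prefetch, 10,000 writes at the 32 KB minimum each.
prefetch = 10_000 * metered_bytes(0)
print(f"metadata: {meta_read / GB:.3f} GB metered as reads")
print(f"prefetch: {prefetch / GB:.3f} GB metered as writes")
```

One listing is fractions of a cent; the minimums only bite when a workload does this millions of times an hour, which is exactly what checkpoint-heavy training loops do.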

Sync operations cost you too: importing onto the fast storage is metered as writes, exporting changes back to S3 is metered as reads. Rename a file? S3 PUT plus a filesystem read (32 KB minimum). Rename a *directory*? Metered for every single object with that prefix. Moving a folder with 50,000 files is 50,000 individual operations.

One pricing nuance that isn’t obvious from the pricing page: the first time you read a small file, it gets imported onto the fast storage and you pay the $0.06/GB import write charge. The read itself is included in that operation—you’re not paying $0.06 to place it plus $0.03 to read it. So first-read cost for small files is $0.06/GB (double the headline read rate), and subsequent reads of the same cached file are $0.03/GB. AWS’s own pricing example is a bit misleading on this; they’ve told me they’re clarifying the page. Your Parquet files? Still free via S3 GET.
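As arithmetic, the nuance looks like this (my formulation of the rates described above; cached re-reads assume the file hasn’t been evicted by the expiration window):

```python
IMPORT_WRITE, READ = 0.06, 0.03  # $/GB, rates quoted in the post

def small_file_read_cost_per_gb(reads: int) -> float:
    """Cost per GB to read the same small file `reads` times while cached.

    The first read is the import write, which includes the read itself --
    you are not billed the import plus a separate read.
    """
    if reads == 0:
        return 0.0
    return IMPORT_WRITE + (reads - 1) * READ

assert round(small_file_read_cost_per_gb(1), 2) == 0.06  # not 0.09
assert round(small_file_read_cost_per_gb(2), 2) == 0.09
```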

The pricing is reasonable—you’re charged proportional to what you’re actually using the filesystem for, not for the privilege of having mounted the bucket. But between the 32 KB minimums and the first-read import cost, model your workload’s actual I/O patterns before committing. To be clear, that’s not a criticism so much as the cost of a filesystem that tries to cheat physics.

### How It Stacks Up

Everyone’s going to compare this to EFS. Let’s do the math.

S3 Files isn’t a storage tier so much as it is a surcharge. Your data lives in a normal S3 bucket at normal S3 prices. The S3 Files cost is *on top of that*, only for the small hot slice on the high-performance filesystem layer. EFS charges you for every byte whether you touched it this month or not.

The underlying bucket doesn’t have to be S3 Standard, either. Intelligent-Tiering works. Infrequent Access works. The only things S3 Files *won’t* touch are Glacier Flexible Retrieval, Glacier Deep Archive, and the IT archive tiers (those need an S3 API restore first, fair enough). So your base layer can be Intelligent-Tiering at ~$0.0125/GB-month for data untouched in 90 days, and S3 Files only charges its surcharge on the tiny fraction you’re actively working with.

EFS has its own tiering story now, though—two of them, actually, because AWS can never resist having two pricing models where one would do.

**EFS Legacy mode** (bursting/provisioned throughput): Standard at $0.30/GB, IA at $0.025/GB, no Archive tier. Standard reads and writes are included in your throughput—no per-GB access charges. IA reads cost $0.01/GB. If you need more throughput than burst baseline, you pay $6/MB/s-month provisioned. This is the EFS most people remember.

**EFS Performance-optimized mode** (the new default): Standard still $0.30/GB, but IA drops to $0.016/GB and you get an Archive tier at $0.008/GB. The trade-off: now *every* read costs $0.03/GB and *every* write costs $0.06/GB, even on Standard. IA adds another $0.01/GB on reads, Archive adds another $0.03/GB. That Archive storage rate is cheaper than S3 IT infrequent (~$0.0125/GB), but you’re paying $0.06/GB to read from it.

Both modes charge tiering penalties when data moves between classes. S3 Intelligent-Tiering tiers for free—always has. That carries over to S3 Files.

| Scenario | EFS Legacy + IA | EFS Perf-Optimized + Archive | S3 IT + S3 Files |
|---|---|---|---|
| 10 TB, 90% cold storage | ~$333/mo ($0.025 IA) | ~$108/mo ($0.008 Archive) | $145/mo ($0.0125 IT infrequent) |
| 500 GB hot working set | $150/mo ($0.30 Std) | $150/mo ($0.30 Std) | $12/mo (S3 IT) + Files surcharge on sub-128 KB fraction |
| Read 500 GB/mo (90% large, 10% small) | ~$5 (IA reads only) | $15–$30 ($0.03 base + tier surcharges) | large: FREE + small: $3 ($0.06/GB first read) |
| Write 100 GB/mo | free (throughput-included) | $6 ($0.06/GB) | $6 ($0.06/GB via Files) |
| Tiering penalty | $0.01/GB in and out of IA | $0.01–$0.03/GB per tier transition | free |
| Throughput ceiling | burst baseline or $6/MB/s provisioned | elastic, pay-per-byte | S3 throughput (effectively unlimited) |

The savings aren’t in the rate card—those match EFS Perf-optimized exactly. The savings are in the design. EFS Archive wins on cold storage ($0.008/GB vs. S3 IT’s ~$0.0125/GB), but reading data back from Archive costs $0.06/GB. S3 Files reads anything 128 KB or larger for free, straight from S3. Reading 450 GB of large files in a month: $0 via S3 Files, $13.50 via EFS Perf-optimized. And S3 IT tiers for free where EFS charges $0.01–$0.03/GB every time data moves between classes.
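The 450 GB comparison, checked (my sketch, using the Standard-tier read rate quoted above and ignoring IA/Archive surcharges):

```python
EFS_PERF_READ = 0.03  # $/GB, EFS Performance-optimized Standard reads

def large_read_cost(gb: float, via: str) -> float:
    """Cost of reading `gb` of large (>= 128 KB) files in a month."""
    if via == "s3-files":
        return 0.0  # large reads stream from S3 with no S3 Files data charge
    if via == "efs-perf":
        return gb * EFS_PERF_READ  # every read is billed, even on Standard
    raise ValueError(f"unknown backend: {via}")

assert large_read_cost(450, "s3-files") == 0.0
assert round(large_read_cost(450, "efs-perf"), 2) == 13.50
```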

Small files are where EFS fights back: first-read cost is $0.06/GB via S3 Files (the import write), versus $0.03/GB on EFS Perf-optimized (no import step). Subsequent reads are $0.03/GB on both. Metadata-heavy workloads widen the gap—the 32 KB minimums on S3 Files stack on top. Pick based on your access patterns, not vibes.
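One way to see the small-file gap: because only the *first* read differs, the premium for S3 Files stays a flat $0.03/GB no matter how many times the file is re-read. A quick check (my arithmetic, assuming the file stays cached between reads):

```python
def s3_files_small_read_cost(reads: int) -> float:
    """$/GB for `reads` reads of a cached small file: import write, then plain reads."""
    return 0.06 + (reads - 1) * 0.03 if reads else 0.0

def efs_perf_small_read_cost(reads: int) -> float:
    """$/GB on EFS Performance-optimized Standard: no import step."""
    return reads * 0.03

for k in (1, 10, 100):
    gap = s3_files_small_read_cost(k) - efs_perf_small_read_cost(k)
    assert round(gap, 2) == 0.03  # constant premium per GB of small files
```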

## Who Should Care

If you’re running ML training pipelines that chew through millions of small checkpoint files scattered across S3, this is what you’ve been duct-taping together with Mountpoint and prayer.

If you’re building agentic AI workloads that need shared storage without your team becoming S3 API experts, a mount command gets you there. This is clearly the pitch, and it’s the right one.

If you have legacy applications that assume POSIX semantics and you’ve been running EFS or FSx just to give them something to mount, you now have an option that keeps S3 as the source of truth.

If you’re running happily with S3 APIs today, keep doing that. This doesn’t replace the S3 API. It’s an additional access pattern for workloads that think in files, not objects.

## The Bigger Picture

S3 at twenty is quietly becoming the data substrate for everything. Objects, files, tables, vectors, high-performance computing, breakfast cereals, etc. Five years ago if you’d told me S3 would be a viable filesystem I’d have asked what you were drinking and whether you had enough to share, while simultaneously disabling your access to production.

Credit to the team for not taking the lazy path. “Make S3 pretend to be a filesystem” has been tried and it’s always been terrible. Building a real filesystem on EFS infrastructure, backed by S3 durability and pricing, with S3 handling everything that doesn’t need low-latency access? That’s monstrously harder. It’s also the right call.

I still maintain that S3 is not a filesystem. It just doesn’t have to be anymore—there’s a real one in front of it now, and the pricing finally makes sense.

By Corey Quinn. Corey is the Chief Cloud Economist at Duckbill, where he specializes in helping companies improve their AWS bills by making them smaller and less horrifying. He also hosts the "Screaming in the Cloud" and "AWS Morning Brief" podcasts, and curates "Last Week in AWS," a weekly newsletter summarizing the latest in AWS news, blogs, and tools, sprinkled with snark and thoughtful analysis in roughly equal measure.
