tl;dr, I can make btrfs look awesome, and I can make it look stupid. It depends
on your workload, but generally speaking a normal Fedora user's workload is
going to feel the same regardless of the file system.
Phoronix is widely ignored and a joke to the fs development community. Facebook
deploys latency sensitive workloads across millions of machines with btrfs. We
do not accept latency regressions; our threshold for performance regressions
depends on the workload, but it is generally under 5% and usually closer to 1%.
That's not to say that btrfs is always faster, but that it depends on your
workload. The overwrite case is almost always a clear loser, unless you set
the nocow file attribute on those files. Reads can sometimes have higher latencies,
but that's the price you pay for checksumming. Inside Facebook we run with
compression on everywhere, which shrinks the IO sizes in both directions and
reduces the overhead to the point where btrfs is clearly faster than either xfs or ext4.
That's not the sole reason we use compression, but it's a nice way to get around
the cost of checksumming.
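For reference, the two knobs mentioned above look roughly like this on the command line (a sketch; the device and mount point paths are hypothetical, and the exact layout will differ on your system):

```shell
# Mount with transparent compression everywhere (zstd is a common choice).
mount -o compress=zstd /dev/sdb1 /mnt/btrfs

# Disable copy-on-write for overwrite-heavy files (e.g. a database).
# chattr +C only takes effect on new or empty files, so the usual trick
# is to set it on the directory; files created inside inherit it.
mkdir /mnt/btrfs/database
chattr +C /mnt/btrfs/database
lsattr -d /mnt/btrfs/database   # shows 'C' once the attribute is set
```

Note that nocow files also lose checksumming (and compression) on those extents, which is exactly the trade-off being described here.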
I'm currently knee-deep in a performance discrepancy that WhatsApp has
discovered. However, I have to get the file system to the point where it has > 1
TiB of metadata (not data, actual metadata) before I see a difference between
btrfs and xfs for that workload. Before that point it's not even a competition;
btrfs wins hands down. And even then we're talking about maximum latencies, where
every once in a while btrfs appears to go out to lunch for around 500ms for
unknown reasons.
These are the sorts of performance investigations that btrfs is having right now:
edge cases under specific workloads. Generally speaking it's on par with, if not
faster than, anybody else, and in the cases where it's slower there's a clear
trade-off being made for checksumming.

Thanks,
Josef