ZFS has been in production use for 20 years and has seen a lot of improvements in that time; it is well tested and well understood.
BTRFS has never really achieved "production" status in most people's eyes (at least, not that I've seen), and Red Hat removed support for it completely (not that they support ZFS either).
ZFS is also very straightforward once you learn how it works, and that takes very little time. The commands and the documentation (man pages) are thorough and detailed. Conversely, trying to figure out how and why BTRFS does what it does has been a huge challenge for me, since nothing seems to be as straightforward as it could be. I'm not sure why that is.
Development on BTRFS is ongoing, but it is starting to feel as though it's never going to actually finish its core features, let alone add quality-of-life improvements. As an example of what I mean: I run a Gitlab server, which divides its data into tons of directories, some of which are huge and some of which are not, and many of which have very different workloads: a Postgres database, large binary files, small text files, temporary files, etc. With ZFS, I set up my storage pool something like this:
```
gitlab/postgres
gitlab/git-data
gitlab/shared/lfs-data
gitlab/backups
```
Now everything is divided up, and I can specify different caching and block sizes on each dataset depending on the workload. When I'm about to do an upgrade, I can take an atomic recursive snapshot of gitlab/ and get snapshots of everything at once.
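Roughly what that looks like in practice (the property values here are just illustrative, not a tuning recommendation):

```
# per-workload datasets, each with its own tuning (values illustrative)
zfs create -p -o recordsize=8K -o primarycache=metadata gitlab/postgres
zfs create -p -o recordsize=128K gitlab/git-data
zfs create -p -o recordsize=1M gitlab/shared/lfs-data
zfs create -p -o compression=lz4 gitlab/backups

# one atomic, recursive snapshot of the whole tree before an upgrade
zfs snapshot -r gitlab@pre-upgrade
```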
BTRFS, as far as I can tell, doesn't let you tune as many fine-grained settings per subvolume, and it doesn't have atomic recursive snapshots (and touts this as a feature). I'm also not sure whether it supports anything like zvols, where you can create a block device backed by the ZFS pool's storage (in case you need an ext4 filesystem but still want to be able to snapshot it, or similar).
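For reference, this is the zvol workflow I'm talking about (the names and size are made up):

```
# a zvol is a block device backed by the pool's storage
zfs create -V 20G gitlab/ext4vol
mkfs.ext4 /dev/zvol/gitlab/ext4vol
mount /dev/zvol/gitlab/ext4vol /mnt/legacy
# and it snapshots like any other dataset
zfs snapshot gitlab/ext4vol@before-upgrade
```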
<anecdote> I have never once had a single issue with ZFS and data integrity, except in cases where the underlying storage failed. Meanwhile, I've had BTRFS lose data every single time I've tried to use it, often within days. Obviously lots of other people haven't had that issue, but suffice it to say that, personally, I don't trust it. </anecdote>
Meanwhile...
ZFS doesn't support reflinks like XFS and BTRFS do, so you can't run `cp -R --reflink=always somedir/ otherdir/` and get a reflink copy (i.e. a per-file copy-on-write copy). On XFS, and presumably BTRFS, I can do this and get a "copy" of a 30-50 GB git repository in under a second, which takes up no extra space until I start modifying files. On ZFS, I have to do `cp -R somedir/ otherdir/`, which copies each file in full, reading every block and writing it back out.
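To make the difference concrete (directory names made up):

```
# XFS (and presumably BTRFS): near-instant, no extra space until files diverge
time cp -R --reflink=always big-repo/ big-repo-clone/

# ZFS: every block gets read and rewritten
time cp -R big-repo/ big-repo-copy/
```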
ZFS also doesn't come as part of the kernel, so you can run into a situation where you upgrade the kernel but ZFS doesn't come along for the ride (maybe the installed version of ZFS doesn't build against the new kernel version), and then you reboot and your ZFS pool is gone.
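One way to reduce that risk, assuming you're using the DKMS-packaged modules, is to check that the module actually built before you reboot:

```
# sanity check before rebooting into a new kernel (assumes DKMS packaging)
dkms status | grep zfs                    # should show "installed" for the new kernel version
ls /lib/modules/*/updates/dkms/zfs.ko*    # module present for every installed kernel?
```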
You also "can't" boot from ZFS, which is to say you can but if you do something like upgrade the ZFS kernel modules and then update your ZFS pool with features that Grub doesn't understand, you now cannot boot the system until you fix it by booting into a rescue image and updating the system with a new version of Grub. Ask me how I found that out.
In the end, my experience has been that ZFS is polished, streamlined, works great out of the box with no tuning necessary, and is flexible enough to do whatever I want with it. Honestly, I see no real reason not to use ZFS, except for the "hassle" of installing and updating it yourself, and there's an Ubuntu PPA from jonathanf which provides very up-to-date ZFS packages, so you get bug fixes and new features (filesystem or tooling) quickly, with essentially zero effort on your part.