Have you researched the USB-SATA bridge chips in the enclosures? Reliability of those chips/drivers on Linux used to be very questionable a few years ago when I looked around. Not sure if the situation has improved recently given the popularity of NAS devices.
From my research a couple years ago it seemed like most issues involved feeding a bridge into a port multiplier, so I got a multi drive enclosure with no multipliers. I've had no problems so far even with a disk dying in it.
Though even flaky adapters just tend to lock up, I think.
Ah yes! The port multiplier is usually the source of most evils (after flaky bridge chips). Unfortunately enclosure makers seldom reveal the internal topology and usually only test against Windows, and the Linux kernel has a long blacklist of bad bridge chips…
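If you want to check which bridge chip an enclosure uses and whether the kernel is applying quirks to it, something like this works (the device ID shown is just an example):

```shell
# List USB devices; the vendor:product ID identifies the bridge chip,
# e.g. 174c:55aa is a common ASMedia USB-SATA bridge
lsusb

# See how the kernel bound the device and whether it fell back from UAS
dmesg | grep -iE 'usb-storage|uas'

# The kernel's quirk lists for problem bridges live in the source tree:
#   drivers/usb/storage/unusual_devs.h  (usb-storage)
#   drivers/usb/storage/unusual_uas.h   (UAS)
# Quirks can also be forced via a boot parameter, e.g. to disable UAS
# for a given device ID:
#   usb-storage.quirks=174c:55aa:u
```
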
That's a benefit of ZFS: it doesn't trust that the drives actually wrote the data. Most RAID doesn't do that checking (hence the so-called RAID write hole), and drives haven't had per-block checksums in a long time, so ZFS checksums every block itself to make sure.
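For what it's worth, a scrub is how you make ZFS read back and verify those checksums across the whole pool (the pool name here is hypothetical):

```shell
# Read back and verify every checksummed block in the pool "tank"
zpool scrub tank

# Check scrub progress and per-device error counts
# (checksum mismatches show up in the CKSUM column)
zpool status tank
```
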
The issue with flaky bridge chips usually wasn't data integrity: they work fine most of the time, i.e. data written gets read back correctly.
But often after extensive use, `dmesg` would complain about problems talking to the drives, e.g. the drive not responding, or other strange error messages (I forget the exact text, but it was very irritating and google-fu didn't help). There were also problems with SMART command passthrough, and drive power management (e.g. sleep/standby adjustment) wasn't reliable when going through the bridge chips.
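On SMART passthrough specifically: `smartctl` often has to be told to use SAT (SCSI-to-ATA Translation) explicitly for USB bridges, and even then some chips mangle the commands. A typical attempt looks like this (the device path is an assumption):

```shell
# Ask smartctl to use SAT passthrough through the USB bridge
smartctl -d sat -a /dev/sdb

# Some bridges only accept the restricted 12-byte SAT variant
smartctl -d sat,12 -a /dev/sdb

# Power-management tweaks via a bridge are similarly hit-or-miss:
# -B sets APM level, -S sets the standby (spindown) timeout
hdparm -B 127 -S 120 /dev/sdb
```

Whether any of these actually reach the drive depends entirely on the bridge chip, which is rather the point of the thread.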
Since then I've only used disks directly connected to SATA controllers, and no such issues have ever happened again.