Re: Seagate 3TB lawsuit
Which is why you should be using ZFS.
ZFS is simple.
RAID is outdated and needs to die.
RAID 5 has a write hole; a URE or a dropped drive forces a whole-platter resilver; you have to rely on hardware for performance and on battery-backed cache units to prevent data corruption on power loss. Array migration is difficult, especially going to a different controller.
If the array fails (unless mirrored), it's total data loss.
LONG ASS TIME TO RESILVER LARGE DRIVES.
It was cool in the '90s; time for it to die.
ZFS
Does not have the RAID 5 write hole.
Triple redundancy (RAIDZ3) is available.
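A minimal sketch, assuming the OpenZFS command-line tools; the pool name "tank" and the device paths are made up:

    # RAIDZ3 keeps three parity blocks per stripe, so any three disks can fail
    zpool create tank raidz3 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde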
If the array fails, the data is not a 100% loss: you can force the drives back into the array and retrieve data, and ZFS will tell you exactly which files or directories suffered loss or corruption.
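Roughly how that recovery looks (same hypothetical pool name):

    # Recovery-mode import: discard the last few transactions to get back to a consistent state
    zpool import -F tank
    # List any files or directories with permanent (unrecoverable) errors
    zpool status -v tank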
Does not rely on hardware-specific controllers, which makes migration to new hardware simple.
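Migration is basically two commands, since the pool configuration lives on the disks themselves:

    # On the old machine: cleanly detach the pool
    zpool export tank
    # On the new machine: scan the attached disks and bring the pool back up
    zpool import tank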
ZFS is copy-on-write, so on power loss, unless you deviate from the standard configuration or use crappy hard drives that ignore flush requests (or lie and say data has been written), you shouldn't have any data loss.
If a drive hits a URE and times out, ZFS will wait for the drive to drop out and come back before re-inserting it into the array. If corruption is found on the next read or scrub, ZFS will attempt to correct it and log it; if it can't, it will tell you, and if the array suffers data loss because of it, it will tell you which file or directory is corrupt.
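A scrub is how you trigger that check on demand (hypothetical pool name again):

    # Walk every allocated block, verify checksums, repair from redundancy where possible
    zpool scrub tank
    # Show per-device error counts and any files that could not be repaired
    zpool status -v tank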
It can use an SSD for cache (L2ARC), log (SLOG), or both.
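Adding them is one command each; the device paths here are made up:

    # SSD as a second-level read cache (L2ARC)
    zpool add tank cache /dev/nvme0n1
    # Mirrored SSDs as a separate intent log (SLOG) for synchronous writes
    zpool add tank log mirror /dev/nvme1n1 /dev/nvme2n1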
It uses a lot of RAM (this can be tweaked).
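On ZFS on Linux, for example, the ARC cache size is capped with a module parameter; the 4 GiB figure below is just an example value:

    # Cap the ARC at 4 GiB (value in bytes), applied at module load
    echo "options zfs zfs_arc_max=4294967296" >> /etc/modprobe.d/zfs.conf
    # Or change it at runtime
    echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max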
The recommended maximum is around 11-16 drives per vdev, but a pool can contain multiple vdevs, and you can run multiple pools.
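So a big array is built as several vdevs in one pool, e.g. (hypothetical disks):

    # One pool striped across two 6-disk RAIDZ2 vdevs
    zpool create tank \
        raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
        raidz2 /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl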
Good performance trade-off for redundancy and reliability.
ECC RAM is a must.
LONG ASS TIME TO RESILVER LARGE DRIVES, but only for replacement disks.
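Replacing a disk (and watching the resilver) looks like this; device names made up:

    # Swap a failed disk for a new one; only allocated blocks get rebuilt
    zpool replace tank /dev/sdc /dev/sdm
    # Check resilver progress
    zpool status tank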
But I can understand if the data on your fileserver is not important enough and/or you have backups.