DATE CHECKED THIS PAGE WAS VALID: 14/09/2023

BTRFS is working well at the moment on a single disk, or with RAID1C3 (metadata also RAID1C3, so 2 disks can fail) in a NAS environment with more than 3 disks (ie 4+ disks). For 2 or 3 disks RAID1 is preferred, which also works well (data is duplicated so it can still self heal and 1 disk can fail). Note: for RAID6, to avoid loss of data/corruption on a power failure/kernel hang you must raid the metadata differently; even then RAID6 still has some potential issues, so it is not advised to use it. It is mostly ready but has a few edge cases where data can be lost that are still being fixed. Eg imagine 6 disks, the command to create the RAID6 array is:
  
  mkfs.btrfs -L myraidlabel -m raid1c3 -d raid6 -f <device1> <device2> <device3> <device4> <device5> <device6>

However as RAID6 is not considered 100% safe yet, it is better to do something like:

  mkfs.btrfs -L myraidlabel -m raid1c3 -d raid1c3 -f <device1> <device2> <device3> <device4> <device5> <device6>
  
This command destroys what is on the disks (-f was used = force creation).
  
Advantage of the first command is that metadata is normally small, while RAID6 gives the space and speed benefits for the data written. With the second command, which is 100% safe, you only get the space of 2 of the 6 disks, as 33% is usable (data is written 3 times in RAID1C3).

Once you have more than 12 disks or 24TB across the disks, RAID6 is no longer considered appropriate/safe anyway due to the chance of a URE (unrecoverable read error), so you should use only 1c3 or 1c4 for both metadata and data. You can then only use 33% or 25% of the total disk space of all drives. Some companies use RAID10, which gives 50% of the disk space with an even number of disks, but it is only safe against 1 disk failure, so I personally don't use it or recommend it. ZFS might be a better choice at this number of disks or more (but its performance is slower unfortunately), so best do your research here… if you are unsure, RAID1C3 is pretty good as 2 disks can fail and it is fairly performant. You just have to buy 3x the number of disks as only 33% of the total disk space is usable.
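
For example, a sketch of creating such an array with RAID1C4 for both metadata and data (the device names and the /mnt/myraid mount point are placeholders; RAID1C4 needs at least 4 disks):

  # create the array with 4 copies of both metadata and data (destroys whats on the disks, -f = force)
  mkfs.btrfs -L myraidlabel -m raid1c4 -d raid1c4 -f <device1> <device2> <device3> <device4>
  # mount by label and check how much space is actually usable
  mount /dev/disk/by-label/myraidlabel /mnt/myraid
  btrfs filesystem usage /mnt/myraid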

Don't forget you can convert RAID profiles with a balance, eg:

If the array is already created, you can convert to 1c3 or whatever with btrfs balance start -mconvert=raid1c3 /path/to/array.

-mconvert=… converts metadata while -dconvert=… converts data, and you can provide both at the same time if you want, and so on.
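
A minimal sketch of converting both at once (the /path/to/array mount point is a placeholder and raid1c3 is just an example target profile):

  # convert the metadata and data profiles in one balance run
  btrfs balance start -mconvert=raid1c3 -dconvert=raid1c3 /path/to/array
  # check the progress of a running balance
  btrfs balance status /path/to/array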
  
A weekly scrub AND balance is suggested in a RAID environment so that self healing and filesystem maintenance is performed. For a single disk just a monthly scrub is fine, and you can run it manually or schedule one with cron if you prefer. If there are any corrupt files it can't self heal anyway, so you would be restoring from backup, and balancing on a single disk generally happens automatically to a degree so I normally don't worry about it (only 1 copy of data is kept).
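
One way to schedule this with cron (a sketch only; /mnt/myraid and the times are placeholders, and the -dusage/-musage balance filters are a common choice rather than a requirement):

  # /etc/cron.d/btrfs-maintenance
  # weekly scrub then a filtered balance on the RAID array
  0 2 * * 0  root  /usr/bin/btrfs scrub start -B /mnt/myraid
  0 4 * * 0  root  /usr/bin/btrfs balance start -dusage=50 -musage=50 /mnt/myraid
  # monthly scrub on a single-disk filesystem
  0 3 1 * *  root  /usr/bin/btrfs scrub start -B /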
  
For single disks in a home PC I use the following mount options in fstab: defaults,compress=lzo,discard=async,space_cache=v2
  
So for example a disk would be mounted as such in the fstab:

  UUID=383732b1-5e87-4b68-a15a-f044bc559877 / btrfs defaults,compress=lzo,discard=async,space_cache=v2,subvol=@ 0 0
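
If you need the UUID for the fstab entry, blkid will print it (the device name is a placeholder):

  # show the UUID, label and filesystem type of a device
  blkid <device>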
  
This keeps things nice and tidy and automatically handles trim and balance. As mentioned there is no self healing on a single disk, so you can run a scrub to check for errors every now and again, but a backup is needed to restore files. Don't forget to dup metadata even on a single disk. That is in this wiki elsewhere if you don't know how.
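
A minimal sketch of setting that with a balance on an already mounted single disk (/mountpoint is a placeholder):

  # keep two copies of the metadata (dup) on a single disk
  btrfs balance start -mconvert=dup /mountpoint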
  
Overall BTRFS is production ready now from Linux Kernel 6 onwards. There are a few gotchas but nothing major, and if you use RAID1C3/4 data is kept safe.
  
Also note: autodefrag is no longer needed or recommended on SSD disks. Do not use this mount option.