start:isbtrfsok (last revised 2024/01/24 19:49 by peter)

DATE CHECKED THIS PAGE WAS VALID: 14/…

BTRFS is working well at the moment on a single disk, or with RAID1C3 (metadata also RAID1C3, as 2 disks can fail) in a NAS environment with more than 3 disks (i.e. 4+ disks). For 2 or 3 disks RAID1 is preferred, which also works well (data is duplicated so it can still self-heal and 1 disk can fail). Note: for RAID 6, to avoid loss of data/metadata, the metadata should still be RAID1C3, e.g.:

  mkfs.btrfs -L myraidlabel -m raid1c3 -d raid6 -f <devices>

However, as RAID6 is not considered 100% safe yet, it's better to do something like:

  mkfs.btrfs -L myraidlabel -m raid1c3 -d raid1c3 -f <devices>

This command destroys what's on the disks (-f = force creation).

The advantage of the first command is that metadata is normally small, while RAID 6 gives the space and speed benefits for the data written. With the second command, which is 100% safe, you only get 33% of the raw space, e.g. the capacity of 2 out of 6 disks (data is written 3 times in RAID1C3).
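
The capacity trade-off above can be sketched with a little shell arithmetic. The disk count and size here are illustrative assumptions matching the 2-out-of-6 example, not figures from this page:

```shell
#!/bin/sh
# Rough usable-capacity estimate for equal-sized disks (illustrative numbers).
disks=6       # number of disks in the array (assumed)
size_tb=4     # size of each disk in TB (assumed)

# RAID1C3 stores every block 3 times, so 1/3 of the raw space is usable.
raid1c3_tb=$(( disks * size_tb / 3 ))

# RAID6 uses two disks' worth of parity, so (disks - 2) disks are usable.
raid6_tb=$(( (disks - 2) * size_tb ))

echo "RAID1C3 usable: ${raid1c3_tb} TB"
echo "RAID6 usable:   ${raid6_tb} TB"
```

With 6 x 4TB disks that works out to 8TB usable under RAID1C3 versus 16TB under RAID6, which is why the first command is attractive when RAID6's remaining risks are acceptable.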

Once you have more than 12 disks or 24TB across the disks, RAID 6 is no longer considered appropriate.

Don't forget you can convert RAID profiles with a balance. If the array is already created, you can convert to RAID1C3 (or whatever) with:

  btrfs balance start -mconvert=raid1c3 /

-mconvert=… converts metadata while -dconvert=… converts data, and you can provide both at the same time if you want.
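
As a sketch of providing both at once (the mount point /mnt/array is a placeholder, not from this page; the commands are echoed so the sketch is safe to run anywhere, drop the echo to actually start the balance):

```shell
#!/bin/sh
# Placeholder mount point -- substitute your own.
mnt=/mnt/array

# Convert metadata to RAID1C3 and data to RAID6 in one balance.
echo btrfs balance start -mconvert=raid1c3 -dconvert=raid6 "$mnt"

# Afterwards, check which profiles are in use:
echo btrfs filesystem df "$mnt"
```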

A weekly scrub AND balance is suggested in a RAID environment so that self-healing can take place.
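
One way to schedule that is a cron fragment like the one below. This is a sketch: the times, the file path, and the mount point /mnt/array are assumptions, not from this page.

```
# /etc/cron.d/btrfs-maintenance (assumed path)
# Weekly scrub on Sunday 02:00, foreground (-B) so the exit status is meaningful.
0 2 * * 0  root  /usr/bin/btrfs scrub start -B /mnt/array
# Weekly filtered balance on Sunday 04:00 to keep allocation tidy.
0 4 * * 0  root  /usr/bin/btrfs balance start -dusage=50 -musage=50 /mnt/array
```

The -dusage/-musage filters limit the balance to chunks that are at most half full, which keeps the weekly run cheap compared with a full rebalance.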

For single disks in a home PC I use the following mount options in fstab: defaults,…

So for example a disk would be mounted as such in the fstab: UUID=383732b1-5e87-4b68-a15a-f044bc559877 / btrfs defaults,…
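
For reference, a commonly used set of single-disk options looks something like the line below. These particular options are general BTRFS examples, not necessarily the exact ones elided above:

```
# /etc/fstab (illustrative options; discard=async handles trim on SSDs)
UUID=383732b1-5e87-4b68-a15a-f044bc559877  /  btrfs  defaults,noatime,compress=zstd,discard=async  0  1
```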

This keeps things nice and tidy and automatically handles trim and balance. As mentioned, there is no self-healing on a single disk, so you can run a scrub to check for errors every now and again, but a backup is needed to restore files. Don't forget to DUP metadata even on a single disk.
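
A sketch of how to get DUP metadata on a single disk (/dev/sdX, the label, and /mnt/disk are placeholders; mkfs.btrfs wipes the device):

```
# At creation time (destroys /dev/sdX):
mkfs.btrfs -L mydisk -m dup /dev/sdX

# Or convert an existing filesystem's metadata to DUP in place:
btrfs balance start -mconvert=dup /mnt/disk
```

Recent versions of mkfs.btrfs default to DUP metadata on a single device anyway, but it is worth verifying with btrfs filesystem df.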

Overall, BTRFS is production ready now, from Linux kernel 6 onwards. There are a few gotchas but nothing major, and if you use RAID1C3/C4 your data is kept safe.

Also note: autodefrag is no longer needed or recommended on SSD disks. Do not use this mount option.